Accelerating pairings on BW10 and BW14 Curves
Since the advent of pairing-based cryptography, researchers have developed many techniques and variants of pairings to speed up pairing computations. The choice of elliptic curve for a given pairing-based protocol is crucial for operations in the first and second pairing groups of points on the curve and for many cryptographic schemes. A new variant of the superoptimal pairing, the x-superoptimal pairing, was proposed in 2023 on curves with odd prime embedding degrees BW13-310 and BW19-286. This paper extends the definition of the x-superoptimal pairing to elliptic curves with even embedding degrees, BW10-511 and BW14-351, at the 128-bit security level. We provide a suitable formula for the x-superoptimal pairing on BW10-511 and BW14-351 whose Miller loop is about $13.5\%$ and $21.6\%$ faster, respectively, than that of the optimal ate pairing on the same curves. The correctness and bilinearity of the x-superoptimal pairing on BW10-511 and BW14-351 have been verified with Magma code.
A Succinct Range Proof for Polynomial-based Vector Commitment
Range proofs serve as a protocol for the prover to prove to the verifier that a committed number resides within a specified range, such as $[0,2^n)$, without disclosing the actual value. These proofs find extensive application in various domains, including anonymous cryptocurrencies, electronic voting, and auctions. However, the efficiency of many existing schemes diminishes significantly when confronted with batch proofs encompassing multiple elements.
The pivotal challenge arises from their focus on the commitment to a single element rather than a vector. Addressing this gap, our paper introduces MissileProof, a zero-knowledge, succinct, non-interactive argument of knowledge tailored to the range proof of a vector commitment. Our core contribution lies in reducing this argument to a bi-to-uni variate SumCheck problem and a bivariate polynomial ZeroTest problem, and designing two polynomial interactive oracle proofs (PIOPs), one for each problem.
As far as we know, our scheme has optimal proof size ($O(1)$), optimal statement length ($O(1)$), and optimal verification time ($O(1)$), at the expense of a slightly higher proving time ($O(l\log l\cdot n\log n)$ operations on the prime field for FFTs and $O(ln)$ group exponentiations in $\mathbb{G}$). We prove the security of the scheme. Experimental data shows that for a committed vector of length $l = 16384$ and $n=64$, our work has the best performance in terms of statement length (0.03125 KB), proof size (1.375 KB), and verification time (0.01 s), with a slightly increased proving time (614 s).
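As background, the bi-to-uni variate SumCheck problem mentioned above generalizes the classic sum-check protocol of Lund, Fortnow, Karloff, and Nisan. The following is a minimal sketch of that standard protocol for multilinear polynomials over a prime field; it is an illustration of the textbook primitive, not of this paper's PIOP.

```python
import random

def sumcheck(f, n_vars, p, seed=0):
    """Run the classic sum-check protocol for a multilinear polynomial f
    over F_p; return True iff the verifier accepts the claimed sum."""
    rng = random.Random(seed)

    def hypercube_sum(fixed):
        # Prover-side helper: sum f over all boolean settings of the
        # remaining variables, with the first variables fixed.
        rest = n_vars - len(fixed)
        total = 0
        for bits in range(2 ** rest):
            args = fixed + [(bits >> i) & 1 for i in range(rest)]
            total = (total + f(*args)) % p
        return total

    claim = hypercube_sum([])              # the sum being proven
    fixed = []
    for _ in range(n_vars):
        # Prover sends the round polynomial g(X) via its values at 0 and 1
        # (g is linear because f is multilinear in each variable).
        g0 = hypercube_sum(fixed + [0])
        g1 = hypercube_sum(fixed + [1])
        if (g0 + g1) % p != claim:         # verifier's consistency check
            return False
        r = rng.randrange(p)               # verifier's random challenge
        claim = (g0 + r * (g1 - g0)) % p   # g(r) by linear interpolation
        fixed.append(r)
    # Final check: a single evaluation of f at the random point.
    return f(*fixed) % p == claim
```

An honest prover always passes, e.g. `sumcheck(lambda x, y, z: (x*y + 2*z) % p, 3, p)` accepts for a prime `p`; the verifier's work is one evaluation of `f` plus one field check per round.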
Expediting Homomorphic Computation via Multiplicative Complexity-aware Multiplicative Depth Minimization
Fully homomorphic encryption (FHE) enables secure data processing without compromising data access, but its computational cost and slower execution compared to plaintext operations pose challenges. The growing interest in FHE-based secure computation necessitates the acceleration of homomorphic computations. While existing research primarily targets the reduction of the multiplicative depth (MD) of homomorphic circuits, this paper addresses the trade-off between MD reduction and the increase in multiplicative complexity (MC), a critical gap often overlooked during circuit optimization and potentially resulting in suboptimal outcomes. Three contributions are presented: (a) an exact synthesis paradigm for optimal homomorphic circuit implementations, (b) an efficient heuristic algorithm named MC-aware MD minimization, and (c) a homomorphic circuit optimization flow combining MC-aware MD minimization with existing MD reduction techniques. Experimental results demonstrate a 21.32% average reduction in homomorphic computation time and showcase significantly improved efficiency in circuit optimization.
Grafting: Decoupled Scale Factors and Modulus in RNS-CKKS
The CKKS Fully Homomorphic Encryption (FHE) scheme enables approximate arithmetic on encrypted complex numbers for a desired precision. Most implementations use RNS with carefully chosen parameters to balance precision, efficiency, and security. However, a key limitation in RNS-CKKS is the rigid coupling between the scale factor, which determines numerical precision, and the modulus, which ensures security. Since these parameters serve distinct roles—one governing arithmetic correctness and the other defining cryptographic structure—this dependency imposes design constraints, such as a lack of suitable NTT primes and limited precision flexibility, ultimately leading to inefficiencies.
We propose Grafting, a novel approach to decouple scale factors from the modulus by introducing (universal) sprouts, reusable modulus factors that optimize word-sized packing while allowing flexible rescaling. With the universal property, sprouts allow rescaling by arbitrary bit-lengths and key-switching at any modulus bit-length without requiring additional key-switching keys. Decoupling the scale factor from the modulus in Grafting yields significant efficiency gains: (1) Optimized RNS packing by decomposing the modulus into machine word-sized components, accelerating computations and reducing the ciphertext and encryption/evaluation key sizes; and (2) A freely adjustable scale factor independent of the modulus, unifying the ring structure across applications and reducing modulus consumption through adaptive scalings.
Our experiments demonstrate that Grafting improves performance across standard SHE/FHE parameter sets for ring dimensions $2^{14}$-$2^{16}$ by up to $1.83\times$ and $2.01\times$ for key-switching and multiplication, respectively, and up to $1.92\times$ for bootstrapping. Grafting also reduces public key and ciphertext sizes by up to $62\%$ without compression, maintaining the same number of public keys as before. As an application, we showcase the CKKS gate bootstrapping for bits (Bae et al.; Eurocrypt'24), achieving a $1.89\times$ speed-up due to the reduced number of RNS factors. Finally, we revisit the homomorphic comparison (Cheon et al.; Asiacrypt'20), evaluating it with carefully chosen scale factors for each iteration and reporting up to $204$ bits less modulus consumption (a $27\%$ reduction) in the standard parameter set, without precision loss.
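For context, the RNS packing that Grafting optimizes rests on the Chinese Remainder Theorem: a large ciphertext modulus $Q$ is split into word-sized co-prime factors, so arithmetic acts componentwise on small residues with no big-integer carries. A toy sketch of the idea (the primes below are small illustrative choices, not actual CKKS NTT primes):

```python
from math import prod

MODULI = [1009, 1013, 1019]   # toy pairwise-coprime primes (illustrative)
Q = prod(MODULI)              # the composite ciphertext modulus

def to_rns(x):
    """Decompose x mod Q into one small residue per RNS factor."""
    return [x % q for q in MODULI]

def from_rns(residues):
    """CRT reconstruction: recover x mod Q from its residues."""
    x = 0
    for r, q in zip(residues, MODULI):
        Qi = Q // q
        x = (x + r * Qi * pow(Qi, -1, q)) % Q   # pow(.., -1, q): inverse mod q
    return x

def rns_mul(a, b):
    """Multiplication acts independently on each residue channel."""
    return [(ra * rb) % q for ra, rb, q in zip(a, b, MODULI)]

a, b = 123456789, 987654
assert from_rns(rns_mul(to_rns(a), to_rns(b))) == (a * b) % Q
```

In RNS-CKKS, rescaling corresponds to dropping one of these factors; Grafting's contribution is decoupling which factors exist (and their bit-lengths) from the scale factor used for precision.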
Tempora-Fusion: Time-Lock Puzzle with Efficient Verifiable Homomorphic Linear Combination
To securely transmit sensitive information into the future, Time-Lock Puzzles (TLPs) have been developed. Their applications include scheduled payments, timed commitments, e-voting, and sealed-bid auctions. Homomorphic TLP is a key variant of TLP that enables computation on puzzles from different clients. This allows a solver/server to tackle only a single puzzle encoding the computation's result. However, existing homomorphic TLPs lack support for verifying the correctness of the computation results. We address this limitation by introducing Tempora-Fusion, a TLP that allows a server to perform homomorphic linear combinations of puzzles from different clients while ensuring verification of computation correctness. This scheme avoids asymmetric-key cryptography for verification, thus paving the way for efficient implementations. We discuss our scheme's application in various domains, such as federated learning, scheduled payments in online banking, and e-voting.
Supersonic OT: Fast Unconditionally Secure Oblivious Transfer
Oblivious Transfer (OT) is a fundamental cryptographic protocol with applications in secure Multi-Party Computation, Federated Learning, and Private Set Intersection. With the advent of quantum computing, it is crucial to develop unconditionally secure core primitives like OT to ensure their continued security in the post-quantum era. Despite more than four decades since OT's introduction, the literature has predominantly relied on computational assumptions, except in cases using unconventional methods like noisy channels or a fully trusted party. We introduce "Supersonic OT", a highly efficient and unconditionally secure OT scheme that avoids public-key-based primitives, offering an alternative to traditional approaches. Supersonic OT enables a receiver to obtain a response of size O(1). Its simple (yet non-trivial) design facilitates easy security analysis and implementation. The protocol employs a basic secret-sharing scheme, controlled swaps, the one-time pad, and a third-party helper who may be corrupted by a semi-honest adversary. Our implementation and runtime analysis indicate that a single instance of Supersonic OT completes in 0.35 milliseconds, making it up to 2000 times faster than the state-of-the-art base OT.
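The abstract does not describe Supersonic OT's internals, but the general paradigm of unconditionally secure OT from a third-party helper plus one-time pads can be illustrated with the classic trusted-initializer ("commodity-based") OT usually attributed to Beaver. The sketch below is that classic protocol, not Supersonic OT itself:

```python
import secrets

def helper_setup(nbytes):
    """Offline phase: the helper deals correlated randomness that is
    independent of both parties' inputs."""
    r0, r1 = secrets.token_bytes(nbytes), secrets.token_bytes(nbytes)
    d = secrets.randbelow(2)
    sender_pads = (r0, r1)
    receiver_share = (d, (r0, r1)[d])   # receiver learns only one pad
    return sender_pads, receiver_share

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def ot(m0, m1, choice):
    """One unconditionally secure 1-out-of-2 OT on equal-length messages."""
    (r0, r1), (d, rd) = helper_setup(len(m0))
    e = choice ^ d                         # receiver -> sender; hides choice
    # sender -> receiver: pads arranged so only m_choice is recoverable,
    # since z_c = m_c XOR r_{c XOR e} = m_c XOR r_d
    z0, z1 = xor(m0, (r0, r1)[e]), xor(m1, (r0, r1)[1 ^ e])
    return xor((z0, z1)[choice], rd)       # receiver unpads with its pad

m0, m1 = b"first message!!!", b"second message!!"
assert ot(m0, m1, 0) == m0 and ot(m0, m1, 1) == m1
```

The sender sees only the uniformly random bit `e`, and the receiver holds only one pad, so the other message stays perfectly hidden; all security comes from the helper's correlated randomness, with no computational assumption.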
Secure Vickrey Auctions with Rational Parties
In this work, we construct a second price (Vickrey) auction protocol (SPA), which does not require any auctioneers and ensures total privacy in the presence of rational parties participating in the auction. In particular, the confidentiality of the highest bid and the identity of the second highest bidder are protected. We model the bidders participating in the second price auction as rational, computationally bounded and privacy-sensitive parties. These are self-interested agents who care about winning the auction more than learning about the private bids of other parties. A rational party does not deviate from the protocol arbitrarily but does so only for its own individual `advantage' -- without any consideration for others. Such an advantage is modeled using suitable utility functions.
We show that for rational and computationally bounded parties participating in our second-price auctions protocol, there exists a privacy-preserving dominant strategy equilibrium in which every party prefers to follow the protocol rather than to deviate.
Our protocol is implemented using open-source cryptographic constructs. Running our SPA protocol on commodity hardware with $15$ bidders and bids of length $10$ bits completes in $1.26$ sec with total communication of $0.77$ MB, whereas, under similar conditions, the (semi-honest) Atlas protocol takes $40\%$ more time ($2.11$ sec) and $87\%$ more communication ($6.09$ MB).
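For readers unfamiliar with the mechanics, the plain (non-private) functionality that the SPA protocol computes securely is the standard second-price rule, sketched here without any of the cryptography:

```python
def vickrey(bids):
    """Second-price (Vickrey) auction: the highest bidder wins but pays
    only the second-highest bid. Returns (winner_index, price)."""
    order = sorted(range(len(bids)), key=lambda i: bids[i], reverse=True)
    return order[0], bids[order[1]]

winner, price = vickrey([310, 525, 480])
assert (winner, price) == (1, 480)   # bidder 1 wins, pays the runner-up bid
```

The cryptographic challenge the paper addresses is evaluating exactly this rule while revealing nothing beyond the winner's identity and the price, in particular keeping the highest bid and the runner-up's identity secret.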
FSSiBNN: FSS-based Secure Binarized Neural Network Inference with Free Bitwidth Conversion
Neural network inference as a service enables a cloud server to provide inference services to clients. To ensure the privacy of both the cloud server's model and the client's data, secure neural network inference is essential. Binarized neural networks (BNNs), which use binary weights and activations, are often employed to accelerate inference. However, achieving secure BNN inference with secure multi-party computation (MPC) is challenging because MPC protocols cannot directly operate on values of different bitwidths and require bitwidth conversion. Existing bitwidth conversion schemes expand the bitwidths of weights and activations, leading to significant communication overhead.
To address these challenges, we propose FSSiBNN, a secure BNN inference framework featuring free bitwidth conversion based on function secret sharing (FSS). By leveraging FSS, which supports arbitrary input and output bitwidths, we introduce a bitwidth-reduced parameter encoding scheme. This scheme seamlessly integrates bitwidth conversion into FSS-based secure binary activation and max pooling protocols, thereby eliminating the additional communication overhead. Additionally, we enhance communication efficiency by combining and converting multiple BNN layers into fewer matrix multiplication and comparison operations. We precompute matrix multiplication tuples for matrix multiplication and FSS keys for comparison during the offline phase, enabling constant-round online inference.
In our experiments, we evaluated various datasets and models, comparing our results with state-of-the-art frameworks. Compared with the two-party framework XONN (USENIX Security '19), FSSiBNN achieves approximately 7$\times$ faster inference times and reduces communication overhead by about 577$\times$. Compared with the three-party frameworks SecureBiNN (ESORICS '22) and FLEXBNN (TIFS '23), FSSiBNN is approximately 2.5$\times$ faster in inference time and reduces communication overhead by 1.3$\times$ to 16.4$\times$.
Improved Reductions from Noisy to Bounded and Probing Leakages via Hockey-Stick Divergences
There exists a mismatch between the theory and practice of cryptography in the presence of leakage. On the theoretical front, the bounded leakage model, where the adversary learns bounded-length but noiseless information about secret components, and the random probing model, where the adversary learns some internal values of a leaking implementation with some probability, are convenient abstractions to analyze the security of numerous designs. On the practical front, side-channel attacks produce long transcripts which are inherently noisy but provide information about all internal computations, and this noisiness is usually evaluated with closely related metrics like the mutual information or statistical distance. Ideally, we would like to claim that resilience to bounded leakage or random probing implies resilience to noisy leakage evaluated according to these metrics. However, prior work (Duc, Dziembowski and Faust, Eurocrypt 2014; Brian et al., Eurocrypt 2021) has shown that proving such reductions with useful parameters is challenging.
In this work, we study noisy leakage models stemming from hockey-stick divergences, which generalize statistical distance and are also the basis of differential privacy. First, we show that resilience to bounded leakage and random probing implies resilience to our new noisy leakage model with improved parameters compared to models based on the statistical distance or mutual information. Second, we establish composition theorems for our model, showing that these connections extend to a setting where multiple leakages are obtained from a leaking implementation. We complement our theoretical results with a discussion of practical relevance, highlighting that (i) the reduction to bounded leakage applies to realistic leakage functions with noise levels that are decreased by several orders of magnitude compared to Brian et al., and (ii) the reduction to random probing usefully generalizes the seminal work of Duc, Dziembowski, and Faust, although it remains limited when the field size in which masking operates grows (i.e., hockey-stick divergences can better hide the field size dependency of the noise requirements, but do not annihilate it).
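For reference, the hockey-stick divergence of order $\gamma\ge 1$ between discrete distributions is $E_\gamma(P\|Q)=\sum_x \max(P(x)-\gamma Q(x),\,0)$; at $\gamma=1$ it recovers the statistical (total variation) distance, which is one sense in which it generalizes the metrics above. A small numeric sketch with toy distributions:

```python
def hockey_stick(P, Q, gamma):
    """E_gamma(P||Q) = sum_x max(P(x) - gamma*Q(x), 0) for discrete
    distributions given as {outcome: probability} dicts."""
    support = set(P) | set(Q)
    return sum(max(P.get(x, 0.0) - gamma * Q.get(x, 0.0), 0.0)
               for x in support)

P = {"a": 0.5, "b": 0.3, "c": 0.2}
Q = {"a": 0.2, "b": 0.3, "c": 0.5}

# gamma = 1 recovers the statistical (total variation) distance.
tv = 0.5 * sum(abs(P[x] - Q[x]) for x in P)
assert abs(hockey_stick(P, Q, 1.0) - tv) < 1e-12

# Increasing gamma can only lower the divergence (a more forgiving notion).
assert hockey_stick(P, Q, 2.0) <= hockey_stick(P, Q, 1.0)
```

The same quantity underlies $(\varepsilon,\delta)$-differential privacy, where the requirement $E_{e^\varepsilon}(P\|Q)\le\delta$ is imposed on neighboring output distributions.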
Impossible Boomerang Distinguishers Revisited
The Impossible Boomerang Attack (IBA) has shown significant power in evaluating the security of block ciphers such as AES. However, current studies still lack a foundational theory, practical guidance, and a universal method for constructing impossible boomerang distinguishers (IBDs). This paper addresses these gaps through comprehensive research. Theoretically, we establish a new framework for constructing a series of IBDs via differential propagation, state propagation, and generalized boomerang tables. We rigorously prove their inclusion relations, resulting in a complete theory and a hierarchical application strategy for both the single-key and related-key settings. We further analyze IBD constructions in two types of related-key settings, two related keys with arbitrary schedules and four related keys with linear schedules, in a structurally unified way. Technically, we develop a scheduling algorithm and a general SAT-based method to search for IBDs across various block cipher designs, including SPN, Feistel, and ARX. Additionally, we propose several strategies to enhance the search process. As applications, we derive (RK-)IBDs for 10 block ciphers, most for the first time. Compared to impossible differentials, our IBDs are at least as effective, for instance on DES and PRESENT. Notably, we achieve 1 more round on PRINTcipher48 in the single-key setting; 2 more rounds on AES-128 and 1 or 2 more rounds on SPECK variants in the two-related-key setting; and 1, 4, and 2 more rounds on GIFT-64, CHAM-64/128, and CHAM-128/256, respectively, in the four-related-key setting. We also obtain full-round RK-IBDs on GOST. Compared to current IBDs, we achieve 1 more round on each of SKINNY-64/192 and SKINNYee. Furthermore, as an application of the derived IBDs, we present a 31-round IBA on SKINNYee, which is the first 31-round attack on SKINNYee and the best result to date.
On the vector subspaces of $\mathbb{F}_{2^n}$ over which the multiplicative inverse function sums to zero
We study the behavior of the multiplicative inverse function (which plays an important role in cryptography and in the study of finite fields) with respect to a recently introduced generalization of almost perfect nonlinearity (APNness), called $k$th-order sum-freedom, which extends a classic characterization of APN functions and also has some relationship with integral attacks. This generalization corresponds to the fact that a vectorial function $F:\mathbb F_2^n\to \mathbb F_2^m$ sums to a nonzero value over every $k$-dimensional affine subspace of $\mathbb F_2^n$, for some $k\leq n$ (APNness corresponds to $k=2$). The sum of the values of the inverse function $x\in \mathbb F_{2^n}\mapsto x^{2^n-2}\in \mathbb F_{2^n}$ over any affine subspace $A$ of $\mathbb{F}_{2^n}$ not containing 0 ({\em i.e.}, not a vector space) has previously been addressed, thanks to a simple expression for this sum, which shows that it never vanishes. In the present paper we study the case of vector (i.e., linear) subspaces, which is much less simple to handle. The sum depends on a coefficient in subspace polynomials.
We study for which values of $k$ the multiplicative inverse function can sum to nonzero values over all $k$-dimensional vector subspaces. We show that, for every $k$ not co-prime with $n$, it sums to zero over at least one $k$-dimensional $\mathbb{F}_2$-subspace of $\mathbb{F}_{2^n}$. We study the behavior of the inverse function over direct sums of vector spaces and deduce that the inverse function is $k$th-order sum-free if and only if it is $(n-k)$th-order sum-free. We derive several other results and show that the set of values $k$ for which the inverse function is not $k$th-order sum-free is stable under adding two values of $k$ whose product is smaller than $n$ (and under subtracting two values, under some conditions). We settle the case of dimension at most 4 (equivalently, of co-dimension at most 4), which allows us to address, for every $n$, all small enough values of $k$ of the form $3a+4b$.
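The existence result for $k$ not co-prime with $n$ can be checked computationally in small dimension. The sketch below takes $n=4$, $k=2$ (so $\gcd(k,n)=2\neq 1$), builds $\mathbb{F}_{2^4}$ with the illustrative modulus $x^4+x+1$, and searches for a 2-dimensional subspace over which $x\mapsto x^{2^n-2}$ sums to zero; the subfield $\mathbb{F}_4$ is one such subspace:

```python
MOD = 0b10011   # x^4 + x + 1, irreducible over F_2: defines GF(16)
N = 4

def gf_mul(a, b):
    """Carry-less multiplication in GF(2^4) modulo x^4 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & (1 << N):
            a ^= MOD
        b >>= 1
    return r

def inv_fn(x):
    """The multiplicative inverse function x -> x^(2^N - 2), with 0 -> 0."""
    r = 1
    for _ in range(2**N - 2):
        r = gf_mul(r, x)
    return r

# A 2-dimensional F_2-subspace is {0, a, b, a^b}; since inv_fn(0) = 0,
# the function sums to zero on it iff the three nonzero images XOR to 0.
zero_sum = [(a, b) for a in range(1, 16) for b in range(a + 1, 16)
            if inv_fn(a) ^ inv_fn(b) ^ inv_fn(a ^ b) == 0]
assert zero_sum   # e.g. (1, 6): the subfield F_4 = {0, 1, 6, 7}
```

Here $6$ and $7$ encode $x^2+x$ and $x^2+x+1$, the two elements of order 3, so $\{0,1,6,7\}$ is the copy of $\mathbb{F}_4$ inside $\mathbb{F}_{16}$; its nonzero elements are permuted by inversion and XOR to zero.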
Delegated-Query Oblivious Transfer and its Practical Applications
Databases play a pivotal role in the contemporary World Wide Web and the world of cloud computing. Unfortunately, numerous privacy violations have recently garnered attention in the news. To enhance database privacy, we consider Oblivious Transfer (OT), an elegant cryptographic technology. Our observation reveals that existing research in this domain primarily concentrates on theoretical cryptographic applications, overlooking various practical aspects:
- OTs assume parties have direct access to databases. Our "1-out-of-2 Delegated-Query OT" enables parties to privately query a database, without direct access.
- With the rise of cloud computing, physically separated databases may no longer remain so. Our "1-out-of-2 Delegated-Query Multi-Receiver OT" protects privacy in such evolving scenarios.
- Research often ignores the limitations of thin clients, e.g., Internet of Things devices. To address this, we propose a compiler that transforms any 1-out-of-n OT into a thin client version.
Differential Fault Attack on HE-Friendly Stream Ciphers: Masta, Pasta and Elisabeth
In this paper, we propose the Differential Fault Attack (DFA) on three Homomorphic Encryption (HE) friendly stream ciphers \textsf{Masta}, \textsf{Pasta}, and \textsf{Elisabeth}. Both \textsf{Masta} and \textsf{Pasta} are \textsf{Rasta}-like ciphers with publicly derived and pseudorandom affine layers. The design of \textsf{Elisabeth} is an extension of \textsf{FLIP} and \textsf{FiLIP}, following the group filter permutator paradigm. All these three ciphers operate on elements over $\mathbb{Z}_p$ or $\mathbb{Z}_{2^n}$, rather than $\mathbb{Z}_2$. We can recover the secret keys of all the targeted ciphers through DFA. In particular, for \textsf{Elisabeth}, we present a new method to determine the filtering path, which is vital to make the attack practical. Our attacks on various instances of \textsf{Masta} are practical and require only one block of keystream and a single word-based fault. By injecting three word-based faults, we can theoretically mount DFA on two instances of \textsf{Pasta}, \textsf{Pasta}-3 and \textsf{Pasta}-4. For \textsf{Elisabeth}-4, the only instance of the \textsf{Elisabeth} family, we present two DFAs in which we inject four bit-based faults or a single word-based fault. With 15000 normal and faulty keystream words, the DFA on \textsf{Elisabeth}-4 can be completed in just a few minutes.
Relaxed Vector Commitment for Shorter Signatures
MPC-in-the-Head (MPCitH) has recently gained traction as a foundation for post-quantum signature schemes, offering robust security without trapdoors. Despite its strong security profile, MPCitH-based schemes suffer from high computational overhead and large signature sizes, limiting their practical application.
This work addresses these inefficiencies by relaxing vector commitments within MPCitH-based schemes. We introduce the concept of vector semi-commitment, which relaxes the binding property of traditional vector commitment. Vector semi-commitment schemes may allow an adversary to find more than one preimage of a commitment. We instantiate vector semi-commitment schemes in both the random oracle model and the ideal cipher model, leveraging recent optimizations on GGM trees, such as the correlated GGM tree.
We apply the ideal-cipher-based vector semi-commitment scheme to the BN++ signature scheme and prove it almost fully secure in the ideal cipher model. Implementing these improvements in the AIMer v2.0 signature scheme, we achieve up to 18% shorter signatures and up to 112% faster signing and verification speeds, setting new benchmarks for MPCitH-based schemes.
zkVoting : Zero-knowledge proof based coercion-resistant and E2E verifiable e-voting system
We introduce ${zkVoting}$, a coercion-resistant e-voting system that utilizes a fake keys approach based on a novel nullifiable commitment scheme. This scheme allows voters to receive both real and fake commitment keys from a registrar. Each ballot includes this commitment, but only the tallier can efficiently discern the fake ballots, simplifying the tally process to $\mathcal{O}(n)$ and ensuring coercion resistance. ${zkVoting}$ also preserves voter anonymity by ensuring each ballot conceals the voter's identity. Additionally, by integrating zero-knowledge proofs, ${zkVoting}$ achieves end-to-end (E2E) verifiability. We formally prove its security and demonstrate its practicality for real-world applications, with a ballot casting time of 2.3 seconds and a tally time of 3.9 milliseconds per ballot.
Elementary Formulas for Greatest Common Divisors and Semiprime Factors
We conjecture new elementary formulas for computing the greatest common divisor (GCD) of two integers, alongside an elementary formula for extracting the prime factors of semiprimes. These formulas have fixed length and require only the basic arithmetic operations of addition, subtraction, multiplication, division with remainder, and exponentiation. Our GCD formulas result from simplifying a formula of Mazzanti and are derived using Kronecker substitution techniques from our earlier research. By applying these GCD formulas together with our recent discovery of an arithmetic expression for $\sqrt{n}$, we are able to derive explicit elementary formulas for the prime factors of a semiprime $n=pq$.
Guidance for Efficient Selection of Secure Parameters for Fully Homomorphic Encryption
The field of Fully Homomorphic Encryption (FHE) has seen many theoretical and computational advances in recent years, bringing the technology closer to practicality than ever before. For this reason, practitioners from neighbouring fields such as machine learning have sought to understand FHE to provide privacy to their work. Unfortunately, selecting secure and efficient parameters in FHE is a daunting task due to the many interdependencies between the parameters involved. In this work, we solve this problem by moving away from the standard parameter selection procedure, introducing formulas which provide secure and optimal parameters for any lattice-based scheme. We build our formulas from a strong theoretical foundation based on cryptanalysis against LWE.
File-Injection Attacks on Searchable Encryption, Based on Binomial Structures
One distinguishing feature of file-injection attacks on searchable encryption schemes is their 100% query recovery rate, i.e., confirming the corresponding keyword for each query. The main efficiency consideration of file-injection attacks is the number of injected files. In the work of Zhang et al. (USENIX 2016), $\lceil\log_2 |K|\rceil$ injected files are required, each of which contains $|K|/2$ keywords from the keyword set $K$. Based on the construction of the uniform $(s,n)$-set, Wang et al. need fewer injected files when considering the threshold countermeasure. In this work, we propose a new attack that further reduces the number of injected files: to achieve the same results, the attack of Wang et al. requires up to 38% more injections. The attack is based on an increment $(s,n)$-set, which we also define in this paper.
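For orientation, the Zhang et al. baseline that this line of work improves is a binary-encoding trick: injected file $i$ contains exactly the keywords whose index has bit $i$ set, so the pattern of injected files matched by a query spells out the queried keyword's index in binary. A sketch of that baseline (not of this paper's increment $(s,n)$-set construction):

```python
from math import ceil, log2

def injected_files(keywords):
    """File i contains every keyword whose index has bit i set, so each
    file holds |K|/2 keywords and ceil(log2 |K|) files suffice."""
    m = ceil(log2(len(keywords)))
    return [{kw for j, kw in enumerate(keywords) if (j >> i) & 1}
            for i in range(m)]

def recover_keyword(keywords, matches):
    """matches[i] = True iff injected file i was returned for the query;
    the match pattern is the binary index of the queried keyword."""
    idx = sum(1 << i for i, hit in enumerate(matches) if hit)
    return keywords[idx]

K = [f"kw{j}" for j in range(8)]          # |K| = 8 -> 3 injected files
files = injected_files(K)
# The server returns each injected file containing the queried keyword.
matches = ["kw5" in f for f in files]     # 5 = 0b101 -> files 0 and 2
assert recover_keyword(K, matches) == "kw5"
```

Threshold countermeasures cap how many keywords a single injected file may contain, which is what forces the more elaborate $(s,n)$-set constructions of Wang et al. and of this paper.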
ProxCode: Efficient Biometric Proximity Searchable Encryption from Error Correcting Codes
This work builds approximate proximity searchable encryption. Secure biometric databases are the primary application. Prior work (Kuzu, Islam, and Kantarcioglu, ICDE 2012) combines locality-sensitive hashes, or LSHs (Indyk, STOC '98), with oblivious multimaps. The multimap associates LSH outputs as keywords to biometrics as values.
When the desired result set is of size at most one, we show a new preprocessing technique and system called ProxCode that inserts shares of a linear secret sharing into the map instead of the full biometric. Instead of choosing shares independently, shares are correlated so exactly one share is associated with each keyword/LSH output. As a result, one can rely on a map instead of a multimap. Secure maps are easier to construct with low leakage than multimaps.
For many parameters, this approach reduces the required number of LSHs for a fixed accuracy. Our scheme yields the most improvement when combining a high accuracy requirement with a biometric with large underlying noise. Our approach builds on any secure map. We evaluate the scheme accuracy for both iris data and random data.
Measuring Conditional Anonymity - A Global Study
The realm of digital health is experiencing a global surge, with mobile applications extending their reach into various facets of daily life. From tracking daily eating habits and vital functions to monitoring sleep patterns and even the menstrual cycle, these apps have become ubiquitous in their pursuit of comprehensive health insights.
Many of these apps collect sensitive data and promise users to protect their privacy, often through pseudonymization. We analyze the real anonymity that users can expect from this approach and report on our findings. More concretely:
1. We introduce the notion of conditional anonymity sets derived from statistical properties of the population.
2. We measure anonymity sets for two real-world applications and present overarching findings from 39 countries.
3. We develop a graphical tool for people to explore their own anonymity set.
One of our case studies is a popular app for tracking the menstrual cycle. Our findings for this app show that, despite its promise to protect privacy, the collected data can be used to narrow users down to groups of 5 people in 97% of all US counties, allowing the de-anonymization of individuals. Given that the US Supreme Court recently overturned abortion rights, the possibility of identifying individuals is a calamity.
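The notion of an anonymity set used above can be made concrete: under pseudonymization, a user's anonymity set is the group of people sharing the same released attribute combination. A toy illustration with made-up data (the paper works with real population statistics across 39 countries):

```python
from collections import Counter

def anonymity_sets(records, quasi_identifiers):
    """Group a population by the released attribute combination; the size
    of a user's group is their anonymity set under pseudonymization."""
    key = lambda r: tuple(r[q] for q in quasi_identifiers)
    sizes = Counter(key(r) for r in records)
    return {r_id: sizes[key(r)] for r_id, r in enumerate(records)}

# Toy population (illustrative values, not the paper's data).
population = [
    {"age": 34, "county": "A"},
    {"age": 34, "county": "A"},
    {"age": 34, "county": "B"},
    {"age": 51, "county": "A"},
]
sets = anonymity_sets(population, ["age", "county"])
assert sets[0] == 2 and sets[2] == 1   # user 2 is uniquely identifiable
```

An anonymity set of size 1 means the released attributes already pin down one individual; the paper's finding is that for a popular menstruation app, set sizes of at most 5 cover 97% of US counties.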
Dishonest Majority Multi-Verifier Zero-Knowledge Proofs for Any Constant Fraction of Corrupted Verifiers
In this work we study the efficiency of Zero-Knowledge (ZK) arguments of knowledge, particularly exploring Multi-Verifier ZK (MVZK) protocols as a midway point between Non-Interactive ZK and Designated-Verifier ZK, offering versatile applications across various domains. We introduce a new MVZK protocol designed for the preprocessing model, allowing any constant fraction of verifiers to be corrupted, potentially colluding with the prover. Our contributions include the first MVZK over rings. Unlike recent prior works over fields in the dishonest-majority case, our protocol's communication complexity is independent of the number of verifiers, in contrast to the linear complexity of previous approaches. This key advancement ensures improved scalability and efficiency. We provide an end-to-end implementation of our protocol. Benchmarks show that it achieves a throughput of 1.47 million gates per second for 64 verifiers with $50\%$ corruption, and 0.88 million gates per second with $75\%$ corruption.
Great-LaKeys: An Improved Threshold-PRF and a Novel Exponent-VRF from LWR
Building on the recently proposed LWR-based threshold-PRF LaKey, we propose two new constructions. First, we propose an optimized threshold-PRF with significantly reduced round and communication complexity. We achieve this by improving the underlying bit truncation protocol, as well as the lower bound on the required number of LWR instances. Second, we show that the same underlying PRF construction lends itself as a basis for a novel and efficient exponent-VRF. We implement prototypes of both of our contributions and demonstrate their practical performance.
Cross-chain bridges via backwards-compatible SNARKs
In recent years, SNARKs have shown great promise as a tool for building trustless bridges to connect the heterogeneous ecosystem of blockchains. Unfortunately, the parameters hardwired into many widely used blockchains are incongruous with conventional SNARKs, resulting in unsatisfactory performance. This bottleneck necessitates new proof systems tailored for efficiency in these environments.
The primary focus of this paper is on succinct bridges from Cosmos to Ethereum, which largely boils down to efficient proofs of multiple Ed25519 signatures. However, these techniques can be ported to settings that require succinct proofs of multiple secp256k1 or BLS12-381 signatures.
We describe our succinct validity bridging scheme Overfly, which uses a field-agnostic SNARK to circumvent the huge overhead of non-native field arithmetic arising from Ed25519 scalar multiplications in the circuit. We also explore the schemes deVirgo and zkTree, which exploit the parallelization of proof generation and the subsequent aggregation of proofs.
Our benchmarks indicate that it is crucial to sidestep non-native arithmetic to the extent that it is possible. We also found that two or more proof systems need to be securely amalgamated to optimize a succinct validity bridging scheme.
On Knowledge-Soundness of Plonk in ROM from Falsifiable Assumptions
Lipmaa, Parisella, and Siim [Eurocrypt, 2024] proved the extractability of the KZG polynomial commitment scheme under the falsifiable assumption ARSDH. They also showed that variants of real-world zk-SNARKs like Plonk can be made knowledge-sound in the random oracle model (ROM) under the ARSDH assumption. However, their approach did not consider various batching optimizations, resulting in their variant of Plonk having an argument approximately $3.5$ times longer. Our contributions are: (1) We prove that several batch-opening protocols for KZG, used in modern zk-SNARKs, have computational special-soundness under the ARSDH assumption. (2) We prove that interactive Plonk has computational special-soundness under the ARSDH assumption and a new falsifiable assumption SplitRSDH. We also prove that two minor modifications of interactive Plonk have computational special-soundness under only ARSDH and a simpler variant of SplitRSDH. We define a new type-safe oracle framework for AGMOS (AGM with oblivious sampling) and prove that SplitRSDH is secure in it. The Fiat-Shamir transform can be applied to obtain non-interactive versions, which are secure in the ROM under the same assumptions.
Limits on the Power of Prime-Order Groups: Separating Q-Type from Static Assumptions
Subgroup decision techniques on cryptographic groups and pairings have been critical for numerous applications. Originally conceived in the composite-order setting, there is a large body of work showing how to instantiate subgroup decision techniques in the prime-order setting as well. In this work, we demonstrate the first barrier to this research program, by exhibiting an important setting where composite-order techniques cannot be replicated in the prime-order setting.
In particular, we focus on the case of $q$-type assumptions, which are ubiquitous in group- and pairing-based cryptography, but unfortunately are less desirable than the more well-understood static assumptions. Subgroup decision techniques have had great success in removing $q$-type assumptions, even allowing $q$-type assumptions to be generically based on static assumptions on composite-order groups. Our main result shows that the same likely does not hold in the prime order setting. Namely, we show that a large class of $q$-type assumptions, including the security definition of a number of cryptosystems, cannot be proven secure in a black box way from any static assumption.
An analysis of the Crossbred Algorithm for the MQ Problem
The Crossbred algorithm is currently the state-of-the-art method for solving overdetermined multivariate polynomial systems over $\mathbb{F}_2$. Since its publication in 2015, several record-breaking implementations have been proposed and demonstrate the power of this hybrid approach. Despite these practical results, the complexity of this algorithm and the choice of optimal parameters for it are difficult open questions. In this paper, we derive a bivariate generating series for potentially admissible parameters of the Crossbred algorithm.
Leveled Homomorphic Encryption Schemes for Homomorphic Encryption Standard
Homomorphic encryption allows for computations on encrypted data without exposing the underlying plaintext, enabling secure and private data processing in various applications such as cloud computing and machine learning. This paper presents a comprehensive mathematical foundation for three prominent homomorphic encryption schemes: Brakerski-Gentry-Vaikuntanathan (BGV), Brakerski-Fan-Vercauteren (BFV), and Cheon-Kim-Kim-Song (CKKS), all based on the Ring Learning with Errors (RLWE) problem. We align our discussion with the functionalities proposed in the recent homomorphic encryption standard, providing detailed algorithms and correctness proofs for each scheme. Additionally, we propose improvements to the current schemes focusing on noise management and optimization of public key encryption and leveled homomorphic computation. Our modifications ensure that the noise bound remains within a fixed function for all levels of computation, guaranteeing correct decryption and maintaining efficiency comparable to existing methods. The proposed enhancements reduce ciphertext expansion and storage requirements, making these schemes more practical for real-world applications.
Perfectly-secure Network-agnostic MPC with Optimal Resiliency
We study network-agnostic secure multiparty computation with perfect security. Traditionally MPC is studied assuming the underlying network is either synchronous or asynchronous. In a network-agnostic setting, the parties are unaware of whether the underlying network is synchronous or asynchronous.
The feasibility of perfectly-secure MPC in synchronous and asynchronous networks was settled long ago. The landmark work of [Ben-Or, Goldwasser, and Wigderson, STOC'88] shows that $n > 3t_s$ is necessary and sufficient for any MPC protocol with $n$ parties over a synchronous network tolerating $t_s$ active corruptions. In yet another foundational work, [Ben-Or, Canetti, and Goldreich, STOC'93] show that the bound for asynchronous networks is $n > 4t_a$, where $t_a$ denotes the number of active corruptions. However, the same question has remained unresolved for the network-agnostic setting to date. In this work, we resolve this long-standing question.
We show that perfectly-secure network-agnostic $n$-party MPC tolerating $t_s$ active corruptions when the network is synchronous and $t_a$ active corruptions when the network is asynchronous is possible if and only if $n > 2 \max(t_s,t_a) + \max(2t_a,t_s)$.
When $t_a \geq t_s$, our bound reduces to $n > 4t_a$, whose tightness follows from the known feasibility results for asynchronous MPC. When $t_s > t_a$, our result gives rise to a new bound of $n > 2t_s + \max(2t_a,t_s)$. Notably, the previous network-agnostic MPC protocol in this setting [Appan, Chandramouli, and Choudhury, PODC'22] only shows sufficiency for a loose bound of $n > 3t_s + t_a$. When $t_s > 2t_a$, our result shows tightness of $n > 3t_s$, whereas the existing work shows sufficiency for $n > 3t_s + t_a$.
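As a quick sanity check of the resiliency bound above, the following sketch (an illustration, not part of the paper) evaluates $n > 2\max(t_s,t_a) + \max(2t_a,t_s)$ and confirms that it collapses to the known bounds $n > 4t_a$ and $n > 3t_s$ in the corner cases:

```python
def feasible(n, ts, ta):
    # Tight bound for perfectly-secure network-agnostic MPC:
    # n > 2*max(ts, ta) + max(2*ta, ts)
    return n > 2 * max(ts, ta) + max(2 * ta, ts)

# When ta >= ts, the bound reduces to the asynchronous bound n > 4*ta:
assert feasible(5, 1, 1) and not feasible(4, 1, 1)

# When ts > 2*ta, it reduces to the synchronous bound n > 3*ts:
assert feasible(10, 3, 1) and not feasible(9, 3, 1)

# In the intermediate regime ts > ta >= ts/2, it reads n > 2*ts + 2*ta:
assert feasible(11, 3, 2) and not feasible(10, 3, 2)
```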
A Formal Treatment of End-to-End Encrypted Cloud Storage
Users increasingly store their data in the cloud, thereby benefiting from easy access, sharing, and redundancy. To additionally guarantee security of the outsourced data even against a server compromise, some service providers have started to offer end-to-end encrypted (E2EE) cloud storage. With this cryptographic protection, only legitimate owners can read or modify the data. However, recent attacks on the largest E2EE providers have highlighted the lack of solid foundations for this emerging type of service.
In this paper, we address this shortcoming by initiating the formal study of E2EE cloud storage. We give a formal syntax to capture the core functionality of a cloud storage system, capturing the real-world complexity of such a system's constituent interactive protocols. We then define game-based security notions for confidentiality and integrity of a cloud storage system against a fully malicious server. We treat both selective and fully adaptive client compromises. Our notions are informed by recent attacks on E2EE cloud storage providers. In particular we show that our syntax is rich enough to capture the core functionality of MEGA and that recent attacks on it arise as violations of our security notions. Finally, we present an E2EE cloud storage system that provides all core functionalities and that is both efficient and provably secure with respect to our selective security notions. Along the way, we discuss challenges on the path towards bringing the security of cloud storage up to par with other end-to-end primitives, such as secure messaging and TLS.
Privacy-Preserving Dijkstra
Given a graph $G(V,E)$, represented as a secret-sharing of an adjacency list, we show how to obliviously convert it into an alternative, MPC-friendly secret-shared representation, so-called $d$-normalized replicated adjacency list (which we abbreviate to $d$-normalized), where the size of our new data-structure is only 4x larger -- compared to the original (secret-shared adjacency list) representation of $G$. Yet, this new data structure enables us to execute oblivious graph algorithms that simultaneously improve underlying graph algorithms' round, computation, and communication complexity. Our $d$-normalization proceeds in two steps:
First, we show how for any graph $G$, given a secret-shared adjacency list, where vertices are arbitrary alphanumeric strings that fit into a single RAM memory word, we can efficiently and securely rename vertices to integers from $1$ to $V$ that will appear in increasing order in the resulting secret-shared adjacency list. The secure renaming takes $O(\log V)$ rounds and $O((V+E)\log V)$ secure operations. Our algorithm also outputs two secret-shared arrays: a mapping from integers to alphanumeric names and its sorted inverse. Of course, if the adjacency list is already in such a format, this step could be omitted.
Second, given a secret-shared adjacency list for any graph $G$, where vertices are integers from $1$ to $V$ and are sorted, we show an oblivious $d$-normalization algorithm that takes $O(1)$ rounds and $O(V+E)$ secure operations.
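In the clear, the renaming step of the first conversion amounts to sorting the vertex labels and replacing each by its rank; the paper's contribution is performing this obliviously on secret shares in $O(\log V)$ rounds while also producing both direction mappings. A plaintext sketch of the target functionality (illustrative only, with made-up labels):

```python
# Plaintext analogue of the vertex-renaming step: sort the alphanumeric
# labels, then assign each label its rank 1..V as the new integer name.
labels = ["gw", "alice", "r2"]  # hypothetical alphanumeric vertex names

ranked = sorted(labels)
to_int = {name: i + 1 for i, name in enumerate(ranked)}   # name -> integer
from_int = {i: name for name, i in to_int.items()}        # integer -> name

assert to_int == {"alice": 1, "gw": 2, "r2": 3}
assert from_int[2] == "gw"
```

The secure protocol outputs secret-shared versions of both `to_int` and `from_int`, so later algorithms can translate between the two name spaces without revealing either.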
We believe that both conversions are of independent interest. We demonstrate the power of our data structures by designing a privacy-preserving Dijkstra's single-source shortest-path algorithm that simultaneously achieves $O((V +E) \cdot \log V)$ secure operations and $O(V \cdot \log V \cdot \log \log\log V)$ rounds. Our secure Dijkstra algorithm works for any adjacency list representation as long as all vertex labels and weights can individually fit into a constant number of RAM memory words. Our algorithms work for two or a constant number of servers in the honest but curious setting. The limitation of our result (to only a constant number of servers) is due to our reliance on linear work and constant-round secure shuffle.
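For reference, the computation being made oblivious is classical Dijkstra over an adjacency list; a minimal plaintext version (the baseline algorithm, not the secure protocol) looks as follows:

```python
import heapq

def dijkstra(adj, src):
    # Classical single-source shortest paths over an adjacency list
    # {u: [(v, weight), ...]}; the paper's contribution is running this
    # computation obliviously over a secret-shared graph.
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

adj = {1: [(2, 4), (3, 1)], 3: [(2, 2)]}
assert dijkstra(adj, 1) == {1: 0, 3: 1, 2: 3}
```

The priority-queue extraction and the data-dependent neighbor scans are exactly the steps that are expensive to hide, which is what the $d$-normalized representation is designed to address.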
CoGNN: Towards Secure and Efficient Collaborative Graph Learning
Collaborative graph learning represents a learning paradigm where multiple parties jointly train a graph neural network (GNN) using their own proprietary graph data. To honor the data privacy of all parties, existing solutions for collaborative graph learning are either based on federated learning (FL) or secure machine learning (SML). Although promising in terms of efficiency and scalability due to their distributed training scheme, FL-based approaches fall short in providing provable security guarantees and achieving good model performance. Conversely, SML-based solutions, while offering provable privacy guarantees, are hindered by their high computational and communication overhead, as well as poor scalability as more parties participate.
To address the above problem, we propose CoGNN, a novel framework that simultaneously reaps the benefits of both FL-based and SML-based approaches. At a high level, CoGNN is enabled by (i) a novel message passing mechanism that can obliviously and efficiently express the vertex data propagation/aggregation required in GNN training and inference and (ii) a two-stage Dispatch-Collect execution scheme to securely decompose and distribute the GNN computation workload for concurrent and scalable executions. We further instantiate the CoGNN framework, together with customized optimizations, to train Graph Convolutional Network (GCN) models. Extensive evaluations on three graph datasets demonstrate that compared with the state-of-the-art (SOTA) SML-based approach, CoGNN reduces running time by up to $123\times$ and communication cost per party by up to $522\times$. Meanwhile, the GCN models trained using CoGNN have nearly identical accuracies as the plaintext global-graph training, yielding up to $11.06\%$ accuracy improvement over the GCN models trained via federated learning.
FABESA: Fast (and Anonymous) Attribute-Based Encryption under Standard Assumption
Attribute-Based Encryption (ABE) provides fine-grained access control to encrypted data and finds applications in various domains. The practicality of ABE schemes hinges on the balance between security and efficiency. The state-of-the-art adaptive secure ABE scheme, proven to be adaptively secure under standard assumptions (FAME, CCS'17), is less efficient compared to the fastest one (FABEO, CCS'22) which is only proven secure under the Generic Group Model (GGM). These traditional ABE schemes focus solely on message privacy. To address scenarios where attribute value information is also sensitive, Anonymous ABE (${\rm A}^{\rm 2}$BE) ensures the privacy of both the message and attributes. However, most ${\rm A}^{\rm 2}$BE schemes suffer from intricate designs with low efficiency, and the security of the fastest key-policy ${\rm A}^{\rm 2}$BE (proposed in FEASE, USENIX'24) relies on the GGM.
In this paper, we propose novel fast key-policy and ciphertext-policy ABE schemes that (1) support both AND and OR gates for access policies, (2) have no restriction on the size and type of policies or attributes, (3) achieve adaptive security under the standard DLIN assumption, and (4) only need 4 pairings for decryption. As our ABE constructions automatically provide ciphertext anonymity, we easily transform our ABE schemes to ${\rm A}^{\rm 2}$BE schemes while maintaining the same features and high-level efficiency.
The implementation results show that all our schemes achieve the best efficiency among schemes with adaptive security proven under standard assumptions. Specifically, our ABE schemes perform better than FAME and are close to FABEO. Our key-policy ${\rm A}^{\rm 2}$BE scheme performs close to the one in FEASE and our ciphertext-policy ${\rm A}^{\rm 2}$BE outperforms the state-of-the-art (Cui et al., ProvSec'16).
DualRing-PRF: Post-Quantum (Linkable) Ring Signatures from Legendre and Power Residue PRFs
Ring signatures are one of the crucial cryptographic primitives used in the design of privacy-preserving systems. Such a signature scheme allows a signer to anonymously sign a message on behalf of a spontaneously formed group. It not only ensures the authenticity of the message but also conceals the true signer within the group. An important extension of ring signatures is linkable ring signatures, which prevent a signer from signing twice without being detected (under some constraints). Linkable ring signatures offer advantages in applications where full anonymity might jeopardise the intended purpose, such as privacy-oriented cryptocurrencies like Monero.
In this work, we introduce post-quantum ring signature (DualRing-PRF) and linkable ring signature (DualRingL-PRF) schemes whose security relies solely on symmetric-key primitives (namely, the Legendre PRF and power residue PRF). Our construction of the ring signature departs from previous approaches with similar security assumptions, offering the most competitive signature sizes for small and medium-sized rings. In particular, for a ring size of 16, DualRing-PRF has a communication overhead 1.4 times smaller than the state-of-the-art scheme proposed by Goel et al. (PETS’21). Furthermore, we demonstrate the extension of DualRing-PRF to incorporate linkability and non-slanderability. Compared to the existing one-time traceable ring signature (a variant of linkable ring signature) by Scafuro and Zhang (ESORICS’21), our construction supports many-time signing and achieves significantly smaller signature sizes when the ring size exceeds 16. This advantage becomes more pronounced as the ring size increases.
Side-Channel and Fault Resistant ASCON Implementation: A Detailed Hardware Evaluation (Extended Version)
In this work, we present various hardware implementations for the lightweight cipher ASCON, which was recently selected as the winner of the NIST organized Lightweight Cryptography (LWC) competition. We cover encryption + tag generation and decryption + tag verification for the ASCON AEAD and also the ASCON hash function. On top of the usual (unprotected) implementation, we present side-channel protection (threshold countermeasure) and triplication/majority-based fault protection. To the best of our knowledge, this is the first protected hardware implementation of ASCON with respect to side-channel and fault injection protection. The side-channel and fault protections work orthogonal to each other (i.e., either one can be turned on/off without affecting the other). We present ASIC and FPGA benchmarks for all our implementations (hash and AEAD) with/without countermeasures for varying input sizes.
SoCureLLM: An LLM-driven Approach for Large-Scale System-on-Chip Security Verification and Policy Generation
Contemporary methods for hardware security verification struggle with adaptability, scalability, and availability due to the increasing complexity of modern system-on-chips (SoCs). Large language models (LLMs) have emerged as a viable approach to address these shortcomings in security verification because of their natural language understanding, advanced reasoning, and knowledge transfer capabilities. However, their application to large designs is limited by inherent token limitation and memorization constraints. In this paper, we introduce SoCureLLM, an LLM-based framework that excels in identifying security vulnerabilities within SoC designs and creating a comprehensive security policy database. Our framework is adaptable and adept at processing varied, large-scale designs, overcoming the abovementioned issues of LLMs. In evaluations, SoCureLLM detected 76.47% of security bugs across three vulnerable RISC-V SoCs, outperforming the state-of-the-art security verification methods. Furthermore, assessing three additional large-scale RISC-V SoC designs against various threat models led to the formulation of 84 novel security policies, enriching the security policy database. Previously requiring extensive manual effort to craft, these newly generated security policies can be used as guidelines for developing secured SoC designs.
SoK: Programmable Privacy in Distributed Systems
This Systematization of Knowledge conducts a survey of contemporary distributed blockchain protocols, with the aim of identifying cryptographic and design techniques which practically enable both expressive programmability and user data confidentiality. To facilitate a framing which supports the comparison of concretely very different protocols, we define an epoch-based computational model in the form of a flexible UC-style ideal functionality which divides the operation of privacy-preserving networks into three phases: Independent, Mediated, and Global computation. Our analysis of protocols focuses in particular on features of the Mediated computation phase, which provides the facility to execute non-trivial program logic on private inputs from multiple users. Specifically, we compare implementations in different protocols for private limit order auctions, which we find to be a representative application which is common and relatively simple, but which exhibits adversarial dynamics which demonstrate the capabilities of a non-trivial Mediated computation mechanism. In our analysis, we identify four protocols representative of different high-level approaches used to implement Mediated computations. We compare protocols according to the degree and flexibility of programmability, the privacy properties achieved, and the security assumptions required for correct operation. We conclude by offering recommendations and best practices for future programmable privacy designs.
Hadamard Product Arguments and Their Applications
This paper introduces transparent and efficient arguments for Hadamard products between committed vectors from two source groups. For vectors of length $n$, the proofs consist of $\mathcal{O}(\log n)$ target group elements and $\mathcal{O}(1)$ additional elements. The verifier's workload is dominated by $\mathcal{O}(\log n)$ multi-exponentiations in the target group and $\mathcal{O}(1)$ pairings. We prove our security under the standard SXDH assumption. Additionally, we propose an aggregator for Groth16 pairing-based zk-SNARKs and a proof aggregation technique for the general case of the KZG pairing-based polynomial commitment scheme using our Hadamard product arguments. Both applications support logarithmic-sized aggregated proofs without requiring an additional trusted setup, significantly reducing the verifier’s pairing operations to $\mathcal{O}(1)$.
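The relation certified by a Hadamard product argument is simply the entry-wise product $c_i = a_i \cdot b_i$; in the clear it is trivial to check, and the point of the argument is establishing it over committed vectors with an $O(\log n)$-size proof. A toy illustration of the relation itself (not the pairing-based protocol):

```python
def hadamard(a, b):
    # Entry-wise (Hadamard) product of two equal-length vectors.
    assert len(a) == len(b)
    return [x * y for x, y in zip(a, b)]

a, b = [1, 2, 3], [4, 5, 6]
c = hadamard(a, b)
assert c == [4, 10, 18]
# The argument in the paper certifies exactly this relation, but over
# commitments to a, b, c in pairing groups rather than plaintext vectors.
```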
FaultyGarble: Fault Attack on Secure Multiparty Neural Network Inference
The success of deep learning across a variety of applications, including inference on edge devices, has led to increased concerns about the privacy of users’ data and deep learning models. Secure multiparty computation allows parties to remedy this concern, resulting in a growth in the number of such proposals and improvements in their efficiency. The majority of secure inference protocols relying on multiparty computation assume that the client does not deviate from the protocol and passively attempts to extract information. Yet clients, driven by different incentives, can act maliciously to actively deviate from the protocol and disclose the deep learning model owner’s private information. Interestingly, faults are well understood in multiparty computation-related literature, although fault attacks have not been explored. Our paper introduces the very first fault attack against secure inference implementations relying on garbled circuits as a prime example of multiparty computation schemes. In this regard, laser fault injection coupled with a model-extraction attack is successfully mounted against existing solutions that have been assumed to be secure against active attacks. Notably, the number of queries required for the attack is equal to that of the best model extraction attack mounted against the secure inference engines under the semi-honest scenario.
Volatile and Persistent Memory for zkSNARKs via Algebraic Interactive Proofs
In verifiable outsourcing, an untrusted server runs an expensive computation and produces a succinct proof (called a SNARK) of the results. In many scenarios, the computation accesses a RAM that the server maintains a commitment to (persistent RAM) or that is initially zero (volatile RAM). But, SNARKs for such scenarios are limited by the high overheads associated with existing techniques for RAM checking. We develop new proofs about volatile, persistent, and sparse persistent RAM that reduce SNARK proving times. Our results include both asymptotic and concrete improvements---including a proving time reduction of up to 51.3$\times$ for persistent RAM. Along the way, we apply two tools that may be of independent interest. First, we generalize an existing construction to convert any algebraic interactive proof (AIP) into a SNARK. An AIP is a public-coin, non-succinct, interactive proof with a verifier that is an arithmetic circuit. Second, we apply Bézout's identity for polynomials to construct new AIPs for uniqueness and disjointness. These are useful for showing the independence of accesses to different addresses.
Distributed PIR: Scaling Private Messaging via the Users' Machines
This paper presents a new architecture for metadata-private messaging that counters scalability challenges by offloading most computations to the clients. At the core of our design is a distributed private information retrieval (PIR) protocol, where the responder delegates its work to alleviate PIR's computational bottleneck and catches misbehaving delegates by efficiently verifying their results. We introduce DPIR, a messaging system that uses distributed PIR to let a server storing messages delegate the work to the system's clients, such that each client contributes processing proportional to the number of messages it reads. The server removes clients returning invalid results, which DPIR leverages to integrate an incentive mechanism for honest client behavior by conditioning messaging through DPIR on correctly processing PIR requests from other users. The result is a metadata-private messaging system that asymptotically improves scalability over prior work with the same threat model. We show through experiments on a prototype implementation that DPIR concretely improves performance by $3.25\times$ and $4.31\times$ over prior work and that the performance gap grows with the user base size.
Improved Boomerang Attacks on 6-Round AES
The boomerang attack is a cryptanalytic technique which allows combining two short high-probability differentials into a distinguisher for a large number of rounds. Since its introduction by Wagner in 1999, it has been applied to many ciphers. One of the best-studied targets is a 6-round variant of AES, on which the boomerang attack is outperformed only by the dedicated Square attack. Recently, two new variants of the boomerang attack were presented: retracing boomerang (Eurocrypt'20) and truncated boomerang (Eurocrypt'23). These variants seem incompatible: the former achieves lower memory complexity by throwing away most of the data in order to force dependencies, while the latter achieves lower time complexity by using large structures, which inevitably leads to a large memory complexity.
In this paper we show that elements of the two techniques can be combined to get `the best of the two worlds' – the practical memory complexity of the retracing attack and the lower time complexity of the truncated attack. We obtain an attack with data complexity of $2^{57}$ (compared to $2^{59}$ and $2^{55}$ of truncated and retracing boomerang, respectively), memory complexity of $2^{33}$ (compared to $2^{59}$ and $2^{31}$), and time complexity of $2^{61}$ (compared to $2^{61}$ and $2^{80}$). This is the second-best attack on 6-round AES, after the Square attack.
PIR with Client-Side Preprocessing: Information-Theoretic Constructions and Lower Bounds
It is well-known that classical Private Information Retrieval (PIR) schemes without preprocessing must suffer from linear server computation per query. Moreover, any such single-server PIR with sublinear bandwidth must rely on public-key cryptography. Several recent works showed that these barriers pertaining to classical PIR can be overcome by introducing a preprocessing phase where each client downloads a small hint that helps it make queries subsequently. Notably, the Piano PIR scheme (and subsequent improvements) showed that with such a client-side preprocessing, not only can we have PIR with sublinear computation and bandwidth per query, but somewhat surprisingly, we can also get it using only symmetric-key cryptography (i.e., one-way functions).
In this paper, we take the question of minimizing cryptographic assumptions to an extreme. In particular, we are the first to explore the landscape of information theoretic single-server preprocessing PIR. We make contributions on both the upper- and lower-bounds fronts. First, we show new information-theoretic constructions with various non-trivial performance tradeoffs between server computation, client space and bandwidth. Second, we prove a (nearly) tight lower bound on the tradeoff between the client space and bandwidth in information-theoretic constructions. Finally, we prove that any computational scheme that overcomes the information-theoretic lower bound and satisfies a natural syntactic requirement (which is met by all known constructions) would imply a hard problem in the complexity class SZK. In particular, this shows that Piano achieves (nearly) optimal client space and bandwidth tradeoff subject to making a black-box use of a one-way function. Some of the techniques we use for the above results can be of independent interest.
ZLR: a fast online authenticated encryption scheme achieving full security
Online authenticated encryption has been considered of practical relevance in lightweight environments due to low latency and constant memory usage. In this paper, we propose a new tweakable block cipher-based online authenticated encryption scheme, dubbed ZLR, and its domain separation variant, dubbed DS-ZLR. ZLR and DS-ZLR follow the Encrypt-Mix-Encrypt paradigm. However, in contrast to existing schemes using the same paradigm such as ELmE and CoLM, ZLR and DS-ZLR enjoy $n$-bit security by using larger internal states with an efficient ZHash-like hashing algorithm. In this way, $2n$-bit blocks are processed with only a single primitive call for hashing and two primitive calls for encryption and decryption, when they are based on an $n$-bit tweakable block cipher using $n$-bit (resp. $2n$-bit) tweaks for ZLR (resp. DS-ZLR). Furthermore, they support pipelined computation as well as online nonce-misuse resistance. To the best of our knowledge, ZLR and DS-ZLR are the first pipelineable tweakable block cipher-based online authenticated encryption schemes of rate 2/3 that provide $n$-bit security with online nonce-misuse resistance.
Towards Optimal Parallel Broadcast under a Dishonest Majority
The parallel broadcast (PBC) problem generalises the classic Byzantine broadcast problem to the setting where all $n$ nodes broadcast a message and deliver $O(n)$ messages. PBC arises naturally in many settings including multi-party computation. Recently, Tsimos, Loss, and Papamanthou (CRYPTO 2022) showed PBC protocols with improved communication, against an adaptive adversary who can corrupt all but a constant fraction $\epsilon$ of nodes (i.e., $f < (1 - \epsilon)n$). However, their study is limited to single-bit messages, and their protocols have large polynomial overhead in the security parameter $\kappa$: their TrustedPBC protocol achieves $\tilde{O}(n^2 \kappa^4)$ communication and $O(\kappa\log n)$ rounds. Since these factors of $\kappa$ are in practice often close (or at least polynomially related) to $n$, they add a significant overhead. In this work, we propose three parallel broadcast protocols for $L$-bit messages, for any size $L$, that significantly improve the communication efficiency of the state-of-the-art.
We first propose a new extension protocol that uses a $\kappa$-bit PBC as a black box and achieves i) communication complexity of $O(L n^2 + \mathcal{P}(\kappa))$, where $\mathcal{P}(\kappa)$ is the communication complexity of the $\kappa$-bit PBC, and ii) round complexity same as the $\kappa$-bit PBC. By comparison, the state-of-the-art extension protocol for regular broadcast (Nayak et al., DISC 2020) incurs $O(n)$ additional rounds of communication. Next, we propose a protocol that is secure against a static adversary, for $\kappa$-bit messages with $O(n^2 \kappa^{1+K} + n\kappa^3 + \kappa^4)$ communication and $O(\kappa)$ round complexity, where $K$ is an arbitrarily small constant such that $0<K<1$. Finally, we propose an adaptively-secure protocol for $\kappa$-bit messages with $\tilde{O}(n^2\kappa^2 + n\kappa^3)$ communication overhead and $O(\kappa \log{n})$ round complexity by modifying and improving the next-best protocol TrustedPBC in several key ways. Notably, our latter two protocols are $\tilde{O}(\kappa^{2 - K})$ and $O(\kappa^2)$ times more communication-efficient, respectively, than the state-of-the-art protocols while achieving the same round complexity.
ICICLE v2: Polynomial API for Coding ZK Provers to Run on Specialized Hardware
Polynomials play a central role in cryptography. In the context of Zero Knowledge Proofs (ZKPs), protocols can be exclusively expressed using polynomials, making them a powerful abstraction tool, as demonstrated in most ZK research papers. Our first contribution is a high-level framework that enables practitioners to implement ZKPs in a more natural way, based solely on polynomial primitives.
ZK provers are considered computationally intensive algorithms with a high degree of parallelization. These algorithms benefit significantly from hardware acceleration, and deployed ZK systems typically include specialized hardware to optimize the performance of the prover code. Our second contribution is leveraging our polynomial API to abstract away low-level hardware primitives and automate their memory management. This device-agnostic design allows ZK engineers to prototype and build solutions while taking advantage of the performance gains offered by specialized hardware, such as GPUs and FPGAs, without needing to understand the hardware implementation details.
Finally, our polynomial API is integrated into version 2 of the ICICLE library and is running in production. This paper also serves as a comprehensive documentation for the ICICLE v2 polynomial API.
Efficient Secure Communication Over Dynamic Incomplete Networks With Minimal Connectivity
We study the problem of implementing unconditionally secure reliable and private communication (and hence secure computation) in dynamic incomplete networks.
Our model assumes that the network is always $k$-connected, for some $k$, but the concrete connection graph is adversarially chosen in each round of interaction.
We show that, with $n$ players and $t$ malicious corruptions, perfectly secure communication is possible if and only if $k > 2t$. This disproves a conjecture from earlier work, that $k> 3t$ is necessary. Our new protocols are much more efficient than previous work; in particular, we improve the round and communication complexity by an exponential factor (in $n$) in both the semi-honest and the malicious corruption setting, leading to protocols with polynomial complexity.
A Note on (2, 2)-isogenies via Theta Coordinates
In this paper, we revisit the algorithm for computing chains of $(2, 2)$-isogenies between products of elliptic curves via theta coordinates proposed by Dartois et al. For each fundamental block of this algorithm, we provide an explicit inversion-free version. In addition, we exploit a novel $x$-only ladder technique to speed up the computation of the gluing isogeny. Finally, we present a mixed optimal strategy, which combines the inversion-elimination tool with the original methods to execute a chain of $(2, 2)$-isogenies.
We make a cost analysis and present a concrete comparison between our methods and the previously known methods for inversion elimination. Furthermore, we implement the mixed optimal strategy for benchmarking. The results show that when computing $(2, 2)$-isogeny chains of lengths 126, 208 and 632, compared to Dartois, Maino, Pope and Robert's original implementation, our techniques reduce the number of multiplications over the base field $\mathbb{F}_p$ by $30.8\%$, $20.3\%$ and $9.9\%$, respectively. Even against the updated version, which employs their inversion-free methods, our techniques retain a slight advantage.
Cryptography at the Crossroads: Ethical Responsibility, the Cypherpunk Movement and Institutions
This paper explores the intersection of cryptographic work with ethical responsibility and political activism, inspired by the Cypherpunk Manifesto and Phillip Rogaway's analysis of the moral character of cryptography. The discussion encompasses the historical context of cryptographic development, the philosophical underpinnings of the cypherpunk ideology, and contemporary challenges posed by mass surveillance and privacy concerns. By examining these facets, the paper calls for a renewed commitment to developing cryptographic solutions that prioritize human rights and societal good.
Probabilistic Attacks and Enhanced Security for "Private Set Intersection in the Internet Setting from Lightweight Oblivious PRF"
Private Set Intersection (PSI) is an important research topic in privacy-preserving computation. Its main function is to allow two parties to compute the intersection of their private sets without revealing any other private information. Therefore, PSI can be applied to various real-world scenarios.
Chase and Miao presented an impressive construction, ``Private set intersection in the Internet setting from lightweight oblivious prf'' (CM20 for short), at Crypto 2020, highlighting its convenient structure and optimal communication cost. However, it does have some security vulnerabilities. Let $X$ be the private set of user $P_1$ and $Y$ the private set of user $P_2$. The CM20 protocol uses a pseudorandom function (PRF) to encode each element $x\in X$ of $P_1$ as $D_1$ and each element $y\in Y$ of $P_2$ as $D_2$, so that $D_1 = D_2$ whenever $x=y$. It then sends random data $F_1$ to user $P_1$ and random data $F_2$ to user $P_2$ using a random oblivious transfer technique. User $P_2$ computes $\delta=D_2\oplus F_2$ and sends $\delta$ to user $P_1$, and user $P_1$ uses $\delta$ and $F_1$ to re-encrypt $D_1$. This is repeated until $P_1$ has re-encrypted every element of $X$; the results are then packaged and sent to $P_2$, who completes the private set intersection. However, if an adversary obtains $\delta$ and $F_2$ by any means, they can recover the PRF encoding of the user's private data, and the recovery process is non-trivial. This significantly weakens the security of the CM20 protocol.
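The weakness described above can be illustrated with a toy sketch, with SHA-256 standing in for the OPRF and fresh random bytes standing in for the OT-derived pad; the names and values here are illustrative, not CM20's actual construction. Whoever holds both $\delta$ and $F_2$ recovers $D_2$ with a single XOR:

```python
import hashlib
import secrets

def prf(key: bytes, x: bytes) -> bytes:
    # toy PRF: a keyed hash, standing in for the OPRF used in CM20
    return hashlib.sha256(key + x).digest()

key = secrets.token_bytes(16)
y = b"alice@example.com"              # P2's private element (illustrative)
D2 = prf(key, y)                      # PRF encoding of y
F2 = secrets.token_bytes(32)          # random pad P2 received via OT
delta = bytes(a ^ b for a, b in zip(D2, F2))  # value P2 sends over the wire

# The attack surface: anyone holding both delta and F2 recovers D2 exactly.
recovered = bytes(a ^ b for a, b in zip(delta, F2))
assert recovered == D2
```

This is only the recovery step; turning recovered PRF encodings into guesses about the underlying elements is what the paper's probabilistic attack addresses.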
In this paper, we make three main contributions. First, based on the above analysis, we present a method for attacking CM20, called {\em probabilistic attacks}. This attack is based on estimating and analysing the frequency distribution of the encrypted data from the PRF and the probability distribution of the original private data, and determining the relationship between the two. Although not 100\% effective, this method of attack poses a significant threat to the security of user data.
Secondly, we introduce a new tool called the {\em perturbed pseudorandom generator} (PPRG). We show that the PPRG can overcome probabilistic attacks by replacing the random oblivious transfer and one of the hash functions (originally there were two) in CM20.
Finally, we provide a dedicated indistinguishability against chosen-plaintext attack (IND-CPA) security model for this PSI protocol. The efficiency analysis shows that the proposed PSI is comparable to CM20's PSI, whether on a PC, Mac, tablet, or mobile phone.
Fast SNARK-based Non-Interactive Distributed Verifiable Random Function with Ethereum Compatibility
Distributed randomness beacons (DRBs) are fundamental for various decentralised applications, such as consensus protocols, decentralised gaming and lotteries, and collective governance protocols. These applications are heavily used on modern blockchain platforms.
This paper presents the so far most efficient direct construction and implementation of a non-interactive distributed verifiable random function (NI-DVRF) that is fully compatible with Ethereum. Our NI-DVRF scheme adopts pairings and combines techniques from secret sharing, SNARKs, and BLS signatures. The security properties of the resulting NI-DVRF scheme are formally modelled and proven in the random oracle model under standard pairing-based assumptions.
To justify the efficiency and cost claims and more generally its adoption potential in practice, the proposed NI-DVRF scheme was implemented in Rust and Solidity. Our implementation is highly optimised and is currently being investigated for deployment on the multichain layer-2 scaling solution provided by Boba Network to power its DRB service zkRand. Our experimental analysis, therefore, also evaluates performance and scalability properties of the proposed NI-DVRF and its implementation.
Consolidated Linear Masking (CLM): Generalized Randomized Isomorphic Representations, Powerful Degrees of Freedom and Low(er)-cost
Masking is a widely adopted countermeasure against side-channel analysis (SCA) that protects cryptographic implementations from information leakage. However, current masking schemes often incur significant overhead in terms of electronic cost. RAMBAM, a recently proposed masking technique that fits elegantly with the AES algorithm, offers ultra-low latency/area by utilizing redundant representations of finite field elements. This paper presents a comprehensive generalization of RAMBAM and various other masking schemes within a unified framework and a mathematical representation known as Consolidated Linear Masking (CLM), where masking schemes are formalized by their encoding. We establish a theoretical foundation for CLM, linking randomized isomorphic (code) representations and the entropy provided by the redundancy to a revised notion of masking order. Our analysis reveals that RAMBAM, like various other masking constructions, is a specific instance of CLM, thus paving the way for significant enhancements. For example, a $1^{st}$-order secure design can be achieved almost without increasing the size of the representation of the variables. This property scales up to any order and is versatile. We demonstrate how CLM enables: (1) randomized selection of the isomorphic field for improved security; (2) flexible choice of the randomization polynomial; (3) embedded mask-refreshing via the randomized isomorphic representation that reduces randomness requirements significantly as well as improves performance; (4) a wider range of isomorphic randomized mappings that significantly increases the available randomization space compared to RAMBAM; (5) considerable improvement in security against fault-injection attacks and inherent security against probing adversaries, i.e., more probes are required. In addition, our framework addresses ways to improve the brute-force parameter choices in the original RAMBAM.
By offering a unifying theoretical perspective for masking and practical enhancements, this work advances the design of efficient and secure masking countermeasures against SCA threats.
Diffuse Some Noise: Diffusion Models for Measurement Noise Removal in Side-channel Analysis
Resilience against side-channel attacks is an important consideration for cryptographic implementations deployed in devices to which attackers may gain physical access. However, noise in side-channel measurements has a significant impact on the complexity of these attacks, especially when an implementation is protected with masking. Therefore, it is important to assess the ability of an attacker to deal with noise. While some previous works have considered approaches to remove (some) noise from measurements, these approaches generally require considerable expertise to be employed effectively or necessitate the ability of the attacker to capture a 'clean' set of traces without the noise.
In this paper, we introduce a method for utilizing diffusion models to remove measurement noise from side-channel traces in a fully non-profiled setting. Denoising traces using our method considerably lowers the complexity of mounting attacks in both profiled and non-profiled settings. For instance, for a collision attack against the ASCADv2 dataset, we reduced the number of traces required to retrieve the key by 40%, and we showed similar improvements for ESHARD using a state-of-the-art MORE attack. Furthermore, we analyze the scenarios in which our method is useful and generate insights into how diffusion networks denoise traces.
Efficient and Secure Post-Quantum Certificateless Signcryption for Internet of Medical Things
Internet of Medical Things (IoMT) has gained significant research focus in both academic and medical institutions. Nevertheless, the sensitive data involved in IoMT raises concerns regarding user validation and data privacy. To address these concerns, certificateless signcryption (CLSC) has emerged as a promising solution, offering authenticity, confidentiality, and unforgeability. Unfortunately, most existing CLSC schemes are impractical for IoMT due to their heavy computational and storage requirements. Additionally, these schemes are vulnerable to quantum computing attacks. Therefore, designing an efficient post-quantum CLSC scheme remains an open problem.
In this work, we propose PQ-CLSC, a novel post-quantum CLSC scheme that ensures quantum safety for IoMT. Our proposed design facilitates secure transmission of medical data between physicians and patients, effectively validating user legitimacy and minimizing the risk of private information leakage. To achieve this, we leverage lattice sampling algorithms and hash functions to generate the partial secret key and then employ the sign-then-encrypt method to obtain the ciphertext.
We also formally model and prove the security of our design, including indistinguishability against chosen-ciphertext attacks (IND-CCA2) and existential unforgeability against chosen-message attacks (EU-CMA). Finally, through comprehensive performance evaluation, our signcryption overhead is only 30%-55% that of prior art, while our computation overhead is around 45% that of other existing schemes. The evaluation results demonstrate that our solution is practical and efficient.
Malicious Security for PIR (almost) for Free
Private Information Retrieval (PIR) enables a client to retrieve a database element from a semi-honest server while hiding the element being queried from the server. Maliciously-secure PIR (mPIR) [Colombo et al., USENIX Security '23] strengthens the guarantees of plain (i.e., semi-honest) PIR by ensuring that even a misbehaving server (a) cannot compromise client privacy via selective-failure attacks, and (b) must answer every query *consistently* (i.e., with respect to the same database). These additional security properties are crucial for many real-world applications.
In this work we present a generic compiler that transforms any PIR scheme into an mPIR scheme in a black-box manner, with minimal overhead, and without requiring additional cryptographic assumptions. Since mPIR trivially implies PIR, our compiler establishes the equivalence of mPIR and PIR. By instantiating our compiler with existing PIR schemes, we immediately obtain mPIR schemes with $O(N^\epsilon)$ communication cost. In fact, by applying our compiler to a recent doubly-efficient PIR [Lin et al., STOC '23], we are able to construct a *doubly-efficient* mPIR scheme that requires only $\text{polylog}(N)$ communication and server and client computation. In comparison, all prior works incur an $\Omega(\sqrt{N})$ cost in these metrics.
Our compiler makes use of smooth locally-decodable codes (LDCs) that have a robust decoding procedure. We term these codes "subcode"-LDCs, because they are LDCs where the query responses are from an error-correcting code. This property is shared by Reed--Muller codes (whose query responses are Reed--Solomon codewords) and more generally lifted codes.
Applying our compiler requires us to consider decoding in the face of *non-signaling adversaries*, for reasons analogous to the need for non-signaling PCPs in the succinct-argument literature. We show how to construct such decoders for Reed--Muller codes, and more generally for smooth locally-decodable codes that have a robust decoding procedure.
Shared OT and Its Applications to Unconditional Secure Integer Equality, Comparison and Bit-Decomposition
We present unconditionally perfectly secure protocols in the semi-honest setting for several functionalities: (1) private elementwise equality; (2) private bitwise integer comparison; and (3) bit-decomposition. These protocols are built upon a new concept called Shared Oblivious Transfer (Shared OT). Shared OT extends one-out-of-N string OT by replacing strings with integers modulo $M$ and allowing additive secret-sharing of all inputs and outputs. These extensions can be implemented by simple local computations without incurring additional OT invocations. We believe our Shared OT may be of independent interest.
Our protocols achieve the best round, communication, and computational complexities among all protocols secure in a similar setting. Moreover, all of our protocols involve either 2 or 3 rounds.
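As a rough illustration of the Shared OT interface, the following sketch models the functionality as an ideal dealer: all inputs and the output are additively secret-shared integers modulo $M$. This is not the paper's protocol, which realizes the functionality without any trusted party; the modulus and vector are illustrative.

```python
import secrets

M = 2**16  # share modulus (illustrative toy parameter)

def share(v, mod=M):
    # additive 2-party secret sharing of v modulo mod
    r = secrets.randbelow(mod)
    return (r, (v - r) % mod)

def open_shares(a, b, mod=M):
    return (a + b) % mod

def shared_ot(vec0, vec1, c0, c1, N):
    # ideal Shared-OT functionality: inputs are additive shares of a
    # length-N vector and of a choice index c; output is a fresh
    # additive sharing of vec[c]
    vec = [open_shares(a, b) for a, b in zip(vec0, vec1)]
    c = (c0 + c1) % N
    return share(vec[c])

x = [7, 42, 13, 99]
p0, p1 = [], []
for v in x:
    a, b = share(v)
    p0.append(a)
    p1.append(b)

c0, c1 = 3, 3                       # shares of the choice index c = 2 (mod 4)
o0, o1 = shared_ot(p0, p1, c0, c1, len(x))
assert open_shares(o0, o1) == 13    # reconstructs x[c] = x[2]
```

Neither output share alone reveals $x_c$; only their sum modulo $M$ does, which is the property the equality and comparison protocols compose over.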
Secure Account Recovery for a Privacy-Preserving Web Service
If a web service is so secure that it does not even know—and does not want to know—the identity and contact info of its users, can it still offer account recovery if a user forgets their password? This paper is the culmination of the authors' work to design a cryptographic protocol for account recovery for use by a prominent secure matching system: a web-based service that allows survivors of sexual misconduct to become aware of other survivors harmed by the same perpetrator. In such a system, the list of account-holders must be safeguarded, even against the service provider itself.
In this work, we design an account recovery system that, on the surface, appears to follow the typical workflow: the user types in their email address, receives an email containing a one-time link, and answers some security questions. Behind the scenes, the defining feature of our recovery system is that the service provider can perform email-based account validation without knowing, or being able to learn, a list of users' email addresses. Our construction uses standardized cryptography for most components, and it has been deployed in production at the secure matching system.
As a building block toward our main construction, we design a new cryptographic primitive that may be of independent interest: an oblivious pseudorandom function that can either have a fully-private input or a partially-public input, and that reaches the same output either way. This primitive allows us to perform online rate limiting for account recovery attempts, without imposing a bound on the creation of new accounts. We provide an open-source implementation of this primitive and provide evaluation results showing that the end-to-end interaction time takes 8.4-60.4 ms in fully-private input mode and 3.1-41.2 ms in partially-public input mode.
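A minimal sketch of the blinded-evaluation idea behind such an OPRF, in its fully-private input mode only: this is a toy 2HashDH-style example over a tiny Schnorr group, with parameters chosen for readability; the partially-public input mode of the paper's primitive is not reproduced here.

```python
import hashlib
import secrets

P, q = 1019, 509  # toy safe prime P = 2q + 1 (illustrative, not secure)

def hash_to_group(x: bytes) -> int:
    # hash into Z_P, then square into the order-q subgroup
    h = int.from_bytes(hashlib.sha256(x).digest(), "big") % P
    return pow(h, 2, P)

k = secrets.randbelow(q - 1) + 1        # server's OPRF key

def server_eval(blinded: int) -> int:
    # the server sees only a blinded group element, never x
    return pow(blinded, k, P)

x = b"secret input"
r = secrets.randbelow(q - 1) + 1        # client's fresh blinding exponent
blinded = pow(hash_to_group(x), r, P)
unblinded = pow(server_eval(blinded), pow(r, -1, q), P)

# unblinding recovers the direct PRF value H(x)^k
assert unblinded == pow(hash_to_group(x), k, P)
```

Because the server only ever exponentiates blinded elements, it can rate-limit evaluations without learning which account is being recovered, which is the role the primitive plays in the recovery workflow.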
Efficient Execution Auditing for Blockchains under Byzantine Assumptions
Security of blockchain technologies primarily relies on decentralization, making them resilient against a subset of entities being taken down or corrupted. Blockchain scaling, crucial to decentralization, has been addressed by architectural changes: the load on the nodes is reduced either by parallelisation, called sharding, or by taking computational load off the main blockchain via rollups. Both sharding and rollups have limitations in terms of decentralization and security.
A crucial component in these architectures is a layer that allows the validity of incoming blocks to be checked efficiently. We provide the first formalization and analysis of ELVES, the auditing layer currently deployed in the Polkadot and Kusama blockchains.
In this layer, “auditing committees” are formed independently for each block, and security relies on the fact that it is prohibitively expensive in expectation for an adversary to make ELVES accept a block that is not valid. In addition, ELVES has the following characteristics: 1) auditing committees wind up orders of magnitude smaller than pre-assigned committees; in fact, the size of the committees adapts automatically to network conditions but remains a low constant in expectation, on the order of tens or low hundreds; 2) although synchronous per se, ELVES tolerates instant adaptive crashes, mirroring realistic network capabilities. Surprisingly, the committee-size analysis of our protocol is “all but simple” and involves a novel strengthening of Cantelli’s inequality, which may be of independent interest.
Designs for practical SHE schemes based on Ring-LWR
The Learning with Errors problem (LWE) and its variants are among the most popular assumptions underlying lattice-based cryptography. The Learning with Rounding problem (LWR) can be thought of as a deterministic variant of LWE. While lattice-based cryptography is known to enable many advanced constructions, constructing Fully Homomorphic Encryption schemes based on LWR remains an under-explored part of the literature. In this work, we present a thorough study of Somewhat Homomorphic Encryption schemes based on Ring-LWR that are the analogue of the Ring-LWE-based BFV scheme. Our main contribution is to present and analyse two new schemes, in the LPR and Regev paradigms. The Regev-type scheme can be seen as a generalisation of the only prior work in this direction (Costache-Smart, 2017). Both our schemes present several improvements compared to this prior work, and in particular we resolve the “tangled modulus” issue in the Costache-Smart scheme that led to unmanageable noise growth. Our schemes inherit the many benefits of being based on LWR, including ease of implementation, avoiding the need for expensive Gaussian sampling, improved resistance to side channels, suitability for hardware, and improved ciphertext size. Indeed, we give a detailed comparison showing that the LPR and Regev-type schemes marginally outperform the BFV scheme in terms of ciphertext size. Moreover, we show that both our schemes support RNS variants, which would make their practical performance competitive with BFV.
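A toy illustration of the LWR idea these schemes build on, with illustrative (not secure) parameters: the Gaussian error of LWE is replaced by deterministic rounding from modulus $q$ down to $p$.

```python
import secrets

# toy (module-free) LWR sample: b = round((p/q) * <a, s>) mod p
q, p, n = 2**12, 2**8, 8   # illustrative parameters, not a secure choice

def lwr_sample(s):
    a = [secrets.randbelow(q) for _ in range(n)]
    inner = sum(ai * si for ai, si in zip(a, s)) % q
    b = round(inner * p / q) % p   # deterministic rounding, no Gaussian noise
    return a, b, inner

s = [secrets.randbelow(q) for _ in range(n)]
a, b, inner = lwr_sample(s)

# the "error" is just the rounding gap, at most 1/2 before reduction mod p
gap = abs(round(inner * p / q) - inner * p / q)
assert gap <= 0.5
assert 0 <= b < p
```

The sampling avoids Gaussian noise generation entirely, which is one source of the implementation and side-channel benefits the abstract lists.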
Flood and Submerse: Distributed Key Generation and Robust Threshold Signature from Lattices
We propose a new framework based on random submersions (that is, projections onto a random subspace blinded by a small Gaussian noise) for constructing verifiable short secret sharing, and we showcase it by constructing efficient threshold lattice-based signatures in the hash-and-sign paradigm, based on noise flooding. This is, to our knowledge, the first hash-and-sign lattice-based threshold signature. Our threshold signature enjoys the very desirable property of robustness, including at key generation. In practice, we are able to construct a robust hash-and-sign threshold signature and provide a typical parameter set for threshold T = 16 with signature size 13kB. Our constructions are provably secure under the standard MLWE assumption in the ROM and only require basic primitives as building blocks. In particular, we do not rely on FHE-type schemes.
Signer Revocability for Threshold Ring Signatures
A t-out-of-n threshold ring signature (TRS) is a type of anonymous signature designed for t signers to jointly sign a message while hiding their identities among n parties that include themselves. However, can a TRS accommodate a signer who wants to revoke his signature or, additionally, sign separately later? Can non-signers be revoked without compromising anonymity? Previous research has only discussed the opposing situations. The present study introduces a novel property for TRS, revocability, addressing the need for improved flexibility and privacy in TRS. Our proposed revocable threshold ring signature (RTRS) scheme is innovative in several ways: (1) it allows a signer to non-interactively revoke their identity and update the signature from t-out-of-n to (t − 1)-out-of-n; (2) it is possible to reduce the ring size and clip non-signers along with revoked signers while maintaining the anonymity level. We analyze and define the boundaries for these operations, and we implement and evaluate our construction. With a sufficiently large ring size, we can optimize the signature size, resulting in better signing performance compared to the extensible signature scheme.
VRaaS: Verifiable Randomness as a Service on Blockchains
Web3 applications, such as on-chain games, NFT minting, and leader elections, necessitate access to unbiased, unpredictable, and publicly verifiable randomness. Despite the broad use cases and huge demand, there is a notable absence of comprehensive treatments of on-chain verifiable randomness services. To bridge this gap, we offer an extensive formal analysis of on-chain verifiable randomness services.
We present the $first$ formalization of on-chain verifiable randomness in the blockchain setting by introducing the notion of Verifiable Randomness as a Service (VRaaS). We formally define VRaaS using an ideal functionality $\mathcal{F}_{\text{VRaaS}}$ in the Universal Composability model. Our definition not only captures the core features of randomness services, such as unbiasability, unpredictability, and public verifiability, but also accounts for many other crucial nuances pertaining to different entities involved, such as smart contracts.
Within our framework we study a generic design of Verifiable Random Function (VRF)-based randomness service - where the randomness requester provides an input on which the randomness is evaluated as VRF output. We show that it does satisfy our formal VRaaS definition. Furthermore, we show that the generic protocol captures many real-world randomness services like Chainlink VRF and Supra dVRF.
Moreover, we investigate the minimality of the framework. Towards that, we first show that the two transactions built into our framework are actually $necessary$ for any randomness service to support the essential qualities. We also discover $practical$ $vulnerabilities$ in other designs, such as the Algorand beacon, Pyth VRF and Band VRF, captured within our framework.
SNARGs under LWE via Propositional Proofs
We construct a succinct non-interactive argument (SNARG) system for every NP language $\mathcal{L}$ that has a propositional proof of non-membership for each $x\notin \mathcal{L}$. The soundness of our SNARG system relies on the hardness of the learning with errors (LWE) problem. The common reference string (CRS) in our construction grows with the space required to verify the propositional proof, and the size of the proof grows poly-logarithmically in the length of the propositional proof.
Unlike most of the literature on SNARGs, our result implies SNARGs for languages $\mathcal L$ with proof length shorter than logarithmic in the deterministic time complexity of $\mathcal L$. Our SNARG improves over prior SNARGs for such ``hard'' NP languages (Sahai and Waters, STOC 2014, Jain and Jin, FOCS 2022) in several ways:
- For languages with polynomial-length propositional proofs of non-membership, our SNARGs are based on a single, polynomial-time falsifiable assumption, namely LWE.
- Our construction handles propositional proofs of super-polynomial length, as long as they have bounded space, under the subexponential LWE assumption.
- Our SNARGs have a transparent setup, meaning that no private randomness is required to generate the CRS.
Moreover, our approach departs dramatically from these prior works: we show how to design SNARGs for hard languages without publishing a program (in the CRS) that has the power to verify $\mathsf{NP}$ witnesses.
The key new idea in our cryptographic construction is what we call a ``locally unsatisfiable extension'' of the $\mathsf{NP}$ verification circuit $\{C_x\}_x$. We say that an $\mathsf{NP}$ verifier has a locally unsatisfiable extension if for every $x\not\in \mathcal L$, there exists an extension $E_x$ of $C_x$ that is not even locally satisfiable in the sense of a local assignment generator [Paneth-Rothblum, TCC 2017]. Crucially, we allow $E_x$ to depend arbitrarily on $x$ rather than being efficiently constructible.
In this work, we show -- via a ``hash-and-BARG'' for a hidden, encrypted computation -- how to build SNARGs for all languages with locally unsatisfiable extensions. We additionally show that propositional proofs of unsatisfiability generically imply the existence of locally unsatisfiable extensions, which allows us to deduce our main results.
As an illustrative example, our results imply a SNARG for the decisional Diffie-Hellman (DDH) language under the LWE assumption.
ElectionGuard: a Cryptographic Toolkit to Enable Verifiable Elections
ElectionGuard is a flexible set of open-source tools that---when used with traditional election systems---can produce end-to-end verifiable elections whose integrity can be verified by observers, candidates, media, and even voters themselves. ElectionGuard has been integrated into a variety of systems and used in actual public U.S. elections in Wisconsin, California, Idaho, Utah, and Maryland as well as in caucus elections in the U.S. Congress. It has also been used for civic voting in the Paris suburb of Neuilly-sur-Seine and for an online election by a Switzerland/Denmark-based organization.
The principal innovation of ElectionGuard is the separation of the cryptographic tools from the core mechanics and user interfaces of voting systems. This separation allows the cryptography to be designed and built by security experts without having to re-invent and replace the existing infrastructure. Indeed, in its preferred deployment, ElectionGuard does not replace the existing vote counting infrastructure but instead runs alongside and produces its own independently-verifiable tallies. Although much of the cryptography in ElectionGuard is, by design, not novel, some significant innovations are introduced which greatly simplify the process of verification.
This paper describes the design of ElectionGuard, its innovations, and many of the lessons learned from its implementation and growing number of real-world deployments.
Arithmetisation of computation via polynomial semantics for first-order logic
We propose a compositional shallow translation from a first-order logic with equality, into polynomials; that is, we arithmetise the semantics of first-order logic. Using this, we can translate specifications of mathematically structured programming into polynomials, in a form amenable to succinct cryptographic verification. We give worked example applications, and we propose a proof-of-concept succinct verification scheme based on inner product arguments.
First-order logic is widely used, because it is simple, highly expressive, and has excellent mathematical properties. Thus a compositional shallow embedding into polynomials suggests a simple, high-level, yet potentially very practical new method for expressing verifiable computation.
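As a flavor of what arithmetising logic can look like, here is a standard textbook-style encoding of connectives and equality as polynomials over a small prime field; this is illustrative, not the paper's actual translation.

```python
# truth values are the field elements 0 and 1 of F_P
P = 101  # small prime field (illustrative)

def NOT(x):
    return (1 - x) % P

def AND(x, y):
    return (x * y) % P

def OR(x, y):
    return (x + y - x * y) % P

def EQ(a, b):
    # the predicate [a = b] via Fermat's little theorem:
    # (a-b)^(P-1) is 0 iff a == b in F_P, and 1 otherwise
    return NOT(pow(a - b, P - 1, P))

assert EQ(5, 5) == 1 and EQ(5, 6) == 0
assert AND(EQ(2, 2), NOT(EQ(1, 3))) == 1
```

Once a formula is a polynomial identity like this, its satisfaction can be checked by polynomial-evaluation arguments, which is what makes the translation amenable to succinct verification.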
MixBuy: Contingent Payment in the Presence of Coin Mixers
A contingent payment protocol involves two mutually distrustful parties, a buyer and a seller, operating on the same blockchain, and a digital product, whose ownership is not tracked on a blockchain (e.g. a digital book). The buyer holds coins on the blockchain and transfers them to the seller in exchange for the product. However, if the blockchain does not hide transaction details, any observer can learn that a buyer purchased some product from a seller.
In this work, we take contingent payment a step further: we consider a buyer who wishes to buy a digital product from a seller routing the payment via an untrusted mixer.
Crucially, we require that said payment is unlinkable, meaning that the mixer (or any other observer) does not learn which buyer is paying which seller. We refer to such a setting as unlinkable contingent payment (UCP).
We present MixBuy, a system that realizes UCP. MixBuy relies on oracle-based unlinkable contingent payment (O-UCP), a novel four-party cryptographic protocol where the mixer pays the seller and the seller provides the buyer with the product only if a semi-trusted notary attests that the buyer has paid the mixer. More specifically, we require four security notions: (i) mixer security that guarantees that if the mixer pays the seller, the mixer must get paid from the buyer; (ii) seller security that guarantees that if the seller delivers the product to the buyer, the seller must get paid from the mixer; (iii) buyer security that guarantees that if the buyer pays the mixer, the buyer must obtain the product; and (iv) unlinkability that guarantees that given a set of buyers and sellers, the mixer should not learn which buyer paid which seller.
We present a provably secure and efficient cryptographic construction for O-UCP. Our construction can be readily used to realize UCP on most blockchains, as it has minimal functionality requirements (i.e., digital signatures and timelocks). To demonstrate the practicality of our construction, we provide a proof of concept for O-UCP and our benchmarks in commodity hardware show that the communication overhead is small (a few kB per message) and the running time is below one second.
Communication Complexity vs Randomness Complexity in Interactive Proofs
In this note, we study the interplay between the communication from the verifier in a general private-coin interactive protocol and the number of random bits it uses in the protocol. Under worst-case derandomization assumptions, we show that it is possible to transform any $I$-round interactive protocol that uses $\rho$ random bits into another one for the same problem with the additional property that the verifier's communication is bounded by $O(I\cdot \rho)$. Importantly, this is done with a minor, logarithmic, increase in the communication from the prover to the verifier and while preserving the randomness complexity. Along the way, we introduce a new compression game between a computationally bounded compressor and a computationally unbounded decompressor, and a new notion of conditioned efficient distributions that may be of independent interest. Our solutions are based on a combination of perfect hashing and pseudorandom generators.
Notes on (failed) attempts to instantiate TLR3
In this short paper we share our experience on instantiating the width-extension construct TLR3, based on a variety of tweakable block cipher constructs. As many of our attempts failed, we highlight the complexity of getting a practical tweakable block cipher and the gap between theory and practice.
DISCO: Dynamic Searchable Encryption with Constant State
Dynamic searchable encryption (DSE) with forward and backward privacy reduces leakages in early-stage schemes. Security enhancement comes with a price -- maintaining updatable keyword-wise state information. State information, if stored locally, incurs significant client-side storage overhead for keyword-rich datasets, potentially hindering real-world deployments.
We propose DISCO, a simple and efficient framework for designing DSE schemes with constant client state. DISCO combines range-constrained pseudorandom functions (RCPRFs) over a global counter and leverages properties of the underlying primitives and index structure to simultaneously achieve forward-and-backward privacy and constant client state. To configure DISCO concretely, we identify a set of RCPRF properties that are vital for the resulting DISCO instantiations. By configuring DISCO with different RCPRFs, we resolve efficiency and usability issues in existing schemes. We further optimize DISCO's concrete efficiency without downgrading security. We implement DISCO constructions and report their performance, showing the trade-offs among different DISCO constructions. Finally, we compare the practical efficiency of DISCO with existing non-constant-state DSE schemes, demonstrating DISCO's competitive efficiency.
Efficient 2PC for Constant Round Secure Equality Testing and Comparison
Secure equality testing and comparison are two important primitives widely used in many secure computation scenarios, such as privacy-preserving machine learning, private set intersection, and secure data mining, etc. This work proposes new constant-round two-party computation (2PC) protocols for secure equality testing and comparison.
Our protocols are designed in the online/offline paradigm.
For 32-bit inputs, the online communication costs of our equality testing and secure comparison protocols are as low as 76 bits (1\% of ABY) and 384 bits (5\% of ABY), respectively.
Our benchmarks show that (i) for 32-bit equality testing, our scheme is $9\times$ faster than Guo \emph{et al.} (EUROCRYPT 2023) and $15\times$ faster than the garbled circuit (GC) with the half-gate optimization (CRYPTO 2015); and (ii) for 32-bit secure comparison, our scheme is $3\times$ faster than Guo \emph{et al.} (EUROCRYPT 2023) and $6\times$ faster than both Rathee \emph{et al.} (CCS 2020) and GC with the half-gate optimization.
Return of the Kummer: a Toolbox for Genus-2 Cryptography
This work expands the machinery we have for isogeny-based cryptography in genus 2 by developing a toolbox of several essential algorithms for Kummer surfaces, the dimension-2 analogue of $x$-only arithmetic on elliptic curves. Kummer surfaces have been suggested in hyperelliptic curve cryptography since at least the 1980s, and recently these surfaces have reappeared to efficiently compute $(2,2)$-isogenies. We construct several essential analogues of techniques used in one-dimensional isogeny-based cryptography, such as pairings, deterministic point sampling and point compression, and give an overview of $(2,2)$-isogenies on Kummer surfaces. We furthermore show how Scholten's construction can be used to transform isogeny-based cryptography over elliptic curves over $\mathbb{F}_{p^2}$ into protocols over Kummer surfaces over $\mathbb{F}_{p}$.
As an example of this approach, we demonstrate that SQIsign verification can be performed completely on Kummer surfaces, and, therefore, that one-dimensional SQIsign verification can be viewed as a two-dimensional isogeny between products of elliptic curves. Curiously, the isogeny is then defined over $\mathbb{F}_{p}$ rather than $\mathbb{F}_{p^2}$. Contrary to expectation, the cost of SQIsign verification using Kummer surfaces does not explode: verification costs only 1.5$\times$ more in terms of finite field operations than the SQIsign variant AprèsSQI, optimised for fast verification. Furthermore, it is plausible that arithmetic on Kummer surfaces can be efficiently vectorised, giving Kummer-based protocols over $\mathbb{F}_{p}$ a potential performance boost on modern architectures, possibly surpassing the performance of elliptic-curve analogues over $\mathbb{F}_{p^2}$.
A Modular Approach to Registered ABE for Unbounded Predicates
Registered attribute-based encryption (Reg-ABE), introduced by Hohenberger et al. (Eurocrypt’23), emerges as a pivotal extension of attribute-based encryption (ABE), aimed at mitigating the key-escrow problem. Although several Reg-ABE schemes with black-box use of cryptography have been proposed so far, there remains a significant gap in the class of achievable predicates between vanilla ABE and Reg-ABE. To narrow this gap, we propose a modular framework for constructing Reg-ABE schemes for a broader class of predicates. Our framework is a Reg-ABE analog of the predicate transformation framework for ABE introduced by Attrapadung (Eurocrypt’19) and later refined by Attrapadung and Tomida (Asiacrypt’20) to function under the standard MDDH assumption. As immediate applications, our framework implies the following new Reg-ABE schemes under the standard MDDH assumption:
– the first Reg-ABE scheme for (non-)monotone span programs with the traditional completely unbounded property;
– the first Reg-ABE scheme for general non-monotone span programs (also with the completely unbounded property) as defined in the case of vanilla ABE by Attrapadung and Tomida (Asiacrypt’20).
Here, the term “completely unbounded” signifies the absence of restrictions on attribute sets for users and policies associated with ciphertexts. From a technical standpoint, we first substantially modify pair encoding schemes (PES), originally devised for vanilla ABE by Attrapadung (Eurocrypt’14), to make them compatible with Reg-ABE. Subsequently, we present a series of predicate transformations through which we can construct complex predicates, particularly those with an “unbounded” characteristic, starting from simple ones. Finally, we define new properties of PES necessary for constructing Reg-ABE schemes and prove that these properties are preserved through the transformations. This immediately implies that we can obtain Reg-ABE schemes for any predicates derived via predicate transformations.
Provably Secure Butterfly Key Expansion from the CRYSTALS Post-Quantum Schemes
This work presents the first provably secure protocol for Butterfly Key Expansion (BKE) -- a tripartite protocol for provisioning users with pseudonymous certificates -- based on post-quantum cryptographic schemes. Our work builds upon the CRYSTALS family of post-quantum algorithms that have been selected for standardization by NIST. We extend those schemes by imbuing them with the additional functionality of public key expansion: a process by which pseudonymous public keys can be derived from a single public key. Our work is the most detailed analysis yet of BKE: we formally define the desired properties of BKE -- unforgeability and unlinkability -- as cryptographic games, and prove that BKE implemented with our modified CRYSTALS schemes satisfies those properties. We implemented our scheme by modifying the Kyber and Dilithium algorithms from the LibOQS project, and we report on our parameter choices and the performance of the schemes.
Quantum-Safe Public Key Blinding from MPC-in-the-Head Signature Schemes
Key blinding produces pseudonymous digital identities by rerandomizing the public keys of a digital signature scheme. It is used in anonymous networks to provide the seemingly contradictory goals of anonymity and authentication. Current key blinding schemes are based on the discrete log assumption. Eaton, Stebila and Stracovsky (LATINCRYPT 2021) proposed the first key blinding schemes from lattice assumptions. However, the large public keys and lack of QROM security mean they are not ready to replace existing solutions.
We present a new way to build key blinding schemes from any MPC-in-the-Head signature scheme. These schemes rely on well-studied symmetric cryptographic primitives and admit short public keys. We prove a general framework for constructing key blinding schemes and for proving their security in the quantum random oracle model (QROM).
We instantiate our framework with the recent AES-based Helium signature scheme (Kales and Zaverucha, 2022). Blinding Helium only adds a minor overhead to the signature and verification time. Both Helium and the aforementioned lattice-based key blinding schemes were only proven secure in the ROM. This makes our results the first QROM proof of Helium and the first fully quantum-safe public key blinding scheme.
Quantum CCA-Secure PKE, Revisited
Security against chosen-ciphertext attacks (CCA) concerns privacy of messages even if the adversary has access to the decryption oracle. While the classical notion of CCA security seems to be strong enough to capture many attack scenarios, it falls short of preserving the privacy of messages in the presence of quantum decryption queries, i.e., when an adversary can query a superposition of ciphertexts.
Boneh and Zhandry (CRYPTO 2013) defined the notion of quantum CCA (qCCA) security to guarantee privacy of messages in the presence of quantum decryption queries. However, their construction is based on an exotic cryptographic primitive (namely, identity-based encryption with security against quantum queries), for which only one instantiation is known. In this work, we comprehensively study qCCA security for public-key encryption (PKE) based on both generic cryptographic primitives and concrete assumptions, yielding the following results:
* We show that key-dependent message secure encryption (along with PKE) is sufficient to realize qCCA-secure PKE. This yields the first construction of qCCA-secure PKE from the LPN assumption.
* We prove that hash proof systems imply qCCA-secure PKE, which results in the first instantiation of PKE with qCCA security from (isogeny-based) group actions.
* We extend the notion of adaptive TDFs (ATDFs) to the quantum setting by introducing quantum ATDFs, and we prove that quantum ATDFs are sufficient to realize qCCA-secure PKE. We also show how to instantiate quantum ATDFs from the LWE assumption.
* We show that a single-bit qCCA-secure PKE is sufficient to realize a multi-bit qCCA-secure PKE by extending the completeness of bit encryption for CCA security to the quantum setting.
Dual Polynomial Commitment Schemes and Applications to Commit-and-Prove SNARKs
In this work, we introduce a primitive called a dual polynomial commitment scheme that allows linking together a witness committed to using a univariate polynomial commitment scheme with a witness inside a multilinear polynomial commitment scheme. This yields commit-and-prove (CP) SNARKs with the flexibility of going back and forth between univariate and multilinear encodings of witnesses. This is in contrast to existing CP frameworks that assume compatible polynomial commitment schemes between different components of the proof systems. In addition to applications to CP, we also show that our notion yields a version of Spartan with better proof size and verification complexity, at the cost of a more expensive prover.
We achieve this via a combination of the following technical contributions: (i) we construct a new univariate commitment scheme in the updatable SRS setting that has better prover complexity than KZG; (ii) we construct a new multilinear commitment scheme in the updatable setting that can be linked with our univariate scheme; and (iii) we construct an argument of knowledge to prove a given linear relationship between two witnesses committed using a two-tiered commitment scheme (Pedersen+AFG) using Dory as a black box. These constructions are of independent interest.
We implement our commitment schemes and report on performance. We also implement the version of Spartan with our dual polynomial commitment scheme and demonstrate that it outperforms Spartan in proof size and verification complexity.
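To make the distinction between univariate and multilinear encodings of a witness concrete, the following sketch evaluates the multilinear extension of a length-$2^m$ witness vector at an arbitrary field point by folding out one variable per round -- the basic operation any multilinear polynomial commitment scheme must support. The field, indexing convention, and names here are illustrative, not taken from the paper.

```python
def multilinear_eval(w, point, p):
    # Evaluate the multilinear extension of w (length 2^m) at a
    # point in F_p^m, folding out one variable per round; the first
    # coordinate of `point` corresponds to the high-order index bit.
    assert len(w) == 1 << len(point)
    for r in point:
        half = len(w) // 2
        w = [(w[i] * (1 - r) + w[half + i] * r) % p for i in range(half)]
    return w[0]

p = 2**61 - 1                  # an illustrative prime modulus
w = [3, 1, 4, 1]               # toy witness vector, m = 2
# On Boolean points, the extension agrees with the table itself.
assert multilinear_eval(w, [0, 0], p) == 3
assert multilinear_eval(w, [1, 1], p) == 1
```

On Boolean inputs the extension reproduces the vector's entries; a commitment scheme then lets the prover open this evaluation at a random verifier-chosen point.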
Let Them Drop: Scalable and Efficient Federated Learning Solutions Agnostic to Client Stragglers
Secure Aggregation (SA) stands as a crucial component in modern Federated Learning (FL) systems, facilitating collaborative training of a global machine learning model while protecting the privacy of individual clients' local datasets. Many existing SA protocols described in the FL literature operate synchronously, leading to notable runtime slowdowns due to the presence of stragglers (i.e. late-arriving clients).
To address this challenge, one common approach is to consider stragglers as client failures and use SA solutions that are robust against dropouts. While this approach indeed seems to work, it unfortunately affects the performance of the protocol, as its cost strongly depends on the dropout ratio, and this ratio increases significantly when stragglers are taken into account.
Another approach explored in the literature to address stragglers is to introduce asynchronicity into the FL system. Very few SA solutions exist in this setting and currently suffer from high overhead.
In this paper, like prior work, we propose to handle stragglers as client failures, but we design SA solutions whose cost does not depend on the dropout ratio, so that an unavoidable increase in this ratio does not affect the performance of the solution. We first introduce Eagle, a synchronous SA scheme whose cost depends not on client failures but only on the online clients' inputs. This approach offers better computation and communication costs than existing solutions under realistic settings where the number of stragglers is high. We then propose Owl, the first SA solution suitable for the asynchronous setting, which once again considers online clients' contributions only.
We implement both solutions and show that: (i) in a synchronous FL setting with realistic dropout rates (taking potential stragglers into account), Eagle outperforms the best existing SA solution, namely Flamingo, by $4\times$; (ii) in the asynchronous setting, Owl exhibits the best performance compared to the state-of-the-art solution LightSecAgg.
SmartZKCP: Towards Practical Data Exchange Marketplace Against Active Attacks
The trading of data is becoming increasingly important as it holds substantial value. A blockchain-based data marketplace can provide a secure and transparent platform for data exchange. To facilitate this, developing a fair data exchange protocol for digital goods has garnered considerable attention in recent decades. The Zero Knowledge Contingent Payment (ZKCP) protocol enables trustless fair exchanges with the aid of blockchain and zero-knowledge proofs. However, applying this protocol in a practical data marketplace is not trivial.
In this paper, several potential attacks are identified when applying the ZKCP protocol in a practical public data marketplace. To address these issues, we propose SmartZKCP, an enhanced solution that offers improved security measures and increased performance. We formalize the protocol to ensure fairness and security against the identified attacks. Moreover, SmartZKCP offers efficiency optimizations and minimized communication costs. Evaluation results show that SmartZKCP is both practical and efficient, making it applicable in a data exchange marketplace.
Scalable Collaborative zk-SNARK and Its Application to Efficient Proof Outsourcing
Collaborative zk-SNARK (USENIX'22) allows multiple parties to jointly create a zk-SNARK proof over distributed secrets (also known as the witness). It provides a promising approach to proof outsourcing, where a client wishes to delegate the tedious task of proof generation to many servers from different locations, while ensuring no corrupted server can learn its witness (USENIX'23). Unfortunately, existing work suffers from a significant efficiency problem, as the protocols rely heavily on a particularly powerful server, and thus face challenges in achieving scalability for complex applications.
In this work, we address this problem by extending the existing zk-SNARKs Libra (Crypto'19) and HyperPlonk (Eurocrypt'23) into scalable collaborative zk-SNARKs. Crucially, our collaborative proof generation does not require a powerful server, and all servers take up roughly the same proportion of the total workload. In this way, we achieve privacy and scalability simultaneously for the first time in proof outsourcing. To achieve this, we develop an efficient MPC toolbox for a number of useful multivariate polynomial primitives, including sumcheck, productcheck, and multilinear polynomial commitment, which are of independent interest and can be applied to other applications. For proof outsourcing purposes, when using $128$ servers to jointly generate a proof for a circuit size of $2^{24}$ gates, our benchmarks for these two collaborative proofs show a speedup of $21\times$ and $24\times$ compared to a local prover, respectively. Furthermore, we are able to handle enormously large circuits, making it practical for real-world applications.
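For readers unfamiliar with the primitives being distributed, here is a minimal single-prover sketch of the sumcheck protocol for a multilinear polynomial given by its evaluation table -- the kind of object the MPC toolbox above turns into a multi-server protocol. The prime, seed, and names are illustrative only.

```python
import random

def sumcheck(evals, p, rng=random.Random(0)):
    # Sumcheck for a multilinear f given by its 2^m Boolean evaluations.
    # Each round the prover sends the degree-1 polynomial
    # g_i(X) = sum over remaining Boolean inputs of f; the verifier
    # checks g_i(0) + g_i(1) against the running claim, then both
    # fold the table at a random challenge r.
    claim = sum(evals) % p
    w = list(evals)
    challenges = []
    while len(w) > 1:
        half = len(w) // 2
        g0 = sum(w[:half]) % p           # g_i(0)
        g1 = sum(w[half:]) % p           # g_i(1)
        assert (g0 + g1) % p == claim    # verifier's round check
        r = rng.randrange(p)
        challenges.append(r)
        w = [(w[i] * (1 - r) + w[half + i] * r) % p for i in range(half)]
        claim = (g0 * (1 - r) + g1 * r) % p
    return claim, challenges  # final claim equals f(challenges)

final, chals = sumcheck([1, 2, 3, 4], 97)
```

The final claim is checked against a single evaluation of $f$ at the random point, which is where a multilinear polynomial commitment opening comes in.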
Two RSA-based Cryptosystems
RSA is one of the most widely studied cryptosystems in cryptography. In this article, we explore how the idea of a primitive $m$-th root of unity in a ring can be integrated into the Discrete Fourier Transform, leading to the development of new cryptosystems known as RSA-DFT and RSA-HGR.
Certifying Private Probabilistic Mechanisms
In recent years, entire research communities have arisen to address concerns of privacy and fairness in data analysis. At present, however, the public must trust that institutions will re-implement algorithms voluntarily to account for these social concerns. Due to the additional cost, widespread adoption is unlikely without effective legal enforcement. A technical challenge for enforcement is that the methods proposed are often probabilistic mechanisms, whose output must be drawn according to precise, and sometimes secret, distributions. The Differential Privacy (DP) case is illustrative: if a cheating curator answers queries according to an overly accurate mechanism, privacy violations could go undetected. The need for effective enforcement raises the central question of our paper: Can we efficiently certify the output of a probabilistic mechanism enacted by an untrusted party? To this end:
(1) We introduce two new notions: Certified Probabilistic Mechanisms (CPM) and Random Variable Commitment Schemes (RVCS). A CPM is an interactive protocol that forces a prover to enact a given probabilistic mechanism or be caught; importantly, the interaction does not reveal secret parameters of the mechanism. An RVCS—a key primitive for constructing CPMs—is a commitment scheme where the verifier is convinced that the commitment is to an RV sampled according to an agreed-upon distribution, but learns nothing else.
(2) We instantiate the general notion of CPM for the special case of Certifying DP. We build a lightweight, doubly-efficient interactive proof system to certify arbitrary-predicate counting queries released via the DP Binomial mechanism. The construction relies on a commitment scheme with perfect hiding and additive homomorphic properties that can be used to release a broad class of queries about a committed database, which we construct on top of Pedersen commitments.
(3) Finally, we demonstrate the immediate feasibility of Certified DP via a highly-efficient and scalable prototype implementation to answer counting queries of arbitrary predicates. The mechanism is composed of an offline and online stage, where the online phase allows for non-interactive certification of queries. For example, we show that CDP queries over a US Census Public Use Microdata Sample (PUMS) ($n=7000$) can be completed in only 1.6 ms and verified in just 38 $\mu \text{s}$. Our implementation is available in open source at https://github.com/jlwatson/certified-dp.
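As a point of reference for the mechanism being certified, the sketch below releases a counting query with centered binomial noise. It is a plain (uncertified) sketch: the noise parameter is illustrative rather than calibrated to any $(\epsilon,\delta)$ budget, and the Pedersen commitments and interactive certification are omitted entirely.

```python
import random

def binomial_mechanism(true_count, m=2048, rng=None):
    # Answer a counting query with centered Binomial(m, 1/2) noise,
    # a discrete stand-in for Gaussian noise commonly used in DP.
    # m = 2048 is an illustrative choice, not a calibrated parameter.
    rng = rng or random.Random()
    noise = sum(rng.getrandbits(1) for _ in range(m)) - m // 2
    return true_count + noise

noisy = binomial_mechanism(7000)   # e.g., a count over n = 7000 records
```

Certification then amounts to proving, in zero knowledge, that the released value was formed exactly this way from committed data and committed coin flips.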
Distributed Point Function with Constraints, Revisited
Distributed Point Function (DPF) provides a way for a dealer to split a point function $f_{\alpha, \beta}$ into multiple succinctly described function-shares, where the function $f_{\alpha, \beta}$ for a special input $\alpha$, returns a special output value $\beta$, and returns a fixed value $0$ otherwise. As the security requirement, any strict subset of the function-shares reveals nothing about the function $f_{\alpha,\beta}$. However, each function-share can be individually evaluated on the common input $x$, and these evaluation results can then be merged together to reconstruct the value $f_{\alpha, \beta}(x)$.
Recently, Servan-Schreiber et al. (S&P 2023) investigated the access control problem for DPF; namely, the DPF evaluators can ensure that the DPF dealer is authorized to share the given function, with privacy assurance. In this work, we revisit this problem, introducing a new notion called DPF with constraints; meanwhile, we identify a subtle flaw in their privacy definition as well as a soundness issue in one of their proposed schemes due to the lack of validation of the special output value $\beta$. Next, we show how to reduce both the storage size of the constraint representation and the server's computational overhead from $O(N)$ to $O(\log N)$, where $N$ is the number of authorized function sets. In addition, we show how to achieve fine-grained private access control, that is, a wildcard-style constraint for the choice of the special output $\beta$. Our benchmarks show that the amortized running time of our logarithmic-storage scheme is $2\times$--$3\times$ faster than the state of the art when $N=2^{15}$. Furthermore, we provide the first impossibility and feasibility results for DPF with constraints where the evaluators do not need to communicate with each other.
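The DPF interface described above (Gen produces function-shares; Eval runs on each share independently; the results merge to $f_{\alpha,\beta}(x)$) can be illustrated with a deliberately naive two-party sketch that additively shares the full truth table. This is not a real DPF -- the shares are $O(N)$ rather than succinct -- and all names are illustrative, but the correctness and reconstruction behavior are the same.

```python
import secrets

MOD = 2**32

def toy_dpf_gen(alpha, beta, domain_size):
    # Additively share the truth table of f_{alpha,beta}, where
    # f(alpha) = beta and f(x) = 0 elsewhere. A real DPF compresses
    # these shares to O(log N) size; this toy version is O(N).
    share0 = [secrets.randbelow(MOD) for _ in range(domain_size)]
    share1 = [(-s) % MOD for s in share0]
    share1[alpha] = (share1[alpha] + beta) % MOD
    return share0, share1

def toy_dpf_eval(share, x):
    # Each share is evaluated independently on the common input x.
    return share[x] % MOD

k0, k1 = toy_dpf_gen(alpha=5, beta=42, domain_size=16)
assert (toy_dpf_eval(k0, 5) + toy_dpf_eval(k1, 5)) % MOD == 42
assert (toy_dpf_eval(k0, 3) + toy_dpf_eval(k1, 3)) % MOD == 0
```

Each share alone is a uniformly random table, so a single evaluator learns nothing about $\alpha$ or $\beta$; the succinctness of real DPFs is what makes the access-control and constraint questions above nontrivial.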
Willow: Secure Aggregation with One-Shot Clients
A common drawback of secure vector summation protocols in the single-server model is that they impose at least one synchronization point between all clients contributing to the aggregation. This results in clients waiting on each other to advance through the rounds of the protocol, leading to large latency (or failures due to too many dropouts) even if the protocol is computationally efficient. In this paper, we propose protocols in the single-server model where clients contributing data to the aggregation (i) send a single message to the server and (ii) can join aggregation sessions dynamically whenever they have resources, i.e., without the need to synchronize their reporting time with any other clients. Our approach is based on a committee of parties that aid in the computation by running a setup phase before data collection starts, and a verification/decryption phase once it ends. Unlike existing committee-based protocols such as Flamingo (S\&P 2023), the cost for committee members can be made sub-linear in the number of clients and does not depend on the size of the input client vectors. Our experimental evaluation shows that our protocol, even while allowing dynamic client participation, is competitive in both computation and communication with state-of-the-art protocols that do not have that feature.
MFKDF: Multiple Factors Knocked Down Flat
Nair and Song (USENIX 2023) introduce the concept of a Multi-Factor Key Derivation Function (MFKDF), along with constructions and a security analysis.
MFKDF integrates dynamic authentication factors, such as HOTP and hardware tokens, into password-based key derivation.
The aim is to improve the security of password-derived keys, which can then be used for encryption or as an alternative to multi-factor authentication.
The authors claim an exponential security improvement compared to traditional password-based key derivation functions (PBKDF).
We show that the MFKDF constructions proposed by Nair and Song fall short of the stated security goals.
Underspecified cryptographic primitives and the lack of integrity of the MFKDF state lead to several attacks, ranging from full key recovery when an HOTP factor is compromised, to bypassing factors entirely or severely reducing their entropy.
We reflect on the different threat models of key-derivation and authentication, and conclude that MFKDF is always weaker than plain PBKDF and multi-factor authentication in each setting.
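Since several of the attacks above hinge on how HOTP factors are folded into key derivation, it helps to recall what an HOTP code actually is. The sketch below follows RFC 4226 (HMAC-SHA1 over a counter plus dynamic truncation); a 6-digit code carries under 20 bits of entropy, which illustrates why treating such factors as key material is delicate. This is a sketch of the standard HOTP computation, not of MFKDF itself.

```python
import hmac, hashlib, struct

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    # HOTP per RFC 4226: HMAC-SHA1 over the 8-byte big-endian
    # counter, dynamically truncated to a short decimal code.
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# RFC 4226 Appendix D test vectors for the key "12345678901234567890".
assert hotp(b"12345678901234567890", 0) == "755224"
assert hotp(b"12345678901234567890", 1) == "287082"
```

A 6-digit code ranges over only $10^6 \approx 2^{20}$ values, so any construction that mixes an HOTP output into a derived key adds at most that much entropy per factor.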
An Explicit High-Moment Forking Lemma and its Applications to the Concrete Security of Multi-Signatures
In this work we first present an explicit forking lemma that distills the information-theoretic essence of the high-moment technique introduced by Rotem and Segev (CRYPTO '21), who analyzed the security of identification protocols and Fiat-Shamir signature schemes. Whereas the technique of Rotem and Segev was particularly geared towards two specific cryptographic primitives, we present a stand-alone probabilistic lower bound, which does not involve any underlying primitive or idealized model. The key difference between our lemma and previous ones is that instead of focusing on the tradeoff between the worst-case or expected running time of the resulting forking algorithm and its success probability, we focus on the tradeoff between higher moments of its running time and its success probability.
Equipped with our lemma, we then establish concrete security bounds for the BN and BLS multi-signature schemes that are significantly tighter than the concrete security bounds established by Bellare and Neven (CCS '06) and Boneh, Drijvers and Neven (ASIACRYPT '18), respectively. Our analysis does not limit adversaries to any idealized algebraic model, such as the algebraic group model in which all algorithms are assumed to provide an algebraic justification for each group element they produce. Our bounds are derived in the random-oracle model based on the standard-model second-moment hardness of the discrete logarithm problem (for the BN scheme) and the computational co-Diffie-Hellman problem (for the BLS scheme). Such second-moment assumptions, asking that the success probability of any algorithm in solving the underlying computational problems is dominated by the second moment of the algorithm's running time, are particularly plausible in any group where no better-than-generic algorithms are currently known.
A Pure Indistinguishability Obfuscation Approach to Adaptively-Sound SNARGs for NP
We construct an adaptively-sound succinct non-interactive argument (SNARG) for NP in the CRS model from sub-exponentially-secure indistinguishability obfuscation ($i\mathcal{O}$) and sub-exponentially-secure one-way functions. Previously, Waters and Wu (STOC 2024), and subsequently, Waters and Zhandry (CRYPTO 2024) showed how to construct adaptively-sound SNARGs for NP by relying on sub-exponentially-secure indistinguishability obfuscation, one-way functions, and an additional algebraic assumption (i.e., discrete log, factoring, or learning with errors). In this work, we show that no additional algebraic assumption is needed and vanilla (sub-exponentially-secure) one-way functions already suffice in combination with $i\mathcal{O}$.
CISELeaks: Information Leakage Assessment of Cryptographic Instruction Set Extension Prototypes
Software-based cryptographic implementations provide flexibility, but they face performance limitations. In contrast, hardware-based cryptographic accelerators utilize application-specific customization to provide real-time security solutions.
Cryptographic instruction-set extensions (CISE) combine the advantages of both hardware- and software-based solutions to provide higher performance combined with the flexibility of atomic-level cryptographic operations. While CISE is widely used to develop security solutions, side-channel analysis of CISE-based devices is in its infancy. Specifically, it is important to evaluate whether the power usage and electromagnetic emissions of CISE-based devices have any correlation with their internal operations, which an adversary could exploit to deduce cryptographic secrets.
In this paper, we propose a test vector leakage assessment framework to evaluate the pre-silicon prototypes at the early stages of the design life-cycle. Specifically, we first identify functional units with the potential for leaking information through power side-channel signatures and then evaluate them on system prototypes by generating the necessary firmware to maximize the side-channel signature. Our experimental results on two RISC-V based cryptographic extensions, RISCV-CRYPTO and XCRYPTO, demonstrated that seven out of eight prototype AES- and SHA-related functional units are vulnerable to leaking cryptographic secrets through their power side-channel signature even in full system mode with a statistical significance of $\alpha = 0.05$.
Multi-Hop Multi-Key Homomorphic Signatures with Context Hiding from Standard Assumptions
Fully homomorphic signatures are a significant strengthening of digital signatures, enabling computations on \emph{secretly} signed data. Today, we have multiple approaches to designing fully homomorphic signatures, such as from lattices, succinct functional commitments, indistinguishability obfuscation, or mutable batch arguments. Unfortunately, all existing constructions for homomorphic signatures suffer from one or more limitations. We do not have homomorphic signatures with features such as multi-hop evaluation, context hiding, and fast amortized verification, while relying on standard falsifiable assumptions.
In this work, we design homomorphic signatures satisfying all above properties. We construct homomorphic signatures for polynomial-sized circuits from a variety of standard assumptions such as sub-exponential DDH, standard pairing-based assumptions, or learning with errors. We also discuss how our constructions can be easily extended to the multi-key setting.
Information-Theoretic Single-Server PIR in the Shuffle Model
We revisit the problem of private information retrieval (PIR) in the shuffle model, where queries can be made anonymously by multiple clients. We present the first single-server PIR protocol in this model that has sublinear per-client communication and information-theoretic security. Moreover, following one-time preprocessing on the server side, our protocol only requires sublinear per-client computation. Concretely, for every $\gamma>0$, the protocol has $O(n^{\gamma})$ communication and computation costs per (stateless) client, with $1/\text{poly}(n)$ statistical security, assuming that a size-$n$ database is simultaneously accessed by $\text{poly}(n)$ clients. This should be contrasted with the recent breakthrough result of Lin, Mook, and Wichs (STOC 2023) on doubly efficient PIR in the standard model, which is (inherently) limited to computational security.
Combining Outputs of a Random Permutation: New Constructions and Tight Security Bounds by Fourier Analysis
We consider constructions that combine outputs of a single permutation $\pi:\{0,1\}^n \rightarrow \{0,1\}^n$ using a public function. These are popular constructions for achieving security beyond the birthday bound when implementing a pseudorandom function using a block cipher (i.e., a pseudorandom permutation). One of the best-known constructions (denoted SXoP$[2,n]$) XORs the outputs of 2 domain-separated calls to $\pi$.
Modeling $\pi$ as a uniformly chosen permutation, several previous works proved a tight information-theoretic indistinguishability bound for SXoP$[2,n]$ of about $q/2^{n}$, where $q$ is the number of queries. However, tight bounds are unknown for the generalized variant (denoted SXoP$[r,n]$) which XORs the outputs of $r \geq 2$ domain-separated calls to a uniform permutation.
In this paper, we obtain two results. Our first result improves the known bounds for SXoP$[r,n]$ for all (constant) $r \geq 3$ (assuming $q \leq O(2^n/r)$ is not too large) in both the single-user and multi-user settings. In particular, for $r=3$, our bound is about $\sqrt{u}q_{\max}/2^{2.5n}$ (where $u$ is the number of users and $q_{\max}$ is the maximal number of queries per user), improving the best-known previous result by a factor of at least $2^n$.
For odd $r$, our bounds are tight for $q > 2^{n/2}$, as they match known attacks. For even $r$, we prove that our single-user bounds are tight by providing matching attacks.
Our second and main result is divided into two parts. First, we devise a family of constructions that output $n$ bits by efficiently combining outputs of 2 calls to a permutation on $\{0,1\}^n$, and achieve multi-user security of about $\sqrt{u} q_{\max}/2^{1.5n}$. Then, inspired by the CENC construction of Iwata [FSE'06], we further extend this family to output $2n$ bits by efficiently combining outputs of 3 calls to a permutation on $\{0,1\}^n$. The extended construction has similar multi-user security of $\sqrt{u} q_{\max}/2^{1.5n}$.
The new single-user ($u=1$) bounds of $q/2^{1.5n}$ for both families should be contrasted with the previously best-known bounds of $q/2^n$, obtained by the comparable constructions of SXoP$[2,n]$ and CENC.
All of our bounds are proved by Fourier analysis, extending the provable security toolkit in this domain in multiple ways.
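The SXoP$[r,n]$ combiner described in this abstract is simple enough to sketch concretely. Below is a toy illustration (not the paper's implementation): `N` is a tiny hypothetical block size, and a seeded shuffle stands in for the keyed block cipher that would instantiate $\pi$ in practice. Domain separation is done by reserving the top input bits for the call index, so each of the $r$ calls queries $\pi$ on a disjoint slice of its domain.

```python
import random

N = 16  # toy block size in bits (hypothetical, far below real parameters)

# A fixed "random" permutation pi on {0,1}^N, built from a seeded shuffle
# for illustration only; a real instantiation would be a block cipher
# under a secret key.
_rng = random.Random(0)
_perm = list(range(1 << N))
_rng.shuffle(_perm)

def pi(x: int) -> int:
    return _perm[x]

def sxop(r: int, x: int) -> int:
    """SXoP[r, N]: XOR the outputs of r domain-separated calls to pi.

    The top bits of the input encode the call index, so the calls hit
    disjoint parts of pi's domain; the effective input length shrinks
    by that many bits.
    """
    bits = max(1, (r - 1).bit_length())  # bits reserved for the index
    assert x < (1 << (N - bits)), "input must fit the reduced domain"
    out = 0
    for i in range(r):
        out ^= pi((i << (N - bits)) | x)
    return out
```

With `r = 2` this is exactly the classical XOR-of-permutations construction whose $q/2^n$ bound the abstract cites; larger `r` gives the generalized variant the paper analyzes.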
The Committing Security of MACs with Applications to Generic Composition
Message Authentication Codes (MACs) are ubiquitous primitives deployed in multiple flavors through standards such as HMAC, CMAC, GMAC, LightMAC, and many others. Their versatility makes them an essential building block in applications requiring message authentication and integrity checks, in authentication protocols, in authenticated encryption schemes, and as pseudorandom or key derivation functions. This variety of settings exposes MACs to a broad range of attack scenarios. The latest attack trends leverage a lack of commitment or context-discovery security in AEAD schemes, and these attacks are mainly due to weaknesses in the underlying MAC. However, these new attack models have scarcely been analyzed for MACs themselves. This paper provides a thorough treatment of the committing and context-discovery security of MACs. We show that commitment and context-discovery security of MACs are of independent interest by highlighting real-world vulnerable scenarios. We formalize the required security notions for MACs and analyze the security of standardized MACs with respect to these notions. Additionally, as a constructive application, we analyze generic AEAD composition and provide simple and efficient ways to build committing and context-discovery secure AEADs.
Mathematical Speculations on Cryptography
Current cryptographic frameworks such as RSA, ECC, and AES are potentially under quantum threat, and quantum cryptography and post-quantum cryptography (PQC) are being extensively researched for securing future information. Quantum computers and quantum algorithms are still at an early developmental stage and thus lack the scalability needed for practical application. As a result of these challenges, the most researched PQC methods are lattice-based, code-based, isogeny-based, hash-based, and multivariate crypto schemes. In this paper, we explore other mathematical topics such as stereographic projection, Möbius transformations, change of basis, Apollonian circles, equivalence of binary quadratic forms, Gauss's composition law, and their conjunctions. These satisfy preliminary conditions such as bijectivity, primality, and NP-hardness, and support the feasibility of one-way functions along with their interconnections. This allows the exploration of new realms of mathematics for the development of secure protocols for future communication.
Verifiable and Private Vote-by-Mail
Vote-by-mail is increasingly used in Europe and worldwide for government elections. Nevertheless, very few attempts have been made towards the design of verifiable vote-by-mail, and none of them come with a rigorous security analysis. Furthermore, the ballot privacy of the currently deployed (non-verifiable) vote-by-mail systems relies on procedural means that a single malicious operator can bypass.
We propose a verifiable vote-by-mail system that can accommodate the constraints of many of the large ballots that are common in Europe. Verifiability and privacy hold unless multiple system components collude to cheat on top of the postal channel. These security notions are expressed and analyzed in the simplified UC security framework.
Our vote-by-mail system only makes limited requirements on the voters: voters can verify their vote by copying and comparing short strings and verifying the computation of a single hash on a short input, and they can vote normally if they want to ignore the verification steps completely. Our system also relies on common cryptographic components, all available in the ElectionGuard verifiable voting SDK for instance, which limits the risks of errors in the implementation of the cryptographic aspects of the system.
Time Sharing - A Novel Approach to Low-Latency Masking
We present a novel approach to small-area, low-latency first-order masking in hardware. The core idea is to separate the processing of shares in time in order to achieve non-completeness. The resulting circuits are proven first-order glitch-extended PINI secure. This means the method can be applied straightforwardly to mask arbitrary functions, without constraints that the designer must take care of. Furthermore, we show that an implementation can benefit from optimization through EDA tools without sacrificing security. We provide concrete results for several case studies. Our low-latency implementation of a complete PRINCE core shows a 32% area improvement (44% with optimization) over the state of the art. Our PRINCE S-Box passes formal verification with a tool, and the complete core on FPGA shows no first-order leakage in TVLA with 100 million traces. Our low-latency implementation of the AES S-Box costs roughly one third (one quarter with optimization) of the area of state-of-the-art implementations and shows no first-order leakage in TVLA with 250 million traces.
Climbing and descending tall volcanos
We revisit the question of relating the elliptic curve discrete logarithm problem (ECDLP) between ordinary elliptic curves over finite fields with the same number of points. This problem was considered in 1999 by Galbraith and in 2005 by Jao, Miller, and Venkatesan. We apply recent results from isogeny cryptography and cryptanalysis, especially the Kani construction, to this problem. We improve the worst case bound in Galbraith's 1999 paper from $\tilde{O}( q^{1.5} )$ to (heuristically) $\tilde{O}( q^{0.4} )$ operations.
The two cases of main interest for discrete logarithm cryptography are random curves (flat volcanoes) and pairing-based crypto (tall volcanoes with crater of constant or polynomial size). In both cases we show a rigorous $\tilde{O}( q^{1/4})$ algorithm to compute an isogeny between any two curves in the isogeny class. We stress that this paper is motivated by pre-quantum elliptic curve cryptography using ordinary elliptic curves, which is not yet obsolete.
On Orchestrating Parallel Broadcasts for Distributed Ledgers
This paper introduces and develops the concept of ``ticketing'', through which atomic broadcasts are orchestrated by nodes in a distributed system. The paper studies different ticketing regimes that allow parallelism yet prevent slow nodes from hampering overall progress. It introduces a hybrid scheme which combines managed and unmanaged ticketing regimes, striking a balance between adaptivity and resilience. The performance evaluation demonstrates how managed and unmanaged ticketing regimes benefit throughput in systems with heterogeneous resources, in both static and dynamic scenarios, with the managed ticketing regime performing better of the two as it adapts better. Finally, it demonstrates how, by using the hybrid ticketing regime, performance can enjoy both the adaptivity of the managed regime and the liveness guarantees of the unmanaged regime.
Scalable Private Set Union, with Stronger Security
A Private Set Union (PSU) protocol allows parties, each holding an input set, to jointly compute the union of the sets without revealing anything else. In the literature, scalable PSU protocols follow the “split-execute-assemble” paradigm (Kolesnikov et al., ASIACRYPT 2019); in addition, those fast protocols often use Oblivious Transfer as a building block. Kolesnikov et al. (ASIACRYPT 2019) and Jia et al. (USENIX Security 2022) pointed out that certain security issues can be introduced in the “split-execute-assemble” paradigm. In this work, surprisingly, we observe that the typical way of invoking Oblivious Transfer also causes unnecessary leakage, and that only PSU protocols based on additively homomorphic encryption (AHE) can avoid this leakage. However, AHE-based PSU protocols are far from practical.
To bridge the gap, we design a new PSU protocol that avoids the unnecessary leakage. Unlike the AHE-based PSU protocols, our new construction relies only on symmetric-key operations apart from the base OTs, and is thereby much more scalable. The experimental results demonstrate that our protocol achieves at least an 873.74× speedup over the best-performing AHE-based scheme. Moreover, its performance is comparable to that of the state-of-the-art PSU protocol (Chen et al., USENIX Security 2023), which also suffers from the unnecessary leakage.
Simple Logarithmic-size LSAG signature
A number of existing cryptosystems use the well-known linear-size LSAG signature concept, extending it in many ways. This article presents a simple logarithmic-size signature LS-LSAG which, despite a radical reduction in size, retains the basic code block of LSAG. Therefore, substituting LS-LSAG for LSAG requires minimal changes to almost any existing LSAG/CLSAG-based solution, making it logarithmic instead of linear.
Leveraging Small Message Spaces for CCA1 Security in Additively Homomorphic and BGN-type Encryption
We show that the smallness of message spaces can be used as a checksum that allows hedging against CCA1 attacks in additively homomorphic encryption schemes. We first show that the additively homomorphic variant of Damgård's Elgamal provides IND-CCA1 security under the standard DDH assumption. Earlier proofs either required non-standard assumptions or only applied to hybrid versions of Damgård's Elgamal, which are not additively homomorphic. Our security proof builds on hash proof systems and exploits the fact that encrypted messages must be contained in a polynomial-size interval in order to enable decryption. With $3$ group elements per ciphertext, this positions Damgård's Elgamal as the most efficient/compact DDH-based additively homomorphic CCA1 cryptosystem. Under the same assumption, the best candidate so far was the lite Cramer-Shoup cryptosystem, where ciphertexts consist of $4$ group elements. We extend this observation to build an IND-CCA1 variant of the Boneh-Goh-Nissim encryption scheme, which allows evaluating 2-DNF formulas on encrypted data. By computing tensor products of Damgård's Elgamal ciphertexts, we obtain product ciphertexts consisting of $9$ group elements (instead of the $16$ elements we would obtain by tensoring lite Cramer-Shoup ciphertexts) in the target group of a bilinear map. Using similar ideas, we also obtain a CCA1 variant of the Elgamal-Paillier cryptosystem by forcing $\lambda$ plaintext bits to be zeroes, which yields CCA1 security almost for free: the message space remains exponentially large and ciphertexts are as short as in the IND-CPA scheme. We finally adapt the technique to the Castagnos-Laguillaumie system.
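The "polynomial-size interval" requirement the abstract exploits is visible in plain additively homomorphic Elgamal, where messages are encrypted in the exponent and decryption must brute-force a small discrete log. The sketch below uses hypothetical toy parameters and is not the paper's hardened scheme; it only shows why the message space must stay small.

```python
import random

# Toy safe-prime group: p = 2q + 1 with q prime; G generates the
# order-q subgroup of quadratic residues. Far too small for security.
P, Q, G = 2039, 1019, 4

def keygen():
    sk = random.randrange(1, Q)
    return sk, pow(G, sk, P)

def enc(pk: int, m: int) -> tuple[int, int]:
    """Encrypt m 'in the exponent': (G^r, pk^r * G^m)."""
    r = random.randrange(1, Q)
    return pow(G, r, P), (pow(pk, r, P) * pow(G, m, P)) % P

def add(c1, c2):
    """Additive homomorphism: componentwise product adds plaintexts."""
    return (c1[0] * c2[0]) % P, (c1[1] * c2[1]) % P

def dec(sk: int, c, bound: int = 64) -> int:
    """Recover G^m, then search m over the small interval [0, bound).

    This exhaustive step is why decryption only works for messages in
    a polynomial-size interval -- the property the paper turns into a
    checksum against CCA1 attacks.
    """
    s = pow(c[0], sk, P)
    gm = (c[1] * pow(s, -1, P)) % P
    acc = 1
    for m in range(bound):
        if acc == gm:
            return m
        acc = (acc * G) % P
    raise ValueError("message outside the decryptable interval")
```

A ciphertext of a message outside the interval is simply undecryptable, which gives the decryption oracle in a CCA1 experiment a built-in rejection mechanism.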
Multi-Input Functional Encryption for Unbounded Inner Products
In this work, we propose a construction for Multi-Input Inner Product Functional Encryption (MIPFE) that can handle vectors of variable length in different encryption slots. This construction is the first of its kind, as all existing MIPFE schemes allow only vectors of equal length. The scheme is constructed in the private-key setting, providing privacy for both the message and the function, thereby achieving so-called full-hiding security. Our MIPFE scheme uses bilinear groups of prime order and achieves security under a well-studied cryptographic assumption, namely the symmetric external Diffie-Hellman assumption.
Cryptographic Analysis of Delta Chat
We analyse the cryptographic protocols underlying Delta Chat, a decentralised messaging application which uses e-mail infrastructure for message delivery. It provides end-to-end encryption by implementing the Autocrypt standard and the SecureJoin protocols, both making use of the OpenPGP standard. Delta Chat's adoption by categories of high-risk users such as journalists and activists, but also more generally users in regions affected by Internet censorship, makes it a target for powerful adversaries. Yet, the security of its protocols has not been studied to date. We describe five new attacks on Delta Chat in its own threat model, exploiting cross-protocol interactions between its implementation of SecureJoin and Autocrypt, as well as bugs in rPGP, its OpenPGP library. The findings have been disclosed to the Delta Chat team, who implemented fixes.