Papers updated in last 365 days (Page 2 of 3024 results)
Exponent-VRFs and Their Applications
Verifiable random functions (VRFs) are pseudorandom functions where the function owner can prove that a generated output is correct relative to a committed key. In this paper we introduce the notion of an exponent-VRF (eVRF): a VRF that does not provide its output $y$ explicitly, but instead provides $Y = y \cdot G$, where $G$ is a generator of some finite cyclic group (or $Y=g^y$ in multiplicative notation). We construct eVRFs from the Paillier encryption scheme and from DDH, both in the random-oracle model. We then show that an eVRF is a powerful tool that has many important applications in threshold cryptography. In particular, we construct (1) a one-round fully simulatable distributed key-generation protocol (after a single two-round initialization phase), (2) a two-round fully simulatable signing protocol for multiparty Schnorr with a deterministic variant, (3) a two-party ECDSA protocol that has a deterministic variant, (4) a threshold Schnorr signing protocol where the parties can later prove that they signed without being able to frame another group, and (5) an MPC-friendly and verifiable HD-derivation. All these applications are derived from this single new eVRF abstraction, and the resulting protocols are concretely efficient.
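As a minimal sketch of the interface only (toy group, hash-to-scalar map, and key handling of our own choosing, not the paper's Paillier- or DDH-based constructions; the proof of correct evaluation that accompanies $Y$ in a real eVRF is omitted):

    # Toy eVRF interface: the owner derives the pseudorandom scalar y from
    # its secret key and the input, but reveals only Y = g^y, never y.
    import hashlib

    p = 2**127 - 1        # toy prime modulus (illustrative, not secure)
    g = 3                 # toy base element

    def evrf_eval(sk: bytes, x: bytes):
        y = int.from_bytes(hashlib.sha256(sk + x).digest(), "big") % (p - 1)
        Y = pow(g, y, p)  # only Y = g^y leaves the owner
        return y, Y       # y stays private; Y (plus a proof) is published

    _, Y = evrf_eval(b"secret key", b"session-17")
    print(hex(Y))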
On the Soundness of Algebraic Attacks against Code-based Assumptions
We study recent algebraic attacks (Briaud-Øygarden EC'23) on the Regular Syndrome Decoding (RSD) problem and the assumptions underlying the correctness of their attacks' complexity estimates. By relating these assumptions to interesting algebraic-combinatorial problems, we prove that they do not hold in full generality. However, we show that they are (asymptotically) true for most parameter sets, supporting the soundness of algebraic attacks on RSD. Further, we prove—without any heuristics or assumptions—that RSD can be broken in polynomial time whenever the number of error blocks times the square of the size of error blocks is larger than 2 times the square of the dimension of the code.
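To make the last condition concrete (a sketch with symbol names of our own choosing, since the abstract fixes no notation): writing $h$ for the number of error blocks, $b$ for the block size, and $k$ for the code dimension, the polynomial-time regime is $h \cdot b^2 > 2k^2$.

    # The abstract's polynomial-time condition for RSD, with our own names:
    # h = number of error blocks, b = block size, k = code dimension.
    def rsd_polytime_regime(h: int, b: int, k: int) -> bool:
        return h * b * b > 2 * k * k

    print(rsd_polytime_regime(h=100, b=50, k=300))  # True:  250000 > 180000
    print(rsd_polytime_regime(h=100, b=30, k=300))  # False:  90000 < 180000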
Additionally, we use our methodology to attack a variant of the Learning With Errors problem where each error term lies in a fixed set of constant size. We prove that this problem can be broken in polynomial time, given a sufficient number of samples. This result improves on the seminal work by Arora and Ge (ICALP'11), as the attack's time complexity is independent of the LWE modulus.
Anamorphic-Resistant Encryption; Or Why the Encryption Debate is Still Alive
Ever since the introduction of encryption, society has been divided over whether the government (or law enforcement agencies) should have the capability to decrypt private messages (with or without a warrant) of its citizens. From a technical viewpoint, the folklore belief is that semantic security always enables some form of steganography. Thus, adding backdoors to semantically secure schemes is pointless: it only weakens the security of the “good guys”, while “bad guys” can easily circumvent censorship, even if forced to hand over their decryption keys.
In this paper we put a dent in this folklore. We formalize three worlds: Dictatoria (“dictator wins”: no convenient steganography, no user cooperation needed), Warrantland (“checks-and-balances”: no convenient steganography, but user cooperation is needed) and Privatopia (“privacy wins”: built-in, high-rate steganography, even if giving away the decryption key). We give strong evidence that all these worlds are possible, thus reopening the encryption debate on a technical level.
Our main novelty is the definition and design of special encryption schemes we call anamorphic-resistant (AR). In contrast to so-called “anamorphic schemes”, which were studied in the literature and form the basis of Privatopia, any attempt to steganographically communicate over an AR-encryption scheme will be either impossible or extremely slow (depending on the definitional details).
ColliderVM: Stateful Computation on Bitcoin without Fraud Proofs
Bitcoin script cannot easily access and store state information on-chain without an upgrade such as BIP-347 (OP_CAT); this makes general (stateful) computation on Bitcoin impossible to perform directly. Despite this limitation, several approaches have been proposed to bypass it, with BitVM being the closest to production. BitVM enables fraud-proof-based computation on Bitcoin, relying on a $1$-out-of-$n$ honesty assumption.
This leaves open the question of whether it is possible to achieve computation under the same honesty assumption without requiring onlookers to ensure validity through fraud proofs. In this note, we answer this question affirmatively by introducing ColliderVM, a new approach for performing computation on Bitcoin today. Crucially, this approach eliminates some of the capital-inefficiency concerns stemming from reliance on fraud proofs.
For our construction, a key point is to replace the Lamport or Winternitz signature-based storage component in contemporary protocols with a hash-collision-based commitment. Our techniques are inspired by ColliderScript, but are more efficient, reducing the number of required hash evaluations by a factor of at least $10000$. With it, we estimate that the Bitcoin script length for STARK proof verification becomes nearly practical, allowing it to be used alongside the pairing-based proof systems common in applications today.
Scalable and Fine-Tuned Privacy Pass from Group Verifiable Random Functions
Anonymous token schemes are cryptographic protocols for limiting access to online resources to credible users. The resource provider issues a set of access tokens to the credible user, which they can later redeem anonymously, i.e., without the provider being able to link their redemptions. When combined with credibility tests such as CAPTCHAs, anonymous token schemes can significantly improve user experience and provider security, without exposing user access patterns to providers.
Current anonymous token schemes, such as the Privacy Pass protocol by Davidson et al., rely on oblivious pseudorandom functions (OPRFs), which let server and user jointly compute random-looking access tokens. For those protocols, token issuing costs are linear in the number of requested tokens.
In this work, we propose a new approach for building anonymous token schemes. Instead of relying on two-party computation to realize a privacy-preserving pseudorandom function evaluation, we propose to offload token generation to the user by using group verifiable random functions (GVRFs). GVRFs are a new cryptographic primitive that allows users to produce verifiable pseudorandomness. As opposed to standard VRFs, verification is anonymous within the group of credible users. We give a construction of group VRFs from the Dodis-Yampolskiy VRF and Equivalence-Class Signatures, based on pairings and a new Diffie-Hellman inversion assumption that we analyze in the Generic Group Model. Our construction enjoys compact public keys and proofs, while evaluation and verification costs are only slightly increased compared to the Dodis-Yampolskiy VRF.
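For intuition, Dodis-Yampolskiy-style evaluation computes $g^{1/(sk+x)}$, with the inverse taken modulo the group order. The toy Schnorr-group sketch below (parameters ours) shows only this exponent arithmetic; the paper's GVRF additionally involves pairings, equivalence-class signatures, and anonymous verification, none of which are modeled here.

    # g^{1/(sk+x)} in a toy order-q subgroup of Z_p^*; sk + x must be
    # invertible modulo q.
    q = 113                      # prime order of the subgroup
    p = 227                      # p = 2q + 1, also prime
    g = 4                        # generator of the order-q subgroup mod p

    def dy_eval(sk: int, x: int) -> int:
        e = pow(sk + x, -1, q)   # 1/(sk + x) mod q
        return pow(g, e, p)      # g^{1/(sk+x)} mod p

    print(dy_eval(sk=42, x=7))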
By deploying a group VRF instead of an OPRF, we obtain an anonymous token scheme where communication as well as server-side computation during the issuing phase is constant and independent of the number of tokens a user requests. Moreover, by means of our new concept of updatable token policies, the number of unspent tokens in circulation can retrospectively (i.e., even after the credibility check) be decreased or increased in order to react to the current or expected network situation. Our tokens are further countable and publicly verifiable. This comes at the cost of higher computational effort for token redemption and verification, as well as somewhat weaker unlinkability guarantees compared to Privacy Pass.
Efficient Verifiable Mixnets from Lattices, Revisited
Mixnets are powerful building blocks for providing anonymity
in applications like electronic voting and anonymous messaging. The en-
cryption schemes upon which traditional mixnets are built, as well as the
zero-knowledge proofs used to provide verifiability, will, however, soon
become insecure once a cryptographically-relevant quantum computer is
built. In this work, we construct the most compact verifiable mixnet that
achieves privacy and verifiability through encryption and zero-knowledge
proofs based on the hardness of lattice problems, which are believed to
be quantum-safe.
A core component of verifiable mixnets is a proof of shuffle. The starting point for our construction is the proof of shuffle of Aranha et al. (CT-RSA 2021). We first identify an issue with the soundness proof in that work, which is also present in the adaptation of this proof in the mixnets of Aranha et al. (ACM CCS 2023) and Hough et al. (IACR CiC 2025). The issue is that one cannot directly adapt classical proofs of shuffle to the lattice setting due to the splitting structure of the rings used in lattice-based cryptography. This is not just an artifact of the proof, but a problem that manifests itself in practice, and we successfully mount an attack against the implementation of the first of these mixnets. We fix the problem and introduce a general approach for proving shuffles in splitting rings that may be of independent interest.
The efficiency improvement of our mixnet over prior work is achieved by switching from re-encryption mixnets (as in the works of Aranha et al. and Hough et al.) to decryption mixnets with very efficient layering based on the hardness of the LWE and LWR problems over polynomial rings. The ciphertexts in our scheme are smaller than those of the aforementioned instantiations by factors of approximately 10 and 2, respectively, while the linear-size zero-knowledge proofs are smaller by factors of 4 and 2.
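To picture the decryption-mixnet topology (not the lattice construction itself): each sender onion-encrypts under the mixes' keys, and each mix strips one layer and shuffles. In the toy sketch below, a keyed XOR stream with no security stands in for the LWE/LWR-based layers.

    import hashlib, random

    def layer(key: bytes, msg: bytes) -> bytes:    # XOR pad: enc == dec
        pad = hashlib.shake_256(key).digest(len(msg))
        return bytes(a ^ b for a, b in zip(msg, pad))

    mix_keys = [b"mix1", b"mix2", b"mix3"]
    msgs = [b"vote:A__", b"vote:B__", b"vote:C__"]

    # Senders wrap layers for mix3 innermost, then mix2, then mix1.
    batch = [layer(mix_keys[0], layer(mix_keys[1], layer(mix_keys[2], m)))
             for m in msgs]

    for k in mix_keys:                   # each mix peels its layer and
        batch = [layer(k, c) for c in batch]
        random.shuffle(batch)            # secretly permutes the batch

    print(sorted(batch))                 # plaintexts, unlinkable to senders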
A Systematic Analysis of Pseudorandom Generation for Masked Cryptographic Implementation
This study analyzes and investigates pseudorandom generation (PRG) in the context of masked cryptographic implementations. Although masking and PRGs have each been studied for a long time, little literature examines how the randomness in a masked implementation should be generated. The lack of analysis of mask-bit generators makes the practical security of masked cryptographic implementations unclear, and practitioners (e.g., designers, implementers, and evaluators) may be confused about how to realize them. This study provides a systematic analysis by developing three new adversarial models, which correspond to the practical scenarios of leakage assessment, quantitative evaluation of side-channel security (e.g., success rate and key lifetime), and practical deployment. We present the properties required for each scenario through a stage-by-stage analysis. In particular, we support a long-held belief/folklore with a proof: for the output of a PRG used for masking, cryptographic security is sufficient but not necessary; only statistical uniformity (precisely, high-dimensional uniform equidistribution) is required. In addition, we thoroughly investigate the side-channel security of PRGs in the wild in the masking context. We conclude this study with some recommendations for practitioners and a proposal of a leakage-resilient method with comparable performance.
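As a concrete picture of the object under study (the hash-based PRG is an arbitrary stand-in of our choosing): first-order Boolean masking processes a secret byte $x$ as two shares $(x \oplus r, r)$, where the mask $r$ is drawn from a PRG; which properties that PRG actually needs is precisely the question the study answers.

    import hashlib

    class Prg:
        """Stand-in PRG: SHA-256 in counter mode over a fixed seed."""
        def __init__(self, seed: bytes):
            self.seed, self.ctr = seed, 0
        def next_byte(self) -> int:
            self.ctr += 1
            return hashlib.sha256(
                self.seed + self.ctr.to_bytes(8, "big")).digest()[0]

    prg = Prg(b"mask seed")
    x = 0xA7                        # sensitive value
    r = prg.next_byte()             # mask byte from the PRG
    shares = (x ^ r, r)             # neither share alone reveals x
    assert shares[0] ^ shares[1] == x
    print([hex(s) for s in shares])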
Key Derivation Functions Without a Grain of Salt
Key derivation functions (KDFs) are integral to many cryptographic protocols. Their functionality is to turn raw key material, such as a Diffie-Hellman secret, into a strong cryptographic key that is indistinguishable from random. This guarantee was formalized by Krawczyk together with the seminal introduction of HKDF (CRYPTO 2010), in a model where the KDF only takes a single key material input. Modern protocol designs, however, regularly need to combine multiple secrets, possibly even from different sources, with the guarantee that the derived key is secure as long as at least one of the inputs is good. This is particularly relevant in settings like hybrid key exchange for quantum-safe migration. Krawczyk's KDF formalism does not capture this goal, and there has been surprisingly little work on the security considerations for KDFs since then.
In this work, we thus revisit the syntax and security model for KDFs to treat multiple, possibly correlated inputs. Our syntax is assertive: We do away with salts, which are needed in theory to extract from arbitrary sources in the standard model, but in practice, they are almost never used (or even available) and sometimes even misused, as we argue. We use our new model to analyze real-world multi-input KDFs—in Signal's X3DH protocol, ETSI's TS 103 744 standard, and MLS' combiner for pre-shared keys—as well as new constructions we introduce for specialized settings—e.g., a purely blockcipher-based one. We further discuss the importance of collision resistance for KDFs and finally apply our multi-input KDF model to show how hybrid KEM key exchange can be analyzed from a KDF perspective.
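As a minimal sketch of the salt-free multi-input setting (our own illustrative construction, not one of the paper's): several possibly correlated secrets are unambiguously encoded and fed through an HMAC-based extract-then-expand step, with the goal that the output is pseudorandom as long as at least one input is good.

    import hmac, hashlib

    def multi_input_kdf(secrets, context: bytes, out_len: int = 32) -> bytes:
        # Length prefixes make the encoding injective (e.g., [b"ab", b"c"]
        # and [b"a", b"bc"] must not collide).
        encoded = b"".join(len(s).to_bytes(4, "big") + s for s in secrets)
        prk = hmac.new(b"", encoded, hashlib.sha256).digest()  # no salt
        return hmac.new(prk, context + b"\x01", hashlib.sha256).digest()[:out_len]

    # E.g., hybrid key exchange: classical ECDH secret plus a PQ KEM secret.
    key = multi_input_kdf([b"ecdh-shared-secret", b"kem-shared-secret"],
                          b"hybrid-v1")
    print(key.hex())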
Unbounded Multi-Hop Proxy Re-Encryption with HRA Security: An LWE-Based Optimization
Proxy re-encryption (PRE) schemes enable a semi-honest proxy to transform a ciphertext of one user $i$ to another user $j$ while preserving the privacy of the underlying message. Multi-hop PRE schemes allow a legal ciphertext to undergo multiple transformations, but for lattice-based multi-hop PREs, the number of transformations is typically bounded due to the growth of error terms. Recently, Zhao et al. (ESORICS 2024) introduced a lattice-based unbounded multi-hop (homomorphic) PRE scheme that supports an unbounded number of hops. Nevertheless, their scheme achieves only selective CPA security. In contrast, Fuchsbauer et al. (PKC 2019) proposed a generic framework for constructing HRA-secure unbounded multi-hop PRE schemes from FHE. Despite this, when instantiated with state-of-the-art FHEW-like schemes, the overall key size and efficiency remain unsatisfactory.
In this paper, we present a lattice-based unbounded multi-hop PRE scheme with the stronger adaptive HRA security (i.e., security against honest re-encryption attacks), which is more suitable for practical applications. Our scheme features an optimized re-encryption process based on FHEW-like blind rotation, which resolves the incompatibility between the noise-flooding technique and Fuchsbauer et al.'s framework when instantiated with FHEW-like schemes. This results in reduced storage requirements for public keys and higher efficiency. Moreover, our optimized unbounded multi-hop PRE scheme can be modified into an unbounded homomorphic PRE, a scheme allowing arbitrary homomorphic computations over fresh, re-encrypted, and evaluated ciphertexts.
Taking AI-Based Side-Channel Attacks to a New Dimension
This paper revisits the Hamming Weight (HW) labelling function for machine-learning-assisted side-channel attacks. Contrary to what has been suggested by previous works, our investigation shows that, when paired with modern deep learning architectures and appropriate pre-processing and normalization techniques, it can perform as well as the popular identity labelling function and sometimes even beat it. In fact, we introduce a new machine learning method that helps solve the class imbalance problem associated with HW, while significantly improving the performance of unprofiled attacks. We additionally release, as open source, our new, easy-to-use Python package used in our experiments, implementing a broad variety of machine-learning-driven side-channel attacks, along with a new dataset, AES_nRF, acquired on the nRF52840 SoC.
Hash-Based Multi-Signatures for Post-Quantum Ethereum
With the threat posed by quantum computers on the horizon, systems like Ethereum must transition to cryptographic primitives resistant to quantum attacks. One of the most critical of these primitives is the non-interactive multi-signature scheme used in Ethereum's proof-of-stake consensus, currently implemented with BLS signatures. This primitive enables validators to independently sign blocks, with their signatures then publicly aggregated into a compact aggregate signature.
In this work, we introduce a family of hash-based signature schemes as post-quantum alternatives to BLS. We consider the folklore method of aggregating signatures via (hash-based) succinct arguments, and our work is focused on instantiating the underlying signature scheme. The proposed schemes are variants of the XMSS signature scheme, analyzed within a novel and unified framework. While being generic, this framework is designed to minimize security loss, facilitating efficient parameter selection. A key feature of our work is the avoidance of random oracles in the security proof. Instead, we define explicit standard model requirements for the underlying hash functions. This eliminates the paradox of simultaneously treating hash functions as random oracles and as explicit circuits for aggregation. Furthermore, this provides cryptanalysts with clearly defined targets for evaluating the security of hash functions. Finally, we provide recommendations for practical instantiations of hash functions and concrete parameter settings, supported by known and novel heuristic bounds on the standard model properties.
Perfect 2-Party Computation from Additive Somewhat Homomorphic Encryption
Two-party computation has been an active area of research since Yao's breakthrough results on garbled circuits. We present secret-key somewhat additively homomorphic encryption schemes where the client has perfect privacy (the server can be computationally unbounded). Our basic scheme is somewhat additively homomorphic, and we extend it to support multiplication. The server handles circuit multiplication gates by returning the multiplicands to the client, which updates the decryption key so that the original ciphertext vector includes the encrypted multiplication gate outputs. We give a 2-party protocol that also incorporates server inputs, where the client has perfect privacy. Server privacy holds against a computationally bounded adversary, since it depends on the hardness of the subset sum problem. Correctness of the server in the malicious model can be verified by a third party, with both client and server privacy protected from the verifier.
ECDSA Cracking Methods
The ECDSA (Elliptic Curve Digital Signature Algorithm) is used for digital signatures in many blockchain networks, including the Bitcoin and Ethereum blockchains. While it offers good performance and strong current security, it must be handled with care, particularly regarding the nonce value used to create the signature. This paper outlines the methods that can be used to break ECDSA signatures, including revealed nonces, weak nonce choice, nonce reuse, two keys with shared nonces, and fault attacks.
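The nonce-reuse case admits a compact worked example: two signatures $(r, s_1)$ and $(r, s_2)$ on hashes $z_1, z_2$ made with the same nonce $k$ satisfy $s_i = k^{-1}(z_i + r d) \bmod n$, so $k = (z_1 - z_2)/(s_1 - s_2) \bmod n$ and $d = (s_1 k - z_1)/r \bmod n$. The sketch below checks this algebra with toy parameters of our choosing (a small prime standing in for the curve order; the identity does not depend on the curve).

    n = 2**31 - 1                     # toy prime standing in for the curve order
    d, k = 123456789, 987654321       # secret key and reused nonce
    z1, z2, r = 1111, 2222, 424242    # message hashes and the shared r = x(kG)

    s1 = pow(k, -1, n) * (z1 + r * d) % n
    s2 = pow(k, -1, n) * (z2 + r * d) % n

    k_rec = (z1 - z2) * pow(s1 - s2, -1, n) % n
    d_rec = (s1 * k_rec - z1) * pow(r, -1, n) % n
    assert (k_rec, d_rec) == (k, d)
    print("recovered key:", d_rec)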
Fission: Distributed Privacy-Preserving Large Language Model Inference
The increased popularity of large language models (LLMs) raises serious privacy concerns, as users' private queries are sent to untrusted servers. Many cryptographic techniques have been proposed to provide privacy, such as secure multiparty computation (MPC), which enables the evaluation of LLMs directly on private data. However, cryptographic techniques have been deemed impractical, as they introduce large communication and computation overheads. On the other hand, many obfuscation techniques have been proposed, such as split inference, where part of the model is evaluated on edge devices to hide the input data from untrusted servers; these methods, however, provide only limited privacy guarantees.
We propose Fission, a privacy-preserving framework that improves latency while providing strong privacy guarantees. Fission utilizes an MPC network for linear computations, while nonlinearities are computed on a separate evaluator network that receives shuffled values in the clear and returns the nonlinear functions evaluated at these values back to the MPC network. As a result, each evaluator only gets access to parts of the shuffled data, while the model weights remain private. We evaluate Fission on a wide set of LLMs and compare it against prior works. Fission results in up to eight times faster inference and eight times reduced bandwidth compared to prior works while retaining high accuracy. Finally, we construct an attack on obfuscation techniques from related works that shows significant information leakage, and we demonstrate how Fission enhances privacy.
MultiCent: Secure and Scalable Centrality Measures on Multilayer Graphs
As real-world networks such as social networks and computer networks are often complex and distributed, modeling them as multilayer graphs is gaining popularity. For instance, when studying social interactions across platforms like LinkedIn, Facebook, TikTok, and Bluesky, users may be connected on several of these platforms. To identify important nodes/users, the platforms might wish to analyze user interactions using, e.g., centrality measures when accounting for connections across all platforms. That raises the challenge for platforms to perform such computation while simultaneously protecting their user data to shelter their own business as well as uphold data protection laws. This necessitates designing solutions that allow for performing secure computation on a multilayer graph which is distributed among mutually distrusting parties while keeping each party's data hidden.
The work of Asharov et al. (WWW'17) addresses this problem by designing secure solutions for centrality measures that involve computing the truncated Katz score and reach score on multilayer graphs. However, we identify several limitations in that work which render the solution inefficient or even infeasible for realistic networks with significantly more than 10k nodes. We address these limitations by designing secure solutions that are significantly more efficient and scalable. In more detail, given that real-world graphs are known to be sparse, our solutions move away from an expensive matrix-based representation to a more efficient list-based representation. We design novel, secure, and efficient solutions for computing centrality measures and prove their correctness. Our solutions drastically reduce the asymptotic complexity from the prohibitive $\mathcal{O}(|\mathsf{V}|^2)$ even for the fastest solution by Asharov et al. down to $\mathcal{O}(|\mathsf{V}|\log |\mathsf{V}|)$, for $|\mathsf{V}|$ nodes. To design our solutions, we build on the secure graph computation framework of Koti et al. (CCS'24), providing a novel framework with improved capabilities in multiple directions. Finally, we provide an end-to-end implementation of our secure graph analysis framework and establish concrete efficiency improvements over prior work, observing several orders of magnitude improvement.
Low-Latency Bootstrapping for CKKS using Roots of Unity
We introduce a new bootstrapping equation for the CKKS homomorphic encryption scheme of approximate numbers. The original bootstrapping approach for CKKS consists in homomorphically evaluating a polynomial that approximates the modular reduction modulo q. In contrast, our new bootstrapping equation directly embeds the additive group modulo q into the complex roots of unity, which can be evaluated natively in the CKKS scheme. Due to its reduced multiplicative depth, our new bootstrapping equation achieves a 7x latency improvement for a single slot compared to the original CKKS bootstrapping, though it scales less efficiently when applied to a larger number of slots.
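The embedding at the heart of the new equation can be stated in one line: a residue $t \bmod q$ maps to the root of unity $e^{2\pi i t/q}$, which CKKS evaluates natively, and the residue is read back from the complex argument. The plain-Python check below (ours) verifies this plaintext identity; it involves no homomorphic evaluation.

    import cmath, math

    q = 17
    for t in [5, 22, -3]:                      # arbitrary integers
        w = cmath.exp(2j * math.pi * t / q)    # embed into the unit circle
        rec = round(cmath.phase(w) * q / (2 * math.pi)) % q
        assert rec == t % q
        print(t, "->", rec)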
ADC-BE: Optimizing Worst-Case Bandwidth in Broadcast Encryption with Boolean Functions
Recently, Dupin and Abelard proposed a broadcast encryption scheme that outperforms Complete Subtree-based and Subset Difference broadcast encryption in terms of encryption cost and bandwidth requirements. However, Dupin and Abelard acknowledge that the worst-case bandwidth bound of the Complete Subtree approach can be reached in their scheme as well. In this paper, we answer the call to further reduce this bandwidth bottleneck. We first provide a concrete analysis showing how this worst-case upper bound is reached from concrete Boolean functions. Then we present two improved broadcast encryption schemes that significantly reduce the worst-case bandwidth consumption, further optimizing Dupin and Abelard's technique. Our proposed approach ADC-BE, composed of the two algorithms AD-BE and AC-BE, optimizes this worst-case complexity from n/2 down to 1 for a system of n users, which is efficient especially for large numbers of users. Our proposed schemes combine the algebraic normal form, disjunctive normal form, and conjunctive normal form to optimize a Boolean function into its minimized representation. In addition, our approaches can be made secure against quantum adversaries and are therefore post-quantum, with both AD-BE and AC-BE requiring only minimal assumptions based on the existence of one-way functions.
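To see why the choice of normal form matters for bandwidth, consider a toy receiver set described as a Boolean function over 3-bit user IDs (the set and encoding are ours, purely for illustration); the DNF and CNF forms of the same function can differ in size, and the smaller one drives the ciphertext size.

    from sympy import symbols
    from sympy.logic.boolalg import simplify_logic

    b2, b1, b0 = symbols("b2 b1 b0")
    f = (b2 & b1) | (b2 & b0)       # authorized IDs: 101, 110, 111

    print("DNF:", simplify_logic(f, form="dnf"))   # (b0 & b2) | (b1 & b2)
    print("CNF:", simplify_logic(f, form="cnf"))   # b2 & (b0 | b1): more compact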
Legacy Encryption Downgrade Attacks against LibrePGP and CMS
This work describes vulnerabilities in the specification of AEAD modes and Key Wrap in two cryptographic message formats. Firstly, this applies to AEAD packets as introduced in the novel LibrePGP specification that is implemented by the widely used GnuPG application. Secondly, we describe vulnerabilities in the AES-based AEAD schemes as well as the Key Wrap Algorithm specified in the Cryptographic Message Syntax (CMS). These new attacks exploit the possibility of downgrading AEAD or AES Key Wrap ciphertexts to valid legacy CFB- or CBC-encrypted related ciphertexts, and require that the attacker learns the content of the legacy decryption result. This can happen in two principal ways: either the human recipient returns the decryption output to the attacker as a quote, or a programmatic decryption oracle in the receiving system reveals information about the plaintext. The attacks allow the decryption of low-entropy plaintext blocks in AEAD ciphertexts and, in the case of LibrePGP, also the manipulation of existing AEAD ciphertexts. For AES Key Wrap in CMS, full key decryption is possible. Some of the attacks require multiple successful oracle queries. The attacks thus demonstrate that CCA2 security is not achieved by the LibrePGP and CMS AEAD or Key Wrap encryption in the presence of a legacy cipher mode decryption oracle.
Quarantined-TreeKEM: a Continuous Group Key Agreement for MLS, Secure in Presence of Inactive Users
The recently standardized secure group messaging protocol Messaging Layer Security (MLS) is designed to ensure asynchronous communications within large groups, with an almost-optimal communication cost and the same security level as point-to-point secure messaging protocols such as Signal. In particular, the core sub-protocol of MLS, a Continuous Group Key Agreement (CGKA) called TreeKEM, must generate a common group key that respects the fundamental security properties of post-compromise security and forward secrecy, which mitigate the effects of user corruption over time.
Most research on CGKAs has focused on how to improve these two security properties. However, post-compromise security and forward secrecy require the active participation of, respectively, all compromised users and all users within the group. Inactive users, who remain offline for long periods, no longer update their encryption keys and therefore represent a vulnerability for the entire group. This issue has already been identified in the MLS standard, but no solution, other than expelling these inactive users after some disconnection time, has been found.
We propose here a CGKA protocol based on TreeKEM and fully compatible with the MLS standard, that implements a quarantine mechanism for inactive users in order to mitigate the risk induced by these users during their inactivity period and before they are removed from the group. The mechanism updates the inactive users' encryption keys on their behalf and secures these keys with a secret sharing scheme. If some of the inactive users eventually reconnect, their quarantine stops and they are able to recover all the messages that were exchanged during their offline period. Our Quarantined-TreeKEM protocol thus increases the security of the original TreeKEM, with very limited (and sometimes even negative) communication overhead.
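The secret-sharing building block used by the quarantine mechanism can be pictured with a minimal Shamir sharing over a prime field (parameters and code ours, illustrating the primitive rather than the protocol itself):

    import random

    P = 2**61 - 1                                  # prime field modulus

    def share(secret: int, t: int, n: int):
        """Split `secret` into n shares; any t of them reconstruct it."""
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        return [(i, sum(c * pow(i, e, P) for e, c in enumerate(coeffs)) % P)
                for i in range(1, n + 1)]

    def reconstruct(shares):
        secret = 0
        for i, (xi, yi) in enumerate(shares):      # Lagrange interpolation at 0
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * -xj % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, -1, P)) % P
        return secret

    shares = share(secret=0xC0FFEE, t=3, n=5)
    assert reconstruct(shares[:3]) == 0xC0FFEE     # any 3 of 5 suffice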
Guaranteed Termination Asynchronous Complete Secret Sharing with Lower Communication and Optimal Resilience
Asynchronous Complete Secret Sharing (ACSS) is a foundational module for asynchronous networks, playing a critical role in cryptography. It is essential for Asynchronous Secure Multi-Party Computation (AMPC) and, with termination, is widely applied in Validated Asynchronous Byzantine Agreement (VABA) and Asynchronous Distributed Key Generation (ADKG) to support secure distributed systems.
Currently, there are relatively few statistically secure ACSS protocols that can guarantee termination, and their communication complexity is relatively high. To reduce communication complexity, we propose a new multi-receiver signature scheme, ARICP, which supports linear operations on signatures. Leveraging the ARICP scheme and the properties of symmetric polynomials, we propose an ACSS protocol that ensures termination and optimal resilience ($t < n / 3$) with $\mathcal{O}(n^{2}\kappa)$ bits per sharing. Compared with the best-known result among ACSS protocols that guarantee termination [CP23], the amortized communication complexity of our protocol is reduced by a factor of $\mathcal{O}(n)$.
Fully Homomorphic Encryption for Cyclotomic Prime Moduli
This paper presents a Generalized BFV (GBFV) fully homomorphic encryption scheme that encrypts plaintext spaces of the form $\mathbb{Z}[x]/(\Phi_m(x), t(x))$ with $\Phi_m(x)$ the $m$-th cyclotomic polynomial and $t(x)$ an arbitrary polynomial. GBFV encompasses both BFV, where $t(x) = p$ is a constant, and the CLPX scheme (CT-RSA 2018), where $m = 2^k$ and $t(x) = x - b$ is a linear polynomial. The latter can encrypt a single huge integer modulo $\Phi_m(b)$ and has much lower noise growth than BFV, but is not known to be efficiently bootstrappable.
We show that by a clever choice of $m$ and higher degree polynomial $t(x)$, our scheme combines the SIMD capabilities of BFV with the low noise growth of CLPX, whilst still being efficiently bootstrappable. Moreover, we present parameter families that natively accommodate packed plaintext spaces defined by a large cyclotomic prime, such as the Fermat prime $\Phi_2(2^{16}) = 2^{16} + 1$ and the Goldilocks prime $\Phi_6(2^{32}) = 2^{64} - 2^{32} + 1$. These primes are often used in homomorphic encryption applications and zero-knowledge proof systems.
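Both quoted primes follow directly from the cyclotomic formulas $\Phi_2(x) = x + 1$ and $\Phi_6(x) = x^2 - x + 1$; a quick sympy check (ours, for illustration):

    from sympy import isprime

    fermat     = 2**16 + 1            # Phi_2(2^16) = x + 1 at x = 2^16
    goldilocks = 2**64 - 2**32 + 1    # Phi_6(2^32) = x^2 - x + 1 at x = 2^32
    assert goldilocks == (2**32)**2 - 2**32 + 1
    print(isprime(fermat), isprime(goldilocks))    # True True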
Due to the lower noise growth, GBFV can evaluate much deeper circuits compared to native BFV in the same ring dimension. As a result, we can evaluate either larger circuits or work with smaller ring dimensions. In particular, we can natively bootstrap GBFV at 128-bit security already at ring dimension $n = 2^{14}$, which was impossible before. We implemented the GBFV scheme on top of the SEAL library and achieve a latency of only 2 seconds to bootstrap a ciphertext encrypting up to 8192 elements modulo $2^{16} + 1$.
Revisiting Oblivious Top-$k$ Selection with Applications to Secure $k$-NN Classification
An oblivious Top-$k$ algorithm selects the $k$ smallest elements from $d$ elements while ensuring the sequence of operations and memory accesses do not depend on the input. In 1969, Alekseev proposed an oblivious Top-$k$ algorithm with complexity $O(d \log^2{k})$, which was later improved by Yao in 1980 for small $k \ll \sqrt{d}$.
In this paper, we revisit the literature on oblivious Top-$k$ and propose another improvement of Alekseev's method that outperforms both for large $k = \Omega(\sqrt{d})$. Our construction is equivalent to applying a new truncation technique to Batcher's odd-even sorting algorithm. In addition, we propose a combined network to take advantage of both Yao's and our technique that achieves the best concrete performance, in terms of the number of comparators, for any $k$. To demonstrate the efficiency of our combined Top-$k$ network, we implement a secure non-interactive $k$-nearest neighbors classifier using homomorphic encryption as an application. Compared with the work of Zuber and Sirdey (PoPETS 2021) where oblivious Top-$k$ was realized with complexity $O(d^2)$, our experimental results show a speedup of up to 47 times (not accounting for difference in CPU) for $d = 1000$.
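For reference, the starting point of the truncation idea, Batcher's odd-even merge sort, is itself a data-independent comparator network; a standard sketch for power-of-two sizes follows. The truncation to the smallest-$k$ wires, which is the paper's contribution, is not reproduced here.

    def oddeven_merge(lo, n, r):
        step = r * 2
        if step < n:
            yield from oddeven_merge(lo, n, step)
            yield from oddeven_merge(lo + r, n, step)
            yield from ((i, i + r) for i in range(lo + r, lo + n - r, step))
        else:
            yield (lo, lo + r)

    def oddeven_merge_sort(lo, n):
        if n > 1:
            m = n // 2
            yield from oddeven_merge_sort(lo, m)
            yield from oddeven_merge_sort(lo + m, m)
            yield from oddeven_merge(lo, n, 1)

    data = [7, 3, 6, 1, 8, 2, 5, 4]
    comparators = list(oddeven_merge_sort(0, len(data)))
    for i, j in comparators:            # oblivious: the same comparator
        if data[i] > data[j]:           # sequence runs for every input
            data[i], data[j] = data[j], data[i]
    print(data, "using", len(comparators), "comparators")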
SLAMP-FSS: Two-Party Multi-Point Function Secret Sharing from Simple Linear Algebra
Multi-point function secret sharing (FSS) is a building block for pseudorandom correlation generators used in the novel silent correlation generation methods for various secure multiparty computation applications. However, the main construction used so far is the naive approach of combining several single-point functions. In this paper, we propose an efficient and natural generalization of the point function FSS scheme of Boyle et al. 2016 [BGI16], using a tree structure, a pseudorandom generator, and systems of linear equations. Our schemes are more efficient in the evaluation phase than other previously proposed multi-point FSS schemes, while also being more flexible and similar in other efficiency parameters. Our setup phase is similar in cost to the previous best versions, while the full evaluation, which is by far the costliest step, is more efficient.
Anamorphic Voting: Ballot Freedom Against Dishonest Authorities
Electronic voting schemes typically ensure ballot privacy by assuming that the decryption key is distributed among tallying authorities, preventing any single authority from decrypting a voter's ballot. However, this assumption may fail in a fully dishonest environment where all tallying authorities collude to break ballot privacy.
In this work, we introduce the notion of anamorphic voting, which enables voters to convey their true voting intention to an auditor while casting an (apparently) regular ballot. We present new cryptographic techniques demonstrating that several existing voting schemes can support anamorphic voting.
An Efficient SNARK for Field-Programmable and RAM Circuits
The advancement of succinct non-interactive arguments of knowledge (SNARKs) with constant proof size has significantly enhanced the efficiency and privacy of verifiable computation. Verifiable computation finds applications in distributed computing networks, particularly in scenarios where nodes cannot be generally trusted, such as blockchains. However, fully harnessing the efficiency of SNARKs becomes challenging when the computing targets in the network change frequently, as SNARK verification can involve some untrusted preprocessing of the target, which is expected to be reproduced by other nodes. This problem can be addressed with two approaches: one relieves the reproduction overhead by reducing the dimensionality of the preprocessing data; the other utilizes verifiable machine computation, which eliminates the dependency on preprocessing at the cost of increased overhead in SNARK proving and verification. In this paper, we propose a new SNARK with constant proof size applicable to both approaches. The proposed SNARK combines the efficiency of the Groth16 protocol, albeit lacking universality for new problems, with the universality of the PlonK protocol, albeit with significantly larger preprocessing data dimensions. Consequently, we demonstrate that our proposed SNARK maintains the efficiency and the universality while significantly reducing the dimensionality of preprocessing data. Furthermore, our SNARK can be seamlessly applied to verifiable machine computation, requiring a proof size about four to ten times smaller than those of other related works.
Cauchyproofs: Batch-Updatable Vector Commitment with Easy Aggregation and Application to Stateless Blockchains
Stateless blockchain designs have emerged to address the challenge of growing blockchain size using succinct global states. Previous works have developed vector commitments that support proof updates and aggregation to be used as such states. However, maintaining proofs for multiple users still demands significant computational resources, particularly to update proofs with every transaction. This paper introduces Cauchyproofs, a batch-updatable vector commitment that enables proof-serving nodes to efficiently update proofs in quasi-linear time relative to the number of users and transactions, utilizing an optimized KZG scheme to achieve complexity $O\left(\left(|\vec{\alpha}| + |\vec{\beta}|\right) \log^2 \left(|\vec{\alpha}| + |\vec{\beta}|\right)\right)$ for $|\vec{\alpha}|$ users and $|\vec{\beta}|$ transactions, compared to the previous $O\left(|\vec{\alpha}|\cdot|\vec{\beta}|\right)$ approaches. This advancement reduces the computational burden on proof-serving nodes, allowing efficient proof maintenance across large user groups. We demonstrate that our approach is approximately eight times faster than the naive approach at Ethereum-level transaction throughput if we perform batch updates every hour. Additionally, we present a novel matrix representation for KZG proofs utilizing Cauchy matrices, enabling faster all-proof computations with reduced elliptic curve operations. Finally, we propose an algorithm for history proof queries, supporting retrospective proof generation with high efficiency. Our contributions substantially enhance the scalability and practicality of proof-serving nodes in stateless blockchain frameworks.
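For readers unfamiliar with the structure behind the faster all-proof computation: a Cauchy matrix over a prime field has entries $C_{ij} = 1/(x_i - y_j)$ for disjoint point sets $\{x_i\}$ and $\{y_j\}$. A tiny sketch with parameters of our choosing:

    p = 97
    xs, ys = [1, 2, 3], [10, 20, 30]               # disjoint evaluation points
    C = [[pow(x - y, -1, p) for y in ys] for x in xs]
    for row in C:
        print(row)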
Security in the Presence of Key Reuse: Context-Separable Interfaces and their Applications
Key separation is often difficult to enforce in practice. While key reuse can be catastrophic for security, we know of a number of cryptographic schemes for which it is provably safe. But existing formal models, such as the notions of joint security (Haber-Pinkas, CCS ’01) and agility (Acar et al., EUROCRYPT ’10), do not address the full range of key-reuse attacks—in particular, those that break the abstraction of the scheme, or exploit protocol interactions at a higher level of abstraction. This work attends to these vectors by focusing on two key elements: the game that codifies the scheme under attack, as well as its intended adversarial model; and the underlying interface that exposes secret key operations for use by the game. Our main security experiment considers the implications of using an interface (in practice, the API of a software library or a hardware platform such as TPM) to realize the scheme specified by the game when the interface is shared with other unspecified, insecure, or even malicious applications. After building up a definitional framework, we apply it to the analysis of two real-world schemes: the EdDSA signature algorithm and the Noise protocol framework. Both provide some degree of context separability, a design pattern for interfaces and their applications that aids in the deployment of secure protocols.
The many faces of Schnorr: a toolkit for the modular design of threshold Schnorr signatures
Recently, a number of highly optimized threshold signing protocols for Schnorr signatures have been proposed. While these proposals contain important new techniques, some of them present and analyze these techniques in very specific contexts, making it less than obvious how these techniques can be adapted to other contexts, or combined with one another. The main goal of this paper is to abstract out and extend in various ways some of these techniques, building a toolbox of techniques that can be easily combined in different ways and in different contexts. To this end, we present security results for various "enhanced" modes of attack on the Schnorr signature scheme in the non-distributed setting, and we demonstrate how to reduce the security in the distributed threshold setting to these enhanced modes of attack in the non-distributed setting. This results in a very modular approach to protocol design and analysis, which can be used to easily design new threshold Schnorr protocols that enjoy better security and/or performance properties than existing ones.
Secret-Key PIR from Random Linear Codes
Private information retrieval (PIR) allows a client to privately read a chosen bit from an $N$-bit database $x$ with $o(N)$ bits of communication. Lin, Mook, and Wichs (STOC 2023) showed that by preprocessing $x$ into an encoded database $\hat x$, it suffices to access only $\mathrm{polylog}(N)$ bits of $\hat x$ per query. This requires $|\hat x|\ge N\cdot \mathrm{polylog}(N)$ and a prohibitively large server circuit size.
We consider an alternative preprocessing model (Boyle et al. and Canetti et al., TCC 2017), where the encoding $\hat x$ depends on a client's short secret key. In this secret-key PIR (sk-PIR) model we construct a protocol with $O(N^\epsilon)$ communication, for any constant $\epsilon>0$, from the Learning Parity with Noise assumption in a parameter regime not known to imply public-key encryption. This is evidence against public-key encryption being necessary for sk-PIR.
Under a new conjecture related to the hardness of learning a hidden linear subspace of $\mathbb{F}_2^n$ with noise, we construct sk-PIR with similar communication and encoding size $|\hat x|=(1+\epsilon)\cdot N$ in which the server is implemented by a Boolean circuit of size $(4+\epsilon)\cdot N$. This is the first candidate PIR scheme with such a circuit complexity.
GIGA Protocol: Unlocking Trustless Parallel Computation in Blockchains
The scalability of modern decentralized blockchain systems is constrained by the requirement that the participating nodes execute the entire chain's transactions without the ability to trustlessly delegate the verification workload across multiple actors. This is further limited by the need for sequential transaction execution and repeated block validation, where each node must re-execute all transactions before accepting blocks, which also leads to delayed broadcasting in many architectures. Consequently, throughput is limited by the capacity of individual nodes, significantly preventing scalability.
In this paper, we introduce GIGA, a SNARK-based protocol that enables trustless parallel execution of transactions, processing non-conflicting operations concurrently, while preserving security guarantees and state consistency. The protocol organizes transactions into non-conflicting batches which are executed and proven in parallel, distributing execution across multiple decentralized entities. These batch proofs are recursively aggregated into a single succinct proof that validates the entire block.
As a result, the protocol both distributes the execution workload and removes redundant re-execution from the network, significantly improving blockchain throughput while not affecting decentralization.
Performance estimates demonstrate that, under the same system assumptions (e.g., consensus, networking, and virtual machine architecture) and under high degrees of transaction parallelism (i.e., when most transactions operate on disjoint parts of the state), our protocol may achieve over a 10000x throughput improvement compared to popular blockchain architectures that use sequential execution models, and over a 500x improvement compared to blockchain architectures employing intra-node parallelization schemes.
Furthermore, our protocol enables a significant increase in transaction computational complexity, unlocking a wide range of use cases that were previously unfeasible on traditional blockchain architectures due to the limited on-chain computational capacity.
Additionally, we propose a reward mechanism that ensures the economic sustainability of the proving network, dynamically adjusting to computational demand while fostering competition among provers based on cost-efficiency and reliability.
Binding Security of Implicitly-Rejecting KEMs and Application to BIKE and HQC
In this work, we continue the analysis of the binding properties of implicitly-rejecting key-encapsulation mechanisms (KEMs) obtained via the Fujisaki-Okamoto (FO) transform. These binding properties, in earlier literature known under the term robustness, thwart attacks that can arise when using KEMs in complex protocols. Recently, Cremers et al. (CCS'24) introduced a framework for binding notions, encompassing previously existing but also new ones. While implicitly-rejecting FO-KEMs have been analyzed with respect to multiple of these notions, there are still several gaps. We complete the picture by providing positive and negative results for the remaining notions. Further, we show how to apply our results to the code-based KEMs BIKE and HQC, which were round-4 candidates in NIST's PQC standardization process. Through this, we close a second gap as our results complete the analysis of the binding notions for the NIST round-4 KEMs. Finally, we give a modified version of the FO transform that achieves all binding notions.
Attacking at non-harmonic frequencies in screaming-channel attacks
Screaming-channel attacks enable Electromagnetic (EM) Side-Channel Attacks (SCAs) at larger distances due to higher EM leakage energies than traditional SCAs, relaxing the requirement of close access to the victim. This attack can be mounted on devices integrating Radio Frequency (RF) modules on the same die as digital circuits, where the RF can unintentionally capture, modulate, amplify, and transmit the leakage along with legitimate signals. Leakage results from digital switching activity, so the hypothesis of previous works was that this leakage would appear at multiples of the digital clock frequency, i.e., harmonics. This work demonstrates that compromising signals appear not only at the harmonics and that leakage at non-harmonics can be exploited for successful attacks. Indeed, the transformations undergone by the leaked signal are complex due to propagation effects through the substrate and power and ground planes, so the leakage also appears at other frequencies. We first propose two methodologies to locate frequencies that contain leakage and demonstrate that it appears at non-harmonic frequencies. Then, our experimental results show that screaming-channel attacks at non-harmonic frequencies can be as successful as at harmonics when retrieving a 16-byte AES key. As the RF spectrum is polluted by interfering signals, we run experiments and show successful attacks in a more realistic, noisy environment where harmonic frequencies are contaminated by multi-path fading and interference. These attacks at non-harmonic frequencies increase the attack surface by providing attackers with an increased number of potential frequencies where attacks can succeed.
Obfuscation for Deep Neural Networks against Model Extraction: Attack Taxonomy and Defense Optimization
Well-trained deep neural networks (DNN), including large language models (LLM), are valuable intellectual property assets. To defend against model extraction attacks, one of the major ideas proposed in a large body of previous research is obfuscation: splitting the original DNN and storing the components separately. However, systematically analyzing the methods' security against various attacks and optimizing the efficiency of defenses remain challenging. In this paper, we propose a taxonomy of model-based extraction attacks, which enables us to identify vulnerabilities in several existing obfuscation methods. We also propose an extremely efficient model obfuscation method called O2Splitter using a trusted execution environment (TEE). The secrets we store in the TEE have $O(1)$ size, i.e., independent of model size. Although O2Splitter relies on a pseudo-random function to provide a quantifiable guarantee for protection and noise compression, it does not need any complicated training or filtering of the weights. Our comprehensive experiments show that O2Splitter can mitigate norm-clipping and fine-tuning attacks. Even for small noise ($\epsilon = 50$), the accuracy of the obfuscated model is close to random guessing, and the tested attacks cannot extract a model with comparable accuracy. In addition, the empirical results shed light on the relation between DP parameters in obfuscation and the risks of concrete extraction attacks.
A Meta-Complexity Characterization of Quantum Cryptography
We prove the first meta-complexity characterization of a quantum cryptographic primitive. We show that one-way puzzles exist if and only if there is some quantum samplable distribution of binary strings over which it is hard to approximate Kolmogorov complexity. Therefore, we characterize one-way puzzles by the average-case hardness of an uncomputable problem. This brings to the quantum setting a recent line of work, initiated by Liu and Pass, that characterizes classical cryptography by the average-case hardness of a meta-complexity problem. Moreover, since the average-case hardness of Kolmogorov complexity over classically polynomial-time samplable distributions characterizes one-way functions, this result poses one-way puzzles as a natural generalization of one-way functions to the quantum setting. Furthermore, our equivalence goes through probability estimation, giving us the additional equivalence that one-way puzzles exist if and only if there is a quantum samplable distribution over which probability estimation is hard. We also observe that the oracle worlds defined by Kretschmer et al. rule out any relativizing characterization of one-way puzzles by the hardness of a problem in $\mathbf{NP}$ or $\mathbf{QMA}$, which means that it may not be possible with current techniques to characterize one-way puzzles with another meta-complexity problem.
Scalable Non-Fungible Tokens on Bitcoin
This paper presents a protocol for scaling the creation, management, and trading of non-fungible tokens (NFTs) on Bitcoin by extending bridgeless minting patterns previously used on other blockchains. The protocol leverages on-chain Bitcoin data to handle all aspects of token ownership, including trading, while integrating a secondary consensus system for minting and optionally modifying token metadata. To minimize its on-chain footprint, the protocol utilizes the OP_RETURN mechanism for ownership records, while complementary NFT-related actions are stored on the LAOS blockchain. All data remains permanently on-chain, with no reliance on bridges or third-party operators.
Side-Channel and Fault Injection Attacks on VOLEitH Signature Schemes: A Case Study of Masked FAEST
Ongoing efforts to transition to post-quantum public-key cryptosystems have created the need for algorithms with a variety of performance characteristics and security assumptions.
Among the candidates in NIST's post-quantum standardisation process for additional digital signatures is FAEST, a Vector Oblivious Linear Evaluation in-the-Head (VOLEitH)-based scheme, whose security relies on the one-wayness of the Advanced Encryption Standard (AES).
The VOLEitH paradigm enables competitive performance and signature sizes under conservative security assumptions.
However, since it was introduced recently, in 2023, its resistance to physical attacks has not yet been analysed. In this paper, we present the first security analysis of VOLEitH-based signature schemes in the context of side-channel and fault injection attacks. We demonstrate four practical attacks on a masked implementation of FAEST on an ARM Cortex-M4, capable of recovering the full secret key with high probability (greater than 0.87) from a single signature. These attacks exploit vulnerabilities of components specific to VOLEitH schemes and FAEST, such as the parallel all-but-one vector commitments, the VOLE generation, and the AES proof generation. Finally, we propose countermeasures to mitigate these attacks and enhance the physical security of VOLEitH-based signature schemes.
More NTRU+Sign Signatures from Cyclotomic Trinomials
Recently, $\mathsf{NTRU}$+$\mathsf{Sign}$ was proposed as a new compact signature scheme following the `Fiat-Shamir with Aborts' (FSwA) framework. Its compactness is mainly based on a novel NTRU-based key structure that fits well with bimodal distributions in the FSwA framework. However, despite its compactness, $\mathsf{NTRU}$+$\mathsf{Sign}$ fails to provide a diverse set of parameters that can meet desired security levels. This limitation stems from its reliance on the ring $\mathbb{Z}_q[x]/\langle x^n+1 \rangle$, where $n$ is restricted to powers of two, limiting the flexibility in selecting appropriate security levels. To overcome this limitation, we propose a revised version of $\mathsf{NTRU}$+$\mathsf{Sign}$ that adopts the ring $\mathbb{Z}_q[x]/\langle x^n-x^{n/2}+1\rangle$ arising from cyclotomic trinomials, where $n=2^{i}3^{j}$ for some positive integers $i$ and $j$. Our parameterization offers three distinct security levels, of approximately $120$, $190$, and $260$ bits, while preserving the compactness achieved in $\mathbb{Z}_q[x]/\langle x^n+1 \rangle$. We implement these re-parameterized $\mathsf{NTRU}$+$\mathsf{Sign}$ schemes, showing that the performance of $\mathsf{NTRU}$+$\mathsf{Sign}$ over cyclotomic trinomials remains comparable to that of previous lattice-based signature schemes such as $\mathsf{Dilithium}$ and $\mathsf{HAETAE}$.
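The cyclotomic nature of the trinomial modulus is easy to confirm: for $n = 2^i 3^j$, one has $x^n - x^{n/2} + 1 = \Phi_{3n}(x) = \Phi_6(x^{n/2})$. A sympy check (ours, for illustration):

    from sympy import symbols, cyclotomic_poly, expand

    x = symbols("x")
    for n in [6, 12, 48]:                          # n = 2^i * 3^j
        trinomial = x**n - x**(n // 2) + 1
        assert expand(cyclotomic_poly(3 * n, x) - trinomial) == 0
        print(f"x^{n} - x^{n//2} + 1 == Phi_{3*n}(x)")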
Lattice-based Cryptography: A survey on the security of the lattice-based NIST finalists
This survey, mostly written in the years 2022-2023, is meant as a description, as short as possible, of the current state-of-the-art lattice attacks on lattice-based cryptosystems, without losing the essence of the matter.
The main focus is the security of the NIST finalists and alternatives that are based on lattices, namely CRYSTALS-Kyber, CRYSTALS-Dilithium, and Falcon. Instead of going through these cryptosystems case by case, this survey considers attacks on the underlying hardness assumptions: in the case of the mentioned lattice-based schemes, these are (variants of) LWE (Learning With Errors) and NTRU.
Cryptomania vs. Minicrypt in a Quantum World
We prove that it is impossible to construct perfect-complete quantum public-key encryption (QPKE) with classical keys from quantumly secure one-way functions (OWFs) in a black-box manner, resolving a long-standing open question in quantum cryptography.
Specifically, in the quantum random oracle model (QROM), no perfect-complete QPKE scheme with classical keys and classical/quantum ciphertexts can be secure. This improves on previous works, which required either unproven conjectures or restrictions on key-generation algorithms. The impossibility even extends to QPKE with quantum public keys if the public key can be uniquely determined by the secret key, and it is thus tight with respect to all existing QPKE constructions.
Inner Product Masked Integral Distinguishers and Integral Sets over Large Finite Fields (Full Version)
The security and performance of many advanced protocols heavily rely on the underlying symmetric-key primitives.
These primitives, known as arithmetization-oriented (\texttt{AO}) ciphers, focus on arithmetic metrics over large finite fields.
Most \texttt{AO} ciphers are vulnerable to algebraic attacks, especially integral attacks.
In this work, we explore integral attacks over large finite fields.
By combining integral distinguishers with inner product masks, we propose inner product masked (IPM) integral distinguishers and establish a system of equations concerning the inner product mask.
Additionally, we provide theoretical lower bounds on the complexity of IPM integral distinguishers in certain cases.
For practical applications, we introduce IPM integral sets, which effectively characterize the integral property of sets over the finite field $\mathbb{F}_{p^n}$.
Besides the IPM sets based on additive subgroups and multiplicative subgroups, we present a method to obtain sets with improved integral properties by merging existing sets.
Exploring different classes of IPM integral sets can help us find lower-complexity integral distinguishers.
Combining with monomial detection techniques, we propose a framework for searching for integral distinguishers based on the IPM integral set.
Our framework is compatible with various monomial detection techniques, including general monomial prediction proposed by Cui et al. at ASIACRYPT 2022 and coefficient grouping invented by Liu et al. at EUROCRYPT 2023.
Previous integral distinguishers over $\mathbb{F}_{2^n}$ were predominantly based on additive subgroups.
With IPM integral sets based on multiplicative subgroups and merged sets, we successfully obtain IPM integral distinguishers with lower complexity for \textsf{MiMC}, \textsf{CIMINION}, and \textsf{Chaghri}.
We believe that our work will provide new insights into integral attacks over large finite fields.
Round-Efficient Adaptively Secure Threshold Signatures with Rewinding
A threshold signature scheme allows distributing a signing key to $n$ users, such that any $t$ of them can jointly sign, but any $t-1$ cannot. It is desirable to prove \emph{adaptive security} of threshold signature schemes, which considers adversaries that can adaptively corrupt honest users even after interacting with them. For a class of signatures that relies on security proofs with rewinding, such as Schnorr signatures, proving adaptive security entails significant challenges.
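For background, the $t$-of-$n$ access structure itself is classically realized with Shamir secret sharing; the sketch below shows only this generic sharing step (it is not one of the signing protocols proposed in this paper):

    # Generic t-of-n sharing of a signing key via Shamir secret sharing over
    # a prime field. Any t shares reconstruct the key; t-1 reveal nothing.
    import random

    P = 2**255 - 19  # a prime larger than the secret; chosen for illustration

    def share(secret, t, n):
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        return [(x, sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P)
                for x in range(1, n + 1)]

    def reconstruct(shares):
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num = den = 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P       # Lagrange numerator at 0
                    den = den * (xi - xj) % P   # Lagrange denominator
            secret = (secret + yi * num * pow(den, -1, P)) % P
        return secret

    sk = random.randrange(P)
    assert reconstruct(share(sk, t=3, n=5)[:3]) == sk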
This work proposes two threshold signature schemes that are provably adaptively secure with rewinding proofs. Our proofs are solely in the random oracle model (ROM), without relying on the algebraic group model (AGM).
- We give a 3-round scheme based on the algebraic one-more discrete logarithm (AOMDL) assumption. The scheme outputs a standard Schnorr signature.
- We give a 2-round scheme based on the DL assumption. Signatures output by the scheme contain one more scalar than a Schnorr signature.
We follow the recent work by Katsumata, Reichle, and Takemure (Crypto 2024) that proposed the first threshold signature scheme with a rewinding proof of full adaptive security. Their scheme is a 5-round threshold Schnorr scheme based on the DL assumption. Our results significantly improve the round complexity.
Katsumata et al.'s protocol can be viewed as applying a masking technique to Sparkle, a threshold Schnorr signature scheme by Crites, Komlo, and Maller (Crypto 2023). This work shows wider applications of the masking technique. Our first scheme is obtained by masking FROST, a threshold Schnorr protocol by Komlo and Goldberg (SAC 2020). The second scheme is obtained by masking a threshold version of HBMS, a multi-signature scheme by Bellare and Dai (Asiacrypt 2021).
Katsumata et al. masked Sparkle at the cost of 2 additional rounds. Our main insight is that this cost varies across schemes, depending in particular on how signing is simulated in the security proofs. The cost is one extra round for our first scheme, and zero for our second.
Malicious Security for SCALES: Outsourced Computation with Ephemeral Servers
The SCALES (Small Clients And Larger Ephemeral Servers) model is a recently proposed model for MPC (Acharya et al., TCC 2022). While the SCALES model offers several attractive features for practical large-scale MPC, the result of Acharya et al. only provided semi-honest secure protocols in this model.
We present a new efficient SCALES protocol secure against malicious adversaries, for general Boolean circuits. We start with the base construction of Acharya et al. and design and use a suite of carefully defined building blocks that may be of independent interest. The resulting protocol is UC-secure without honest majority, with a CRS and bulletin-board as setups, and allows publicly identifying deviations from correct execution.
SCALES: MPC with Small Clients and Larger Ephemeral Servers
The recently proposed YOSO model is a groundbreaking approach to MPC, executable on a public blockchain, circumventing adaptive player corruption by hiding the corruption targets until they are worthless. Players are selected unpredictably from a large pool to perform MPC subtasks, in which each selected player sends a single message (and reveals their identity). While YOSO MPC has attractive asymptotic complexity, unfortunately, it is concretely prohibitively expensive due to the cost of its building blocks.
We propose a modification to the YOSO model that preserves resilience to adaptive server corruption, but allows for much more efficient protocols. In SCALES (Small Clients And Larger Ephemeral Servers) only the servers facilitating the MPC computation are ephemeral (unpredictably selected and ``speak once''). Input providers (clients) publish problem instances and collect the output, but do not otherwise participate in computation. SCALES offers attractive features, and improves over YOSO protocols in outsourcing MPC to a large pool of servers under adaptive corruption.
We build SCALES from rerandomizable garbling schemes, a contribution of independent interest, with additional applications.
A Study of Blockchain Consensus Protocols
When Nakamoto invented Bitcoin, the first generation of cryptocurrencies followed it in applying the PoW (Proof of Work) consensus mechanism; due to its excessive energy consumption and heavy carbon footprint, new mechanisms evolved, such as Proof of Space and PoS (Proof of Stake), with many variants of each. Furthermore, the emergence of more blockchain applications and kinds beyond cryptocurrencies called for consensus mechanisms optimized to the requirements of each application or blockchain kind; examples range from IoT (Internet of Things) blockchains for sustainability applications, which often use variants of BFT (Byzantine Fault Tolerance) algorithms, to the consensus needed to relay transactions and/or assets between different blockchains in interoperability solutions. Previous studies concentrated on surveying and/or proposing different blockchain consensus rules, on a specific consensus issue such as attacks or randomization, or on deriving theoretical results. Starting from a discussion of the most important theoretical results, this paper tries to gather and organize all significant existing material about consensus in the blockchain world, explaining design challenges, tradeoffs, and research areas. We realize that the topic could fill a complete textbook, so we summarize the basic concepts and support them with tables and appendices. We then highlight case examples from interoperability solutions to show how flexible and wide the design space is, fitting both general and special-purpose systems. The aim is to provide researchers with a comprehensive overview of the topic, along with links to go deeper into every detail.
Impossible Differential Attack on SAND-64
SAND is an AND-RX-based lightweight block cipher proposed by Chen et al. There are two variants, SAND-64 and SAND-128, which differ structurally. In this paper, we search for impossible differential distinguishers of SAND-64 using Constraint Programming (CP) and reveal 56 types of impossible differential distinguishers covering up to 11 rounds. Furthermore, we demonstrate a key recovery attack on 17-round SAND-64. The attack requires $2^{56}$ data, $2^{127}$ encryptions, and $2^{60}$ bytes of memory. Although this result is currently the best attack on round-reduced SAND-64, it does not threaten the security of the full SAND-64 against impossible differential attacks.
Towards Scalable YOSO MPC via Packed Secret-Sharing
The YOSO (You Only Speak Once) model, introduced by Gentry et al. (CRYPTO 2021), helps to achieve strong security guarantees in cryptographic protocols for distributed settings, like blockchains, with a large number of parties. YOSO protocols typically employ smaller anonymous committees to execute individual rounds of the protocol instead of having all parties execute the entire protocol. After completing their tasks, parties encrypt protocol messages for the next anonymous committee and erase their internal state before publishing ciphertexts, thereby enhancing security in dynamically changing environments.
In this work, we consider the problem of secure multi-party computation (MPC), a fundamental problem in cryptography and distributed computing. We assume honest majority among the committee members, and work in the online-offline, i.e., preprocessing, setting.
In this context, we present the first YOSO MPC protocol where efficiency---measured as communication complexity---improves as the number of parties increases. Specifically, for $0<\epsilon<1/2$ and an adversary corrupting $t<n(\frac{1}{2}-\epsilon)$ out of $n$ parties, our MPC protocol exhibits enhanced scalability as $n$ increases, where the online phase communication becomes independent of $n$.
Prior YOSO MPC protocols tolerated $t$ as large as $(n-1)/2$, but a significant hurdle persisted in obtaining YOSO MPC with communication that does not scale linearly with the number of committee members, a challenge that is exacerbated when committee sizes must be large to meet YOSO's requirements.
We show that, by considering a small ``gap'' of $\epsilon>0$, the sizes of the committees are only marginally increased, while online communication is significantly reduced.
Furthermore, we explicitly consider fail-stop adversaries, i.e., honest participants who may inadvertently fail due to reasons such as denial of service or software/hardware errors. In prior YOSO work, these adversaries were grouped with fully malicious parties. Adding explicit support for them allows us to achieve even better scalability.
Periodic Table of Cryptanalysis: Geometric Approach with Different Bases
In the past three decades, we have witnessed the creation of various cryptanalytic attacks. However, relatively little research has been done on their potential underlying connections. The geometric approach, developed by Beyne in 2021, shows that a cipher can be viewed as a linear operation when we treat its input and output as points in an induced \textit{free vector space}.
By performing a change of basis for the input and output spaces, one can obtain various transition matrices. Linear, differential, and (ultrametric) integral attacks have been well reinterpreted by Beyne's theory in a unified way.
Thus far, the geometric approach always uses the same basis for the input and output spaces. We observe here that this restriction is unnecessary and allowing different bases makes the geometric approach more flexible and able to interpret/predict more attack types. Given some set of bases for the input and output spaces, a family of basis-based attacks is defined by combining them, and all attacks in this family can be studied in a unified automatic search method.
We revisit three kinds of bases from previous geometric approach papers and extend them to four extra ones by introducing new rules when generating new bases. With the final seven bases, we can obtain $7^{2d}$ different basis-based attacks in the $d$-th order spaces, where the \textit{order} is defined as the number of messages used in one sample during the attack.
We then provide four examples of applications of this new framework. First, we show that by choosing a better pair of bases, Beyne and Verbauwhede's ultrametric integral cryptanalysis can be interpreted as a single element of a transition matrix rather than as a linear combination of elements. This unifies the ultrametric integral cryptanalysis with the previous linear and quasi-differential attacks.
Second, we revisit the multiple-of-$n$ property with our refined geometric approach and exhibit new multiple-of-$n$ distinguishers that reach more rounds of the \textsf{SKINNY}-64 cipher than the state-of-the-art.
Third, we study the multiple-of-$n$ property in the first-order case, which is similar to the subspace trail except that it is the divisibility property that is considered. This leads to a new distinguisher for 11-round-reduced \textsf{SKINNY}-64.
Finally, we give a closed formula for differential-linear approximations without any assumptions, even confirming that the two differential-linear approximations of \textsf{Simeck}-32 and \textsf{Simeck}-48 found by Hadipour \textit{et al.} are deterministic independently of concrete key values.
We emphasize that all these applications were not possible before.
Cryptography based on 2D Ray Tracing
We introduce a novel symmetric-key cryptographic scheme involving a light ray's interaction with a 2D Cartesian coordinate setup containing several smaller boxes, each of either reflection or refraction type and holding a polynomial curve of degree $1$, $2$, or $3$. We also incorporate Boolean logic gates of types XOR, NOT-Shift, and Permutation, which are applied to the light ray after each interaction with a reflecting or refracting polynomial curve. This alternating interaction between optical gates (polynomial curves) and non-optical gates creates a complex and secure cryptographic system. Furthermore, we design and launch customized attacks on our cryptographic system and discuss its robustness against them.
Hybrid-query bounds with partial input control - framework and application to tight M-eTCR
In this paper, we present an improved framework for proving query bounds in the Quantum Random Oracle Model (QROM) for algorithms with both quantum and classical query interfaces, where the classical input is partially controlled by the adversary. By extending existing techniques, we develop a method to bound the progress an adversary can make with such partially controlled classical queries. While this framework is applicable to different hash function properties, we demonstrate the impact of the new techniques by analyzing the multi-target extended target collision resistance property (m-eTCR). This new approach allows us to achieve an improved bound that significantly reduces the required function key size. Our proof is tight in terms of query complexity and has significant implications for cryptographic applications, especially for signature schemes in the hash & sign paradigm, enabling more efficient instantiations with reduced salt sizes and smaller signature lengths. For an example aggregating multiple signatures, we achieve a signature size that is 30 kB smaller.
On breaking McEliece keys using brute force
In the McEliece public-key encryption scheme, a private key is almost always not determined uniquely by its associated public key. This paper gives a structural characterization of equivalent private keys, generalizing a result known for the more approachable special case $\lvert L\rvert=q$.
These equivalences reduce the cost estimate for a simple private-key search using the support-splitting algorithm (SSA) by a polynomial but practically very substantial factor.
We provide an optimized software implementation of the SSA for this kind of key search and demonstrate its capabilities in practice by solving a key-recovery challenge with a naïve a-priori cost estimate of $2^{83}$ bit operations in just ${\approx}\,1400$ core days, testing ${\approx}\,9400$ private-key candidates per core and second in the process.
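A back-of-the-envelope check of these figures (our arithmetic, using only the numbers quoted above):

    # Candidates tested in ~1400 core-days at ~9400 candidates per core-second:
    import math
    tested = 9400 * 86400 * 1400
    print(f"~2^{math.log2(tested):.1f} candidates")  # ~2^40, far below the
    # naive 2^83 estimate, consistent with a polynomial but large speedup.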
We stress that the speedup from those equivalences is merely polynomial and does not indicate any weakness in realistic instantiations of the McEliece cryptosystem, whose parameter choices are primarily constrained by decoding attacks rather than ludicrously more expensive key-recovery attacks.
Dyna-hinTS: Silent Threshold Signatures for Dynamic Committees
The works of Garg et al. [S&P'24] (aka hinTS) and Das et al. [CCS'23] introduced the notion of silent threshold signatures (STS), where a set of signers silently perform local computation to generate a public verification key. To sign a message, any set of $t$ signers sign the message non-interactively, and the signatures are aggregated into a constant-sized signature. This paradigm avoids running an expensive distributed key generation procedure for each set of signers while keeping the public verification key constant-sized.
In this work, we propose the notion of committee-based silent threshold signature (c-STS) scheme. In a c-STS scheme, a set of signers initially perform a one-time setup to generate the verification key, and then a subset of signers are randomly chosen for an epoch to perform the threshold signing while the other signers are not authorized to sign during that epoch. This captures existing systems like Ethereum Altair and Dfinity where only a specific committee is authorized to sign in a designated epoch. The existing STS schemes cannot be extended to the committee setting because the signature verification only attests to the number of signing parties, not which committee they belong to.
So, we upgrade hinTS to the committee setting by proposing Dyna-hinTS. It is the \emph{first} c-STS scheme; it requires a one-time silent setup and generates a one-time public verification key that does not vary with the committee. Assuming a set of 1024 signers (682 of them corrupt), hinTS generates an aggregated signature in 1.7s, whereas Dyna-hinTS generates it in $0.35$s within a committee of $80$ signers. This yields a $4.9\times$ improvement over hinTS in signature generation at the cost of a $4\%$ increase in signature verification time. Dyna-hinTS supports general access structures and weighted signatures, and improves on existing multiverse threshold signatures.
Charge Your Clients: Payable Secure Computation and Its Applications
The online realm has witnessed a surge in the buying and selling of data, prompting the emergence of dedicated data marketplaces. These platforms cater to servers (sellers), enabling them to set prices for access to their data, and to clients (buyers), who can subsequently purchase these data, thereby streamlining and facilitating such transactions. However, the current data market is primarily confronted with the following issues. Firstly, it fails to protect client privacy, presupposing that clients submit their queries in plaintext. Secondly, these models are susceptible to malicious client behavior; for example, clients can potentially engage in arbitrage activities.
To address the aforementioned issues, we propose payable secure computation, a novel secure computation paradigm specifically designed for data pricing scenarios. It grants the server the ability to securely procure essential pricing information while protecting the privacy of client queries. Additionally, it fortifies the server's privacy against potential malicious client activities. As specific applications, we have devised customized payable protocols for two distinct secure computation scenarios: Keyword Private Information Retrieval (KPIR) and Private Set Intersection (PSI).
We implement our two payable protocols and compare them with the state-of-the-art related protocols that do not support pricing, as baselines. Since our payable protocols are more powerful in the data pricing setting, the experimental results show that they do not introduce much overhead over the baseline protocols.
Our payable KPIR achieves the same online cost as the baseline, while its setup is about $1.3$-$1.6\times$ slower. Our payable PSI needs about $2\times$ more communication than the baseline protocol, while its runtime is $1.5$-$3.2\times$ slower depending on the network setting.
Audience Injection Attacks: A New Class of Attacks on Web-Based Authorization and Authentication Standards
We introduce audience injection attacks, a novel class of vulnerabilities that impact widely used Web-based authentication and authorization protocols, including OAuth 2.0, OpenID Connect, FAPI, CIBA, the Device Authorization Grant, and various well-established extensions, such as Pushed Authorization Requests, Token Revocation, Token Introspection, and their numerous combinations.
These protocols underpin services for billions of users across diverse ecosystems worldwide, spanning low-risk applications like social logins to high-risk domains such as open banking, insurance, and healthcare.
Audience injection attacks exploit a critical weakness in a core security mechanism of these protocols - the handling of so-called audiences in signature-based client authentication mechanisms. This vulnerability allows attackers to compromise fundamental security objectives whenever these mechanisms are utilized across two or more server endpoints. They enable the attacker to impersonate users and gain unauthorized access to their resources, even in high-security protocol families specifically designed for sensitive applications.
We responsibly disclosed these vulnerabilities to the relevant standardization bodies, which recognized their severity.
In collaboration with these organizations, we developed fixes and supported a coordinated response, leading to an ongoing effort to update a dozen standards, numerous major implementations, and far-reaching ecosystems.
Improving the Masked Division for the FALCON Signature
FALCON is a post-quantum signature scheme selected by the National Institute of Standards and Technology (NIST). Although its side-channel resilience has been studied and a masking countermeasure proposed, the division is a major performance bottleneck. This work proposes a different approach to the masked FALCON division: we use Newton's method and a convergent sequence to approximate this operation. The performance of the masked division is improved by a factor of 6.7 for two shares and 6.98 for three shares. For the Gaussian sampler, the improvements are a factor of 1.45 for two shares and 1.43 for three shares. Formal security proofs using the MIMO-SNI criteria are also provided.
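For intuition, the unmasked Newton iteration for a reciprocal looks as follows (a plain sketch; the paper's contribution is running such a convergent sequence on masked shares, which is not reproduced here):

    # Newton's method for 1/d without any division: x <- x * (2 - d * x).
    # The error roughly squares each iteration (quadratic convergence).
    def newton_reciprocal(d, x0, iters=6):
        x = x0  # x0 must be a rough estimate of 1/d for convergence
        for _ in range(iters):
            x = x * (2.0 - d * x)
        return x

    print(newton_reciprocal(7.0, x0=0.1), 1.0 / 7.0)  # both ~0.142857...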
Everlasting Fully Dynamic Group Signatures
Group signatures allow a user to sign anonymously on behalf of a group of users, while allowing a tracing authority to trace the signer's identity in case of misuse. In Chaum and van Heyst's original model (EUROCRYPT'91), the group needs to stay fixed. Following various attempts, including partially dynamic group signatures and revocations, Bootle et al. (ACNS'16, J. Cryptol.) formalized the notion of fully dynamic group signatures (FDGS), enabling both enrolling and revoking users of the group. However, in their scheme, the verification process needs to take into account the latest system information, and a previously generated signature is invalidated as soon as, for example, there is a change in the group. We therefore raise a research question: Is it possible to construct an FDGS under which the validity of a signature can survive future changes in the system information?
In this paper, we propose Everlasting Fully Dynamic Group Signatures (EFDGS) that allow signers to generate signatures that do not require verification with any specific epoch. Specifically, once the signatures are created, they are valid forever. It also guarantees that the signer can only output such a signature when she is a valid user of the system. We realize the above new model by constructing a plausibly post-quantum standard-lattice-based EFDGS.
An Optimized Instantiation of Post-Quantum MQTT protocol on 8-bit AVR Sensor Nodes
Since the selection of the National Institute of Standards and Technology (NIST) Post-Quantum Cryptography (PQC) standardization algorithms, research on integrating PQC into security protocols such as TLS/SSL, IPSec, and DNSSEC has been actively pursued. However, PQC migration for Internet of Things (IoT) communication protocols remains largely unexplored. Embedded devices in IoT environments have limited computational power and memory, making it crucial to optimize PQC algorithms for efficient computation and minimal memory usage when deploying them on low-spec IoT devices. In this paper, we introduce KEM-MQTT, a lightweight and efficient Key Encapsulation Mechanism (KEM) for the Message Queuing Telemetry Transport (MQTT) protocol, widely used in IoT environments. Our approach applies the NIST KEM algorithm CRYSTALS-Kyber (Kyber) while leveraging MQTT's characteristics and sensor-node constraints. To enhance efficiency, we address certificate verification issues and adopt KEMTLS to eliminate the need for post-quantum digital signature algorithms (PQC-DSA) in mutual authentication. As a result, KEM-MQTT retains its lightweight properties while maintaining the security guarantees of TLS 1.3. We identify inefficiencies in existing Kyber implementations on 8-bit AVR microcontrollers (MCUs), which are highly resource-constrained. To address this, we propose novel implementation techniques that optimize Kyber for AVR, focusing on high-speed execution, reduced memory consumption, and secure implementation, including Signed LookUp-Table (LUT) Reduction. Our optimized Kyber achieves performance gains of 81%, 75%, and 85% in the KeyGen, Encaps, and Decaps processes, respectively, compared to the reference implementation. With approximately 3 KB of stack usage, our Kyber implementation surpasses all state-of-the-art Elliptic Curve Diffie-Hellman (ECDH) implementations. Finally, in KEM-MQTT using Kyber-512, an 8-bit AVR device completes the handshake preparation process in 4.32 seconds, excluding physical transmission and reception times.
FHECAP: An Encrypted Control System with Piecewise Continuous Actuation
We propose an encrypted controller framework for linear time-invariant systems with actuator non-linearity based on fully homomorphic encryption (FHE). While some existing works explore the use of partially homomorphic encryption (PHE) in implementing linear control systems, the impacts of the non-linear behaviors of the actuators on the systems are often left unaddressed. In particular, when the inputs to the controller become too small or too large, actuators may burn out due to unstable system state oscillations. To solve this dilemma, we design and implement FHECAP, an FHE-based controller framework that can homomorphically apply non-linear functions to the actuators to rectify the system inputs. In FHECAP, we first design a novel data encoding scheme tailored for efficient gain matrix evaluation. Then, we propose a high-precision homomorphic algorithm to apply a non-arithmetic piecewise function to realize the actuator normalization. In the experiments, compared with existing state-of-the-art encrypted controllers, FHECAP achieves a $4\times$--$1000\times$ reduction in computational latency. We evaluate the effectiveness of FHECAP in the real-world application of encrypted control for spacecraft rendezvous. The simulation results show that FHECAP achieves real-time spacecraft rendezvous with negligible accuracy loss.
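In the clear, the actuator normalization can be pictured as a piecewise function such as a dead zone plus saturation (an illustrative stand-in with parameters of our own choosing; FHECAP's contribution is evaluating this kind of non-arithmetic piecewise function homomorphically):

    # Plaintext sketch of actuator normalization: suppress inputs that are too
    # small and saturate inputs that are too large, protecting the actuator.
    def normalize(u, dead_zone, u_max):
        if abs(u) < dead_zone:
            return 0.0                      # dead zone: ignore tiny commands
        return max(-u_max, min(u_max, u))   # saturation: cap large commands

    for u in (0.01, 0.5, 12.0, -30.0):
        print(u, "->", normalize(u, dead_zone=0.05, u_max=10.0))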
Trapdoor one-way functions from tensors
Weyman and Zelevinsky generalised Vandermonde matrices to higher dimensions, which we call Vandermonde-Weyman-Zelevinsky tensors.
We generalise Lagrange interpolation to higher dimensions by devising a nearly linear time algorithm that, given a Vandermonde-Weyman-Zelevinsky tensor and a sparse target vector, finds a tuple of vectors that hit the target under tensor evaluation. Tensor evaluation to us means evaluating the usual multilinear form associated with the tensor in all but one chosen dimension. Yet, this interpolation problem phrased with respect to a random tensor appears to be a hard multilinear system. Leveraging this dichotomy, we propose preimage sampleable trapdoor one-way functions in the spirit of Gentry-Peikert-Vaikuntanathan (GPV) lattice trapdoors. We design and analyse ``Hash-and-Sign'' digital signatures from such trapdoor one-way functions, yielding short signatures whose lengths scale nearly linearly in the security parameter. We also describe an encryption scheme.
Our trapdoor is a random Vandermonde-Weyman-Zelevinsky tensor over a finite field and a random basis change. We hide the Vandermonde-Weyman-Zelevinsky tensor under the basis change and publish the resulting pseudorandom tensor. The one-way function is the tensor evaluation derived from the public tensor, restricted so as to only map to sparse vectors. We then design the domain sampler and preimage sampler demanded by the GPV framework. The former samples inputs that map to uniform images under the one-way function. The latter samples preimages given supplementary knowledge of the trapdoor. Preimage sampling is a randomised version of interpolation, and knowing the basis change allows efficient translation between interpolation corresponding to the public and trapdoor tensors. An adversary seeking a preimage must solve a pseudorandom multilinear system, which seems cryptographically hard.
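``Tensor evaluation'' in the sense above, sketched for a 3-dimensional tensor over a toy modulus (our illustration; the scheme's tensors carry Vandermonde-Weyman-Zelevinsky structure and a hidden basis change, not shown):

    # Evaluate a 3-tensor T in all but the last dimension: given vectors x, y,
    # return z with z_k = sum_{i,j} T[i,j,k] * x_i * y_j over Z_q.
    import numpy as np

    q = 97
    rng = np.random.default_rng(0)
    T = rng.integers(0, q, size=(4, 4, 4))
    x = rng.integers(0, q, size=4)
    y = rng.integers(0, q, size=4)

    z = np.einsum('ijk,i,j->k', T, x, y) % q  # the multilinear map (x, y) -> z
    print(z)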
Pacmann: Efficient Private Approximate Nearest Neighbor Search
We propose a new private Approximate Nearest Neighbor (ANN) search scheme named Pacmann that allows a client to perform ANN search in a vector database without revealing the query vector to the server. Unlike prior constructions that run encrypted search on the server side, Pacmann carefully offloads limited computation and storage to the client, no longer requiring computationally-intensive cryptographic techniques. Specifically, clients run a graph-based ANN search, where in each hop on the graph, the client privately retrieves local graph information from the server. To make this efficient, we combine two ideas: (1) we adapt a leading graph-based ANN search algorithm to be compatible with private information retrieval (PIR) for subgraph retrieval; (2) we use a recent class of PIR schemes that trade offline preprocessing for online computational efficiency. Pacmann achieves significantly better search quality than the state-of-the-art private ANN search schemes, showing up to 2.5$\times$ better search accuracy on real-world datasets than prior work and reaching 90\% of the quality of a state-of-the-art non-private ANN algorithm. Moreover, on large datasets with up to 100 million vectors, Pacmann shows better scalability than prior private ANN schemes, with up to a $63\%$ reduction in computation time and a $24\%$ reduction in overall latency.
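The client-side loop has roughly the following shape (a sketch under our assumptions; pir_fetch, the node record layout, and the greedy rule are placeholders rather than Pacmann's actual interfaces):

    # Sketch of private graph-based ANN search: the client walks a proximity
    # graph, privately retrieving one node's local information per hop via PIR,
    # so the server never sees the query. pir_fetch returns a record holding
    # the node's vector and its neighbours' (id, vector) pairs (hypothetical).
    def private_ann_search(query, entry_id, pir_fetch, distance, max_hops=50):
        node_id = entry_id
        for _ in range(max_hops):
            node = pir_fetch(node_id)     # one PIR query; access pattern hidden
            # greedy step: hop to the neighbour closest to the query vector
            best_id, best_vec = min(node.neighbors,
                                    key=lambda nb: distance(query, nb[1]))
            if distance(query, best_vec) >= distance(query, node.vector):
                return node_id            # local minimum: stop and report
            node_id = best_id
        return node_id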
CertainSync: Rateless Set Reconciliation with Certainty
Set reconciliation is a fundamental task in distributed systems, particularly in blockchain networks, where it enables the synchronization of transaction pools among peers and facilitates block dissemination. Existing traditional set reconciliation schemes are either statistical, providing a success probability as a function of the communication overhead and the size of the symmetric difference, or require parametrization and estimation of the size of the symmetric difference, which can be prone to error. In this paper, we present CertainSync, a novel reconciliation framework that, to the best of our knowledge, is the first to guarantee successful set reconciliation without any parametrization or estimators. The framework is rateless and adapts to the unknown symmetric difference size. Set reconciliation is guaranteed to complete successfully whenever the communication overhead reaches a lower bound derived from the symmetric difference size and the universe size. Our framework is based on recent constructions of Invertible Bloom Lookup Tables (IBLTs) that ensure successful element listing as long as the number of elements is bounded. We provide a theoretical analysis proving the certainty of set reconciliation for multiple constructions. The approach is also validated by simulations, showing the ability to synchronize sets at efficient communication costs while maintaining reconciliation guarantees, compared to other baseline schemes for set reconciliation. To further reduce communication overhead for large universes, such as those of blockchain networks, CertainSync is extended with a universe reduction technique. We compare and validate the extended framework, UniverseReduceSync, against the basic CertainSync framework through simulations using real transaction hash data from the Ethereum blockchain network. The results illustrate a trade-off between improving communication costs and maintaining reconciliation guarantees without relying on parametrization or estimators, offering a comprehensive solution for set reconciliation in diverse scenarios.
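To make the IBLT mechanics concrete, here is a toy insert-and-peel table (a simplified sketch of the standard data structure; CertainSync relies on IBLT constructions with listing guarantees, which this toy version does not provide):

    # Toy Invertible Bloom Lookup Table: insert items, then recover them by
    # repeatedly "peeling" cells that hold exactly one item (count == 1).
    import hashlib

    K = 3  # cells per item, one per partition, so an item's cells are distinct

    def cells_for(x, m):
        sub = m // K
        return [i * sub +
                int(hashlib.sha256(f"{i}:{x}".encode()).hexdigest(), 16) % sub
                for i in range(K)]

    def insert(table, x):
        for c in cells_for(x, len(table)):
            cnt, acc = table[c]
            table[c] = (cnt + 1, acc ^ x)

    def list_items(table):
        items, progress = [], True
        while progress:
            progress = False
            for c, (cnt, acc) in enumerate(table):
                if cnt == 1:              # pure cell: acc is a single item
                    items.append(acc)
                    for d in cells_for(acc, len(table)):
                        dc, da = table[d]
                        table[d] = (dc - 1, da ^ acc)
                    progress = True
        return items

    table = [(0, 0)] * 60
    for x in (11, 22, 33, 44):
        insert(table, x)
    print(sorted(list_items(table)))  # [11, 22, 33, 44] with high probability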
Shuffle Shamir Secret Shares Uniformly with Linear Online Communication
In this paper, we revisit shuffle protocols for Shamir secret sharing. Upon examining previous works, we observe that existing constructions either produce a non-uniform shuffle or require large communication and round complexity, e.g., exponential in the number of parties. We propose two shuffle protocols, both of which shuffle uniformly within $O(\frac{k + l}{\log k}n^2m\log m)$ communication for shuffling rows of an $m\times l$ matrix shared among $n$ parties, where $k\leq m$ is a parameter balancing communication and computation. Our first construction is more concretely efficient, while our second construction requires only $O(nml)$ online communication within $O(n)$ rounds. In terms of overall communication and online communication, both shuffle protocols outperform the current optimal shuffle protocols for Shamir secret sharing.
At the core of our constructions is a novel permutation-sharing technique, which can be used to permute arbitrarily many vectors by computing matrix-vector products. Once shared, applying a permutation becomes much cheaper, which results in a faster online phase. Letting each party share one secret uniform permutation in the offline phase and applying them sequentially in the online phase, we obtain our first shuffle protocol. To further optimize online complexity and simplify the trade-off, we adopt the shuffle correlation proposed by Gao et al. and obtain the second shuffle protocol with $O(nml)$ online communication and $O(n^2ml)$ online computation. This brings an additional benefit that the online complexity is now independent of offline complexity, which reduces parameter optimization to optimizing offline efficiency.
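The basic fact being exploited, shown in the clear (the protocols above evaluate such products on secret-shared data, which is where all the difficulty lies):

    # A permutation pi as a 0/1 matrix P with P[pi[i], i] = 1: applying pi to a
    # vector is the product P @ v, and composing permutations multiplies the
    # matrices, which is why sharing P reduces shuffling to matrix products.
    import numpy as np

    pi = [2, 0, 3, 1]
    P = np.zeros((4, 4), dtype=int)
    for i, j in enumerate(pi):
        P[j, i] = 1

    v = np.array([10, 20, 30, 40])
    print(P @ v)  # position pi[i] receives v[i] -> [20 40 10 30]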
Our constructions require only the most basic primitives of the Shamir secret sharing scheme, and work over an arbitrary field $\mathbb{F}$ of size larger than $n$. As shuffle is a frequent operation in algorithm design, we expect them to accelerate many other primitives in the context of Shamir secret sharing MPC, such as sorting, oblivious data structures, etc.
Multiparty Shuffle: Linear Online Phase is Almost for Free
Shuffle is a frequently used operation in secure multiparty computation, with applications including joint data analysis, anonymous communication systems, secure multiparty sorting, etc. Despite a series of ingenious works, the online (i.e., data-dependent) complexity of maliciously secure $n$-party shuffle protocols remains $\Omega(n^2m)$ for shuffling a data array of length $m$. This potentially slows down applications and MPC primitives built on MPC shuffle.
In this paper, we study the online complexities of MPC shuffle protocols. We observe that most existing works follow a ``permute-in-turn'' paradigm, where the MPC shuffle protocol consists of $n$ sequential calls to a more basic MPC permutation protocol. We hence raise the following question: given only black-box access to an arbitrary MPC framework and permutation protocol, can we build an MPC shuffle whose online complexities are independent of the underlying permutation protocol?
We answer this question affirmatively, offering generic transformations from semi-honest/malicious MPC permutation protocols to MPC shuffle protocols with semi-honest/malicious security and only $O(nm)$ online communication and computation. The linear online phase is obtained almost for free via the transformation, in the sense that, in terms of overall complexities, the generated protocol equals the protocol generated by the naive permute-in-turn paradigm. Notably, instantiating our construction with additive/Shamir secret sharing and the corresponding optimal permutation protocols, we obtain the first maliciously secure shuffle protocols with linear online complexities for additive/Shamir secret sharing, respectively. These results are to be compared with the previous optimal online communication complexities of $O(Bn^2m)$ and $O(n^2m\log m)$ for maliciously secure shuffle, for additive and Shamir secret sharing, respectively. We provide formal security proofs for both semi-honest and malicious secure transformations, showing that our maliciously secure construction achieves universally composable security. Experimental results indicate that our construction significantly improves online performance while maintaining a moderate increase in offline overhead. Given that shuffle is a frequently used primitive in secure multiparty computation, we anticipate that our constructions will accelerate many real-world MPC applications.
Efficient SNARKs for Boolean Circuits via Sumcheck over Tower Fields
In this paper, we present efficient SNARKs for Boolean circuits, achieving significant improvements in prover efficiency. The core of our technique is a novel tower sumcheck protocol and a tower zero-check protocol tailored for tower fields, which enable this efficiency boost. When instantiated with Wiedemann's binary tower fields with base field $GF(2)$ and top-level field $GF(2^{2^\ell})$, and assuming the quadratic complexity \(O(2^{2\ell})\) of multiplications in the top-level field with $2^\ell$ bits, the prover time of our sumcheck protocol is \(O(2^{1.5\ell}N)\). This is faster than the standard sumcheck protocol over the large field, whose complexity is \(O(2^{2\ell}N)\). To achieve a reasonable security level, $2^\ell$ is usually set to $128$.
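Made concrete for the parameters just mentioned (our arithmetic from the stated complexities):

    # Stated prover costs per element: O(2^{2l} N) over the big field versus
    # O(2^{1.5l} N) with the tower sumcheck. For 2^l = 128, i.e. l = 7:
    l = 7
    speedup = 2 ** (2.0 * l) / 2 ** (1.5 * l)      # = 2^{0.5 l}
    print(f"~{speedup:.1f}x per-element speedup")  # ~11.3x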
Leveraging this advancement, we improve the efficiency of IOP protocols over binary or small-characteristic fields for Plonkish, CCS, and GKR-based constraint systems. Moreover, to further improve the prover efficiency of the SNARKs, we introduce a basis-switching mechanism that efficiently transforms polynomial evaluations on the base-field polynomial to evaluations on the tower-field polynomial. With the basis-switching, we are able to compile the binary-field IOPs to SNARKs using large-field polynomial commitment schemes (PCS) that batch the witness over the base field. The size of the large-field PCS is only $\frac{1}{2^\ell}$ of the size of the witness over the base field. Combining the IOP and the PCS, the overall prover time of our SNARKs for Boolean circuits is significantly faster than that of the naive approach of encoding Boolean values in a large field.
Quantum Analysis of AES
Quantum computing is considered one of the next big leaps in computational science. While a fully functional quantum computer is still in the future, there is an ever-growing need to evaluate the security of symmetric-key ciphers against a potent quantum adversary. Keeping this in mind, our work explores the key recovery attack using Grover's search on the three variants of AES (-128, -192, -256). We develop a pool of 26 implementations per AES variant (thus 78 in total) by taking state-of-the-art advancements in the relevant fields into account.
In a nutshell, we present the least Toffoli depth and full depth implementations of AES, thereby improving on Zou et al.'s Asiacrypt'20 paper by more than 97 percent for each variant of AES. We show that the qubit count - Toffoli depth product is reduced from theirs by more than 87 percent. Furthermore, we analyze Jaques et al.'s Eurocrypt'20 implementations in detail, fix the bugs (arising from a problem with the quantum computing tool used, not from their coding), and report corrected benchmarks. To the best of our knowledge, our work improves on all previous works (including the Asiacrypt'22 paper by Huang and Sun, the Asiacrypt'23 paper by Liu et al., and the Asiacrypt'24 paper by Shi and Feng) in terms of various quantum circuit complexity metrics (Toffoli depth, full depth, Toffoli/full depth - qubit count product, full depth - gate count product, etc.). Also, our bug-fixing of Jaques et al.'s Eurocrypt'20 implementations appears to improve on the authors' own bug-fixing, thanks to our architecture considerations.
Equipped with the basic AES implementations, we further investigate the prospects of Grover's search. We propose four new implementations of the S-box and one new implementation of the MixColumn, as well as five new architectures (one motivated by the architecture of Jaques et al. in Eurocrypt'20; the other four entirely our own innovation). Under the MAXDEPTH constraint (specified by NIST), the circuit depth metrics (Toffoli depth, T-depth, and full depth) become crucial factors and parallelization often becomes necessary. We provide the least-depth implementation in this respect, offering the best performance in terms of circuit complexity metrics (such as the depth-squared - qubit count product and the depth - gate count product).
Thus, to our knowledge, we estimate the currently best-known quantum attack complexities for AES-128 ($2^{156.2630}$), AES-192 ($2^{221.5801}$) and AES-256 ($2^{286.0731}$); these provide the NIST-specified quantum security levels 1, 3 and 5 respectively. Additionally, we achieve the least Toffoli depth - qubit count product for AES-128 ($121920$, improving from $130720$ by Shi and Feng in Asiacrypt'24), AES-192 ($161664$, improving from $188880$ by Liu et al. in Asiacrypt'23) and AES-256 ($206528$, improving from $248024$ by Liu et al. in Asiacrypt'23) so far.
Byzantine Reliable Broadcast and Tendermint Consensus with trusted components
Byzantine Reliable Broadcast is one of the most popular communication primitives in distributed systems. Byzantine reliable broadcast ensures that processes agree to deliver a message from an initiator, even if some processes (possibly including the initiator) are Byzantine. In asynchronous settings, it is known since the prominent work of Bracha \cite{Bracha87} that Byzantine reliable broadcast can be implemented deterministically if the total number of processes, denoted by $n$, satisfies $n \geq 3t+1$ where $t$ is an upper bound on the number of Byzantine processes. Here, we study Byzantine Reliable Broadcast when processes are equipped with \emph{trusted components}, special software or hardware designed to prevent equivocation. Our contribution is threefold. First, we show that, despite common belief, when each process is equipped with a trusted component, Bracha's algorithm still needs $n \geq 3t+1$. Second, we present a novel algorithm that uses a single trusted component (at the initiator) that implements Byzantine Reliable Asynchronous Broadcast with $n \geq 2t+1$.
Lastly, building on our broadcast algorithm, we present TenderTee, a transformation of the Tendermint consensus algorithm using trusted components, achieving better Byzantine resilience: TenderTee works with $n \geq 2t+1$, whereas Tendermint needs $n \geq 3t+1$.
New Techniques for Preimage Sampling: Improved NIZKs and More from LWE
Recent constructions of vector commitments and non-interactive zero-knowledge (NIZK) proofs from LWE implicitly solve the following shifted multi-preimage sampling problem: given matrices $\mathbf{A}_1, \ldots, \mathbf{A}_\ell \in \mathbb{Z}_q^{n \times m}$ and targets $\mathbf{t}_1, \ldots, \mathbf{t}_\ell \in \mathbb{Z}_q^n$, sample a shift $\mathbf{c} \in \mathbb{Z}_q^n$ and short preimages $\boldsymbol{\pi}_1, \ldots, \boldsymbol{\pi}_\ell \in \mathbb{Z}_q^m$ such that $\mathbf{A}_i \boldsymbol{\pi}_i = \mathbf{t}_i + \mathbf{c}$ for all $i \in [\ell]$. In this work, we introduce a new technique for sampling $\mathbf{A}_1, \ldots, \mathbf{A}_\ell$ together with a succinct public trapdoor for solving the multi-preimage sampling problem with respect to $\mathbf{A}_1, \ldots, \mathbf{A}_\ell$. This enables the following applications:
* We provide a dual-mode instantiation of the hidden-bits model (and by correspondence, a dual-mode NIZK proof for $\mathsf{NP}$) with (1) a linear-size common reference string (CRS); (2) a transparent setup in hiding mode (which yields statistical NIZK arguments); and (3) hardness from LWE with a polynomial modulus-to-noise ratio. This improves upon the work of Waters (STOC 2024) which required a quadratic-size structured reference string (in both modes) and LWE with a super-polynomial modulus-to-noise ratio.
* We give a statistically-hiding vector commitment with transparent setup and polylogarithmic-size CRS, commitments, and openings from SIS. This simultaneously improves upon the vector commitment schemes of de Castro and Peikert (EUROCRYPT 2023) as well as Wee and Wu (EUROCRYPT 2023).
At a conceptual level, our work provides a unified view of recent lattice-based vector commitments and hidden-bits model NIZKs through the lens of the shifted multi-preimage sampling problem.
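As a toy illustration of the shifted multi-preimage relation defined above, a checker for a claimed solution (illustration only; efficiently sampling such short preimages is exactly what the succinct public trapdoor enables):

    # Verify candidate (c, pi_1..pi_l): A_i @ pi_i == t_i + c (mod q) with
    # every pi_i short. Toy code over numpy integer arrays.
    import numpy as np

    def check_shifted_preimages(A_list, t_list, c, pi_list, q, bound):
        for A, t, pi in zip(A_list, t_list, pi_list):
            if np.any((A @ pi - t - c) % q != 0):
                return False   # linear relation A_i pi_i = t_i + c violated
            if np.max(np.abs(pi)) > bound:
                return False   # preimage is not short
        return True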
SPHINCSLET: An Area-Efficient Accelerator for the Full SPHINCS+ Digital Signature Algorithm
This work presents SPHINCSLET, the first fully standard-compliant and area-efficient hardware implementation of the SLH-DSA algorithm, formerly known as SPHINCS+, a post-quantum digital signature scheme. SPHINCSLET is designed to be parameterizable across different security levels and hash functions, offering a balanced trade-off between area efficiency and performance. Existing hardware implementations either feature a large area footprint to achieve fast signing and verification or adopt a coprocessor-based approach that significantly slows down these operations. SPHINCSLET addresses this gap by delivering a 4.7$\times$ reduction in area compared to high-speed designs while achieving a 2.5$\times$ to 5$\times$ improvement in signing time over the most efficient coprocessor-based designs for a SHAKE256-based SPHINCS+ implementation. The SHAKE256-based SPHINCS+ FPGA implementation targeting the AMD Artix-7 requires fewer than 10.8K LUTs for any security level of SLH-DSA. Furthermore, the SHA-2-based SPHINCS+ implementation achieves a 2$\times$ to 4$\times$ speedup in signature generation across various security levels compared to existing SLH-DSA hardware, all while maintaining a compact area footprint of 6K to 15K LUTs. This makes it the fastest SHA-2-based SLH-DSA implementation to date. With an optimized balance of area and performance, SPHINCSLET can assist resource-constrained devices in transitioning to post-quantum cryptography.
Cryptanalysis of a Type of White-Box Implementations of the SM4 Block Cipher
The SM4 block cipher is a Chinese national standard and an ISO international standard. Since white-box cryptography has many real-life applications nowadays, a few white-box implementations of SM4 have been proposed, among which one type of construction dominates: it uses a linear or affine diagonal block encoding to protect the original three 32-bit branches entering a round function, and uses its inverse as the input encoding to the S-box layer. In this paper, we analyse the security of this type of construction against Lepoint et al.'s collision-based attack method. Our experiments over a small fraction of (encodings, round key) combinations show that the rank of the relevant linear system is much less than the number of unknowns involved, meaning that these white-box SM4 implementations should resist Lepoint et al.'s method. We leave it as an open problem whether there are encodings for which the rank of the corresponding linear system is only slightly less than the number of unknowns; in that scenario, Lepoint et al.'s method may be used to recover a round key in the case of linear encodings, and to strip away most white-box operations, leaving mainly some Boolean masks, in the case of affine encodings.
Need for zkSpeed: Accelerating HyperPlonk for Zero-Knowledge Proofs
Zero-Knowledge Proofs (ZKPs) are rapidly gaining importance in privacy-preserving and verifiable computing. ZKPs enable a proving party to prove the truth of a statement to a verifying party without revealing anything else. ZKPs have applications in blockchain technologies, verifiable machine learning, and electronic voting, but have yet to see widespread adoption due to the computational complexity of the proving process. Recent works have accelerated the key primitives of state-of-the-art ZKP protocols on GPU and ASIC. However, the protocols accelerated thus far face one of two challenges: they either require a trusted setup for each application, or they generate larger proof sizes with higher verification costs, limiting their applicability in scenarios with numerous verifiers or strict verification time constraints. This work presents an accelerator, zkSpeed, for HyperPlonk, a state-of-the-art ZKP protocol that supports both a one-time, universal setup and small proof sizes for typical ZKP applications in publicly verifiable, consensus-based systems. We accelerate the entire protocol, including its two major primitives: SumCheck and Multi-scalar Multiplications (MSMs). We develop a full-chip architecture using 366.46 mm$^2$ and 2 TB/s of bandwidth to accelerate the entire proof generation process, achieving geometric mean speedups of 801$\times$ over CPU baselines.
Making BBS Anonymous Credentials eIDAS 2.0 Compliant
eIDAS 2.0 (electronic IDentification, Authentication and trust Services) is a very ambitious regulation aimed at equipping European citizens with a personal digital identity wallet (EU Digital Identity Wallet) on a mobile phone that not only needs to achieve a high level of security, but also needs to be available as soon as possible for a large number of citizens and respect their privacy (as per GDPR - General Data Protection Regulation).
In this paper, we introduce the foundations of a digital identity wallet solution that could help move closer to this objective by leveraging the proven anonymous credentials system BBS (Eurocrypt 2023), also known as BBS+, but modifying it to avoid the limitations that have hindered its widespread adoption, especially in certified infrastructures requiring trusted hardware implementation.
In particular, the solution we propose, which we call BBS#, does not rely, contrary to BBS/BBS+, on bilinear maps and pairing-friendly curves (which are not supported by existing hardware) and depends only on the hardware implementation of well-known digital signature schemes such as ECDSA (ISO/IEC 14888-3) or ECSDSA (also known as ECSchnorr, ISO/IEC 14888-3) using classical elliptic curves. More precisely, BBS# can be rolled out without requiring any change in existing hardware or the algorithms that hardware supports.
BBS#, which is proven secure in the random oracle model, retains the well-known security property (unforgeability of the credentials under the (gap) q-SDH assumption) and anonymity properties (multi-show full unlinkability and statistical anonymity of presentation proofs) of BBS/BBS+.
By implementing BBS# on several smartphones using different secure execution environments, we show that it is possible to achieve eIDAS 2.0 transactions which are not only efficient (around 70 ms on Android StrongBox), secure and certifiable at the highest level but also provide strong (optimal) privacy protection for all European ID Wallet users.
On the Tight Security of the Double Ratchet
The Signal Protocol is a two-party secure messaging protocol used in applications such as Signal, WhatsApp, Google Messages and Facebook Messenger, and is relied on by billions daily. It consists of two core components, one of which is the Double Ratchet protocol, which has been the subject of a line of work aiming to understand and formalise exactly what security it provides. Existing models capture strong guarantees, including resilience to state exposure in both forward security (protecting past secrets) and post-compromise security (restoring security), adaptive state corruptions, message injections, and out-of-order message delivery. Due to this complexity, prior work has failed to provide security guarantees that do not degrade with the number of interactions, even in the single-session setting.
Given the ubiquity of the Double Ratchet in practice, we explore tight security bounds for the Double Ratchet in the multi-session setting. To this end, we revisit the modelling of Alwen, Coretti and Dodis (EUROCRYPT 2019) who decompose the protocol into modular, abstract components, notably continuous key agreement (CKA) and forward-secure AEAD (FS-AEAD). To enable a tight security proof, we propose a CKA security model that provides one-way security under key checking attacks. We show that multi-session security of the Double Ratchet can be tightly reduced to the multi-session security of CKA and FS-AEAD, capturing the same strong security guarantees as Alwen et al.
Our result improves upon the bounds of Alwen et al. in the random oracle model. Even so, we are unable to provide a completely tight proof for the Double Ratchet based on standard Diffie-Hellman assumptions, and we conjecture it is not possible. We thus go a step further and analyse CKA based on key encapsulation mechanisms (KEMs). In contrast to previous works, our new analysis allows for tight constructions based on the DDH and post-quantum assumptions.
Cloning Games, Black Holes and Cryptography
Quantum no-cloning is one of the most fundamental properties of quantum information. In this work, we introduce a new toolkit for analyzing cloning games; these games capture more quantitative versions of no-cloning and are central to unclonable cryptography. Previous works rely on the framework laid out by Tomamichel, Fehr, Kaniewski and Wehner to analyze both the $n$-qubit BB84 game and the subspace coset game. Their constructions and analysis face the following inherent limitations:
- The existing bounds on the values of these games are at least $2^{-0.25n}$; on the other hand, the trivial adversarial strategy wins with probability $2^{-n}$. Not only that, the BB84 game does in fact admit a highly nontrivial winning strategy. This raises the natural question: are there cloning games which admit no non-trivial winning strategies?
- The existing constructions are not multi-copy secure; the BB84 game is not even $2 \mapsto 3$ secure, and the subspace coset game is not $t \mapsto t+1$ secure for a polynomially large $t$. Moreover, we provide evidence that the existing technical tools do not suffice to prove multi-copy security of even completely different constructions. This raises the natural question: can we design new cloning games that achieve multi-copy security, possibly by developing a new analytic toolkit?
Inspired by the literature on pseudorandom states, we study a new cloning game based on binary phase states and show that it is $t$-copy secure when $t=o(n/\log n)$. Moreover, for constant $t$, we obtain the first asymptotically optimal bounds of $O(2^{-n})$. To accomplish this, we introduce a new analytic toolkit based on binary subtypes and combine it with novel bounds on the operator norms of block-wise tensor products of matrices. We also show a worst-case to average-case reduction for a large class of cloning games, which allows us to show the same quantitative results for Haar cloning games. These technical ingredients together enable two new applications which have previously been out of reach:
- In the area of black-hole physics, our cloning games reveal that, in an idealized model of a black hole which features Haar random (or pseudorandom) scrambling dynamics, the information from infalling qubits can only be recovered from either the interior or the exterior of the black hole---but never from both places at the same time. To demonstrate this result, it turns out to be crucial to prove an asymptotically optimal bound and to overcome the first limitation above.
- In the area of quantum cryptography, our worst-case to average-case reduction helps us construct succinct unclonable encryption schemes from the existence of pseudorandom unitaries, thereby, for the first time, bridging the gap between ``MicroCrypt'' and unclonable cryptography. We also propose and provide evidence for the security of multi-copy unclonable encryption schemes, which requires us to overcome the second limitation above.
Anonymous Self-Credentials and their Application to Single-Sign-On
Modern life makes having a digital identity no longer optional, whether one needs to manage a bank account or subscribe to a newspaper. As the number of online services increases, it is fundamental to safeguard user privacy and to equip service providers (SPs) with mechanisms enforcing Sybil resistance, i.e., preventing a single entity from posing as many.
Current approaches, such as anonymous credentials and self-sovereign identities, typically rely on identity providers or identity registries trusted not to track users' activities. However, this assumption of trust is no longer appropriate in a world where user data is considered a valuable asset.
To address this challenge, we introduce a new cryptographic notion, Anonymous Self-Credentials (ASC), along with two implementations. This approach enables users to maintain their privacy within an anonymity set while allowing SPs to obtain Sybil resistance. We then present a User-issued Unlinkable Single Sign-On (U2SSO) scheme, implemented from ASC, that relies solely on an identity registry to immutably store identities. A U2SSO solution allows users to generate unlinkable child credentials for each SP using only one set of master credentials.
We demonstrate the practicality and efficiency of our U2SSO solution by providing a complete proof-of-concept.
Private SCT Auditing, Revisited
In order for a client to securely connect to a server on the web, the client must trust certificate authorities (CAs) to issue certificates only to the legitimate operator of the server. If a certificate is mis-issued, an attacker can impersonate the server to the client. The goal of Certificate Transparency (CT) is to log every issued certificate in a manner that allows anyone to audit the logs for mis-issuance. A client can even audit a CT log itself, but this would leak sensitive browsing data to the log operator. As a result, client-side audits are rare in practice.
In this work, we revisit private CT auditing from a real-world perspective. Our study is motivated by recent changes to the CT ecosystem and advancements in Private Information Retrieval (PIR). First, we find that checking for inclusion of Signed Certificate Timestamps (SCTs) in a log — the audit performed by clients — is now possible with PIR in under a second and under 100 kB of communication, with minor adjustments to previously proposed protocols. Our results also show how to scale audits by using existing batching techniques and the algebraic structure of the PIR protocols, in particular to obtain the hashes of certificates included in the log.
Since PIR protocols are more performant with smaller databases, we also suggest a number of strategies to lower the size of the SCT database for audits. Our key observation is that the web will likely transition to a new model for certificate issuance. While this transition is primarily motivated by the need to adapt the PKI to larger, post-quantum signature schemes, it also removes the need for SCT audits in most cases. We present the first estimates of how this transition may impact SCT auditing, based on data gathered from public CT logs. We find that large scale deployment of the new issuance model may reduce the number of SCT audits needed by a factor of 1,000, making PIR-based auditing practical to deploy.
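For intuition about the inclusion-check query pattern, here is the textbook two-server XOR-based PIR scheme over a toy table of hashes; the protocols evaluated in the work differ, so this is illustrative only.
```python
# Textbook two-server XOR PIR over a table of fixed-size records (here,
# stand-ins for SCT/certificate hashes). Illustrates the query pattern
# only; the protocols evaluated in the paper are more sophisticated.
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

db = [secrets.token_bytes(32) for _ in range(8)]  # toy log of 8 hashes

def client_query(i: int, n: int):
    mask1 = [secrets.randbelow(2) for _ in range(n)]  # random bit vector
    mask2 = mask1.copy()
    mask2[i] ^= 1                                     # differs only at index i
    return mask1, mask2

def server_answer(db, mask):
    acc = bytes(32)
    for rec, bit in zip(db, mask):
        if bit:
            acc = xor_bytes(acc, rec)
    return acc

i = 5
q1, q2 = client_query(i, len(db))
# Neither server alone learns i; the XOR of the answers is record i.
assert xor_bytes(server_answer(db, q1), server_answer(db, q2)) == db[i]
```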
Multi-Screaming-Channel Attacks: Frequency Diversity for Enhanced Attacks
Side-channel attacks consist of retrieving internal data from a victim system by analyzing its leakage, which usually requires proximity to the victim in the range of a few millimetres.
Screaming channels are EM side channels transmitted at a distance of a few meters.
They appear on mixed-signal devices integrating an RF module on the same silicon die as the digital part.
Consequently, the side channels are modulated by legitimate RF signal carriers and appear at the harmonics of the digital clock frequency.
While initial works have only considered collecting leakage at these harmonics, later work has demonstrated that the leakage is also present at frequencies other than these harmonics.
This result significantly increases the number of available frequencies to perform a screaming-channel attack, which can be convenient in an environment where multiple harmonics are polluted.
This work studies how this diversity of frequencies carrying leakage can be used to improve attack performance.
We first study how to combine multiple frequencies.
Second, we demonstrate that frequency combination can improve attack performance and evaluate this improvement according to the performance of the combined frequencies.
Finally, we demonstrate the benefit of frequency combination in attacks at $15$ meters and, for the first time to the best of our knowledge, at $30$ meters.
One final important observation is that frequency combination halves the number of traces needed to reach a given attack performance.
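As a sketch of what "combining frequencies" can mean in practice (one plausible rule, not necessarily the paper's): normalize the traces captured at each frequency and average them before running the usual correlation attack.
```python
# One plausible way to combine traces captured at several leakage
# frequencies before a correlation attack: normalize each frequency's
# traces to zero mean / unit variance, then average across frequencies.
# This illustrates the idea only, not the paper's exact method.
import numpy as np

def combine_frequencies(traces_per_freq):
    """traces_per_freq: list of arrays, each of shape (n_traces, n_samples)."""
    normalized = []
    for traces in traces_per_freq:
        mu = traces.mean(axis=0, keepdims=True)
        sd = traces.std(axis=0, keepdims=True) + 1e-12
        normalized.append((traces - mu) / sd)
    return np.mean(normalized, axis=0)  # combined traces, same shape

rng = np.random.default_rng(0)
freqs = [rng.normal(size=(100, 500)) for _ in range(3)]  # toy captures
combined = combine_frequencies(freqs)
```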
State Machine Replication Without Borders
A set of unacquainted parties, some of which may misbehave, communicate with each other over an unauthenticated and unreliable gossip network. They wish to jointly replicate a state machine $\Pi$ so that each one of them has fair access to its operation. Specifically, assuming parties' computational power is measured as queries to an oracle machine $H(\cdot)$, parties can issue symbols to the state machine in proportion to their queries to $H(\cdot)$ at a given fixed rate. Moreover, if such access to the state machine is provided continuously, in expected-constant-time installments, we qualify it as fast fairness.
A state machine replication (SMR) protocol in this permissionless setting is expected to offer consistency across parties and reliably process all symbols that honest parties wish to add to it in a timely manner despite continuously fluctuating participation and in the presence of an adversary who commands less than half of the total queries to $H(\cdot)$ per unit of time.
A number of protocols strive to offer the above guarantee together with fast settlement — notably, the Bitcoin blockchain offers a protocol that settles against Byzantine adversaries in polylogarithmic rounds, while fairness only holds in a fail-stop adversarial model (due to the fact that Byzantine behavior can bias access to the state machine in the adversary's favor). In this work, we put forth the first Byzantine-resilient protocol solving SMR in this setting with both expected-constant-time settlement and fast fairness. In addition, our protocol is self-sufficient in the sense of performing its own time keeping while tolerating an adaptively fluctuating set of parties.
A Methodology to Achieve Provable Side-Channel Security in Real-World Implementations
A wide range of countermeasures have been proposed to defend against side-channel attacks, with masking being one of the most effective and commonly used techniques. While theoretical models provide formal security proofs, these often rely on assumptions—sometimes implicit—that can be difficult to assess in practice. As a result, the design of secure masked implementations frequently combines proven theoretical arguments with heuristic and empirical validation. Despite the significant body of work, the literature still lacks a cohesive and well-defined framework for translating theoretical security guarantees into practical implementations on physical devices. Specifically, there remains a gap in connecting provable results from abstract models to quantitative security guarantees at the implementation level.
In this Systematization of Knowledge (SoK), we aim to provide a comprehensive methodology to transform abstract cryptographic algorithms into physically secure implementations against side-channel attacks on microcontrollers. We introduce new tools to adapt the ideal noisy leakage model to practical, real-world scenarios, and we integrate state-of-the-art techniques to build secure implementations based on this model. Our work systematizes the design objectives necessary for achieving high security levels in embedded devices and identifies the remaining challenges in concretely applying security reductions. By bridging the gap between theory and practice, we seek to provide a foundation for future research that can develop implementations with proven security against side-channel attacks, based on well-understood leakage assumptions.
Leveled Homomorphic Encryption Schemes for Homomorphic Encryption Standard
Homomorphic encryption allows for computations on encrypted data without exposing the underlying plaintext, enabling secure and private data processing in various applications such as cloud computing and machine learning. This paper presents a comprehensive mathematical foundation for three prominent homomorphic encryption schemes: Brakerski-Gentry-Vaikuntanathan (BGV), Brakerski-Fan-Vercauteren (BFV), and Cheon-Kim-Kim-Song (CKKS), all based on the Ring Learning with Errors (RLWE) problem. We align our discussion with the functionalities proposed in the recent homomorphic encryption standard, providing detailed algorithms and correctness proofs for each scheme. Additionally, we propose improvements to the current schemes focusing on noise management and optimization of public key encryption and leveled homomorphic computation. Our modifications ensure that the noise bound remains within a fixed function for all levels of computation, guaranteeing correct decryption and maintaining efficiency comparable to existing methods. The proposed enhancements reduce ciphertext expansion and storage requirements, making these schemes more practical for real-world applications.
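To make the objects concrete, the following is a toy, deliberately insecure sketch of a BFV-style encrypt/decrypt flow over $\mathbb{Z}_q[x]/(x^n+1)$; it illustrates why decryption succeeds exactly when the noise stays below a fixed bound, which is the kind of guarantee the noise-management improvements above target. Parameters and sampling here are simplified stand-ins, not the standard's.
```python
# Toy BFV-style encryption over the ring Z_q[x]/(x^n + 1); parameters are
# far too small to be secure and serve only to illustrate the flow.
import numpy as np

n, q, t = 16, 2**20, 17      # ring degree, ciphertext & plaintext moduli
delta = q // t               # scaling factor Delta = floor(q/t)

def polymul(a, b):
    """Multiply in Z[x]/(x^n + 1), reducing coefficients mod q."""
    full = np.convolve(a, b)
    res = full[:n].copy()
    res[: len(full) - n] -= full[n:]   # fold using x^n = -1
    return res % q

rng = np.random.default_rng(1)
small = lambda: rng.integers(-1, 2, n)   # ternary secrets / noise

s = small()                              # secret key
a = rng.integers(0, q, n)
e = small()
b = (-(polymul(a, s) + e)) % q           # public key is the pair (b, a)

m = rng.integers(0, t, n)                # plaintext polynomial
u, e1, e2 = small(), small(), small()
c0 = (polymul(b, u) + e1 + delta * m) % q
c1 = (polymul(a, u) + e2) % q

# Decrypt: c0 + c1*s = Delta*m + (e1 + e2*s - e*u) mod q; rounding
# recovers m as long as the noise term stays below Delta/2.
raw = (c0 + polymul(c1, s)) % q
centered = np.where(raw > q // 2, raw - q, raw)
assert np.array_equal((np.round(t * centered / q) % t).astype(int), m)
```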
SIGNITC: Supersingular Isogeny Graph Non-Interactive Timed Commitments
Non-Interactive Timed Commitment schemes (NITC) allow any commitment to be opened after a specified delay $t_{\mathrm{fd}}$. This is useful for sealed-bid auctions and as a primitive for more complex protocols. We present the first NITC without repeated squaring or theoretical black-box algorithms like NIZK proofs or one-way functions. It has fast verification, an almost arbitrary delay, and satisfies IND-CCA hiding and perfect binding. Our protocol is based on isogenies between supersingular elliptic curves, making it presumably quantum secure, and all algorithms have been implemented as part of SQISign or other well-known isogeny-based cryptosystems. Additionally, it needs no trusted setup and can use known primes for SIKE or SQISign.
From at Least $n/3$ to at Most $3\sqrt{n}$: Correcting the Algebraic Immunity of the Hidden Weight Bit Function
Weightwise degree-$d$ functions are Boolean functions that, on each set of fixed Hamming weight, coincide with a function of degree at most $d$. They generalize both symmetric functions and the Hidden Weight Bit Function (HWBF), which has been studied in cryptography for its favorable properties.
In this work, we establish a general upper bound on the algebraic immunity of such functions, a key security parameter against algebraic attacks on stream ciphers like filtered Linear Feedback Shift Registers (LFSRs). We construct explicit low-degree annihilators for weightwise degree-$d$ functions with small $d$, and show how to generalize these constructions. As an application, we prove that the algebraic immunity of the HWBF is upper bounded by $3\sqrt{n}$, disproving a result from 2011 that claimed a lower bound of $n/3$. We then apply our technique to several generalizations of the HWBF proposed since 2021 for homomorphically friendly constructions and LFSR-based ciphers, refining or refuting results from six prior works.
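For small $n$, the algebraic immunity of the HWBF can be checked directly with linear algebra over $\mathrm{GF}(2)$. The brute-force sketch below (assuming the usual definition $\mathrm{HWBF}(x) = x_{\mathrm{wt}(x)}$ with $\mathrm{HWBF}(0)=0$, bits 1-indexed) searches for annihilators of $f$ and of $f \oplus 1$ degree by degree.
```python
# Brute-force algebraic immunity of the HWBF for small n. AI(f) is the
# least degree of a nonzero g with g*f = 0 or g*(f+1) = 0, i.e. a g of
# that degree vanishing on the support of f or of f+1.
from itertools import combinations

def hwbf(x: int) -> int:
    """HWBF(x) = bit number wt(x) of x (1-indexed), and 0 at x = 0."""
    w = bin(x).count("1")
    return 0 if w == 0 else (x >> (w - 1)) & 1

def gf2_rank(rows):
    """Rank over GF(2) of rows given as integer bitmasks."""
    pivots, rank = {}, 0
    for r in rows:
        while r:
            h = r.bit_length() - 1
            if h in pivots:
                r ^= pivots[h]
            else:
                pivots[h], rank = r, rank + 1
                break
    return rank

def has_annihilator(points, n, d):
    """Is there a nonzero g of degree <= d vanishing on all points?"""
    monos = [m for k in range(d + 1) for m in combinations(range(n), k)]
    rows = []
    for x in points:
        row = 0
        for j, mono in enumerate(monos):
            if all((x >> i) & 1 for i in mono):
                row |= 1 << j
        rows.append(row)
    return gf2_rank(rows) < len(monos)   # nontrivial kernel exists

def algebraic_immunity(n: int) -> int:
    ones = [x for x in range(2**n) if hwbf(x) == 1]
    zeros = [x for x in range(2**n) if hwbf(x) == 0]
    d = 1
    while not (has_annihilator(ones, n, d) or has_annihilator(zeros, n, d)):
        d += 1
    return d

print([algebraic_immunity(n) for n in range(3, 9)])
```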
Highly Efficient Actively Secure Two-Party Computation with One-Bit Advantage Bound
Secure two-party computation (2PC) enables two parties to jointly evaluate a function while maintaining input privacy. Despite recent significant progress, a notable efficiency gap remains between actively secure and passively secure protocols. In S\&P'12, Huang, Katz, and Evans formalized the notion of \emph{active security with one-bit leakage}, providing a promising approach to bridging this gap. Protocols derived from this notion have become foundational in designing highly efficient actively secure 2PC protocols. However, a critical challenge identified by Huang, Katz, and Evans remains unexplored: these protocols face significant weaknesses in ensuring fairness for honest parties when employed in standalone settings rather than as components within larger protocols. While the authors proposed two potential solutions to mitigate this issue, both approaches are prohibitively expensive and lack formalization of security guarantees.
In this paper, we first formally define an enhanced notion called \emph{active security with one-bit-advantage bound}, in which the adversaries' advantages are strictly bounded to at most one bit beyond what honest parties obtain. This bound is enforced through a \emph{progressive revelation} mechanism, where the evaluation result is disclosed incrementally bit by bit. In addition, we propose a novel approach leveraging label structures within garbled circuits to design a highly efficient constant-round 2PC protocol that achieves active security with one-bit advantage bound. Our protocol demonstrates \emph{runtime performance nearly identical to that of passively secure garbled-circuit counterparts} in duplex networks (\eg $1.033\times$ for the {\tt SHA256} circuit in LAN), with \emph{low overhead} for output progressive revelation (only $80$ communicated bytes per bit release).
With its strengthened security guarantees and minimal overhead, our protocol is highly suitable for practical 2PC applications.
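A minimal sketch of the progressive-revelation idea, assuming the parties already hold XOR shares of the output bits (the paper's mechanism is built from garbled-circuit labels and is far cheaper; this only illustrates why the advantage is capped at one bit).
```python
# Progressive revelation with XOR-shared output bits: bit j is
# reconstructed only in round j, so a party that aborts mid-protocol
# is ahead of its counterpart by at most one output bit.
import secrets

output_bits = [1, 0, 1, 1, 0, 1, 0, 0]                 # jointly computed result
shares_a = [secrets.randbelow(2) for _ in output_bits]  # party A's shares
shares_b = [sa ^ bit for sa, bit in zip(shares_a, output_bits)]

view_a, view_b = [], []
for j in range(len(output_bits)):
    view_b.append(shares_b[j] ^ shares_a[j])  # A sends its share of bit j
    # ... if B aborts here, after receiving, B is ahead by exactly one bit ...
    view_a.append(shares_a[j] ^ shares_b[j])  # B responds with its share

assert view_a == output_bits and view_b == output_bits
```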
Post-Quantum Threshold Ring Signature Applications from VOLE-in-the-Head
We propose efficient, post-quantum threshold ring signatures constructed from the one-wayness of AES encryption and the VOLE-in-the-Head zero-knowledge proof system. Our scheme scales efficiently to large rings and extends the linkable ring signatures paradigm. We define and construct key-binding deterministic tags for signature linkability, which also enable succinct aggregation with approximate lower bound arguments of knowledge; this allows us to achieve succinct aggregation of our signatures without SNARKs. Finally, we extend our threshold ring signatures to realize post-quantum anonymous ledger transactions in the spirit of Monero. Our constructions assume symmetric-key primitives only.
Whilst it is common to build post-quantum signatures from the one-wayness property of AES and a post-quantum NIZK scheme, we extend this paradigm to define and construct novel security properties from AES that are useful for advanced signature applications. We introduce key-binding and pseudorandomness of AES to establish linkability and anonymity of our threshold ring signatures from deterministic tags, and similarly establish binding and hiding properties of block ciphers modeled as ideal permutations to build commitments from AES, a crucial building block for our proposed post-quantum anonymous ledger scheme.
Pencil: A Domain-Extended PRF with Full $n$-bit Security for Strengthening GCM and More
We consider the problem of constructing efficient pseudorandom functions with Beyond-Birthday-Bound (BBB) security from blockciphers. More specifically, we are interested in variable-output-length pseudorandom functions (PRFs) whose domain is twice that of the underlying blockcipher. We present two such constructions, $\textsf{Pencil}$ and $\sharp\textsf{Pencil}$, which provide weak PRF and full PRF security, respectively, where both achieve full $n$-bit security. While several recent works have focused on constructing BBB PRFs from blockciphers, much less attention has been given to weak PRF constructions, which can potentially be constructed more efficiently and still serve as a useful primitive. Another understudied problem in this domain is that of extending the domain of a BBB PRF, which turns out to be rather challenging. Besides being of theoretical interest in itself, this is also a very practical problem. Often, the input to the BBB PRF is a nonce, but random nonces are much easier to handle in practice as they do not require maintaining state---which can be very cumbersome in distributed systems and encrypted cloud storage. Accordingly, in order to maintain a BBB security bound, one requires random nonces of size between $1.5n$ and $2n$ bits and corresponding BBB (weak) PRF constructions that admit matching input sizes.
NIST has recently announced a pre-draft call for comments to standardise AEAD schemes that can encrypt larger amounts of data and admit larger nonces. The call lists two approaches. The first is to define an analogue of GCM using a 256-bit blockcipher, and the second is based on a recent proposal by Gueron to extend GCM with a key derivation function (KDF) called DNDK to increase its security. In essence, DNDK is a BBB-secure expanding weak pseudorandom function with a domain size of 192 bits that is realised from AES.
Our work makes relevant contributions to this domain in two important ways. Firstly, an immediate consequence of our work is that one can construct a GCM analogue with BBB security from $\sharp\textsf{Pencil}$, without resorting to a 256-bit blockcipher. Our second contribution is that $\sharp\textsf{Pencil}$ can be used as a KDF in combination with GCM in an analogous manner to DNDK-GCM. However, $\sharp\textsf{Pencil}$ being a full PRF, as opposed to DNDK which is only a weak PRF, allows one to prove the KDF-GCM composition secure as an AEAD scheme. Finally, when contrasting $\textsf{Pencil}$ and DNDK as weak PRFs with comparable parameters, our construction requires only half the blockcipher calls.
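For intuition on where the $1.5n$ to $2n$ range comes from (a back-of-envelope reading, not a calculation from the paper): among $q$ uniformly random $r$-bit nonces, a repeated nonce occurs with probability roughly $q^2/2^{r+1}$. Keeping this below a full $n$-bit security target of $q/2^n$ requires $q^2/2^{r+1} \le q/2^n$, i.e., $r \ge n + \log_2 q - 1$. Thus $r \approx 1.5n$ suffices for $q \approx 2^{n/2}$ queries, while supporting $q \approx 2^n$ queries pushes the nonce length toward $2n$ bits, matching the range quoted above.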
A Better Kyber Butterfly for FPGAs
Kyber was selected by NIST as a Post-Quantum
Cryptography Key Encapsulation Mechanism standard. This
means that the industry now needs to transition and adopt
these new standards. One of the most demanding operations in
Kyber is the modular arithmetic, making it a suitable target for
optimization. This work offers a novel modular reduction design
with the lowest area on Xilinx FPGA platforms. This novel design,
through K-reduction and LUT-based reduction, utilizes 49 LUTs
and 1 DSP as opposed to Xing and Li’s 2021 CHES design
requiring 90 LUTs and 1 DSP for one modular multiplication.
Our design is the smallest modular multiplier reported as of
today.
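As a software-level illustration of the identity behind K-reduction for Kyber's modulus, note that $q = 3329 = 13 \cdot 2^8 + 1$, so $13 \cdot 2^8 \equiv -1 \pmod{q}$. The sketch below shows only this arithmetic identity; the paper's contribution is the LUT/DSP circuit, which this code does not capture.
```python
# K-RED-style reduction for q = 3329 = 13 * 2^8 + 1. Splitting
# c = ch * 2^8 + cl gives 13*c = 13*cl + ch*(13*2^8) = 13*cl - ch (mod q):
# one shift, one small multiply, one subtraction.
import random

Q, K, M = 3329, 13, 8

def kred(c: int) -> int:
    """Return a value congruent to K*c modulo Q, with reduced magnitude."""
    cl = c & ((1 << M) - 1)
    ch = c >> M
    return K * cl - ch

# Sanity check over random products of two values below Q.
for _ in range(1000):
    c = random.randrange(Q) * random.randrange(Q)
    assert kred(c) % Q == (K * c) % Q
```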
Low-Latency Rate-Distortion-Perception Trade-off: A Randomized Distributed Function Computation Application
Semantic communication systems, which focus on transmitting the semantics of data rather than its exact reconstruction, redefine the design of communication networks for transformative efficiency in bandwidth-limited and latency-critical applications. Addressing these goals, we tackle the rate-distortion-perception (RDP) problem for image compression, a critical challenge in achieving perceptually realistic reconstructions under rate constraints. Formulated within the randomized distributed function computation (RDFC) framework, we establish an achievable non-asymptotic RDP region, providing finite blocklength trade-offs between rate, distortion, and perceptual quality, aligning with semantic communication objectives. We extend this region to also include a secrecy constraint, providing strong secrecy guarantees against eavesdroppers via physical-layer security methods, ensuring resilience against quantum attacks. Our contributions include (i) establishing achievable bounds for non-asymptotic RDP regions under realism and distortion constraints; (ii) extending these bounds to provide strong secrecy guarantees; (iii) characterizing the asymptotic secure RDP region under a perfect realism constraint; and (iv) illustrating significant reductions in rates and the effects of secrecy constraints and finite blocklengths. Our results provide actionable insights for designing low-latency, high-fidelity, and secure image compression systems with realistic outputs, advancing applications, e.g., in privacy-critical domains.
An Efficient and Secure Boolean Function Evaluation Protocol
Boolean functions play an important role in designing and analyzing many cryptographic systems, such as block ciphers, stream ciphers, and hash functions, due to their unique cryptographic properties, such as nonlinearity, correlation immunity, and algebraic properties. The secure evaluation of Boolean functions, or Secure Boolean Evaluation (SBE), is an important area of research. SBE allows parties to jointly compute Boolean functions without exposing their private inputs. SBE finds applications in privacy-preserving protocols and secure multi-party computations. In this manuscript, we present an efficient and generic two-party protocol (namely BooleanEval) for the secure evaluation of Boolean functions by utilizing a 1-out-of-2 Oblivious Transfer (OT) as a building block. BooleanEval only employs hash evaluations and XOR operations as the core computational steps, thus making it lightweight and fast. Unlike other lightweight state-of-the-art designs of SBE, BooleanEval avoids the use of additional cryptographic primitives, such as commitment schemes, reducing the computational overhead.
Engorgio: An Arbitrary-Precision Unbounded-Size Hybrid Encrypted Database via Quantized Fully Homomorphic Encryption
This work proposes an encrypted hybrid database framework that combines vectorized data search and relational data query over quantized fully homomorphic encryption (FHE). We observe that, due to the lack of efficient encrypted data ordering capabilities, most existing encrypted database (EDB) frameworks do not support hybrid queries involving both vectorized and relational data. To further enrich query expressiveness while retaining evaluation efficiency, we propose Engorgio, a hybrid EDB framework based on quantized data ordering techniques over FHE. Specifically, we design a new quantized data encoding scheme along with a set of novel comparison and permutation algorithms to accurately generate and apply orders between large-precision data items. Furthermore, we optimize specific query types, including full table scan, batched query, and Top-k query, to enhance the practical performance of the proposed framework. In our experiments, we show that, compared to state-of-the-art EDB frameworks, Engorgio is up to 28x--854x faster in homomorphic comparison, 65x--687x faster in homomorphic sorting and 15x--1,640x faster over a variety of end-to-end relational, vectorized, and hybrid SQL benchmarks. Using Engorgio, the amortized runtime for executing a relational and hybrid query on a 48-core processor is under 3 and 75 seconds, respectively, over a 10K-row hybrid database.
A Quantum Approach For Reducing Communications in Classical Secure Computations with Long Outputs
How could quantum cryptography help us achieve what is not achievable in classical cryptography? In this work we study the classical cryptographic problem of two parties wishing to perform secure computations with long outputs. As a basic primitive and example, we first consider the following problem, which we call secure function sampling with long outputs: suppose $f:\{0,1\}^n\rightarrow \{0,1\}^m$ is a public, efficient classical function, where $m$ is large; Alice would like to sample $x$ from its domain and send $f(x)$ to Bob; Bob should learn no more than $f(x)$ even if he behaves maliciously. Classical cryptography, like FHE and succinct arguments [Gen09,Kil92,HW15], allows us to achieve this task with communication complexity $O(n+m)$; could we achieve it with communication complexity independent of $m$?
In this work, we first design a quantum cryptographic protocol that achieves secure function sampling with approximate security, within $O(n)$ communication (omitting the dependency on the security parameter and error tolerance). We also prove the classical impossibility using techniques in [HW15], which means that our protocol indeed achieves a type of quantum advantage. Building on the secure function sampling protocol, we further construct protocols for general secure two-party computations [Yao86,GB01] with approximate security, with communication complexity depending only on the input length and the targeted security. In terms of assumptions, we construct protocols for these problems assuming only the existence of collapsing hash functions [Unr16]; moreover, we construct a classical-channel protocol for these problems additionally assuming the existence of noisy trapdoor claw-free functions [BCMVV,BKVV].
$\mathsf{emGraph}$: Efficient Multiparty Secure Graph Computation
Secure graph computation enables computing on graphs while hiding the graph topology as well as the associated node/edge data. This facilitates collaborative analysis among multiple data owners, who may only hold a private partial view of the global graph. Several works address this problem using the technique of secure multiparty computation (MPC) with two or three parties. However, when moving to the multiparty setting, as required for collaborative analysis among multiple data owners, these solutions are no longer scalable. This remains true with respect to the state-of-the-art framework of $\mathsf{Graphiti}$ (Koti et al., CCS 2024) as well. Specifically, $\mathsf{Graphiti}$ incurs a round complexity linear in the number of parties or data owners. This is due to its reliance on a secure shuffle protocol, which constitutes a bottleneck in the multiparty setting. Additionally, $\mathsf{Graphiti}$ has a prohibitively expensive initialisation phase due to its reliance on secure sort, with a round complexity dependent on both the graph size and the number of parties.
We propose $\mathsf{emGraph}$, a generic framework for secure graph computation in the multiparty setting that eliminates the need for shuffle and instead relies on a weaker primitive known as $\mathsf{Permute+Share}$. Further, $\mathsf{emGraph}$ is designed to have a lightweight initialisation that eliminates the need for sorting, making its round complexity independent of the graph size and the number of parties. Unlike any of the prior works, achieving a round complexity independent of the number of parties is what makes $\mathsf{emGraph}$ scalable.
Finally, we implement and benchmark the performance of $\mathsf{emGraph}$ for the application of PageRank computation and showcase its efficiency and scalability improvements over $\mathsf{Graphiti}$. Concretely, we witness improvements of up to $80\times$ in runtime in comparison to state-of-the-art framework $\mathsf{Graphiti}$. Further, we observe that $\mathsf{emGraph}$ takes under a minute to perform 10 iterations of PageRank computation on a graph of size $10^6$ that is distributed among $25$ parties/data owners, making it highly practical for secure graph computation in the multiparty setting.
The Black-Box Simulation Barrier Persists in a Fully Quantum World
Zero-Knowledge (ZK) protocols have been a subject of intensive study due to their fundamental importance and versatility in modern cryptography. However, the inherently different nature of quantum information significantly alters the landscape, necessitating a re-examination of ZK designs.
A crucial aspect of ZK protocols is their round complexity, intricately linked to $\textit{simulation}$, which forms the foundation of their formal definition and security proofs. In the $\textit{post-quantum}$ setting, where honest parties and their communication channels are all classical but the adversaries could be quantum, Chia, Chung, Liu, and Yamakawa [FOCS'21] demonstrated the non-existence of constant-round $\textit{black-box-simulatable}$ ZK arguments (BBZK) for $\mathbf{NP}$ unless $\mathbf{NP} \subseteq \mathbf{BQP}$. However, this problem remains widely open in the full-fledged quantum future that will eventually arrive, where all parties (including the honest ones) and their communication are naturally quantum.
Indeed, this problem is of interest to the broader theory of quantum computing. It has been an important theme to investigate how quantum power fundamentally alters traditional computational tasks, such as the $\textit{unconditional}$ security of Quantum Key Distribution and the incorporation of Oblivious Transfers in MiniQCrypt. Moreover, quantum communication has led to round compression for commitments and interactive arguments. Along this line, the above problem is of great significance in understanding whether quantum computing could also change the nature of ZK protocols in some fundamental manner.
We resolve this problem by proving that only languages in $\mathbf{BQP}$ admit constant-round $\textit{fully-quantum}$ BBZK. This result holds significant implications. Firstly, it illuminates the nature of quantum zero-knowledge and provides valuable insights for designing future protocols in the quantum realm. Secondly, it relates ZK round complexity with the intriguing problem of $\mathbf{BQP}$ vs $\mathbf{QMA}$, which is out of the reach of previous analogue impossibility results in the classical or post-quantum setting. Lastly, it justifies the need for the $\textit{non-black-box}$ simulation techniques or the relaxed security notions employed in existing constant-round fully-quantum BBZK protocols.
Proving CPU Executions in Small Space
zkVMs are SNARKs for verifying CPU execution. They allow an untrusted prover to show that it correctly ran a specified program on a witness, where the program is given as bytecode conforming to an instruction set architecture like RISC-V. Existing zkVMs still struggle with high prover resource costs, notably large runtime and memory usage. We show how to implement Jolt—an advanced, sum-check-based zkVM—with a significantly reduced memory footprint, without relying on SNARK recursion, and with only modest runtime overhead (potentially well below a factor of two). We discuss benefits of this approach compared to prevailing recursive techniques.
Silent Circuit Relinearisation: Sublinear-Size (Boolean and Arithmetic) Garbled Circuits from DCR
We introduce a general template for building garbled circuits with low communication, under the decisional composite residuosity (DCR) assumption. For the case of layered Boolean circuits, we can garble a circuit of size $s$ with communication proportional to $O(s/\log\log s)$ bits, plus an additive factor that is polynomial in the security parameter. For layered arithmetic circuits with $B$-bounded integer computation, we obtain a similar result: the garbled arithmetic circuit has size $O(s/\log\log s) \cdot (\lambda + \log B)$ bits, where $\lambda$ is the security parameter. These are the first constructions of general-purpose, garbled circuits with sublinear size, without relying on heavy tools like indistinguishability obfuscation or attribute-based and fully homomorphic encryption.
To achieve these results, our main technical tool is a new construction of a form of homomorphic secret sharing where some of the inputs are semi-private, that is, known to one of the evaluating parties. Through a new relinearisation technique that allows performing arbitrary additions and multiplications on semi-private shares, we build such an HSS scheme that supports evaluating any function of the form $C(y) \cdot C'(x)$, where $C$ is any polynomially-sized circuit applied to the semi-private input $y$, and $C'$ is a restricted-multiplication (or NC1) circuit applied to the private input $x$. This significantly broadens the expressiveness of prior known HSS constructions.
Clubcards for the WebPKI: smaller certificate revocation tests in theory and practice
CRLite is a low-bandwidth, low-latency, privacy-preserving mechanism for distributing certificate revocation data. A CRLite aggregator periodically encodes revocation data into a compact static hash set, or membership test, which can be downloaded by clients and queried privately. We present a novel data structure for membership tests, which we call a clubcard, and we evaluate the encoding efficiency of clubcards using data from Mozilla's CRLite infrastructure.
As of November 2024, the WebPKI contains over 900 million valid certificates and over 8 million revoked certificates. We describe an instantiation of CRLite that encodes the revocation status of these certificates in a 6.7 MB package. This is $54\%$ smaller than the original instantiation of CRLite presented at the 2017 IEEE Symposium on Security and Privacy, and it is $21\%$ smaller than the lower bound claimed in that work.
A sequence of clubcards can encode a dynamic dataset like the WebPKI revocation set. Again using data from late 2024, we find that clubcards encoding 6-hour delta updates to the WebPKI can be compressed to 26.8 kB on average---a size that makes CRLite truly practical.
We have extended Mozilla's CRLite infrastructure so that it can generate clubcards, and we have added client-side support for this system to Firefox. We report on some performance aspects of our implementation, which is currently the default revocation checking mechanism in Firefox Nightly, and we propose strategies for further reducing the bandwidth requirements of CRLite.
Random Oracle Combiners: Merkle-Damgård Style
A Random Oracle Combiner (ROC), introduced by Dodis et al. (CRYPTO '22), takes two hash functions $h_1, h_2$ from $m$ bits to $n$ bits and outputs a new hash function $C$ from $m'$ bits to $n'$ bits. This function $C$ is guaranteed to be indifferentiable from a fresh random oracle as long as one of $h_1$ and $h_2$ (say, $h_1$) is a random oracle, while the other $h_2$ can "arbitrarily depend" on $h_1$.
The work of Dodis et al. also built the first length-preserving ROC, where $n' = n$. Unfortunately, despite this feasibility result, this construction has several deficiencies. From the practical perspective, it could not be directly applied to existing Merkle-Damgård-based hash functions, such as SHA2 or SHA3. From the theoretical perspective, it required $h_1$ and $h_2$ to have input length $m > 3\lambda$, where $\lambda$ is the security parameter.
To overcome these limitations, Dodis et al. conjectured — and left as the main open question — that the following (salted) construction is a length-preserving ROC:
$C^{h_1,h_2}_{\mathcal{Z}_1,\mathcal{Z}_2}(M) = h_1^*(M, \mathcal{Z}_1) \oplus h_2^*(M, \mathcal{Z}_2),$
where $\mathcal{Z}_1, \mathcal{Z}_2$ are random salts of appropriate length, and $f^*$ denotes the Merkle-Damgård extension of a given compression function $f$. As our main result, we resolve this conjecture in the affirmative. For practical use, this makes the resulting combiner applicable to existing Merkle-Damgård-based hash functions. On the theory side, it shows the existence of ROCs requiring only optimal input length $m = \lambda + O(1)$.
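A toy rendering of this combiner, with SHA-256-based stand-ins for the two compression functions (the theorem concerns ideal compression functions, so this sketch only fixes ideas, not security):
```python
# Toy rendering of the salted combiner C(M) = h1*(M, Z1) XOR h2*(M, Z2),
# where f* is the Merkle-Damgard extension of a compression function f.
# Both compression functions are SHA-256-based stand-ins, not ideal ones.
import hashlib
import os

N = 32  # chaining value / output length in bytes

def compress(tag: bytes, cv: bytes, block: bytes) -> bytes:
    """Stand-in compression function: (chaining value, block) -> new value."""
    return hashlib.sha256(tag + cv + block).digest()

def md_extend(tag: bytes, blocks: list) -> bytes:
    """Merkle-Damgard extension f*: absorb the blocks from a fixed IV."""
    cv = bytes(N)
    for blk in blocks:
        cv = compress(tag, cv, blk)
    return cv

def combiner(message_blocks: list, z1: bytes, z2: bytes) -> bytes:
    y1 = md_extend(b"h1", message_blocks + [z1])   # h1*(M, Z1)
    y2 = md_extend(b"h2", message_blocks + [z2])   # h2*(M, Z2)
    return bytes(a ^ b for a, b in zip(y1, y2))

Z1, Z2 = os.urandom(N), os.urandom(N)   # public random salts
digest = combiner([b"block one".ljust(N), b"block two".ljust(N)], Z1, Z2)
```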
On some non-linear recurrences over finite fields linked to isogeny graphs
This paper presents new results that establish connections between isogeny graphs and nonlinear recurrences over finite fields. Specifically, we prove several theorems that link these two areas, offering deeper insights into the structure of isogeny graphs and their relationship with nonlinear recurrence sequences. We further provide two related conjectures which may be worthy of further research. These findings contribute to a better understanding of the endomorphism ring of a curve, advancing progress toward the resolution of the Endomorphism Ring Problem, which aims to provide a computational characterization of the endomorphism ring of a supersingular elliptic curve.
Analytic and Simulation Results of a Gaussian Physically Unclonable Constant Based on Resistance Dispersion
Physically Unclonable Constants (PUCs) are a special type of Physically Unclonable Function and can be used to embed secret bit-strings in chips. Most PUCs are an array of cells, where each cell is a digital circuit that evolves spontaneously toward one of two states, the chosen state being a function of random manufacturing process variations. In this paper we propose an Analog Physically Unclonable Constant (APUC) whose output is an analog value to be transformed into digital form by a digitizer circuit. The rationale behind this proposal is that an APUC cell has the potential of providing more than one bit, reducing the required footprint. Preliminary theoretical analysis and simulation results are presented. The proposed APUC has interesting performance (e.g., it can provide up to 5 bits per cell) that warrants further investigation.
TockOwl: Asynchronous Consensus with Fault and Network Adaptability
BFT protocols usually exhibit a waterfall-like performance degradation in the face of crash faults. Some BFT protocols avoid such sudden degradation under crash faults, but they achieve this at the expense of increased communication and round complexity in fault-free scenarios. In a nutshell, existing protocols lack the adaptability needed to perform optimally under varying conditions.
We propose TockOwl, the first asynchronous consensus protocol with fault adaptability. TockOwl features quadratic communication and constant round complexity, allowing it to remain efficient in fault-free scenarios. TockOwl also possesses crash robustness, enabling it to maintain stable performance when facing crash faults. These properties collectively ensure the fault adaptability of TockOwl.
Furthermore, we propose TockOwl+ that has network adaptability. TockOwl+ incorporates both fast and slow tracks and employs hedging delays, allowing it to achieve low latency comparable to partially synchronous protocols without waiting for timeouts in asynchronous environments. Compared to the latest dual-track protocols, the slow track of TockOwl+ is simpler, implying shorter latency in fully asynchronous environments.
An attack on ML-DSA using an implicit hint
The security of ML-DSA, like most signature schemes, is partially based on the fact that the nonce used to generate a signature is unknown to any attacker. In this work, we exhibit a lattice-based attack that becomes possible if the nonces share implicit or explicit information. From a collection of signatures whose nonces share certain coefficients, it is indeed possible to build a collection of non-full-rank lattices. Intersecting them, we show how to create a low-rank lattice that contains one of the polynomials of the secret key, which in turn can be recovered using lattice reduction techniques.
There are several interpretations of this result: firstly, it can be seen as a generalization of a fault-based attack on BLISS presented at SAC'16 by Thomas Espitau et al. Alternatively, it can be understood as a side-channel attack on ML-DSA, in the case where an attacker is able to recover only one of the coefficients of the nonce used during the generation of the signature. For ML-DSA-II, we show that $4 \times 160$ signatures and a few hours of computation are sufficient to recover the secret key on a desktop computer. Lastly, our result shows that simple countermeasures, such as permuting the generation of the nonce coefficients, are not sufficient.
Honorific Security: Efficient Two-Party Computation with Offloaded Arbitration and Public Verifiability
Secure two-party computation (2PC) allows two distrustful parties to jointly compute functions while keeping their private inputs secret. Adversaries are often categorized as semi-honest or malicious, depending on whether they follow the protocol specification. While a semi-honestly secure protocol is fast, it strongly assumes that all participants follow the protocol; a maliciously secure protocol, by contrast, often requires heavy verification steps and a preprocessing phase to prohibit any cheating. Covert security, first introduced by Aumann and Lindell [10], looks into the "middle ground" between semi-honest and malicious security, such that active adversaries who cheat will be caught with a predefined probability. However, it remains an open question how to properly determine such a probability before protocol execution, and in current constructions misbehavior detection must be performed by the other honest participants via cut-and-choose. To achieve public verifiability and meanwhile outsource the verification steps, [12] presented publicly auditable security, which enables an external auditor to verify the correctness of the result. Essentially, an additional assumption of a bulletin-board functionality is required to keep track of the broadcast messages for the auditor. Moreover, the auditor cannot identify the cheater, but only points out that the result is incorrect. The (robust) accountability family [40, 62, 76] achieves both guaranteed output delivery and public verifiability, but relies on heavy offline and online constructions with zero-knowledge proofs.
In this paper, we propose a new security notion called honorific security, where an external arbiter can identify the cheater with overwhelming probability under malicious corruption. With honorific security, we do not prohibit a corrupted party from cheating during the online stage, but we enable the honest party to detect and publicly punish the cheater later on. We show that a maliciously secure garbled circuit (GC) [83] protocol can thus be constructed with only slightly more overhead than a passively secure protocol. Our construction runs up to 2.37 times and 13.30 times as fast as the state-of-the-art protocols with covert and malicious security, respectively.
Laconic Cryptography with Preprocessing
Laconic cryptography focuses on designing two-message protocols that allow secure computation on large datasets while minimizing communication costs. While laconic cryptography protocols achieve asymptotically optimal communication complexity for many tasks, their concrete efficiency is prohibitively expensive due to the heavy use of public-key techniques or the non-black-box use of cryptographic primitives.
In this work, we initiate the study of "laconic cryptography with preprocessing", introducing a model that includes an offline phase to generate database-dependent correlations, which are then used in a lightweight online phase. These correlations are conceptually simple, relying on linear-algebraic techniques. This enables us to develop a protocol for private laconic vector oblivious linear evaluation (plvOLE). In such a protocol, the receiver holds a large database $\mathsf{DB}$, and the sender has two messages $v$ and $w$, along with an index $i$. The receiver learns the value $v \cdot \mathsf{DB}_i + w$ without revealing other information.
Our protocol, which draws on ideas developed in the context of private information retrieval with preprocessing, serves as the backbone for two applications of interest: laconic private set intersection (lPSI) for large universes and laconic function evaluation for RAM-programs (RAM-LFE). Based on our plvOLE protocol, we provide efficient instantiations of these two primitives in the preprocessing model.
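To fix what plvOLE computes (the functionality only; this says nothing about how the protocol achieves it privately):
```python
# The plvOLE functionality in one line: the receiver, holding a large DB,
# learns v * DB[i] + w for the sender's inputs (v, w, i) and nothing else;
# the sender learns nothing about DB beyond its declared length.
P = 2**61 - 1   # toy prime field for illustration

def plvole_ideal(db: list, v: int, w: int, i: int) -> int:
    """Ideal evaluation: the receiver's output on sender inputs (v, w, i)."""
    return (v * db[i] + w) % P

db = [3, 1, 4, 1, 5, 9, 2, 6]
assert plvole_ideal(db, v=7, w=2, i=4) == (7 * 5 + 2) % P
```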