All papers in 2025
Adaptive Adversaries in Byzantine-Robust Federated Learning: A Survey
Federated Learning (FL) has recently emerged as one of the leading paradigms for collaborative machine learning, enabling model computation without the need to expose one's privately stored data. Despite its advantages, FL systems face severe challenges in their security solutions, which must address both the privacy and the robustness of the trained models. This paper focuses on vulnerabilities in FL security, with an emphasis on model robustness. We identify critical gaps in current defences, particularly against adaptive adversaries that modify their attack strategies after being disconnected and then rejoin the system to continue the attack. To our knowledge, other surveys in this domain do not cover adaptive adversaries; this gap, together with the significance of their impact, is the main motivation for this work. Our contributions are fivefold: (1) we present a comprehensive overview of FL systems, with a complete summary of their fundamental building blocks; (2) we give an extensive overview of existing vulnerabilities that target FL systems; (3) we highlight baseline attack vectors as well as state-of-the-art approaches to developing attacks and defence mechanisms; (4) we introduce a novel baseline attack method leveraging reconnecting malicious clients; and (5) we identify future research directions to address and counter adaptive attacks. We demonstrate through experimental results that the baseline secure aggregation rules commonly used for comparison in other works, such as Krum and Trimmed Mean, are insufficient against such attacks. Further, works improving upon those algorithms do not address this concern either. Our findings serve as a basis for redefining FL security paradigms with adaptive adversaries in mind.
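As context for the two aggregation rules named above, here is a minimal NumPy sketch of Krum and Trimmed Mean (toy versions for illustration only; the parameter `f`, the assumed number of malicious clients, follows the usual convention and is not taken from this paper):

```python
# Toy sketches of the two baseline robust-aggregation rules named above.
import numpy as np

def krum(updates: np.ndarray, f: int) -> np.ndarray:
    """Return the single client update with the smallest sum of squared
    distances to its n - f - 2 nearest neighbours."""
    n = len(updates)
    dists = np.linalg.norm(updates[:, None] - updates[None, :], axis=2) ** 2
    scores = []
    for i in range(n):
        neighbours = np.sort(dists[i])[1 : n - f - 1]  # skip self (distance 0)
        scores.append(neighbours.sum())
    return updates[int(np.argmin(scores))]

def trimmed_mean(updates: np.ndarray, beta: int) -> np.ndarray:
    """Coordinate-wise mean after dropping the beta largest and beta
    smallest values in each coordinate."""
    sorted_updates = np.sort(updates, axis=0)
    return sorted_updates[beta : len(updates) - beta].mean(axis=0)

# Example: 8 honest updates near 1.0, 2 malicious outliers at 50.0.
rng = np.random.default_rng(0)
ups = np.vstack([rng.normal(1.0, 0.1, (8, 4)), np.full((2, 4), 50.0)])
print(krum(ups, f=2))
print(trimmed_mean(ups, beta=2))
```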
Almost Optimal KP and CP-ABE for Circuits from Succinct LWE
We present almost-optimal lattice-based attribute-based encryption (ABE) and laconic function evaluation (LFE). For depth d circuits over $\ell$-bit inputs, we obtain
* key-policy (KP) and ciphertext-policy (CP) ABE schemes with ciphertext, secret key and public key size $O(1)$;
* LFE with ciphertext size $\ell + O(1)$ as well as CRS and digest size $O(1)$;
where $O(\cdot)$ hides $\mathrm{poly}(d, \lambda)$ factors. The parameter sizes are optimal, up to the $\mathrm{poly}(d)$ dependencies. The security of our schemes relies on succinct LWE (Wee, CRYPTO 2024). Our results constitute a substantial improvement over the state of the art; none of our results were known even under the stronger evasive LWE assumption.
Towards Building Scalable Constant-Round MPC from Minimal Assumptions via Round Collapsing
In this work, we study the communication complexity of constant-round secure multiparty computation (MPC) against a fully malicious adversary and consider both the honest majority setting and the dishonest majority setting. In the (strong) honest majority setting (where $t=(1/2-\epsilon)n$ for a constant $\epsilon$), the best-known result without relying on FHE is given by Beck et al. (CCS 2023) based on the LPN assumption that achieves $O(|C|\kappa)$ communication, where $\kappa$ is the security parameter and the achieved communication complexity is independent of the number of participants. In the dishonest majority setting, the best-known result is achieved by Goyal et al. (ASIACRYPT 2024), which requires $O(|C|n\kappa)$ bits of communication and is based on the DDH and LPN assumptions.
In this work, we achieve the following results: (1) For any constant $\epsilon<1$, we give the first constant-round MPC in the dishonest majority setting for corruption threshold $t<(1-\epsilon)n$ with $O(|C|\kappa+D (n+\kappa)^2\kappa)$ communication assuming random oracles and oblivious transfers, where $D$ is the circuit depth. (2) We give the first constant-round MPC in the standard honest majority setting (where $t=(n-1)/2$) with $O(|C|\kappa+D (n+\kappa)^2\kappa)$ communication only assuming random oracles.
Unlike most of the previous constructions of constant-round MPCs that are based on multiparty garbling, we achieve our result by letting each party garble its local computation in a non-constant-round MPC that meets certain requirements. We first design a constant-round MPC that achieves $O(|C|\kappa + Dn^2\kappa)$ communication assuming random oracles in the strong honest majority setting of $t=n/4$. Then, we combine the party virtualization technique and the idea of MPC-in-the-head to boost the corruption threshold to $t<(1-\epsilon)n$ for any constant $\epsilon<1$, assuming oblivious transfers, to achieve our first result. Finally, our second result is obtained by instantiating oblivious transfers using a general honest-majority MPC and the OT extension technique built on random oracles.
Scalable Zero-knowledge Proofs for Non-linear Functions in Machine Learning
Zero-knowledge (ZK) proofs have recently been explored for the integrity of machine learning (ML) inference. However, these protocols suffer from high computational overhead, with the primary bottleneck stemming from the evaluation of non-linear functions. In this paper, we propose the first systematic ZK proof framework for non-linear mathematical functions in ML from the perspective of table lookup. The key challenge is that table lookup cannot be directly applied to non-linear functions in ML, since it would suffer from inefficiencies due to an intolerably large table. Therefore, we carefully design several important building blocks, including digital decomposition, comparison, and truncation, such that they can effectively utilize table lookup with a very small table size while ensuring the soundness of the proofs. Based on these blocks, we implement complex mathematical operations and further construct ZK proofs for current mainstream non-linear functions in ML such as ReLU, sigmoid, and normalization. The extensive experimental evaluation shows that our framework achieves a 50-179× runtime improvement compared to the state-of-the-art work, while maintaining a similar level of communication efficiency.
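To illustrate why digital decomposition keeps lookup tables small, here is a toy, non-cryptographic sketch (no ZK machinery; the limb sizes are illustrative): a 32-bit value is handled through four 8-bit tables of 256 entries instead of one table with $2^{32}$ entries.

```python
# Toy illustration of the digital-decomposition idea described above.
LIMB_BITS, NUM_LIMBS = 8, 4

def decompose(x: int) -> list[int]:
    return [(x >> (LIMB_BITS * i)) & 0xFF for i in range(NUM_LIMBS)]

def recompose(limbs: list[int]) -> int:
    return sum(limb << (LIMB_BITS * i) for i, limb in enumerate(limbs))

# Small table: a per-limb range check with 256 entries instead of 2^32.
RANGE_TABLE = set(range(2 ** LIMB_BITS))

def range_check(x: int) -> bool:
    limbs = decompose(x)
    return recompose(limbs) == x and all(l in RANGE_TABLE for l in limbs)

# Comparison via limbs: compare most-significant limb first.
def less_than(a: int, b: int) -> bool:
    for la, lb in zip(reversed(decompose(a)), reversed(decompose(b))):
        if la != lb:
            return la < lb
    return False

assert range_check(0xDEADBEEF) and less_than(3, 2**31)
```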
On the Estonian Internet Voting System, IVXV, SoK and Suggestions
The Estonian i-voting experience is probably the richest to analyze: a country considered a pioneer in digitizing both the government and the private sector since 2001, and hence in digital voting since 2005, yet there are still complaints, criticisms, and remarks to consider about the IVXV system. In this paper, we introduce a Systematization of Knowledge (SoK) of the Estonian IVXV i-voting system and propose some additional security enhancements. The presented SoK includes applications implemented by election observers in the 2023 and 2024 elections, which, to our knowledge, have never been mentioned or analyzed in academia before. The paper also updates the general knowledge about an extra right given to auditors (but not observers) in the June 2024 European election, recent improvements, and recent complaints. Finally, we discuss the current system status in the 2024 EP elections, propose our own suggestions for some remaining vulnerabilities, and then raise the inevitable question of the approaching quantum threat.
Capitalized Bitcoin Fork for National Strategic Reserve
We describe a strategy for a nation to acquire a majority stake in Bitcoin at zero cost to its taxpayers. We propose a Bitcoin fork sponsored by the government of the nation, and backed by the full faith of its treasury, such that the genesis block of this fork attributes a fixed, large amount of a new kind of token, called strategic-reserve-bitcoin tokens (SRBTC), to the nation's treasury; this amount is some multiple (greater than one) of the total supply of Bitcoin tokens (BTC) currently set in the Bitcoin protocol. BTC tokens continue to be honored 1:1 as SRBTC tokens in the forked chain. The only capital that the nation puts up is its explicit guarantee that the SRBTC tokens of the fork will be accepted as legal tender, such as for payment of taxes to the treasury.
We suggest that this is a better approach than starting a new blockchain that mimics Bitcoin, as it is partially fair to the current holders of Bitcoin, which in turn makes it competitive in the space of other such possible forks by other powerful nations. Moreover, such a proof-of-work blockchain retains its egalitarian and democratic nature, which competitively deters the nation from any dilutions in the future.
To justify our proposal we set up three competitive games, and show strategies for the different players that are in Nash equilibrium and that throw further light on these claims. In particular,
1. The first game shows that if the only two alternatives for investors are to invest in BTC or SRBTC, then individuals who already have at least a fraction $\theta$ of their wealth invested in BTC will invest new money in the original chain, whereas individuals whose current wealth invested in BTC is less than the $\theta$ fraction will invest new money in SRBTC (a toy restatement of this decision rule follows the list).
2. The second game shows that if there is a third alternative for investment, namely cash that is losing value (inflation-adjusted) at a percentage rate $d$, then investors who had less than a $\theta$ fraction of their wealth in Bitcoin will invest in SRBTC only if the dilution of SRBTC is large enough (as an increasing (linear) function of $1/d$). Here, by dilution we mean the new SRBTC tokens that are allowed to be eventually mined in the fork.
3. The third game shows that investors would prefer a fork of Bitcoin over a replica of Bitcoin that does not value original BTC, when both are available and even if both are backed similarly by one or more nations.
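As a toy restatement of the first game's equilibrium strategy (illustrative only; the threshold value below is hypothetical, not taken from the paper):

```python
# Toy restatement of Game 1's equilibrium strategy.
def new_money_choice(btc_wealth_fraction: float, theta: float) -> str:
    """Investors already holding >= theta of their wealth in BTC keep
    buying BTC; those below the threshold buy SRBTC instead."""
    return "BTC" if btc_wealth_fraction >= theta else "SRBTC"

theta = 0.25  # hypothetical threshold
for frac in (0.40, 0.10):
    print(f"BTC wealth fraction {frac:.2f} -> invests new money in "
          f"{new_money_choice(frac, theta)}")
```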
Ideal Compartmented Secret Sharing Scheme Based on the Chinese Remainder Theorem for Polynomial Rings
A secret sharing scheme starts with a secret and then derives from it certain shares (or shadows) which are distributed to users. The secret may be recovered only by certain predetermined groups. In the case of compartmented secret sharing, the set of users is partitioned into compartments and the secret can be recovered only if the number of participants from any compartment is greater than or equal to a fixed compartment threshold and the total number of participants is greater than or equal to a global threshold. In this paper we use the Chinese Remainder Theorem for polynomial rings to construct an ideal compartmented secret sharing scheme, inspired by the work from [20].
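For readers unfamiliar with CRT over polynomial rings, the following sympy sketch reconstructs a polynomial from its residues modulo pairwise-coprime moduli over GF(7); the sharing scheme itself (share generation, compartments, thresholds, ideality) is not reproduced here.

```python
# Toy Chinese-Remainder reconstruction in GF(7)[x], illustrative only.
from sympy import Poly, symbols, GF

x = symbols("x")
dom = GF(7)

def poly_crt(residues, moduli):
    """Find f with f = r_i (mod m_i) for pairwise-coprime moduli m_i."""
    M = Poly(1, x, domain=dom)
    for m in moduli:
        M *= m
    f = Poly(0, x, domain=dom)
    for r, m in zip(residues, moduli):
        Mi = M.quo(m)
        inv = Mi.invert(m)          # inverse of Mi modulo m
        f = (f + r * Mi * inv) % M
    return f

m1 = Poly(x**2 + 1, x, domain=dom)   # irreducible over GF(7)
m2 = Poly(x + 3, x, domain=dom)
r1 = Poly(2 * x + 5, x, domain=dom)
r2 = Poly(4, x, domain=dom)
f = poly_crt([r1, r2], [m1, m2])
assert f.rem(m1) == r1 % m1 and f.rem(m2) == r2 % m2
```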
Max Bias Analysis: A New Approach to Computing the Entropy of Free Ring-Oscillators
This work introduces a new approach, called max bias analysis, for computing the entropy of structures of Free Ring Oscillator-based Physical Random Number Generators. It employs a stochastic model based on the well-established Wiener process, specifically adapted to capture only thermal noise contributions while accounting for a potential non-zero bias in the duty cycle.
Our analysis is versatile and applies to combinations of multiple sampled Ring Oscillators (ROs) filtered by any function. The entropy computation takes as input the parameters of the thermal stochastic model and directly delivers a proven bound for both the Shannon entropy and the min-entropy, as required by AIS31 and NIST SP 800-90B. As an example, we apply the new methodology to an enhanced TRNG structure combining several free-running Ring Oscillators filtered by a vectorial function built from a linear error-correcting code, which optimizes the functional performance in terms of entropy rate per silicon area used, while keeping the mathematical proof of the entropy lower bound as simple as possible.
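As a toy illustration of the two entropy measures targeted above (a single biased bit only, not the paper's Wiener-process model):

```python
# Shannon entropy (AIS31) and min-entropy (NIST SP 800-90B) of one
# sampled bit with duty-cycle bias; a toy model, not the paper's.
from math import log2

def shannon_entropy(p1: float) -> float:
    p0 = 1.0 - p1
    return -sum(p * log2(p) for p in (p0, p1) if p > 0)

def min_entropy(p1: float) -> float:
    return -log2(max(p1, 1.0 - p1))

p1 = 0.52  # bit equals 1 with probability 0.52, i.e. bias 0.02
print(f"Shannon: {shannon_entropy(p1):.5f}  min-entropy: {min_entropy(p1):.5f}")
```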
Registration-Based Encryption in the Plain Model
Registration-based encryption (RBE) is a recently developed alternative to identity-based encryption that mitigates the well-known key-escrow problem by letting each user sample its own key pair. In RBE, the key authority is replaced by a key curator, a completely transparent entity whose only job is to reliably aggregate users' keys. However, one limitation of all known RBE schemes is that they rely on a one-time trusted setup that must be computed honestly.
In this work, we ask whether this limitation is indeed inherent and we initiate the systematic study of RBE in the plain model, without any common reference string. We present the following main results:
- (Definitions) We show that the standard security definition of RBE is unachievable without a trusted setup and we propose a slight weakening, where one honest user is required to be registered in the system.
- (Constructions) We present constructions of RBE in the plain model, based on standard cryptographic assumptions. Along the way, we introduce the notions of non-interactive witness indistinguishable (NIWI) proofs secure against chosen statements attack and re-randomizable RBE, which may be of independent interest.
A major limitation of our constructions is that users must be updated upon every new registration.
- (Lower Bounds) We show that this limitation is in some sense inherent. We prove that any RBE in the plain model that satisfies a certain structural requirement, which holds for all known RBE constructions, must update all but a vanishing fraction of the users upon each new registration. This is in contrast with the standard RBE setting, where users receive a logarithmic number of updates throughout the lifetime of the system.
Quantum Key-Recovery Attacks on Permutation-Based Pseudorandom Functions
Due to their simple security assessments, permutation-based pseudorandom functions (PRFs) have become widely used in cryptography. It has been shown that PRFs using a single $n$-bit permutation achieve $n/2$ bits of security, while those using two permutation calls provide $2n/3$ bits of security in the classical setting. This paper studies the security of permutation-based PRFs in the Q1 model, where attackers are restricted to classical queries and offline quantum computations. We present improved quantum-time/classical-data tradeoffs compared with previous attacks. Specifically, under the same assumptions/hardware as Grover's exhaustive search attack, i.e., using the offline Simon algorithm, we can recover keys in quantum time $\tilde{O}(2^{n/3})$, with $O(2^{n/3})$ classical queries and $O(n^2)$ qubits. Furthermore, we enhance previous superposition attacks by reducing the data complexity from exponential to polynomial, while maintaining the same time complexity. This implies that permutation-based PRFs become vulnerable when adversaries have access to quantum computing resources. We point out that the above quantum attack applies to quite a few constructions, including PDMMAC and pEDM, as well as general instantiations like XopEM, EDMEM, EDMDEM, and others.
SecurED: Secure Multiparty Edit Distance for Genomic Sequences
DNA edit distance (ED) measures the minimum number of single nucleotide insertions, substitutions, or deletions required to convert one DNA sequence into another. ED has broad applications in healthcare such as sequence alignment, genome assembly, functional annotation, and drug discovery. Privacy-preserving computation is essential in this context to protect sensitive genomic data. Nonetheless, existing secure DNA edit distance solutions either lack efficiency when handling large sequences or resort to approximations and fail to compute the metric accurately.
In this work, we introduce SecurED, a protocol that tackles these limitations, resulting in a significant performance enhancement of approximately $2-24\times$ compared to existing methods. Our protocol computes a secure ED between two genomes, each comprising $1,000$ letters, in just a few seconds. The underlying technique of our protocol is a novel approach that transforms the established approximate matching technique (i.e., the Ukkonen algorithm) into exact matching, exploiting the inherent similarity of human DNA to achieve cost-effectiveness. Furthermore, we introduce various optimizations tailored for secure computation in scenarios with a limited input domain, such as DNA sequences composed solely of the four nucleotide letters.
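The classical, non-secure version of the banded idea behind the Ukkonen algorithm can be sketched as follows (SecurED evaluates this kind of computation obliviously; this plain Python version is only meant to show why sequence similarity makes the DP cheap: if the true distance is at most $t$, only cells within a band of width $2t+1$ around the diagonal matter).

```python
# Banded ("Ukkonen-style") edit distance, plaintext sketch only.
def banded_edit_distance(a: str, b: str, t: int) -> int | None:
    """Edit distance if it is <= t, else None."""
    n, m = len(a), len(b)
    if abs(n - m) > t:
        return None
    INF = t + 1                       # any out-of-band cell exceeds t
    prev = {j: j for j in range(0, min(m, t) + 1)}
    for i in range(1, n + 1):
        cur = {}
        for j in range(max(0, i - t), min(m, i + t) + 1):
            if j == 0:
                cur[j] = i
                continue
            cur[j] = min(
                prev.get(j, INF) + 1,                          # deletion
                cur.get(j - 1, INF) + 1,                       # insertion
                prev.get(j - 1, INF) + (a[i - 1] != b[j - 1]), # substitution
            )
        prev = cur
    d = prev.get(m, INF)
    return d if d <= t else None

assert banded_edit_distance("ACGTACGT", "ACGAACGT", t=2) == 1
```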
SCAPEgoat: Side-channel Analysis Library
Side-channel analysis (SCA) is a growing field in hardware security where adversaries extract secret information from embedded devices by measuring physical observables such as power consumption and electromagnetic emanation. SCA is a security assessment method used by governmental labs, standardization bodies, and researchers, where testing is not limited to standardized cryptographic circuits but extends to AI accelerators, post-quantum circuits, systems, and more. Despite its importance, SCA is performed on an ad hoc basis, in the sense that its flow is not systematically optimized and unified among labs. As a result, current solutions do not allow for fair comparisons between analyses. Furthermore, neglecting the need for interoperability between datasets and SCA metric computation raises the barrier to entry for students. To address this, we introduce SCAPEgoat, a Python-based SCA library with three key modules devoted to file format definition, capture interfaces, and metric computation. The custom file framework organizes side-channel traces using JSON for metadata, offering a hierarchical structure similar to the HDF5 format commonly used in SCA, but more flexible and human-readable. The metadata can be queried with regular expressions, a feature unavailable in HDF5. Second, we incorporate memory-efficient SCA metric computations, which allow our functions to be used on resource-restricted machines. This is accomplished by partitioning datasets and leveraging statistics-based optimizations of the metrics. In doing so, SCAPEgoat makes SCA more accessible to newcomers, so that they can learn techniques and conduct experiments faster, with the possibility of expanding on them in the future.
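The partition-and-combine idea behind such memory-efficient metrics can be sketched as follows (function names are illustrative, not SCAPEgoat's actual API): per-partition statistics are merged with the standard pairwise mean/variance combination formula, so full trace sets never need to reside in memory.

```python
# Streaming mean/variance over trace partitions, illustrative only.
import numpy as np

def partial_stats(traces: np.ndarray):
    """Per-partition count, mean and M2 (sum of squared deviations)."""
    n = traces.shape[0]
    mean = traces.mean(axis=0)
    m2 = ((traces - mean) ** 2).sum(axis=0)
    return n, mean, m2

def merge_stats(a, b):
    """Standard pairwise-combination formula for mean and M2."""
    na, ma, m2a = a
    nb, mb, m2b = b
    n = na + nb
    delta = mb - ma
    mean = ma + delta * (nb / n)
    m2 = m2a + m2b + delta**2 * (na * nb / n)
    return n, mean, m2

rng = np.random.default_rng(1)
full = rng.normal(size=(10_000, 500))   # 10k traces, 500 samples each
parts = np.array_split(full, 10)        # processed one partition at a time
acc = partial_stats(parts[0])
for p in parts[1:]:
    acc = merge_stats(acc, partial_stats(p))
n, mean, m2 = acc
assert np.allclose(mean, full.mean(axis=0))
assert np.allclose(m2 / n, full.var(axis=0))
```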
Scoop: An Optimizer for Profiling Attacks against Higher-Order Masking
In this paper we provide new theoretical and empirical evidence that gradient-based deep learning profiling attacks (DL-SCA) suffer against masking schemes. This manifests as an initial stall of the learning process: the so-called plateau effect. To understand why, we derive an analytical expression for a DL-SCA model targeting simulated traces, which enables us to study an analytical expression of the loss. By studying the loss landscape of this model, we show that not only do the magnitudes of the gradients decrease as the masking order increases, but the loss landscape also exhibits a prominent saddle point interfering with the optimization process. From these observations, we (1) propose the use of a second-order optimization algorithm to mitigate the impact of low-gradient areas. In addition, we show how to leverage the intrinsic sparsity of valuable information in SCA traces to better pose the DL-SCA problem. To do so, we (2) propose to use the implicit regularization properties of sparse mirror descent. These propositions are gathered in a new publicly available optimization algorithm, Scoop. Scoop combines second-order derivatives of the loss function with a sparse stochastic mirror descent. We experimentally show that Scoop pushes the current limits of DL-SCA against simulated traces, and outperforms the state of the art on the ASCADv1 dataset in terms of the number of traces required to retrieve the key, perceived information, and plateau length. Scoop also performs the first non-worst-case attack on the ASCADv2 dataset. On simulated traces, we show that using Scoop reduces the DL-SCA time complexity by the equivalent of one masking order.
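For intuition, here is a generic instance of sparse mirror descent on a toy sparse regression problem (multiplicative $u/v$ updates, whose implicit bias favors sparse solutions); Scoop's actual algorithm additionally folds in second-order information and differs in detail.

```python
# Generic first-order sparse mirror descent sketch, not Scoop itself.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
w_true = np.zeros(50)
w_true[:3] = [2.0, -1.5, 1.0]            # sparse ground truth
y = X @ w_true

alpha, lr = 1e-3, 0.01                    # small init drives sparsity
u = np.full(50, alpha)
v = np.full(50, alpha)
for _ in range(2000):
    grad = X.T @ (X @ (u - v) - y) / len(y)  # least-squares gradient
    u *= np.exp(-lr * grad)                   # multiplicative (mirror) step
    v *= np.exp(+lr * grad)
w = u - v
# Should approximately recover the planted support {0, 1, 2}.
print("recovered support:", np.nonzero(np.abs(w) > 0.1)[0])
```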
Fast Scloud+: A Fast Hardware Implementation for the Unstructured LWE-based KEM - Scloud+
Scloud+ is an unstructured LWE-based key encapsulation mechanism (KEM) with conservative quantum security, in which ternary secrets and lattice coding are incorporated for higher computational and communication efficiency. However, its efficiency is still much inferior to that of structured LWE-based KEMs such as ML-KEM (standardized by NIST). In this paper, we present a configurable hardware architecture for Scloud+.KEM to improve its computational efficiency. Many algorithmic and architectural co-optimizations are proposed to reduce the complexity and increase the degree of parallelism. In particular, the matrix multiplications are computed block-serially, with each block computed in one cycle, without using any multipliers. In addition, all the random bits are generated by an unfolded Keccak core, well matched to the data flow required by the block matrix multiplier. The proposed design is coded in Verilog and implemented in the SMIC 40nm LP CMOS technology. The synthesis results show that Scloud+.KEM-128 costs only 23.0 $\mu s$, 24.3 $\mu s$, and 24.6 $\mu s$ in the KeyGen, Encaps, and Decaps stages, respectively, with an area consumption of 0.69 $mm^2$, significantly narrowing the gap with state-of-the-art Kyber hardware implementations.
Shortcut2Secrets: A Table-based Differential Fault Attack Framework
Recently, Differential Fault Attacks (DFAs) have proven highly effective against stream ciphers designed for Hybrid Homomorphic Encryption (HHE). In this work, we present a table-based DFA framework called the \textit{shortcut attack}, which generalizes the attack proposed by Wang and Tang on the cipher \textsf{Elisabeth}.
The framework applies to a broad sub-family of ciphers following the Group Filter Permutator (GFP) paradigm and enhances previous DFAs by improving both the fault identification and path generation steps. Notably, the shortcut attack circumvents the issue of function representation, allowing successful attacks even when the cipher's filter function cannot be represented over the ring it is defined on.
Additionally, we provide complexity estimates for the framework and apply the shortcut attack to \textsf{Elisabeth-4} and its patches. As a result, we optimize the DFA on \textsf{Elisabeth-4}, requiring fewer keystreams and running faster than previous methods. Specifically, we achieve a DFA that requires only $3000$ keystreams, which is one-fifth of the previous best result. We also successfully mount a practical DFA on \textsf{Gabriel-4} and provide a theoretical DFA for \textsf{Elisabeth-b4}.
For the latest patch, \textsf{Margrethe-18-4}, which follows the more general Mixed Filter Permutator (MFP) paradigm, we present a DFA in a stronger model. To the best of our knowledge, these are the first DFA results on the patches of \textsf{Elisabeth-4}. Finally, we derive security margins to prevent shortcut attacks on a broad sub-family of MFP ciphers, which can serve as parameter recommendations for designers.
A Security-Enhanced Pairing-Free Certificateless Aggregate Signature for Vehicular Ad-Hoc Networks, Revisited
We show that the aggregate signature scheme [IEEE Syst. J., 2023, 17(3), 3822-3833] is insecure against forgery attacks. The flaw is that the ephemeral key or ephemeral value chosen in the signing phase is not actually bound to the final signature. An adversary can sign any message while the verifier cannot detect the fraud. We also suggest a revision to thwart this attack.
Electromagnetic Side-Channel Analysis of PRESENT Lightweight Cipher
Side-channel vulnerabilities pose an increasing threat to cryptographically protected devices. Consequently, it is crucial to observe information leakage through physical parameters such as power consumption and electromagnetic (EM) radiation to reduce susceptibility during interactions with cryptographic functions. EM side-channel attacks are becoming more prevalent. PRESENT is a promising lightweight cryptographic algorithm expected to be incorporated into Internet-of-Things (IoT) devices in the future. This research investigates the EM side-channel robustness of PRESENT using a correlation attack model. This work extends our previous Correlation EM Analysis (CEMA) of PRESENT with improved results. The attack targets the substitution box (S-box) and can retrieve 8 bytes of the 10-byte encryption key with a minimum of 256 EM waveforms. This paper presents the process of EM attack modelling, encompassing both simple and correlation attacks, followed by a critical analysis.
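A minimal CPA/CEMA sketch in the spirit described above, correlating a Hamming-weight model of the PRESENT S-box output against traces, one 4-bit key nibble at a time; the traces here are synthetic, and the paper's real attack setup differs in detail.

```python
# Toy correlation attack on one PRESENT S-box nibble, synthetic traces.
import numpy as np

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
HW = [bin(v).count("1") for v in range(16)]

rng = np.random.default_rng(0)
true_key, n_traces = 0xA, 256
pt = rng.integers(0, 16, n_traces)                     # plaintext nibbles
leak = np.array([HW[SBOX[p ^ true_key]] for p in pt])
traces = leak[:, None] + rng.normal(0, 1.0, (n_traces, 20))

best = None
for k in range(16):
    hyp = np.array([HW[SBOX[p ^ k]] for p in pt], dtype=float)
    # Pearson correlation of the hypothesis with every sample point
    r = np.corrcoef(np.column_stack([hyp, traces]), rowvar=False)[0, 1:]
    score = np.max(np.abs(r))
    if best is None or score > best[1]:
        best = (k, score)
print(f"recovered nibble: {best[0]:#x} (true {true_key:#x})")
```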
Tighter Concrete Security for the Simplest OT
The Chou-Orlandi batch oblivious transfer (OT) protocol is a particularly attractive OT protocol that bridges the gap between practical efficiency and strong security guarantees and is especially notable due to its simplicity. The security analysis provided by Chou and Orlandi bases the security of their protocol on the hardness of the computational Diffie-Hellman ($\mathsf{CDH}$) problem in prime-order groups. Concretely, in groups in which no better-than-generic algorithms are known for the $\mathsf{CDH}$ problem, their security analysis yields that an attacker running in time $t$ and issuing $q$ random-oracle queries breaks the security of their protocol with probability at most $\epsilon \leq q^2 \cdot t / 2^{\kappa/2}$, where $\kappa$ is the bit-length of the group's order. This concrete bound, however, is somewhat insufficient for 256-bit groups (e.g., for $\kappa = 256$, it does not provide any guarantee already for $t = 2^{48}$ and $q = 2^{40}$).
In this work, we establish a tighter concrete security bound for the Chou-Orlandi protocol. First, we introduce the list square Diffie-Hellman ($\ell\text{-}\mathsf{sqDH}$) problem and present a tight reduction from the security of the protocol to the hardness of solving $\ell\text{-}\mathsf{sqDH}$. That is, we completely shift the task of analyzing the concrete security of the protocol to that of analyzing the concrete hardness of the $\ell\text{-}\mathsf{sqDH}$ problem. Second, we reduce the hardness of the $\ell\text{-}\mathsf{sqDH}$ problem to that of the decisional Diffie-Hellman ($\mathsf{DDH}$) problem without incurring a multiplicative loss. Our key observation is that although $\mathsf{CDH}$ and $\mathsf{DDH}$ have the same assumed concrete hardness, relying on the hardness of $\mathsf{DDH}$ enables our reduction to efficiently test the correctness of the solutions it produces.
Concretely, in groups in which no better-than-generic algorithms are known for the $\mathsf{DDH}$ problem, our analysis yields that an attacker running in time $t$ and issuing $q \leq t$ random-oracle queries breaks the security of the Chou-Orlandi protocol with probability at most $\epsilon \leq t / 2^{\kappa/2}$ (i.e., we eliminate the above multiplicative $q^2$ term). We prove our results within the standard real-vs-ideal framework considering static corruptions by malicious adversaries, and provide a concrete security treatment by accounting for the statistical distance between a real-model execution and an ideal-model execution.
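The arithmetic behind the quoted numbers is easy to check:

```python
# Numeric check of the two bounds quoted above, for kappa = 256,
# t = 2**48, q = 2**40 (with q <= t as required by the tighter bound).
from math import log2

kappa, t, q = 256, 2**48, 2**40
old_log = 2 * log2(q) + log2(t) - kappa / 2   # log2(q^2 * t / 2^(kappa/2))
new_log = log2(t) - kappa / 2                 # log2(t / 2^(kappa/2))
print(f"original analysis: eps <= 2^{old_log:.0f}")   # 2^0 = 1, vacuous
print(f"tighter analysis:  eps <= 2^{new_log:.0f}")   # 2^-80
```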
Endorser Peer Anonymization in Hyperledger Fabric for Consortium of Organizations
Hyperledger Fabric is a unique permissioned platform for implementing blockchain in a consortium. It has a distinct execute-order-validate transaction flow. During the execution phase, a pre-determined set of endorsing peers execute a transaction and sign the transaction response; this process is termed endorsement. In the validation phase, peers validate the transaction with reference to an endorsement policy. The identity of the endorsing organizations is available to all nodes in the network through the endorser signatures and the endorsement policy. This exposure has led to serious vulnerabilities in the blockchain network.
In this paper, we propose a privacy-preserving endorsement system which conceals both the endorser signatures and the endorsement policy. The endorser is anonymized by replacing the signature scheme with a scoped-linkable threshold ring signature scheme. The endorsement policy is secured using Pedersen commitments and a non-interactive proof of knowledge of an integer vector. We also achieve computational efficiency by employing a non-interactive proof of co-prime roots. We provide the necessary security analysis to prove that the proposed work guarantees the anonymity and unlinkability properties. A comparative analysis of our work with an existing framework shows that the proposed scheme offers a higher level of security and is optimal in terms of efficiency.
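One of the ingredients named above, a Pedersen commitment, can be sketched in a few lines (toy parameters; real deployments use a large prime-order group, and the accompanying proof systems are not reproduced here).

```python
# Toy Pedersen commitment; parameters are illustrative, not production.
import secrets

p = 2**127 - 1            # toy prime modulus
q = (p - 1) // 2          # stand-in for the subgroup order
g, h = 4, 9               # assumed independent generators (illustrative)

def commit(m: int, r: int) -> int:
    """C = g^m * h^r mod p: hiding via r, binding if log_g(h) is unknown."""
    return pow(g, m, p) * pow(h, r, p) % p

m = 42                                  # e.g. an encoded policy value
r = secrets.randbelow(q)                # blinding factor
C = commit(m, r)
assert C == commit(m, r) and C != commit(m + 1, r)
# Additively homomorphic: product of commitments commits to the sum.
assert commit(1, 2) * commit(3, 4) % p == commit(4, 6)
```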
Blind Brother: Attribute-Based Selective Video Encryption
The emergence of video streams as a primary medium for communication and the demand for high-quality video sharing over the internet have given rise to several security and privacy issues, such as unauthorized access and data breaches. To address these limitations, various Selective Video Encryption (SVE) schemes have been proposed, which encrypt specific portions of a video while leaving others unencrypted. The SVE approach balances security and usability, granting unauthorized users access to certain parts while encrypting sensitive content. However, existing SVE schemes adopt an all-or-nothing coarse-grain encryption approach, where a user with a decryption key can access all the contents of a given video stream. This paper proposes and designs a fine-grained access control-based selective video encryption scheme, ABSVE, and a use-case protocol called \protocol. Our scheme encrypts different identified Regions of Interest (ROI) with a unique symmetric key and applies a Ciphertext Policy Attribute Based Encryption (CP-ABE) scheme to tie these keys to specific access policies. This method provides multiple access levels for a single encrypted video stream. Crucially, we provide a formal syntax and security definitions for ABSVE, allowing for rigorous security analysis of this and similar schemes -- which is absent in prior works. Finally, we provide an implementation and evaluation of our protocol in the Kvazaar HEVC encoder. Overall, our constructions enhance security and privacy while allowing controlled access to video content and achieve comparable efficiency to compression without encryption.
PREAMBLE: Private and Efficient Aggregation of Block Sparse Vectors and Applications
We revisit the problem of secure aggregation of high-dimensional vectors in a two-server system such as Prio. These systems are typically used to aggregate vectors such as gradients in private federated learning, where the aggregate itself is protected via noise addition to ensure differential privacy. Existing approaches require communication scaling with the dimensionality, and thus limit the dimensionality of vectors one can efficiently process in this setup.
We propose PREAMBLE: Private Efficient Aggregation Mechanism for Block-sparse Euclidean Vectors. PREAMBLE is a novel extension of distributed point functions that enables communication- and computation-efficient aggregation of block-sparse vectors, which are sparse vectors where the non-zero entries occur in a small number of clusters of consecutive coordinates. We then show that PREAMBLE can be combined with random sampling and privacy amplification by sampling results, to allow asymptotically optimal privacy-utility trade-offs for vector aggregation, at a fraction of the communication cost. When coupled with recent advances in numerical privacy accounting, our approach incurs a negligible overhead in noise variance, compared to the Gaussian mechanism used with Prio.
Translating Between the Common Haar Random State Model and the Unitary Model
Black-box separations are a cornerstone of cryptography, indicating barriers to various goals. A recent line of work has explored black-box separations for quantum cryptographic primitives. Namely, a number of separations are known in the Common Haar Random State (CHRS) model, though separations in this model are not considered complete, but rather a starting point. A few very recent works have attempted to lift these separations to unitary separations, which are considered complete. Unfortunately, we find significant errors in some of these lifting results.
We prove general conditions under which CHRS separations can be generically lifted, thereby giving simple, modular, and bug-free proofs of complete unitary separations between various quantum primitives. Our techniques allow for simpler proofs of existing separations as well as new separations that were previously only known in the CHRS model.
Exploring General Cyclotomic Rings in Torus-Based Fully Homomorphic Encryption: Part I - Prime Power Instances
In the realm of fully homomorphic encryption on the torus, we investigate the algebraic manipulations essential for handling polynomials within cyclotomic rings characterized by prime power indices. This includes operations such as modulo reduction, computation of the trace operator, extraction, and the blind rotation integral to the bootstrapping procedure, all of which we reformulate within this mathematical framework.
webSPDZ: Versatile MPC on the Web
Multi-party computation (MPC) has become increasingly practical in the last two decades, solving privacy and security issues in various domains, such as healthcare, finance, and machine learning. One big caveat is that MPC sometimes lacks usability, since the knowledge barrier for regular users can be high. Users have to deal with, e.g., various CLI tools and private networks, and must sometimes even install many dependencies, which are often hardware-dependent.
A solution to improve the usability of MPC is to build browser-based MPC engines where each party runs within a browser window. Two examples of such an MPC web engine are JIFF and the web variant of MPyC. Both support an honest majority with passive corruptions.
$\texttt{webSPDZ}$: Our work brings one of the most performant and versatile general-purpose MPC engines, MP-SPDZ, to the web. MP-SPDZ supports ≥40 MPC protocols with different security models, enabling many security models on the web. To port MP-SPDZ to the web, we use Emscripten to compile MP-SPDZ's C++ backend to WebAssembly and upgrade the party communication for the browser (WebRTC or WebSockets). We call the new MPC web engine webSPDZ. In our end-to-end experiments, webSPDZ outperforms the existing MPC web engines, MPyC-Web and JIFF, just as its native counterpart outperforms theirs.
We believe that webSPDZ brings forth many interesting and practically relevant use cases. Thus, webSPDZ pushes the boundaries of practical MPC: making MPC more usable and enabling it for a broader community.
On One-Shot Signatures, Quantum vs Classical Binding, and Obfuscating Permutations
One-shot signatures (OSS) were defined by Amos, Georgiou, Kiayias, and Zhandry (STOC'20). These allow for signing exactly one message, after which the signing key self-destructs, preventing a second message from ever being signed. While such an object is impossible classically, Amos et al. observe that OSS may be possible using quantum signing keys by leveraging the no-cloning principle. OSS has since become an important conceptual tool with many applications in decentralized settings and for quantum cryptography with classical communication. OSS is also closely related to separations between classical binding and collapse binding for post-quantum hashing and commitments. Unfortunately, the only known OSS construction, due to Amos et al., was justified only in a classical oracle model, and moreover their justification was ultimately found to contain a fatal bug. Thus, the existence of OSS, even in a classical idealized model, has remained open.
We give the first standard-model OSS, with provable security assuming (sub-exponential) indistinguishability obfuscation (iO) and LWE. This also gives the first standard-model separation between classical and collapse-binding post-quantum commitments/hashing, solving a decade-old open problem. Along the way, we also give the first construction with unconditional security relative to a classical oracle. To achieve our standard-model construction, we develop a notion of permutable pseudorandom permutations (permutable PRPs), and show how they are useful for translating oracle proofs involving random permutations into obfuscation-based proofs. In particular, obfuscating permutable PRPs gives a trapdoor one-way permutation that is $\textit{full-domain}$, solving another decade-old problem of constructing this object from (sub-exponential) iO and one-way functions.
Key reconstruction for QC-MDPC McEliece from imperfect distance spectrum
The McEliece cryptosystem, based on code-based cryptography, is a candidate in Round 4 of NIST's post-quantum cryptography standardization process. The QC-MDPC (quasi-cyclic moderate-density parity-check) variant is particularly noteworthy due to its small key length. The Guo-Johansson-Stankovski (GJS) attack against the QC-MDPC McEliece cryptosystem was proposed recently and has been intensively studied. This attack reconstructs the secret key using information on the decoding error rate (DER). However, in practice, obtaining complete DER information is presumed to be time-consuming. This paper proposes two algorithms to reconstruct the secret key under imperfect DER information and evaluates the relationship between the imperfection and the efficiency of key reconstruction. This helps increase the efficacy of the GJS attack.
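The object the GJS attack reconstructs, the distance spectrum of the sparse secret row, is simple to state in code (toy parameters; the DER-based recovery procedure itself is not reproduced here).

```python
# Distance spectrum of a sparse circulant row: the multiset of cyclic
# distances between pairs of nonzero positions of the secret key.
from collections import Counter

def distance_spectrum(support: list[int], r: int) -> Counter:
    """Multiplicity of each cyclic distance d, 1 <= d <= r // 2."""
    spec = Counter()
    for i, a in enumerate(support):
        for b in support[i + 1:]:
            d = abs(a - b)
            spec[min(d, r - d)] += 1
    return spec

# Toy QC-MDPC-like key: circulant block length r, weight-5 secret row.
r = 97
support = [3, 25, 41, 70, 88]
print(distance_spectrum(support, r))
# GJS observes that decoding failure rates correlate with whether a
# distance d appears in this spectrum, then solves back for the support.
```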
EvoLUTe+: Fine-Grained Look-Up-Table-based RTL IP Redaction
Hardware obfuscation is an active trustworthy design technique targeting threats in the IC supply chain, such as IP piracy and overproduction. Recent research on Intellectual Property (IP) protection technologies suggests that using embedded reconfigurable components (e.g., eFPGA redaction) could be a promising approach to hide the functional and structural information of security-critical designs. However, such techniques suffer from almost prohibitive overhead in terms of area, power, delay, and testability. This paper proposes an obfuscation technique called EvoLUTe+, which is a unique and more fine-grained redaction approach using smaller reconfigurable components (e.g., Look-Up Tables (LUTs)). EvoLUTe+ achieves fine-grained partitioning, sub-circuit coloring, and scoring of IP, and then encrypts the original IP through the substitution of some sub-circuits. Different attacks are used to test the robustness of EvoLUTe+, including structural and machine learning attacks, as well as Bounded Model Checking (BMC) attacks. The overhead of the obfuscation design is also analyzed. Experimental results demonstrate that EvoLUTe+ exhibits robustness with acceptable overhead while resisting such threat models.
Adaptively Secure Threshold Blind BLS Signatures and Threshold Oblivious PRF
We show the first threshold blind signature scheme and threshold Oblivious PRF (OPRF) scheme which remain secure in the presence of an adaptive adversary, who can adaptively decide which parties to corrupt throughout the lifetime of the scheme. Moreover, our adaptively secure schemes preserve the minimal round complexity and add only a small computational overhead over prior solutions that offered security only for a much less realistic static adversary, who must choose the subset of corrupted parties before initializing the protocol.
Our threshold blind signature scheme computes standard BLS signatures while our threshold OPRF computes a very efficient "2HashDH" OPRF [JKK14]. We prove adaptive security of both schemes in the Algebraic Group Model (AGM). Our adaptively secure threshold schemes are as practical as the underlying standard single-server BLS blind signature and 2HashDH OPRF, and they can be used to add cryptographic fault-tolerance and decentralize trust in any system that relies on blind signatures, like anonymous credentials and e-cash, or on OPRF, like the OPAQUE password authentication and the Privacy Pass anonymous authentication scheme, among many others.
An Efficient Sequential Aggregate Signature Scheme with Lazy Verification
A sequential aggregate signature (SAS) scheme allows multiple potential signers to sequentially aggregate their respective signatures into a single compact signature. Typically, verification of a SAS signature requires access to all messages and public keys utilized in the aggregate generation. However, efficiency is crucial for cryptographic protocols to facilitate their practical implementation. To this end, we propose a sequential aggregate signature scheme with lazy verification for a set of user-message pairs, allowing the verification algorithm to operate without requiring access to all the messages and public keys in the sequence. This construction is based on the RSA assumption in the random oracle model and is particularly beneficial in resource-constrained applications that involve forwarding authenticated information between parties, such as certificate chains. As an extension of this work, we introduce the notion of sequentially aggregatable proxy re-signatures, which enables third parties, or proxies, to transform aggregatable signatures under one public key into signatures under another, useful in applications such as sharing web certificates and authenticating network paths. We also present a construction of a sequential aggregate proxy re-signature scheme, secure in the random oracle model under the RSA assumption, which may be of independent interest.
RHQC: post-quantum ratcheted key exchange from coding assumptions
Key exchange mechanisms (KE or KEMs) such as the Diffie-Hellman protocol have proved to be a cornerstone reconciling the efficiency of symmetric encryption with the practicality of public key primitives.
Such designs, however, assume the non-compromise of the long-term asymmetric key in use. To relax this strong security assumption, and to allow for modern security features such as Perfect Forward Secrecy (PFS) and Post-Compromise Security (PCS), Ratcheted KE (RKE) has been proposed.
This work proposes to turn the Hamming Quasi-Cyclic (HQC) cryptosystem into such a ratcheted KE, yielding the first code-based construction of this kind.
Interestingly, our design allows either party, indifferently, to update the key on demand, yielding a construction called bi-directional RKE, which compares favorably to generic transformations.
Finally, we prove that the resulting scheme satisfies the usual correctness and key-indistinguishability properties, and suggest concrete sets of parameters, assuming different real-life use cases.
Worst-case Analysis of Lattice Enumeration Algorithm over Modules
This paper presents a systematic study of module lattices. We extend the lattice enumeration algorithm from Euclidean lattices to module lattices, providing a generalized framework.
To incorporate the refined analysis of Hanrot and Stehlé (CRYPTO'07), we extend key definitions from Euclidean lattices, such as HKZ-reduced and quasi-HKZ-reduced bases, to the pseudo-bases of modules.
Furthermore, we revisit the lattice profile, a crucial aspect of enumeration algorithm analysis, and extend its analysis to module lattices.
As a result, we improve the asymptotic performance of the module lattice enumeration algorithm and module-SVP.
For instance, let $K = \mathbb{Q}[x]/\langle x^d + 1\rangle$ be a number field with a power-of-two integer $d$, and suppose that $n\ln n = o(\ln d)$.
Then, a shortest nonzero vector in a module $M \subset K^n$ can be found in time $d^{\frac{d}{2e} + o(d)}$, improving upon the previous lattice enumeration bound of $(nd)^{\frac{nd}{2e}+ o(nd)}$.
Our algorithm naturally extends to solving ideal-SVP. Given an ideal $I \subset R$, where $R = \mathbb{Z}[x]/\langle x^t + 1 \rangle$ with a power-of-two integer $t = nd$, we can find a shortest nonzero element of $I$ in time $\exp(O(\frac{t}{2e} \ln \ln t))$, improving upon the previous enumeration bound of $\exp(O(\frac{t}{2e} \ln t))$.
Post Quantum Migration of Tor
The efficiency of Shor's and Grover's algorithms and the advancement of quantum computers imply that the cryptography used until now to protect one's privacy is potentially vulnerable to retrospective decryption, also known as a harvest-now-decrypt-later attack, in the near future. This dissertation proposes an overview of the cryptographic schemes used by Tor, highlighting the ones that are not quantum-resistant, and introduces theoretical performance assessment methods for a local Tor network. The measurement is divided into three phases. We start by benchmarking a local Tor network simulation on constrained devices to isolate the time taken by classical cryptographic processes. Second, the analysis incorporates existing benchmarks of quantum-secure algorithms and compares their performance on the same devices. Last, the overhead is estimated by replacing the measured times of traditional cryptography with the times recorded for Post-Quantum Cryptography (PQC) execution within the specified Tor environment. By focusing on the replaceable cryptographic components, using theoretical estimations, and leveraging existing benchmarks, valuable insights into the potential impact of PQC can be obtained without needing to implement it fully.
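A back-of-the-envelope version of the third phase looks as follows (all numbers below are placeholders, not measurements from this work, and the primitive names are illustrative):

```python
# Substitution-based overhead estimate: swap a measured classical
# handshake component for a benchmarked PQC one. Hypothetical numbers.
classical = {"x25519": 0.12, "handshake_total": 0.45}   # milliseconds
pqc = {"kyber768": 0.18}                                # milliseconds

overhead = (pqc["kyber768"] - classical["x25519"]) / classical["handshake_total"]
print(f"estimated handshake overhead: {overhead:.1%}")
```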
Attacking Single-Cycle Ciphers on Modern FPGAs featuring Explainable Deep Learning
In this paper, we revisit the question of key recovery using side-channel analysis for unrolled, single-cycle block ciphers. In particular, we study the PRINCEv2 cipher. While it has been shown vulnerable in multiple previous studies, those studies were performed on side-channel-friendly ASICs or older FPGAs (e.g., a Xilinx Virtex II on the SASEBO-G board), mostly using expensive equipment. We instead set out to exploit a cheap modern FPGA and board, using power traces from a cheap oscilloscope: specifically, a Xilinx Artix 7 on the ChipWhisperer CW305 board and a PicoScope 5000A, respectively.
We split our study into three parts. First, we show that the new setup still exhibits easily detectable leakage, using a non-specific t-test. Second, we replicate attacks from older FPGAs. Namely, we start with the attack by Yli-Mäyry et al., a simple chosen-plaintext correlation power analysis attack using divide and conquer. However, we demonstrate that even this simple, powerful attack does not work, exhibiting a peculiar behavior. We study this behavior using a stochastic attack that attempts to extract the leakage model, and we show that models over a small part of the state are inconsistent and depend on more key bits than expected. We also attempt classical template attacks and obtain similar results.
To further exploit the leakage, we employ deep learning techniques and succeed in key recovery, albeit using a large number of traces. We apply the explainability technique called Key Guessing Occlusion (KGO) to detect which points the neural networks exploit. When we use these points as features for the classical template attack, it still does not recover the secret key, but its performance improves compared to other feature selection techniques.
A Note on the Advanced Use of the Tate Pairing
This short note explains how the Tate pairing can be used to efficiently sample torsion points with precise requirements, and other applications. These applications are most clearly explained on Montgomery curves, using the Tate pairing of degree 2, but hold more generally for any degree or abelian variety, or even generalized Tate pairings. This note is explanatory in nature; it does not contain new results, but aims to provide a clear and concise explanation of results in the literature that are somewhat hidden, yet are extremely useful in practical isogeny-based cryptography.
A note on "industrial blockchain threshold signatures in federated learning for unified space-air-ground-sea model training"
We show that the threshold signature scheme [J. Ind. Inf. Integr. 39: 100593 (2024)] is insecure against forgery attack. An adversary can find an efficient signing algorithm functionally equivalent to the valid signing algorithm, so as to convert the legitimate signature $(sig, s, r_x)$ of message $m$ into a valid signature $(sig, s, r_x')$ of any message $m'$.
HammR: A ZKP Protocol for Fixed Hamming-Weight Restricted-Entry Vectors
In this paper, we introduce $\mathsf{HammR}$, a generic Zero-Knowledge Proof (ZKP) protocol demonstrating knowledge of a secret vector that has a fixed Hamming weight with entries taken from a shifted multiplicative group.
As special cases, we are able to directly apply this protocol to restricted vectors and to rank-1 vectors, which are vectors whose entries lie in a one-dimensional subspace of $\mathbb{F}_q$.
We show that these proofs can be batched with low computational overhead, and further prove that this general framework is complete, sound, and zero-knowledge, and thus a genuine ZKP.
Finally, we present applications of $\mathsf{HammR}$ to various Syndrome Decoding Problems, including the Regular and Restricted SDPs, as well as other applications such as lookup instances, proofs of proximity, and electronic voting protocols.
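The witness shape addressed by $\mathsf{HammR}$ can be sketched as follows (toy parameters; no proof system included, and the shifted-group construction below is only one illustrative choice).

```python
# Sample a length-n vector of fixed Hamming weight w whose nonzero
# entries lie in a shifted multiplicative group {c + g^i} in GF(q).
import random

q, g, c = 13, 2, 1          # GF(13), generator 2, shift c = 1
E = sorted({(c + pow(g, i, q)) % q for i in range(q - 1)} - {0})

def sample_witness(n: int, w: int) -> list[int]:
    vec = [0] * n
    for pos in random.sample(range(n), w):   # w distinct nonzero positions
        vec[pos] = random.choice(E)          # entries restricted to E
    return vec

v = sample_witness(n=16, w=5)
assert sum(e != 0 for e in v) == 5
assert all(e in E for e in v if e != 0)
print(v)
```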
Black-Box Constant-Round Secure 2PC with Succinct Communication
The most fundamental performance metrics of secure multi-party computation (MPC) protocols are related to the number of messages the parties exchange (i.e., round complexity), the size of these messages (i.e., communication complexity), and the overall computational resources required to execute the protocol (i.e., computational complexity). Another quality metric of MPC protocols is related to the black-box or non-black-box use of the underlying cryptographic primitives. Indeed, the design of black-box MPC protocols, other than being of theoretical interest, usually can lead to protocols that have better computational complexity.
In this work, we aim to optimize the round and communication complexity of black-box secure multi-party computation in the plain model by designing a constant-round two-party computation protocol in the malicious setting whose communication complexity is only polylogarithmic in the size of the function being evaluated.
We successfully design such a protocol, having only black-box access to fully homomorphic encryption, trapdoor permutations, and hash functions. To the best of our knowledge, our protocol is the first to make black-box use of standard cryptographic primitives while achieving almost asymptotically optimal communication and round complexity.
Cross-Platform Benchmarking of FHE Libraries: Novel Insights into SEAL and OpenFHE
The rapid growth of cloud computing and data-driven applications has amplified privacy concerns, driven by the increasing demand to process sensitive data securely. Homomorphic encryption (HE) has become a vital solution for addressing these concerns by enabling computations on encrypted data without revealing its contents. This paper provides a comprehensive evaluation of two leading HE libraries, SEAL and OpenFHE, examining their performance, usability, and support for prominent HE schemes such as BGV and CKKS. Our analysis highlights computational efficiency, memory usage, and scalability across Linux and Windows platforms, emphasizing their applicability in real-world scenarios. Results reveal that Linux outperforms Windows in computation efficiency, with OpenFHE emerging as the optimal choice across diverse cryptographic settings. This paper provides valuable insights for researchers and practitioners to advance privacy-preserving applications using FHE.
Quantum Attacks on Sum of Even-Mansour Construction Utilizing Online Classical Queries
The Sum of Even-Mansour (SoEM) construction, proposed by Chen et al. at Crypto 2019, has become the basis for designing symmetric schemes such as the nonce-based MAC scheme $\text{nEHtM}_{p}$ and the nonce-based encryption scheme $\text{CENCPP}^{\ast}$. In this paper, we make the first attempt to study the quantum security of SoEM under the Q1 model, where the targeted encryption oracle can only respond to classical queries rather than quantum ones.
First, we propose a quantum key recovery attack on SoEM21 with a time complexity of $\tilde{O}(2^{n/3})$ along with $O(2^{n/3})$ online classical queries. Compared with the current best classical result, which requires time $O(2^{2n/3})$, our method offers a quadratic time speedup while maintaining the same number of queries. The time complexity of our attack is smaller than that of quantum exhaustive search by a factor of $2^{n/6}$. We further propose classical and quantum key recovery attacks on the generalized SoEMs1 construction (consisting of $s\geq 2$ independent public permutations), revealing that applying quantum algorithms can provide a quadratic acceleration over purely classical methods. Our results also imply that the quantum security of SoEM21 cannot be strengthened merely by increasing the number of permutations.
A Practical Tutorial on Deep Learning-based Side-channel Analysis
This tutorial provides a practical introduction to Deep Learning-based Side-Channel Analysis (DLSCA), a powerful approach for evaluating the security of cryptographic implementations.
Leveraging publicly available datasets and the Google Colab service, we guide readers through the fundamental steps of DLSCA, offering clear explanations and code snippets.
We focus on the core DLSCA framework, providing references for more advanced techniques, and address the growing interest in this field driven by emerging standardization efforts like AIS 46. This tutorial is designed to be accessible to researchers, students, and practitioners seeking to learn practical DLSCA techniques and improve the security of cryptographic systems.
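As a flavor of the profiling setup such a tutorial covers (this sketch is not taken from the tutorial itself; the S-box stand-in, network size, and all other parameters are illustrative):

```python
# Minimal DLSCA-flavoured profiling sketch: train an MLP to classify
# the Hamming weight of an S-box output from synthetic leakage traces.
import numpy as np
from sklearn.neural_network import MLPClassifier

SBOX = np.random.default_rng(0).permutation(256)   # stand-in S-box
HW = np.array([bin(v).count("1") for v in range(256)])

rng = np.random.default_rng(1)
key = 0x2B
pt = rng.integers(0, 256, 20_000)
labels = HW[SBOX[pt ^ key]]                        # leakage model labels
traces = labels[:, None] + rng.normal(0, 2.0, (20_000, 100))

model = MLPClassifier(hidden_layer_sizes=(50, 20), max_iter=30)
model.fit(traces[:18_000], labels[:18_000])
print("validation accuracy:", model.score(traces[18_000:], labels[18_000:]))
```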
On Deniable Authentication against Malicious Verifiers
Deniable authentication allows Alice to authenticate a message to Bob, while retaining deniability towards third parties. In particular, not even Bob can convince a third party that Alice authenticated that message. Clearly, in this setting Bob should not be considered trustworthy. Furthermore, deniable authentication is necessary for deniable key exchange, as explicitly desired by Signal and off-the-record (OTR) messaging.
In this work we focus on (publicly verifiable) designated verifier signatures (DVS), which are a widely used primitive to achieve deniable authentication. We propose a definition of deniability against malicious verifiers for DVS. We give a construction that achieves this notion in the random oracle (RO) model. Moreover, we show with a concrete attack that our notion is not achievable in the standard model, thereby giving a non-contrived example of the RO heuristic failing.
All previous protocols that claim to achieve deniable authentication against malicious verifiers (like Signal's initial handshake protocols X3DH and PQXDH) rely on the Extended Knowledge of Diffie–Hellman (EKDH) assumption. We show that this assumption is broken and that these protocols do not achieve deniability against malicious verifiers.
Practical Semi-Open Chat Groups for Secure Messaging Applications
Chat groups in secure messaging applications such as Signal, Telegram, and WhatsApp are nowadays used for rapid and widespread dissemination of information to large groups of people. This is common even in sensitive contexts, such as the organisation of protests, activist groups, and internal company dialogues. Manual administration of who has access to such groups quickly becomes infeasible in the presence of hundreds or thousands of members.
We construct a practical, privacy-preserving reputation system that automates the approval of new group members based on their reputation amongst the existing membership. We demonstrate security against malicious adversaries in a single-server model, with no further trust assumptions required. Furthermore, our protocol supports arbitrary reputation calculations while almost all group members are offline (as is likely in practice). In addition, we demonstrate the practicality of the approach via an open-source implementation. For groups of size 50 (resp. 200), an admission process for a user that received 40 (resp. 80) scores requires 1312.2 KiB (resp. 5239.4 KiB) of communication and 3.3 s (resp. 16.3 s) of overall computation on a single core. While our protocol design matches existing secure messaging applications, we believe it can also have value in distributed reputation computation beyond this problem setting.
Optimized Frobenius and Cyclotomic Cubing for Enhanced Pairing Computation
Efficient implementation of a pairing-based cryptosystem relies on high-performance arithmetic in finite fields $\mathbb{F}_{p}$ and their extensions $\mathbb{F}_{p^k}$, where $k$ is the embedding degree. A small embedding degree is crucial because part of the arithmetic for pairing computation occurs in $\mathbb{F}_{p^k}$ and includes operations such as squaring, multiplication, and Frobenius operations.
In this paper, we present a fast and efficient method for computing the Frobenius endomorphism, together with an analysis of its complexity. Additionally, we introduce an improvement in the efficiency of cyclotomic cubing operations for several pairing-friendly elliptic curves, which are essential for the calculation of the Tate pairing and its derivatives.
PMNS arithmetic for elliptic curve cryptography
We show that using a polynomial representation of prime field elements (PMNS) can be relevant for real-world cryptographic applications, even in terms of performance. More specifically, we consider elliptic curves for cryptography when pseudo-Mersenne primes cannot be used to define the base field (e.g. Brainpool standardized curves, JubJub curves in the zkSNARK context, pairing-friendly curves). All these primitives make massive use of the Montgomery reduction algorithm and of well-known libraries such as GMP or OpenSSL for base field arithmetic. We show how this arithmetic can be replaced by a polynomial representation of the numbers that can be easily parallelized, avoids carry propagation, and allows for randomization. We provide good PMNS bases in the cryptographic contexts mentioned above, together with a C implementation that is competitive with GMP and OpenSSL for performing basic operations in the base fields considered. We also integrate this arithmetic into the Rust reference implementation of elliptic curve scalar multiplication for zero-knowledge applications, and achieve better practical performance for such protocols. This shows that PMNS is an attractive alternative for the base field arithmetic layer in cryptographic primitives using elliptic curves or pairings.
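The representation is easiest to see at toy scale. In the sketch below, an element $v \in \mathbb{F}_p$ is stored as a polynomial $V$ of degree less than $n$ with $V(\gamma) \equiv v \pmod p$, where $\gamma$ is a root of $E(X) = X^n - c$ modulo $p$; addition is coefficient-wise with no carry chains, and products are folded modulo $E$. The parameters $p = 1009$, $n = 3$ and the brute-force search for $(c, \gamma)$ are demonstration choices of ours, and the coefficient-reduction step that keeps coefficients small in a real PMNS is omitted.

    # Toy PMNS-style arithmetic. An element v of F_p is a polynomial V with
    # V(gamma) = v (mod p), gamma a root of E(X) = X^n - c mod p. Addition is
    # carry-free and coefficient-wise; products fold X^n back to c. Tiny
    # demonstration values only; real PMNS coefficient reduction is omitted.
    p, n = 1009, 3
    c, gamma = next((c, g) for c in range(2, p)
                    for g in range(2, p) if pow(g, n, p) == c)

    def to_int(poly):  # evaluate V(gamma) mod p
        return sum(a * pow(gamma, i, p) for i, a in enumerate(poly)) % p

    def add(u, v):  # no carry propagation between coefficients
        return [a + b for a, b in zip(u, v)]

    def mul(u, v):  # schoolbook product, then reduction mod X^n - c
        w = [0] * (2 * n - 1)
        for i, a in enumerate(u):
            for j, b in enumerate(v):
                w[i + j] += a * b
        return [w[i] + c * w[i + n] for i in range(n - 1)] + [w[n - 1]]

    u, v = [3, 1, 2], [5, 0, 7]
    assert to_int(add(u, v)) == (to_int(u) + to_int(v)) % p
    assert to_int(mul(u, v)) == (to_int(u) * to_int(v)) % p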
Algebraic Cryptanalysis of Small-Scale Variants of Stream Cipher E0
This study explores the algebraic cryptanalysis of small-scale variants of the E0 stream cipher, a legacy cipher used in the Bluetooth protocol. By systematically reducing the size of the linear feedback shift registers (LFSRs) while preserving the cipher’s core structure, we investigate the relationship between the number of unknowns and the number of consecutive keystream bits required to recover the internal states of the LFSRs. Our work demonstrates an approximately linear relationship between the number of consecutive keystream bits and the size of small-scale E0 variants, as indicated by our experimental results. To this end, we utilize two approaches: the computation of Gröbner bases using Magma’s F4 algorithm and the application of CryptoMiniSat’s SAT solver. Our experimental results show that increasing the number of keystream bits significantly improves computational efficiency, with the F4 algorithm achieving a speedup of up to 733× when additional equations are supplied. Furthermore, we verify the non-existence of equations of degree four or lower for up to seven consecutive keystream bits, and the non-existence of equations of degree three or lower for up to eight consecutive keystream bits, extending prior results on the algebraic properties of E0.
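The shape of this experiment can be reproduced at toy scale without Gröbner bases or SAT solvers. The sketch below substitutes a single 5-bit filtered LFSR (an illustrative stand-in of ours, with an arbitrary feedback polynomial and filter, nothing like E0's four combined registers) and enumerates all internal states, showing how each extra keystream bit thins out the candidate set.

    # Toy stand-in for the state-recovery experiment: one 5-bit filtered
    # LFSR replaces E0's four combined registers. Enumerating all states and
    # discarding those inconsistent with the observed keystream shows how
    # each additional keystream bit shrinks the set of surviving candidates.
    TAPS = (4, 2)  # illustrative feedback polynomial x^5 + x^3 + 1

    def step(s):
        fb = 0
        for t in TAPS:
            fb ^= (s >> t) & 1
        return ((s << 1) | fb) & 0x1F

    def out(s):  # toy nonlinear output filter
        return ((s >> 4) ^ ((s >> 2) & (s >> 1))) & 1

    def survivors(keystream):
        alive = []
        for s0 in range(1, 32):  # skip the degenerate all-zero state
            s, ok = s0, True
            for bit in keystream:
                if out(s) != bit:
                    ok = False
                    break
                s = step(s)
            if ok:
                alive.append(s0)
        return alive

    s, ks = 0b10110, []
    for _ in range(8):  # generate 8 keystream bits from a secret state
        ks.append(out(s))
        s = step(s)
    for m in range(1, 9):
        print(m, "keystream bits ->", len(survivors(ks[:m])), "candidates")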
zkAML: Zero-knowledge Anti Money Laundering in Smart Contracts with whitelist approach
In the interconnected global financial system, anti-money laundering (AML) and combating the financing of terrorism (CFT) regulations are indispensable for safeguarding financial integrity. However, while illicit transactions constitute only a small fraction of overall financial activities, traditional AML/CFT frameworks impose uniform compliance burdens on all users, resulting in inefficiencies, transaction delays, and privacy concerns.
These issues stem from the institution-centric model, where financial entities independently conduct compliance checks, resulting in repeated exposure of personally identifiable information (PII) and operational bottlenecks.
To address these challenges, we introduce zkAML, a cryptographic framework that offers a novel approach to AML/CFT compliance. By leveraging zero-knowledge Succinct Non-Interactive Argument of Knowledge (zk-SNARK) proofs, zkAML enables users to cryptographically demonstrate their regulatory compliance without revealing sensitive personal information. This approach eliminates redundant identity checks, streamlines compliance procedures, and enhances transaction efficiency while preserving user privacy.
We implement and evaluate zkAML on a blockchain network to demonstrate its practicality. Our experimental results show that zkAML achieves 55 transactions per second (TPS) on a public network and 324 TPS on a private network. The zk-SNARK proof generation times are $226.59$ ms for senders and $215.76$ ms for receivers, with a constant verification time of $1.47$ ms per transaction. These findings highlight zkAML's potential as a privacy-preserving and regulation-compliant solution for modern financial systems.
SoK: Efficient Design and Implementation of Polynomial Hash Functions over Prime Fields
Poly1305 is a widely-deployed polynomial hash function. The rationale behind its design was laid out in a series of papers by Bernstein, the last of which dates back to 2005. As computer architectures evolved, some of its design features became less relevant, but implementers found new ways of exploiting these features to boost its performance. However, would we still converge to this same design if we started afresh with today's computer architectures and applications? To answer this question, we gather and systematize a body of knowledge concerning polynomial hash design and implementation that is spread across research papers, cryptographic libraries, and developers' blogs. We develop a framework to automate the validation and benchmarking of the ideas that we collect. This approach leads us to five new candidate designs for polynomial hash functions. Using our framework, we generate and evaluate different implementations and optimization strategies for each candidate. We obtain substantial improvements over Poly1305 in terms of security and performance. Besides laying out the rationale behind our new designs, our paper serves as a reference for efficiently implementing polynomial hash functions, including Poly1305.
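At its core, the object being systematized is plain polynomial evaluation over a prime field: with key $r$ and message blocks $m_1, \dots, m_k$, the hash is $h = (\cdots(m_1 r + m_2) r + \cdots + m_k)\, r \bmod p$. A minimal sketch over the Poly1305 prime $2^{130} - 5$ follows; it deliberately omits Poly1305's key clamping and final encrypted-nonce addition, so it is not the deployed function.

    # Toy polynomial hash over GF(2^130 - 5), the prime field used by
    # Poly1305. Illustrative only: real Poly1305 additionally clamps r,
    # handles partial blocks, and adds an encrypted nonce to the result.
    P = (1 << 130) - 5

    def poly_hash(r, msg):
        h = 0
        for i in range(0, len(msg), 16):
            # Append a 0x01 byte so blocks of different lengths differ.
            m = int.from_bytes(msg[i:i + 16] + b"\x01", "little")
            h = (h + m) * r % P  # Horner evaluation of the key polynomial
        return h

    tag = poly_hash(0x0123456789abcdef, b"systematizing polynomial hashing")
    print(hex(tag))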
Multi-Party Computation in Corporate Data Processing: Legal and Technical Insights
This paper examines the deployment of Multi-Party Computation (MPC) in corporate data processing environments, focusing on its legal and technical implications under the European Union’s General Data Protection Regulation (GDPR). By combining expertise in cryptography and legal analysis, we address critical questions necessary for assessing the suitability of MPC for real-world applications. Our legal evaluation explores the conditions under which MPC qualifies as an anonymizing approach under GDPR, emphasizing the architectural requirements, such as the distribution of control among compute parties, to minimize re-identification risks effectively. The assertions put forth in the legal opinion are validated by two distinct assessments conducted independently.
We systematically answer key regulatory questions, demonstrating that a structured legal assessment is indispensable for organizations aiming to adopt MPC while ensuring compliance with privacy laws. In addition, we complement this analysis with a practical implementation of privacy-preserving analytics using Carbyne Stack, a cloud-native open-source platform for scalable MPC applications, which integrates the MP-SPDZ framework as its backend. We benchmark SQL queries under various security models to evaluate scalability and efficiency.
Practical Key Collision on AES and Kiasu-BC
The key collision attack was proposed as an open problem in key-committing security in Authenticated Encryption (AE) schemes like $\texttt{AES-GCM}$ and $\texttt{ChaCha20Poly1305}$. In ASIACRYPT 2024, Taiyama et al. introduce a novel type of key collision—target-plaintext key collision ($\texttt{TPKC}$) for $\texttt{AES}$. Depending on whether the plaintext is fixed, $\texttt{TPKC}$ can be divided into $\texttt{fixed-TPKC}$ and $\texttt{free-TPKC}$, which can be directly converted into collision attacks and semi-free-start collision attacks on the Davies-Meyer ($\texttt{DM}$) hashing mode.
In this paper, we propose a new rebound attack framework leveraging a time-memory tradeoff strategy, enabling practical key collision attacks with optimized complexity. We also present an improved automatic method for finding \textit{rebound-friendly} differential characteristics by controlling the probabilities in the inbound and outbound phases, allowing the identified characteristics to be directly used in $\textit{rebound-based}$ key collision attacks. Through our analysis, we demonstrate that the 2-round $\texttt{AES-128}$ $\texttt{fixed-TPKC}$ attack proposed by Taiyama et al. is in fact a $\texttt{free-TPKC}$ attack, while $\texttt{fixed-TPKC}$ attacks are considerably more challenging than $\texttt{free-TPKC}$ attacks. By integrating our improved automatic method with a new rebound attack framework, we successfully identify a new differential characteristic for the 2-round $\texttt{AES-128}$ $\texttt{fixed-TPKC}$ attack and develop the first practical $\texttt{fixed-TPKC}$ attack against 2-round $\texttt{AES-128}$. Additionally, we present practical $\texttt{fixed-TPKC}$ attacks against 5-round $\texttt{AES-192}$ and 3-round $\texttt{Kiasu-BC}$, along with a practical $\texttt{free-TPKC}$ attack against 6-round $\texttt{Kiasu-BC}$. Furthermore, we reduce time complexities for $\texttt{free-TPKC}$ and $\texttt{fixed-TPKC}$ attacks on other $\texttt{AES}$ variants.
Machine-checking Multi-Round Proofs of Shuffle: Terelius-Wikstrom and Bayer-Groth
Shuffles are used in electronic voting in much the same way physical ballot boxes are used in paper systems: (encrypted) ballots are input into the shuffle and (encrypted) ballots are output in a random order, thereby breaking the link between voter identities and ballots. To guarantee that no ballots are added, omitted or altered, zero-knowledge proofs, called proofs of shuffle, are used to provide publicly verifiable transcripts that prove that the outputs are a re-encrypted permutation of the inputs. The most prominent proofs of shuffle, in practice, are those due to Terelius and Wikström (TW), and Bayer and Groth (BG). TW is simpler whereas BG is more efficient, both in terms of bandwidth and computation. Security for the simpler (TW) proof of shuffle has already been machine-checked, but several prominent vendors insist on using the more complicated BG proof of shuffle. Here, we machine-check the security of the Bayer-Groth proof of shuffle via the Coq proof assistant. We then extract the verifier (software) required to check the transcripts produced by Bayer-Groth implementations and use it to check transcripts from the Swiss Post e-voting system under development for national elections in Switzerland.
Achieving Data Reconstruction Hardness and Efficient Computation in Multiparty Minimax Training
Generative models have achieved remarkable success in a wide range of applications. Training such models using proprietary data from multiple parties has been studied in the realm of federated learning. Yet recent studies showed that reconstruction of authentic training data can be achieved in such settings.
On the other hand, multiparty computation (MPC) guarantees standard data privacy, yet scales poorly for training generative models.
In this paper, we focus on improving reconstruction hardness during Generative Adversarial Network (GAN) training while keeping the training cost tractable. To this end, we explore two training protocols that use a public generator and an MPC discriminator: Protocol 1 (P1) uses a fully private discriminator, while Protocol 2 (P2) privatizes the first three discriminator layers. We prove reconstruction hardness for P1 and P2 by showing that (1) a public generator does not allow recovery of authentic training data, as long as the first two layers of the discriminator are private; and through an existing approximation hardness result on ReLU networks, (2) a discriminator with at least three private layers does not allow authentic data reconstruction with algorithms polynomial in network depth and size. We show empirically that compared with fully MPC training, P1 reduces the training time by $2\times$ and P2 further by $4-16\times$.
Privacy and Security of FIDO2 Revisited
We revisit the privacy and security analyses of FIDO2, a widely deployed standard for passwordless authentication on the Web. We discuss previous works and conclude that each of them has at least one of the following limitations:
(i) impractical trusted setup assumptions,
(ii) security models that are inadequate in light of the state of the art in practical attacks,
(iii) not analyzing FIDO2 as a whole, especially for its privacy guarantees.
Our work addresses these gaps and proposes revised security models for privacy and authentication. Equipped with our new models, we analyze FIDO2 modularly and focus on its component protocols, WebAuthn and CTAP2, clarifying their exact security guarantees. In particular, our results, for the first time, establish privacy guarantees for FIDO2 as a whole. Furthermore, we suggest minor modifications that can help FIDO2 provably meet stronger privacy and authentication definitions and withstand known and novel attacks.
CAKE requires programming - On the provable post-quantum security of (O)CAKE
In this work we revisit the post-quantum security of KEM-based password-authenticated key exchange (PAKE), specifically that of (O)CAKE. So far, these schemes evaded a security proof considering quantum adversaries. We give a detailed analysis of why this is the case, determining the missing proof techniques. To this end, we first provide a proof of security in the post-quantum setting, up to a single gap. This proof already turns out to be technically involved, requiring advanced techniques to reason in the QROM, including the compressed oracle and the extractable QROM. To pave the way towards closing the gap, we then further identify an efficient simulator for the ideal cipher. This provides certain programming abilities as a necessary and sufficient condition to close the gap in the proof: we demonstrate that we can close the gap using the simulator, and give a meta-reduction based on KEM-anonymity that shows the impossibility of a non-programming reduction that covers a class of KEMs that includes Kyber / ML-KEM.
A 10-bit S-box generated by Feistel construction from cellular automata
In this paper, we propose a new 10-bit S-box generated from a Feistel construction. The subpermutations are generated by a 5-cell cellular automaton based on a unique, well-chosen rule and bijective affine transformations. In particular, the cellular automaton rule is chosen based on empirical tests of its ability to generate good pseudorandom output on a ring cellular automaton. Similarly, the layout of the Feistel network is based on empirical data regarding the quality of the output S-box.
We perform cryptanalysis of the generated 10-bit S-box: we test the properties of algebraic degree, algebraic complexity, nonlinearity, strict avalanche criterion, bit independence criterion, linear approximation probability, differential approximation probability, differential uniformity and boomerang uniformity of our S-box, and relate them to those of the AES S-box. We find security properties comparable to or sometimes even better than those of the standard AES S-box. We believe that our S-box could be used to replace the 5-bit substitution of ciphers like ASCON.
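The construction principle, a Feistel network whose round function is a cellular-automaton step, guarantees a permutation even when the round function itself is not bijective. The sketch below illustrates this with placeholder choices of ours (elementary rule 30, four rounds, no affine transformations); the paper's actual rule, layout, and affine maps differ and were selected empirically.

    # Toy 10-bit S-box from a Feistel network whose round function is one
    # step of a 5-cell ring cellular automaton. Rule 30 and four rounds are
    # placeholders of ours, not the paper's empirically chosen parameters.
    RULE = 30  # elementary CA rule, applied on a ring of 5 cells

    def ca_step(x):
        out = 0
        for i in range(5):
            left = (x >> ((i + 1) % 5)) & 1
            mid = (x >> i) & 1
            right = (x >> ((i - 1) % 5)) & 1
            out |= ((RULE >> ((left << 2) | (mid << 1) | right)) & 1) << i
        return out

    def sbox(x, rounds=4):
        l, r = (x >> 5) & 0x1F, x & 0x1F
        for _ in range(rounds):
            l, r = r, l ^ ca_step(r)  # bijective even if ca_step is not
        return (l << 5) | r

    assert len({sbox(x) for x in range(1024)}) == 1024  # a permutation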
A Democratic Distributed Post-Quantum Certificateless Encryption Scheme
We propose a post-quantum certificateless encryption scheme based on a web of trust instead of a centralized Key Generation Center. Our scheme allows nodes to communicate securely. It is the nodes already present in the network that vote on the acceptance of new nodes and agree on the shared key. The threshold required for the acceptance of a new node is configurable. Our protocol thus makes it possible to operate entirely without a Key Generation Center (or Key Distribution Center).
Our scheme is based on Quasi-Cyclic Moderate Density Parity-Check (QC-MDPC) McEliece, which is resistant to quantum-computer attacks. The voting system uses Shamir secret sharing coupled with the Kabatianskii-Krouk-Smeets signature scheme, both of which are also resistant to quantum-computer attacks.
We provide a security analysis of our protocol, as well as a formal verification and a proof of concept code.
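The admission vote rests on standard Shamir sharing: any $t$ shares determine a secret, while fewer reveal nothing. A minimal sketch over a toy prime field follows; the field size, $t$, and $n$ are illustrative, and the QC-MDPC McEliece and KKS signature components of the scheme are not shown.

    import secrets

    # Minimal Shamir sharing over a toy prime field: the threshold voting
    # step needs any t of n nodes to cooperate to reconstruct a secret.
    P = 2**127 - 1  # a Mersenne prime, convenient as a toy field modulus

    def share(secret, t, n):
        coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
        return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
                for x in range(1, n + 1)]

    def reconstruct(points):  # Lagrange interpolation at x = 0
        total = 0
        for i, (xi, yi) in enumerate(points):
            num = den = 1
            for j, (xj, _) in enumerate(points):
                if i != j:
                    num = num * -xj % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, -1, P)) % P
        return total

    shares = share(123456789, t=3, n=5)
    assert reconstruct(shares[:3]) == 123456789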
StaMAC: Fault Protection via Stable-MAC Tags
Fault attacks pose a significant threat to cryptographic implementations, motivating the development of countermeasures, primarily based on a combination of redundancy and masking techniques. Redundancy, in these countermeasures, is often implemented via duplication or linear codes. However, their inherent structure remains susceptible to strategic fault injections bypassing error checks. To address this, the CAPA countermeasure from CRYPTO 2018 leveraged information-theoretic MAC tags for protection against fault and combined attacks. However, a recent attack has shown that CAPA can only protect against either side-channel analysis or fault attacks, but not both simultaneously, and with significant hardware costs. Its successor, M&M, improves efficiency but lacks protection against ineffective faults.
In this paper, we propose StaMAC, a framework aimed at securely incorporating MAC tags against both side-channel and fault adversaries in a non-combined scenario. We extend the security notions outlined in StaTI from TCHES 2024, and propose the notion of MAC-stability, ensuring fault propagation in masked and MACed circuits and necessitating only a single error check at the end of the computation. Additionally, we show that the stability notion from StaTI is arbitrarily composable (whereas it was previously thought to be only serially composable), making it the first arbitrarily composable fault security notion that does not require intermediate error checks or correction. We then establish the improved protection of masking combined with MAC tags, compared to linear encoding techniques, by showing bounds on the advantage for several fault adversaries: a gate/register-faulting adversary, an arbitrary register-faulting adversary, and a random register-faulting adversary. Next, we show how to transform any probing-secure circuit to protect against fault attacks using the proposed MAC-stable gadgets implementing field operations. Finally, we demonstrate StaMAC on an AES implementation, evaluating its security and hardware costs compared to countermeasures using MAC tags.
Quantum circuit for implementing AES S-box with low costs
Advanced Encryption Standard (AES) is one of the most widely used and extensively studied encryption algorithms globally, renowned for its efficiency and robust resistance to attacks. In this paper, three quantum circuits are designed to implement the S-box, the sole nonlinear component in AES. By incorporating a linear key schedule, we achieve a quantum circuit for implementing AES with the minimum number of qubits used. As a consequence, only 264/328/398 qubits are needed to implement the quantum circuits for AES-128/192/256. Furthermore, through our quantum circuits for the S-box and key schedule, the overall size of the quantum circuit required for Grover's algorithm to attack AES is significantly decreased. This enhancement improves both the security and resource efficiency of AES in a quantum computing environment.
Verifiable Secret Sharing Based on Fully Batchable Polynomial Commitment for Privacy-Preserving Distributed Computation
Privacy-preserving distributed computation enables a resource-limited client to securely delegate computations on sensitive data to multiple servers by distributing shares of the data. In such systems, verifiable secret sharing (VSS) is a fundamental component, ensuring secure data distribution and directly impacting the overall performance. The most practical approach to construct VSS is through polynomial commitment (PC), with two main research directions to improve the VSS efficiency. The first focuses on improving the dealer time by designing PC that supports batch evaluation, i.e., generating multiple evaluation-and-proof pairs in one shot. The second aims to reduce the broadcast cost by designing PC that supports batch opening, i.e., producing a compact proof for multiple evaluations.
Recently, Zhang et al. (Usenix Security 2022) proposed a transparent PC that supports batch evaluation and obtained a transparent VSS with optimal dealer time. However, their scheme does not support batch opening, leading to high broadcast costs in VSS. To the best of our knowledge, no transparent PC currently supports both batch evaluation and batch opening, thus limiting the performance of existing VSS schemes.
In this paper, we propose a transparent fully batchable polynomial commitment (TFB-PC), that simultaneously supports batch evaluation and batch opening. Leveraging TFB-PC, we present a VSS scheme with optimal complexity: $O(n\log n)$ dealer time, $O(n)$ participant time and $O(n)$ communication cost. Furthermore, we implement our VSS scheme and compare its performance with Zhang et al.'s VSS (the naive approach). Results show that our scheme achieves a $954\text{-}27,595\times$ reduction in communication cost and a $1,028\text{-}1,155,106\times$ speedup in participant time for $2^{11}$-$2^{21}$ parties.
Polar Lattice Cryptography
We present a protocol that shifts security responsibility from the cipher designer to the cipher user. The Polar Lattice is a pattern-devoid cipher based on a geometric construct -- a polar lattice -- on which each letter of a plaintext alphabet A is represented by two points; to transmit a letter, the transmitter sends a randomized pathway, a trail (the ciphertext), that begins at the first point of the transmitted letter and ends at its second point. The transmitted pathway is a sequence of steps on the lattice. Once a letter is transmitted, the next bits of the ciphertext mark the beginning of the pathway that points to the next letter. The size and the geometric construction of the polar lattice are randomized and kept secret. Because the randomized pathways may be long or short, the attacker does not know how to parcel the ciphertext into individual trails pointing to distinct letters of the plaintext alphabet A. The polar lattice may be implemented algebraically or geometrically; the lattice may even be a physical nano-construct. The polar lattice is very power-efficient and very fast. It claims all the attributes associated with pattern-devoid cryptography: it allows for only brute-force cryptanalysis, which in turn can be defeated through increased ciphertext size, unlimited key size, and structure complexity.
Analysis of the Telegram Key Exchange
We describe, formally model, and prove the security of Telegram's key exchange protocols for client-server communications. To achieve this, we develop a suitable multi-stage key exchange security model along with pseudocode descriptions of the Telegram protocols that are based on analysis of Telegram's specifications and client source code. We carefully document how our descriptions differ from reality and justify our modelling choices. Our security proofs reduce the security of the protocols to that of their cryptographic building blocks, but the subsequent analysis of those building blocks requires the introduction of a number of novel security assumptions, reflecting many design decisions made by Telegram that are suboptimal from the perspective of formal analysis. Along the way, we provide a proof of IND-CCA security for the variant of RSA-OAEP+ used in Telegram and identify a hypothetical attack exploiting current Telegram server behaviour (which is not captured in our protocol descriptions). Finally, we reflect on the broader lessons about protocol design that can be taken from our work.
Verifiable Decapsulation: Recognizing Faulty Implementations of Post-Quantum KEMs
Cryptographic schemes often contain verification steps that are essential for security. Yet, faulty implementations missing these steps can easily go unnoticed, as the schemes might still function correctly. A prominent instance of such a verification step is the re-encryption check in the Fujisaki-Okamoto (FO) transform that plays a prominent role in the post-quantum key encapsulation mechanisms (KEMs) considered in NIST's PQC standardization process. In KEMs built from FO, decapsulation performs a re-encryption check that is essential for security, but not for functionality. In other words, it will go unnoticed if this essential step is omitted or wrongly implemented, opening the door for key recovery attacks. Notably, such an implementation flaw was present in HQC's reference implementation and was only noticed after 19 months.
In this work, we develop a modified FO transform that binds re-encryption to functionality, ensuring that a faulty implementation which skips re-encryption will be exposed through basic correctness tests. We do so by adapting the "verifiable verification" methodology of Fischlin and Günther (CCS 2023) to the context of FO-based KEMs. More concretely, by exporting an unpredictable confirmation code from the public key encryption and embedding it into the key derivation function, we can confirm that (most of) the re-encryption step was indeed performed during decapsulation. We formalize this concept, establish modified FO transforms, and prove how unpredictable PKE confirmation codes turn into noticeable correctness errors for faulty implementations. We show how to apply this technique to ML-KEM and HQC, both with negligible overhead, by leveraging the entropy lost through ciphertext compression or truncation. We confirm that our approach works through mathematical proofs, as well as experimental data. Our experiments show that, following our approach, the implementation flaw in HQC's reference implementation indeed causes basic test cases to fail.
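To make the role of the check concrete, the sketch below walks through FO-style decapsulation: decrypt, deterministically re-encrypt, compare, then derive the key. The stand-in cipher (with pk = sk and no security whatsoever) and the helper names are ours, chosen only so the control flow runs end to end; skipping the marked check leaves functionality intact, which is exactly the failure mode the modified transform is designed to expose.

    import hashlib

    # FO-style decapsulation with the re-encryption check made explicit.
    # The "cipher" below (pk = sk, XOR with a hash) is a toy stand-in of
    # ours with no security, present only so the control flow executes;
    # G and H are modelled by domain-separated SHA3 calls.
    def _h(*parts):
        return hashlib.sha3_256(b"|".join(parts)).digest()

    def enc(pk, m, coins):
        return coins, bytes(a ^ b for a, b in zip(m, _h(pk, coins)))

    def dec(sk, ct):
        c1, c2 = ct
        return bytes(a ^ b for a, b in zip(c2, _h(sk, c1)))

    def decaps(sk, pk, ct):
        m = dec(sk, ct)
        # Re-encryption check: essential for CCA security, irrelevant for
        # functionality -- a faulty implementation can skip it unnoticed.
        if enc(pk, m, coins=_h(b"G", m)) != ct:
            return _h(b"reject", sk, *ct)  # implicit rejection
        return _h(b"key", m, *ct)          # session key

    pk = sk = b"toy-shared-secret-key-32-bytes!!"
    m = b"session-seed-16b"
    ct = enc(pk, m, coins=_h(b"G", m))
    assert decaps(sk, pk, ct) == _h(b"key", m, *ct)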
Concretely Efficient Correlated Oblivious Permutation
Oblivious permutation (OP) enables two parties, a sender with a private data vector $x$ and a receiver with a private permutation $\pi$, to securely obtain the shares of $\pi(x)$. OP has been used to construct many important MPC primitives and applications such as secret shuffle, oblivious sorting, private set operations, secure database analysis, and privacy-preserving machine learning. Due to its high complexity, OP has become a performance bottleneck in several practical applications, and many efforts have been devoted to enhancing its concrete efficiency. Chase et al. (Asiacrypt'20) proposed an offline-online OP paradigm leveraging a pre-computable resource termed Share Translation. While this paradigm significantly reduces online costs, the substantial offline cost of generating Share Translation remains an area for further investigation.
In this work, we redefine the pre-computable resource as a cryptographic primitive known as Correlated Oblivious Permutation (COP) and conduct in-depth analyses and optimizations of the two COP generation solutions: network-based solution and matrix-based solution. The optimizations for the network-based solution halve the communication/computation cost of constructing a switch (the basic unit of the permutation network) and reduce the number of switches in the permutation network. The optimizations for the matrix-based solution halve the communication cost of small-size COP generation and reduce the cost of large-size COP generation with in-outside permutation decomposition.
We implement our two COP generation protocols and conduct comprehensive evaluations. Taking commonly used 128-bit input data as an example, our network-based and matrix-based solutions are up to 1.7x and 1.6x faster than baseline protocols, respectively.
We further facilitate the state-of-the-art (SOTA) PSU protocols with our optimized COP, achieving over 25% reduction in communication cost and 35% decrease in execution time. This shows that our COP optimizations bring significant improvements for real-world MPC primitives.
Ciphertext-Ciphertext Matrix Multiplication: Fast for Large Matrices
Matrix multiplication of two encrypted matrices (CC-MM) is a key challenge for privacy-preserving machine learning applications. As modern machine learning models focus on scalability, fast CC-MM on large datasets is increasingly in demand.
In this work, we present a CC-MM algorithm for large matrices. The algorithm consists of plaintext matrix multiplications (PP-MM) and ciphertext matrix transpose algorithms (C-MT). We propose a fast C-MT algorithm, which is computationally inexpensive compared to PP-MM. By leveraging high-performance BLAS libraries to optimize PP-MM, we implement large-scale CC-MM with substantial performance improvements. Furthermore, we propose lightweight algorithms, significantly reducing the key size from $1\ 960$ MB to $1.57$ MB for CC-MM with comparable efficiency.
In a single-thread implementation, the C-MT algorithm takes $0.76$ seconds to transpose a $2\ 048\times 2\ 048$ encrypted matrix. The CC-MM algorithm requires $85.2$ seconds to multiply two $4\ 096\times 4\ 096$ encrypted matrices. For large matrices, our algorithm outperforms the state-of-the-art CC-MM method from Jiang-Kim-Lauter-Song [CCS'18] by a factor of over $800$.
Protecting Computations Against Continuous Bounded-Communication Leakage
We consider the question of protecting a general computation device, modeled by a stateful Boolean circuit, against leakage of partial information about its internal wires. Goyal et al. (FOCS 2016) obtained a solution for the case of bounded-communication leakage, where the wires are partitioned into two parts and the leakage can be any function computed using $t$ bits of communication between the parts. However, this solution suffers from two major limitations: (1) it only applies to a one-shot (stateless) computation, mapping an encoded input to an encoded output, and (2) the leakage-resilient circuit consumes fresh random bits, whose number scales linearly with the circuit complexity of the computed function.
In this work, we eliminate the first limitation and make progress on the second. Concretely:
- We present the first construction of stateful circuits that offer information-theoretic protection against continuous bounded-communication leakage. As an application, we extend a two-party ``malware-resilient'' protocol of Goyal et al. to the continuous-leakage case.
- For simple types of bounded-communication leakage, which leak $t$ parities or $t$ disjunctions of circuit wires or their negations, we obtain a deterministic variant that does not require any fresh randomness beyond the randomness in the initial state. Here we get computational security based on a subexponentially secure one-way function. This is the first deterministic leakage-resilient circuit construction for any nontrivial class of global leakage.
Disincentivize Collusion in Verifiable Secret Sharing
In verifiable secret sharing (VSS), a dealer shares a secret input among several parties, ensuring each share is verifiable. Motivated by its applications in the blockchain space, we focus on a VSS where parties holding shares are not allowed to reconstruct the dealer's secret (even partially) on their own terms; we refer to such an attempt as privacy-targeted collusion.
In this context, our work investigates mechanisms deterring such collusion in VSS among rational and malicious parties. For this problem, we make both algorithmic and combinatorial contributions:
1. We provide two collusion-deterrent mechanisms to discourage parties from colluding and recovering the dealer's secret. Notably, when it is desired to achieve fairness---where non-colluding parties are not at a loss---while allowing for the best achievable malicious fault tolerance, we define ``trackable access structures'' (TAS) and design a deterrence mechanism tailored for VSS on these structures.
2. We estimate the size of the optimal TAS, construct them from Steiner systems, provide highly robust TAS using partial Steiner systems, and present efficient secret sharing schemes for the latter close-to-optimal TAS for various parameter regimes.
3. We demonstrate that trackability in access structures is connected to combinatorial objects like (partial) Steiner systems, uniform subsets with restricted intersections, and appropriate binary codes. The robustness of access structures is equivalent to the minimum vertex cover problem on hypergraphs.
We believe these connections between cryptography, game theory, and discrete mathematics will be of broader interest.
A proof of P≠NP (New symmetric encryption algorithm against any linear attacks and differential attacks)
The P vs. NP problem is the most important unresolved problem in the field of computational complexity. Its impact has penetrated all aspects of algorithm design, especially in the field of cryptography. The security of cryptographic algorithms based on short keys depends on whether P is equal to NP. In fact, Shannon strictly proved that the one-time-pad system achieves unconditional security, but because the one-time pad requires the key to be at least as long as the plaintext, transferring the key is a troublesome problem that restricts its use in practice. Cryptographic algorithms used in practice are all based on short keys, and the security of the short-key mechanism is ultimately based on a one-wayness assumption. In fact, the existence of one-way functions directly leads to the important conclusion P≠NP.
In this paper, we construct a novel short-key block cipher algorithm. The core feature of this algorithm is that, for any block, when a plaintext-ciphertext pair is known, any key in the key space is valid; that is, for each block, the plaintext-ciphertext pair and the key are independent, and independence between blocks is also easy to construct. This feature is completely different from all existing short-key cipher algorithms.
Based on the above feature, we construct a problem and theoretically prove that it satisfies the properties of a one-way function, thereby settling the existence of one-way functions, that is, directly proving that P≠NP.
Multiparty Garbling from OT with Linear Scaling and RAM Support
State-of-the-art protocols that achieve constant-round secure multiparty computation currently present a trade-off: either consume an amount of communication that scales quadratically in the number of parties, or achieve better asymptotics at the cost of high constant factors (e.g. schemes based on LPN or DDH).
We construct a constant-round MPC protocol where communication scales linearly in the number of parties $n$. Our construction relies only on OT and RO, and it leverages packed secret sharing. Because it builds on simple primitives, our protocol offers a concrete improvement over asymptotically efficient LPN-based schemes. We consider security in the presence of a dishonest majority, where the malicious (with abort) adversary corrupts an arbitrary constant fraction of parties.
By leveraging tri-state circuits (Heath et al. Crypto 2023), we extend our protocol to the RAM model of computation. For a RAM program that halts within $T$ steps, our maliciously-secure protocol communicates $O(n \cdot T \log^3 T \log \log T \cdot \kappa)$ total bits, where $\kappa$ is a security parameter.
Homomorphic Signature-based Witness Encryption and Applications
Practical signature-based witness encryption (SWE) schemes recently emerged as a viable alternative to instantiate timed-release cryptography in the honest majority setting. In particular, assuming threshold trust in a set of parties that release signatures at a specified time, one can ``encrypt to the future'' using an SWE scheme. Applications of SWE schemes include voting, auctions, distributed randomness beacons, and more. However, the lack of homomorphism in existing SWE schemes reduces efficiency and hinders deployment. In this work, we introduce the notion of homomorphic SWE (HSWE) to improve the practicality of timed-release encryption schemes. We show one can build HSWE using a pair of encryption and signature schemes where the uniqueness of the signature is required when the encryption scheme relies on injective one-way functions. We then build three HSWE schemes in various settings using BLS, RSA, and Rabin signatures and show how to achieve a privacy-preserving variant that only allows extracting the homomorphically aggregated result while keeping the individual plaintexts confidential.
A Unified Framework for Succinct Garbling from Homomorphic Secret Sharing
A major challenge in cryptography is the construction of succinct garbling schemes that have asymptotically smaller size than Yao’s garbled circuit construction. We present a new framework for succinct garbling that replaces the heavy machinery of most previous constructions by lighter-weight homomorphic secret sharing techniques.
Concretely, we achieve 1-bit-per-gate (amortized) garbling size for Boolean circuits under circular variants of standard assumptions in composite-order or prime-order groups, as well as a lattice-based instantiation. We further extend these ideas to layered circuits, improving the per-gate cost below 1 bit, and to arithmetic circuits, eliminating the typical Ω(λ)-factor overhead for garbling mod-p computations. Our constructions also feature “leveled” variants that remove circular-security requirements at the cost of adding a depth-dependent term to the garbling size.
Our framework significantly extends a recent technique of Liu, Wang, Yang, and Yu (Eurocrypt 2025) for lattice-based succinct garbling, and opens new avenues toward practical succinct garbling. For moderately large circuits with a few million gates, our garbled circuits can be two orders of magnitude smaller than Yao-style garbling. While our garbling and evaluation algorithms are much slower, they are still practically feasible, unlike previous fully succinct garbling schemes that rely on expensive tools such as iO or a non-black-box combination of FHE and ABE. This trade-off can make our framework appealing when a garbled circuit is used as a functional ciphertext that is broadcast or stored in multiple locations (e.g., on a blockchain), in which case communication and storage may dominate computational cost.
High-Order Masking of BIKE
Every cryptographic implementation on an embedded device is vulnerable to side-channel attacks. To prevent these attacks, the main countermeasure consists in splitting each sensitive variable into shares and processing them independently. With the advent of new algorithms designed to resist quantum computers and the complexity of their operations, this protection represents a real challenge. In this article, we present an attack on an earlier attempt to protect the decoder of the BIKE cryptosystem against first-order attacks. Additionally, we introduce a new procedure for high-order masking of the decoder, up to date with its latest improvements. We also present the first fully masked implementation of the whole cryptosystem, including key generation and encapsulation. Finally, to assess the correctness of our countermeasures and to enable further comparison, we implemented them in C and provide benchmarks of their performance.
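The baseline countermeasure referred to above is Boolean masking: each sensitive variable is split into $d+1$ shares whose XOR is the secret, and linear operations are applied share-wise. A minimal sketch of ours follows; a masked BIKE decoder additionally needs masked comparisons, rotations, and threshold logic, none of which is shown here.

    import secrets

    # Minimal Boolean masking: a sensitive byte becomes d+1 shares whose
    # XOR equals the secret, and linear (XOR) gadgets act share-wise
    # without ever recombining the sensitive value.
    def mask(x, d):
        shares = [secrets.randbelow(256) for _ in range(d)]
        last = x
        for s in shares:
            last ^= s
        return shares + [last]

    def unmask(shares):
        y = 0
        for s in shares:
            y ^= s
        return y

    def xor_gadget(a, b):  # XOR is linear over GF(2): operate share-wise
        return [x ^ y for x, y in zip(a, b)]

    a, b = mask(0x3A, d=3), mask(0xC5, d=3)
    assert unmask(xor_gadget(a, b)) == 0x3A ^ 0xC5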
AI for Code-based Cryptography
We introduce the use of machine learning in the cryptanalysis of code-based cryptography. Our focus is on distinguishing problems related to the security of NIST round-4 McEliece-like cryptosystems, particularly for the Goppa codes used in Classic McEliece and the Quasi-Cyclic Moderate Density Parity-Check (QC-MDPC) codes used in BIKE. We present DeepDistinguisher, a new algorithm for distinguishing structured codes from random linear codes that uses a transformer. The results show that the new distinguisher achieves a high level of accuracy in distinguishing Goppa codes, suggesting that their structure may be more recognizable by AI models. Our approach outperforms traditional attacks in distinguishing Goppa codes in certain settings and generalizes to larger code lengths without further training, using a puncturing technique. We also present the first distinguishing results dedicated to MDPC and QC-MDPC codes.
Preimage Attacks on up to 5 Rounds of SHA-3 Using Internal Differentials
In this paper, we study preimage resistance of the SHA-3 standard. We propose a squeeze meet-in-the-middle attack as a new preimage attack method for the sponge functions. This attack combines the squeeze attack and meet-in-the-middle attack, and is implemented by internal differentials. We analyze the inverse operation of the SHA-3 round function, and develop a new target internal differential algorithm as well as a linearization technique for the Sbox in the backward phase. In addition, we propose the concept of a value-difference distribution table (VDDT) to optimize the attack complexity. These techniques lead to faster preimage attacks on five (out of six) SHA-3 functions reduced to 4 rounds, and also bring preimage attacks on 5 rounds of four SHA-3 instances. The attack techniques are verified by performing a practical preimage attack on a small variant of 4-round Keccak.
Transmitting Secrets by Transmitting only Plaintext
We present a novel use of encryption: not to hide a secret, but to mark letters. Given a 2n-letter plaintext, the transmitter encrypts the first n letters with key K1 to generate n corresponding cipherletters, and encrypts the second n letters with key K2 to generate n corresponding cipherletters. The transmitter sends the 2n cipherletters along with the keys K1 and K2. The recipient (and any interceptor) will readily decrypt the 2n cipherletters to the original plaintext. This makes the above procedure equivalent to sending out the plaintext. So why bother? When decrypting the 2n cipherletters, one notes how the letters encrypted with K1 are mixed with the letters encrypted with K2, with the original order preserved among the letters encrypted with each key. There are 2^n possible mixings. This means the choice of mixing order can deliver a secret message S comprising n bits. So while on the surface a given plaintext is sent from transmitter to recipient, this plaintext hides a secret. Imagine a text-messaging platform that uses this protocol: an adversary will not know which plain, innocent message harbors a secret message. This allows residents of cyberspace to communicate secrets without exposing the fact that they communicated a secret. Expect a big impact on the level of cyberspace privacy.
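One concrete way to realize 2^n decodable mixings, our illustration of the counting argument rather than necessarily the paper's exact procedure, is pairwise: secret bit i decides whether the i-th K1-letter or the i-th K2-letter is transmitted first, preserving the relative order within each key's stream.

    # Pairwise mixing channel: bit i chooses which member of pair i is sent
    # first, realizing 2^n decodable mixings while preserving the order of
    # the letters encrypted under each key. Illustrative sketch only.
    def embed(k1_letters, k2_letters, secret_bits):
        out = []
        for c1, c2, b in zip(k1_letters, k2_letters, secret_bits):
            out += [c2, c1] if b else [c1, c2]
        return out

    def extract(is_k1_flags):
        # The recipient decrypts every letter and so learns which key each
        # one used; the first letter of each pair reveals one secret bit.
        return [0 if is_k1_flags[i] else 1
                for i in range(0, len(is_k1_flags), 2)]

    mixed = embed(["a1", "a2"], ["b1", "b2"], [1, 0])
    assert mixed == ["b1", "a1", "a2", "b2"]
    assert extract([m.startswith("a") for m in mixed]) == [1, 0]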
Improved Cryptanalysis of ChaCha: Beating PNBs with Bit Puncturing
ChaCha is a widely deployed stream cipher and one of the most important symmetric primitives. Due to this practical importance, many cryptanalytic results have been proposed. Until now, Probabilistic Neutral Bits (PNBs) have been the most successful. Given a differential-linear distinguisher, PNBs yield a key-recovery technique relying on an experimental backward correlation obtained through black-box analysis. A careful theoretical analysis exploiting the round-function design may find a better attack and improve our understanding, but the complicated nature of the ARX structure makes such analysis difficult.
We propose a theoretical methodology inspired by bit puncturing, which was recently proposed at Eurocrypt 2024. Our method has a theoretical foundation and is thus fundamentally different from PNBs, to which it is the first effective alternative. As a result, we significantly improve the attack complexity for 6-, 7-, and 7.5-round ChaCha. The 7-round attack is about $2^{40}$ times faster than the previous best. Furthermore, we propose the first 7.5-round attack with a non-negligible advantage over an exhaustive search.
The Algebraic One-More MISIS Problem and Applications to Threshold Signatures
This paper introduces a new one-more computational problem for lattice-based cryptography, which we refer to as the Algebraic One-More MISIS problem, or AOM-MISIS for short. It is a modification of the AOM-MLWE problem recently introduced by Espitau et al. (CRYPTO ’24) to prove security of new two-round threshold signatures.
Our first main result establishes that the hardness of AOM-MISIS is implied by the hardness of MSIS and MLWE (with suitable parameters), both of which are standard assumptions for efficient lattice-based cryptography. We prove this result via a new generalization of a technique by Tessaro and Zhu (EUROCRYPT ’23) used to prove hardness of a one-more problem for linear hash functions assuming their collision resistance, for which no clear lattice analogue was known. Since the hardness of AOM-MISIS implies the hardness of AOM-MLWE, our result resolves the main open question from the work of Espitau et al., who only provided a similar result for AOM-MLWE restricted to selective adversaries, a class which does not cover the use for threshold signatures.
Furthermore, we show that our novel formulation of AOM-MISIS offers a better interface to develop tighter security bounds for state-of-the-art two-round threshold signatures. We exemplify this by providing new proofs of security, assuming the hardness of MLWE and MSIS, for two threshold signatures, the one proposed in the same work by Espitau et al., as well as a recent construction by Chairattana-Apirom et al. (ASIACRYPT 2024). For the former scheme, we also show that it satisfies the strongest security notion (TS-UF-4) in the security hierarchy of Bellare et al. (CRYPTO ’22), as a result of independent interest.
Constant-Time Code: The Pessimist Case
This note discusses the problem of writing cryptographic implementations in software, free of timing-based side-channels, and many ways in which that endeavour can fail in practice. It is a pessimist view: it highlights why such failures are expected to become more common, and how constant-time coding is, or will soon become, infeasible in all generality.
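A canonical example of the genre is branch-free selection: derive an all-ones or all-zeros mask from a secret bit and combine both candidates, so no secret-dependent branch or table lookup occurs. The sketch below shows the logic in Python purely for illustration; whether any such pattern actually executes in constant time after interpreters, JITs, and optimizing compilers have had their say is precisely the pessimism this note argues for.

    # Branch-free selection: the mask is all-ones when bit == 1 and
    # all-zeros when bit == 0, so the result is computed identically in
    # both cases. Python shows the logic only; nothing guarantees an
    # interpreter or compiler preserves the constant-time property.
    M32 = 0xFFFFFFFF

    def ct_select(bit, a, b):
        mask = -bit & M32  # 0xFFFFFFFF if bit == 1, else 0x00000000
        return (a & mask) | (b & ~mask & M32)

    assert ct_select(1, 0xDEAD, 0xBEEF) == 0xDEAD
    assert ct_select(0, 0xDEAD, 0xBEEF) == 0xBEEF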
Fine-Grained Verifier NIZK and Its Applications
In this paper, we propose a new type of non-interactive zero-knowledge (NIZK), called Fine-grained Verifier NIZK (FV-NIZK), which provides more flexible and more fine-grained verifiability of proofs than standard NIZK that supports public verifiability and designated-verifier NIZK (DV-NIZK) that supports private verifiability. FV-NIZK has two statistically (or computationally) equivalent verification approaches:
--- a master verification using the master secret key $msk$;
--- a fine-grained verification using a derived secret key $sk_d$, which is derived from $msk$ w.r.t. $d$ (which may stand for user identity, email address, vector, etc.).
We require unbounded simulation soundness (USS) of FV-NIZK to hold even if an adversary obtains derived secret keys $sk_d$ for $d$ of its choice, and we define proof pseudorandomness, which stipulates the pseudorandomness of proofs for adversaries that are not given any secret key.
We present two instantiations of FV-NIZK for linear subspace languages, based on the matrix decisional Diffie-Hellman (MDDH) assumption.
One of the FV-NIZK instantiations is pairing-free and achieves almost tight USS and proof pseudorandomness. We also adapt the two instantiations to support unbounded fine-grained secret key delegations.
We illustrate the usefulness of FV-NIZK by showing two applications and obtain the following pairing-free schemes:
--- the first almost tightly multi-challenge CCA (mCCA)-secure inner-product functional encryption (IPFE) scheme without pairings;
--- the first public-key encryption (PKE) scheme that reconciles the inherent contradictions between public verifiability and anonymity.
We formalize such PKE as Fine-grained Verifiable PKE (FV-PKE), which derives a special key from the decryption secret key, such that those who obtain the derived key can check the validity of ciphertexts but lose anonymity from their view (CCA-security still holds for them), while those who do not get the derived key cannot perform the validity check but retain anonymity.
Our FV-PKE scheme achieves almost tight mCCA-security for adversaries who obtain the derived keys, and achieves almost tight ciphertext pseudorandomness (thus anonymity) for others who do not get any derived key.
MIDAS: an End-to-end CAD Framework for Automating Combinational Logic Locking
Logic locking has surfaced as a notable safeguard against diverse hazards that pose a risk to the integrated circuit (IC) supply chain. Existing literature on logic locking largely encompasses the art of proposing new constructions on the one hand, and unearthing weaknesses in such algorithms on the other. Somehow, in this race of make and break, the stress on automating the adoption of such techniques on real-life circuits has been rather limited. For the first time, we present a generic end-to-end combinational logic locking CAD framework, MIDAS. This framework analyses circuit netlists and generates locked netlists. Due to its generic circuit analysis, it bridges the gap, integrates diverse logic locking techniques, and offers scope for integrating potential future ones. The MIDAS framework's efficacy has been verified through its application on ISCAS'85 and ISCAS'99 benchmark circuits, locked using six different schemes: EPIC, Anti-SAT, SFLL-HD, SFLL-fault, CAS-Lock, and LoPher. MIDAS minimizes the hardware overhead of the otherwise resource-intensive locking technique LoPher by extracting an influential portion of the circuit to lock and utilizing a simple fitness function. We also assess the overhead increase for the aforementioned locking methods, thereby complicating the identification of influential nodes within the locked netlists. Finally, we evaluate MIDAS by selectively locking parts of a commercially-designed open-source RISC-V core.
Black-Box (and Fast) Non-Malleable Zero Knowledge
Non-malleable zero-knowledge (NMZK), originally introduced in the seminal work of Dolev, Dwork, and Naor (STOC 91), is a fundamental concept for modeling the security of proof systems against man-in-the-middle attacks.
Recently, Kim, Liang, and Pandey (CRYPTO 2022) presented the first efficient constant-round NMZK argument system based solely on symmetric-key cryptography. Their construction relies on a non-black-box use of the involved cryptographic primitives and on multiple executions of Ligero (CCS 2017) that affect both the round complexity and the computational efficiency of their protocol. Their work left open the natural important challenge of achieving NMZK using the underlying primitives only in a black-box fashion (regardless of the number of rounds and actual efficiency).
In this paper, we solve the aforementioned open problem by presenting the first NMZK argument system based on the black-box use of cryptographic primitives. Our work is optimal in the use of primitives since we only need one-way functions, and asymptotically optimal in the number of rounds since we only require a constant number of rounds. Our argument system is non-malleable with respect to the strong "simulation-extractability" flavor of non-malleability.
Furthermore, we also show that our construction can be efficiently instantiated in Minicrypt, significantly improving upon the work of Kim et al., both in terms of round complexity and computational efficiency.
Commitment Schemes Based on Module-LIP
Recently, Jiang et al. (EUROCRYPT 2025) proposed a universal framework for constructing commitment schemes using group actions, and instantiated it with the Lattice Isomorphism Problem (LIP). This paper constructs an instantiation of this framework based on module-LIP. More precisely, we first present a reduction from $\mathcal{O}_{\mathbb{L}}^2$-LIP to $\mathcal{O}_{\mathbb{L}}^2$-LAP. Then we develop a re-randomization algorithm based on the self-reduction framework of module-LIP (Ducas et al. ASIACRYPT 2022), adapting it to the framework to construct commitment schemes.
Non-interactive Anonymous Tokens with Private Metadata Bit
Anonymous tokens with private metadata bit (ATPM) have received increased interest as a method for anonymous client authentication while also embedding trust signals that are only readable by the authority who holds the issuance secret key and nobody else. A drawback of all existing ATPM constructions is that they require client-issuer interaction during the issuance process. In this work, we build the first non-interactive anonymous tokens (NIAT) with private metadata bit, inspired by the recent work of Hanzlik (Eurocrypt '23) on non-interactive blind signatures. We discuss how the non-interaction property during the issuance process allows for more efficient issuance protocols that avoid the need for online signing. We construct an efficient NIAT scheme based on Structure-preserving Signatures on Equivalence Classes (SPS-EQ) and experimentally evaluate its performance. We also present an extension to our NIAT construction that allows the identification of clients who attempt to double-spend (i.e., present the same token twice).
Enhanced CKKS Bootstrapping with Generalized Polynomial Composites Approximation
Bootstrapping in approximate homomorphic encryption involves evaluating the modular reduction function. Traditional methods decompose the modular reduction function into three components: scaled cosine, double-angle formula, and inverse sine. While these approaches offer a strong trade-off between computational cost and level consumption, they lack flexibility in parameterization.
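For reference, the traditional decomposition mentioned above is usually written as follows (notation ours): a low-degree polynomial approximates the cosine on a scaled-down argument, $r$ double-angle steps recover the full angle, and an inverse-sine polynomial removes the residual error:
\[
  [x]_q \;=\; \frac{q}{2\pi}\arcsin\!\Big(\sin\!\Big(\frac{2\pi x}{q}\Big)\Big)
  \quad\text{for } |[x]_q| \le \frac{q}{4},
  \qquad
  c_0 \approx \cos\!\Big(\frac{2\pi x}{2^{r}q}\Big),
  \qquad
  c_{i+1} = 2c_i^{2}-1 .
\]
After the $r$ doublings, $c_r \approx \cos(2\pi x/q)$; a phase shift in the initial approximation yields the sine instead, to which a low-degree polynomial approximation of $\arcsin$ is then applied.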
In this work, we propose a new method to decompose the modular reduction function with improved parameterization, generalizing prior trigonometric approaches. Numerical experiments demonstrate that our method achieves near-optimal approximation errors. Additionally, we introduce a technique that integrates the rescaling operation into matrix operations during bootstrapping, further reducing computational overhead.
On Improved Cryptanalytic Results against ChaCha for Reduced Rounds ≥ 7
In this paper, we analyze the subtle issues of complexity estimates related to state-of-the-art cryptanalytic efforts on ChaCha. In this regard, we demonstrate that the currently best-known cryptanalytic result on $7$-round ChaCha with time $2^{189.7}$ and data $2^{102.63}$ [Xu et al., ToSC 2024] can be estimated as $2^{178.12}$ for time and $2^{101.09}$ for data complexity. We improve the best-known result for the $7.25$ round by obtaining an improved set of Probabilistic Neutral Bits and considering our revised estimation. Our result with time complexity $2^{212.43}$ and data complexity $2^{100.56}$ improves the result of Xu et al., where they could achieve time and data complexity $2^{223.9}$ and $2^{100.80}$, respectively. For both the $7$ and $7.25$ rounds, we can show an improvement of the order of $2^{11}$ in the time complexity. For $7.5$-round, we improve the result of Dey [IEEE-IT 2024], which reports the time and data complexity of $2^{255.24}$ and $2^{32.64}$, respectively. By applying the formula of the same paper and incorporating additional PNBs, we obtain improved time and data complexity of $2^{253.23}$ and $2^{34.47}$, respectively. Thus, this paper describes the currently best-known cryptanalytic results against reduced-round ChaCha. Our results do not affect the security claims of the complete algorithm with 20 rounds. We also provide a rebuttal of the work by Wang et al. (ePrint) and analyze their claim about an error in the ``Divide-and-Conquer'' approach.
BUFFing Threshold Signature Schemes
We explore advanced security notions for threshold signature schemes, focusing on Beyond UnForgeability Features (BUFF), introduced by Cremers et al. (S&P’21) in the non-threshold setting. The BUFF properties protect against attacks based on maliciously chosen keys, e.g., expropriating a message-signature pair under a new public key (called exclusive ownership). We first formalize these notions in the threshold setting and examine their relationships. Notably, unlike regular signature schemes, the hierarchy of variants of exclusive ownership notions only holds for threshold schemes if they are also robust.
We then present a generic compiler that transforms any threshold signature scheme to satisfy exclusive ownership, and message-bound signature properties with minimal overhead. Furthermore, we modify the threshold BLS signature scheme to achieve these additional properties without increasing the signature size. Lastly, we identify specific structures in threshold signature schemes where BUFF properties can be naturally extended from the underlying standard signature scheme, and we analyze and prove the security properties in some of the existing threshold schemes.
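For intuition about the compiler's shape, the original non-threshold BUFF transform already conveys it: sign a hash that binds the public key to the message, and attach that hash to the signature. The sketch below is ours; Sign/Verify are HMAC-based stand-ins (with sk = pk) used only so the flow executes, not a real signature scheme, and the paper's compiler applies the idea to threshold schemes.

    import hashlib, hmac, secrets

    # Shape of the exclusive-ownership compiler via the non-threshold BUFF
    # transform: sign a hash binding pk to the message and attach the hash.
    # Sign/Verify are HMAC stand-ins with sk = pk, illustration only.
    def Sign(sk, msg):
        return hmac.new(sk, msg, hashlib.sha256).digest()

    def Verify(pk, msg, sig):
        return hmac.compare_digest(Sign(pk, msg), sig)

    def buff_sign(sk, pk, m):
        h = hashlib.sha256(pk + m).digest()  # binds pk and m together
        return Sign(sk, h), h

    def buff_verify(pk, m, sig_and_h):
        sig, h = sig_and_h
        return h == hashlib.sha256(pk + m).digest() and Verify(pk, h, sig)

    sk = pk = secrets.token_bytes(32)
    assert buff_verify(pk, b"message", buff_sign(sk, pk, b"message"))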
Exploring How to Authenticate Application Messages in MLS: More Efficient, Post-Quantum, and Anonymous Blocklistable
The Messaging Layer Security (MLS) protocol has recently been standardized by the IETF. MLS is a scalable secure group messaging protocol expected to run more efficiently than the Signal protocol at scale, while offering a similar level of strong security. Even though MLS has undergone extensive examination by researchers, the majority of the works have focused on confidentiality.
In this work, we focus on the authenticity of the application messages exchanged in MLS. Currently, MLS authenticates every application message with an EdDSA signature. While this overhead is manageable, it is greatly amplified in the post-quantum setting, as the NIST-recommended Dilithium signature results in a roughly 40x increase in size. We view this as an invitation to explore new authentication modes that can be used instead. We start by taking a systematic view on how application messages are authenticated in MLS and categorize authenticity into four different security notions. We then propose several authentication modes, offering a range of different efficiency and security profiles. For instance, in one of our modes, COSMOS++, we replace signatures with one-time tokens and a MAC tag, offering roughly a 75x saving in post-quantum communication overhead. While this comes at the cost of weakening security compared to the authentication mode used by MLS, the lower communication overhead seems to make it a worthwhile trade-off.
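The headline ratios can be reproduced from published object sizes; the snippet below uses Ed25519's 64-byte signatures, Dilithium2's 2420-byte signatures, and assumes a 32-byte MAC tag for a COSMOS++-style mode (our back-of-the-envelope accounting, not the paper's exact cost model).

    ED25519_SIG = 64       # bytes per signature in classical MLS
    DILITHIUM2_SIG = 2420  # bytes per ML-DSA-44 (Dilithium2) signature
    MAC_TAG = 32           # bytes, e.g., an HMAC-SHA256 tag

    print(DILITHIUM2_SIG / ED25519_SIG)  # ~37.8, the "40x increase in size"
    print(DILITHIUM2_SIG / MAC_TAG)      # ~75.6, the "roughly 75x saving"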
A Note on the Blindness of the Scheme from ePrint 2025/397
This note demonstrates that the blind signature scheme based on cryptographic group actions, as proposed in ePrint paper 2025/397, fails to ensure blindness. Specifically, we construct an adversary that achieves a $1/8$ advantage in the blindness experiment. The attack leverages selective abort techniques (also known as selective failure attacks), a well-known strategy in the MPC literature.
Matchmaker: Fast Secure Inference across Deployment Scenarios
Secure Two-Party Computation (2PC) enables secure inference with cryptographic guarantees that protect the privacy of the model owner and client. However, it adds significant performance overhead. In this work, we make 2PC-based secure inference efficient while considering important deployment scenarios.
We observe that the hitherto unconsidered latency of fetching keys from storage significantly impacts performance, as does network speed. We design a Linear Secret Sharing (LSS)-based system $LSS^M$ and a Function Secret Sharing (FSS)-based system $FSS^M$ for secure inference, optimized for small key size and communication, respectively. Notably, our highly-optimized and hardware-aware CPU-based $LSS^M$ outperforms prior GPU-based LSS systems by up to $50\times$. We then show that the best choice between $LSS^M$ and $FSS^M$ depends on the deployment scenario.
In fact, under certain deployments, a combination of $LSS^M$ and $FSS^M$ can leverage heterogeneous processing across CPU and GPU. Such protocol-system co-design lets us outperform state-of-the-art secure inference systems by up to $21\times$ (geomean $3.25\times$).
Multi-Client Attribute-Based Unbounded Inner Product Functional Encryption, and More
This paper presents the concept of a multi-client functional encryption (MC-FE) scheme for attribute-based inner product functions (AB-IP), initially proposed by Abdalla et al. [ASIACRYPT’20], in an unbounded setting. In such a setting, the setup is independent of vector length constraints, allowing secret keys to support functions of arbitrary lengths, and clients can dynamically choose vector lengths during encryption. The functionality outputs the sum of inner products if vector lengths and indices meet a specific relation, and all clients’ attributes satisfy the key’s policy. We propose the following constructions based on the matrix decisional Diffie-Hellman assumption in a natural permissive setting of unboundedness:
– the first multi-client attribute-based unbounded IPFE (MC-AB-UIPFE) scheme secure in the standard model, overcoming previous limitations where clients could only encrypt fixed-length data;
– the first multi-input AB-UIPFE (MI-AB-UIPFE) in the public-key setting, improving upon prior bounded constructions under the same assumption;
– the first dynamic decentralized UIPFE (DD-UIPFE), enhancing the dynamism property of prior works.
Technically, we follow the blueprint of Agrawal et al. [CRYPTO’23] but begin with a new unbounded FE called extended slotted unbounded IPFE. We first construct a single-input AB-UIPFE in the standard model and then extend it to multi-input settings. In a nutshell, our work demonstrates the applicability of function-hiding security of IPFE in realizing variants of multi-input FE capable of encoding unbounded-length vectors both at the time of key generation and encryption.
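In our notation, the functionality realized by the schemes above can be summarized as $$\mathrm{Dec}\big(\mathsf{sk}_{\vec{y},\mathrm{pol}},\ \{\mathsf{ct}_i(\vec{x}_i,\mathrm{att}_i)\}_i\big) = \sum_i \langle \vec{x}_i, \vec{y}_i \rangle$$ if the vector lengths and indices meet the prescribed relation and every attribute $\mathrm{att}_i$ satisfies the policy $\mathrm{pol}$ of the key, and $\perp$ otherwise, with no bound on the lengths of the $\vec{x}_i$ fixed at setup.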
Private Computation on Common Fuzzy Records
Private computation on common records refers to analyzing data from two databases that contain shared records, without revealing personal information. As a basic requirement for private computation, the databases involved essentially need to be aligned by a common identification system. However, such common identifiers can hardly be expected in real-world scenarios. For this reason, multiple quasi-identifiers can be used to identify common records. As some quasi-identifiers might be missing or contain typos, it is important to support a fuzzy-records setting. Identifying common records using quasi-identifiers requires manipulating highly sensitive information, which raises privacy concerns.
This work studies the problem of enabling such data analysis on fuzzy records of quasi-identifiers. To this end, we propose ordered threshold-one (OTO) matching, which can be efficiently realized by circuit-based private set intersection (CPSI) protocols and multiparty computation (MPC) techniques. Furthermore, we introduce generic techniques for encoding traditional matching rules into OTO matching. Finally, we achieve a secure and efficient private computation protocol that supports various matching rules already in wide use.
We also demonstrate the superiority of our proposal with experimental validation. First, we empirically check that our encoding to OTO matching has little effect on accuracy for the benchmark datasets found in the fuzzy record matching literature. Second, we implement our protocol and achieve significantly faster performance, at the cost of communication overhead, compared to previous privacy-preserving record linkage (PPRL) protocols. For 100K records per dataset, our protocol incurs 147.58MB of communication, 10.71s of setup time, and 1.97s of online time, which is 7.78 times faster than the previous work (50.12 times faster when considering online time only).
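To illustrate the setting (a generic plaintext threshold rule, not the paper's OTO matching), two records can be linked if enough quasi-identifiers agree exactly, tolerating missing fields and typos in the rest:

    FIELDS = ("name", "birthdate", "zip")

    def linked(rec_a, rec_b, t=2):
        """Link two records if at least t quasi-identifiers agree exactly."""
        agree = sum(1 for f in FIELDS
                    if rec_a.get(f) is not None and rec_a.get(f) == rec_b.get(f))
        return agree >= t

    bank = {"name": "alice", "birthdate": "1990-01-01", "zip": "12345"}
    tax = {"name": "alicia", "birthdate": "1990-01-01", "zip": "12345"}  # name typo
    assert linked(bank, tax)  # still matched on birthdate and zip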
A Note on Obfuscation-based Attacks on Private-coin Evasive LWE
The evasive learning with errors (evasive LWE) assumption is a recent assumption introduced independently by Wee (Eurocrypt 2022) and Tsabary (Crypto 2022) as a significant strengthening of the standard LWE assumption.
While the assumption is known to imply various strong primitives including witness encryption [Wee22,Tsabary22], the assumption in its most general form (i.e., the private-coin variant) is considered quite implausible due to the obfuscation-based attack mentioned in [Wee22]. This obfuscation-based attack was later formalized by Vaikuntanathan, Wee, and Wichs [VWW22].
In this note, we revisit their attack and show, via a concrete counterexample, that the attack as described does not work. We then show that the attack can be made valid with some modifications. Along the way, we also strengthen the counterexample by making it provable. Specifically, our counterexample is valid assuming the (plain) LWE assumption and the existence of instance-hiding witness encryption, whereas the original counterexample depended on the heuristic existence of an ideal obfuscation.
Non-Interactive Verifiable Aggregation
Consider a weak analyst that wishes to outsource data collection and computation of aggregate statistics over a potentially large population of (also weak) clients to a powerful server. For flexibility and efficiency, we consider public-key and non-interactive protocols, meaning the clients know the analyst's public key but do not share secrets, and each client sends at most one message. Furthermore, the final step should be silent, whereby the analyst simply downloads the (encrypted) result from the server when needed. To capture this setting, we define a new primitive we call Non-Interactive Verifiable Aggregation (NIVA).
We require both privacy and robustness for a NIVA protocol to be deemed secure. Namely, our security notion for NIVA ensures that the clients' data remains private to both the server and the analyst, while also ensuring that malicious clients cannot skew the results by providing faulty data.
We propose a secure NIVA protocol, which we call PEAR (for Private, Efficient, Accurate, Robust), which can validate inputs according to any NP validity rule. PEAR is based on a novel combination of functional encryption for inner-products (Abdalla et al., PKC 2015) and fully-linear probabilistically-checkable proofs (Boneh et al., Crypto 2019). We emphasize that PEAR is non-interactive, public-key, and makes black-box use of the underlying cryptographic primitives. Additionally, we devise substantial optimizations of PEAR for practically-relevant validity rules. Finally, we implement PEAR to show feasibility for such validity rules, conducting a thorough performance evaluation. In particular, we compare PEAR to two more straightforward or "off-the-shelf" NIVA protocols and show performance gains, demonstrating the merit of our new approach. The bottleneck in our protocol comes from the fact that we require the underlying IPFE scheme to be "unrestricted" over a large field. As more efficient such schemes are developed, they can immediately be plugged into PEAR for further gains.
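As a plaintext mock of the core idea (our illustration, not PEAR itself): an inner-product key for the all-ones weight vector reveals only the aggregate of the validated client inputs, never the individual contributions.

    # The analyst's IPFE key encodes y = (1, ..., 1), so decrypting the
    # clients' (validated) ciphertexts yields only <x, y> = sum(x).
    client_inputs = [3, 5, 2, 7]            # hypothetical validated inputs
    weights = [1] * len(client_inputs)      # inner-product key vector
    aggregate = sum(x * y for x, y in zip(client_inputs, weights))
    assert aggregate == 17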
Samaritan: Linear-time Prover SNARK from New Multilinear Polynomial Commitments
We study linear-time prover SNARKs and make the following contributions:
We provide a framework for transforming a univariate polynomial commitment scheme into a multilinear polynomial commitment scheme. Our transformation is generic, can be instantiated with any univariate scheme and improves on prior transformations like Gemini (EUROCRYPT 2022) and Virgo (S&P 2020) in all relevant parameters: proof size, verification complexity, and prover complexity. Instantiating the above framework with the KZG univariate polynomial commitment scheme, we get SamaritanPCS – the first multilinear polynomial commitment scheme with constant proof size and linear-time prover. SamaritanPCS is a drop-in replacement for the popular PST scheme, and improves upon PST in all relevant parameters.
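For context, prior transformations such as Gemini reduce a multilinear evaluation claim to univariate ones by a split-and-fold recursion (our rendering of the standard identity): encode the coefficients $c_i$ of the multilinear $\tilde{f}$ as $f^{(0)}(Y) = \sum_{i=0}^{2^\mu-1} c_i Y^i$ and, for $j = 1,\dots,\mu$, split $f^{(j-1)}(Y) = f_e^{(j-1)}(Y^2) + Y\, f_o^{(j-1)}(Y^2)$ and fold $f^{(j)}(Y) = f_e^{(j-1)}(Y) + z_j\, f_o^{(j-1)}(Y)$, so that $f^{(\mu)}$ is the constant $\tilde{f}(z_1,\dots,z_\mu)$. The framework above achieves the same reduction with better proof size, verification, and prover costs.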
We construct LogSpartan – a new multilinear PIOP for R1CS based on recent techniques for lookup arguments. Compiling this PIOP using SamaritanPCS gives Samaritan – a SNARK in the universal and updatable SRS setting. Samaritan has a linear-time prover, logarithmic verification, and logarithmic proof size. Concretely, its proof size is among the smallest of known linear-time prover SNARKs that do not rely on concretely expensive proof recursion techniques. For an R1CS instance with 1 million constraints, Samaritan (over the BLS12-381 curve) has a proof size of 6.7KB.
We compare Samaritan with other linear-time prover SNARKs in the updatable setting. We asymptotically improve on the $\log^2 n$ proof size of Spartan. Unlike Libra (CRYPTO 2019), the argument size of Samaritan is independent of the circuit depth. Compared to Gemini (EUROCRYPT 2022), Samaritan achieves a 3$\times$ smaller argument size at 1 million constraints. We match the argument size of HyperPlonk, which is the smallest among linear-time SNARKs for the Plonkish constraint system, while achieving slightly better verification complexity.
We believe that our transformation and our techniques for applying lookups based on logarithmic derivatives to the multilinear setting are of wider interest.
ProofFrog: A Tool For Verifying Game-Hopping Proofs
Cryptographic proofs allow researchers to provide theoretical guarantees on the security that their constructions provide. A proof of security can completely eliminate a class of attacks by potential adversaries. Human fallibility, however, means that even a proof reviewed by experts may still hide flaws or outright errors. Proof assistants are software tools built for the purpose of formally verifying each step in a proof, and as such have the potential to prevent erroneous proofs from being published and insecure constructions from being implemented.
Unfortunately, existing tooling for verifying cryptographic proofs has found limited adoption in the cryptographic community, in part due to concerns with ease of use. We present ProofFrog: a new tool for verifying cryptographic game-hopping proofs. ProofFrog is designed with the average cryptographer in mind, using an imperative syntax similar to C for specifying games and a syntax for proofs that closely models pen-and-paper arguments. As opposed to other proof assistant tools which largely operate by manipulating logical formulae, ProofFrog manipulates abstract syntax trees (ASTs) into a canonical form to establish indistinguishable or equivalent behaviour for pairs of games in a user-provided sequence. We also detail the domain-specific language developed for use with the ProofFrog proof engine, the exact transformations it applies to canonicalize ASTs, and case studies of verified proofs. A tool like ProofFrog that prioritizes ease of use can lower the barrier of entry to using computer-verified proofs and aid in catching insecure constructions before they are made public.
Evaluation of Privacy-aware Support Vector Machine (SVM) Learning using Homomorphic Encryption
The requirement for privacy-aware machine learning increases as we continue to use PII (Personally Identifiable Information) in model training. To overcome these privacy issues, we can apply Fully Homomorphic Encryption (FHE) to encrypt data before it is fed into a machine learning model. This involves creating a homomorphic encryption key pair, where the public key encrypts the input data and the private key decrypts the output. However, homomorphic encryption often carries a performance cost, and so this paper evaluates the performance overhead of using the SVM machine learning technique with the OpenFHE homomorphic encryption library. The implementation uses Python and the scikit-learn library. The experiments cover a range of variables such as multiplication depth, scale size, first modulus size, security level, batch size, and ring dimension, along with two different SVM models, SVM-Poly and SVM-Linear. Overall, the results show that the two main parameters affecting performance are the ring dimension and the modulus size, and that SVM-Poly and SVM-Linear show similar performance levels.
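As a plaintext reference for what is evaluated under encryption (our sketch; the names are illustrative): a linear SVM needs essentially one encrypted dot product, while a polynomial kernel consumes additional multiplicative depth with its degree, which is why multiplication depth is among the swept parameters.

    import numpy as np

    def svm_linear_score(w, b, x):
        # One ciphertext-plaintext dot product plus an addition under FHE.
        return np.dot(w, x) + b

    def svm_poly_score(sv, coef, b, x, gamma=1.0, c0=0.0, degree=3):
        # (gamma * <sv_i, x> + c0) ** degree: the exponentiation consumes
        # roughly log2(degree) multiplicative levels per kernel evaluation.
        k = (gamma * (sv @ x) + c0) ** degree
        return coef @ k + b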
Trapdoor Hash Functions and PIR from Low-Noise LPN
Trapdoor hash functions (TDHs) are compressing hash functions with an additional trapdoor functionality: given an encoding key for a function $f$, a hash on $x$ together with a (small) input encoding allows one to recover $f(x)$. TDHs are a versatile tool and a useful building block for more complex cryptographic protocols.
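A hypothetical interface for this syntax (the names are ours, not the paper's) makes the data flow concrete: the hash of $x$ is short, and the input encoding is small relative to $x$.

    from typing import Any, Callable, Protocol, Tuple

    class TrapdoorHash(Protocol):
        def hash(self, hk: Any, x: bytes) -> bytes: ...              # compressing digest of x
        def gen(self, hk: Any, f: Callable) -> Tuple[Any, Any]: ...  # (encoding key, trapdoor) for f
        def encode(self, ek: Any, x: bytes) -> bytes: ...            # small input encoding
        def decode(self, td: Any, h: bytes, e: bytes) -> Any: ...    # recovers f(x) from h and e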
In this work, we propose the first TDH construction assuming the (quasi-polynomial) hardness of the LPN problem with noise rate $\epsilon = O(\log^{1+\beta} n / n)$ for $\beta>0$, i.e., in the so-called low-noise regime. The construction achieves $2^{\Theta(\log^{1-\beta} \lambda)}$ compression factor. As an application, we obtain a private-information retrieval (PIR) with communication complexity $L / 2^{\Theta(\log^{1-\beta} L)}$, for a database of size $L$. This is the first PIR scheme with non-trivial communication complexity (asymptotically smaller than $L$) from any code-based assumption.
On the Soundness of Algebraic Attacks against Code-based Assumptions
We study recent algebraic attacks (Briaud-Øygarden EC'23) on the Regular Syndrome Decoding (RSD) problem and the assumptions underlying the correctness of their attacks' complexity estimates. By relating these assumptions to interesting algebraic-combinatorial problems, we prove that they do not hold in full generality. However, we show that they are (asymptotically) true for most parameter sets, supporting the soundness of algebraic attacks on RSD. Further, we prove—without any heuristics or assumptions—that RSD can be broken in polynomial time whenever the number of error blocks times the square of the size of error blocks is larger than twice the square of the dimension of the code (writing $h$ for the number of error blocks, $b$ for their size, and $k$ for the code dimension: whenever $h\,b^2 > 2k^2$).
Additionally, we use our methodology to attack a variant of the Learning With Errors problem where each error term lies in a fixed set of constant size. We prove that this problem can be broken in polynomial time, given a sufficient number of samples. This result improves on the seminal work by Arora and Ge (ICALP'11), as the attack's time complexity is independent of the LWE modulus.
Deimos Cipher: A High-Entropy, Secure Encryption Algorithm with Strong Diffusion and Key Sensitivity
Deimos Cipher is a symmetric encryption algorithm designed to achieve high entropy, strong diffusion, and computational efficiency. It integrates HKDF with BLAKE2b for key expansion, ensuring secure key derivation from user-supplied passwords. The encryption process employs XChaCha20, a high-speed stream cipher, to provide strong security and resistance against nonce reuse attacks. To guarantee data integrity and authentication, HMAC-SHA256 is used, preventing unauthorized modifications.
Security evaluations demonstrate that Deimos Cipher exhibits superior randomness, achieving 6.24 bits per byte of entropy for short plaintexts and 7.9998 bits per byte for long plaintexts, surpassing industry standards like AES and ChaCha20. Avalanche effect analysis confirms near-ideal diffusion, with a 50.18% average bit change, indicating high resistance to differential cryptanalysis. Additionally, key sensitivity tests reveal a 50.54% ciphertext change for minimal key variations, making brute-force and key-recovery attacks impractical.
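The bits-per-byte figures refer to the Shannon entropy of the ciphertext's byte histogram, computable as follows (a standard metric whose maximum, 8.0, corresponds to uniformly random bytes):

    import math
    from collections import Counter

    def entropy_bits_per_byte(data: bytes) -> float:
        """Shannon entropy of the byte-frequency histogram (at most 8.0)."""
        n = len(data)
        return -sum((c / n) * math.log2(c / n) for c in Counter(data).values())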
With its combination of a robust key expansion mechanism, stream cipher encryption, and cryptographic authentication, Deimos Cipher offers a secure and efficient encryption scheme suitable for secure messaging, cloud data protection, and high-security environments. This paper presents the algorithm’s design, security analysis, and benchmarking against established cryptographic standards.
Garblet: Multi-party Computation for Protecting Chiplet-based Systems
The introduction of shared computation architectures assembled from heterogeneous chiplets introduces new security threats. Due to the shared logical and physical resources, an untrusted chiplet can act maliciously to surreptitiously probe the data communication between chiplets or sense the computation shared between them. This paper presents Garblet, the first framework to leverage the flexibility offered by chiplet technology and Garbled Circuits (GC)-based MPC to enable efficient, secure computation even in the presence of potentially compromised chiplets. Our approach integrates a customized hardware Oblivious Transfer (OT) module and an optimized evaluator engine into chiplet-based platforms. This configuration distributes the tasks of garbling and evaluating circuits across two chiplets, reducing communication costs and enhancing computation speed. We implement this framework on an AMD/Xilinx UltraScale+ multi-chip module and demonstrate its effectiveness using benchmark functions. Additionally, we introduce a novel circuit decomposition technique that allows for parallel processing across multiple chiplets to further improve computational efficiency. Our results highlight the potential of chiplet systems for accelerating GC (e.g., garbled evaluation of AES in 0.0226ms) in order to guarantee the security and privacy of the computation on chiplets.
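For readers unfamiliar with GC, a toy garbled AND gate conveys the mechanism that hardware engines like Garblet's accelerate (our minimal sketch; a real garbler would permute the rows and use optimizations such as point-and-permute or half-gates):

    import hashlib, os

    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    H = lambda a, b: hashlib.sha256(a + b).digest()  # 32-byte one-time pad

    # Garbler: one random 16-byte label per possible value of each wire.
    wire = {w: (os.urandom(16), os.urandom(16)) for w in ("a", "b", "out")}

    # Encrypt each output label (zero-padded as an integrity check) under
    # the hash of the matching pair of input labels.
    rows = [xor(H(wire["a"][va], wire["b"][vb]), wire["out"][va & vb] + b"\x00" * 16)
            for va in (0, 1) for vb in (0, 1)]

    # Evaluator: holds exactly one label per input wire, so exactly one row
    # decrypts correctly; the labels reveal nothing about the actual bits.
    def evaluate(la, lb):
        for row in rows:
            cand = xor(H(la, lb), row)
            if cand.endswith(b"\x00" * 16):
                return cand[:16]

    assert evaluate(wire["a"][1], wire["b"][1]) == wire["out"][1]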
Multi-Authority Functional Encryption: Corrupt Authorities, Dynamic Collusion, Lower Bounds, and More
Decentralization is a great enabler for the adoption of modern cryptography in real-world systems. The widespread adoption of blockchains and secure multi-party computation protocols are prime examples of the dramatic rise in deployment of decentralized cryptographic systems. Much of cryptographic research can be viewed as reducing (or eliminating) the dependence on trusted parties, while shielding from stronger adversarial threats. In this work, we study the problem of multi-authority functional encryption (MAFE), a popular decentralized generalization of functional encryption (FE). Our main contributions are:
1. We design MAFE for all poly-sized circuits, in the bounded collusion model, under the minimal assumption of PKE/OWFs. Prior to our work, this required either sub-exponentially secure obfuscation, or $\log n$-party key exchange, or Random Oracles and sub-exponentially secure PKE. We also extend our constructions to the dynamic collusion model under the minimal assumptions of IBE/OWFs. Unlike all prior works, our MAFE systems are truly dynamic and put no restrictions on the maximum number of authorities.
2. Under the hardness of the learning with errors (LWE) assumption, we design MAFE for all poly-sized circuits where we allow adversaries to adaptively corrupt local authorities. We allow an adversary to corrupt any $k$ out of $n$ local authorities as long as ${{n}\choose{k}}$ = poly$(\lambda)$ (as holds, e.g., for constant $k$). Prior to this, such MAFE relied on sub-exponentially secure obfuscation. Additionally, we design a new MAFE compiler for boosting selective authority corruptions to non-adaptive authority corruptions.
3. We prove a tight implication from MAFE to (VBB/indistinguishability) obfuscation. We show that MAFE implies obfuscation only if the number of attribute bits (jointly) controlled by all corrupt local authorities is $\omega(\log \lambda)$. This proves optimality of our second result for a wide range of parameters.
4. Finally, we propose a new MAFE system that we refer to as multi-authority attribute-based functional encryption (MA-ABFE). We view it as an approach to get the best of both worlds (fully collusion-resistant MA-ABE, and bounded collusion-resistant MAFE). By combining our results with prior MA-ABE results, we obtain MA-ABFE for $\mathsf{NC}^1 \circ \mathsf{P}/\mathsf{Poly}$ from standard pairing-based assumptions, and for $\mathsf{DNF} \circ \mathsf{P}/\mathsf{Poly}$ from LWE, both in the Random Oracle Model. We also describe a simple construction of MA-ABE for general predicates from witness encryption and, combined with known results, obtain MA-ABFE for $\mathsf{P}/\mathsf{Poly} \circ \mathsf{P}/\mathsf{Poly}$ from evasive LWE.
Security of the Ascon Authenticated Encryption Mode in the Presence of Quantum Adversaries
We examine the post-quantum security of the Ascon authenticated encryption (AE) mode. Despite comprehensive research on Ascon's classical security, the potential impact of quantum adversaries on Ascon has not yet been explored much. We investigate the generic security of the Ascon AE mode in the setting where the adversary has access to a quantum computer to improve its attack, while the adversarial encryption and decryption queries remain classical. In this so-called Q1 model, Ascon achieves security up to approximately $\min\{2^{c/3},2^{k/2}\}$ evaluations, where $c$ is the capacity, $k$ the key size, and the adversary is block-wise adaptive but restricted to one forgery attempt. Our technique is based on applying the semi-classical one-way to hiding (O2H) lemma, and on tailoring the puncture set to the Ascon mode.
Additionally, we discuss different parameter choices for Ascon and compare our results to generic quantum attacks, such as Grover-based key search and state recovery. For instance, with Ascon-128's parameters ($c = 256$, $k = 128$), the bound above evaluates to $\min\{2^{85.3}, 2^{64}\} = 2^{64}$.