Linear Proximity Gap for Linear Codes within the 1.5 Johnson Bound
We establish a linear proximity gap for linear codes within the one-and-a-half Johnson bound. Specifically, we investigate the proximity gap for linear codes, revealing that any affine subspace is either entirely $\delta$-close to a linear code or nearly all of its members are $\delta$-far from it. When $\delta$ is within the one-and-a-half Johnson bound, we prove, for the latter case, an upper bound on the number of members (in the affine subspace) that are $\delta$-close to the linear code. Our bound is linear in the length of codewords.
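Stated as a dichotomy (one way to phrase the abstract's claim, with $C$ the linear code, $U$ the affine subspace, $\Delta$ the relative Hamming distance, and $n$ the codeword length; constants omitted): $$\text{either}\quad \Delta(u, C) \le \delta \ \ \text{for every } u \in U, \qquad \text{or}\quad \bigl|\{\, u \in U : \Delta(u, C) \le \delta \,\}\bigr| \le O(n).$$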
In comparison, Ben-Sasson, Carmon, Ishai, Kopparty and Saraf [FOCS 2020] work on Reed-Solomon codes and prove a linear bound when $\delta$ is within the unique decoding bound and a quadratic bound when $\delta$ is within the Johnson bound. Note that when the minimum relative distance of the linear code exceeds 0.77, the one-and-a-half Johnson bound is better than the unique decoding bound.
Proximity gaps for linear codes have implications in various code-based protocols. In many cases, a property stronger than individual distance—known as correlated agreement—is required, i.e., functions in the affine subspace are not only $\delta$-close to a linear code but also agree on the same evaluation domain. Our results support this stronger property. Furthermore, mutual correlated agreement, a further strengthening of this property, is also supported.
Foundations of Adaptor Signatures
Adaptor signatures extend the functionality of regular signatures through the computation of pre-signatures on messages for statements of NP relations. Pre-signatures are publicly verifiable; they simultaneously hide and commit to a signature of an underlying signature scheme on that message. Anybody possessing a corresponding witness for the statement can adapt the pre-signature to obtain the "regular" signature. Adaptor signatures have found numerous applications for conditional payments in blockchain systems, like payment channels (CCS'20, CCS'21), private coin mixing (CCS'22, SP'23), and oracle-based payments (NDSS'23).
In our work, we revisit the state of the security of adaptor signatures and their constructions. In particular, our two main contributions are:
- Security Gaps and Definitions: We review the widely-used security model of adaptor signatures due to Aumayr et al. (ASIACRYPT'21) and identify gaps in their definitions that render known protocols for private coin-mixing and oracle-based payments insecure. We give simple counterexamples of adaptor signatures that are secure w.r.t. their definitions but result in insecure instantiations of these protocols. To fill these gaps, we identify a minimal set of modular definitions that align with these practical applications.
- Secure Constructions: Despite their popularity, all known constructions are (1) derived from identification schemes via the Fiat-Shamir transform in the random oracle model or (2) require modifications to the underlying signature verification algorithm, thus making the construction useless in the setting of cryptocurrencies. More concerningly, all known constructions were proven secure w.r.t. the insufficient definitions of Aumayr et al., leaving us with no provably secure adaptor signature scheme to use in applications.
Firstly, in this work, we salvage all current applications by proving the security of the widely-used Schnorr adaptor signatures under our proposed definitions. We then provide several new constructions, including presenting the first adaptor signature schemes for Camenisch-Lysyanskaya (CL), Boneh-Boyen-Shacham (BBS+), and Waters signatures, all of which are proven secure in the standard model. Our new constructions rely on a new abstraction of digital signatures, called dichotomic signatures, which covers the essential properties we need to build adaptor signatures. Proving the security of all constructions (including identification-based schemes) relies on a novel non-black-box proof technique. Both our digital signature abstraction and the proof technique could be of independent interest to the community.
Breaking BASS
We provide several attacks on the BASS signature scheme introduced by Grigoriev, Ilmer, Ovchinnikov and Shpilrain in 2023. We lay out a trivial forgery attack which generates signatures passing the scheme's probabilistic signature verification with high probability. Generating these forgeries is faster than generating signatures honestly. Moreover, we describe a key-only attack which allows us to recover an equivalent private key from a signer's public key. The time complexity of this recovery is asymptotically the same as that of signing messages.
An Unstoppable Ideal Functionality for Signatures and a Modular Analysis of the Dolev-Strong Broadcast
Many foundational results in the consensus literature follow the Dolev-Yao model (FOCS '81), which treats digital signatures as ideal objects with perfect correctness and unforgeability. However, no work has yet formalized an ideal signature scheme that is both suitable for this methodology and possible to instantiate, or a composition theorem that ensures security when instantiating it cryptographically.
The Universal Composition (UC) framework would ensure composition if we could specify an ideal functionality for signatures and prove it UC-realizable. Unfortunately, all signature functionalities heretofore proposed are problematic when used to construct higher-level protocols: either the functionality internally computes a computationally secure signature, and therefore higher-level protocols must rely upon computational assumptions, or else the functionality introduces a new attack surface that does not exist when the functionality is realized. As a consequence, no consensus protocol has ever been analyzed in a modular way using existing ideal signature functionalities.
We propose a new unstoppable ideal functionality for signatures that is UC-realized exactly by the set of standard EUF-CMA signature schemes that are consistent and linear time. No adversary can prevent honest parties from obtaining perfectly ideal signature services from our functionality. We showcase its usefulness by presenting the first modular analysis of the Dolev-Strong broadcast protocol (SICOMP '83). Our result can be interpreted as a step toward a sound realization of the Dolev-Yao methodology. We also generalize our result to the threshold setting.
Encrypted RAM Delegation: Applications to Rate-1 Extractable Arguments, Homomorphic NIZKs, MPC, and more
In this paper we introduce the notion of encrypted RAM delegation. In an encrypted RAM delegation scheme, the prover creates a succinct proof for a pair of input strings $x_\mathsf{pb}$ and $x_\mathsf{pr}$, where $x_\mathsf{pb}$ corresponds to a large \emph{public} input and $x_\mathsf{pr}$ is a \emph{private} input. A verifier can check correctness of the computation of a RAM machine $\mathcal{M}$ on $(x_\mathsf{pb}, x_\mathsf{pr})$, given only the proof $\pi$ and $x_\mathsf{pb}$.
We design encrypted RAM delegation schemes from a variety of standard assumptions such as DDH, LWE, or $k$-linear. We prove a strong knowledge-soundness guarantee for our scheme, as well as a special input-hiding property ensuring that $\pi$ does not leak anything about $x_\mathsf{pr}$.
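To make the syntax concrete, here is a minimal interface sketch; the names (`Proof`, `prove`, `verify`) and signatures are hypothetical illustrations, not taken from the paper:

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical interface sketch for encrypted RAM delegation; names and
# signatures are illustrative only.

@dataclass
class Proof:
    data: bytes  # succinct: much smaller than |x_pb| + |x_pr|

def prove(M: Callable[[bytes, bytes], bytes], x_pb: bytes, x_pr: bytes) -> Proof:
    """Prover: runs the RAM machine M on (x_pb, x_pr) and outputs a succinct
    proof pi; input hiding requires that pi leak nothing about x_pr."""
    raise NotImplementedError

def verify(M: Callable[[bytes, bytes], bytes], x_pb: bytes, pi: Proof) -> bool:
    """Verifier: checks correctness of M's computation given only pi and the
    large public input x_pb; it never sees the private input x_pr."""
    raise NotImplementedError
```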
We follow this by describing multiple applications of encrypted RAM delegation. First, we show how to design a rate-1 non-interactive zero-knowledge (NIZK) argument system with a straight-line extractor. Despite over 30 years of research, the only known construction of rate-1 NIZKs from standard assumptions relied on fully homomorphic encryption. Thus, we provide the first rate-1 NIZK scheme based purely on the DDH or $k$-linear assumptions.
Next, we also design fully-homomorphic NIZKs from encrypted RAM delegation. The only prior solution crucially relied on algebraic properties of pairing-based NIZKs, thus was only known from the decision linear assumption. We provide the first fully-homomorphic NIZK system from LWE (thus post-quantum security) and from DDH-hard groups.
We also provide a communication-complexity-preserving compiler from a wide class of semi-malicious multiparty computation (MPC) protocols to fully malicious MPC protocols. This is the first such compiler for a wide class of MPC protocols, as comparable compilers in prior works relied on strong non-falsifiable assumptions such as zero-knowledge succinct non-interactive arguments of knowledge (zkSNARKs). Moreover, we show many other applications to composable zero-knowledge batch arguments, succinct delegation of committed programs, and fully context-hiding multi-key multi-hop homomorphic signatures.
Solving the Shortest Vector Problem in $2^{0.63269n+o(n)}$ time on Random Lattices
The Shortest Vector Problem (SVP) is the most important problem in lattice-based cryptanalysis. There is currently a gap in our understanding of this problem with respect to its worst-case complexity and its average-case behaviour. For instance, SVP on an $n$-dimensional lattice has worst-case complexity $2^{n+o(n)}$ [ADRS15]. However, in practice, practitioners rely on heuristic (unproven) sieving algorithms of time complexity $2^{0.292n+o(n)}$ [BDGL16] to assess the security of lattice-based cryptography schemes. These heuristic algorithms are experimentally verified for lattices used in cryptography, which are usually random in some way.
In this paper, we try to bridge the gap between worst-case and heuristic algorithms. Using the formalism of random real lattices developed by Siegel, we show a tighter upper bound on an important lattice parameter called the smoothing parameter that applies to almost all random lattices. Using a known discrete Gaussian sampler at the smoothing parameter, we can then directly sample short vectors. This allows us to provably solve an approximation version of the SVP on almost all random lattices with the small constant approximation factor $1.123$, in time $2^{n/2+o(n)}$. With further analysis, we can provably solve the exact SVP in time $2^{0.63269n+o(n)}$ on almost all random lattices as well. We also provide a smooth tradeoff between running time and approximation factor between these two cases. All our algorithms work in space $2^{n/2+o(n)}$. Our paper is a first step towards better understanding the heuristic behaviour of lattice sieving on random lattices.
Quantum Chosen-Cipher Attack on Camellia
The Feistel structure is a fundamental building block of symmetric cryptographic algorithms and has been studied extensively in the classical setting. In the quantum setting, however, research on concrete symmetric algorithms built on the Feistel structure remains relatively scarce. Building on the 4-round distinguisher proposed by Ito et al. for the Feistel structure in the quantum chosen-ciphertext attack (qCCA) setting, this paper introduces a 5-round distinguisher for Camellia, whose efficacy has been empirically validated. Furthermore, by combining Grover's algorithm with Simon's algorithm and exploiting the characteristics of Camellia's key schedule, we construct a 9-round key-recovery attack on Camellia. The time complexity for acquiring the correct key bits is $2^{61.5}$, and the attack requires 531 qubits. This is the first chosen-ciphertext attack on Camellia under the Q2 model.
Siniel: Distributed Privacy-Preserving zkSNARK
Zero-knowledge Succinct Non-interactive Argument of Knowledge (zkSNARK) is a powerful cryptographic primitive, in which a prover convinces a verifier that a given statement is true without leaking any additional information. However, existing zkSNARKs suffer from high computation overhead in the proof generation. This limits the applications of zkSNARKs, such as private payments, private smart contracts, and anonymous credentials. Private delegation has become a prominent way to accelerate proof generation.
In this work, we propose Siniel, an efficient private delegation framework for zkSNARKs constructed from polynomial interactive oracle proofs (PIOPs) and polynomial commitment schemes (PCSs). Our protocol allows a computationally limited prover (a.k.a. delegator) to delegate its expensive prover computation to several workers without leaking any information about the private witness. Most importantly, compared with the recent work EOS (USENIX'23), the state-of-the-art zkSNARK prover-delegation framework, a prover in Siniel need not engage in the MPC protocol after sending its shares of the private witness. This means that a Siniel prover can outsource the entire computation to the workers.
We compare Siniel with EOS and show significant performance advantages of the former. The experimental results show that under low-bandwidth conditions (10 MBps) Siniel saves delegators about 16% of the time compared with EOS, whereas under high-bandwidth conditions (1000 MBps) it saves about 80%.
ColliderScript: Covenants in Bitcoin via 160-bit hash collisions
We introduce a method for enforcing covenants on Bitcoin outputs without requiring any changes to Bitcoin, by designing a hash-collision-based equivalence check which bridges Bitcoin's limited Big Script to Bitcoin's Small Script. This allows us to evaluate the signature of the spending transaction (available only to Big Script) in Small Script. As Small Script enables arbitrary computations, we can introspect into the spending transaction and enforce covenants on it.
Our approach relies on finding collisions in the $160$-bit hash functions SHA-1 and RIPEMD-160. By the birthday bound, this should cost $\sim2^{80}$ work. Each spend of our covenant costs $\sim2^{86}$ hash queries and $\sim2^{56}$ bytes of space. For security, we rely on an assumption regarding the hardness of finding a $3$-way collision (with short random inputs) in $160$-bit hash functions, arguing that if the assumption holds, breaking covenant enforcement requires $\sim2^{110}$ hash queries. To put this in perspective, the work to spend our covenant is $\sim33$ hours of the Bitcoin mining network, whereas breaking our covenant requires $\sim 450,000$ years of the Bitcoin mining network.
We believe there are multiple directions of future work that can significantly improve these numbers.
Evaluating covenants and our equivalence check requires performing many operations in Small Script, which must take no more than $4$ megabytes in total size, as Bitcoin does not allow transactions greater than $4$ megabytes. We only provide rough estimates of the transaction size because, as of this writing, no Small Script implementations of the required hash functions, SHA-1 and RIPEMD-160, have been written.
Investigation of the Optimal Linear Characteristics of BAKSHEESH (Full Version)
This paper aims to provide a more comprehensive understanding of the optimal linear characteristics of BAKSHEESH. First, we propose an explicit formula for the absolute correlation of the $R$-round optimal linear characteristic of BAKSHEESH for $R \geqslant 12$. By examining the linear characteristics of BAKSHEESH with three active S-boxes per round, we derive some properties of the three active S-boxes in each round. Furthermore, we demonstrate that there is only one 1-round iterative linear characteristic with three active S-boxes. Since this 1-round linear characteristic is unique, it must be included in any $R$-round ($R \geqslant 12$) linear characteristic of BAKSHEESH with three active S-boxes per round. Finally, we confirm that the total number of $R$-round optimal linear characteristics of BAKSHEESH is $3072$ for $R \geqslant 12$. All of these characteristics are generated by employing the 1-round iterative linear characteristic.
Privacy-Preserving Multi-Party Search via Homomorphic Encryption with Constant Multiplicative Depth
We propose a privacy-preserving multiparty search protocol using threshold-level homomorphic encryption, which we prove correct and secure against honest-but-curious adversaries. Unlike existing approaches, our protocol maintains a constant circuit depth. This feature enhances its suitability for practical applications involving dynamic underlying databases.
Consensus Under Adversary Majority Done Right
A specter is haunting consensus protocols—the specter of adversary majority. Dolev and Strong in 1983 showed an early possibility for up to 99% adversaries. Yet, other works show impossibility results for adversaries above 50% under synchrony, seemingly the same setting as Dolev and Strong's. What gives? It is high time that we pinpoint a key culprit for this ostensible contradiction: the modeling details of clients. Are the clients sleepy or always-on? Are they silent or communicating? Can validators be sleepy too? We systematize models for consensus across four dimensions (sleepy/always-on clients, silent/communicating clients, sleepy/always-on validators, and synchrony/partial-synchrony), some of which are new, and tightly characterize the achievable safety and liveness resiliences with matching possibilities and impossibilities for each of the sixteen models. To this end, we unify folklore and earlier results, and fill gaps left in the literature with new protocols and impossibility theorems.
Quantum One-Time Protection of any Randomized Algorithm
The meteoric rise in power and popularity of machine learning models dependent on valuable training data has reignited a basic tension between the power of running a program locally and the risk of exposing details of that program to the user. At the same time, fundamental properties of quantum states offer new solutions to data and program security that can require strikingly few quantum resources to exploit, and offer advantages outside of mere computational run time. In this work, we demonstrate such a solution with quantum one-time tokens.
A quantum one-time token is a quantum state that permits a certain program to be evaluated exactly once. One-time security guarantees, roughly, that the token cannot be used to evaluate the program more than once. We propose a scheme for building quantum one-time tokens for any randomized classical program, including generative AI models. We prove that the scheme satisfies an interesting definition of one-time security, in a black-box model, as long as outputs of the classical algorithm have high enough min-entropy.
Importantly, the classical program being protected does not need to be implemented coherently on a quantum computer. In fact, the size and complexity of the quantum one-time token are independent of the program being protected, and additional quantum resources serve only to increase the security of the protocol. Due to this flexibility in adjusting the security, we believe that our proposal is parsimonious enough to serve as a promising candidate for a near-term useful demonstration of quantum computing in either the NISQ or early fault tolerant regime.
FLock: Robust and Privacy-Preserving Federated Learning based on Practical Blockchain State Channels
\textit{Federated Learning} (FL) is a distributed machine learning paradigm that allows multiple clients to train models collaboratively without sharing local data. Numerous works have explored security and privacy protection in FL, as well as its integration with blockchain technology. However, existing FL works still face critical issues. \romannumeral1) It is difficult to achieve \textit{poisoning robustness} and \textit{data privacy} while ensuring high \textit{model accuracy}. Malicious clients can launch \textit{poisoning attacks} that degrade the global model. Besides, aggregators can infer private data from the gradients, causing \textit{privacy leakage}. Existing privacy-preserving poisoning-defense FL solutions suffer from decreased model accuracy and high computational overhead. \romannumeral2) Blockchain-assisted FL records iterative gradient updates on-chain to prevent model tampering, yet existing schemes are not compatible with practical blockchains and incur high costs for maintaining the gradients on-chain. Besides, incentives are overlooked, and unfair reward distribution hinders the sustainable development of the FL community. In this work, we propose FLock, a robust and privacy-preserving FL scheme based on practical blockchain state channels. First, we propose a lightweight, secure \textit{Multi-party Computation} (MPC)-friendly robust aggregation method based on quantization, the median, and Hamming distance, which can resist poisoning attacks by up to ${<}50\%$ malicious clients. Besides, we propose communication-efficient MPC protocols based on Shamir's secret sharing to protect data privacy with high model accuracy. Second, we utilize blockchain off-chain state channels to achieve immutable model records and incentive distribution. FLock achieves cost-effective compatibility with practical cryptocurrency platforms, e.g., Ethereum, along with fair incentives, by merging the secure aggregation into a multi-party state channel. In addition, a pipelined \textit{Byzantine Fault-Tolerant} (BFT) consensus is integrated, in which each aggregator can reconstruct the final aggregated result. Lastly, we implement FLock, and the evaluation results demonstrate that FLock enhances robustness and privacy while maintaining efficiency and high model accuracy. Even with 25 aggregators and 100 clients, FLock completes one secure aggregation for ResNet in $2$ minutes over a WAN. FLock successfully implements secure aggregation with such a large number of aggregators, thereby enhancing the fault tolerance of the aggregation.
Isogeny interpolation and the computation of isogenies from higher dimensional representations
The Supersingular Isogeny Diffie-Hellman (SIDH) scheme is a public key cryptosystem that was submitted to the National Institute of Standards and Technology's competition for the standardization of post-quantum cryptography protocols. The private key in SIDH consists of an isogeny whose degree is a prime power. In July 2022, Castryck and Decru discovered an attack that completely breaks the scheme by recovering Bob's secret key, using isogenies between higher dimensional abelian varieties to interpolate and reconstruct the isogenies comprising the SIDH private key. The original attack applies in theory to any prime power degree, but the implementation accompanying the original attack required one of the SIDH keys involved in a key exchange to have degree equal to a power of $2$. An implementation of the power of $3$ case was published subsequently by Decru and Kunzweiler. However, despite the passage of several years, nobody has published any implementations for prime powers other than $2$ or $3$, and for good reason --- the necessary higher dimensional isogeny computations rapidly become more complicated as the base prime increases. In this paper, we provide for the first time a fully general isogeny interpolation implementation that works for any choice of base prime, and provide timing benchmarks for various combinations of SIDH base prime pairs. We remark that the technique of isogeny interpolation now has constructive applications as well as destructive applications, and that our methods may open the door to increased flexibility in constructing isogeny-based digital signatures and cryptosystems.
How Fast Does the Inverse Walk Approximate a Random Permutation?
For a finite field $\mathbb{F}$ of size $n$, the (patched) inverse permutation $\operatorname{INV}: \mathbb{F} \to \mathbb{F}$ computes the inverse of $x$ over $\mathbb{F}$ when $x\neq 0$ and outputs $0$ when $x=0$, and the $\operatorname{ARK}_K$ (for AddRoundKey) permutation adds a fixed constant $K$ to its input, i.e.,
$$\operatorname{INV}(x) = x^{n-2} \hspace{.1in} \mbox{and} \hspace{.1in} \operatorname{ARK}_K(x) = x + K \;.$$
We study the process of alternately applying the $\operatorname{INV}$ permutation followed by a random linear permutation $\operatorname{ARK}_K$, which is a random walk over the alternating (or symmetric) group that we call the inverse walk.
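A minimal Python sketch of this walk, instantiated over a prime field for concreteness (the abstract's $\mathbb{F}$ is a general finite field of size $n$; the field size and round keys below are chosen purely for illustration):

```python
import random

p = 257  # illustrative prime; stands in for the field size n

def inv(x: int) -> int:
    # Patched inverse: x^(p-2) mod p equals x^(-1) for x != 0 and maps 0 to 0.
    return pow(x, p - 2, p)

def ark(x: int, k: int) -> int:
    # AddRoundKey: add the round constant k over the field.
    return (x + k) % p

def inverse_walk(x: int, keys: list[int]) -> int:
    # One round = INV followed by ARK_K; iterate over the round keys.
    for k in keys:
        x = ark(inv(x), k)
    return x

keys = [random.randrange(p) for _ in range(8)]
print(inverse_walk(5, keys))
```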
We show both lower and upper bounds on the number of rounds it takes for this process to approximate a random permutation over $\mathbb{F}$. We show that the inverse walk over the field of size $n$ with $$r = \Theta\left(n\log^2 n + n\log n\log \frac{1}{\epsilon}\right)$$ rounds generates a permutation that is $\epsilon$-close (in variation distance) to a uniformly random even permutation (i.e., a permutation from the alternating group $A_{n}$). This is tight, up to logarithmic factors.
Our result answers an open question from the work of Liu, Pelecanos, Tessaro and Vaikuntanathan (CRYPTO 2023) by providing a missing piece in their proof of $t$-wise independence of (a variant of) AES. It also constitutes a significant improvement on a result of Carlitz (Proc. American Mathematical Society, 1953) who showed a reachability result: namely, that every even permutation can be generated eventually by composing $\operatorname{INV}$ and $\operatorname{ARK}$. We show a tight convergence result, namely a tight quantitative bound on the number of rounds to reach a random (even) permutation.
How Much Public Randomness Do Modern Consensus Protocols Need?
Modern blockchain-based consensus protocols aim for efficiency (i.e., low communication and round complexity) while maintaining security against adaptive adversaries. These goals are usually achieved using a public randomness beacon to select roles for each participant. We examine to what extent this randomness is necessary.
Specifically, we provide tight bounds on the amount of entropy a Byzantine Agreement protocol must consume from a beacon in order to enjoy efficiency and adaptive security. We first establish that no consensus protocol can simultaneously be efficient, be adaptively secure, and use $O(\log n)$ bits of beacon entropy. We then show this bound is tight and, in fact, a trilemma by presenting three consensus protocols that achieve any two of these three properties.
On the Jordan-Gauss graphs and new multivariate public keys
We suggest two families of multivariate public keys defined over an arbitrary finite commutative ring \(K\) with unity. The first family has a quadratic multivariate public rule; it is an obfuscation of a previously defined cryptosystem given in terms of the well-known algebraic graphs \(D(n, K)\) with partition sets isomorphic to \(K^n\). The second family of cryptosystems combines the Eulerian transformation of \(K[x_1, x_2, \ldots, x_n]\), which sends each variable \(x_i\) to a monomial term, with the quadratic encryption map of the first cryptosystem. The resulting map has unbounded degree and density \(O(n^4)\), like a cubic multivariate map. The space of plaintexts of the second cryptosystem is the variety \((K^*)^n\) and the space of ciphertexts is the affine space \(K^n\).
Towards Explainable Side-Channel Leakage: Unveiling the Secrets of Microarchitecture
We explore the use of microbenchmarks, small assembly code snippets, to detect microarchitectural side-channel leakage in CPU implementations. Specifically, we investigate the effectiveness of microbenchmarks in diagnosing the predisposition to side-channel leaks in two commonly used RISC-V cores: Picorv32 and Ibex. We propose a new framework that involves diagnosing side-channel leaks, identifying leakage points, and constructing leakage profiles to understand the underlying causes. We apply the framework to several realistic case studies that test its ability to explain side-channel leaks, and showcase the subtle interaction of data via order-reducing leaks.
Discrete Gaussian sampling for BKZ-reduced bases
Discrete Gaussian sampling on lattices is a fundamental problem in lattice-based cryptography. In this paper, we revisit the Markov chain Monte Carlo (MCMC)-based Metropolis-Hastings-Klein (MHK) algorithm proposed by Wang and Ling, and study its complexity under the Geometric Series Assumption (GSA) when the given basis is BKZ-reduced. We give experimental evidence that the GSA is accurate in this context, and we give a very simple approximate formula for the complexity of the sampler that is accurate over a large range of parameters and easily computable. We apply our results to the dual attack on LWE of [Pouly and Shen 2024] and significantly improve the complexity estimates of the attack. Finally, we provide some results of independent interest on the Gaussian mass of random $q$-ary lattices.
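For reference, the GSA referred to above is standardly stated as a geometric decay of the Gram-Schmidt norms $\|\mathbf{b}_i^*\|$ of the reduced basis, with a ratio $\alpha$ determined by the reduction quality (e.g., the BKZ block size): $$\|\mathbf{b}_i^*\| = \alpha^{\,i-1}\,\|\mathbf{b}_1\|, \qquad 0 < \alpha < 1.$$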
Revisiting subgroup membership testing on pairing-friendly curves via the Tate pairing
In 2023, Koshelev introduced an efficient method of subgroup membership testing for a list of non-pairing-friendly curves, using at most two small Tate pairings. In fact, this technique can also be applied to certain pairing-friendly curves, e.g., from the BLS and BW13 families. In this paper, we revisit Koshelev's method and propose simplified formulas for computing the two Tate pairings. Compared to the original formulas, ours reduce both the number of Miller's iterations and the storage requirements. Furthermore, we provide a high-speed software implementation on a 64-bit processor. Our experimental results show that the new method outperforms the state-of-the-art one by up to $62.0\%$ and $41.2\%$ on the BW13-310 and BLS48-575 curves, respectively. When special precomputation is utilized, our method achieves greater speed improvements of up to $110.6\%$ and $74.4\%$ on the two curves, respectively.
Stealth and Beyond: Attribute-Driven Accountability in Bitcoin Transactions
Bitcoin enables decentralized, pseudonymous transactions, but balancing privacy with accountability remains a challenge. This paper introduces a novel dual accountability mechanism that enforces both sender and recipient compliance in Bitcoin transactions. Senders are restricted to spending Unspent Transaction Outputs (UTXOs) that meet specific criteria, while recipients must satisfy legal and ethical requirements before receiving funds. We enhance stealth addresses by integrating compliance attributes, preserving privacy while ensuring policy adherence. Our solution introduces a new cryptographic primitive, Identity-Based Matchmaking Signatures (IB-MSS), which supports streamlined auditing. Our approach is fully compatible with existing Bitcoin infrastructure and does not require changes to the core protocol, preserving both privacy and decentralization while enabling transaction auditing and compliance.
Advanced Transparency System
There are many situations where users need to verify that their information is correctly retained by servers; at the same time, servers need to maintain transparency logs. Many algorithms have been designed to address this problem. For example, Certificate Transparency (CT) helps track certificates issued by Certificate Authorities (CAs), while CONIKS aims to provide key transparency for end users. However, these algorithms often suffer from either high append time or imbalanced inclusion-proof and consistency-proof costs. To find an optimal solution, we construct two different but similar authenticated data structures tailored to two different lookup protocols. We propose ATS (Advanced Transparency System), which uses only linear storage cost to reduce append time and balances the time costs for both servers and users. For the value-lookup problem, the system allows servers to append user information in constant time and enables radical-level inclusion proof and consistency proof. For the key transparency problem, the system requires logarithmic time complexity for the append operation and offers acceptable inclusion and consistency proofs.
An Efficient and Secure Boolean Function Evaluation Protocol
Boolean functions play an important role in designing and analyzing many cryptographic systems, such as block ciphers, stream ciphers, and hash functions, due to their unique cryptographic properties such as nonlinearity, correlation immunity, and algebraic properties. The secure evaluation of Boolean functions, or Secure Boolean Evaluation (SBE), is an important area of research: SBE allows parties to jointly compute Boolean functions without exposing their private inputs, and finds applications in privacy-preserving protocols and secure multi-party computation. In this manuscript, we present an efficient and generic two-party protocol (namely BooleanEval) for the secure evaluation of Boolean functions that uses 1-out-of-2 Oblivious Transfer (OT) as a building block. BooleanEval employs only hash evaluations and XOR operations as its core computational steps, making it lightweight and fast. Unlike other lightweight state-of-the-art SBE designs, BooleanEval avoids additional cryptographic primitives, such as commitment schemes, to reduce the computational overhead.
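To give the flavor of OT-based Boolean evaluation, here is a textbook-style toy sketch of evaluating a single two-input gate with one 1-out-of-2 OT and XOR masking; this is a generic construction for illustration, not necessarily BooleanEval's exact protocol, and the OT itself is idealized:

```python
import secrets

def ot_1of2(m0: int, m1: int, choice: int) -> int:
    # Idealized 1-out-of-2 OT (a stand-in; a real protocol would hide
    # `choice` from the sender and the unchosen message from the receiver).
    return m1 if choice else m0

def eval_gate(f, x: int, y: int):
    """Party A holds bit x, party B holds bit y; f is a 2-input Boolean gate.
    Output: XOR shares of f(x, y), using one OT and XOR masking only."""
    r = secrets.randbits(1)       # A's random output mask
    m0 = f(x, 0) ^ r              # message delivered if B's bit is 0
    m1 = f(x, 1) ^ r              # message delivered if B's bit is 1
    s_b = ot_1of2(m0, m1, y)      # B learns exactly one masked value
    return r, s_b                 # shares satisfy r ^ s_b == f(x, y)

AND = lambda a, b: a & b
sa, sb = eval_gate(AND, 1, 1)
assert sa ^ sb == 1
```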
Black-Box Timed Commitments from Time-Lock Puzzles
A Timed Commitment (TC) with time parameter $t$ is hiding for time at most $t$, that is, commitments can be force-opened by any third party within time $t$. In addition to various cryptographic assumptions, the security of all known TC schemes relies on the sequentiality assumption of repeated squarings in hidden-order groups. The repeated squaring assumption is therefore a security bottleneck.
In this work, we give a black-box construction of TCs from any time-lock puzzle (TLP) by additionally relying on one-way permutations and collision-resistant hashing.
Currently, TLPs are known from (a) the specific repeated squaring assumption, (b) the general (necessary) assumption on the existence of worst-case non-parallelizing languages and indistinguishability obfuscation, and (c) any iteratively sequential function and the hardness of the circular small-secret LWE problem. The latter admits a plausibly post-quantum secure instantiation.
Hence, thanks to the generality of our transform, we get (i) the first TC whose timed security is based on the existence of non-parallelizing languages and (ii) the first TC that is plausibly post-quantum secure.
We first define quasi publicly-verifiable TLPs (QPV-TLPs) and construct them from any standard TLP in a black-box manner, without relying on any additional assumptions. Then, we devise a black-box commit-and-prove system to transform any QPV-TLP into a TC.
A General Quantum Duality for Representations of Groups with Applications to Quantum Money, Lightning, and Fire
Aaronson, Atia, and Susskind [Aaronson et al., 2020] established that efficiently mapping between quantum states $|\psi\rangle$ and $|\phi\rangle$ is computationally equivalent to distinguishing their superpositions $\frac{1}{\sqrt{2}}(|\psi\rangle + |\phi\rangle)$ and $\frac{1}{\sqrt{2}}(|\psi\rangle - |\phi\rangle)$. We generalize this insight into a broader duality principle in quantum computation, wherein manipulating quantum states in one basis is equivalent to extracting their value in a complementary basis. In its most general form, this duality principle states that for a given group, the ability to implement a unitary representation of the group is computationally equivalent to the ability to perform a Fourier subspace extraction from the invariant subspaces corresponding to its irreducible representations.
Building on our duality principle, we present the following applications:
* Quantum money, which captures quantum states that are verifiable but unclonable, and its stronger variant, quantum lightning, have long resisted constructions based on concrete cryptographic assumptions. While (public-key) quantum money has been constructed from indistinguishability obfuscation (iO)—an assumption widely considered too strong—quantum lightning has not been constructed from any such assumptions, with previous attempts based on assumptions that were later broken. We present the first construction of quantum lightning with a rigorous security proof, grounded in a plausible and well-founded cryptographic assumption. We extend Zhandry's construction from Abelian group actions [Zhandry, 2024] to non-Abelian group actions, and eliminate Zhandry's reliance on a black-box model for justifying security. Instead, we prove a direct reduction to a computational assumption—the pre-action security of cryptographic group actions. We show how these group actions can be realized with various instantiations, including with the group actions of the symmetric group implicit in the McEliece cryptosystem.
* We provide an alternative quantum money and lightning construction from one-way homomorphisms, showing that security holds under specific conditions on the homomorphism. Notably, our scheme exhibits the remarkable property that four distinct security notions—quantum lightning security, security against both worst-case cloning and average-case cloning, and security against preparing a specific canonical state—are all equivalent.
* Quantum fire captures the notion of a samplable distribution on quantum states that are efficiently clonable, but not efficiently telegraphable, meaning they cannot be efficiently encoded as classical information. These states can be spread like fire, provided they are kept alive quantumly and do not decohere. The only previously known construction relied on a unitary quantum oracle, whereas we present the first candidate construction of quantum fire in the plain model.
Fine-Grained Non-Interactive Key-Exchange without Idealized Assumptions, and Lower Bounds
In this paper, we study multi-party non-interactive key exchange (NIKE) in the fine-grained setting. More precisely, we propose three multi-party NIKE schemes in three computation models, namely, the bounded parallel-time, bounded time, and bounded storage models. Their security is based on a very mild assumption (e.g., $\mathsf{NC}^1 \subsetneq \oplus\mathsf{L}/\mathsf{poly}$) or even no complexity assumption at all. This improves on the recent work of Afshar, Couteau, Mahmoody, and Sadeghi (EUROCRYPT 2023), which requires idealized assumptions such as random oracles or generic groups.
Additionally, we show that all our constructions satisfy a natural desirable property that we refer to as extendability, and we give generic transformations from extendable multi-party NIKE to multi-party identity-based NIKE in the fine-grained setting.
Furthermore, we generalize the lower bound on users’ storage consumption in the bounded storage model by Dziembowski and Maurer (Eurocrypt 2004) to encompass any multi-party NIKE with extendability. This new lower bound suggests that the users’ storage consumption of our multi-party NIKE in the bounded storage model is optimal.
PriSrv: Privacy-Enhanced and Highly Usable Service Discovery in Wireless Communications
Service discovery is essential in wireless communications. However, existing service discovery protocols provide no or very limited privacy protection for service providers and clients, and they often leak sensitive information (e.g., service type, client’s identity and mobility pattern), which leads to various network-based attacks (e.g., spoofing, man-in-the-middle, identification and tracking). In this paper, we propose a private service discovery protocol, called PriSrv, which allows a service provider and a client to respectively specify a fine-grained authentication policy that the other party must satisfy before a connection is established. PriSrv consists of a private service broadcast phase and an anonymous mutual authentication phase with bilateral control, where the private information of both parties is hidden beyond the fact that a mutual match to the respective authentication policy occurred. As a core component of PriSrv, we introduce the notion of anonymous credential-based matchmaking encryption (ACME), which exerts dual-layer matching in one step to simultaneously achieve bilateral flexible policy control, selective attribute disclosure and multi-show unlinkability. As a building block of ACME, we design a fast anonymous credential (FAC) scheme to provide constant size credentials and efficient show/verification mechanisms, which is suitable for privacy-enhanced and highly usable service discovery in wireless networks. We present a concrete PriSrv protocol that is interoperable with popular wireless communication protocols, such as Wi-Fi Extensible Authentication Protocol (EAP), mDNS, BLE and Airdrop, to offer privacy-enhanced protection. We present formal security proof of our protocol and evaluate its performance on multiple hardware platforms: desktop, laptop, mobile phone and Raspberry Pi. PriSrv accomplishes private discovery and secure connection in less than 0.973 s on the first three platforms, and in less than 2.712 s on Raspberry Pi 4B. We also implement PriSrv into IEEE 802.1X in the real network to demonstrate its practicality.
Is Periodic Pseudo-randomization Sufficient for Beacon Privacy?
In this paper, we investigate whether the privacy mechanism of periodically changing the pseudorandom identities of Bluetooth Low Energy (BLE) beacons is sufficient to ensure privacy.
We consider a new natural privacy notion for BLE broadcasting beacons which we call ``timed-sequence-indistinguishability'' of beacons. This new privacy definition is stronger than the well-known indistinguishability, since it considers not just the advertisements' content, but also the advertisements' broadcasting times, which are observable in the physical world.
We then prove that beacons with periodically changing pseudorandom identities do not achieve timed-sequence-indistinguishability. We do this by presenting a novel privacy attack against BLE beacons, which we call the ``Timer Manipulation Attack.'' This new time-based privacy attack can be executed by merely inserting or reinserting the beacon's battery at the adversary's chosen time. We performed this attack against an actually deployed beacon.
To mitigate the ``Timer Manipulation Attack'' and other attacks associated with periodic signaling, we propose a new countermeasure involving quasi-periodic randomized scheduling of identity changes. We prove that our countermeasure ensures timed-sequence indistinguishability for beacons, thereby enhancing the beacon's privacy. Additionally, we show how to integrate this countermeasure in the attacked system while essentially preserving its feasibility and utility, which is crucial for practical industrial adoption.
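A minimal sketch of what a quasi-periodic randomized schedule can look like; the nominal period, jitter bound, and uniform jitter distribution here are illustrative assumptions, not the paper's concrete mechanism:

```python
import secrets

BASE_PERIOD = 15 * 60   # nominal identity lifetime in seconds (illustrative)
JITTER = 5 * 60         # maximum deviation from the nominal period (illustrative)

def next_identity_change(now: float) -> float:
    """Quasi-periodic schedule: rotate the pseudonym at a uniformly random
    offset around the nominal period, so an observer cannot anchor the
    broadcast schedule to a fixed timer phase."""
    jitter = secrets.randbelow(2 * JITTER + 1) - JITTER  # uniform in [-JITTER, JITTER]
    return now + BASE_PERIOD + jitter

print(next_identity_change(0.0))
```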
New results in Share Conversion, with applications to evolving access structures
We say there is a share conversion from a secret-sharing scheme $\Pi$ to another scheme $\Pi'$ implementing the same access structure if each party can locally apply a deterministic function to their share to transform any valid secret-sharing under $\Pi$ to a valid (but not necessarily random) secret-sharing under $\Pi'$ of the same secret. If such a conversion exists, we say that $\Pi\ge\Pi'$.
This notion was introduced by Cramer et al. (TCC'05), who proved, in particular, that for any access structure, any linear secret-sharing scheme over a given field $\mathbb{F}$ has a conversion from a CNF scheme and is convertible to a DNF scheme.
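As a toy illustration of the share-conversion definition (our example, not a result from the paper): for the $(2,2)$ access structure over GF(2), the CNF (replicated) sharing converts locally to an additive sharing, here via the identity map on each share:

```python
import secrets

# CNF sharing of s for the (2,2) access structure: s = r1 ^ r2, where
# party 1 holds r2 and party 2 holds r1 (each misses "its own" randomness).

def cnf_share(s: int):
    r1 = secrets.randbits(1)
    r2 = s ^ r1
    return r2, r1          # (party 1's share, party 2's share)

def convert_to_additive(share: int) -> int:
    # Local deterministic conversion; the identity map already yields a
    # valid additive sharing: party 1's output ^ party 2's output == s.
    return share

s = 1
p1, p2 = cnf_share(s)
assert convert_to_additive(p1) ^ convert_to_additive(p2) == s
```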
In this work, we initiate a systematic study of convertibility between secret-sharing schemes, and present a number of results with implications for the understanding of the convertibility landscape.
- In the context of linear schemes, we present two key theorems providing necessary conditions for convertibility, proved using linear-algebraic tools. These have several implications, such as the fact that the Shamir secret-sharing scheme can be neither maximal nor minimal. Another implication is that a scheme may be minimal only if its share complexity is at least as high as that of DNF.
- Our second key result is a necessary condition for convertibility to CNF from a broad class of (not necessarily linear) schemes. This result is proved via information-theoretic techniques and implies non-maximality for schemes with share complexity smaller than that of CNF.
We also provide a condition which is both necessary and sufficient for the existence of a share conversion to some linear scheme. The condition is stated as a system of linear equations, such that a conversion exists if and only if a solution to the linear system exists. We note that the impossibility results for linear schemes may be viewed as identifying a subset of contradicting equations in the system.
Another contribution of our paper is in defining and studying share conversion for evolving secret-sharing schemes. In such schemes, recently introduced by Komargodski et al. (IEEE ToIT'18), the number of parties is not bounded a priori, and every party receives a share as it arrives, which never changes afterwards. Our impossibility results have implications for the evolving setting as well. Interestingly, unlike in the standard setting, there is no maximum or minimum in a broad class of evolving schemes, even without any restriction on the share size.
Finally, we show that, in general, there is no conversion between additive schemes over different fields, even from CNF to DNF! However, by relaxing from perfect to statistical security, conversion may become possible, and we exemplify this for $(n,n)$-threshold access structures.
ABE for Circuits with $\mathsf{poly}(\lambda)$-sized Keys from LWE
We present a key-policy attribute-based encryption (ABE) scheme for circuits based on the Learning With Errors (LWE) assumption whose key size is independent of the circuit depth. Our result constitutes the first improvement in almost a decade on ABE for circuits from LWE, whose state of the art was given by Gorbunov, Vaikuntanathan, and Wee (STOC 2013) and Boneh et al. (EUROCRYPT 2014) -- we reduce the key size in the latter from $\mathsf{poly}(\mbox{depth},\lambda)$ to $\mathsf{poly}(\lambda)$. The starting point of our construction is a recent ABE scheme of Li, Lin, and Luo (TCC 2022), which achieves $\mathsf{poly}(\lambda)$ key size but requires pairings and generic bilinear groups in addition to LWE; we introduce new lattice techniques to eliminate the additional requirements.
Ciphertext-Policy ABE from Inner-Product FE
The enormous potential of Attribute-Based Encryption (ABE) in the context of IoT has driven researchers to propose pairing-free ABE schemes suitable for resource-constrained devices. Unfortunately, many of these schemes have turned out to be insecure. This fact seems to reinforce the view of some authors that instantiating Identity-Based Encryption (IBE) in plain Decision Diffie-Hellman (DDH) groups is impossible. In this paper, we provide a generic Ciphertext-Policy ABE (CP-ABE) scheme for AND-gate access structures with secret access policy, built from Inner-Product Functional Encryption (IPFE). We also propose an instantiation of this generic CP-ABE scheme from the DDH assumption. From our generic CP-ABE scheme we derive an IBE scheme by introducing the concept of Clustered Identity-Based Encryption (CIBE). Our schemes show that it is indeed possible to construct practical and secure IBE and ABE schemes based on the classical DDH assumption.
Construction of quadratic APN functions with coefficients in $\mathbb{F}_2$ in dimensions $10$ and $11$
Yu et al. described an algorithm for conducting computational searches for quadratic APN functions over the finite field $\mathbb{F}_{2^n}$, and used this algorithm to classify all quadratic APN functions with coefficients in $\mathbb{F}_{2}$ for dimensions $n$ up to 9. In this paper, we speed up that algorithm by a factor of approximately $\frac{n \times 2^n}{n^3}$. Based on this result, we give a complete classification of all quadratic APN functions over $\mathbb{F}_{2^{10}}$ with coefficients in $\mathbb{F}_{2}$. We also perform some partial computations for quadratic APN functions over $\mathbb{F}_{2^{11}}$ with coefficients in $\mathbb{F}_{2}$, and conjecture that they form 6 CCZ-inequivalent classes, which also correspond to known APN functions.
Masking Gaussian Elimination at Arbitrary Order, with Application to Multivariate- and Code-Based PQC
Digital signature schemes based on multivariate- and code-based hard problems are promising alternatives to lattice-based signature schemes due to their small signature sizes. Gaussian Elimination (GE) is a critical operation in the signing procedure of these schemes. In this paper, we provide a masking scheme for GE with back substitution to defend against first- and higher-order attacks. To the best of our knowledge, this work is the first to analyze and propose masking techniques for multivariate- or code-based digital signature algorithms.
We propose a masked algorithm for transforming a system of linear equations into row-echelon form. This is realized by introducing techniques for efficiently making leading (pivot) elements one while avoiding costly conversions between Boolean and multiplicative masking at all orders. We also propose a technique for efficient masked back substitution, which eventually enables a secure unmasking of the public output. All novel gadgets are proven secure in the $t$-probing model. Additionally, we evaluate the overhead of our countermeasure for several post-quantum candidates and their different security levels at first-, second-, and third-order, including UOV, MAYO, SNOVA, QR-UOV, and MQ-Sign. Notably, the operational cost of first-, second-, and third-order masked GE is 2.3$\times$ higher, and the randomness cost is 1.2$\times$ higher in MAYO compared to UOV for security levels III and V. In contrast, these costs are similar in UOV and MAYO for one version of level I. We also show detailed performance results for masked GE implementations for all three security versions of UOV on the Arm Cortex-M4 and compare them with unmasked results. Our masked implementation targeting UOV parameters has an overhead of factor 15.1$\times$, 15.2$\times$, and 15.4$\times$ compared to the unprotected implementation for NIST security level I, III, and V.
An efficient collision attack on Castryck-Decru-Smith’s hash function
In 2020, Castryck, Decru, and Smith constructed a hash function using the (2,2)-isogeny graph of superspecial principally polarized abelian surfaces. In their construction, the initial surface was chosen from vertices quite "close" to the square of a supersingular elliptic curve with a known endomorphism ring. In this paper, we propose an algorithm for finding a collision on their hash function. Under some heuristic assumptions, the time and space complexities of our algorithm are estimated to be $\widetilde{O}(p^{3/10})$, which is smaller than the complexity $\widetilde{O}(p^{3/2})$ the authors claimed is necessary to find such a collision, where $p$ is the characteristic of the base field. In the particular case where $p$ has a special form, both the time and space complexities of our algorithm are polynomial in $\log{p}$. We implemented our algorithm in MAGMA, and succeeded in finding a collision in 17 hours (using 64 parallel computations) under a parameter setting the authors claimed to be 384-bit secure. Finally, we propose a simple countermeasure against our attack, which is expected to restore the complexity required to find a collision to $\widetilde{O}(p)$.
zkMarket: Privacy-preserving Digital Data Trade System via Blockchain
Ensuring fairness in blockchain-based data trading presents significant challenges, as the transparency of the blockchain can expose sensitive details and compromise fairness. Fairness ensures that the seller receives payment only if they provide the correct data, and that the buyer gains access to the data only after making the payment. Existing approaches face efficiency limitations, particularly when applied to large-scale data, and preserving privacy on the blockchain remains a significant challenge.
In this paper, we introduce zkMarket, a privacy-preserving fair trade system on the blockchain.
We make the data registration process more concise and improve the seller's proving time by leveraging our novel pseudorandom generator, named matrix-formed PRG (MatPRG), together with an existing commit-and-prove SNARK (CP-SNARK).
To ensure transaction privacy, zkMarket is built upon an anonymous transfer protocol. By combining encryption with zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARKs), both the seller and the buyer can trade fairly.
Our evaluation demonstrates that zkMarket significantly reduces the computational overhead associated with traditional blockchain solutions while maintaining robust security and privacy. The seller can register 1MB of data in 2.8 seconds, the buyer can generate the trade transaction in 0.2 seconds, and the seller can finalize the trade within 0.4 seconds.
PANTHER: Private Approximate Nearest Neighbor Search in the Single Server Setting
Approximate nearest neighbor search (ANNS), also known as vector search, is an important building block for various applications, such as databases, biometrics, and machine learning.
In this work, we are interested in the private ANNS problem, where the client wants to learn (and can only learn) the ANNS results without revealing the query to the server. Previous private ANNS works either suffer from high communication cost (Chen et al., USENIX Security 2020) or work under the weaker security assumption of two non-colluding servers (Servan-Schreiber et al., SP 2022). We present Panther, an efficient private ANNS framework in the single-server setting. Panther achieves its high performance via several novel co-designs of private information retrieval (PIR), secret sharing, garbled circuits, and homomorphic encryption.
We conducted extensive experiments with Panther on four public datasets; the results show that Panther can answer an ANNS query on 10 million points in 23 seconds with 318 MB of communication, which is more than 6× faster and 18× more compact than Chen et al.
Universal Adaptor Signatures from Blackbox Multi-Party Computation
Adaptor signatures (AS) extend the functionality of traditional digital signatures by enabling the generation of a pre-signature tied to an instance of a hard NP relation, which can later be turned (adapted) into a full signature upon revealing a corresponding witness. The recent work by Liu et al. [ASIACRYPT 2024] devised a generic AS scheme that can be used for any NP relation---which we will refer to here as a universal adaptor signature (UAS) scheme. However, this generic construction depends on the Karp reduction to the Hamiltonian cycle problem, which adds significant overhead and hinders practical applicability.
In this work, we present an alternative approach to constructing universal adaptor signature schemes, relying on the multi-party computation in the head (MPCitH) paradigm. This overcomes the reliance on the costly Karp reduction while inheriting the core property of MPCitH---which makes it an invaluable tool in efficient cryptographic protocols---namely, that the construction is black-box with respect to the underlying cryptographic primitive (while remaining non-black-box in the relation being proven). Our framework simplifies the design of UAS and enhances their applicability across a wide range of decentralized applications, such as blockchain and privacy-preserving systems. Our results demonstrate that MPCitH-based UAS schemes offer strong security guarantees, making them a promising tool in the design of real-world cryptographic protocols.
Byte-wise equal property of ARADI
ARADI is a low-latency block cipher proposed by the NSA (National Security Agency) in 2024 for memory encryption. Bellini et al. experimentally demonstrated that in specific cubes of 5-round ARADI, the cube sums are byte-wise equal, for example, to 0x9d9dc5c5. This paper modifies the MILP-based division property algorithm to prove this and observes that the rotation amount of 8 in ARADI causes cancellations of monomials, allowing us to extend the byte-wise equal property up to 8 rounds. As a result, we obtained distinguishers for rounds 6 and 7 with lower data complexities of $2^{77}$ and $2^{112}$, respectively, compared to previous methods.
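For concreteness, the example cube-sum value quoted above has adjacent bytes equal in pairs; a quick check of that pattern (our reading of "byte-wise equal" from this single example value):

```python
word = 0x9d9dc5c5
b = word.to_bytes(4, "big")          # bytes: 9d 9d c5 c5
assert b[0] == b[1] and b[2] == b[3]  # adjacent bytes are pairwise equal
```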
PRIME: Differentially Private Distributed Mean Estimation with Malicious Security
Distributed mean estimation (DME) is a fundamental and important task, as it serves as a subroutine in convex optimization, aggregate statistics, and, more generally, federated learning. The inputs for DME are provided by clients (such as mobile devices), and these inputs often contain sensitive information. Thus, protecting privacy and mitigating the influence of malicious adversaries are critical concerns in DME. A surge of recent works has focused on building multiparty computation (MPC)-based protocols tailored for the task of secure aggregation. However, MPC alone fails to directly address two key issues: (i) the potential manipulation of inputs by adversaries, and (ii) the leakage of information from the underlying function. This paper presents a novel approach that addresses both issues. We propose a secure aggregation protocol with a robustness guarantee, effectively protecting the system from "faulty" inputs introduced by malicious clients. Our protocol further ensures differential privacy, so that the underlying function will not leak significant information about individuals.
Notably, this work represents the first comprehensive effort to combine robustness and differential privacy guarantees in the context of DME. In particular, we capture the security of the protocol via a notion of "usefulness" combined with differential privacy inspired by the work of Mironov et al. (CRYPTO 2009) and formally analyze this security guarantee for our protocol.
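As a rough illustration of the functionality being protected, here is a plain central-model differentially private mean (clipping plus Gaussian noise); this is not the paper's MPC protocol, and the clipping bound and noise scale are illustrative:

```python
# Minimal sketch: differentially private mean with L2 norm clipping.
# Central-model Gaussian mechanism; PRIME's MPC-based protocol additionally
# distributes this computation and filters malicious inputs.
import numpy as np

def dp_mean(inputs, clip=1.0, eps=1.0, delta=1e-5):
    clipped = [x * min(1.0, clip / (np.linalg.norm(x) + 1e-12)) for x in inputs]
    mean = np.mean(clipped, axis=0)
    # standard Gaussian-mechanism noise scale for L2 sensitivity 2*clip/n
    sigma = (2 * clip / len(inputs)) * np.sqrt(2 * np.log(1.25 / delta)) / eps
    return mean + np.random.normal(0.0, sigma, size=mean.shape)

clients = [np.random.randn(8) for _ in range(1000)]
print(dp_mean(clients))
```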
Improved Attacks for SNOVA by Exploiting Stability under a Group Action
SNOVA is a post-quantum digital signature scheme based on multivariate polynomials. It is a second-round candidate in an ongoing NIST standardization process for post-quantum signatures, where it stands out for its efficiency and compactness. Since its initial submission, there have been several improvements to its security analysis, covering both key-recovery and forgery attacks. All of these works reduce to solving a structured system of quadratic polynomials, which we refer to as a SNOVA system.
In this work, we propose a polynomial-solving algorithm tailored to SNOVA systems that exploits the \textit{stability} of the system under the action of a commutative group of matrices. This new algorithm reduces the complexity of solving SNOVA systems below that of generic ones. We show how to adapt the \textit{reconciliation} and \textit{direct} attacks to profit from the new algorithm. Consequently, we improve the reconciliation attack for all SNOVA parameter sets, with speedup factors ranging between $2$ and $2^{20}$. We also show how to use similar ideas to carry out a forgery attack. In this case, we use experimental results to estimate its complexity and discuss its impact. The empirical evidence suggests that our attack is more efficient than previous attacks and takes some SNOVA parameter sets below NIST's security threshold.
A Closer Look at Falcon
Falcon is a winner of NIST's six-year post-quantum cryptography standardisation competition. Based on the celebrated full-domain-hash framework of Gentry, Peikert and Vaikuntanathan (GPV) (STOC'08), Falcon leverages NTRU lattices to achieve the most compact signatures among lattice-based schemes.
Its security hinges on a Rényi divergence-based argument for Gaussian samplers, a core element of the scheme. However, the GPV proof, which uses statistical distance to argue closeness of distributions, fails when applied naively to Falcon due to parameter choices resulting in statistical distances as large as $2^{-34}$. Additional implementation-driven deviations from the GPV framework further invalidate the original proof, leaving Falcon without a security proof despite its selection for standardisation.
This work takes a closer look at Falcon and demonstrates that introducing a few minor, conservative modifications allows for the first formal proof of the scheme in the random oracle model. At the heart of our analysis lies an adaptation of the GPV framework to work with the Rényi divergence, along with an optimised method for parameter selection under this measure.
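For reference, the Rényi divergence of order $a > 1$ between discrete distributions $P$ and $Q$, in the form commonly used in lattice-based security proofs, together with its probability-preservation property, reads as follows (stated here for intuition; the paper's exact formulation may differ):
\[
R_a(P \,\|\, Q) \;=\; \Bigg(\sum_{x \in \mathrm{Supp}(P)} \frac{P(x)^a}{Q(x)^{a-1}}\Bigg)^{\frac{1}{a-1}},
\qquad
Q(E) \;\geq\; \frac{P(E)^{\frac{a}{a-1}}}{R_a(P \,\|\, Q)} \quad \text{for any event } E.
\]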
Unfortunately, our analysis shows that, despite our modifications, neither Falcon-512 nor Falcon-1024 achieves strong unforgeability. For plain unforgeability, we show that our modified Falcon-512 barely satisfies the claimed 120-bit security target, while for Falcon-1024 we confirm the claimed security level. As such, we recommend revisiting Falcon and its parameters.
Push-Button Verification for BitVM Implementations
Bitcoin, while being the most prominent blockchain with the largest market capitalization, suffers from scalability and throughput limitations that impede the development of ecosystem projects like Bitcoin Decentralized Finance (BTCFi). Recent advancements in BitVM propose a promising Layer 2 (L2) solution to enhance Bitcoin's scalability by enabling complex computations off-chain with on-chain verification. However, Bitcoin's constrained programming environment—characterized by its non-Turing-complete Script language lacking loops and recursion, and strict block size limits—makes developing complex applications labor-intensive, error-prone, and necessitates manual partitioning of scripts. Under this complex programming model, subtle mistakes could lead to irreversible damage in a trustless environment like Bitcoin. Ensuring the correctness and security of such programs becomes paramount.
To address these challenges, we introduce the first formal verifier for BitVM implementations. Our approach involves designing a register-based, higher-level domain-specific language (DSL) that abstracts away complex stack operations, allowing developers to reason about program correctness more effectively while preserving the semantics of the low-level program. We present a formal computational model capturing the semantics of BitVM execution and Bitcoin Script, providing a foundation for rigorous verification. To efficiently handle large programs and complex constraints arising from unrolled computations that simulate loops, we summarize repetitive "loop-style" computations using loop-invariant predicates in our DSL. We leverage a counterexample-guided inductive synthesis (CEGIS) procedure to lift low-level Bitcoin Script into our DSL, facilitating efficient verification without sacrificing accuracy. Evaluated on 78 benchmarks from BitVM implementations, our tool successfully verifies 83% of cases within 12.55 seconds on average and identifies one previously unknown vulnerability, demonstrating its effectiveness in enhancing the security and reliability of BitVM.
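The CEGIS pattern used for lifting can be sketched generically; the candidate space, `verify` oracle, and toy spec below are stand-ins, not the tool's actual interfaces:

```python
# Schematic counterexample-guided inductive synthesis (CEGIS) loop.
# synth step: propose a candidate consistent with all examples seen so far;
# verify step: either accept it or return a counterexample input.
def cegis(candidates, verify, max_iters=100):
    examples = []
    for _ in range(max_iters):
        # synthesis: find any candidate agreeing with known examples
        cand = next((c for c in candidates
                     if all(c(x) == y for x, y in examples)), None)
        if cand is None:
            return None            # candidate space exhausted
        cex = verify(cand)         # verification step
        if cex is None:
            return cand            # candidate is correct on all inputs
        examples.append(cex)       # learn from the counterexample
    return None

# Toy use: synthesize x -> x + k matching a hidden spec x -> x + 3.
spec = lambda x: x + 3
candidates = [lambda x, k=k: x + k for k in range(10)]
def verify(cand):
    for x in range(-50, 50):
        if cand(x) != spec(x):
            return (x, spec(x))
    return None
print(cegis(candidates, verify)(0))   # prints 3
```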
ECPM Cryptanalysis Resource Estimation
Elliptic Curve Point Multiplication (ECPM) is a key component of the Elliptic Curve Cryptography (ECC) protocol hierarchy. However, a precise estimation of the resources required for this process remains underexplored, despite its significance in the cryptanalysis of ECC algorithms, particularly binary ECC in GF$(2^m)$. Given the extensive use of ECC algorithms in various security protocols and devices, such an examination is essential: it yields valuable insights into ECC cryptanalysis, in particular precise resource estimates that serve as a solid basis for further work on solving the Elliptic Curve Discrete Logarithm Problem. Expanding on several significant prior works, we estimate the quantum resources---qubits, gates, and circuit depth---of what we refer to as ECPM cryptanalysis, integrating point addition (PA) and point doubling (PD) into the ECPM scheme and culminating in a Shor's-algorithm-based binary ECC cryptanalysis circuit. Focusing on optimizing depth, we elaborate on and implement the most efficient PD circuit and incorporate optimized Karatsuba multiplication and FLT-based inversion algorithms for the PA and PD operations. Compared to the latest PA-only circuits, our preliminary results showcase significant resource optimization for various ECPM implementations, including single-step ECPM, ECPM with combined or selective PA/PD utilization, and total-step ECPM ($2n$ PD + $2$ PA).
Critical Round in Multi-Round Proofs: Compositions and Transformation to Trapdoor Commitments
In many multi-round public-coin interactive proof systems, challenges in different rounds serve different roles, but a formulation that actively utilizes this aspect has not been studied extensively. In this paper, we propose new notions called critical-round special honest verifier zero-knowledge and critical-round special soundness. Our notions are simple, intuitive, easy to apply, and capture several practical multi-round proof protocols including, but not limited to, those from the MPC-in-the-Head paradigm.
We demonstrate the usefulness of these notions with two fundamental applications where three-round protocols are known to be useful, but multi-round ones generally fail. First, we show that critical-round proofs yield trapdoor commitment schemes. This result also enables the instantiation of post-quantum secure adaptor signatures and threshold ring signatures from MPCitH, resolving open questions in (Haque and Scafuro, PKC 2020) and in (Liu et al., ASIACRYPT 2024). Second, we show that critical-round proofs can be securely composed using the Cramer-Schoenmakers-Damgård method. This solves an open question posed by Abe et al. in CRYPTO 2024.
Overall, these results shed new light on the potential of multi-round proofs in both theoretical and practical cryptographic protocol design.
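For the classical three-round case, the standard construction of a trapdoor commitment from a special-sound, special-HVZK protocol can be sketched with Schnorr (toy parameters; the paper's contribution is extending this idea to multi-round, critical-round protocols):

```python
# Trapdoor commitment from the Schnorr sigma-protocol (toy parameters).
# Commit: simulate a transcript (a, e, z) with the challenge e = message.
# Binding: two openings of one `a` to different messages would yield the
# discrete log (special soundness). Trapdoor: the witness w equivocates.
import random

p, q, g = 23, 11, 2               # subgroup of order 11 in Z_23^*
w = random.randrange(1, q)        # trapdoor (discrete log of y)
y = pow(g, w, p)                  # public commitment key

def commit(msg):                  # msg interpreted as challenge e in Z_q
    z = random.randrange(q)
    a = (pow(g, z, p) * pow(y, (-msg) % q, p)) % p   # simulator: a = g^z y^-e
    return a, z                   # commitment a, opening z

def open_ok(a, msg, z):           # sigma-protocol verification equation
    return pow(g, z, p) == (a * pow(y, msg % q, p)) % p

def equivocate(r, msg):           # with trapdoor: a = g^r opens to any msg
    return (r + msg * w) % q      # real prover's response

a, z = commit(5)
assert open_ok(a, 5, z)
r = random.randrange(q); a2 = pow(g, r, p)
assert open_ok(a2, 7, equivocate(r, 7)) and open_ok(a2, 3, equivocate(r, 3))
```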
Compact and Tightly Secure (Anonymous) IBE from Module LWE in the QROM
We present a new compact and tightly secure (anonymous) identity-based encryption (IBE) scheme based on structured lattices. This is the first IBE scheme that is (asymptotically) as compact as the most practical NTRU-based schemes and tightly secure under the module learning with errors (MLWE) assumption, a standard lattice assumption, in the (quantum) random oracle model. In particular, our IBE scheme is the most compact lattice-based scheme apart from NTRU-based schemes. We design our IBE scheme by instantiating the framework of Gentry, Peikert, and Vaikuntanathan (STOC'08) using the compact trapdoor proposed by Yu, Jia, and Wang (CRYPTO'23). The tightness of our IBE scheme is achieved by extending the proof technique of Katsumata et al. (ASIACRYPT'18, JoC'21) to the Hermite normal form setting. To achieve this, we develop some new results on module lattices that may be of independent interest.
Fully Homomorphic Encryption with Efficient Public Verification
We present an efficient Publicly Verifiable Fully Homomorphic Encryption scheme that, along with being able to evaluate arbitrary boolean circuits over ciphertexts, also generates a succinct proof of correct homomorphic computation. Our scheme is based on FHEW proposed by Ducas and Micciancio (Eurocrypt'15), and we incorporate the GINX homomorphic accumulator (Eurocrypt'16) for improved bootstrapping efficiency. In order to generate the proof efficiently, we generalize the widely used Rank-1 Constraint System (R1CS) to the ring setting and obtain Ring R1CS, to natively express homomorphic computation in FHEW.
In particular, we develop techniques to efficiently express in our Ring R1CS the "non-arithmetic" operations, such as the gadget decomposition and modulus switching used in the FHEW construction. We further construct a SNARG for Ring R1CS instances by translating the Ring R1CS instance into a sum-check protocol over polynomials, and then compiling it into a succinct non-interactive proof by incorporating the lattice-based polynomial commitment scheme of Cini, Malavolta, Nguyen, and Wee (Crypto'24). Putting everything together, our Publicly Verifiable FHE scheme relies on standard hardness assumptions about lattice problems and generates a succinct proof of the homomorphic computation of a circuit $C$ in time $O(|C|^2\cdot poly(\lambda))$ and of size $O(\log^2{|C|}\cdot poly(\lambda))$. In addition, our scheme achieves the recently proposed IND-SA (indistinguishability under semi-active attack) security of Walter (EPrint 2024/1207), which exactly captures client data privacy when a homomorphic computation can be verified.
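For readers unfamiliar with R1CS, the standard (ring-free) satisfiability condition is the Hadamard relation checked below; the toy instance, which is mine and not from the paper, encodes the single constraint $x \cdot x = y$:

```python
# Standard R1CS satisfiability check: (A z) * (B z) == C z (entry-wise).
# Toy instance over a small prime field encoding x*x = y,
# with witness vector z = (1, x, y).
import numpy as np

P = 97                                    # small prime modulus
A = np.array([[0, 1, 0]])                 # selects x
B = np.array([[0, 1, 0]])                 # selects x
C = np.array([[0, 0, 1]])                 # selects y

def r1cs_satisfied(z):
    z = np.array(z) % P
    return np.array_equal((A @ z) * (B @ z) % P, (C @ z) % P)

print(r1cs_satisfied([1, 5, 25]))         # True:  5*5 == 25
print(r1cs_satisfied([1, 5, 26]))         # False
```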
Quantum Black-Box Separations: Succinct Non-Interactive Arguments from Falsifiable Assumptions
In their seminal work, Gentry and Wichs (STOC'11) established an impossibility result for the task of constructing an adaptively-sound SNARG via black-box reduction from a falsifiable assumption.
An exciting set of recent SNARG constructions demonstrated that, if one adopts a weaker but still quite meaningful notion of adaptive soundness, then the impossibility no longer holds (Waters-Wu, Waters-Zhandry, Mathialagan-Peters-Vaikuntanathan ePrint'24). These fascinating new results raise an intriguing possibility: is there a way to remove this slight weakening of adaptive soundness, thereby completely circumventing the Gentry-Wichs impossibility?
A natural route to closing this gap would be to use a quantum black-box reduction, i.e., a reduction that can query the SNARG adversary on superpositions of inputs. This would take advantage of the fact that Gentry-Wichs only consider classical reductions. In this work, we show that this approach cannot succeed. Specifically, we extend the Gentry-Wichs impossibility result to quantum black-box reductions, and thereby establish an important limit on the power of such reductions.
Homomorphic Matrix Operations under Bicyclic Encoding
Homomorphically encrypted matrix operations are extensively used in various privacy-preserving applications. Consequently, reducing the cost of encrypted matrix operations is a crucial topic on which numerous studies have been conducted. In this paper, we introduce a novel matrix encoding method, named bicyclic encoding, under which we propose two new algorithms, BMM-I and BMM-II, for encrypted matrix multiplication. BMM-II outperforms the state-of-the-art algorithms in theory, while BMM-I, combined with the segmented strategy, performs well in practice, particularly for matrices with high dimensions. Another noteworthy advantage of bicyclic encoding is that it allows transposing an encrypted matrix entirely for free. A comprehensive experimental study based on our proof-of-concept implementation shows that each algorithm introduced in this paper has specific scenarios in which it outperforms existing algorithms, achieving speedups ranging from 2× to 38×.
Resilience-Optimal Lightweight High-threshold Asynchronous Verifiable Secret Sharing
Shoup and Smart (SS24) recently introduced a lightweight asynchronous verifiable secret sharing (AVSS) protocol with optimal resilience directly from cryptographic hash functions (JoC 2024), offering plausible quantum resilience and computational efficiency. However, SS24 AVSS only achieves standard secrecy to keep the secret confidential against $n/3$ corrupted parties \textit{if no honest party publishes its share}. In contrast, from ``heavyweight'' public-key cryptography, one can realize so-called \textit{high-threshold} asynchronous verifiable secret sharing (HAVSS), with a stronger \textit{high-threshold} secrecy to tolerate $n/3$ corrupted parties and additional leaked shares from $n/3$ honest parties. This raises the following question: can we bridge the remaining gap to design an efficient HAVSS using only lightweight cryptography?
We answer the question in the affirmative by presenting a lightweight HAVSS with optimal resilience. When executed across $n$ parties to share a secret, it attains a worst-case communication complexity of $\tilde{O}(\lambda n^3)$ (where $\lambda$ is the cryptographic security parameter) and realizes high-threshold secrecy, tolerating a fully asynchronous adversary that can control $t= \lfloor \frac{n-1}{3} \rfloor$ malicious parties and also learn $t$ additional secret shares from some honest parties.
The (worst-case) communication complexity of our lightweight HAVSS protocol matches that of SS24 AVSS---the state-of-the-art lightweight AVSS without high-threshold secrecy.
Notably, our design is a direct and concretely efficient reduction to hash functions in the random oracle model, without extra setup assumptions like CRS/PKI or heavy intermediate steps like hash-based zk-STARK.
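To see what the higher secrecy threshold means at the sharing layer, here is plain Shamir sharing (no verifiability or AVSS machinery, toy field): with a degree-$2t$ polynomial, $t$ corruptions plus $t$ leaked honest shares still underdetermine the secret:

```python
# Plain Shamir sharing sketch (toy prime field, no AVSS machinery).
# Standard secrecy uses degree t; high-threshold secrecy uses degree 2t,
# so t corrupted parties plus t leaked honest shares (2t points in total)
# underdetermine the degree-2t polynomial.
import random

P = 2**31 - 1                       # toy prime field

def share(secret, degree, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(degree)]
    return [(i, sum(c * pow(i, k, P) for k, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(points):            # Lagrange interpolation at x = 0
    s = 0
    for i, (xi, yi) in enumerate(points):
        num = den = 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        s = (s + yi * num * pow(den, -1, P)) % P
    return s

n, t = 10, 3
shares = share(12345, degree=2 * t, n=n)     # high-threshold variant
assert reconstruct(shares[: 2 * t + 1]) == 12345   # needs 2t+1 points
```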
Somewhat Homomorphic Encryption from Linear Homomorphism and Sparse LPN
We construct somewhat homomorphic encryption from the sparse learning-parities-with-noise problem, along with an assumption that implies linearly homomorphic encryption (e.g., the decisional Diffie-Hellman or decisional composite residuosity assumptions). Our resulting schemes support an a-priori bounded number of homomorphic operations: $O(\log \lambda / \log \log \lambda)$ multiplications followed by $poly(\lambda)$ additions, where $\lambda$ is a security parameter. These schemes have compact ciphertexts: before and after homomorphic evaluation, the bit length of each ciphertext is a fixed polynomial in the security parameter $\lambda$, independent of the number of homomorphic operations that the scheme supports. This gives the first constructions of somewhat homomorphic encryption that can evaluate the class of bounded-degree polynomials without relying on lattice assumptions or bilinear maps.
Our new encryption schemes are conceptually simple: much as in Gentry, Sahai, and Waters’ fully homomorphic encryption scheme, ciphertexts in our scheme are matrices, homomorphic addition is matrix addition, and homomorphic multiplication is matrix multiplication. Moreover, when encrypting many messages at once and performing many homomorphic evaluations at once, the bit length of the ciphertexts in (some of) our schemes can be made arbitrarily close to the bit length of the plaintexts. The main limitation of our schemes is that they require a large evaluation key, whose size scales with the complexity of the homomorphic computation performed, though this key can be re-used across any polynomial number of encryptions and evaluations.
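The "matrix ciphertext" view can be made concrete with a noise-free toy that only illustrates why matrix addition and multiplication act homomorphically; real schemes add noise, and this snippet is emphatically not a secure scheme:

```python
# Noise-free toy of the GSW-style "approximate eigenvector" idea:
# a ciphertext is a matrix C with C @ s == m * s (mod Q) for secret s.
# Then (C1 + C2) @ s = (m1 + m2) s and (C1 @ C2) @ s = (m1 * m2) s.
# WARNING: without noise this is trivially insecure; illustration only.
import numpy as np

Q, n = 1_000_003, 4
rng = np.random.default_rng(0)
s = rng.integers(1, Q, size=n)                  # secret vector, entries != 0

def encrypt(m):
    # Build N with N @ s == 0 (mod Q): pick the first n-1 columns at
    # random and solve for the last column using s[-1]^{-1} mod Q.
    N = rng.integers(0, Q, size=(n, n))
    s_inv = pow(int(s[-1]), -1, Q)
    N[:, -1] = (-(N[:, :-1] @ s[:-1] % Q) * s_inv) % Q
    return (m * np.eye(n, dtype=np.int64) + N) % Q

def decrypt(C):
    return int((C @ s % Q)[0] * pow(int(s[0]), -1, Q) % Q)

C1, C2 = encrypt(3), encrypt(5)
print(decrypt((C1 + C2) % Q))    # 8  -> homomorphic addition
print(decrypt(C1 @ C2 % Q))      # 15 -> homomorphic multiplication
```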
A Forgery Attack on a Code-based Signature Scheme
With the advent of quantum computers, the security of classical cryptographic primitives, including digital signature schemes, is threatened. To deal with this issue, signature schemes designed to resist such computers, known as post-quantum signature schemes, have been introduced. One group of these schemes is based on the hard problems of coding theory and is called code-based cryptographic schemes. Several code-based signature schemes are inspired by the McEliece encryption scheme, using a non-singular matrix, a parity-check matrix, and a permutation matrix as the only components of the private key, and their product as the public key. In this paper, we focus on the analysis of a class of such signature schemes. For this purpose, we first prove that the linear relationships between the columns of the parity-check/generator matrix appear in the public key matrix, and by exploiting this feature we perform a forgery attack on one of the signature schemes of this class as evidence. The complexity of this attack is $O(n^4)$.
A comprehensive analysis of Regev's quantum algorithm
Public key cryptography can be based on integer factorization and the discrete logarithm problem (DLP), applicable in multiplicative groups and elliptic curves. Regev's recent quantum algorithm was initially designed for factorization and was later extended to the DLP in the multiplicative group. In this article, we further extend the algorithm to address the DLP for elliptic curves. Notably, based on celebrated conjectures in Number Theory, Regev's algorithm is asymptotically faster than Shor's algorithm for elliptic curves.
Our analysis covers all cases where Regev's algorithm can be applied. We examine the general framework of Regev's algorithm and offer a geometric description of its parameters. This preliminary analysis enables us to certify the success of the algorithm on a particular instance before running it.
In the case of integer factorization, we demonstrate that there exists an infinite family of RSA moduli for which the algorithm always fails. On the other hand, when the parameters align with the Gaussian heuristics, we prove that Regev's algorithm succeeds. By noting that the algorithm naturally adapts to the multidimensional DLP, we prove that it succeeds for a certain range of parameters.
On the Sample Complexity of Linear Code Equivalence for all Code Rates
In parallel with the standardization of lattice-based cryptosystems, the research community in post-quantum cryptography has focused on non-lattice-based hard problems for constructing public-key cryptographic primitives. The Linear Code Equivalence (LCE) Problem has gained attention for both its practical applications and its cryptanalysis.
Recent advancements, including the LESS signature scheme and its candidacy in the NIST standardization process for additional signatures, have supported LCE as a foundation for post-quantum cryptographic primitives. However, recent cryptanalytic results have revealed vulnerabilities in LCE-based constructions when multiple related public keys are available, for one specific code rate. In this work, we generalize these attacks to cover all code rates. We show that the complexity of recovering the private key from multiple public keys is significantly reduced for any code rate. Thus, we advise against constructing specific cryptographic primitives using LCE.
$\mathsf{Graphiti}$: Secure Graph Computation Made More Scalable
Privacy-preserving graph analysis allows performing computations on graphs that store sensitive information while ensuring that all information about the topology of the graph, as well as the data associated with the nodes and edges, remains hidden. The current work addresses this problem by designing a highly scalable framework, $\mathsf{Graphiti}$, that allows securely realising any graph algorithm. $\mathsf{Graphiti}$ relies on the technique of secure multiparty computation (MPC) to design a generic framework that improves over the state-of-the-art framework GraphSC of Araki et al. (CCS'21). The key technical contribution is that $\mathsf{Graphiti}$ has round complexity independent of the graph size, which in turn allows attaining the desired scalability. Specifically, this is achieved by (i) decoupling the $\mathsf{Scatter}$ primitive of GraphSC into separate operations of $\mathsf{Propagate}$ and $\mathsf{ApplyE}$, (ii) designing a novel constant-round approach to realise $\mathsf{Propagate}$, and (iii) designing a novel constant-round approach to realise the $\mathsf{Gather}$ primitive of GraphSC by leveraging the linearity of the aggregation operation. We benchmark the performance of $\mathsf{Graphiti}$ for the application of contact tracing via BFS for 10 hops and observe that it takes less than 2 minutes when computing over a graph of size $10^7$. Concretely, it improves over the state-of-the-art by up to a factor of $1034\times$ in online run time. Similar to GraphSC of Araki et al., since $\mathsf{Graphiti}$ relies on a secure protocol for shuffle, we additionally design a shuffle protocol secure against a semi-honest adversary in the two-party-with-a-helper setting. Given the versatility of the shuffle protocol, the designed solution is of independent interest. Hence, we also benchmark the performance of the designed shuffle, where we observe improvements of up to $1.83\times$ in online run time for an input vector of size $10^7$, in comparison to the state-of-the-art in the considered setting.
Exponential sums in linear cryptanalysis
It is shown how bounds on exponential sums derived from modern algebraic geometry, and $\ell$-adic cohomology specifically, can be used to upper bound the absolute correlations of linear approximations for cryptographic constructions of low algebraic degree. This is illustrated by applying results of Deligne, Denef and Loeser, and Rojas-León, to obtain correlation bounds for a generalization of the Butterfly construction, three-round Feistel ciphers, and a generalization of the Flystel construction. For each of these constructions, bounds obtained using other methods are significantly weaker. In the case of the Flystel construction, our bounds resolve a conjecture by the designers.
Correlation bounds of this type are relevant for the development of security arguments against linear cryptanalysis, especially in the weak-key setting or for primitives that do not involve a key. Since the methods used in this paper are applicable to constructions defined over arbitrary finite fields, the results are also relevant for arithmetization-oriented primitives such as Anemoi, which uses S-boxes based on the Flystel construction.
PQNTRU: Acceleration of NTRU-based Schemes via Customized Post-Quantum Processor
Post-quantum cryptography (PQC) has rapidly evolved in response to the emergence of quantum computers, with the US National Institute of Standards and Technology (NIST) selecting four finalist algorithms for PQC standardization in 2022, including the Falcon digital signature scheme. The latest round of digital signature proposals introduced Hawk; like Falcon, it is based on the NTRU lattice and offers compact signatures and fast generation and verification suitable for deployment on resource-constrained Internet-of-Things (IoT) devices. Despite the popularity of CRYSTALS-Dilithium and CRYSTALS-Kyber, research on NTRU-based schemes has been limited due to their complex algorithms and operations. The performance of Falcon and Hawk remains constrained by the lack of parallel execution in crucial operations like the Number Theoretic Transform (NTT) and Fast Fourier Transform (FFT), with data dependency being a significant bottleneck. This paper enhances the NTRU-based schemes Falcon and Hawk through hardware/software co-design on a customized Single-Instruction-Multiple-Data (SIMD) processor, proposing new SIMD hardware units and instructions to expedite these schemes, along with software optimizations to boost performance. Our NTT optimization includes a novel layer-merging technique for SIMD architectures to reduce memory accesses, and the use of modular algorithms (Signed Montgomery and Improved Plantard) targeting various modulus data widths to enhance performance. We explore applying layer merging to accelerate fixed-point FFT at the SIMD instruction level and devise a dual-issue parser to streamline assembly code organization and maximize dual-issue utilization. A System-on-Chip (SoC) architecture is devised to improve the practical application of the processor in real-world scenarios. Evaluation on 28 nm technology and an FPGA platform shows that our design and optimizations can increase the performance of Hawk signature generation and verification by over 7 times.
HTCNN: High-Throughput Batch CNN Inference with Homomorphic Encryption for Edge Computing
Homomorphic Encryption (HE) technology allows for processing encrypted data, breaking through data isolation barriers and providing a promising solution for privacy-preserving computation. The integration of HE technology into Convolutional Neural Network (CNN) inference shows potential in addressing privacy issues in identity verification, medical imaging diagnosis, and various other applications. The CKKS HE scheme stands out as a popular option for homomorphic CNN inference due to its capability to handle real-number computations. However, challenges such as computational delays and resource overhead present significant obstacles to the practical implementation of homomorphic CNN inference, largely due to the complex nature of HE operations. In addition, current methods for speeding up homomorphic CNN inference primarily address individual images or large batches of input images, lacking a solution for efficiently processing a moderate number of input images with fast homomorphic inference, which is more suitable for edge-computing applications. In response to these challenges, we introduce a novel leveled homomorphic CNN inference scheme aimed at reducing latency and improving throughput using the CKKS scheme. Our inference strategy maps multiple inputs to a set of ciphertexts by exploiting the sliding-window properties of convolutions to utilize CKKS's inherent Single-Instruction-Multiple-Data (SIMD) capability. To mitigate the delay associated with homomorphic CNN inference, we introduce optimization techniques, including mask-weight merging, rotation multiplexing, stride-convolution segmentation, and folding rotations. The efficacy of our homomorphic inference scheme is demonstrated through evaluations on the MNIST and CIFAR-10 datasets. Specifically, results on the MNIST dataset with a single CPU thread show that inference for 163 images can be completed in 10.4 seconds with an accuracy of 98.9%, a 6.9× throughput improvement over state-of-the-art works. Comparative analysis with existing methodologies highlights the superior performance of our proposed inference scheme in terms of latency, throughput, communication overhead, and memory utilization.
DEEP Commitments and Their Applications
This note studies a method of committing to a polynomial in a way that allows executions of low degree tests such as FRI to be batched and even deferred. In particular, it achieves (unlimited-depth) aggregation for STARKs.
Offline-Online Indifferentiability of Cryptographic Systems
The indifferentiability framework has become a standard methodology for studying the security of cryptographic constructions in idealized models of computation. Unfortunately, while indifferentiability provides strong guarantees whenever the security of a construction is captured by a ``single-stage'' security game, it may provide no meaningful guarantees when the security is captured by a ``multi-stage'' one. In particular, the indifferentiability framework does not capture offline-online games, where the adversary can perform an extensive offline computation to later speed up the online phase. Such security games are extremely common, both in practice and in theory. Over the past decade, there have been numerous attempts to meaningfully extend the indifferentiability framework to offline-online games; however, they have all ultimately met with little success.
In this work, our contribution is threefold. First, we propose an extension of the classical indifferentiability framework, which we refer to as *offline-online indifferentiability*, that applies in the context of attackers with an expensive offline phase (à la Ghoshal and Tessaro, CRYPTO '23). Second, we show that our notion lends itself to a natural and meaningful composition theorem for offline-online security games. Lastly, as our main technical contribution, we analyze the offline-online indifferentiability of two classical variants of the Merkle-Damgård hashing mechanism, one where the key is fed only to the first block in the chain and one where the key is fed to each block in the chain. For both constructions, we prove a *tight* bound on their offline-online indifferentiability (i.e., an upper bound and an attack that matches it). Notably, our bound for the second variant shows that the construction satisfies *optimal* offline-online indifferentiability.
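The two keyed variants are easy to picture; here is a sketch with a stand-in compression function built from SHA-256 (padding and length encoding omitted, so this is structural illustration only, not the analyzed construction verbatim):

```python
# The two keyed Merkle-Damgard variants: key in the first block only
# vs. key prepended to every block. f is a stand-in compression function.
import hashlib

def f(h, block):                         # toy compression function
    return hashlib.sha256(h + block).digest()

IV = b"\x00" * 32

def md_key_first(key, blocks):
    h = f(IV, key)                       # key enters only the first call
    for b in blocks:
        h = f(h, b)
    return h

def md_key_every(key, blocks):
    h = IV
    for b in blocks:
        h = f(h, key + b)                # key enters every call
    return h

msg = [b"block-1", b"block-2"]
print(md_key_first(b"k", msg).hex())
print(md_key_every(b"k", msg).hex())
```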
Robust Double Auctions for Resource Allocation
In a zero-knowledge proof market, there are two sides. On one side are bidders, each with a proof of some size and a private value for having this proof computed. On the other side are distributors (also called sellers), who have compute available to process the bidders' proofs and a private cost (dependent on the size) for processing them. More broadly, this setting applies to any online resource allocation where bidders desire a certain amount of a resource and distributors can provide it. In this work, we study how to devise double auctions for this setting that are truthful for users, weakly group-strategy-proof, weakly budget-balanced, computationally efficient, and achieve a good approximation of the maximum welfare attainable from the set of bids. We call such auctions $\textit{robust}$.
Revisiting the “Improving the security of multi-party quantum key agreement with five-qubit Brown states”
In 2018, Cai et al. proposed a multi-party quantum key agreement protocol using five-qubit Brown states and argued for its security. However, Elhadad et al. found that the scheme cannot resist a collusion attack launched by legal participants. They suggested a modification and declared their improved version capable of resisting this type of attack. Nevertheless, our analysis shows that the collusion attack still succeeds. We therefore propose a straightforward modification to prevent it, and we conclude that our modification meets the required security properties, including resistance to collusion attacks, which is very important for quantum key agreement schemes.
New Experimental Evidences For the Riemann Hypothesis
The zeta function $\zeta(z)=\sum_{n=1}^{\infty} \frac{1}{n^z}$ is convergent for $\text{Re}(z)>1$, and the eta function $\eta(z)=\sum_{n=1}^{\infty}\frac{(-1)^{n-1}}{n^z}$ is convergent for $\text{Re}(z)>0$. The eta function and the analytic continuation of the zeta function have the same zeros in the critical strip $0<\text{Re}(z)<1$, owing to the identity $\eta(z)=\left(1-2^{1-z}\right)\zeta(z)$. In this paper, we present new experimental evidence showing that for any $a\in (0, 1), b\in (-\infty, \infty)$, there exists a zero $\frac{1}{2}+it$ such that the modulus $|\eta(a+ib)|\geq |\eta(a+it)|>|\eta(\frac{1}{2}+it)|=0$. This evidence further supports the claim that all zeros lie on the critical line $\text{Re}(z)=\frac{1}{2}$.
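Such experiments are easy to reproduce with arbitrary-precision software; a minimal check at the first nontrivial zero ($t \approx 14.1347251417$, a well-known constant) might look like the following, using mpmath's `altzeta` for $\eta$:

```python
# Quick numerical check of |eta| on vs. off the critical line, using the
# identity eta(z) = (1 - 2^(1-z)) * zeta(z); altzeta is mpmath's eta.
from mpmath import mp, altzeta

mp.dps = 30                          # 30 digits of working precision
t = mp.mpf("14.134725141734693790")  # imag. part of the first zeta zero

for a in ("0.5", "0.3", "0.7"):
    z = mp.mpf(a) + 1j * t
    print(f"|eta({a} + it)| = {abs(altzeta(z))}")
# |eta| is (numerically) zero only at a = 0.5, the critical line.
```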
POMS: Proxy Offloading for Multicloud Storage with Keyword Search
Cloud storage offers convenient data access and sharing, but security concerns remain. Existing secure cloud storage solutions often lack essential features like data integrity, multi-cloud support, user-friendly file sharing, and efficient search. This paper proposes a novel secure cloud storage system that addresses these limitations. Our system uses distributed storage and attribute-based encryption to enhance data availability, access control, and user experience. It also enables private and efficient file search and data retrievability verification. This approach overcomes the trade-offs present in prior work, offering a secure and user-friendly solution for cloud data management.
Secure and Privacy-preserving CBDC Offline Payments using a Secure Element
Offline payments present an opportunity for central bank digital currency to address the lack of digital financial inclusion plaguing existing digital payment solutions. However, the design of secure offline payments is a complex undertaking; for example, the lack of connectivity during the payments renders double spending attacks trivial. While the identification of double spenders and penal sanctions may curb attacks by individuals, they may not be sufficient against concerted efforts by states or well-funded institutions. It is hence important to also rely on preventive measures that reduce the scale of such attacks. An example of such a measure is secure elements. These however are limited in compute and storage, making the design of solutions that offer comparable privacy guarantees to those of physical cash challenging.
We address this with a protocol that offloads most of the payment computation to the user's mobile device and restricts the computation on the secure element to deleting spent tokens and generating a signature with a computation equivalent to that of ECDSA. We claim that the use of mobile devices or enhanced smart-card-based devices is required for secure consumer-to-consumer payments. To further harden the protocol, we enable the efficient identification of double spenders on the off-chance that an attacker successfully double spends. Finally, we prove the protocol's security in the ideal/real-world paradigm and evaluate its performance to demonstrate its practicality.
Pseudorandomness in the (Inverseless) Haar Random Oracle Model
We study the (in)feasibility of quantum pseudorandom notions in a quantum analog of the random oracle model, where all the parties, including the adversary, have oracle access to the same Haar random unitary. In this model, we show the following:
• (Unbounded-query secure) pseudorandom unitaries (PRU) exist. Moreover, the PRU construction makes two calls to the Haar oracle.
• We consider constructions of PRUs making a single call to the Haar oracle. In this setting, we show that unbounded-query security is impossible to achieve. We complement this result by showing that bounded-query secure PRUs do exist with a single query to the Haar oracle.
• We show that multi-copy pseudorandom state generators and function-like state generators (with classical query access), making a single call to the Haar oracle, exist.
Our results have two consequences: (a) when the Haar random unitary is instantiated suitably, our results present viable approaches for building quantum pseudorandom objects without relying upon one-way functions, and (b) for the first time, we show that the key length in pseudorandom unitaries can be generically shrunk (relative to the output length). Our results are also among the first use cases of the new ``path recording'' formalism for Haar random unitaries, introduced in the recent breakthrough work of Ma and Huang.
PEARL-SCALLOP: Parameter Extension Applicable in Real-Life SCALLOP
A crucial ingredient for many cryptographic primitives such as key exchange protocols and advanced signature schemes is a commutative group action where the structure of the underlying group can be computed efficiently. SCALLOP provides such a group action, based on oriented supersingular elliptic curves.
We present PEARL-SCALLOP, a variant of SCALLOP that changes several parameter and design choices, thereby improving on both efficiency and security and enabling feasible parameter generation for larger security levels. Within the SCALLOP framework, our parameters are essentially optimal; the orientation is provided by a $2^e$-isogeny, where $2^e$ is roughly equal to the discriminant of the acting class group.
As an important subroutine we present a practical algorithm for generating oriented supersingular elliptic curves. To demonstrate our improvements, we provide a proof-of-concept implementation which instantiates PEARL-SCALLOP at all relevant security levels. Our timings are more than an order of magnitude faster than any previous implementation.
The Window Heuristic: Automating Differential Trail Search in ARX Ciphers with Partial Linearization Trade-offs
The search for optimal differential trails for ARX ciphers is known to be difficult and to scale poorly as the word size (and the branching through the carries of modular additions) increases. To overcome this problem, one may approximate the modular addition with the XOR operation, a process called linearization. The immediate drawback of this approach is that many valid and good trails are discarded. In this work, we explore different partial linearization trade-offs to model the modular addition through the \emph{window heuristic}, which restricts carry propagation to windows of $w_s$ consecutive positions. This strategy enables the exploration of full linearization ($w_s = 0$), normal modelling ($w_s = n$), and all the trade-offs between completeness and speed in between.
We give the corresponding SAT and MILP models and their parallel versions, and apply them to ChaCha, Speck, LEA, and HIGHT. Our method greatly outperforms all previous models of modular addition. In particular, we find the first differential path for 4 rounds of ChaCha with probability greater than $2^{-256}$, and a corresponding 6-round boomerang distinguisher. This indicates that purely differential attacks have the potential to become competitive with differential-linear attacks, currently the best-known attacks against ChaCha and other ARX ciphers. Finally, we exhibit an improved key-recovery attack on reduced LEA.
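To make the modelling target concrete: the quantity these SAT/MILP models encode is the probability of an XOR differential through modular addition, which can be checked by brute force for small word sizes (generic illustration, unrelated to the paper's tooling):

```python
# Brute-force XOR-differential probability of 8-bit modular addition:
# Pr over (x, y) that ((x^da) + (y^db)) mod 256 == ((x + y) mod 256) ^ dc.
# Trail-search models encode exactly this transition (for n = 32 or 64).
def xdp_add(da, db, dc, n=8):
    mask, hits = (1 << n) - 1, 0
    for x in range(1 << n):
        for y in range(1 << n):
            if ((x ^ da) + (y ^ db)) & mask == (((x + y) & mask) ^ dc):
                hits += 1
    return hits / (1 << (2 * n))

print(xdp_add(0x80, 0x80, 0x00))   # MSB differences cancel: prob 1.0
print(xdp_add(0x01, 0x00, 0x01))   # carry-dependent: prob < 1
```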
Pseudorandom Obfuscation and Applications
We introduce the notion of pseudorandom obfuscation, a way to obfuscate (keyed) pseudorandom functions $f_K$ in an average-case sense. We study several variants of pseudorandom obfuscation and show a number of applications.
1. Applications in the iO World: Our weakest variant of pseudorandom obfuscation, named obfuscation for identical pseudorandom functions (iPRO), is weaker than indistinguishability obfuscation (iO): rather than obfuscating arbitrary circuits as in iO, iPRO only obfuscates circuits computing pseudorandom functions. We show that iPRO already enables several applications of iO, such as unleveled fully homomorphic encryption (without assuming circular security) and succinct randomized encodings.
2. From iPRO to iO: Despite being a weaker notion than iO, we show two transformations from iPRO to iO. Our first (and main) construction builds iO from iPRO plus standard assumptions on cryptographic bilinear maps. Our second construction builds iO, and even ideal obfuscation, from iPRO alone in the pseudorandom oracle model (Jain, Lin, Luo and Wichs, CRYPTO 2023).
We also formulate stronger variants of pseudorandom obfuscation that help us go beyond the applications of indistinguishability obfuscation.
3. Applications outside the iO World: We show how to construct a succinct witness encryption scheme from a strong version of PRO, where the size of the ciphertext is independent of the witness size. Such a witness encryption scheme is not known to exist even assuming iO.
4. Construction: We show a *heuristic* construction of the strongest version of PRO, secure under the sub-exponential hardness of the learning with errors (LWE) problem, and the private-coin evasive LWE heuristic (Wee, EUROCRYPT 2022; Tsabary, CRYPTO 2022).
Finally, we highlight some barriers in achieving the strongest version of pseudorandom obfuscation.
The Learning Stabilizers with Noise problem
Random classical codes have good error correcting properties, and yet they are notoriously hard to decode in practice. Despite many decades of extensive study, the fastest known algorithms still run in exponential time. The Learning Parity with Noise (LPN) problem, which can be seen as the task of decoding a random linear code in the presence of noise, has thus emerged as a prominent hardness assumption with numerous applications in both cryptography and learning theory.
Is there a natural quantum analog of the LPN problem? In this work, we introduce the Learning Stabilizers with Noise (LSN) problem, the task of decoding a random stabilizer code in the presence of local depolarizing noise. We give both polynomial-time and exponential-time quantum algorithms for solving LSN in various depolarizing noise regimes, ranging from extremely low noise, to low constant noise rates, and even higher noise rates up to a threshold. Next, we provide concrete evidence that LSN is hard. First, we show that LSN includes LPN as a special case, which suggests that it is at least as hard as its classical counterpart. Second, we prove a worst-case to average-case reduction for variants of LSN. We then ask: what is the computational complexity of solving LSN? Because the task features quantum inputs, its complexity cannot be characterized by traditional complexity classes. Instead, we show that the LSN problem lies in a recently introduced (distributional and oracle) unitary synthesis class. Finally, we identify several applications of our LSN assumption, ranging from the construction of quantum bit commitment schemes to the computational limitations of learning from quantum data.
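For contrast with the quantum LSN samples, a classical LPN instance is just a noisy linear system over GF(2); the parameters below are illustrative and far from cryptographic size:

```python
# Generating a classical LPN instance: (A, b = A s + e mod 2) with
# sparse Bernoulli noise e; recovering s from (A, b) is the hard problem.
import numpy as np

rng = np.random.default_rng(1)
n, m, tau = 64, 256, 0.05            # secret length, samples, noise rate

s = rng.integers(0, 2, size=n)               # secret in GF(2)^n
A = rng.integers(0, 2, size=(m, n))          # random linear map / code
e = (rng.random(m) < tau).astype(int)        # Bernoulli(tau) noise
b = (A @ s + e) % 2                          # noisy labels

print(f"noisy positions: {e.sum()} of {m}")
```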
OpenNTT: An Automated Toolchain for Compiling High-Performance NTT Accelerators in FHE
Modern cryptographic techniques such as fully homomorphic encryption (FHE) have recently gained broad attention. Most of these cryptosystems rely on lattice problems wherein polynomial multiplication forms the computational bottleneck. A popular method to accelerate these polynomial multiplications is the Number-Theoretic Transform (NTT). Recent works aim to improve the practical deployability of NTT and propose toolchains supporting the NTT hardware accelerator design process. However, existing design tools do not provide on-the-fly twiddle factor generation (TFG), which leads to high memory demands. Motivated by this, we present OpenNTT, a fully automated, open-source framework to compile NTT hardware accelerators with TFG for various NTT types and parameter sets. We address the challenge of combining conflict-free memory accesses and efficient, linear twiddle factor generation through a dedicated NTT processing order. Following this order, we develop a flexible twiddle factor generation method with minimal memory usage. These core concepts, together with a frequency-optimized hardware architecture, form our OpenNTT framework. We use OpenNTT to compile and test NTT hardware designs with various parameter sets on FPGAs. The obtained results show a clear memory reduction due to TFG and a speedup of 2.7× in latency and 2.2× in area-time product, compared to prior art.
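The twiddle-factor trade-off is visible even in a textbook radix-2 NTT: the inner loop below generates each twiddle on the fly with one modular multiplication instead of reading a precomputed table (generic software sketch, not OpenNTT's hardware pipeline; toy parameters $p = 17$, $n = 8$, $\omega = 2$):

```python
# Iterative radix-2 NTT mod p with on-the-fly twiddle factor generation:
# each butterfly's twiddle comes from one modular multiplication
# (w *= w_len) instead of a precomputed table.
def ntt(a, omega, p):
    a, n = list(a), len(a)
    j = 0                                    # bit-reversal permutation
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit; bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:
        w_len = pow(omega, n // length, p)   # root of unity for this stage
        for start in range(0, n, length):
            w = 1                            # on-the-fly twiddle
            for k in range(length // 2):
                u = a[start + k]
                v = a[start + k + length // 2] * w % p
                a[start + k] = (u + v) % p
                a[start + k + length // 2] = (u - v) % p
                w = w * w_len % p            # next twiddle, no table read
        length <<= 1
    return a

p, omega = 17, 2                     # 2 has multiplicative order 8 mod 17
x = [1, 2, 3, 4, 0, 0, 0, 0]
X = ntt(x, omega, p)
# cross-check against the naive O(n^2) DFT definition
assert X == [sum(v * pow(omega, i * k, p) for k, v in enumerate(x)) % p
             for i in range(8)]
print(X)
```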
Provably Robust Watermarks for Open-Source Language Models
The recent explosion of high-quality language models has necessitated new methods for identifying AI-generated text. Watermarking is a leading solution and could prove to be an essential tool in the age of generative AI. Existing approaches embed watermarks at inference and crucially rely on the large language model (LLM) specification and parameters being secret, which makes them inapplicable to the open-source setting. In this work, we introduce the first watermarking scheme for open-source LLMs. Our scheme works by modifying the parameters of the model, but the watermark can be detected from just the outputs of the model. Perhaps surprisingly, we prove that our watermarks are unremovable under certain assumptions about the adversary's knowledge. To demonstrate the behavior of our construction under concrete parameter instantiations, we present experimental results with OPT-6.7B and OPT-1.3B. We demonstrate robustness to both token substitution and perturbation of the model parameters. We find that the stronger of these attacks, the model-perturbation attack, requires deteriorating the quality score to 0 out of 100 in order to bring the detection rate down to 50%.
More Efficient Isogeny Proofs of Knowledge via Canonical Modular Polynomials
Proving knowledge of a secret isogeny has recently been proposed as a means to generate supersingular elliptic curves of unknown endomorphism ring, but is equally important for cryptographic protocol design as well as for real world deployments. Recently, Cong, Lai and Levin (ACNS'23) have investigated the use of general-purpose (non-interactive) zero-knowledge proof systems for proving the knowledge of an isogeny of degree $2^k$ between supersingular elliptic curves. In particular, their approach is to model this relation via a sequence of $k$ successive steps of a walk in the supersingular isogeny graph and to show that the respective $j$-invariants are roots of the second modular polynomial. They then arithmetize this relation and show that this approach, when compared to state-of-the-art tailor-made proofs of knowledge by Basso et al. (EUROCRYPT'23), gives a 3-10$\times$ improvement in proof and verification times, with comparable proof sizes.
In this paper we ask whether we can further improve the modular polynomial-based approach and generalize its application to primes ${\ell>2}$, as used in some recent isogeny-based constructions. We will answer these questions affirmatively, by designing efficient arithmetizations for each ${\ell \in \{2, 3, 5, 7, 13\}}$ that achieve an improvement over Cong, Lai and Levin of up to 48%.
Our main technical tool and source of efficiency gains is a switch from classical modular polynomials to canonical modular polynomials. Adapting the well-known results on the former to the latter, however, is not straightforward and requires some technical effort. We prove various interesting connections via a novel use of resultant theory, and advance the understanding of canonical modular polynomials, which might be of independent interest.
Embedded Curves and Embedded Families for SNARK-Friendly Curves
Based on the CM method for primality testing (ECPP) by Atkin and Morain, published in 1993, we present two algorithms: one to generate embedded elliptic curves for SNARK-friendly curves with a variable discriminant D, and another to generate families (parameterized by polynomials) with a fixed discriminant D. When D = 3 mod 4, it is possible to obtain a prime-order curve and form a cycle. We first apply our technique to generate more embedded curves like Bandersnatch with BLS12-381, and we propose a plain twist-secure cycle above BLS12-381 with D = 6673027. We also discuss the scarcity of Bandersnatch-like CM curves and show that, with our algorithm, finding them is only a question of core-hours. Second, we obtain families of prime-order embedded curves of discriminant D = 3 for BLS and KSS18 curves. Our method yields families of embedded curves above KSS16 and can work for any KSS family. Our work generalizes the work on Bandersnatch (Masson, Sanso, and Zhang, and Sanso and El Housni).
A graph-theoretic approach to analyzing decoding failures of BIKE
We present experimental findings on the decoding failure rate (DFR) of BIKE, a fourth-round candidate in the NIST Post-Quantum Standardization process, at the 20-bit security level using graph-theoretic approaches. We select parameters according to BIKE design principles and conduct a series of experiments using Rust to generate significantly more decoding failure instances than in prior work using SageMath. For each decoding failure, we study the internal state of the decoder at each iteration and find that for 97% of decoding failures at block size $r=587$, the decoder reaches a fixed point within 7 iterations. We then consider the corresponding Tanner graphs of each decoding failure instance to determine whether the decoding failures are due to absorbing sets. We find that 81% of decoding failures at $r=587$ were caused by absorbing sets, and of these the majority were $(d,d)$-near codewords.
The Mysteries of LRA: Roots and Progresses in Side-channel Applications
Evaluation of cryptographic implementations with respect to side-channels is nowadays mandated at high security levels. Typically, the evaluation involves four stages: detection, modeling, certification and secret recovery. In pursuit of the specific goal of each stage, inherently different techniques were long considered necessary. However, since the recent works at EUROCRYPT 2022 and EUROCRYPT 2024, linear regression analysis (LRA) has uniquely become a technique that applies well throughout all the stages. In this paper, we concentrate on this silver-bullet technique within the field of side-channel analysis. First, we address the fundamental problems of why and how to use LRA. The discussion of its nominal and binary nature explains its broad applicability. To sustain effective outcomes, we provide in-depth analyses of the design matrix, regarding the sample distribution of the plaintext and the chosen polynomial degree. We summarize ideal conditions that entirely avoid the multicollinearity problem, and explore the novel evaluator-advantageous property of LRA by means of model diagnosis. Then, we trace its roots, theoretically elaborating its connections with traditional side-channel techniques, including Correlation Power Analysis (CPA), Distance-of-Means analysis (DoM) and Partition Power Analysis (PPA), in terms of regression coefficients, regression model and coefficient of determination. Finally, we probe into the state-of-the-art combined LRA with the so-called collapse function, demonstrating its relationship with another refined technique, G-DoM. We argue that properly relaxing the definition of bit groups equally satisfies our conclusions. Experimental results are in line with the theory, confirming its correctness.
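A minimal LRA instance looks as follows: simulated traces, a standard degree-1 bit-level design matrix, and the coefficient of determination as the distinguisher score. The S-box, key, and noise model are illustrative, not taken from the paper:

```python
# Minimal linear regression analysis (LRA) on simulated power traces:
# model the leakage of an S-box output as a linear function of its bits
# and use the coefficient of determination R^2 as the distinguisher.
import numpy as np

SBOX = np.random.default_rng(0).permutation(256)   # stand-in 8-bit S-box
rng = np.random.default_rng(1)

key = 0x2A
pt = rng.integers(0, 256, size=2000)
v = SBOX[pt ^ key]
traces = np.unpackbits(v.astype(np.uint8)).reshape(-1, 8).sum(1) \
         + rng.normal(0, 1.0, size=pt.size)        # HW leakage + noise

def r2_for_guess(k):
    vk = SBOX[pt ^ k]
    X = np.unpackbits(vk.astype(np.uint8)).reshape(-1, 8).astype(float)
    X = np.column_stack([np.ones(pt.size), X])     # degree-1 design matrix
    beta, *_ = np.linalg.lstsq(X, traces, rcond=None)
    resid = traces - X @ beta
    return 1 - resid.var() / traces.var()

scores = [r2_for_guess(k) for k in range(256)]
print(hex(int(np.argmax(scores))))   # should print 0x2a, the correct key
```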
Optimizing Message Range and Ciphertext Storage in GSW Encryption Using CRT and PVW-like Compression Scheme
This paper explores advancements in the Gentry-Sahai-Waters (GSW) fully homomorphic encryption scheme, addressing challenges related to message data range limitations and ciphertext size constraints. We introduce a novel approach utilizing the Chinese Remainder Theorem (CRT) for message decomposition, significantly expanding the allowable message range to the entire plaintext space. This method enables unrestricted message selection and supports parallel homomorphic operations without intermediate decryption. Additionally, we adapt existing ciphertext compression techniques, such as the PVW-like scheme, to reduce memory overhead associated with ciphertexts. Our experimental results demonstrate the effectiveness of the CRT-based decomposition in increasing the upper bound of message values and improving the scheme's capacity for consecutive homomorphic operations. However, compression introduces a trade-off, necessitating a reduced message range due to error accumulation. This research contributes to enhancing the practicality and efficiency of the GSW encryption scheme for complex computational scenarios while managing the balance between expanded message range, computational complexity, and storage requirements.
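The CRT bookkeeping itself is standard and can be sketched in the clear (plain integers, no encryption; the moduli below are illustrative): decompose, operate component-wise as the per-residue homomorphic evaluations would, then reconstruct.

```python
# CRT message decomposition sketch (plaintext only): split a message into
# residues modulo pairwise-coprime m_i, operate component-wise, then
# reconstruct modulo M = prod(m_i).
from math import prod

MODULI = [3, 5, 7, 11]            # pairwise coprime; M = 1155
M = prod(MODULI)

def decompose(m):
    return [m % mi for mi in MODULI]

def reconstruct(residues):
    acc = 0
    for ri, mi in zip(residues, MODULI):
        Mi = M // mi
        acc += ri * Mi * pow(Mi, -1, mi)   # Mi^{-1} mod mi
    return acc % M

a, b = 123, 456
prod_res = [(x * y) % mi for x, y, mi in
            zip(decompose(a), decompose(b), MODULI)]
assert reconstruct(prod_res) == (a * b) % M    # component-wise product
```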
One Time Pad and the Short Key Dream
This is a survey on the One Time Pad (OTP) and its derivatives, from its origins to modern times. OTP, if used correctly, is (the only) cryptographic code that no computing power, present or future, can break. Naturally, the discussion shifts to the creation of long random sequences, starting from short ones, which can be easily shared. We could call it the Short Key Dream. Many problems inevitably arise, which affect many fields of computer science, mathematics and knowledge in general. This work presents a vast bibliography that includes fundamental classical works and current papers on randomness, pseudorandom number generators, compressibility, unpredictability and more.
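The scheme itself fits in a few lines; everything else the survey covers concerns where the key stream comes from and why reuse is fatal:

```python
# One-time pad: XOR with a truly random key of the same length.
import secrets

msg = b"attack at dawn"
key = secrets.token_bytes(len(msg))             # must never be reused
ct  = bytes(m ^ k for m, k in zip(msg, key))
assert bytes(c ^ k for c, k in zip(ct, key)) == msg
```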
Radical 2-isogenies and cryptographic hash functions in dimensions 1, 2 and 3
We provide explicit descriptions for radical 2-isogenies in dimensions one, two and three using theta coordinates. These formulas allow us to efficiently navigate the corresponding isogeny graphs.
As an application of this, we implement different versions of the CGL hash function. Notably, the three-dimensional version is fastest, which demonstrates yet another potential of using higher-dimensional isogeny graphs in cryptography.
Arc: Accumulation for Reed--Solomon Codes
Proof-Carrying Data (PCD) is a foundational tool for ensuring the correctness of incremental distributed computations that has found numerous applications in theory and practice. The state-of-the-art PCD constructions are obtained via accumulation or folding schemes. Unfortunately, almost all known constructions of accumulation schemes rely on homomorphic vector commitments (VCs), which results in relatively high computational costs and insecurity in the face of quantum adversaries. A recent work of Bünz, Mishra, Nguyen, and Wang removes the dependence on homomorphic VCs by relying only on the random oracle model, but introduces a bound on the number of consecutive accumulation steps, which in turn bounds the depth of the PCD computation graph and greatly affects prover and verifier efficiency.
In this work, we propose Arc, a novel hash-based accumulation scheme that overcomes this restriction and supports an unbounded number of accumulation steps. The core building block underlying Arc is a new accumulation scheme for claims about proximity of claimed codewords to the Reed--Solomon code. Our approach achieves near-optimal efficiency, requiring a small number of Merkle tree openings relative to the code rate, and avoids the efficiency loss associated with bounded accumulation depth. Unlike prior work, our scheme is also able to accumulate claims up to list-decoding radius, resulting in concrete efficiency improvements.
We use this accumulation scheme to construct two distinct accumulation schemes, again relying solely on random oracles. The first approach accumulates RS proximity claims and can be used as an almost-drop-in replacement in existing PCD deployments based on IOP-based SNARKs.
The second approach directly constructs an accumulation scheme for rank-1 constraint systems (and more generally polynomial constraint systems) that is simpler and more efficient than the former and prior approaches.
We introduce the notion of Interactive Oracle Reductions (IORs) to enable a modular and simple security analysis. These extend prior notions of Reductions of Knowledge to the setting of IOPs.
Secure and Efficient Outsourced Matrix Multiplication with Homomorphic Encryption
Fully Homomorphic Encryption (FHE) is a promising privacy-enhancing technique that enables secure and private data processing on untrusted servers, such as privacy-preserving neural network (NN) evaluations. However, its practical application presents significant challenges. Limitations in how data is stored within homomorphic ciphertexts and restrictions on the types of operations that can be performed create computational bottlenecks. As a result, a growing body of research focuses on optimizing existing evaluation techniques for efficient execution in the homomorphic domain.
One key operation in this space is matrix multiplication, which forms the foundation of most neural networks. Several studies have even proposed new FHE schemes specifically to accelerate this operation. The optimization of matrix multiplication is also the primary goal of our work. We leverage the Single Instruction Multiple Data (SIMD) capabilities of FHE to increase data packing and significantly reduce the KeySwitch operation count---an expensive low-level routine in homomorphic encryption. By minimizing KeySwitching, we surpass current state-of-the-art solutions, requiring only a minimal multiplicative depth of two.
The best-known complexity for matrix multiplication at this depth is $\mathcal{O}(d)$ for matrices of size $d\times d$. Remarkably, even the leading techniques that require a multiplicative depth of three still incur a KeySwitch complexity of $\mathcal{O}(d)$. In contrast, our method reduces this complexity to $\mathcal{O}(\log{d})$ while maintaining the same level of data packing. Our solution applies broadly to all FHE schemes supporting SIMD operations.
We further generalize the technique in two directions: allowing arbitrary packing availability and extending it to rectangular matrices. This versatile approach offers significant improvements in matrix multiplication performance and enables faster evaluation of privacy-preserving neural network applications.
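To make the cost model concrete, the toy Python sketch below simulates SIMD "slot" packing in plaintext and counts rotations, each of which would cost one KeySwitch in an actual FHE evaluation. It implements the classic diagonal (Halevi-Shoup) matrix-vector method as an $\mathcal{O}(d)$-rotation baseline; the paper's $\mathcal{O}(\log{d})$-KeySwitch matrix multiplication technique is not reproduced here, and all names and parameters are illustrative assumptions.

```python
# Toy plaintext simulation of SIMD slot packing: each nontrivial rotation
# of a packed vector stands in for one KeySwitch, the dominant cost above.
# Baseline diagonal (Halevi-Shoup) matrix-vector method, O(d) rotations;
# NOT the paper's O(log d) matrix-multiplication technique.

rotations = 0                       # each rotation = one KeySwitch in FHE

def rotate(vec, k):
    """Cyclic left rotation of a packed vector (one KeySwitch if k != 0)."""
    global rotations
    k %= len(vec)
    if k == 0:
        return vec[:]
    rotations += 1
    return vec[k:] + vec[:k]

def diagonal_matvec(M, v):
    """Compute M @ v with one rotation per generalized diagonal of M."""
    d = len(v)
    out = [0] * d
    for k in range(d):              # d diagonals -> O(d) rotations
        diag = [M[i][(i + k) % d] for i in range(d)]
        vk = rotate(v, k)
        out = [o + m_ * x for o, m_, x in zip(out, diag, vk)]
    return out

if __name__ == "__main__":
    d = 4
    M = [[i * d + j + 1 for j in range(d)] for i in range(d)]
    v = [1, 2, 3, 4]
    assert diagonal_matvec(M, v) == [
        sum(M[i][j] * v[j] for j in range(d)) for i in range(d)]
    print(f"d={d}: used {rotations} rotations (one KeySwitch each)")
```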
cuTraNTT: A Novel Transposed Number Theoretic Transform Targeting Low Latency Homomorphic Encryption for IoT Applications
Large polynomial multiplication is one of the computational bottlenecks in fully homomorphic encryption implementations. These multiplications are usually implemented via the number-theoretic transform (NTT) to speed up the computation. State-of-the-art GPU-based implementations of fully homomorphic encryption compute the NTT in two separate kernels, owing to the synchronization required between GPU blocks to ensure correctness. This can be a serious limitation on embedded systems that have only constrained computational resources to support time-consuming homomorphic encryption. In this paper, we propose a series of techniques to improve the performance of the NTT for homomorphic encryption on GPU devices. First, we propose arranging the polynomials in a transposed manner and skipping the last two levels of the radix-4 NTT, which lets us avoid block synchronization in the NTT implementation entirely. This technique improves the performance of homomorphic encryption by 1.37× and 1.34× on an RTX 4060 and a Jetson Orin Nano respectively, compared to the conventional approach that uses the full NTT without skipping any levels. However, this approach also introduces extra overhead in the subsequent point-wise multiplication, which slows down homomorphic multiplication. To reduce this negative impact, we propose a fast 16 × 16 point-wise multiplication implementation that relies on the heavily optimized Toom-Cook 4-way algorithm. Experimental results show that our proposed homomorphic multiplication achieves latency similar to that of Jung et al. and Yang et al., the best results to date. This shows that the proposed cuTraNTT reduces the latency of homomorphic encryption without sacrificing the performance of homomorphic multiplication.
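For orientation, here is a minimal recursive radix-2 NTT in Python, the baseline object that cuTraNTT restructures. The transposed data layout, radix-4 butterflies, skipped levels, and GPU block-synchronization concerns described above are not reflected in this sketch; the modulus $P = 257$ and size $N = 8$ are toy choices.

```python
# Minimal recursive radix-2 NTT over Z_P with a cyclic-convolution check.
# Single-threaded reference only: cuTraNTT's transposed layout, radix-4
# butterflies, and skipped levels are not reflected here.

P = 257                        # prime with 256 | P-1, so 8th roots exist
N = 8
W = pow(3, (P - 1) // N, P)    # 3 is a primitive root mod 257

def ntt(a, omega):
    """Cooley-Tukey: split into even/odd halves, combine with twiddles."""
    n = len(a)
    if n == 1:
        return a[:]
    even = ntt(a[0::2], omega * omega % P)
    odd = ntt(a[1::2], omega * omega % P)
    out = [0] * n
    for k in range(n // 2):
        t = pow(omega, k, P) * odd[k] % P
        out[k] = (even[k] + t) % P
        out[k + n // 2] = (even[k] - t) % P
    return out

def intt(A):
    """Inverse transform: forward NTT with omega^-1, then scale by N^-1."""
    a = ntt(A, pow(W, P - 2, P))
    inv_n = pow(N, P - 2, P)
    return [x * inv_n % P for x in a]

def poly_mul(a, b):
    """Multiply mod (x^N - 1): transform, point-wise multiply, invert."""
    A, B = ntt(a, W), ntt(b, W)
    return intt([x * y % P for x, y in zip(A, B)])

if __name__ == "__main__":
    a = [1, 2, 3, 4, 0, 0, 0, 0]
    b = [5, 6, 7, 0, 0, 0, 0, 0]
    ref = [sum(a[j] * b[(k - j) % N] for j in range(N)) % P
           for k in range(N)]
    assert poly_mul(a, b) == ref
    print("cyclic convolution via NTT:", poly_mul(a, b))
```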
On Key Substitution Attacks against Aggregate Signatures and Multi-Signatures
When we use signature schemes in practice, we sometimes need to consider security notions beyond unforgeability.
This paper considers security against key substitution attacks of multi-signer signatures (i.e., aggregate signatures and multi-signatures).
Intuitively, this security property ensures that a malicious party cannot claim ownership of a signature created by an honest signer.
We investigate security against key substitution attacks of a wide range of aggregate signature schemes and multi-signature schemes: the Boneh-Gentry-Lynn-Shacham aggregate signature scheme, the sequential aggregate signature scheme by Lysyanskaya et al., the multi-signature scheme by Bellare and Neven, MuSig2, and the ordered multi-signature scheme by Boldyreva et al.
Furthermore, for schemes that do not provide security against key substitution attacks, we show how to modify them so that they achieve it.
(Quantum) Indifferentiability and Pre-Computation
Indifferentiability is a popular cryptographic paradigm for analyzing the security of ideal objects, both in the classical and in the quantum world. It is typically stated in the form of a composable and simulation-based definition, and captures what it means for a construction (e.g., a cryptographic hash function) to be ``as good as'' an ideal object (e.g., a random oracle). Despite its strength, indifferentiability is not known to offer security against pre-processing attacks in which the adversary gains access to (classical or quantum) advice that is relevant to the particular construction. In this work, we show that indifferentiability is (generically) insufficient for capturing pre-computation. To remedy this shortcoming, we propose a strengthening of indifferentiability which is not only composable but also takes arbitrary pre-computation into account. As an application, we show that the one-round sponge is indifferentiable (with pre-computation) from a random oracle. This yields the first (and tight) classical/quantum space-time trade-off for one-round sponge inversion.
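For readers unfamiliar with the construction, the sketch below gives a toy one-round sponge in Python: one absorb of an $r$-bit block into a $b$-bit state, a single permutation call, and one squeeze. The seeded shuffle standing in for an ideal permutation and the 16-bit state are illustrative assumptions only; inverting this map given just the $r$ output bits is the one-round sponge inversion problem the trade-off concerns.

```python
# Toy one-round sponge: absorb one r-bit block into a b-bit state, make a
# single permutation call, squeeze r bits. The seeded shuffle is a toy
# stand-in for an ideal permutation; all sizes are illustrative.
import random

R, C = 8, 8                    # rate and capacity; state size b = R + C
B = R + C

_tbl = list(range(1 << B))
random.Random(0xC0FFEE).shuffle(_tbl)   # fixed public permutation table

def one_round_sponge(m: int) -> int:
    assert 0 <= m < (1 << R)
    state = m << C             # rate half holds the message, capacity is 0
    state = _tbl[state]        # the single permutation call
    return state >> C          # squeeze: first R bits of the new state

if __name__ == "__main__":
    print("outputs:", [one_round_sponge(m) for m in range(4)])
    # Recovering m from one_round_sponge(m) given only the R output bits
    # is exactly the inversion problem the space-time trade-off bounds.
```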
Certified Randomness implies Secure Classical Position-Verification
Liu et al. (ITCS 2022) initiated the study of designing a secure position verification protocol based on a specific proof of quantumness protocol and classical communication. In this paper, we study this topic further and answer some of the questions left open in that work. We provide a new generic compiler that can convert any single-round proof-of-quantumness-based certified randomness protocol into a secure classical-communication-based position verification scheme. We then extend our compiler to several kinds of multi-round proof-of-quantumness-based certified randomness protocols. Moreover, we instantiate our compiler with the random circuit sampling (RCS)-based certified randomness protocol proposed by Aaronson and Hung (STOC 2023). RCS-based techniques are within reach of today's NISQ devices; our design therefore overcomes the limitation of the Liu et al. protocol, which would require a fault-tolerant quantum computer to realize. Moreover, this is one of the first cryptographic applications of RCS-based techniques beyond certified randomness.
PISA: Privacy-Preserving Smart Parking
In recent years, urban areas have experienced a rapid increase in vehicle numbers, while the availability of parking spaces has remained largely static, leading to a significant shortage of parking spots. This shortage creates considerable inconvenience for drivers and contributes to traffic congestion. A viable solution is the temporary use of private parking spaces by homeowners during their absence, providing a means to alleviate the parking problem and generate additional income for the owners. However, current systems for sharing parking spaces often neglect security and privacy concerns, exposing users to potential risks.
This paper presents PISA, a novel Privacy-Preserving Smart Parking scheme designed to address these issues through a cryptographically secure protocol. PISA enables the anonymous sharing of parking spots and allows vehicle owners to park without revealing any personal identifiers. Our primary contributions include the development of a comprehensive bi-directional anonymity framework that ensures neither party can identify the other, and the use of formal verification methods to substantiate the soundness and reliability of our security measures. Unlike existing solutions, which often lack a security focus, fail to provide formal validation, or are computationally intensive, PISA is designed to be both secure and efficient.
Straight-Line Knowledge Extraction for Multi-Round Protocols
The Fiat-Shamir (FS) transform is the standard approach to compiling interactive proofs into non-interactive ones. However, the fact that knowledge extraction typically requires rewinding limits its applicability unless one relies on further heuristic conjectures. A better alternative is a transform that guarantees straight-line knowledge extraction. Two such transforms were given by Pass (CRYPTO '03) and Fischlin (CRYPTO '05), respectively, with the latter giving the most practical parameters. Pass's approach, which is based on cut-and-choose, was also adapted by Unruh (EUROCRYPT '12, '14, '15) to the quantum setting, where rewinding poses a different set of challenges. All of these transforms are tailored to three-round Sigma protocols and do not apply to a number of popular paradigms for building succinct proofs (e.g., those based on folding or sumcheck), which rely on multi-round protocols.
This work initiates the study of transforms with straight-line knowledge extraction for multi-round protocols. We give two transforms, which can be thought of as multi-round analogues of those by Fischlin and Pass. Our first transform leads to more efficient proofs, but applies to a smaller class of protocols than the second. Our second transform also admits a proof of security in the Quantum Random Oracle Model (QROM), making it the first transform for multi-round protocols that avoids the super-polynomial security loss affecting the existing QROM analysis of the FS transform (Don et al., CRYPTO '20).
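To illustrate what straight-line extraction means, the toy Python sketch below implements a Fischlin-style transform for the three-round Schnorr protocol, the setting these works generalize from. The tiny group, the single repetition, and the hash test requiring $T$ trailing zero bits are illustrative assumptions, not the paper's multi-round transforms.

```python
# Toy Fischlin-style transform for 3-round Schnorr: the prover grinds
# challenges until a hash test passes; an extractor reading the random-
# oracle queries recovers the witness with no rewinding. Single row and
# tiny parameters for illustration only.
import hashlib, random

Pg, Qg, G = 2039, 1019, 4      # tiny Schnorr group (order-1019 subgroup)
T = 10                         # the hash must end in T zero bits

random.seed(3)
w = random.randrange(1, Qg)    # witness
Y = pow(G, w, Pg)              # statement: Y = G^w

queries = []                   # extractor's view of all RO queries

def H(s: str) -> int:
    queries.append(s)
    return int.from_bytes(hashlib.sha256(s.encode()).digest(), "big")

# prover: commit once, then search for a challenge passing the hash test
r = random.randrange(Qg)
A = pow(G, r, Pg)
for c in range(1 << 16):
    z = (r + c * w) % Qg
    if H(f"{Y}|{A}|{c}|{z}") % (1 << T) == 0:
        proof = (A, c, z)
        break

A, c, z = proof                # verifier: Schnorr check (hash test omitted)
assert pow(G, z, Pg) == A * pow(Y, c, Pg) % Pg

# extractor: two recorded queries with the same commitment and distinct
# challenges give w (the prover needs >= 2 queries except w.p. 2^-T)
(_, _, c1, z1), (_, _, c2, z2) = [q.split("|") for q in queries[:2]]
c1, z1, c2, z2 = int(c1), int(z1), int(c2), int(z2)
w_ext = (z1 - z2) * pow(c1 - c2, -1, Qg) % Qg
assert w_ext == w
print("straight-line extracted witness:", w_ext)
```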
Proving the Security of the Extended Summation-Truncation Hybrid
Since designing a dedicated secure symmetric PRF is difficult, various works have studied optimally secure PRFs built from the sum of independent permutations (SoP).
At CRYPTO'20, Gunsing and Mennink proposed the Summation-Truncation Hybrid (STH).
While based on SoP, STH releases an additional $a \leq n$ bits of each permutation call and sums the remaining $n-a$ bits. Thus, it produces $n+a$ output bits with $O(n-a/2)$-bit PRF security.
Both SoP and STH can be used directly in encryption schemes or MACs in place of permutation calls to obtain higher security. However, simply replacing every call as in GCM-SIV$r$ would demand more calls.
For encryption schemes, Iwata's XORP scheme has long been known to provide a better trade-off between efficiency and security. It extends SoP to variable-length outputs by using $r+1$ calls to a block cipher, where the output of one call is added to each of the other $r$ outputs (a toy sketch of SoP, STH, and XORP follows this abstract).
A similar extension can be made for STH, which we call XTH, the XORP-Truncation Hybrid.
Such an extension was already suggested in the final discussion by Gunsing and Mennink, but left as an open problem. This work fills the gap by formalizing and proving the security of XTH.
For a rate of $r/(r+1)$ as in XORP, we show $O(n-a/2-1.5\log(r))$-bit security for XTH.
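The following toy Python sketch spells out the three constructions named above (SoP, STH, and XORP) using a 4-round Feistel network as a stand-in permutation. The block size, key handling, and the domain separation used in XORP are illustrative assumptions; the reading of STH's parameters, releasing $a$ bits from each of the two calls, follows the arithmetic in the abstract.

```python
import hashlib

N_BITS = 32                       # toy block size n
HALF = N_BITS // 2

def perm(key: bytes, x: int) -> int:
    """Toy 4-round Feistel permutation on n-bit blocks (a stand-in for a
    secure block cipher; illustrative only)."""
    l, r = x >> HALF, x & ((1 << HALF) - 1)
    for rnd in range(4):
        f = int.from_bytes(hashlib.sha256(
            key + bytes([rnd]) + r.to_bytes(2, "big")).digest()[:2], "big")
        l, r = r, l ^ f
    return (l << HALF) | r

def sop(k1, k2, x):
    """SoP: n output bits from the sum (XOR) of two permutation calls."""
    return perm(k1, x) ^ perm(k2, x)

def sth(k1, k2, x, a):
    """STH: release a bits from each call, sum the remaining n-a bits,
    for 2a + (n-a) = n+a output bits (one reading of the parameters)."""
    y1, y2 = perm(k1, x), perm(k2, x)
    low = (1 << (N_BITS - a)) - 1
    return y1 >> (N_BITS - a), y2 >> (N_BITS - a), (y1 & low) ^ (y2 & low)

def xorp(key, x, r):
    """XORP: r+1 calls on domain-separated inputs; one output is XORed
    into each of the other r, giving r*n output bits."""
    assert r < 16 and x < (1 << (N_BITS - 4))
    y0 = perm(key, x << 4)
    return [y0 ^ perm(key, (x << 4) | i) for i in range(1, r + 1)]

if __name__ == "__main__":
    k1, k2 = b"key-one", b"key-two"
    print("SoP :", hex(sop(k1, k2, 42)))
    print("STH :", [hex(v) for v in sth(k1, k2, 42, a=8)])
    print("XORP:", [hex(v) for v in xorp(k1, 42, 3)])
```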
Revisiting Fermat's Factorization Method
This paper addresses the problem of factoring composite numbers by introducing a novel approach to represent their prime divisors. We develop a method to efficiently identify smaller divisors based on the difference between the primes involved in forming the composite number. Building on these insights, we propose an algorithm that significantly reduces the computational complexity of factoring, requiring half as many iterations as traditional quadratic residue-based methods. The presented algorithm offers a more efficient solution for factoring composite numbers, with potential applications in fields such as cryptography and computational number theory.
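For reference, the classical baseline the abstract improves upon is sketched below in Python: the textbook Fermat method, which searches for $a$ with $a^2 - n = b^2$ and factors $n = (a-b)(a+b)$. The paper's improved divisor representation and reduced iteration count are not reproduced here.

```python
from math import isqrt

def fermat_factor(n: int):
    """Textbook Fermat factorization: find a with a^2 - n = b^2, then
    n = (a-b)(a+b). Fast when the two prime factors are close together."""
    assert n % 2 == 1 and n > 1
    a = isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = isqrt(b2)
        if b * b == b2:
            return a - b, a + b
        a += 1

if __name__ == "__main__":
    n = 10403                 # 101 * 103: close primes, found in one step
    p, q = fermat_factor(n)
    print(p, q, p * q == n)
```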
An Efficient Noncommutative NTRU from Semidirect Product
NTRU is one of the most extensively studied lattice-based schemes. Its flexible design has inspired proposals constructed over different rings, some aiming to enhance security and others focusing on improving performance. The literature has introduced a line of noncommutative NTRU-like designs that claim to offer greater resistance to existing attacks. However, most of these proposals are either purely theoretical or fall short in time and memory requirements when compared to standard NTRU. To our knowledge, DiTRU (Africacrypt 2024) is the first noncommutative analog of NTRU provided as a complete package. Although DiTRU is practical, it runs two times slower than NTRU when no decryption failures are allowed; with negligible decryption failure, its key generation, encryption, and decryption are still 1.2, 1.7, and 1.7 times slower, respectively. In this work, we introduce a noncommutative version of NTRU that offers performance and key sizes comparable to NTRU while improving upon DiTRU. Our cryptosystem is based on the GR-NTRU framework, utilizing the group ring of a semidirect product of cyclic groups over the ring of Eisenstein integers. This design allows for an efficient construction with key generation approximately two (three) times faster than NTRU (DiTRU). Further, the proposed scheme provides roughly a speed-up by a factor of 1.2 (2) when encrypting/decrypting messages of the same length over NTRU (DiTRU). We provide a reference implementation in C for the proposed cryptosystem to support our claims.
Pseudorandom Multi-Input Functional Encryption and Applications
We construct the first multi-input functional encryption (MIFE) and indistinguishability obfuscation (iO) schemes for pseudorandom functionalities, where the output of the functionality is pseudorandom for every input seen by the adversary. Our MIFE scheme relies on LWE and evasive LWE (Wee, Eurocrypt 2022 and Tsabary, Crypto 2022) for constant arity functions, and a strengthening of evasive LWE for polynomial arity. Thus, we obtain the first MIFE and iO schemes for a nontrivial functionality from conjectured post-quantum assumptions.
Along the way, we identify subtle issues in the proof of witness encryption from evasive LWE by prior work and believe that a similar strengthening of evasive LWE should also be required for their proof, for the same reasons as ours. We demonstrate the power of our new tools via the following applications:
1. Multi Input Predicate Encryption for Constant Arity. Assuming evasive LWE and LWE, we construct a multi-input predicate encryption scheme (MIPE) for P, supporting constant arity. The only prior work to support MIPE for P with constant arity, by Agrawal et al. (CRYPTO 2023), relies on a strengthening of Tensor LWE in addition to LWE and evasive LWE.
2. Multi Input Predicate Encryption for Polynomial Arity. Assuming a stronger variant of evasive LWE and LWE, we construct MIPE for P for polynomial arity. MIPE for polynomial arity supporting P was not known before, to the best of our knowledge.
3. Two Party ID Based Key Exchange. Assuming a stronger variant of evasive LWE and LWE, along with Decision Bilinear Diffie-Hellman, we provide the first two-party ID based Non-Interactive Key Exchange (ID-NIKE) scheme in the standard model. This leads to the first ID-NIKE in the standard model without using multilinear maps or indistinguishability obfuscation.
4. Instantiating the Random Oracle. We use our pseudorandom iO to instantiate the random oracle in several applications that previously used iO (Hohenberger, Sahai and Waters, Eurocrypt 2014), such as the full-domain hash signature scheme based on trapdoor permutations, among others.
Our tools of MIFE and iO for pseudorandom functionalities appear quite powerful and yield extremely simple constructions when used in applications. We believe they provide a new pathway for basing “extreme” cryptography, which has so far required full-fledged iO, on the presumably weaker evasive LWE assumption in the post-quantum regime.
Compact Pseudorandom Functional Encryption from Evasive LWE
We provide the first construction of compact Functional Encryption (FE) for pseudorandom functionalities from the evasive LWE and LWE assumptions. Intuitively, a pseudorandom functionality means that the output of the circuit is indistinguishable from uniform for every input seen by the adversary. This yields the first compact FE for a nontrivial class of functions which does not rely on pairings.
We demonstrate the power of our new tool by using it to achieve optimal parameters for both key-policy and ciphertext-policy Attribute Based Encryption (ABE) schemes for circuits of unbounded depth, from just the LWE and evasive LWE assumptions. This improves prior work along the twin axes of assumptions and performance. In more detail, this allows us to: (i) replace the circular evasive LWE assumption used in the work of Hsieh, Lin and Luo (FOCS 2023) by plain evasive LWE, (ii) remove the need for the circular tensor LWE assumption in the work of Agrawal, Kumari and Yamada (CRYPTO 2024), and (iii) improve the parameters obtained by both aforementioned works to achieve asymptotic optimality.
Previously, optimal parameters for ABE schemes were only achieved using compact FE for P (Jain, Lin and Luo, Eurocrypt 2023); we show that compact FE for a much weaker class (albeit with incomparable security) suffices. Thus we obtain the first optimal ABE schemes for unbounded-depth circuits that can be conjectured post-quantum secure. Along the way, we define and construct a new primitive, which we term laconic pseudorandom obfuscation, from the same assumptions; this may be of independent interest.
Drifting Towards Better Error Probabilities in Fully Homomorphic Encryption Schemes
There are two security notions for FHE schemes: the traditional notion of IND-CPA and the more stringent notion of IND-CPA$^D$. The notions are equivalent if the FHE scheme is perfectly correct; however, for schemes with negligible failure probability, the FHE parameters needed to obtain IND-CPA$^D$ security can be much larger than those needed to obtain IND-CPA security. This paper uses the notion of ciphertext drift to understand the practical difference between IND-CPA and IND-CPA$^D$ security in schemes such as FHEW, TFHE and FINAL. This notion allows us to define a modulus switching operation (the main culprit for the difference in parameters) such that one does not need to enlarge the IND-CPA cryptographic parameters to meet the IND-CPA$^D$ security level. Further, the extra cost incurred by the new techniques has no noticeable performance impact in practical applications. The paper also formally defines a stronger version of IND-CPA$^D$ security, called sIND-CPA$^D$, which is proved to be strictly separated from the IND-CPA$^D$ notion. A criterion for turning an IND-CPA$^D$-secure public-key encryption scheme into an sIND-CPA$^D$-secure one is also provided.
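The sketch below illustrates generic LWE-style modulus switching, the operation singled out above as the main culprit: rounding ciphertext coefficients from modulus $q$ down to $q'$ introduces a small additive "drift" into the decryption phase, which correct decoding must absorb. The parameters are toy choices and the code follows the generic LWE template, not the concrete FHEW/TFHE/FINAL procedures.

```python
# Toy LWE modulus switch: round each coefficient from modulus q to q_new.
# The rounding adds a small drift to the phase. Generic LWE template with
# toy parameters; not the actual FHEW/TFHE/FINAL procedures.
import random

q, q_new, n = 1 << 32, 1 << 10, 16
random.seed(0)
s = [random.randrange(2) for _ in range(n)]        # binary secret key
a = [random.randrange(q) for _ in range(n)]
m, delta = 1, q // 4                               # message m in {0,...,3}
e = random.randrange(-50, 50)                      # fresh ciphertext noise
b = (sum(x * y for x, y in zip(a, s)) + delta * m + e) % q

def switch(x: int) -> int:
    """Round x * q_new / q to the nearest integer, reduced mod q_new."""
    return (x * q_new + q // 2) // q % q_new

a2, b2 = [switch(x) for x in a], switch(b)

# decode under the smaller modulus: the drift stays below q_new / 8 here
phase = (b2 - sum(x * y for x, y in zip(a2, s))) % q_new
decoded = round(phase / (q_new // 4)) % 4
print("decoded:", decoded, "(expected", m, ")")
```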
Practical Asynchronous MPC from Lightweight Cryptography
We present an asynchronous secure multi-party computation (MPC) protocol that is practically efficient. Our protocol can evaluate any arithmetic circuit with communication linear in the number of parties per multiplication gate, while relying solely on computationally lightweight cryptography such as hash functions and symmetric encryption. Our protocol is optimally resilient and tolerates $t$ malicious parties among $n = 3t+1$ parties.
At the technical level, we manage to apply the \emph{player-elimination} paradigm to asynchronous MPC. This framework enables the detection and eviction of cheating parties by repeatedly attempting to generate Beaver triples. Once all malicious parties are eliminated, honest parties can proceed with efficient Beaver triple generation.
While this approach is standard in synchronous MPC, it presents several technical challenges when adopted in an asynchronous network, which we address in this work.
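As background for the triple-generation framework above, the toy Python sketch below shows the standard Beaver multiplication step over additive secret sharing in the semi-honest, synchronous setting: once a preprocessed triple $(a, b, c)$ with $c = ab$ is available, multiplying shared values requires only two openings and local arithmetic. The field size and three-party setup are illustrative; the paper's asynchronous, maliciously secure triple generation is not reproduced.

```python
# Toy Beaver multiplication over additive shares mod P (semi-honest,
# synchronous). Illustrates why preprocessed triples make multiplication
# gates cheap; not the paper's asynchronous protocol.
import random

P = 2**31 - 1                 # field modulus (a Mersenne prime)
N_PARTIES = 3
rng = random.Random(0)

def share(x):
    """Split x into N_PARTIES additive shares mod P."""
    s = [rng.randrange(P) for _ in range(N_PARTIES - 1)]
    return s + [(x - sum(s)) % P]

def reconstruct(sh):
    return sum(sh) % P

# preprocessing: a random triple (a, b, c) with c = a*b
a, b = rng.randrange(P), rng.randrange(P)
sa, sb, sc = share(a), share(b), share(a * b % P)

# online phase: multiply secret-shared x and y
x, y = 123456789, 987654321
sx, sy = share(x), share(y)

# parties open d = x - a and e = y - b (safe: a and b mask x and y)
d = reconstruct([(xi - ai) % P for xi, ai in zip(sx, sa)])
e = reconstruct([(yi - bi) % P for yi, bi in zip(sy, sb)])

# local share computation: z_i = c_i + d*b_i + e*a_i (+ d*e added once)
sz = [(ci + d * bi + e * ai) % P for ci, ai, bi in zip(sc, sa, sb)]
sz[0] = (sz[0] + d * e) % P

assert reconstruct(sz) == x * y % P
print("shared product verified")
```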
Rate-1 Statistical Non-Interactive Zero-Knowledge
We give the first construction of a rate-1 statistical non-interactive zero-knowledge argument of knowledge. For the $\mathsf{circuitSAT}$ language, our construction achieves a proof length of $|w| + |w|^\epsilon \cdot \mathsf{poly}(\lambda)$ where $w$ denotes the witness, $\lambda$ is the security parameter, $\epsilon$ is a small constant less than 1, and $\mathsf{poly}(\cdot)$ is a fixed polynomial that is independent of the instance or the witness size. The soundness of our construction follows from either the LWE assumption, or the $O(1)$-$\mathsf{LIN}$ assumption on prime-order groups with efficiently computable bilinear maps, or the sub-exponential DDH assumption. Previously, Gentry et al. (Journal of Cryptology, 2015) achieved NIZKs with statistical soundness and computational zero-knowledge with the aforementioned proof length by relying on the Learning with Errors (LWE) assumption.
OT-PCA: New Key-Recovery Plaintext-Checking Oracle Based Side-Channel Attacks on HQC with Offline Templates
In this paper, we introduce OT-PCA, a novel approach for conducting Plaintext-Checking (PC) oracle based side-channel attacks, specifically designed for Hamming Quasi-Cyclic (HQC). By calling the publicly accessible HQC decoder, we build offline templates that enable efficient extraction of soft information for hundreds of secret positions with just a single PC oracle call. Our method addresses critical challenges in optimizing key-related information extraction, including maximizing decryption output entropy and ensuring error pattern independence, through the use of genetic-style algorithms.
Extensive simulations demonstrate that our new attack method significantly reduces the required number of oracle calls, achieving a 2.4-fold decrease for hqc-128 and even greater reductions for hqc-192 and hqc-256 compared to current state-of-the-art methods. Notably, the attack shows strong resilience against inaccuracy in the PC oracle: when the oracle accuracy decreases to 95%, the reduction factor in oracle call requirements increases to 7.6 for hqc-128.
Lastly, a real-world evaluation conducted using power analysis on a platform with an ARM Cortex-M4 microcontroller validates the practical applicability and effectiveness of our approach.
Theoretical Approaches to Solving the Shortest Vector Problem in NP-Hard Lattice-Based Cryptography with Post-SUSY Theories of Quantum Gravity in Polynomial Time by Orch-Or
The Shortest Vector Problem (SVP) is a cornerstone of lattice-based cryptography, underpinning the security of numerous cryptographic schemes like NTRU. Given its NP-hardness, efficient solutions to SVP have profound implications for both cryptography and computational complexity theory. This paper presents an innovative framework that integrates concepts from quantum gravity, noncommutative geometry, spectral theory, and post-supersymmetry (post-SUSY) particle physics to address SVP. By mapping high-dimensional lattice points to spinfoam networks and by means of Hamiltonian engineering, it is theoretically possible to devise new algorithms that leverage the interactions of topologically protected Majorana fermions with the gravitational field, via the spectral action principle, to traverse these spinfoam networks, with SVP vectors encoded onto the spectrum of the corresponding Dirac-like dilation operators within the system. We establish a novel approach that leverages post-SUSY physics and theories of quantum gravity to achieve algorithmic speedups beyond those expected of conventional quantum computers. This interdisciplinary methodology not only proposes potential polynomial-time algorithms for SVP but also bridges gaps between theoretical physics and cryptographic applications, providing further insights into the Riemann Hypothesis (RH) and the Hilbert-Pólya Conjecture. Possible directions for experimental realization through biologically inspired hardware or biological tissues, via orchestrated objective reduction (Orch-Or) theory, are discussed.
Universally Composable Non-Interactive Zero-Knowledge from Sigma Protocols via a New Straight-line Compiler
Non-interactive zero-knowledge proofs (NIZK) are essential building blocks in threshold cryptosystems like multiparty signatures, distributed key generation, and verifiable secret sharing, allowing parties to prove correct behavior without revealing secrets. Furthermore, universally composable (UC) NIZKs enable seamless composition within larger cryptosystems. A popular way to construct NIZKs is to compile interactive protocols using the Fiat-Shamir transform. Unfortunately, a Fiat-Shamir-transformed NIZK requires rewinding the adversary and is not $\textit{straight-line extractable}$, making it at odds with UC security. Using Fischlin's transform gives straight-line extractability, but at the expense of many repetitions of the underlying protocol, leading to poor concrete efficiency and difficulty in setting parameters.
In this work, we propose a simple new transform that compiles a Sigma protocol for an algebraic relation into a UC-NIZK protocol $\textit{without any overheads of repetition}$.
- Given a Sigma protocol for proving $m$ algebraic statements over $n$ witnesses, we construct a compiler that transforms it into a $\textit{straight-line extractable}$ protocol using an additively homomorphic encryption scheme AHE. Our prover executes the Sigma protocol's prover once and computes $2n$ encryptions. The verification process involves running the Sigma protocol verifier once and then computing $n$ encryptions, which are homomorphically verified against the prover-generated encryptions.
- We apply the Fiat-Shamir transform to the above straight-line extractable Sigma protocol to obtain a UC-NIZK. We instantiate AHE using class group-based encryption where the public key of the encryption scheme is obliviously sampled using a suitable hash function. This yields a UC-NIZK protocol in the random oracle model.
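The toy Python sketch below illustrates the compiler's core mechanic for a single Schnorr statement ($n = 1$, hence two ciphertexts), with Paillier standing in for the class-group AHE: the prover encrypts its randomness and witness, the additive homomorphism turns $\mathrm{ct}_r + c \cdot \mathrm{ct}_w$ into an encryption of the response $z = r + c \cdot w$, and a straight-line extractor holding the AHE trapdoor recovers the witness without rewinding. The decryption checks play the role of the extractor in the security proof; the consistency verification the real verifier performs and the oblivious sampling of the public key are elided, and all parameters are toy-sized.

```python
import random
from math import gcd

# ---- toy Paillier: additively homomorphic encryption (toy primes, NOT
# secure; stands in here for the class-group AHE) ----
P_, Q_ = 104729, 1299709                 # two well-known small primes
N = P_ * Q_
N2 = N * N
LAM = (P_ - 1) * (Q_ - 1) // gcd(P_ - 1, Q_ - 1)   # lcm(p-1, q-1)
MU = pow(LAM, -1, N)                     # since L(g^LAM) = LAM for g = N+1

def enc(m: int) -> int:
    r = random.randrange(1, N)
    return pow(1 + N, m, N2) * pow(r, N, N2) % N2

def dec(c: int) -> int:
    return (pow(c, LAM, N2) - 1) // N * MU % N

# ---- toy Schnorr over a small prime-order group ----
Pg, Qg, G = 2039, 1019, 4                # G generates the order-1019 subgroup

random.seed(1)
w = random.randrange(1, Qg)              # witness
Y = pow(G, w, Pg)                        # statement: Y = G^w

# prover: one Sigma-protocol execution plus 2n = 2 encryptions
r = random.randrange(Qg)
A = pow(G, r, Pg)
ct_w, ct_r = enc(w), enc(r)
c = random.randrange(Qg)                 # challenge (Fiat-Shamir in the NIZK)
z = r + c * w                            # over the integers: matches AHE addition

# verifier: the usual Schnorr check ...
assert pow(G, z, Pg) == A * pow(Y, c, Pg) % Pg
# ... and the homomorphic combination ct_r + c*ct_w, an encryption of z
ct_z = ct_r * pow(ct_w, c, N2) % N2

# straight-line extractor (holds the AHE trapdoor in the security proof):
assert dec(ct_z) == z                    # combination consistent with z
assert dec(ct_w) == w                    # witness recovered, no rewinding
print("extracted witness:", dec(ct_w))
```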
Efficient Updatable PSI from Asymmetric PSI and PSU
Private Set Intersection (PSI) allows two mutually untrusted parties to compute the intersection of their private sets without revealing additional information. In general, PSI operates in a static setting, where the computation is performed only once on the input sets of both parties. Badrinarayanan et al. initiated the study of Updatable PSI (UPSI), which extends this capability to dynamically updating sets, enabling both parties to securely compute the intersection as their sets are modified while incurring significantly less overhead than re-executing a conventional PSI. However, existing UPSI protocols either do not support arbitrary deletion of elements or incur high computational and communication overhead. This work combines asymmetric PSI with Private Set Union (PSU) to present a novel UPSI protocol, which supports arbitrary additions and deletions of elements, offering a flexible approach to update sets. Furthermore, our protocol enjoys efficient performance compared to previous work. Specifically, we implement our protocol and compare it against state-of-the-art conventional PSI and UPSI protocols. Experimental results demonstrate that our UPSI protocol achieves up to three orders of magnitude reduction in computational overhead and incurs $190 \sim 707 \times$ less communication overhead than the state-of-the-art UPSI protocol that supports arbitrary additions and deletions. Our implementation is available at: \url{https://github.com/ShallMate/upsi}.
Good Things Come to Those Who Wait: Dishonest-Majority Coin-Flipping Requires Delay Functions
We reconsider Cleve's famous 1986 impossibility result on coin-flipping without an honest majority. Recently proposed constructions have circumvented this limit by using cryptographic delay functions. We show that this is necessary: a (weak) notion of delay functions is in fact implied by the existence of a protocol circumventing Cleve's impossibility. However, such delay functions are weaker than those used in existing constructions. We complete our result by showing an equivalence: these weaker delay functions are also sufficient to construct not just fair dishonest-majority coin-flipping protocols, but also the stronger notion of a distributed randomness beacon. We also show that this is possible in a weaker communication model than previously considered, without assuming reliable broadcast or a public bulletin board.