Papers updated in the last 365 days (Page 28 of 3021 results)
Key lifting: Multi-key Fully Homomorphic Encryption in plain model without noise flooding
Multi-key Fully Homomorphic Encryption (\MK), based on the Learning With Errors (\LWE) assumption, usually lifts ciphertexts of different users to new ciphertexts under a common public key to enable homomorphic evaluation. The efficiency of current \MK schemes is mainly restricted by two aspects:
Expensive ciphertext expansion operation: In a boolean circuit with input length $N$, multiplication depth $L$, and security parameter $\lambda$, the number of additional encryptions introduced to achieve ciphertext expansion is $O(N\lambda^6L^4)$.
Noise flooding technology resulting in a large modulus $q$: To prove the security of the scheme, the noise flooding technology introduced in the encryption and distributed decryption stages leads to a huge modulus $q = 2^{O(\lambda L)}B_\chi$, which burdens the whole scheme and leads to sub-exponential approximation factors $\gamma = \tilde{O}(n\cdot 2^{\sqrt{nL}})$.
This paper solves the first problem by presenting a framework called Key-Lifting Multi-key Fully Homomorphic Encryption (\KL). With this \emph{key lifting} procedure, the number of encryptions for a local user is reduced to $O(N)$, similar to single-key fully homomorphic encryption (\FHE). For the second problem, we first prove a discrete Gaussian version of the smudging lemma and, combining it with the anti-leakage properties of the encryption, remove the noise flooding technique from the distributed decryption. Second, we propose an analysis method based on R\'{e}nyi divergence, which removes the noise flooding technology from the encryption stage. These approaches significantly reduce the size of the modulus $q$ (with $\log q = O(L)$) and the computational overhead of the entire scheme.
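For intuition, here is a toy numeric check (ours, not from the paper) of why classical noise flooding forces a super-polynomial modulus. The standard uniform smudging lemma gives statistical distance $|e|/(2B+1)$ between $U[-B,B]$ and its shift by $e$, so hiding $e$ up to $2^{-\lambda}$ needs $B \approx 2^{\lambda}|e|$; the discrete Gaussian variant proved in the paper plays the same role with milder parameters:

from fractions import Fraction

def smudging_distance(B, e):
    # statistical distance between U[-B, B] and its shift by e is exactly
    # |e| / (2B + 1), so hiding e up to 2^-lambda forces B ~ 2^lambda * |e|
    w = Fraction(1, 2 * B + 1)
    p = {x: w for x in range(-B, B + 1)}
    q = {x + e: w for x in range(-B, B + 1)}
    return sum(abs(p.get(k, 0) - q.get(k, 0)) for k in set(p) | set(q)) / 2

for B in (2**8, 2**12, 2**16):
    print(B, float(smudging_distance(B, 3)), 3 / (2 * B + 1))  # the columns agree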
Scalable Private Signaling
Private messaging systems that use a bulletin board, like privacy-preserving blockchains, have been a popular topic during the last couple of years. In these systems, typically a private message is posted on the board for a recipient, and the privacy requirement is that no one can determine the sender and the recipient of the message. Until recently, the efficiency of message retrieval for recipients was not considered, and each recipient had to perform a naive scan of the board to retrieve their messages.
More recently, works like Fuzzy Message Detection (FMD), Private Signaling (PS), and Oblivious Message Retrieval (OMR) have studied the problem of protecting recipient privacy by outsourcing the message retrieval process to an untrusted server. However, FMD only provides limited privacy guarantees, and PS and OMR greatly lack scalability.
In this work, we present a new construction for private signaling which is both asymptotically superior and concretely orders of magnitude faster than all prior works while providing full privacy. Our construction makes use of a trusted execution environment (TEE) and an oblivious RAM (ORAM) to reduce the computational complexity of the server. We also improve the privacy guarantees by keeping the recipient hidden even during the retrieval of signals from the server. Our proof-of-concept open-source implementation shows that for a server serving a million recipients and ten million messages, it only takes $< 60$ milliseconds to process a sent message, and $< 6$ seconds to process a retrieval (of 100 signals) request from a recipient.
Data Independent Order Policy Enforcement: Limitations and Solutions
Order manipulation attacks such as frontrunning and sandwiching have become an increasing concern in blockchain applications such as DeFi. To protect from such attacks, several recent works have designed order policy enforcement (OPE) protocols to order transactions fairly in a data-independent fashion. However, while the manipulation attacks are motivated by monetary profits, the defenses assume honesty among a sufficiently large set of participants. In existing protocols, if all participants are rational, they may be incentivized to collude and circumvent the order policy without incurring any penalty.
This work makes two key contributions. First, we explore whether the need for the honesty assumption is fundamental. Indeed, we show that it is impossible to design OPE protocols under some requirements when all parties are rational. Second, we explore the tradeoffs needed to circumvent the impossibility result. In the process, we propose a novel concept of rationally binding transactions that allows us to construct AnimaguSwap, the first content-oblivious Automated Market Makers (AMM) interface that is secure under rationality. A key design in AnimaguSwap is that user orders may transform to a different direction---like the fictional creatures Animagi in Harry Potter---in order to achieve the desired game theoretic properties. We report on a prototype implementation of AnimaguSwap and performance evaluation results demonstrating its practicality.
Leveled Homomorphic Encryption Schemes with Hensel Codes
We propose the use of Hensel codes (a mathematical tool lifted from the theory of $p$-adic numbers) as an alternative way to construct homomorphic encryption (HE) schemes that rely on the hardness of some instance of the approximate common divisor (AGCD) problem. We provide a self-contained introduction to Hensel codes which covers all the properties of interest for this work. Two constructions are presented: a private-key leveled HE scheme and a public-key leveled HE scheme. The public-key scheme is obtained via minor modifications to the private-key scheme in which we explore asymmetric properties of Hensel codes. The efficiency and security (under an AGCD variant) of the public-key scheme are discussed in detail. Our constructions take messages from large specialized subsets of the rational numbers that admit fractional numerical inputs and associated computations for virtually any real-world application. Further, our results can be seen as a natural unification of error-free computation (computation free of rounding errors over rational numbers) and homomorphic encryption. Experimental results indicate the scheme is practical for a large variety of applications.
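To make the arithmetic concrete, here is a small Python sketch (ours, not the paper's construction) of the classical Hensel encoding of rationals and the standard extended-Euclidean decoding; it shows how addition and multiplication of encoded rationals happen error-free modulo $p^r$:

from math import isqrt

def hensel_encode(num, den, p, r):
    # H_{p^r}(num/den) = num * den^{-1} mod p^r (requires gcd(den, p) = 1)
    m = p ** r
    return num * pow(den, -1, m) % m

def hensel_decode(h, p, r):
    # standard rational reconstruction via the extended Euclidean algorithm:
    # recovers the unique num/den with |num|, den <= sqrt(m/2), num = h*den mod m
    m = p ** r
    bound = isqrt(m // 2)
    r0, r1, t0, t1 = m, h % m, 0, 1
    while r1 > bound:
        quo = r0 // r1
        r0, r1, t0, t1 = r1, r0 - quo * r1, t1, t0 - quo * t1
    return (r1, t1) if t1 > 0 else (-r1, -t1)

p, r = 257, 4
a, b = hensel_encode(1, 3, p, r), hensel_encode(1, 6, p, r)
print(hensel_decode((a + b) % p**r, p, r))  # (1, 2) since 1/3 + 1/6 = 1/2
print(hensel_decode((a * b) % p**r, p, r))  # (1, 18) since 1/3 * 1/6 = 1/18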
Are Your Keys Protected? Time will Tell
Side channel attacks, and in particular timing attacks, are a fundamental obstacle to obtaining secure implementations of algorithms and cryptographic protocols, and have been widely researched for decades.
While definitions for the security of cryptographic systems have been well established for decades, none of these accepted definitions takes into account the running-time information leaked by executing the system.
In this work, we lay the foundations for new cryptographic definitions for cryptographic systems that take into account information about their leaked running time, focusing mainly on keyed functions such as signature and encryption schemes. Specifically,
(1) We define several cryptographic properties to express the claim that the timing information does not help an adversary to extract sensitive information, e.g. the key or the queries made. We highlight the definition of key-obliviousness, which means that an adversary cannot tell whether it received the timing of the queries with the actual key or the timing of the same queries with a random key.
(2) We present a construction of key-oblivious pseudorandom permutations on a small or medium-sized domain. This construction is not ``fixed-time,'' and at the same time is secure against any number of queries even in case the adversary knows the running time exactly. Our construction, which we call Janus Sometimes Recurse, is a variant of the ``Sometimes Recurse'' shuffle by Morris and Rogaway.
(3) We suggest a new security notion for keyed functions, called noticeable security, and prove that cryptographic schemes that have noticeable security remain secure even when the exact timings are leaked, provided the implementation is key-oblivious. We show that our notion applies to cryptographic signatures, private key encryption and PRPs.
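As a toy rendering of the key-obliviousness experiment (our illustration, not the paper's construction), take the "timing" of a comparison to be its loop-iteration count: a naive early-exit comparison lets an adversary distinguish the real key from a random one from timings alone, while a constant-time scan does not:

import secrets

def naive_compare(key, guess):
    # early-exit comparison: the iteration count t leaks the matching prefix length
    t = 0
    for a, b in zip(key, guess):
        t += 1
        if a != b:
            return False, t
    return True, t

def ct_compare(key, guess):
    # constant-time comparison: t depends only on the public input length
    diff, t = 0, 0
    for a, b in zip(key, guess):
        t += 1
        diff |= a ^ b
    return diff == 0, t

real, rand = secrets.token_bytes(16), secrets.token_bytes(16)
guess = real[:1] + bytes(15)  # adversarial query sharing the first key byte
# naive timings likely differ between the real and a random key; constant-time never does
print(naive_compare(real, guess)[1], naive_compare(rand, guess)[1])  # e.g. 2 vs 1
print(ct_compare(real, guess)[1], ct_compare(rand, guess)[1])        # 16 vs 16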
On round elimination for special-sound multi-round identification and the generality of the hypercube for MPCitH
A popular way to build post-quantum signature schemes is by first constructing an identification scheme (IDS) and applying the Fiat-Shamir transform to it. In this work we tackle two open questions related to the general applicability of techniques around this approach that together allow for efficient post-quantum signatures with optimal security bounds in the QROM.
First we consider a recent work by Aguilar-Melchor, Hülsing, Joseph, Majenz, Ronen, and Yue (Asiacrypt'23) that showed that an optimal bound for three-round commit & open IDS by Don, Fehr, Majenz, and Schaffner (Crypto'22) can be applied to the five-round Syndrome-Decoding in the Head (SDitH) IDS. For this, they first applied a transform that replaced the first three rounds by one. They left it as an open problem if the same approach applies to other schemes beyond SDitH. We answer this question in the affirmative, generalizing their round-elimination technique and giving a generic security proof for it. Our result applies to any IDS with $2n+1$ rounds for $n>1$. However, a scheme has to be suitable for the resulting bound to not be trivial. We find that IDS are suitable when they have a certain form of special-soundness which many commit & open IDS have.
Second, we consider the hypercube technique by Aguilar-Melchor, Gama, Howe, Hülsing, Joseph, and Yue (Eurocrypt'23), an optimization that was proposed in the context of SDitH and is now used by several contenders in the NIST signature on-ramp. It was conjectured, but never proven, that the technique applies generically to the MPC-in-the-Head (MPCitH) paradigm used in the design of many post-quantum IDS whenever they use an additive secret sharing scheme. In this work we show that the technique generalizes to MPCitH IDS that use an additively homomorphic MPC protocol, and we prove that security is preserved.
We demonstrate the application of our results to the identification scheme of RYDE, a contender in the recent NIST signature on-ramp. While RYDE was already specified with the hypercube technique applied, this gives the first QROM proof for RYDE with an optimally tight bound.
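To illustrate the hypercube idea in the additive-sharing setting this result targets (our toy sketch, not any scheme's actual MPC): arrange $N = 2^n$ additive shares on an $n$-dimensional hypercube; summing along each dimension yields $n$ independent additive 2-sharings of the same secret, so the prover emulates $2n$ "main parties" instead of $2^n$:

import numpy as np

q, n = 2**16, 4          # modulus and hypercube dimension; N = 2^n leaf shares
N = 2**n
rng = np.random.default_rng(0)
secret = 12345
shares = rng.integers(0, q, size=N)
shares[-1] = (secret - int(shares[:-1].sum())) % q  # additive sharing of the secret
cube = shares.reshape((2,) * n)                     # place the N shares on a hypercube

# summing the two opposite slices of each dimension gives n independent additive
# 2-sharings of the same secret: 2n "main parties" stand in for 2^n leaf parties
for d in range(n):
    other_axes = tuple(i for i in range(n) if i != d)
    main_shares = cube.sum(axis=other_axes) % q     # the two main shares for dimension d
    assert int(main_shares.sum()) % q == secret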
Access Structure Hiding Verifiable Tensor Designs
The field of verifiable secret sharing schemes was introduced by Verheul et al. and has evolved over time, including well-known examples by Feldman and Pedersen. Stinson made advancements in combinatorial design-based secret sharing schemes in 2004. Desmedt et al. introduced the concept of frameproofness in 2021, while recent research by Sehrawat et al. in 2021 focuses on LWE-based access structure hiding verifiable secret sharing with malicious-majority settings. Furthermore, Roy et al. combined the concepts of reparable threshold schemes by Stinson et al. and frameproofness by Desmedt et al. in 2023, to develop extendable tensor designs built from balanced incomplete block designs, and also presented a frameproof version of their design.
This paper explores ramp-type verifiable secret sharing schemes and the application of hidden access structures in such cryptographic protocols. Inspired by Sehrawat et al.'s access structure hiding scheme, we develop an $\epsilon$-almost access structure hiding scheme, which is verifiable as well as frameproof. We detail how the concept of $\epsilon$-almost hiding is important for incorporating ramp schemes, thus providing a fundamental generalisation of this concept.
FRIDA: Data Availability Sampling from FRI
As blockchains like Ethereum continue to grow, clients with limited resources can no longer store the entire chain.
Light nodes that want to use the blockchain, without verifying that it is in a good state overall, can just download the block headers without the corresponding block contents.
As those light nodes may eventually need some of the block contents, they would like to ensure that the contents are, in principle, available.
Data availability sampling, introduced by Al-Bassam et al., is a process that allows light nodes to check the availability of data without downloading it.
In a recent effort, Hall-Andersen, Simkin, and Wagner have introduced formal definitions and analyzed several constructions.
While their work thoroughly lays the formal foundations for data availability sampling, the constructions are either prohibitively expensive, use a trusted setup, or have a download complexity for light clients that scales with the square root of the data size.
In this work, we make a significant step forward by proposing an efficient data availability sampling scheme without a trusted setup and with only polylogarithmic overhead.
To this end, we find a novel connection with interactive oracle proofs of proximity (IOPPs).
Specifically, we prove that any IOPP meeting an additional consistency criterion can be turned into an erasure code commitment, and then, leveraging a compiler due to Hall-Andersen, Simkin, and Wagner, into a data availability sampling scheme.
This new connection enables data availability to benefit from future results on IOPPs.
We then show that the widely used FRI IOPP satisfies our consistency criterion and demonstrate that the resulting data availability sampling scheme outperforms the state-of-the-art asymptotically and concretely in multiple parameters.
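For intuition, a minimal sketch of the sampling mechanics (ours; FRIDA's actual commitment comes from an IOPP-based erasure code commitment, not a bare Merkle tree): the server commits to encoded chunks, and a light node gains confidence in availability by spot-checking random positions with authentication paths:

import hashlib, random

def H(*parts):
    return hashlib.sha256(b"".join(parts)).digest()

def merkle_layers(leaves):
    layers = [[H(x) for x in leaves]]
    while len(layers[-1]) > 1:
        prev = layers[-1]
        layers.append([H(prev[i], prev[i + 1]) for i in range(0, len(prev), 2)])
    return layers

def open_at(layers, i):
    path = []
    for layer in layers[:-1]:
        path.append(layer[i ^ 1])  # sibling on each level
        i //= 2
    return path

def verify(root, leaf, i, path):
    node = H(leaf)
    for sib in path:
        node = H(node, sib) if i % 2 == 0 else H(sib, node)
        i //= 2
    return node == root

chunks = [bytes([b]) * 32 for b in range(64)]  # stands in for erasure-coded data
layers = merkle_layers(chunks)
root = layers[-1][0]
for _ in range(20):                            # light node spot-checks random positions
    i = random.randrange(len(chunks))
    assert verify(root, chunks[i], i, open_at(layers, i))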
Practical Committing Attacks against Rocca-S
This note shows practical committing attacks against Rocca-S, an authenticated encryption with associated data scheme designed for 6G applications. Previously, the best complexity of the attack was $2^{64}$ by Derbez et al. in ToSC 2024(1)/FSE 2024. We show that the committing attack against Rocca by Takeuchi et al. in ToSC 2024(2)/FSE 2025 can be applied to Rocca-S, where Rocca is an earlier version of Rocca-S. We show a concrete test vector of our attack. We also point out a committing attack that exploits equivalent keys.
Fully Malicious Authenticated PIR
Authenticated PIR enables a server to initially commit to a database of $N$ items, for which a client can later privately obtain individual items with complexity sublinear in $N$, with the added guarantee that the retrieved item is consistent with the committed database. A crucial requirement is privacy with abort, i.e., the server should not learn anything about a query even if it learns whether the client aborts.
This problem was recently considered by Colombo et al. (USENIX '23), who proposed solutions secure under the assumption that the database is committed to honestly. Here, we close this gap for their DDH-based scheme, and present a solution that tolerates fully malicious servers that provide potentially malformed commitments. Our scheme has communication and client computational complexity $\mathcal{O}_{\lambda}(\sqrt{N})$, does not require any additional assumptions, and does not introduce heavy machinery (e.g., generic succinct proofs). We do so by introducing validation queries, which, from the server's perspective, are computationally indistinguishable from regular PIR queries. Provided that the server succeeds in correctly answering $\kappa$ such validation queries, the client is convinced with probability $1-\frac{1}{2^\kappa}$ that the server is unable to break privacy with abort.
Monotone-Policy Aggregate Signatures
The notion of aggregate signatures allows for combining signatures from different parties into a short certificate that attests that *all* parties signed a message. In this work, we lift this notion to capture different, more expressive signing policies. For example, we can certify that a message was signed by a (weighted) threshold of signers.
We present the first constructions of aggregate signatures for monotone policies based on standard polynomial-time cryptographic assumptions. The aggregate signatures in our schemes are succinct, i.e., their size is *independent* of the number of signers. Moreover, verification is also succinct if all parties sign the same message (or if the messages have a succinct representation). All prior work requires either interaction between the parties or non-standard assumptions (that imply SNARKs for NP).
Our signature schemes are based on non-interactive batch arguments (BARGs) for monotone policies [Brakerski-Brodsky-Kalai-Lombardi-Paneth, Crypto'23]. In contrast to previous constructions, our BARGs satisfy a new notion of *adaptive* security which is instrumental to our application. Our new BARGs for monotone policies can be constructed from standard BARGs and other standard assumptions.
Edit Distance Robust Watermarks for Language Models
Motivated by the problem of detecting AI-generated text, we consider the problem of watermarking the output of language models with provable guarantees. We aim for watermarks which satisfy: (a) undetectability, a cryptographic notion introduced by Christ, Gunn & Zamir (2024) which stipulates that it is computationally hard to distinguish watermarked language model outputs from the model's actual output distribution; and (b) robustness to channels which introduce a constant fraction of adversarial insertions, substitutions, and deletions to the watermarked text. Earlier schemes could only handle stochastic substitutions and deletions, and thus we are aiming for a more natural and appealing robustness guarantee that holds with respect to edit distance.
Our main result is a watermarking scheme which achieves both undetectability and robustness to edits when the alphabet size for the language model is allowed to grow as a polynomial in the security parameter. To derive such a scheme, we follow an approach introduced by Christ & Gunn (2024), which proceeds via first constructing pseudorandom codes satisfying undetectability and robustness properties analogous to those above; our key idea is to handle adversarial insertions and deletions by interpreting the symbols as indices into the codeword, which we call indexing pseudorandom codes. Additionally, our codes rely on weaker computational assumptions than used in previous work. Then we show that there is a generic transformation from such codes over large alphabets to watermarking schemes for arbitrary language models.
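As a toy illustration of the indexing idea only (our sketch; the paper's codes are pseudorandom and handle residual noise with further machinery), tagging each codeword bit with its index makes insertions and deletions harmless to synchronization: the decoder buckets received symbols by claimed index and leaves leftover errors and erasures to an outer code:

import random
from collections import Counter

random.seed(0)
n = 64
codeword = [random.getrandbits(1) for _ in range(n)]   # stands in for a pseudorandom codeword
sent = [(i, codeword[i]) for i in range(n)]            # symbol = (index, bit), alphabet size 2n

received = [s for s in sent if random.random() > 0.2]  # adversarial deletions
received += [(random.randrange(n), random.getrandbits(1)) for _ in range(6)]  # insertions
random.shuffle(received)

buckets = {}
for i, bit in received:                                # bucket by claimed index
    buckets.setdefault(i, []).append(bit)
recovered = [Counter(buckets[i]).most_common(1)[0][0] if i in buckets else None
             for i in range(n)]
errors = sum(1 for i in range(n) if recovered[i] not in (None, codeword[i]))
print(f"{errors} errors and {recovered.count(None)} erasures out of {n}")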
Laconic Function Evaluation and ABE for RAMs from (Ring-)LWE
Laconic function evaluation (LFE) allows us to compress a circuit $f$ into a short digest. Anybody can use this digest as a public-key to efficiently encrypt some input $x$. Decrypting the resulting ciphertext reveals the output $f(x)$, while hiding everything else about $x$. In this work we consider LFE for Random-Access Machines (RAM-LFE) where, instead of a circuit $f$, we have a RAM program $f_{\mathsf{DB}}$ that potentially contains some large hard-coded data $\mathsf{DB}$. The decryption run-time to recover $f_{\mathsf{DB}}(x)$ from the ciphertext should be roughly the same as a plain evaluation of $f_{\mathsf{DB}}(x)$ in the RAM model, which can be sublinear in the size of $\mathsf{DB}$. Prior works constructed LFE for circuits under LWE, and RAM-LFE under indistinguishability obfuscation (iO) and Ring-LWE. In this work, we construct RAM-LFE with essentially optimal encryption and decryption run-times from just Ring-LWE and a standard circular security assumption, without iO.
RAM-LFE directly yields 1-key succinct functional encryption and reusable garbling for RAMs with similar parameters.
If we only want an attribute-based LFE for RAMs (RAM-AB-LFE), then we can replace Ring-LWE with plain LWE in the above. Orthogonally, if we only want leveled schemes, where the encryption/decryption efficiency can scale with the depth of the RAM computation, then we can remove the need for a circular-security assumption. Lastly, we also get a leveled many-key attribute-based encryption for RAMs (RAM-ABE) from LWE.
Laconic Function Evaluation, Functional Encryption and Obfuscation for RAMs with Sublinear Computation
Laconic function evaluation (LFE) is a "flipped" version of fully homomorphic encryption, where the server performing the computation gets the output. The server commits itself to a function $f$ by outputting a small digest. Clients can later efficiently encrypt inputs $x$ with respect to the digest in much less time than computing $f$, and ensure that the server only decrypts $f(x)$, but does not learn anything else about $x$. Prior works constructed LFE for circuits under LWE, and for Turing Machines (TMs) from indistinguishability obfuscation (iO). In this work we introduce LFE for Random-Access Machines (RAM-LFE). The server commits itself to a potentially huge database $y$ via a short digest. Clients can later efficiently encrypt inputs $x$ with respect to the digest and the server decrypts $f(x,y)$ for some specified RAM program $f$ (e.g., a universal RAM), without learning anything else about $x$. The main advantage of RAM-LFE is that the server's decryption run-time only scales with the RAM run-time $T$ of the computation $f(x,y)$, which can be sublinear in both $|x|$ and $|y|$. We consider a weakly efficient variant, where the client's run-time is also allowed to scale linearly with $T$, but not $|y|$, and a fully efficient variant, where the client's run-time must be sublinear in both $T$ and $|y|$. We construct the former from doubly efficient private information retrieval (DEPIR) and laconic OT (LOT), both of which are known from Ring-LWE, and the latter from an additional use of iO. We then show how to leverage fully efficient RAM-LFE to also get (many-key) functional encryption for RAMs (RAM-FE), where secret keys are associated with big databases $y$ and the decryption time is sublinear in $|y|$, as well as iO for RAMs where the obfuscated program contains a big database $y$ and the evaluation time is sublinear in $|y|$.
Fast Evaluation of S-boxes with Garbled Circuits
Garbling schemes are vital primitives for privacy-preserving protocols and secure two-party computation. In projective garbling schemes, $n$ values are assigned to each wire in the circuit. Current state-of-the-art schemes project two values.
This paper presents a projective garbling scheme that assigns $2^n$ values to wires in a circuit comprising XOR and unary projection gates. A generalization of FreeXOR allows the XOR of wires with $2^n$ values to be very efficient. We then analyze the performance of our scheme by evaluating substitution-permutation ciphers. Using our proposal, we measure high-speed evaluation of the ciphers with a moderately increased cost in garbling and bandwidth. Theoretical analysis suggests that for evaluating the nine examined ciphers, one can expect a 4- to 70-fold improvement in evaluation performance with, at most, a 4-fold increase in garbling cost and, at most, an 8-fold increase in communication cost compared to state-of-the-art garbling schemes. In an offline/online setting, such as secure function evaluation as a service, the circuit garbling and communication to the evaluator can proceed before the input phase. Thus, our scheme offers a fast online phase. Furthermore, we present efficient Boolean circuits for the S-boxes of TWINE and Midori64 ciphers. To our knowledge, our formulas give the smallest number of AND gates for the S-boxes of these two ciphers.
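For background on the base case being generalized, here is a sketch of classic two-label FreeXOR (ours; the paper extends the idea so wires carrying $2^n$ values still XOR for free): all wire labels share a secret global offset $\Delta$, so an XOR gate needs no garbled table at all:

import secrets

K = 16  # label length in bytes

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

delta = secrets.token_bytes(K)      # global secret offset shared by all wires

def new_wire():
    l0 = secrets.token_bytes(K)     # label encoding 0
    return l0, xor(l0, delta)       # label encoding 1 is l0 xor delta

a0, a1 = new_wire()
b0, b1 = new_wire()
c0, c1 = xor(a0, b0), xor(xor(a0, b0), delta)  # XOR gate: no garbled table needed

# the evaluator simply XORs whatever labels it holds:
assert xor(a1, b1) == c0  # 1 xor 1 = 0
assert xor(a1, b0) == c1  # 1 xor 0 = 1
assert xor(a0, b1) == c1  # 0 xor 1 = 1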
Universal Composable Password Authenticated Key Exchange for the Post-Quantum World
In this paper, we construct the first password authenticated key exchange (PAKE) scheme from isogenies with universally composable (UC) security in the random oracle model (ROM). We also construct the first two PAKE schemes with UC security in the quantum random oracle model (QROM): one is based on the learning with errors (LWE) assumption, and the other is based on the group-action decisional Diffie-Hellman (GA-DDH) assumption in the isogeny setting.
To obtain our UC-secure PAKE scheme in ROM, we propose a generic construction of PAKE from basic lossy public key encryption (LPKE) and CCA-secure PKE. We also introduce a new variant of LPKE, named extractable LPKE (eLPKE). By replacing the basic LPKE with eLPKE, our generic construction of PAKE achieves UC security in QROM. The LPKE and eLPKE have instantiations not only from LWE but also from GA-DDH, which admit four specific PAKE schemes with UC security in ROM or QROM, based on LWE or GA-DDH.
That’s not my Signature! Fail-Stop Signatures for a Post-Quantum World
The Snowden revelations kick-started a community-wide effort to develop cryptographic tools against mass surveillance.
In this work, we propose to add another primitive to that toolbox: Fail-Stop Signatures (FSS) [EC'89].
FSS are digital signatures enhanced with a forgery-detection mechanism that can protect a PPT signer from more powerful attackers.
Despite the fascinating concept, research in this area stalled after the '90s. However, the ongoing transition to post-quantum cryptography, with its hiccups due to the novelty of underlying assumptions, has become the perfect use case for FSS.
This paper aims to reboot research on FSS with practical use in mind: our framework for FSS includes ``fine-grained'' security definitions (that assume a powerful, but bounded adversary, e.g., one that can break $128$ bits of security, but not $256$ bits).
As an application, we show new FSS constructions for the post-quantum setting.
We show that FSS are equivalent to standard, provably secure digital signatures that do not require rewinding or programming random oracles, and that this implies lattice-based FSS.
Our main construction is an FSS version of SPHINCS, which required building FSS versions of all its building blocks: WOTS, XMSS, and FORS.
In the process, we identify and provide generic solutions for two fundamental issues arising when deriving a large number of private keys from a single seed, and when building FSS for Hash-and-Sign-based signatures.
A multistep strategy for polynomial system solving over finite fields and a new algebraic attack on the stream cipher Trivium
In this paper we introduce a multistep generalization of the guess-and-determine or hybrid strategy for solving a system of multivariate polynomial equations over a finite field. In particular, we propose performing the exhaustive evaluation of a subset of variables stepwise, that is, incrementing the size of the evaluated subset each time an evaluation leads to a polynomial system that may be infeasible to solve. The decision about which evaluation to extend is based on a preprocessing step consisting of computing an incomplete Gr\"{o}bner basis after the current evaluation, which possibly generates linear polynomials that are used to eliminate further variables. If the number of remaining variables in the system is deemed still too high, the evaluation is extended and the preprocessing is iterated. Otherwise, we solve the system by a complete Gr\"{o}bner basis computation.
Having in mind cryptanalytic applications, we present an implementation of this strategy in an algorithm called MultiSolve, which is designed for polynomial systems having at most one solution. We prove explicit formulas for its complexity which are based on probability distributions that can be easily estimated by performing the proposed preprocessing on a test set of evaluations for different subsets of variables. We prove that the optimal complexity of MultiSolve is achieved by using a full multistep strategy with a maximum number of steps, and that, in turn, the standard guess-and-determine strategy, which essentially consists of a single step, is the worst choice. Finally, we extensively study the behaviour of MultiSolve when performing an algebraic attack on the well-known stream cipher Trivium.
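As a toy rendering of one guess-and-determine step with Gröbner preprocessing (our illustration on a 4-variable GF(2) system with a unique solution; a full basis computed with sympy stands in for the paper's incomplete basis), note how a wrong guess collapses the basis while the right guess yields linear polynomials for elimination:

from sympy import symbols, groebner, Poly

x1, x2, x3, x4 = symbols('x1 x2 x3 x4')
F = [x1*x2 + x3 + x4, x1*x3 + x2 + 1, x2*x4 + x3 + x4,
     x1 + x2 + x3 + x4 + 1, x1 + x2*x3 + 1, x2*x3 + x2]   # unique solution (1, 0, 1, 1)
field = [v**2 + v for v in (x2, x3, x4)]                   # force GF(2)-valued solutions

for guess in (0, 1):  # step 1: exhaustive evaluation of x1
    Fg = [p.subs(x1, guess) for p in F] + field
    G = groebner(Fg, x2, x3, x4, modulus=2, order='lex')
    linear = [g for g in G.exprs if Poly(g, x2, x3, x4).total_degree() == 1]
    print(f"x1 = {guess}: basis {list(G.exprs)}; linear polynomials {linear}")
# x1 = 0 collapses the basis to [1] (wrong guess); x1 = 1 yields only linear
# polynomials, which eliminate the remaining variables without further guessing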
Concurrently Secure Blind Schnorr Signatures
Many applications of blind signatures, e.g. in blockchains, require compatibility of the resulting signatures with the existing system. This makes blind issuing of Schnorr signatures (now being standardized and supported by major cryptocurrencies) desirable. Concurrent security of the signing protocol is required to thwart denial-of-service attacks.
We present a concurrently secure blind-signing protocol for Schnorr signatures, using the standard primitives NIZK and PKE and assuming that Schnorr signatures themselves are unforgeable. Our protocol is the first to be compatible with standard Schnorr implementations over 256-bit elliptic curves. We cast our scheme as a generalization of blind and partially blind signatures: we introduce the notion of predicate blind signatures, in which the signer can define a predicate that the blindly signed message must satisfy.
We provide implementations and benchmarks for various choices of primitives and scenarios, such as blindly signing Bitcoin transactions only when they meet certain conditions specified by the signer.
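For background, here is our sketch of the classical blind-Schnorr issuing template over a toy Schnorr group (this is exactly the template whose concurrent security fails to ROS-style attacks, which the paper's NIZK/PKE-based protocol is designed to avoid; real deployments use 256-bit elliptic curves):

import hashlib, secrets

q = 509; p = 2*q + 1  # toy Schnorr group of prime order q
g = 4                 # a quadratic residue mod p, hence of order q

def H(*vals):
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

x = secrets.randbelow(q); P = pow(g, x, p)             # signer's key pair
r = secrets.randbelow(q); R = pow(g, r, p)             # signer commits
alpha, beta = secrets.randbelow(q), secrets.randbelow(q)
R_blind = R * pow(g, alpha, p) * pow(P, beta, p) % p   # user blinds the commitment
c_blind = H(R_blind, "message")
c = (c_blind + beta) % q                               # blinded challenge to the signer
s = (r + c * x) % q                                    # signer responds
s_final = (s + alpha) % q                              # user unblinds
assert pow(g, s_final, p) == R_blind * pow(P, c_blind, p) % p  # standard Schnorr verify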
LUNA: Quasi-Optimally Succinct Designated-Verifier Zero-Knowledge Arguments from Lattices
We introduce the first candidate Lattice-based designated verifier (DV) zero knowledge sUccinct Non-interactive Argument (ZK-SNARG) protocol, LUNA, with quasi-optimal proof length (quasi-linear in the security/privacy parameter). By simply relying on mildly stronger security assumptions, LUNA is also a candidate ZK-SNARK (i.e. argument of knowledge). LUNA achieves significant improvements in concrete proof sizes, reaching below 6 KB (compared to >32 KB in prior work) for 128-bit security/privacy level.
To achieve our quasi-optimal succinct LUNA, we give a new regularity result for `private' re-randomization of Module LWE (MLWE) samples using discrete Gaussian randomization vectors, also known as a lattice-based leftover hash lemma with leakage, which applies with a discrete Gaussian re-randomization parameter that is polynomial in the statistical privacy parameter (avoiding exponential smudging) and hides the coset of the re-randomization vector support set. Along the way, we derive bounds on the smoothing parameter of the intersection of short integer solution (SIS), gadget, and Gaussian perp module lattices over power-of-2 cyclotomic rings.
We then introduce a new candidate linear-only homomorphic encryption scheme called Module Half-GSW (HGSW), and apply our regularity theorem to provide smudging-free circuit-private homomorphic linear operations for Module HGSW. Our implementation and experimental performance evaluation show that, for typical instance sizes, Module HGSW provides favourable performance for ZK-SNARG applications involving lightweight verifiers. It enables significantly (around 5x) shorter proof lengths while speeding up CRS generation and encryption time by 4-16x and speeding up decryption time by 4.3x, while incurring just 1.2-2x time overhead in linear homomorphic proof generation operations, compared to a Regev encryption used in prior work in the ZK-SNARG context. We believe our techniques are of independent interest and will find application in other privacy-preserving applications of lattice-based cryptography.
How to Construct Quantum FHE, Generically
We construct a (compact) quantum fully homomorphic encryption (QFHE) scheme starting from any (compact) classical fully homomorphic encryption scheme with decryption in $\mathsf{NC}^{1}$, together with a dual-mode trapdoor function family. Compared to previous constructions (Mahadev, FOCS 2018; Brakerski, CRYPTO 2018) which made non-black-box use of similar underlying primitives, our construction provides a pathway to instantiations from different assumptions. Our construction uses the techniques of Dulek, Schaffner and Speelman (CRYPTO 2016) and shows how to make the client in their QFHE scheme classical using dual-mode trapdoor functions. As an additional contribution, we show a new instantiation of dual-mode trapdoor functions from group actions.
SyRA: Sybil-Resilient Anonymous Signatures with Applications to Decentralized Identity
We introduce a new cryptographic primitive, called Sybil-Resilient Anonymous (SyRA) signatures, which enable users to generate, on demand, unlinkable pseudonyms tied to any given context, and issue signatures on behalf of these pseudonyms. Concretely, given a personhood relation, an issuer (who may be a distributed entity) enables users to prove their personhood and extract an associated long-term key, which can then be used to issue signatures for any given context and message. Sybil-resilient anonymous signatures achieve two key security properties: 1) Sybil resilience, which ensures that every user is entitled to at most one pseudonym per context, and 2) anonymity, which requires that no information about the user is leaked through their various pseudonyms or the signatures they issue on their pseudonyms’ behalf.
We conceptualize SyRA signatures as an ideal functionality in the Universal Composability (UC) setting and realize the functionality via an efficient, pairing-based construction that utilizes two levels of verifiable random functions (VRFs), which may be of independent interest. One of the key features of this approach is the statelessness of the issuer: we achieve the core properties of Sybil resilience and anonymity without requiring the issuer to retain any information about past user interactions. SyRA signatures have various applications in multiparty systems, such as e-voting (e.g., for decentralized governance), privacy-preserving regulatory compliance (e.g., AML/CFT checks), and cryptocurrency airdrops, making them an attractive option for deployment in decentralized identity (DID) systems. Furthermore, we demonstrate the practicality of SyRA signatures for use in such systems by providing a performance evaluation of our construction.
Conan: Distributed Proofs of Compliance for Anonymous Data Collection
We consider how to design an anonymous data collection protocol that enforces compliance rules. Imagine that each client contributes multiple data items (e.g., votes, location crumbs, or secret shares of its input) to an anonymous network, which mixes all clients' data items so that the receiver cannot determine which data items belong to the same user. Now, each user must prove to an auditor that the set it contributed satisfies a compliance predicate, without identifying which items it contributed. For example, the auditor may want to ensure that no one voted for the same candidate twice, or that a user's location crumbs are not too far apart in a given time interval.
Our main contribution is a novel anonymous, compliant data collection protocol that realizes the above goal. In comparison with naive approaches such as generic multi-party computation or earlier constructions of collaborative zero-knowledge proofs, the most compelling advantage of our approach is that each client's communication and computation overhead do not grow with respect to the number of clients $n$. In this sense, we save a factor of at least $n$ over prior work, which allows our technique to scale to applications with a large number of clients, such as anonymous voting and privacy-preserving federated learning.
We first describe our protocol using generic cryptographic primitives that can be realized from standard assumptions. We then suggest a concrete instantiation called {\sc Conan} which we implement and evaluate. In this concrete instantiation, we are willing to employ SNARKs and the random oracle model for better practical efficiency. Notably, in this practical instantiation, each client's additional communication overhead (not counting the overhead of sending its data items over the anonymous network) is only $\widetilde{O}(1)$. We evaluated our technique in various application settings, including secure voting, and secure aggregation protocols for histogram, summation, and vector summation. Our evaluation results show that each client's additional communication overhead is only 2.2KB or 2.6KB, depending on which SNARK implementation we use. Further, each client's computation is only 0.2s - 0.5s for almost all cases, except for the vector summation application where the data items are high-dimensional and each client's computation is 8.5-10.6s.
Flock: A Framework for Deploying On-Demand Distributed Trust
Recent years have exhibited an increase in applications that distribute trust across $n$ servers to protect user data from a central point of attack. However, these deployments remain limited due to a core obstacle: establishing $n$ distinct trust domains. An application provider, a single trust domain, cannot directly deploy multiple trust domains. As a result, application providers forge business relationships to enlist third-parties as trust domains, which is a manual, lengthy, and expensive process, inaccessible to many application developers.
We introduce the on-demand distributed-trust architecture that enables an application provider to deploy distributed trust automatically and immediately without controlling the other trust domains. The insight lies in reversing the deployment method such that each user's client drives deployment instead of the application provider. While at first glance this approach appears infeasible due to cost, performance, and resource abuse concerns, our system Flock resolves these challenges. We implement and evaluate Flock on 3 major cloud providers and 8 distributed-trust applications. On average, Flock achieves 1.05x the latency and 0.68-2.27x the cloud cost of a traditional distributed-trust deployment, without reliance on third-party relationships.
Multivariate Correlation Attacks and the Cryptanalysis of LFSR-based Stream Ciphers
Cryptanalysis of modern symmetric ciphers may be done by using linear equation systems with multiple right-hand sides, which describe the encryption process. The tool was introduced by Raddum and Semaev, who developed several solving methods for it. In this work, probabilities are ascribed to the right-hand sides and a statistical attack is then applied. The new approach is a multivariate generalisation of the correlation attack by Siegenthaler. A fast version of the attack is provided too. It may be viewed as an extension of the fast correlation attack by Meier and Staffelbach, based on exploiting so-called parity-checks for linear recurrences. Parity-checks are a particular case of the relations that we introduce in the present work; the notion of a relation does not depend on linear recurrences. We show how to apply the method to some LFSR-based stream ciphers, including those from the Grain family. The new method generally requires fewer keystream bits to recover the initial states than other techniques reported in the literature.
A New Perspective on Key Switching for BGV-like Schemes
Fully homomorphic encryption is a promising approach for computing on encrypted data, especially when sensitive data is involved. For BFV, BGV, and CKKS, three state-of-the-art encryption schemes, the most costly homomorphic primitive is so-called key switching. While a decent amount of research has been devoted to optimizing other aspects of these schemes, key switching has gone largely untouched. One exception has been a recent work [26] introducing a new double-decomposition technique. Its contributions are a great addition to the current state of the art with one flaw: the authors take a skewed perspective on key switching parameters and their asymptotic complexity, leading to incorrect conclusions about how effective their approach really is. In this work, we take a deep dive into key switching and correct, enhance, and improve the current state of the art. We provide a new perspective on the key switching parameters $P$, $\omega$, and $\tilde\omega$, resulting in the asymptotic bounds $\mathcal{O}(\omega \ell)$ and $\mathcal{O}(\omega \ell / \tilde\omega + \tilde\omega \ell / \omega)$ for the single- and double-decomposition techniques, respectively. We also revisit an idea by Gentry, Halevi, and Smart [18] to reduce the number of multiplications, which speeds up key switching by up to 63% and up to 11.6% for the single- and double-decomposition techniques, respectively.
zkCross: A Novel Architecture for Cross-Chain Privacy-Preserving Auditing
One of the key areas of focus in blockchain research is how to realize privacy-preserving auditing without sacrificing the system’s security and trustworthiness. However, simultaneously achieving auditing and privacy protection, two seemingly contradictory objectives, is challenging because an auditing system would require transparency and accountability which might create privacy and security vulnerabilities. This becomes worse in cross-chain scenarios, where the information silos from multiple chains further complicate the problem. In this paper, we identify three important challenges in cross-chain privacy-preserving auditing, namely Cross-chain Linkability Exposure (CLE), Incompatibility of Privacy and Auditing (IPA), and Full Auditing Inefficiency (FAI). To overcome these challenges, we propose $\mathsf{zkCross}$, which is a novel two-layer cross-chain architecture equipped with three cross-chain protocols to achieve privacy-preserving cross-chain auditing. Among these three protocols, two are privacy-preserving cross-chain protocols for transfer and exchange, respectively; the third one is an efficient cross-chain auditing protocol. These protocols are built on solid cross-chain schemes to guarantee privacy protection and audit efficiency. We implement $\mathsf{zkCross}$ on both local and cloud servers and perform comprehensive tests to validate that $\mathsf{zkCross}$ is well-suited for processing large-scale privacy-preserving auditing tasks. We evaluate the performance of the proposed protocols in terms of run time, latency, throughput, gas consumption, audit time, and proof size to demonstrate their practicality.
Stickel's Key Agreement Algebraic Variation
In this document we present a further development of the non-commutative algebra based key agreement due to E. Stickel and a way to deal with the algebraic break due to V. Shpilrain.
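For context, here is our sketch of Stickel's original matrix instantiation (the scheme this document varies; Shpilrain's break recovers equivalent secrets by linear algebra, which motivates the variation): both parties mix public non-commuting matrices with secret exponents, and correctness follows because powers of the same matrix commute:

import numpy as np

p = 101
rng = np.random.default_rng(1)

def matpow(M, e):
    # square-and-multiply for matrix powers mod p
    R = np.eye(M.shape[0], dtype=np.int64)
    while e:
        if e & 1:
            R = R @ M % p
        M = M @ M % p
        e >>= 1
    return R

a = rng.integers(0, p, (3, 3))  # public matrices; generically a @ b != b @ a
b = rng.integers(0, p, (3, 3))
l, m = 57, 23                   # Alice's secret exponents
u = matpow(a, l) @ matpow(b, m) % p
r, s = 71, 39                   # Bob's secret exponents
v = matpow(a, r) @ matpow(b, s) % p
K_A = matpow(a, l) @ v @ matpow(b, m) % p
K_B = matpow(a, r) @ u @ matpow(b, s) % p
assert np.array_equal(K_A, K_B)  # powers of a (resp. b) commute, so the keys agree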
Advancing Scalability in Decentralized Storage: A Novel Approach to Proof-of-Replication via Polynomial Evaluation
Proof-of-Replication (PoRep) plays a pivotal role in decentralized storage networks, serving as a mechanism to verify that provers consistently store retrievable copies of specific data. While PoRep’s utility is unquestionable, its implementation in large-scale systems, such as Filecoin, has been hindered by scalability challenges. Most existing PoRep schemes, such as Fisch’s (Eurocrypt 2019), face an escalating number of challenges and growing computational overhead as the number of stored files increases. This paper introduces a novel PoRep scheme distinctively tailored for expansive decentralized storage networks. At its core, our approach hinges on polynomial evaluation, diverging from the probabilistic checking prevalent in prior works. Remarkably, our design requires only a single challenge, irrespective of the number of files, ensuring both prover’s and verifier’s run-times remain manageable even as file counts soar. Our approach introduces a paradigm shift in PoRep designs, offering a blueprint for highly scalable and efficient decentralized storage solutions.
Sublinear-Round Broadcast without Trusted Setup against Dishonest Majority
Byzantine broadcast is one of the fundamental problems in distributed computing. Many practical applications, from secure multiparty computation to consensus mechanisms for blockchains, require increasingly weaker trust assumptions as well as scalability for an ever-growing number of users, which rules out existing solutions with a linear number of rounds or trusted setup requirements. In this paper, we propose the first sublinear-round and trustless Byzantine broadcast protocol. Unlike previous sublinear-round protocols, our protocol does not assume the existence of a trusted dealer who honestly issues keys and common random strings to the parties.
Our protocol is based on a new cryptographic protocol called verifiable graded consensus, designed to act as an untrusted online setup, enabling $n$ parties to almost agree on shared random strings. We propose an implementation of the verifiable graded consensus protocol using verifiable delay functions with transparent setup and random oracles, which is then used to run a committee-based Byzantine protocol, similar to Chan et al. (PKC 2020), in an unbiased fashion. Finally, we obtain a polylog-round trustless Byzantine broadcast with amortized communication complexity of $\tilde O(n^2)$, which can be further improved to $\tilde O(n)$ per instance for multiple instances of parallel broadcast.
Bruisable Onions: Anonymous Communication in the Asynchronous Model
In onion routing, a message travels through the network via a series of intermediaries, wrapped in layers of encryption to make it difficult to trace. Onion routing is an attractive approach to realizing anonymous channels because it is simple and fault tolerant. So far, however, onion routing protocols that provably achieve anonymity in realistic adversary models have been known only for the synchronous model of communication.
In this paper, we give the first onion routing protocol that achieves anonymity in the asynchronous model of communication. The key tool that our protocol relies on is the novel cryptographic object that we call bruisable onion encryption. The idea of bruisable onion encryption is that even though neither the onion's path nor its message content can be altered in transit, an intermediate router on the onion's path that observes that the onion is delayed can nevertheless slightly damage, or bruise, it. An onion that is chronically delayed will have been bruised by many intermediaries on its path and become undeliverable. This prevents timing attacks and, as we show, yields a provably secure onion routing protocol in the asynchronous setting.
Security of Fixed-Weight Repetitions of Special-Sound Multi-Round Proofs
Interactive proofs are a cornerstone of modern cryptography and as such used in many areas, from digital signatures to multi-party computation. Often the knowledge error $\kappa$ of an interactive proof is not small enough, and thus needs to be reduced. This is usually achieved by repeating the interactive proof in parallel $t$ times. Recently, it was shown that parallel repetition of any $(k_1, \ldots , k_\mu)$-special-sound multi-round public-coin interactive proof reduces the knowledge error from $\kappa$ to $\kappa^t$, which is optimal. However, in many cases parallel repetitions lead to a significant increase in transcript size. A common technique to mitigate this drawback, often used in digital signatures obtained via the Fiat-Shamir transform, is to use fixed-weight challenges, i.e., vectors of challenges having a constant number of entries equal to a fixed value. While widely used, this method has not been fully assessed from a security standpoint. In particular, the effect of the technique on the knowledge error of the special-sound repeated interactive proof has remained unstudied. In this work, we fill the gap and prove that a fixed-weight repetition of a $(k_1, \ldots, k_\mu)$-special-sound multi-round public-coin interactive proof is still knowledge sound. We provide an explicit bound for the knowledge error of the protocol, proving that it matches the cheating probability of a dishonest prover. Our results apply to some recently proposed digital signatures which are supposed to be quantum resistant, for example CROSS.
Optimized Homomorphic Evaluation of Boolean Functions
We propose a new framework to homomorphically evaluate Boolean functions using the Torus Fully Homomorphic Encryption (TFHE) scheme. Compared to previous approaches focusing on Boolean gates, our technique can evaluate more complex Boolean functions with several inputs using a single bootstrapping. This allows us to greatly reduce the number of bootstrapping operations necessary to evaluate a Boolean circuit compared to previous works, thus achieving significant performance improvements. We define our approach theoretically; it consists of adding an intermediate homomorphic layer between the plain Boolean space and the ciphertext space. This layer relies on so-called $p$-encodings embedding bits into $\mathbb{Z}_p$. We analyze the properties of these encodings that enable the evaluation of a given Boolean function and provide a deterministic algorithm (as well as an efficient heuristic) to find valid sets of encodings for a given function. We also propose a method to decompose any Boolean circuit into Boolean functions which are efficiently evaluable using our approach. We apply our framework to homomorphically evaluate various cryptographic primitives, and in particular the AES cipher. Our implementation results show significant improvements compared to the state of the art.
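As a simplified reading of the validity condition (our toy brute force, ignoring the negacyclicity constraints the paper handles): a set of $p$-encodings is usable for $f$ when the sums of chosen encodings land in disjoint residues mod $p$ for $f=0$ and $f=1$ inputs, so a single programmable bootstrapping, programmed as a lookup table on the sum, evaluates $f$ in one shot:

from itertools import product

def find_p_encodings(f, n, p):
    # brute-force pairs (d_i^0, d_i^1) in Z_p such that the encoding sums for
    # f = 0 and f = 1 inputs are disjoint mod p
    for enc in product(range(p), repeat=2 * n):
        pairs = [(enc[2*i], enc[2*i + 1]) for i in range(n)]
        sums = {0: set(), 1: set()}
        for x in product((0, 1), repeat=n):
            sums[f(x)].add(sum(pairs[i][x[i]] for i in range(n)) % p)
        if not sums[0] & sums[1]:
            return pairs
    return None

maj3 = lambda x: int(sum(x) >= 2)    # 3-input majority
print(find_p_encodings(maj3, 3, 4))  # e.g. bits as {0,1}: sums {0,1} vs {2,3} mod 4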
Security Analysis of an Image Encryption Based on the Kronecker Xor Product, the Hill Cipher and the Sigmoid Logistic Map
In 2023, Mfungo et al. introduced an image encryption scheme that employs the Kronecker xor product, the Hill cipher, and a chaotic map. Their proposal uses the chaotic map to dynamically generate two of the three secret keys employed by their scheme. Note that both of these keys depend on the size of the original image, while the Hill key is static. Despite the authors' assertion that their proposal offers sufficient security ($149$ bits) for transmitting color images over unsecured channels, we found that this is not accurate. To support our claim, we present a chosen plaintext attack that requires $2$ oracle queries and has a worst-case complexity of $\mathcal O(2^{32})$. Note that in this case Mfungo et al.'s scheme has a complexity of $\mathcal O(2^{33})$, and thus our attack is two times faster than an encryption. The attack is viable because the two dynamically generated keys remain unchanged for different plaintext images of the same size, while the Hill key remains unaltered for all images.
Sprints: Intermittent Blockchain PoW Mining
Cryptocurrencies and decentralized platforms have been rapidly gaining traction since Nakamoto's discovery of Bitcoin's blockchain protocol. Prominent systems use Proof of Work (PoW) to achieve unprecedented security for digital assets. However, the significant carbon footprint due to the manufacturing and operation of PoW mining hardware is leading policymakers to consider stark measures against them and various systems to explore alternatives. But these alternatives imply stepping away from key security aspects of PoW.
We present Sprints, a blockchain protocol that achieves almost the same security guarantees as PoW blockchains, but with an order-of-magnitude lower carbon footprint, while increasing the number of mining rigs by a factor of 1.27. Our conservative estimate of environmental footprint uses common metrics, taking into account both power and hardware. To achieve this reduction, Sprints forces miners to mine intermittently. It interleaves Proof of Delay (PoD, e.g., using a Verifiable Delay Function) and PoW, where only the latter bears a significant resource expenditure. We prove that in Sprints the attacker's success probability is the same as in legacy PoW. To evaluate practical performance, we analyze the effect of the shortened PoW duration, showing a minor reduction in resilience (49% instead of 50%). We confirm the results with a full implementation using 100 patched Bitcoin clients in an emulated network.
Improving Generic Attacks Using Exceptional Functions
Over the past ten years, there have been many attacks on symmetric constructions using the statistical properties of random functions. Initially, these attacks targeted iterated hash constructions and their combiners, developing a wide array of methods based on internal collisions and on the average behavior of iterated random functions. More recently, Gilbert et al. (EUROCRYPT 2023) introduced a forgery attack on so-called duplex-based Authenticated Encryption modes which was based on exceptional random functions, i.e., functions whose graph admits a large component with an exceptionally small cycle.
In this paper, we expand the use of such functions in generic cryptanalysis with several new attacks. First, we improve the attack of Gilbert et al. from $\mathcal{O}(2^{3c/4})$ to $\mathcal{O}(2^{2c/3})$, where $c$ is the capacity. This new attack uses a nested pair of functions with exceptional behavior, where the second function is defined over the cycle of the first one. Next, we introduce several new generic attacks against hash combiners, notably using small cycles to improve the complexities of the best existing attacks on the XOR combiner, Zipper Hash and Hash-Twice.
Last but not least, we propose the first quantum second preimage attack against Hash-Twice, reaching a quantum complexity $\mathcal{O}(2^{3n/7})$.
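For intuition about "exceptional" functions (our toy experiment, not the paper's attacks): in the functional graph of a random function on $N$ points, the cycle reached from a random start has expected length $\Theta(\sqrt{N})$; exceptional functions are the rare ones whose large component has a much smaller cycle. The sketch below measures cycle lengths empirically:

import random

def cycle_length(f, start):
    # walk the rho-shaped path from start until a node repeats
    seen, x, t = {}, start, 0
    while x not in seen:
        seen[x] = t
        x = f[x]
        t += 1
    return t - seen[x]

random.seed(7)
N = 1 << 16
lengths = sorted(
    cycle_length([random.randrange(N) for _ in range(N)], 0) for _ in range(200)
)
print("median cycle length:", lengths[100], "~ Theta(sqrt(N)); sqrt(N) =", int(N**0.5))
print("smallest observed:", lengths[:5])  # tails like these are the 'exceptional' cases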
Low-Latency Linear Transformations with Small Key Transmission for Private Neural Network on Homomorphic Encryption
In the field of Artificial Intelligence (AI), convolution operations have primarily been used in Convolutional Neural Networks (CNNs). However, their utility is increasing with the appearance of convolution-integrated transformers or state space models where convolution is a constituent element. In the field of private AI, a generalized algorithm, multiplexed parallel convolution, was recently proposed to implement CNNs based on the residue number system variant of the Cheon-Kim-Kim-Song homomorphic encryption scheme. Multiplexed parallel convolution is highly applicable, but its usage has been partly limited because it requires many rotation operations. In this paper, we propose rotation optimized convolution, which reduces the rotations required for multiplexed parallel convolution, thus lowering latency, enhancing usability, and additionally decreasing the number of required rotation keys. We further reduce the size of the rotation keys by applying the hierarchical rotation key system and our proposed small level key system. We also propose a new form of matrix-vector multiplication, called Parallel Baby-Step Giant-Step matrix-vector multiplication, which also reduces the number of rotations. In our experiments, rotation optimized convolution achieved up to a 70% reduction in execution time and a 29x reduction in rotation keys. Our proposed matrix-vector multiplication method reduced execution time by up to 64%.
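For background on the baby-step giant-step pattern being parallelized, here is our plaintext sketch of the classical Halevi-Shoup diagonal method (np.roll models a homomorphic slot rotation): writing the diagonal index as $i = gj + k$ lets a matrix-vector product use about $g + n/g$ distinct rotations instead of $n$:

import numpy as np

def rot(v, k):
    return np.roll(v, -k)            # models a homomorphic slot rotation

def diag(M, i):
    n = M.shape[0]
    return np.array([M[r, (r + i) % n] for r in range(n)])

def bsgs_matvec(M, v, g):
    # M @ v = sum_i diag_i * rot(v, i); with i = g*j + k this regroups so that
    # only g baby-step and n/g giant-step rotations are needed instead of n
    n = M.shape[0]
    baby = [rot(v, k) for k in range(g)]
    out = np.zeros(n, dtype=np.int64)
    for j in range(n // g):
        inner = sum(rot(diag(M, g*j + k), -g*j) * baby[k] for k in range(g))
        out = out + rot(inner, g*j)
    return out

n, g = 16, 4
M = (np.arange(n * n).reshape(n, n) % 7).astype(np.int64)
v = np.arange(n, dtype=np.int64)
assert np.array_equal(bsgs_matvec(M, v, g), M @ v)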
Mithril: Stake-based Threshold Multisignatures
Stake-based multiparty cryptographic primitives operate in a setting where participants are associated with their stake, security is argued against an adversary that is bounded by the total stake it possesses, as opposed to the number of parties, and we are interested in scalability, i.e., the complexity of critical operations depends only logarithmically on the number of participants (who are assumed to be numerous).
In this work we put forth a new stake-based primitive, stake-based threshold multisignatures (STM, or “Mithril” signatures), which allows the aggregation of individual signatures into a compact multisignature provided the stake that supports a given message exceeds a stake threshold. This is achieved by having for each message a pseudorandomly sampled subset of participants eligible to issue an individual signature; this ensures the scalability of signing, aggregation and verification.
We formalize the primitive in the universal composition setting and propose efficient constructions for STMs. We also showcase that STMs are eminently useful in the cryptocurrency setting by providing two applications: (i) stakeholder decision-making for Proof of Work (PoW) blockchains, specifically, Bitcoin, and (ii) fast bootstrapping for Proof of Stake (PoS) blockchains.
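To sketch the eligibility mechanism (our toy rendering: a hash stands in for the VRF-style signatures, and the mapping phi and all parameters below are illustrative, not the paper's): each of $m$ lottery indices is won with a probability that depends on the signer's stake fraction, and a multisignature can be aggregated once winning signatures cover a quorum of indices:

import hashlib, random

def phi(stake_frac, f=0.2):
    # dense mapping from stake fraction to winning probability (Praos-style)
    return 1 - (1 - f) ** stake_frac

def wins(party, index, msg, stake_frac):
    h = hashlib.sha256(f"{party}|{index}|{msg}".encode()).digest()
    u = int.from_bytes(h, "big") / 2.0**256   # stands in for a VRF evaluation
    return u < phi(stake_frac)

random.seed(0)
m, quorum = 100, 15                           # hypothetical lottery parameters
stakes = {party: random.random() for party in range(30)}
total = sum(stakes.values())
msg = "checkpoint-42"
winners = [(i, pty) for i in range(m) for pty in stakes
           if wins(pty, i, msg, stakes[pty] / total)]
# aggregation succeeds once winning signatures cover at least `quorum` indices
print(len({i for i, _ in winners}) >= quorum, "with", len(winners), "winning signatures")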
Lattice-based Fault Attacks against ECMQV
ECMQV is a standardized key agreement protocol based on ECC with an additional implicit signature authentication. In this paper we investigate the vulnerability of ECMQV against fault attacks and propose two efficient lattice-based fault attacks. In our attacks, by inducing a storage fault to the ECC parameter $a$ before the execution of ECMQV, we can construct two kinds of weak curves and successfully pass the public-key validation step in the protocol. Then, by solving ECDLP and using a guess-and-determine method, some information about the victim's temporary private key and the implicit-signature result can be deduced. Based on the retrieved information, we build two new lattice-attack models and recover the upper half of the static private key. Compared with previous lattice-attack models, our models relax the attack conditions and do not require exact partial knowledge of the nonces. The validity of the attacks is proven by experimental simulations, which show that our attacks pose real threats to unprotected ECMQV implementations, since only one permanent fault is sufficient to retrieve half of the bits of the secret key.
Lookup Arguments: Improvements, Extensions and Applications to Zero-Knowledge Decision Trees
Lookup arguments allow one to prove that the elements of a committed vector come from a (bigger) committed table. They enable novel approaches to reducing the prover complexity of general-purpose zkSNARKs, implementing "non-arithmetic operations" such as range checks, XOR and AND more efficiently. We extend the notion of lookup arguments along two directions and improve their efficiency: (1) we extend vector lookups to matrix lookups, where we can prove that a committed matrix is a submatrix of a committed table; (2) we consider the notion of zero-knowledge lookup argument, which preserves the privacy of both the sub-vector/sub-matrix and the table; (3) we present new zero-knowledge lookup arguments, dubbed cq+, zkcq+ and cq++, more efficient than the state of the art, namely the recent work by Eagen, Fiore and Gabizon named cq. Finally, we give a novel application of zero-knowledge matrix lookup arguments to the domain of zero-knowledge decision trees, where the model provider releases a commitment to a decision tree and can prove statistics over the committed data structure in zero knowledge. Our lookup-argument-based scheme has succinct verification, prover time complexity asymptotically better than the state of the art, and is secure in a strong security model where the commitment to the decision tree can be malicious.
CODING - Stream Cipher Methods by Varying Components during Ciphering Data
The kernel of the symmetric block ciphering methods presented here couples XOR operations with invertible substitution tables S over all possible $256^t$ byte groups (with $t = 1, 2, 3, \ldots$ bytes, fixed at the beginning), both derived from keys:
K(block) := S(S(block) $\otimes$ E$_{o}$) $\otimes$ E$_{u}$, where
- E$_{o}$ is an upper and E$_{u}$ a lower triangular (byte-group) matrix with $(\text{byte-block-length}/t)^2$ entries, with value 1 at all non-zero positions,
- $\oplus$ is the byte-group-wise addition without carry ('xor'; 'not xor' is possible too),
- $\otimes$ is the (vector-matrix) multiplication associated with $\oplus$.
Variable block lengths ($v \cdot t$, or lengths with a remainder mod $t$) are possible. This kernel can be applied $n$ times:
K$_{n}$(block) := K(...K(block)...) with $n$ K-operations, where $n$ can be variable.
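A minimal sketch of one plausible reading of this kernel for $t = 1$, where multiplication by the all-ones triangular matrices over XOR reduces to prefix/suffix XOR accumulation over the block; in the actual method the S-table is derived from keys as described below, not sampled as here:

```python
import random

random.seed(1)
S = list(range(256)); random.shuffle(S)     # stand-in S-table (key-derived in the scheme)

def prefix_xor(x):     # x * E_o over xor: position j receives x[0] ^ ... ^ x[j]
    out, acc = [], 0
    for b in x:
        acc ^= b
        out.append(acc)
    return out

def suffix_xor(x):     # x * E_u over xor: position j receives x[j] ^ ... ^ x[-1]
    return prefix_xor(x[::-1])[::-1]

def K(block):          # K(block) = S(S(block) * E_o) * E_u
    y = prefix_xor([S[b] for b in block])
    return suffix_xor([S[b] for b in y])

def K_n(block, n):     # the n-fold kernel
    for _ in range(n):
        block = K(block)
    return block

print(K_n([0x00] * 8 + [0xff] * 8, 3))      # every step is invertible, hence so is K_n
```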
Because XOR operations and S-tables only operate usefully if 'block' is not too homogeneous, and for safety, two further components are derived from keys:
- the parameters of two pseudo-random processes, and
- an operation key,
used at the beginning and at the end to obtain a ciphered block:
cblock := S(ZZ$_{2}$ $\oplus$ S(Op$_{E}$ $\oplus$ S(K$_{n}$(Op$_{A}$ $\oplus$ S(ZZ$_{1}$ $\oplus$ S(block)))))), where
- ZZ$_{1}$ and ZZ$_{2}$ are the block-length outputs of the first and second pseudo-random number processes,
- Op$_{A}$ and Op$_{E}$ are the front and back parts (or multiples) of the operation key.
An initial key is first expanded to $t \cdot 256^t$ bytes (all further keys have this size too) and can be modified so that the resulting key does not differ statistically from a random key.
To generate an invertible S-table, for each $n = 2$ to $256^t$ the value (modulo $n$) of just enough consecutive key bits to represent the number $n-1$ is determined, and the last $n$ S-table elements are shifted cyclically by this value. In this way all $256^t!$ possible tables can be generated from the top bits of all possible keys; each table has a length of $t \cdot 256^t$ bytes.
The byte-group value plus 1 at a position of an S-table determines the byte group in the key from which $2 \cdot 7$ bytes onward are used to initialize two floating-point numbers (IEEE 754) for a pseudo-random process. The floating-point numbers are re-initialized if a process would become cyclic.
The idea is to modify (operation) keys in the same way as data blocks, so as to generate and use more or less continually new S-tables, new pseudo-random processes, and new operation keys while ciphering data.
Inspections show that, despite knowledge of two of the three components (S-table, pseudo-random parameters, and operation key) as well as knowledge of original and ciphered data, the missing third component cannot be inferred, provided component modifications are carried out from time to time.
It is likewise shown that knowledge of the three components generated by a key does not allow the key itself to be inferred (because interim operation keys are used). That is, a compromise of data, and with it of components, does not affect data ciphered before the component change that led to the compromised components. By additionally using separate components solely for key modification, it is guaranteed that data sections ciphered after a component change starting from compromised components are not automatically compromised as well.
For these reasons a safe stream ciphering should be possible, as already constructed for $t = 1, 2, 3$.
The Multiple Millionaires' Problem: New Algorithmic Approaches and Protocols
We study a fundamental problem in Multi-Party Computation, which we call the Multiple Millionaires' Problem (MMP). Given a set of private integer inputs, the problem is to identify the subset of inputs that equal the maximum (or minimum) of that set, without revealing any further information on the inputs beyond what is implied by the desired output. Such a problem is a natural extension of the Millionaires' Problem, which is the very first Multi-Party Computation problem that was presented in Andrew Yao's seminal work [30]. A closely related problem is MaxP, in which the value of the maximum is sought. We study these fundamental problems and describe several algorithmic approaches and protocols for their solution. In addition, we compare the performance of the protocols under several selected settings. As applications of privacy-preserving computation are more and more commonly implemented in industrial systems, MMP and MaxP become important building blocks in privacy-preserving statistics, machine learning, auctions and other domains. One of the prominent advantages of the protocols that we present here is their simplicity. As they solve fundamental problems that are essential building blocks in various application scenarios, we believe that the presented solutions to those problems, and the comparison between them, will serve well future researchers and practitioners of secure distributed computing.
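To pin down what any MMP or MaxP protocol must reveal, and nothing more, here is the target functionality computed in the clear (an illustration of the problem statement, not a protocol):

```python
def mmp(inputs):
    """Ideal MMP functionality: reveal only which parties hold the maximum."""
    m = max(inputs)
    return [i for i, x in enumerate(inputs) if x == m]

def maxp(inputs):
    """Ideal MaxP functionality: reveal only the maximum value itself."""
    return max(inputs)

assert mmp([3, 7, 7, 2]) == [1, 2]   # parties 1 and 2 are the (tied) richest
assert maxp([3, 7, 7, 2]) == 7
```

A secure protocol realizes these functions while each party learns nothing about the other inputs beyond what the output itself implies.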
Radical Vélu Isogeny Formulae
We provide explicit radical $N$-isogeny formulae for all odd integers $N$. The formulae are compact closed-form expressions which require one $N$th root computation and $\mathcal{O}(N)$ basic field operations. They are highly efficient for computing long chains of $N$-isogenies and have the potential to be extremely beneficial for speeding up certain cryptographic protocols such as CSIDH. Unfortunately, the formulae remain conjectural, but we provide ample supporting evidence which strongly suggests their correctness.
For CSIDH-512, we notice an additional 35% speed-up when using radical isogenies up to $N=199$, compared to the work by Castryck, Decru, Houben and Vercauteren, which uses radical isogenies up to $N=19$ only. The addition of our radical isogenies also speeds up the computation of larger class group actions in a comparable fashion.
Multiple Sampling Fast Correlation Attack on Small State Stream Ciphers with Limited Round Key Period
The fast correlation attack (FCA) is a powerful cryptanalysis technique that targets stream ciphers based on linear feedback shift registers (LFSRs). Several FCAs have been applied to small state stream ciphers (SSCs). In this paper, the idea of multiple sampling is proposed to use the available keystream bits more efficiently and decrease the data complexity of the attacks. This idea helps to overcome the limitation of SSCs on the number of output keystream bits. Moreover, we classify the parity-check equations obtained from the different sampling rounds into groups, to ensure that the round keys used in these equations are the same. Our attack is applied to Fruit-80 and reduces the data complexity from $2^{56.82}$ to $2^{49.82}$. This modified FCA can be applied to all SSCs with limited round key periods. Finally, we suggest a new design idea to strengthen SSCs against FCAs.
SPRINT: High-Throughput Robust Distributed Schnorr Signatures
We describe high-throughput threshold protocols with guaranteed output delivery for generating Schnorr-type signatures. The protocols run a single message-independent interactive ephemeral randomness generation procedure (e.g., DKG) followed by a \emph{non-interactive} multi-message signature generation procedure. The protocols offer a significant increase in throughput already for as few as ten parties, while remaining highly efficient for many hundreds of parties, with thousands of signatures generated per minute (and over 10,000 in the optimistic normal case).
These protocols extend seamlessly to the dynamic/proactive setting, where each run of the protocol uses a new committee, and they support sub-sampling the committees from among an effectively unbounded number of nodes. The protocols work over a broadcast channel in both synchronous and asynchronous networks.
The combination of these features makes our protocols a good match for implementing a signature service over an (asynchronous) public blockchain with many validators, where guaranteed output delivery is an absolute must. In that setting, there is a system-wide public key, where the corresponding secret signature key is distributed among the validators. Clients can submit messages (under suitable controls, e.g. smart contracts), and authorized messages are signed relative to the global public key.
Asymptotically, when running with committees of $n$ parties, our protocols can generate $\Omega(n^2)$ signatures per run, while providing resilience against $\Omega(n)$ corrupted nodes, and using broadcast bandwidth of only $O(n^2)$ group elements and scalars. For example, we can sign about $n^2/16$ messages using just under $2n^2$ total bandwidth while supporting resilience against $n/4$ corrupted parties, or sign $n^2/8$ messages using just over $2n^2$ total bandwidth with resilience against $n/5$ corrupted parties.
We prove security of our protocols by reduction to the hardness of the discrete logarithm problem in the random-oracle model.
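The two example trade-offs quoted above can be sanity-checked with a back-of-the-envelope script (an illustration of the stated figures, not of the protocol itself):

```python
# The two example trade-offs from the abstract, as functions of committee size n:
def tradeoff_a(n):   # ~n^2/16 signatures, resilience n/4, just under 2n^2 bandwidth
    return n * n // 16, n // 4, 2 * n * n

def tradeoff_b(n):   # ~n^2/8 signatures, resilience n/5, just over 2n^2 bandwidth
    return n * n // 8, n // 5, 2 * n * n

sigs, t, bw = tradeoff_a(128)
print(sigs, t, bw)   # 1024 signatures per run, 32 tolerated corruptions, ~32768 elements
```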
Multi-Server Doubly Efficient PIR
Doubly Efficient Private Information Retrieval (DEPIR) pertains to a Private Information Retrieval (PIR) scheme with both near-linear server-side preprocessing time and sublinear query time. Very recently, Lin et al. (STOC '23) devised the first single-server DEPIR scheme from standard assumptions. However, their work left one major question open: can we build a DEPIR scheme in the multi-server setting, without relying on heavy cryptographic machinery?
In this work, we propose the first doubly efficient information-theoretic PIR scheme in the multi-server setting. For a database of size $N$, our scheme allows servers to answer an unbounded number of client queries in $N^{o(1)}$ time, after a single preprocessing phase taking $N^{1+o(1)}$ time. Our result is only an $N^{o(1)}$ factor away from the lower bound proven by Persiano and Yeo (SODA '22) for this setting.
In addition, we devise a second information-theoretic PIR scheme which pushes the state of the art for the setting where the number of servers is more constrained. It achieves equally fast query times as our first scheme above, and a preprocessing time of $N^{2+o(1)}$.
Cryptanalysis of Algebraic Verifiable Delay Functions
Verifiable Delay Functions (VDF) are a class of cryptographic primitives aiming to guarantee a minimum computation time, even for an adversary with massive parallel computational power. They are useful in blockchain protocols, and several practical candidates have been proposed based on exponentiation in a large finite field: Sloth++, Veedo, MinRoot. The underlying assumption of these constructions is that computing an exponentiation $x^e$ requires at least $\log_2 e$ sequential multiplications.
In this work, we analyze the security of these algebraic VDF candidates. In particular, we show that the latency of exponentiation can be reduced using parallel computation, contradicting this underlying assumption.
Epistle: Elastic Succinct Arguments for Plonk Constraint System
We study elastic SNARKs, a concept introduced by the elegant work of Gemini (EUROCRYPT 2022). The prover of an elastic SNARK has multiple configurations with different time and memory trade-offs, and the output proof is independent of the chosen configuration. In addition, during the execution of the protocol, the space-efficient prover can pause the protocol and save its current state; the time-efficient prover can then resume the protocol from that state. Gemini constructs an elastic SNARK for R1CS.
We present Epistle, an elastic SNARK for the Plonk constraint system. For an instance of size $N$, in the time-efficient configuration, the prover uses $O_{\lambda} (N)$ cryptographic operations and $O(N)$ memory; in the space-efficient configuration, the prover uses $O_{\lambda} (N \log N)$ cryptographic operations and $O(\log N)$ memory. Compared to Gemini, our approach reduces the asymptotic time complexity of the space-efficient prover by a factor of $\log N$. The key technique we use is to make the toolbox for multivariate PIOPs provided by HyperPlonk (EUROCRYPT 2023) elastic, the most important aspect being the redesign of each protocol in the toolbox in the streaming model.
We implement Epistle in Rust. Our benchmarks show that Epistle maintains a stable memory overhead of around $1.5$ GB for instance sizes exceeding $2^{21}$, while the time overhead shows a linear growth trend.
Sometimes You Can’t Distribute Random-Oracle-Based Proofs
We investigate the conditions under which straight-line extractable NIZKs in the random oracle model (i.e. without a CRS) permit multiparty realizations that are black-box in the same random oracle. We show that even in the semi-honest setting, any MPC protocol to compute such a NIZK cannot make black-box use of the random oracle or a hash function instantiating it if security against all-but-one corruptions is desired, unless the number of queries made by the verifier to the oracle grows linearly with the number of parties. This presents a fundamental barrier to constructing efficient protocols to securely distribute the computation of NIZKs (and signatures) based on MPC-in-the-head, PCPs/IOPs, and sigma protocols compiled with transformations due to Fischlin, Pass, or Unruh.
When the adversary is restricted to corrupt only a constant fraction of parties, we give a positive result by means of a tailored construction, which demonstrates that our impossibility does not extend to weaker corruption models in general.
Concretely Efficient Lattice-based Polynomial Commitment from Standard Assumptions
Polynomial commitment is a crucial cryptographic primitive in constructing zkSNARKs. Most practical constructions to date are either vulnerable against quantum adversaries or lack homomorphic properties, which are essential for recursive proof composition and proof batching. Recently, lattice-based constructions have drawn attention for their potential to achieve all the desirable properties, though they often suffer from concrete inefficiency or rely on newly introduced assumptions requiring further cryptanalysis.
In this paper, we propose a novel construction of a polynomial commitment scheme based on standard lattice assumptions. Our scheme achieves square-root proof size and verification complexity, ensuring concrete efficiency in proof size, proof generation, and verification. Additionally, it features a transparent setup and public verifiability.
When compared with Brakedown (CRYPTO 2023), a recent code-based construction, our scheme offers comparable performance across all metrics. Furthermore, its proof size is approximately 4.1 times smaller than SLAP (EUROCRYPT 2024), a recent lattice-based construction.
On cycles of pairing-friendly abelian varieties
One of the most promising avenues for realizing scalable proof systems relies on the existence of 2-cycles of pairing-friendly elliptic curves. Such a cycle consists of two elliptic curves E/GF(p) and E'/GF(q) that both have a low embedding degree and also satisfy q = #E and p = #E'. These constraints turn out to be rather restrictive; in the decade that has passed since 2-cycles were first proposed for use in proof systems, no new constructions of 2-cycles have been found.
In this paper, we generalize the notion of cycles of pairing-friendly elliptic curves to study cycles of pairing-friendly abelian varieties, with a view towards realizing more efficient pairing-based SNARKs. We show that considering abelian varieties of dimension larger than 1 unlocks a number of interesting possibilities for finding pairing-friendly cycles, and we give several new constructions that can be instantiated at any security level.
Loquat: A SNARK-Friendly Post-Quantum Signature based on the Legendre PRF with Applications in Ring and Aggregate Signatures
We design and implement a novel post-quantum signature scheme based on the Legendre PRF, named Loquat. Prior to this work, efficient approaches for constructing post-quantum signatures with comparable security assumptions mainly used the MPC-in-the-head paradigm or hash trees. Our method departs from these paradigms and, notably, is SNARK-friendly, a feature not commonly found in earlier designs. Loquat requires significantly fewer computational operations for verification than other symmetric-key-based post-quantum signature schemes that support stateless many-time signing. Notably, the performance of Loquat remains practical even when employing algebraic hash functions. Our Python-based implementations of Loquat demonstrate a signature size of 46KB, with a signing time of 5.04 seconds and a verification time of merely 0.21 seconds. Instantiating the random oracle with an algebraic hash function results in the R1CS constraints for signature verification being about 148K, 7 to 175 times smaller than those required for state-of-the-art MPC-in-the-head-based signatures and 3 to 9 times less than those for SPHINCS+ [Bernstein et al. CCS’19].
We explore two applications of Loquat. First, we incorporate it into the ID-based ring signature scheme [Buser et al. ACNS’22], achieving a significant reduction in signature size from 1.9 MB to 0.9 MB with stateless signing and practical master key generation. Our second application presents a SNARK-based aggregate signature scheme. We use the implementations of Aurora [Ben-Sasson et al. EC’19] and Fractal [Chiesa et al. EC’20] to benchmark our aggregate signature’s performance. Our findings show that aggregating 32 Loquat signatures using Aurora results in a proving time of about 7 minutes, a verification time of 66 seconds, and an aggregate signature size of 197 KB. Furthermore, by leveraging the recursive proof composition feature of Fractal, we achieve an aggregate signature with a constant size of 145 KB, illustrating Loquat’s potential for scalability in cryptographic applications.
Result Pattern Hiding Boolean Searchable Encryption: Achieving Negligible False Positive Rates in Low Storage Overhead
Boolean Searchable Symmetric Encryption (SSE) enables secure outsourcing of databases to an untrusted server in encrypted form and allows the client to execute secure Boolean queries involving multiple keywords. The leakage of keyword pair result pattern (KPRP) in a Boolean search poses a significant threat, which reveals the intersection of documents containing any two keywords involved in a search and can be exploited by attackers to recover plaintext information about searched keywords (USENIX Security’16). However, existing KPRP-hiding schemes either rely on Bloom filters (S&P’14, CCS’18), leading to high false positive search results (where non-matching documents could be erroneously identified as matches) that hinder the extension to multi-client settings (CCS’13), or require excessive server storage (PETS’23), making them impractical for large-scale sparse databases.
In this paper, we introduce Hidden Boolean Search (HBS), the first KPRP-hiding Boolean SSE scheme with both negligible false positives (essential for satisfying the standard correctness definition of SSE) and low server storage requirements. HBS leverages a novel cryptographic tool called Result-hiding Filter (RH-filter). It distinguishes itself as the first tool that supports computationally correct membership queries with hiding results at nearly constant overhead. With the help of RH-filter, compared to the most efficient KPRP-hiding scheme (CCS’18) in terms of overall storage and search efficiency, HBS surpasses it across all performance metrics, mitigates false positives, and achieves significantly stronger query expressiveness. We further extend HBS to the dynamic setting, resulting in a scheme named DHBS, which maintains KPRP-hiding while ensuring forward and backward privacy—two critical security guarantees in the dynamic setting.
More Efficient Approximate $k$-wise Independent Permutations from Random Reversible Circuits via log-Sobolev Inequalities
We prove that the permutation computed by a reversible circuit with $\widetilde{O}(nk\cdot \log(1/\epsilon))$ random $3$-bit gates is $\epsilon$-approximately $k$-wise independent. Our bound improves on currently known bounds in the regime when the approximation error $\epsilon$ is not too small. We obtain our results by analyzing the log-Sobolev constants of appropriate Markov chains rather than their spectral gaps.
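The objects analyzed here are easy to sample. The toy sketch below (an illustration, not the paper's analysis) draws a circuit of random 3-bit gates, each a uniformly random permutation of $\{0,\ldots,7\}$ on three wires, and applies it to $n$-bit strings; the theorem says roughly $\widetilde{O}(nk\cdot\log(1/\epsilon))$ such gates already yield an $\epsilon$-approximately $k$-wise independent permutation:

```python
import random

def random_circuit(n, num_gates, rng=random):
    """A reversible circuit: each gate is a uniformly random permutation of
    {0,...,7} applied to three distinct wires out of n."""
    return [(rng.sample(range(n), 3), rng.sample(range(8), 8))
            for _ in range(num_gates)]

def apply_circuit(circuit, x):
    for wires, perm in circuit:
        idx = sum(((x >> w) & 1) << i for i, w in enumerate(wires))   # read 3 bits
        out = perm[idx]                                               # permute them
        for i, w in enumerate(wires):
            x = (x & ~(1 << w)) | (((out >> i) & 1) << w)             # write back
    return x

# Each gate is a bijection on {0,1}^n, so the whole circuit is a permutation:
circuit = random_circuit(16, 200)
assert len({apply_circuit(circuit, x) for x in range(2 ** 12)}) == 2 ** 12
```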
Collaborative, Segregated NIZK (CoSNIZK) and More Efficient Lattice-Based Direct Anonymous Attestation
Direct Anonymous Attestation (DAA) allows a (host) device with a Trusted Platform Module (TPM) to prove that it has a certified configuration of hardware and software whilst preserving the privacy of the device. All deployed DAA schemes are based on classical security assumptions. Despite a long line of works proposing post-quantum designs, the vast majority give only theoretical schemes, and where concrete parameters are computed, their efficiency is far from practical.
Our first contribution is to define collaborative, segregated, non-interactive zero knowledge (CoSNIZK). This notion generalizes the property of collaborative zero-knowledge as formalised by Ozdemir and Boneh (USENIX ’22) so that the zero-knowledge property need only apply to a subset of provers during collaborative proof generation. This is of particular interest for proxy cryptography, in which part of an expensive but sensitive computation may be delegated to another party. We give a lattice-based CoSNIZK based on the highly efficient proof framework of (Crypto ’22).
Our main contribution is the construction of a DAA based on the hardness of problems over module lattices as well as the ISISf assumption recently introduced by Bootle et al. (Crypto ’23). A key component of our work is the CoSNIZK construction which allows the TPM and host to jointly create attestations whilst protecting TPM key material from a potentially corrupt host. We prove the security of our DAA scheme according to the well-established UC definition of Camenisch et al. (PKC ’16). Our design achieves DAA signatures more than 1.5 orders of magnitude smaller than previous works at only 38KB making it the first truly practical candidate for post-quantum DAA.
Formal Security Proofs via Doeblin Coefficients: Optimal Side-channel Factorization from Noisy Leakage to Random Probing
Masking is one of the most popular countermeasures to side-channel attacks, because it can offer provable security. However, depending on the adversary's model, useful security guarantees can be hard to provide. At first, masking has been shown secure against t-threshold probing adversaries by Ishai et al. at Crypto'03. It has then been shown secure in the more generic random probing model by Duc et al. at Eurocrypt'14. Prouff and Rivain have introduced the noisy leakage model to capture more realistic leakage at Eurocrypt'13. Reduction from noisy leakage to random probing has been introduced by Duc et al. at Eurocrypt'14, and security guarantees were improved for both models by Prest et al. at Crypto'19, Duc et al. in Eurocrypt'15/J. Cryptol'19, and Masure and Standaert at Crypto'23. Unfortunately, as it turns out, we found that previous proofs in either random probing or noisy leakage models are flawed, and such flaws do not appear easy to fix.
In this work, we show that the Doeblin coefficient allows one to overcome these flaws. In fact, it yields optimal reductions from noisy leakage to random probing, thereby providing a correct and usable metric to properly ground security proofs. This shows the inherent inevitable cost of a reduction from the noisy leakage to the random probing model. We show that it can also be used to derive direct formal security proofs using the subsequence decomposition of Prouff and Rivain.
BackdoorIndicator: Leveraging OOD Data for Proactive Backdoor Detection in Federated Learning
In a federated learning (FL) system, decentralized data owners (clients) can upload their locally trained models to a central server to jointly train a global model. Malicious clients may plant backdoors into the global model by uploading poisoned local models, causing misclassification to a target class when attacker-defined triggers are encountered. Existing backdoor defenses show inconsistent performance under different system and adversarial settings, especially when the malicious updates are made statistically close to the benign ones. In this paper, we first reveal that planting subsequent backdoors with the same target label significantly helps to maintain the accuracy of previously planted backdoors. We then propose a novel proactive backdoor detection mechanism for FL named BackdoorIndicator, in which the server injects indicator tasks into the global model using out-of-distribution (OOD) data. Leveraging the fact that backdoor samples are OOD samples with respect to benign samples, the server, while completely agnostic of the potential backdoor types and target labels, can accurately detect the presence of backdoors in uploaded models by evaluating the indicator tasks. We perform systematic and extensive empirical studies to demonstrate the consistently superior performance and practicality of BackdoorIndicator over baseline defenses across a wide range of system and adversarial settings.
A new multivariate primitive from CCZ equivalence
Multivariate Cryptography is one of the main candidates for Post-quantum Cryptography. Multivariate schemes are usually constructed by applying two secret affine invertible transformations $\mathcal S,\mathcal T$ to a set of multivariate polynomials $\mathcal{F}$ (often quadratic). The secret polynomials $\mathcal{F}$ possess a trapdoor that allows the legitimate user to find a solution of the corresponding system, while the public polynomials $\mathcal G=\mathcal S\circ\mathcal F\circ\mathcal T$ look like random polynomials. The polynomials $\mathcal G$ and $\mathcal F$ are said to be affine equivalent. In this article, we present a more general way of constructing a multivariate scheme by considering CCZ equivalence, which has been introduced and studied in the context of vectorial Boolean functions.
HAWKEYE – Recovering Symmetric Cryptography From Hardware Circuits
We present the first comprehensive approach for detecting and analyzing symmetric cryptographic primitives in gate-level descriptions of hardware. To capture both ASICs and FPGAs, we model the hardware as a directed graph, where gates become nodes and wires become edges. For modern chips, those graphs can easily consist of hundreds of thousands of nodes. More abstractly, we find subgraphs corresponding to cryptographic primitives in a potentially huge graph, the sea-of-gates, describing an entire chip. As we are particularly interested in unknown cryptographic algorithms, we cannot rely on searching for known parts such as S-boxes or round constants.
Instead, we are looking for parts of the chip that perform highly local computations. A major result of our work is that many symmetric algorithms can be reliably located and sometimes even identified by our approach, which we call HAWKEYE. Our findings are verified by extensive experimental results, which involve SPN, ARX, Feistel, and LFSR-based ciphers implemented for both FPGAs and ASICs. We demonstrate the real-world applicability of HAWKEYE by evaluating it on OpenTitan's Earl Grey chip, an open-source secure micro-controller design. HAWKEYE locates all major cryptographic primitives present in the netlist comprising 424341 gates in 44.3 seconds.
Almost optimal succinct arguments for Boolean circuit on RAM
The significance of succinct zero-knowledge proofs has increased considerably in recent times. However, one of the major challenges hindering the prover's efficiency arises when dealing with Boolean circuits. In particular, the conversion of each bit into a finite field element incurs a blow-up of more than 100x in terms of both memory usage and computation time.
This work focuses on data-parallel Boolean circuits that contain numerous identical sub-circuits. These circuits are widely used in real-world applications, such as proving a large number of hash-preimages in zkEVM and zkBridge. We develop a method for constructing succinct arguments with $2^{-\lambda}$ soundness error and $O(\omega(1)\frac{N}{\log{N}} \log{\log{N}})$ RAM operations, or $O(\frac{N}{\log{N}}\log\log N)$ finite field additions, along with a negligible number of finite field multiplications.
Our approach is based on using the GKR protocol to obtain the succinct argument.
Novel approximations of elementary functions in zero-knowledge proofs
In this paper, we study the computation of complex mathematical functions in statements executed on top of zero-knowledge proofs (ZKP); these functions may include roots, exponentials, logarithms, trigonometric functions, etc. While existing approaches to these functions in privacy-preserving computations (and sometimes also in general-purpose processors) have relied on polynomial approximation, more powerful methods are available for ZKP. We note that in ZKP, all algebraic functions are exactly computable. Recognizing this, we proceed to approximate transcendental functions with algebraic functions. We develop methods of approximation, instantiate them on a number of common transcendental functions, and benchmark their precision and efficiency in comparison with the best polynomial approximations.
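The observation that algebraic functions are exactly computable in ZKP boils down to verifying a relation instead of evaluating the function: the prover supplies the value as a witness and the circuit checks a polynomial constraint. A toy illustration for the square root (with toy modular arithmetic; real systems work over a large prime field):

```python
# Outside a proof system one computes sqrt(x); inside one, the prover supplies
# y as a witness and the circuit only checks the algebraic relation y^2 = x.
p = 2 ** 61 - 1                       # a Mersenne prime as a toy field modulus

def check_sqrt(x, y):                 # one multiplication gate: y * y - x == 0
    return (y * y - x) % p == 0

x = (1234 * 1234) % p
assert check_sqrt(x, 1234)            # honest witness passes
assert not check_sqrt(x, 1235)        # wrong witness fails
```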
Ascon-Keccak AEAD Algorithm
The Ascon specification defines, among others, an encryption scheme offering authenticated encryption with associated data (AEAD), based on a duplex mode of a sponge. It is the first such algorithm selected and about to be standardized by NIST.
The sponge size is comparatively small, 320 bits, as expected for lightweight cryptography, which limits the strength of the defined AEAD algorithm to 128 bits. Although the definition of the Ascon AEAD algorithm integrates with the associated sponge, it is mathematically not bound to exactly this sponge function. Thus, the Ascon AEAD specification can be used with a different sponge and still operate as defined by the Ascon authors.
This specification defines the Ascon-Keccak AEAD algorithm, which replaces the Ascon sponge with the Keccak sponge, leaving the Ascon AEAD algorithm unchanged. The selected parameters for Ascon-Keccak AEAD offer two algorithm strengths: Ascon-Keccak 256 with a classic security strength of 256 bits and a quantum security strength of 128 bits, and Ascon-Keccak 512 with a classic security strength of 512 bits and a quantum security strength of 256 bits. The selected parameters for Ascon-Keccak 256 offer significantly higher performance on 64-bit architectures than Ascon-128 and Ascon-128a, and the performance of Ascon-Keccak 512 is on par with Ascon-128. Yet, with the Keccak sponge size of 1600 bits, Ascon-Keccak can no longer be considered a lightweight cryptographic algorithm. A reference implementation of the algorithm is provided as referenced in the document.
Speeding up Preimage and Key-Recovery Attacks with Highly Biased Differential-Linear Approximations
We present a framework for speeding up the search for preimages of candidate one-way functions based on highly biased differential-linear distinguishers. It is naturally applicable to preimage attacks on hash functions, and a variant of this framework applied to keyed functions leads to accelerated key-recovery attacks. Interestingly, our technique is able to exploit related-key differential-linear distinguishers in the single-key model without querying the target encryption oracle with unknown but related keys. This is in essence similar to how the key search for DES is sped up using its well-known complementation property, and it calls for caution from designers building primitives meant to be secure in the single-key setting without a thorough cryptanalysis in the related-key model. We apply the method to the sponge-based hash function Ascon-HASH, the XOFs XOEsch and Ascon-XOF, and the AEAD scheme Schwaemm, among others, obtaining accelerated preimage or key-recovery attacks. Note that all the differential-linear distinguishers employed in this work are highly biased and thus can be experimentally verified.
Consecutive Adaptor Signature Scheme: From Two-Party to N-Party Settings
Adaptor signatures have attracted attention as a tool for addressing scalability and interoperability issues in blockchain applications. They can be constructed by extending common digital signature schemes so that a signature both authenticates a message and discloses a secret witness to a specific party. At Asiacrypt 2021, Aumayr et al. formulated the two-party adaptor signature as an independent cryptographic primitive. In this study, we extend their formulation to $N$ parties, present a generic construction, and define the security notions to be satisfied. We then present a concrete construction based on Schnorr signatures and discuss its security properties.
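For intuition, here is the textbook Schnorr-based two-party adaptor flow with classroom parameters (a sketch of the folklore construction, not the paper's $N$-party scheme): the pre-signature verifies against the statement $Y$, adapting it with the witness $y$ yields an ordinary Schnorr signature, and anyone who sees both values extracts $y$.

```python
import hashlib

p, q, g = 2039, 1019, 4          # toy Schnorr group: g generates the order-q subgroup

def H(*vals):                    # Fiat-Shamir challenge (toy)
    data = "|".join(map(str, vals)).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

x, X = 123, pow(g, 123, p)       # signer's key pair
y, Y = 456, pow(g, 456, p)       # adaptor witness y and statement Y = g^y

r = 789                          # nonce (fixed here only for readability)
R = pow(g, r, p)
c = H(R * Y % p, X, "msg")       # challenge commits to the combined nonce R*Y
s_pre = (r + c * x) % q          # pre-signature: checkable w.r.t. Y, not yet valid

s = (s_pre + y) % q              # adapting with y gives a valid Schnorr signature
assert pow(g, s, p) == (R * Y % p) * pow(X, c, p) % p

assert (s - s_pre) % q == y      # seeing both values discloses the witness
```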
Heuristic Ideal Obfuscation Based on Evasive LWR
This paper introduces a heuristic ideal obfuscation scheme grounded in lattice problems, which differs from that proposed by Jain, Lin, and Luo ([JLLW23], CRYPTO 2023). The approach in this paper follows a methodology akin to that of Brakerski, Dottling, Garg, and Malavolta ([BDGM20], EUROCRYPT 2020) for building indistinguishability obfuscation (iO). The proposal is achieved by leveraging a variant of learning with rounding (LWR) to build linearly homomorphic encryption (LHE) and employing {\em Evasive LWR} to construct multilinear maps. Initially, we reprove the hardness of LWR using the prime number theorem and the fixed-point theorem, showing that the statistical distance between $\lfloor As\rfloor_p$ and $\lfloor u\rfloor_p$ does not exceed $\exp\left(-\frac{n\log_2n\ln p}{\sqrt{5}}\right)$ when the parameter $q>2^{n}p$. Additionally, we provide definitions for {\em Evasive LWR} and {\em composite homomorphic pseudorandom functions} (cHPRF), and based on these, we construct multilinear maps, thereby establishing the ideal obfuscation scheme proposed in this paper.
Securing the Future of GenAI: Policy and Technology
The rise of Generative AI (GenAI) brings about transformative potential across sectors, but its dual-use nature also amplifies risks. Governments globally are grappling with the challenge of regulating GenAI, balancing innovation against safety. China, the United States (US), and the European Union (EU) are at the forefront with initiatives like the Management of Algorithmic Recommendations, the Executive Order, and the AI Act, respectively. However, the rapid evolution of GenAI capabilities often outpaces the development of comprehensive safety measures, creating a gap between regulatory needs and technical advancements.
A workshop co-organized by Google, the University of Wisconsin-Madison (UW-Madison), and Stanford University aimed to bridge this gap between GenAI policy and technology. The diverse stakeholders of the GenAI space, from the public and governments to academia and industry, make any safety measures under consideration more complex, as both technical feasibility and regulatory guidance must be realized. This paper summarizes the discussions of the workshop, which addressed questions such as: How can regulation be designed without hindering technological progress? How can technology evolve to meet regulatory standards? The interplay between legislation and technology is a very vast topic, and we do not claim that this paper is a comprehensive treatment of it. This paper is meant to capture findings from the workshop and, hopefully, to guide discussion on the topic.
Mangrove: A Scalable Framework for Folding-based SNARKs
We present a framework for building efficient folding-based SNARKs. First we develop a new "uniformizing" compiler for NP statements that converts any poly-time computation to a sequence of identical simple steps. The resulting uniform computation is especially well-suited to be processed by a folding-based IVC scheme. Second, we develop two optimizations to folding-based IVC. The first reduces the recursive overhead of the IVC by restructuring the relation to which folding is applied. The second employs a "commit-and-fold" strategy to further simplify the relation. Together, these optimizations result in a folding-based SNARK that has a number of attractive features. First, the scheme uses a constant-size transparent common reference string (CRS). Second, the prover
(i) has a low memory footprint,
(ii) makes only two passes over the data,
(iii) is highly parallelizable, and
(iv) is concretely efficient.
Microbenchmarks indicate that proving time is competitive with leading monolithic SNARKs, and significantly faster than other streaming SNARKs. For $2^{24}$ ($2^{32}$) gates, the Mangrove prover is estimated to take $2$ minutes ($8$ hours) with peak memory usage approximately $390$ MB ($800$ MB) on a laptop.
Simulation-Extractable KZG Polynomial Commitments and Applications to HyperPlonk
HyperPlonk is a recent SNARK proposal (Eurocrypt'23) that features a linear-time prover and supports custom gates of larger degree than Plonk. For the time being, its instantiations are only proven to be knowledge-sound (meaning that soundness is only guaranteed when the prover runs in isolation) while many applications motivate the stronger notion of simulation-extractability (SE). Unfortunately, the most efficient SE compilers are not immediately applicable to multivariate polynomial interactive oracle proofs. To address this problem, we provide an instantiation of HyperPlonk for which we can prove simulation-extractability in a strong sense. As a crucial building block, we describe KZG-based commitments to multivariate polynomials that also provide simulation-extractability while remaining as efficient as malleable ones. Our proofs stand in the combined algebraic group and random oracle model and ensure straight-line extractability (i.e., without rewinding).
Practical q-IND-CPA-D-Secure Approximate Homomorphic Encryption
At Eurocrypt $2021$, Li and Micciancio demonstrated that the IND-CPA notion of security is not sufficient to cover the passive security of approximate homomorphic encryption schemes, by outlining a key recovery attack against the CKKS scheme (Cheon, Kim, Kim, Song, Asiacrypt $2017$). They proposed the notion of $q$-IND-CPA-D security, which allows an adversary to make $q$ calls to a restricted decryption oracle. Li and Micciancio left achieving $q$-IND-CPA-D security as an open problem, but proposed two approaches: noise flooding and an exact version of CKKS.
The first approach was addressed by Li, Micciancio, Schultz and Sorrell (Crypto 2022), but leads to substantial efficiency loss.
In this work, we look at the second approach.
We define $(\delta, r)$-exact CKKS, a version of CKKS that returns exact results on all except the least $r$ significant bits with (high) probability $\delta$, based on bounds on the noise. We prove that the advantage of a $q$-IND-CPA-D attacker against $(\delta, r)$-exact CKKS is determined by the failure probability of those bounds. We conduct a tight average-case and implementation-specific noise analysis of all elementary operations in CKKS, as implemented in the Lattigo library, including the bootstrapping operation. We propose bounds that have small enough failure probability for the advantage of a $q$-IND-CPA-D attacker against $(\delta,r)$-exact CKKS to become smaller than $2^{-128}$, while the parameter sets needed remain practical. We furthermore present an estimator tool that combines the bounds on basic operations and returns tight noise estimates, even for large circuits. We validate our bounds by showcasing experimental results on different iterative algorithms, homomorphic encoding, decoding and bootstrapping.
The Algebraic Freelunch: Efficient Gröbner Basis Attacks Against Arithmetization-Oriented Primitives
In this paper, we present a new type of algebraic attack that applies to many recent arithmetization-oriented families of permutations, such as those used in Griffin, Anemoi, ArionHash, and XHash8, whose security relies on the hardness of the constrained-input constrained-output (CICO) problem. We introduce the FreeLunch approach: the monomial ordering is chosen so that the natural polynomial system encoding the CICO problem is already a Gröbner basis. In addition, we present a new dedicated resolution algorithm for FreeLunch systems with complexity lower than that of applicable state-of-the-art FGLM algorithms.
We show that the FreeLunch approach challenges the security of full-round instances of Anemoi, Arion and Griffin. We confirm these theoretical results with experiments on those three permutations. In particular, using the FreeLunch attack combined with a new technique to bypass 3 rounds of Griffin, we recover a CICO solution for 7 out of 10 rounds of Griffin in less than four hours on one core of an AMD EPYC 7352 (2.3 GHz).
Robust Additive Randomized Encodings from IO and Pseudo-Non-linear Codes
Additive randomized encodings (ARE), introduced by Halevi, Ishai, Kushilevitz, and Rabin (CRYPTO 2023), reduce the computation of a $k$-party function $f(x_1, \ldots, x_k)$ to locally computing encodings $\hat{x}_i$ of each input $x_i$ and then adding them together over some Abelian group into an output encoding $\hat{y} = \sum_i \hat{x}_i$, which reveals nothing but the result. In robust ARE (RARE), the sum of any subset of the $\hat{x}_i$ reveals only the residual function obtained by restricting the corresponding inputs. The appeal of (R)ARE comes from the simplicity of the online part of the computation, involving only addition, which yields for instance non-interactive multi-party computation in the shuffle model, where messages from different parties are anonymously shuffled. Halevi, Ishai, Kushilevitz, and Rabin constructed ARE from standard assumptions and RARE in the ideal obfuscation model, leaving open the question of whether RARE can be constructed in the plain model.
We construct RARE in the plain model from indistinguishability obfuscation, which is necessary, and a new primitive that we call pseudo-non-linear codes. We provide two constructions of this primitive assuming either Learning with Errors or Decisional Diffie-Hellman. A bonus feature of our construction is that it is online-succinct: encodings $\hat{x}_i$ can be decomposed into offline parts $\hat{z}_i$ that can be sent directly to the evaluator and short online parts $\hat{g}_i$ that are added together.
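A folklore-style toy ARE for the AND function conveys the interface (this instance is not from the paper): each party encodes locally with private randomness, the evaluator only ever sees the group sum, and that sum reveals just $f(x)$.

```python
import secrets

# Toy ARE for f(x_1,...,x_k) = AND of input bits, over the group Z_p: a party
# encodes 0 if its bit is 1, and a fresh random nonzero element otherwise.
# The sum is 0 iff all bits are 1 (up to ~1/p error) and close to uniform
# otherwise, so it reveals the AND but not which parties held a 0.
p = (1 << 127) - 1                       # a Mersenne prime as the group order

def encode(bit):                         # local computation with private randomness
    return 0 if bit == 1 else secrets.randbelow(p - 1) + 1

def evaluate(encodings):                 # the only online work: group addition
    return sum(encodings) % p

def decode(y_hat):
    return int(y_hat == 0)

assert decode(evaluate([encode(b) for b in [1, 1, 1]])) == 1
assert decode(evaluate([encode(b) for b in [1, 0, 1]])) == 0
```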
Breaking Indistinguishability with Transfer Learning: A First Look at SPECK32/64 Lightweight Block Ciphers
In this research, we introduce MIND-Crypt, a novel attack framework that uses deep learning (DL) and transfer learning (TL) to challenge the indistinguishability of block ciphers, specifically the SPECK32/64 encryption algorithm in cipher block chaining (CBC) mode, against known-plaintext attacks (KPA). Our methodology includes training a DL model on ciphertexts of two messages encrypted using the same key. The selected messages have the same byte length and differ by only one bit at the binary level. The DL model employs a residual network architecture. For TL, we use the trained DL model as a feature extractor, and these features are then used to train a shallow machine learning model, such as XGBoost. This dual strategy aims to distinguish between ciphertexts of the two encrypted messages, addressing traditional cryptanalysis challenges.
Our findings demonstrate that the deep learning model achieves an accuracy of approximately 99% under consistent cryptographic conditions (Same Key or Rounds) with the SPECK32/64 cipher. However, performance degrades to random guessing levels (50%) when tested with ciphertext generated from different keys or different encryption rounds of SPECK32/64. To enhance the results, the DL model requires retraining with different keys or encryption rounds using larger datasets ($10^{7}$ samples). To overcome this limitation, we implement TL, achieving an accuracy of about 53% with just 10,000 samples, which is better than random guessing. Further training with 580,000 samples increases accuracy to nearly 99%, showing a substantial reduction in data requirements by over 94%. This shows that an attacker can utilize machine learning models to break indistinguishability by accessing pairs of plaintexts and their corresponding ciphertexts encrypted with the same key, without directly interacting with the communicating parties.
Privacy-preserving Anti-Money Laundering using Secure Multi-Party Computation
Money laundering is a serious financial crime in which criminals aim to conceal the illegal source of their money via a series of transactions. Although banks have an obligation to monitor transactions, it is difficult to track these illicit money flows since they typically span multiple banks, which cannot share this information due to privacy concerns. We present secure risk propagation, a novel efficient algorithm for money laundering detection across banks that does not violate privacy constraints. In this algorithm, each account is assigned a risk score, which is then propagated through the transaction network. In this article we present two results. First, using data from a large Dutch bank, we show that it is possible to detect unusual activity using this model, with cash ratio as the risk score. At a recall of 20%, propagating the risk scores improves the precision from 15% to 40%, significantly reducing the number of false positives. Second, we present a privacy-preserving solution for securely performing risk propagation over a joint, inter-bank transaction network. To achieve this, we use secure multi-party computation (MPC) techniques, which are particularly well-suited to the risk propagation algorithm due to its structural simplicity. We also show that the running time of this secure variant scales linearly in the number of accounts and transactions. For 200,000 transactions, two iterations of the secure algorithm between three virtual parties run within three hours on a consumer-grade server.
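A plaintext sketch of the kind of linear risk-propagation update described above (the exact update rule in the paper may differ; the point is the structural simplicity that makes the algorithm MPC-friendly):

```python
import numpy as np

def propagate(risk, W, alpha=0.5, iterations=2):
    """One possible risk-propagation update: an account keeps part of its own
    score and mixes in the scores of accounts that send money to it.
    W[i][j] is the (normalized) flow of funds from account i to account j."""
    risk = np.asarray(risk, dtype=float)
    for _ in range(iterations):
        risk = (1 - alpha) * risk + alpha * (W.T @ risk)
    return risk

W = np.array([[0, 1, 0],      # account 0 sends everything to account 1
              [0, 0, 1],      # account 1 sends everything to account 2
              [0, 0, 0]])     # account 2 sends nothing
print(propagate([0.9, 0.0, 0.0], W))   # risk at account 0 spreads along the chain
```

Because each iteration is a fixed linear combination of secret values, it maps directly onto additions and scalar multiplications of secret shares, which is what makes the secure variant scale linearly.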
On the parallelization of square-root Vélu's formulas
A primary challenge in isogeny-based cryptography lies in the substantial computational cost associated with computing and evaluating prime-degree isogenies. This computation traditionally relied on Vélu's formulas, an approach with time complexity linear in the degree, which was later improved to square-root complexity by Bernstein, De Feo, Leroux, and Smith. The improved square-root Vélu's formulas exhibit a degree of parallelizability that has not been exploited in major implementations. In this study, we introduce a theoretical framework for parallelizing isogeny computations and provide a proof-of-concept implementation in C with OpenMP. While the effectiveness of the parallelization exhibits diminishing returns in the number of cores, we still obtain strong results when using a small number of cores. Concretely, our implementation shows that for large degrees it is easy to achieve speedup factors of up to $1.74$, $2.54$, and $3.44$ for two, four, and eight cores, respectively.
How (Not) to Simulate PLONK
PLONK is a zk-SNARK system by Gabizon, Williamson, and Ciobotaru with proofs of constant size (0.5 KB) and sublinear verification time. Its setup is circuit-independent supporting proofs of arbitrary statements up to a certain size bound.
Although PLONK is deployed in several real-world applications, its zero-knowledge property had only been argued informally. Consequently, we were able to find and fix a vulnerability in its original specification, leading to an update of PLONK in eprint version 20220629:105924.
In this work, we construct a simulator for the patched version of PLONK and prove that it achieves statistical zero knowledge. Furthermore, we give an attack on the previous version of PLONK showing that it does not even satisfy the weaker notion of (statistical) witness indistinguishability.
Bare PAKE: Universally Composable Key Exchange from just Passwords
In the past three decades, an impressive body of knowledge has been built around secure and private password authentication. In particular, secure password-authenticated key exchange (PAKE) protocols require only minimal overhead over a classical Diffie-Hellman key exchange. PAKEs are also known to fulfill strong composable security guarantees that capture many password-specific concerns such as password correlations or password mistyping, to name only a few. However, to enjoy both round-optimality and strong security, applications of PAKE protocols must provide unique session and participant identifiers. If such identifiers are not readily available, they must be agreed upon at the cost of additional communication flows, a fact which has been met with incomprehension among practitioners and which has hindered the adoption of provably secure password authentication in practice.
In this work, we resolve this issue by proposing a new paradigm for truly password-only yet securely composable PAKE, called bare PAKE. We formally prove that two prominent PAKE protocols, namely CPace and EKE, can be cast as bare PAKEs and hence do not require pre-agreement of anything other than a password. Our bare PAKE modeling further allows us to investigate a novel "reusability" property of PAKEs, i.e., whether $n^2$ pairwise keys can be exchanged from only $n$ messages, just as the Diffie-Hellman non-interactive key exchange can do in a public-key setting. As a side contribution, this add-on property of bare PAKEs leads us to observe that some previous PAKE constructions relied on unnecessarily strong, "reusable" building blocks. By showing that "non-reusable" tools suffice for standard PAKE, we open a new path towards round-optimal post-quantum secure password-authenticated key exchange.
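The reusability benchmark referenced above is the Diffie-Hellman non-interactive key exchange: $n$ broadcast messages yield a key for each of the $n(n-1)/2$ pairs. A classroom-parameter sketch of that target behaviour (illustration only, not a PAKE):

```python
p, g = 2039, 4                 # toy Schnorr group (order-1019 subgroup of Z_p^*)

sk = {"A": 11, "B": 22, "C": 33}                     # per-party secrets
broadcast = {u: pow(g, s, p) for u, s in sk.items()} # one message per party

def pairwise_key(me, peer):                          # uses only me's own secret
    return pow(broadcast[peer], sk[me], p)           # equals g^(x_me * x_peer)

# n broadcast messages yield a shared key for every one of the n(n-1)/2 pairs:
assert pairwise_key("A", "B") == pairwise_key("B", "A")
assert pairwise_key("A", "C") == pairwise_key("C", "A")
```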
Bootstrapping Bits with CKKS
The Cheon-Kim-Kim-Song (CKKS) fully homomorphic encryption scheme is designed to efficiently perform computations on real numbers in an encrypted state. Recently, Drucker et al. [J. Cryptol.] proposed an efficient strategy to use CKKS in a black-box manner to perform computations on binary data.
In this work, we introduce several CKKS bootstrapping algorithms designed specifically for ciphertexts encoding binary data. Crucially, the new CKKS bootstrapping algorithms make it possible to bootstrap ciphertexts containing the binary data in the most significant bits. First, this allows decreasing the moduli used in bootstrapping, saving a larger share of the modulus budget for non-bootstrapping operations.
In particular, we obtain full-slot bootstrapping in ring degree $2^{14}$ for the first time. Second, the ciphertext format is compatible with the one used in the DM/CGGI fully homomorphic encryption schemes. Interestingly, we may combine our CKKS bootstrapping algorithms for bits with the fast ring packing technique from Bae et al. [CRYPTO'23]. This leads to a new bootstrapping algorithm for DM/CGGI that outperforms the state-of-the-art approaches when the number of bootstraps to be performed simultaneously is in the low hundreds.
Public Randomness Extraction with Ephemeral Roles and Worst-Case Corruptions
We distill a simple information-theoretic model for randomness extraction, motivated by the task of generating publicly verifiable randomness in blockchain settings, which is closely related to You-Only-Speak-Once (YOSO) protocols (CRYPTO 2021). With the goal of avoiding denial-of-service attacks, parties speak only once and in sequence by broadcasting a public value and forwarding secret values to future parties. Additionally, an unbounded adversary can corrupt any chosen subset of at most $t$ parties. In contrast, existing YOSO protocols only handle random corruptions. As a notable example, considering worst-case corruptions allows us to reduce trust in the role assignment mechanism, which is assumed to be perfectly random in YOSO.
We study the maximum corruption threshold $t$ which allows for unconditional randomness extraction in our model:
- With respect to feasibility, we give protocols for $t$ corruptions and $n=6t+1$ or $n=5t$ parties depending on whether the adversary learns secret values forwarded to corrupted parties immediately once they are sent or only once the corrupted party is executed, respectively. Both settings are motivated by practical implementations of secret value forwarding. To design such protocols, we go beyond the committee-based approach that is sufficient for random corruptions in YOSO but turns out to be sub-optimal for chosen corruptions.
- To complement our protocols, we show that low-error randomness extraction is impossible with corruption threshold $t$ and $n\leq 4t$ in the stronger adversarial network model.
Eureka: A General Framework for Black-box Differential Privacy Estimators
Differential privacy (DP) is a key tool in privacy-preserving data analysis. Yet it remains challenging for non-privacy-experts to prove the DP of their algorithms. We propose a methodology for domain experts with limited data privacy background to empirically estimate the privacy of an arbitrary mechanism. Our Eureka moment is a new link---which we prove---between the problems of DP parameter-estimation and Bayes optimal classifiers in ML, which we believe can be of independent interest. Our estimator uses this link to achieve two desirable properties: (1) black-box, i.e., it does not require knowledge of the underlying mechanism, and (2) it has a theoretically-proven accuracy, depending on the underlying classifier used, allowing plug-and-play use of different classifiers.
More concretely, motivated by the impossibility of the above task for unrestricted input domains (which we prove), we introduce a natural, application-inspired relaxation of DP which we term relative DP. Intuitively, relative DP defines a mechanism's privacy relative to an input set $T$, circumventing the above impossibility when $T$ is finite. Importantly, it preserves the key intuitive privacy guarantee of DP while enjoying a number of desirable DP properties---scalability, composition, and robustness to post-processing. We then devise a black-box poly-time $(\epsilon,\delta)$-relative DP estimator for any poly-size $T$---the first privacy estimator to support mechanisms with large output spaces while having tight accuracy bounds. As a result of independent interest, we generalize our theory to develop the first Distributional Differential Privacy (DDP) estimator.
We benchmark our estimator in a proof-of-concept implementation. First, using kNN as the classifier we show that our method (1) produces a tight, analytically computed $(\epsilon, \delta)$-DP trade-off of low-dimensional Laplace and Gaussian mechanisms---the first to do so, (2) accurately estimates the privacy spectrum of DDP mechanisms, and (3) can verify a DP mechanism's implementations, e.g., Sparse Vector Technique, Noisy Histogram, and Noisy max. Our implementation and experiments demonstrate the potential of our framework, and highlight its computational bottlenecks in estimating DP, e.g., in terms of the size of $\delta$ and the data dimensionality. Our second, neural-network-based instantiation makes a first step in showing that our method can be extended to mechanisms with high-dimensional outputs.
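The classifier link can be seen in a few lines: against an $\epsilon$-DP mechanism, a Bayes optimal distinguisher of $M(D)$ versus $M(D')$ on a balanced sample has accuracy at most $e^\epsilon/(1+e^\epsilon)$, so measured accuracy $a$ certifies $\epsilon \ge \ln(a/(1-a))$. A toy version with the Laplace mechanism, using the analytically known Bayes rule in place of the paper's trained classifiers:

```python
import numpy as np

rng = np.random.default_rng(0)
eps_true, n = 1.0, 200_000
d, d_prime = 0.0, 1.0                            # neighbouring query answers
samples0 = d + rng.laplace(scale=1 / eps_true, size=n)
samples1 = d_prime + rng.laplace(scale=1 / eps_true, size=n)

threshold = (d + d_prime) / 2                    # Bayes rule for this pair
accuracy = 0.5 * ((samples0 < threshold).mean() + (samples1 >= threshold).mean())
eps_lower_bound = np.log(accuracy / (1 - accuracy))
print(eps_lower_bound)   # ~0.83: a certified lower bound for the true eps = 1
```

The gap between the bound and the true $\epsilon$ reflects that pure accuracy is a coarse statistic; a tight $(\epsilon,\delta)$ analysis of the classifier's error trade-off, as in the paper, closes it.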
Aegis: A Lightning Fast Privacy-preserving Machine Learning Platform against Malicious Adversaries
Privacy-preserving machine learning (PPML) techniques have gained significant popularity in recent years and have been widely adopted in real-world security-sensitive machine learning scenarios, e.g., medical care and finance. In this work, we introduce $\mathsf{Aegis}$~-- a high-performance PPML platform built on top of a maliciously secure 3-PC framework over the ring $\mathbb{Z}_{2^\ell}$. In particular, we propose a novel 2-round secure comparison (a.k.a., sign-bit extraction) protocol in the preprocessing model. The communication of its semi-honest version is only 25% of that of the state-of-the-art (SOTA) constant-round semi-honest comparison protocol by Zhou et al. (S&P 2023); the communication and round complexity of its malicious version are approximately 25% and 50% of those of the SOTA (BLAZE) by Patra and Suresh (NDSS 2020), for $\ell=64$.
Moreover, the overall communication of our maliciously secure inner-product protocol is merely $3\ell$ bits, a 50% reduction compared to the SOTA (Swift) by Koti et al. (USENIX 2021).
Finally, the resulting ReLU and MaxPool PPML protocols outperform the SOTA constructions by $4\times$ in the semi-honest setting and $100\times$ in the malicious setting, respectively.
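For readers unfamiliar with the setting, the toy sketch below shows only the plaintext semantics behind secure comparison over $\mathbb{Z}_{2^\ell}$ (the sign bit of $x-y$ in two's complement) together with 3-party additive sharing; the paper's actual 2-round preprocessing-model protocol is not reproduced here.

```python
# Toy illustration of the functionality behind secure comparison over
# Z_{2^l}: the sign bit of (x - y) mod 2^l, read in two's complement, tells
# whether x < y. Only the sharing semantics are sketched; the paper's
# protocol computes the sign bit without ever reconstructing x - y.
import secrets

L = 64
MOD = 1 << L

def share(v):
    """Split v into 3 additive shares over Z_{2^L}."""
    s0, s1 = secrets.randbelow(MOD), secrets.randbelow(MOD)
    return [s0, s1, (v - s0 - s1) % MOD]

def reconstruct(shares):
    return sum(shares) % MOD

def sign_bit(v):
    return (v >> (L - 1)) & 1          # MSB = sign in two's complement

x, y = 1234, 98765
# Subtraction of shares is a purely local operation for each party.
d_shares = [(a - b) % MOD for a, b in zip(share(x), share(y))]
assert sign_bit(reconstruct(d_shares)) == (1 if x < y else 0)
```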
Fast Secure Computations on Shared Polynomials and Applications to Private Set Operations
Secure multi-party computation aims to allow a set of players to compute a given function on their secret inputs without revealing any other information than the result of the computation. In this work, we focus on the design of secure multi-party protocols for shared polynomial operations. We consider the classical model where the adversary is honest-but-curious, and where the coefficients (or any secret values) are either encrypted using an additively homomorphic encryption scheme or shared using a threshold linear secret-sharing scheme. Our protocols terminate after a constant number of rounds and minimize the number of secure multiplications. In their seminal article at PKC 2006, Mohassel and Franklin proposed constant-rounds protocols for the main operations on (shared) polynomials. In this work, we improve the fan-in multiplication of nonzero polynomials, the multi-point polynomial evaluation and the polynomial interpolation (on secret points) to reach a quasi-linear complexity (instead of quadratic in Mohassel and Franklin's work) in the degree of shared input/output polynomials. Computing with shared polynomials is a core component of designing multi-party protocols for privacy-preserving operations on private sets, like private disjointness test or private set intersection. Using our new protocols, we are able to improve the complexity of such protocols and to design the first variant which always returns a correct result.
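As background for the set applications, here is a plaintext sketch of the standard polynomial encoding; the MPC protocols operate on secret-shared coefficients and use the paper's quasi-linear multi-point evaluation, which this sketch does not implement.

```python
# A set A is encoded as P_A(X) = prod_{a in A} (X - a) over a prime field,
# so s is in A iff P_A(s) = 0; common roots of two set polynomials are
# exactly the intersection. Everything here is in the clear, for intuition.
p = (1 << 61) - 1                       # a Mersenne prime

def set_polynomial(elems):
    coeffs = [1]                        # highest-degree coefficient first
    for a in elems:                     # multiply the running product by (X - a)
        coeffs = [(c1 - a * c0) % p
                  for c0, c1 in zip([0] + coeffs, coeffs + [0])]
    return coeffs

def evaluate(coeffs, x):                # Horner's rule, highest degree first
    acc = 0
    for c in coeffs:
        acc = (acc * x + c) % p
    return acc

A, B = {3, 7, 11}, {5, 7, 13}
PA = set_polynomial(A)
assert {b for b in B if evaluate(PA, b) == 0} == A & B
```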
Finding Dense Submodules with Algebraic Lattice Reduction
We prove an algebraic analogue of the Pataki-Tural lemma (Pataki-Tural, arXiv:0804.4014, 2008) -- the main tool in analysing the so-called overstretched regime of NTRU.
Our result generalizes this lemma from Euclidean lattices to modules over any number field, enabling us to view NTRU as a rank-2 module over a cyclotomic number field with a rank-1 dense submodule generated by the NTRU secret key.
For Euclidean lattices, this overstretched regime occurs for large moduli $q$ and makes it possible to detect a dense sublattice in NTRU lattices, leading to faster NTRU key recovery. We formulate an algebraic version of this event, the so-called Dense Submodule Discovery (DSD) event, and heuristically predict under which conditions it happens. To this end, we formulate an algebraic version of the Geometric Series Assumption -- a heuristic tool that describes the behaviour of algebraic lattice reduction algorithms. We verify this assumption by implementing an algebraic LLL -- an analogue of classical LLL lattice reduction that operates at the module level. Our experiments confirm the introduced heuristic, enabling us to predict the algebraic DSD event.
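For context, here is a textbook sketch of the classical LLL reduction whose module-level analogue the paper implements (rational arithmetic via Python fractions; this is not the paper's algebraic LLL):

```python
# Textbook LLL over the integers: size-reduce, then swap when the Lovász
# condition fails. Inefficient (Gram-Schmidt is recomputed) but correct.
from fractions import Fraction

def dot(u, v):
    return sum(Fraction(a) * Fraction(b) for a, b in zip(u, v))

def gram_schmidt(B):
    n = len(B)
    Bs = [[Fraction(x) for x in b] for b in B]
    mu = [[Fraction(0)] * n for _ in range(n)]
    for i in range(n):
        for j in range(i):
            mu[i][j] = dot(B[i], Bs[j]) / dot(Bs[j], Bs[j])
            Bs[i] = [x - mu[i][j] * y for x, y in zip(Bs[i], Bs[j])]
    return Bs, mu

def lll(B, delta=Fraction(3, 4)):
    B, k = [list(b) for b in B], 1
    while k < len(B):
        _, mu = gram_schmidt(B)
        for j in range(k - 1, -1, -1):           # size reduction of b_k
            q = round(mu[k][j])
            if q:
                B[k] = [x - q * y for x, y in zip(B[k], B[j])]
                _, mu = gram_schmidt(B)          # refresh coefficients
        Bs, mu = gram_schmidt(B)
        if dot(Bs[k], Bs[k]) >= (delta - mu[k][k - 1] ** 2) * dot(Bs[k - 1], Bs[k - 1]):
            k += 1                               # Lovász condition holds
        else:
            B[k - 1], B[k] = B[k], B[k - 1]      # otherwise swap and back up
            k = max(k - 1, 1)
    return B

print(lll([[201, 37], [1648, 297]]))             # yields a much shorter basis
```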
Formally verifying Kyber Episode V: Machine-checked IND-CCA security and correctness of ML-KEM in EasyCrypt
We present a formally verified proof of the correctness and IND-CCA security of ML-KEM, the Kyber-based Key Encapsulation Mechanism (KEM) undergoing standardization by NIST.
The proof is machine-checked in EasyCrypt and it includes:
1) A formalization of the correctness (decryption failure probability) and IND-CPA security of the Kyber base public-key encryption scheme, following Bos et al. at Euro S&P 2018;
2) A formalization of the relevant variant of the Fujisaki-Okamoto transform in the Random Oracle Model (ROM), which follows closely (but not exactly) Hofheinz, Hövelmanns and Kiltz at TCC 2017;
3) A proof that the IND-CCA security of the ML-KEM specification and its correctness as a KEM follows from the previous results;
4) Two formally verified implementations of ML-KEM written in Jasmin that are provably constant-time, functionally equivalent to the ML-KEM specification and, for this reason, inherit the provable security guarantees established in the previous points.
The top-level theorems give self-contained concrete bounds for the correctness and security of ML-KEM down to (a variant of) Module-LWE. We discuss how they are built modularly by leveraging various EasyCrypt features.
On the Guaranteed Number of Activations in XS-circuits
XS-circuits describe cryptographic primitives that utilize two operations on binary words of fixed length: X) bitwise modulo 2 addition and S) substitution. The words are interpreted as elements of a field of characteristic 2. In this paper, we develop a model of XS-circuits according to which several instances of a simple round circuit containing only one S operation are linked together and form a compound circuit called a cascade. S operations of a cascade are interpreted as independent round oracles. When a cascade processes a pair of different inputs, some round oracles get different queries; these oracles are said to be activated. The more activations, the higher the security guarantees against differential cryptanalysis the cascade provides. We introduce the notion of the guaranteed number of activations, that is, the minimum number of activations over all choices of the base field, round oracles and pairs of inputs. We show that the guaranteed number of activations is related to the minimum distance of the linear code associated with the cascade. It is also related to the minimum number of occurrences of units in segments of binary linear recurrence sequences whose characteristic polynomial is determined by the round circuit (a brute-force illustration of this quantity follows below). We provide an algorithm for calculating the guaranteed number of activations, and show how to use it to deal with linear activations related to linear cryptanalysis.
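The sketch below brute-forces the quantity just described, the minimum number of units (ones) in length-$t$ windows of binary linear recurrence sequences with a given characteristic polynomial; it enumerates all nonzero initial states and is not the paper's algorithm.

```python
# Minimum number of ones in the first t terms of a binary linear recurrence,
# minimised over all nonzero initial states. Brute force, for illustration.
from itertools import product

def min_units(taps, t):
    """Recurrence s[i] = XOR_{j in taps} s[i-j]; degree = max(taps)."""
    d = max(taps)
    best = None
    for init in product((0, 1), repeat=d):
        if not any(init):
            continue                   # the all-zero sequence is excluded
        s = list(init)
        while len(s) < t:
            s.append(sum(s[-j] for j in taps) % 2)
        ones = sum(s[:t])
        best = ones if best is None else min(best, ones)
    return best

# Characteristic polynomial x^4 + x + 1 (primitive): s[i] = s[i-3] XOR s[i-4]
print(min_units({3, 4}, t=8))
```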
XS-circuits in Block Ciphers
XS-circuits describe block ciphers that utilize two operations: X) bitwise modulo 2 addition of binary words and S) substitution of words using key-dependent S-boxes with possibly complicated internal structure. We propose a model of XS-circuits which, despite its simplicity, covers a rather wide range of block ciphers. In our model, several instances of a simple round circuit, which contains only one S~operation, are linked together and form a compound circuit called a cascade. S operations of a cascade are interpreted as independent round oracles. We deal with diffusion characteristics of cascades. These characteristics are related to the cryptographic strength of the corresponding block ciphers. We obtain results on invertibility, transitivity and 2-transitivity of mappings induced by round circuits and their cascades. We provide estimates on the first and second activation times, where the $i$-th activation time is the minimum number of rounds which guarantees that at least $i$ round oracles get different queries while processing two different cascade inputs. The activation times are related to differential cryptanalysis. We introduce the similarity and duality relations between round circuits. Cascades of related circuits have the same or dual diffusion characteristics. We find canonical representatives of classes of similar circuits and show that the duality between circuits is related to the duality between differential and linear attacks against the corresponding block ciphers. We discuss families of circuits with a growing number of inputs. Such families can be used to build wide-block ciphers.
Fully Secure MPC and zk-FLIOP Over Rings: New Constructions, Improvements and Extensions
We revisit the question of the overhead to achieve full security (i.e., guaranteed output delivery) in secure multiparty computation (MPC). Recent works have closed the gap between full security and semi-honest security, by introducing protocols where the parties first compute the circuit using a semi-honest protocol and then run a verification step with sublinear communication in the circuit size. However, in these works the number of interaction rounds in the verification step is also sublinear in the circuit's size. Unlike communication, the round complexity of the semi-honest execution typically grows with the circuit's depth and not its size. Hence, for large but shallow circuits, this additional number of rounds incurs a significant overhead. Motivated by this gap, we make the following contributions:
1. We present a new MPC framework to obtain full security, compatible with effectively any ring, that has an additive communication overhead of only $O(\log |C|)$, where $|C|$ is the number of multiplication gates in the circuit, and a constant number of additional rounds beyond the underlying semi-honest protocol. Our framework works with any linear secret sharing scheme and relies on a new way to utilize the machinery of zero-knowledge fully linear interactive oracle proofs (zk-FLIOP) in a black-box way. We present several instantiations of the building blocks of our compiler, from which we derive concretely efficient protocols in different settings.
2. We present extensions to the zk-FLIOP primitive for very general settings. The first one is for proving statements over potentially non-commutative rings, where the only requirement is that the ring has a large enough set where (1) every element in the set commutes with every element in the ring, and (2) the difference between any two distinct elements is invertible. Our second zk-FLIOP extension is for proving statements over Galois Rings. For these rings, we present concrete improvements on the current state-of-the-art for the case of constant-round proofs, by making use of Reverse Multiplication Friendly Embeddings (RMFEs).
The Round Complexity of Proofs in the Bounded Quantum Storage Model
The round complexity of interactive proof systems is a key question of practical and theoretical relevance in complexity theory and cryptography. Moreover, results such as QIP = QIP(3) (STOC'00) show that quantum resources significantly help in such a task.
In this work, we initiate the study of round compression of protocols in the bounded quantum storage model (BQSM). In this model, the malicious parties have bounded quantum memory and cannot store all the qubits that are transmitted in the protocol.
Our main results in this setting are the following:
1. There is a non-interactive (statistical) witness indistinguishable proof for any language in NP (and even QMA) in BQSM in the plain model. We notice that in this protocol, only the memory of the verifier is bounded.
2. Any classical proof system can be compressed into a two-message quantum proof system in the BQSM. Moreover, if the original proof system is zero-knowledge, the quantum protocol is zero-knowledge too. In this result, we assume that the prover has bounded memory.
Finally, we give evidence towards the "tightness" of our results. First, we show that NIZK in the plain model against BQS adversaries is unlikely to be achievable with standard techniques. Second, we prove that without the bounded-storage restriction there is no 2-message zero-knowledge quantum interactive proof, even under computational assumptions.
Provable security against decryption failure attacks from LWE
In a recent work, Hövelmanns, Hülsing and Majenz introduced a new security proof for the Fujisaki-Okamoto transform in the quantum-accessible random oracle model (QROM) used in post-quantum key encapsulation mechanisms. While having a smaller security loss due to decryption failures present in many constructions, it requires two new security properties of the underlying public-key encryption scheme (PKE).
In this work, we show that one of the properties, Find Failing Plaintexts - Non Generic (FFP-NG) security, is achievable using a relatively efficient LWE-based PKE that does not have perfect correctness. In particular, we show that LWE reduces to breaking FFP-NG security of the PVW scheme, when all LWE errors are discrete Gaussian distributed. The reduction has an arbitrarily small constant multiplicative loss in LWE error size. For the proof, we make use of techniques by Genise, Micciancio, Peikert and Walter to analyze marginal and conditional distributions of sums of discrete Gaussians.
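As a rough illustration of the object of study, here is a Monte Carlo estimate of the decryption-failure rate of a toy Regev-style scheme. Discrete Gaussians are approximated by rounding continuous ones, and all parameters are invented; the paper's reduction analyses exact discrete Gaussians for the PVW scheme, which this sketch does not attempt.

```python
# Decryption in Regev-style PKE fails when accumulated noise exceeds q/4;
# here the noise is modelled as an inner product of rounded Gaussian vectors.
# Toy parameters, for intuition only.
import numpy as np

rng = np.random.default_rng(1)
q, n, sigma, trials = 3329, 256, 4.0, 50000

e = np.rint(rng.normal(0, sigma, (trials, n)))   # sender noise (rounded)
r = np.rint(rng.normal(0, sigma, (trials, n)))   # receiver noise (rounded)
noise = np.abs((e * r).sum(axis=1))              # accumulated noise term

print("estimated failure rate:", (noise >= q // 4).mean())
```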
Don’t Forget Pairing-Friendly Curves with Odd Prime Embedding Degrees
Pairing-friendly curves with odd prime embedding degrees at the 128-bit security level, such as BW13-310 and BW19-286, have sparked interest in public-key cryptography owing to the small sizes of their prime fields. However, compared to mainstream pairing-friendly curves at the same security level, i.e., BN446 and BLS12-446, pairing computation on BW13-310 and BW19-286 is usually considered inefficient. In this paper we investigate high-performance software implementations of pairing computation on BW13-310 and of the corresponding building blocks used in pairing-based protocols, including hashing, group exponentiations and membership testing. First, we propose efficient explicit formulas for pairing computation on this curve. Moreover, we exploit state-of-the-art techniques to implement hashing in $\mathbb{G}_1$ and $\mathbb{G}_2$, group exponentiations and membership testing. In particular, for exponentiations in $\mathbb{G}_2$ and $\mathbb{G}_T$, we present new optimizations to speed up the computation. Our implementation results on a 64-bit processor show that the gap in pairing-computation performance between BW13-310 and BN446 (resp. BLS12-446) is only up to 4.9% (resp. 26%). More importantly, compared to BN446 and BLS12-446, BW13-310 is about 109.1%–227.3%, 100%–192.6%, 24.5%–108.5% and 68.2%–145.5% faster in terms of hashing to $\mathbb{G}_1$, exponentiations in $\mathbb{G}_1$ and $\mathbb{G}_T$, and membership testing for $\mathbb{G}_T$, respectively. These results reveal that BW13-310 would be an interesting candidate for pairing-based cryptographic protocols.
Fine-Grained Non-Interactive Key Exchange, Revisited
We revisit the construction of multiparty non-interactive key-exchange protocols with fine-grained security, which was recently studied in (Afshar et al., Eurocrypt 2023). Their work introduced a 4-party non-interactive key exchange with quadratic hardness, and proved it secure in Shoup's generic group model. This positive result was complemented with a proof that $n$-party non-interactive key exchange with superquadratic security cannot exist in Maurer's generic group model, for any $n\geq 3$. Because Shoup's model is stronger than Maurer's model, this leaves a gap between the positive and the negative result, and their work left open the problems of closing this gap and of obtaining fine-grained non-interactive key exchange without relying on idealized models.
In this work, we make significant progress on both questions. We obtain two main results:
- A 4-party non-interactive key exchange protocol with a quadratic security gap, assuming the existence of exponentially secure injective pseudorandom generators and the subexponential hardness of the computational Diffie-Hellman assumption. In addition, our scheme is conceptually simpler and can be generalized to other settings (with more parties or from other assumptions).
- Assuming the existence of non-uniformly secure injective pseudorandom generators with exponential hardness, we further show that our protocol is secure in Maurer's model, albeit with a smaller hardness gap (up to $N^{1.6}$), making progress on filling the gap between the positive and the negative result of (Afshar et al., Eurocrypt 2023). Somewhat intriguingly, proving the security of our scheme in Maurer's idealized model turns out to be significantly harder than proving its security in the standard model.
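For intuition about quadratic security gaps, the sketch below shows the classic two-party Merkle-puzzle baseline that fine-grained key exchange generalizes. This is emphatically not the paper's 4-party protocol, and all names and parameters are ad hoc.

```python
# Merkle puzzles: honest parties spend O(n) work, an eavesdropper ~n^2.
import hashlib, secrets

n = 256

def kdf(tag, x):
    return int.from_bytes(hashlib.sha256(f"{tag}:{x}".encode()).digest(), "big")

# Alice: n puzzles, each solvable in O(n); the per-puzzle ids are random so
# solving one puzzle reveals nothing about the others.
w   = [secrets.randbelow(n) for _ in range(n)]          # per-puzzle weak keys
ids = [secrets.randbelow(2**64) for _ in range(n)]
key = [secrets.randbelow(2**128) for _ in range(n)]
puzzles = [(kdf(f"hint{j}", w[j]),
            ids[j] ^ kdf(f"id{j}", w[j]),
            key[j] ^ kdf(f"key{j}", w[j])) for j in range(n)]

# Bob: solve ONE puzzle by brute force (O(n)), announce its id in the clear.
j = secrets.randbelow(n)
wj = next(g for g in range(n) if kdf(f"hint{j}", g) == puzzles[j][0])
announced_id = puzzles[j][1] ^ kdf(f"id{j}", wj)
shared       = puzzles[j][2] ^ kdf(f"key{j}", wj)

# Alice: table lookup from the announced id to the shared key.
assert shared == key[ids.index(announced_id)]
# Eve sees only the puzzles and announced_id; matching the id forces her to
# solve about n/2 puzzles at O(n) each, i.e. Theta(n^2) work in total.
```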
INDIANA - Verifying (Random) Probing Security through Indistinguishability Analysis
Despite masking being a prevalent protection against passive side-channel attacks, implementing it securely in hardware remains a manual, challenging, and error-prone process.
This paper introduces INDIANA, a comprehensive security verification tool for hardware masking. It provides a hardware verification framework, enabling a complete analysis of simulation-based security in the glitch-extended probing model, with cycle-accurate estimations for leakage probabilities in the random probing model.
Notably, INDIANA is the first framework to analyze arbitrary masked circuits in both models, even at the scale of full SPN cipher rounds (e.g., AES), while delivering exact verification results.
To attain precise and extensive verifiability, we introduce a partitionable probing distinguisher that enables rapid verification of probe tuples, outperforming state-of-the-art methods based on statistical independence. Most interestingly, our approach inherently facilitates extensions to the random probing model by leveraging Fast Fourier-Hadamard Transformations (FFTs).
Benchmark results show that INDIANA competes effectively with leading probing-model verification tools, such as maskVerif and VERICA. Notably, INDIANA is the first tool capable of providing cycle-accurate estimations of random probing leakage probabilities for large-scale masked circuits.
Hamming Weight Proofs of Proximity with One-Sided Error
We provide a systematic study of proximity proofs with one-sided error for the Hamming weight problem $\mathsf{Ham}_{\alpha}$ (the language of bit vectors with Hamming weight at least $\alpha$), surpassing previously known results for this problem. We demonstrate the usefulness of the one-sided error property in applications: no malicious party can frame an honest prover as cheating by presenting verifier randomness that leads to a rejection.
We show proofs of proximity for $\mathsf{Ham}_{\alpha}$ with one-sided error and sublinear proof length in three models (MA, PCP, IOP), where stronger models allow for smaller query complexity. For $n$-bit input vectors, in terms of input query complexity, our MA proof makes $O(\log n)$ queries, the PCP makes $O(\log\log n)$ queries, and the IOP makes a single input query. The prover in all of our protocols runs in expected quasi-linear time. Additionally, we show that any perfectly complete IP of proximity for $\mathsf{Ham}_{\alpha}$ with input query complexity $n^{1-\epsilon}$ has proof length $\Omega(\log n)$.
Furthermore, we study PCPs of proximity where the verifier is restricted to making a single input query (SIQ). We show that any SIQ-PCP for $\mathsf{Ham}_{\alpha}$ must have a linear proof length, and complement this by presenting a SIQ-PCP with proof length $n+o(n)$.
As an application, we provide new methods that transform PCPs (and IOPs) for arbitrary languages with nonzero completeness error into PCPs (and IOPs) that exhibit perfect completeness. These transformations achieve parameters previously unattained.
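To see what one-sided error means operationally, consider the naive baseline below (with linear proof length, unlike the paper's sublinear protocols): the verifier never rejects an honest prover, so no choice of verifier randomness can frame the prover as cheating.

```python
# Naive one-sided-error check for Ham_alpha: the prover lists positions it
# claims are 1; the verifier checks there are enough of them and probes a
# few at random. Honest provers are accepted with probability 1 (perfect
# completeness); far-from-the-language inputs are rejected w.h.p.
import random

def prove(x):
    return [i for i, b in enumerate(x) if b == 1]

def verify(x, proof, alpha, k=32):
    claimed = sorted(set(proof))
    if len(claimed) < alpha * len(x):
        return False
    return all(x[i] == 1 for i in random.sample(claimed, min(k, len(claimed))))

x = [1] * 60 + [0] * 40
assert verify(x, prove(x), alpha=0.5)    # completeness holds for every seed
```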
Tight Characterizations for Preprocessing against Cryptographic Salting
Cryptography often considers the strongest yet plausible attacks in the real world. Preprocessing (a.k.a. non-uniform attack) plays an important role in both theory and practice: an efficient online attacker can take advantage of advice prepared by a time-consuming preprocessing stage.
Salting is a heuristic strategy to counter preprocessing attacks by feeding a small amount of randomness to the cryptographic primitive.
We present general and tight characterizations of preprocessing against cryptographic salting, with upper bounds matching the advantages of the most intuitive attack.
Our result quantitatively strengthens the previous work by Coretti, Dodis, Guo, and Steinberger (EUROCRYPT'18).
Our proof exploits a novel connection between the non-uniform security of salted games and direct product theorems for memoryless algorithms.
For quantum adversaries, we give similar characterizations for property finding games, resolving an open problem of the quantum non-uniform security of salted collision resistant hash by Chung, Guo, Liu, and Qian (FOCS'20).
Our proof extends the compressed oracle framework of Zhandry (CRYPTO'19) to prove quantum strong direct product theorems for the average-case hardness of property finding games.
How (not) to Build Quantum PKE in Minicrypt
The seminal work by Impagliazzo and Rudich (STOC'89) demonstrated the impossibility of constructing classical public key encryption (PKE) from one-way functions (OWF) in a black-box manner. However, the question remains: can quantum PKE (QPKE) be constructed from quantumly secure OWF?
A recent line of work has shown that it is indeed possible to build QPKE from OWF, but with one caveat --- they rely on quantum public keys, which cannot be authenticated and reused. In this work, we re-examine the possibility of perfectly complete QPKE in the quantum random oracle model (QROM), where OWF exists.
Our first main result: QPKE with classical public keys, secret keys and ciphertexts does not exist in the QROM, if the key generation only makes classical queries.
Therefore, a necessary condition for constructing such QPKE from OWF is to have a key generation that is classically ``un-simulatable''. Previous discussions (Austrin et al., CRYPTO'22) on the impossibility of QPKE from OWF rely on a seemingly strong conjecture. Our work takes a significant step towards a complete and unconditional quantization of Impagliazzo and Rudich's results.
Our second main result extends the impossibility to quantum public keys: QPKE with quantum public keys, classical secret keys and ciphertexts does not exist in the QROM, if the key generation only makes classical queries and the quantum public key is either pure or ``efficiently clonable''.
The result is tight in light of all existing QPKE constructions, and it further gives evidence as to why existing QPKEs lose reusability.
To achieve these results, we use a novel argument based on conditional mutual information and quantum Markov chains by Fawzi and Renner (Communications in Mathematical Physics). We believe the techniques used in this work will find further use in separations in quantum cryptography and complexity.
Multivariate Multi-Polynomial Commitment and its Applications
We introduce and formally define Multivariate Multi-Polynomial (MMP) commitment, a commitment scheme on multiple multivariate polynomials, and illustrate the concept with an efficient construction, which enjoys constant commitment size and logarithmic proof size. We further enhance our MMP scheme to achieve the zero-knowledge property.
Additionally, combined with a novel zero-knowledge range proof for Pedersen subvector commitment, we present a Zero-Knowledge Range Proof (ZKRP) for MMP commitment.
We present two sample applications. First, our MMP commitment can be used for efficient aggregation of SNARKs based on multivariate polynomial commitments. As a showcase, we apply MMP commitment to HyperPlonk and refer to this variant of HyperPlonk as aHyperPlonk. For $k$ instances, each with circuit size $n$, the communication and verification complexity is reduced from $O(k \cdot \log n)$ to $O(\log k + \log n)$, while the prover complexity remains the same. Second, we propose a novel zero-knowledge proof for vehicle GPS traces based on the ZKRP for MMP, which allows vehicle owners to prove whether or not a vehicle has passed through some location during a specific time interval.
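As background, here is a minimal sketch of the Pedersen commitment that such schemes build on, with toy parameters chosen for readability; the paper's subvector and MMP commitments, which commit to entire coefficient vectors of multivariate polynomials, are substantially more involved.

```python
# Pedersen commitment C = g^m * h^r in a prime-order group: hiding thanks
# to the random r, binding under the discrete-log assumption.
# Toy parameters (p = 607, subgroup order q = 101); NOT secure sizes.
import secrets

p, q = 607, 101        # q | p - 1, both prime
g, h = 122, 450        # independent generators of the order-q subgroup

def commit(m, r=None):
    r = secrets.randbelow(q) if r is None else r
    return (pow(g, m % q, p) * pow(h, r, p)) % p, r

C, r = commit(42)
assert C == commit(42, r)[0]   # opening (m, r) reproduces the commitment
```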
New Solutions to Delsarte's Dual Linear Programs
Understanding the maximum size of a code with a given minimum distance is a major question in computer science and discrete mathematics. The most fruitful approach for finding asymptotic bounds on such codes is Delsarte's theory of association schemes. With this approach, Delsarte constructs a linear program whose maximum value is an upper bound on the maximum size of a code with a given minimum distance. Bounding this value can be done by finding solutions to the corresponding dual linear program. Delsarte's theory is very general and goes well beyond binary codes. In this work, we provide universal bounds in the framework of association schemes that generalize the Elias-Bassalygo bound and can be applied to any association scheme constructed from a distance function. These bounds are obtained by constructing new solutions to Delsarte's dual linear program. We instantiate these results to recover known bounds for $q$-ary codes and for constant-weight binary codes. Our other contribution is to recover, for essentially any $Q$-polynomial scheme, MRRW-type solutions to Delsarte's dual linear program, inspired by the Laplacian approach of Friedman and Tillich instead of the Christoffel-Darboux formulas. We show in particular how the second linear programming bound can be interpreted in this framework.
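For the special case of binary codes in the Hamming scheme, Delsarte's (primal) linear program is easy to state and solve numerically; the sketch below covers only this special case, not the paper's dual solutions for general association schemes.

```python
# Delsarte's primal LP for binary codes of length n and minimum distance d:
# maximise sum_i A_i over distance distributions A_i >= 0 with A_i = 0 for
# 0 < i < d and nonnegative Krawtchouk transforms. The optimum upper-bounds
# the maximum code size.
from math import comb
from scipy.optimize import linprog

def krawtchouk(n, k, x):
    return sum((-1) ** j * comb(x, j) * comb(n - x, k - j)
               for j in range(k + 1))

def delsarte_bound(n, d):
    idx = list(range(d, n + 1))                  # free variables A_d..A_n
    c = [-1.0] * len(idx)                        # maximise sum of A_i
    A_ub = [[-krawtchouk(n, k, i) for i in idx] for k in range(1, n + 1)]
    b_ub = [float(comb(n, k)) for k in range(1, n + 1)]   # from A_0 = 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
    return 1.0 - res.fun                         # add back A_0 = 1

print(delsarte_bound(8, 4))   # about 20; the true maximum A(8,4) is 20
```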
Fully Adaptive Schnorr Threshold Signatures
We prove adaptive security of a simple three-round threshold Schnorr signature scheme, which we call Sparkle+. The standard notion of security for threshold signatures considers a static adversary – one who must declare which parties are corrupt at the beginning of the protocol. A stronger adaptive adversary can corrupt parties at any time and learn their state. This notion is natural and practical, yet most schemes in the literature are not proven to meet it.
In this paper, we demonstrate that Sparkle+ achieves several levels of security based on different corruption models and assumptions. To begin with, Sparkle+ is statically secure under minimal assumptions: the discrete logarithm assumption (DL) and the random oracle model (ROM). If an adaptive adversary corrupts fewer than t/2 out of a threshold of t+1 signers, then Sparkle+ is adaptively secure under a weaker variant of the one-more discrete logarithm assumption (AOMDL) in the ROM. Finally, we prove that Sparkle+ achieves full adaptive security, with a corruption threshold of t, under AOMDL in the algebraic group model (AGM) with random oracles. Importantly, we show adaptive security without requiring secure erasures. Ours is the first proof achieving full adaptive security without exponential tightness loss for any threshold Schnorr signature scheme.
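For reference, here is the single-signer Schnorr signature that Sparkle+ computes jointly over three rounds (toy group parameters; the threshold protocol and its adaptive-security proofs are the paper's contribution and are not sketched here).

```python
# Schnorr signing: R = g^k, c = H(R, X, m), z = k + c*x; verify g^z = R*X^c.
# Toy order-q subgroup of Z_p*; NOT secure sizes.
import hashlib, secrets

p, q, g = 607, 101, 122

def H(*parts):
    data = ":".join(str(x) for x in parts).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

x = secrets.randbelow(q); X = pow(g, x, p)   # secret key / public key
k = secrets.randbelow(q); R = pow(g, k, p)   # per-signature nonce
c = H(R, X, "message")                       # Fiat-Shamir challenge
z = (k + c * x) % q                          # response; (R, z) is the signature

assert pow(g, z, p) == (R * pow(X, c, p)) % p   # verification equation
```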
KHAN Encryption Algorithm: Leveraging Full Reptend Primes
The Keyed Hashing and Asymmetric Nonce (KHAN) encryption algorithm is a novel cryptographic scheme that utilizes the unique properties of full reptend prime numbers. This paper details the algorithm, its theoretical foundations, and the rigorous proofs of its security properties. By leveraging the characteristics of cyclic sequences derived from full reptend primes, KHAN provides robust encryption with high resistance to cryptanalytic attacks.
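The number-theoretic ingredient is easy to experiment with: a prime $p$ is full reptend in base 10 exactly when 10 is a primitive root modulo $p$, i.e., the decimal expansion of $1/p$ has period $p-1$. A quick checker using sympy:

```python
# A prime p is full reptend in a given base iff the multiplicative order of
# the base modulo p equals p - 1.
from sympy import isprime, n_order

def is_full_reptend(p, base=10):
    return isprime(p) and base % p != 0 and n_order(base, p) == p - 1

print([p for p in range(3, 100) if is_full_reptend(p)])
# [7, 17, 19, 23, 29, 47, 59, 61, 97]
```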
Scaling Lattice Sieves across Multiple Machines
Lattice sieves are algorithms for finding short vectors in lattices. We present an implementation of two such sieves – known as “BGJ1” and “BDGL” in the literature – that scales across multiple servers (with varying success). This class of algorithms requires exponential memory, which had called into question its ability to scale across sieving nodes. We discuss our architecture and optimisations and report experimental evidence of the efficiency of our approach.
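To give a flavour of the sieve kernel being distributed, here is a toy Gauss-sieve-style reduction loop in two dimensions (nothing like the paper's optimised BGJ1/BDGL implementations): keep a database of lattice vectors and replace a vector by its sum or difference with another whenever that makes it shorter.

```python
# Pairwise reduction, the heart of sieving: short vectors emerge as the
# database saturates. Toy 2D example.
import numpy as np

rng = np.random.default_rng(7)
B = np.array([[101, 0], [37, 1]])              # basis of a toy 2D lattice

db = [c @ B for c in rng.integers(-20, 21, size=(100, 2)) if c.any()]
for _ in range(5):                             # a few reduction passes
    for i, v in enumerate(db):
        for w in db:
            for s in (v - w, v + w):
                if 0 < s @ s < v @ v:          # strictly shorter, nonzero
                    v = s
        db[i] = v

print(min(db, key=lambda v: v @ v))            # a short lattice vector
```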
Efficient Linkable Ring Signature from Vector Commitment inexplicably named Multratug
In this paper we revisit the idea of our previous article “Lin2-Xor Lemma: an OR-proof that leads to the membership proof and signature” and introduce another lemma, called Lin2-Choice, which generalizes the Lin2-Xor lemma. With the Lin2-Choice lemma we obtain a compact general-purpose trusted-setup-free log-size linkable threshold ring signature called EFLRSL. The signature size is $2\log(n+1)+3l+1$, where $n$ is the ring size and $l$ is the threshold. By extending the set membership argument of the Lin2-Choice lemma, we create a multifunctional version of the EFLRSL signature, aliased as Multratug, of size $2\log(n+l+1)+7l+4$. In addition to signing a message, Multratug proves balance and allows for easy multiparty signing, which makes it suitable for blockchains. We use a black-boxed vector commitment argument as the pivotal building block for both EFLRSL and Multratug. Only the black-boxed pivot contributes components that depend on the ring size $n$ to the signature sizes, which makes both versions of our signature combinable with other proofs and allows overall size reduction. All this takes place in a prime-order group without bilinear pairings, under the decisional Diffie-Hellman assumption in the random oracle model. Both versions of our signature are unforgeable w.r.t. insider corruption and existentially unforgeable under chosen-message attack. They remain anonymous even for non-uniformly distributed and malformed keys, which makes it possible to use them as a log-size drop-in replacement for LSAG-based schemes.