All papers in 2024 (Page 16 of 2100 results)
A note on "Tweakable HCTR: A BBB Secure Tweakable Enciphering Scheme"
Tweakable HCTR is a tweakable enciphering scheme proposed by Dutta and Nandi at Indocrypt 2018. It provides beyond-birthday-bound security when each tweak value is not used too frequently. More importantly for this note, its claimed security bound degrades linearly with the maximum input length. We show in this note that this is not true, by exhibiting a single-query distinguisher with advantage $O(l^2/2^n)$, where $l$ is the length of that query. The distinguisher does not break the beyond-birthday-bound claim, but it achieves a higher advantage than the claimed bound allows.
Probabilistically Checkable Arguments for all NP
A probabilistically checkable argument (PCA) is a computational relaxation of PCPs, where soundness is guaranteed to hold only for false proofs generated by a computationally bounded adversary. The advantage of PCAs is that they are able to overcome the limitations of PCPs. A succinct PCA has a proof length that is polynomial in the witness length (and is independent of the non-deterministic verification time), which is impossible for PCPs, under standard complexity assumptions. Bronfman and Rothblum (ITCS 2022) constructed succinct PCAs for NC that are publicly-verifiable and have constant query complexity under the sub-exponential hardness of LWE.
We construct a publicly-verifiable succinct PCA with constant query complexity for all NP in the adaptive security setting. Our PCA scheme offers several improvements compared to the Bronfman and Rothblum construction: (1) it applies to all problems in NP, (2) it achieves adaptive security, and (3) it can be realized under any of the following assumptions: the polynomial hardness of LWE; $O(1)$-LIN on bilinear maps; or sub-exponential DDH.
Moreover, our PCA scheme has a succinct prover, which means that for any NP relation that can be verified in time $T$ and space $S$, the proof can be generated in time $O_{\lambda,m}(T\cdot\mathrm{polylog}(T))$ and space $O_{\lambda,m}(S\cdot\mathrm{polylog}(T))$. Here, $O_{\lambda,m}$ accounts for polynomial factors in the security parameter and in the size of the witness. En route, we construct a new complexity-preserving RAM delegation scheme that is used in our PCA construction and may be of independent interest.
Decryption Indistinguishability under Chosen Control Flow
Security proofs for cryptographic primitives typically assume operations are executed in the correct sequence; however, insecure implementations or software-level attacks can disrupt control flows, potentially invalidating these guarantees. To address this issue, we introduce a new security notion, IND-CFA, which formalizes decryption security in the presence of adversarially controlled execution flows. Using this notion, we investigate the control flows under which a cryptographic scheme remains secure, providing insights into secure implementation practices. We revisit the Encrypt-then-MAC paradigm, underscoring the crucial role of operation sequencing in ensuring the security of authenticated encryption schemes built using this method. Additionally, we provide a detailed analysis of the Encode-then-Encipher (EtE) paradigm, a widely adopted approach for constructing robust AE schemes, revealing its vulnerability to adversarial control flows that can enable attackers to infer low-entropy values in the presence of multiple failure conditions.
Blockchain-based decentralized identity system: Design and security analysis
This paper presents a novel blockchain-based decentralized identity system (DID), tailored for enhanced digital identity management in Internet of Things (IoT) and device-to-device (D2D) networks. The proposed system features a hierarchical structure that effectively merges a distributed ledger with a mobile D2D network, ensuring robust security while streamlining communication. Central to this design are the gateway nodes, which serve as intermediaries, facilitating DID registration and device authentication through smart contracts and distributed storage systems. A thorough security analysis underscores the system’s resilience to common cyber threats and adherence to critical principles like finality and liveness.
Cryptanalysis of signature schemes based on the root extraction problem over braid group
Cumplido et al. have recently shown that the Wang-Hu digital signature scheme is not secure and have presented a potential attack on the root extraction problem. The effectiveness of generic attacks on solving this problem for braids is still uncertain, and it is unknown whether it is possible to create braids that require exponential time to solve these problems. In 2023, Lin et al. proposed a post-quantum signature scheme similar to the Wang-Hu scheme that is proven to withstand attacks from quantum computers. However, evidence is presented here for the existence of an algorithm based on mean-set attacks that can recover the private key in both schemes without solving the root extraction problem. For the post-quantum signature version, we prove that the attacker can forge a signature that passes verification without recovering the private key.
Analysis of Multivariate Encryption Schemes: Application to Dob and C*
A common strategy for constructing multivariate encryption schemes is to use a central map that is easy to invert over an extension field, along with a small number of modifications to thwart potential attacks. In this work we study the effectiveness of these modifications, by deriving estimates for the number of degree fall polynomials. After developing the necessary tools, we focus on encryption schemes using the $C^*$ and Dobbertin central maps, with the internal perturbation (ip), and $Q_+$ modifications. For these constructions we are able to accurately predict the number of degree fall polynomials produced in a Gröbner basis attack, up to and including degree five for the Dob encryption scheme and four for $C^*$. The predictions remain accurate even when fixing variables. Based on this new theory we design a novel attack on Dob, which completely recovers the secret key for the parameters suggested by its designers. Due to the generality of the presented techniques, we also believe that they are of interest to the analysis of other big field schemes.
Greco: Fast Zero-Knowledge Proofs for Valid FHE RLWE Ciphertexts Formation
Fully homomorphic encryption (FHE) allows for evaluating arbitrary functions over encrypted data. In multi-party FHE applications, different parties encrypt their secret data and submit ciphertexts to a server, which, according to the application logic, performs homomorphic operations on them. For example, in a secret voting application, the tally is computed by summing up the ciphertexts encoding the votes. Valid encrypted votes are of the form $E(0)$ and $E(1)$. A malicious voter could send an invalid encrypted vote such as $E(145127835)$, which can mess up the whole election. Because of that, users must prove that the ciphertext they submitted is a valid Ring-Learning with Errors (RLWE) ciphertext and that the plaintext message they encrypted is a valid vote (for example, either a 1 or 0). Greco uses zero-knowledge proofs to let a user prove that their RLWE ciphertext is well-formed, or, in other words, that the encryption operation was performed correctly. The resulting proof can therefore be composed with additional application-specific logic and subjected to public verification in a non-interactive setting. Considering the secret voting application, one can further prove properties of the message being encrypted, or even properties about the voter, allowing the application to support anonymous voting as well. The prover has been implemented using Halo2-lib as a proving system, and benchmarks show that Greco can already be integrated into user-facing applications without creating excessive friction for the user. The implementation is available at https://github.com/privacy-scaling-explorations/greco.
The Case of Small Prime Numbers Versus the Okamoto-Uchiyama Cryptosystem
In this paper we study the effect of using small prime numbers within the Okamoto-Uchiyama public key encryption scheme. We introduce two novel versions and prove their security. Then we show how to choose the system's parameters such that the security results hold. Moreover, we provide a practical comparison between the cryptographic algorithms we introduced and the original Okamoto-Uchiyama cryptosystem.
Asymptotics for the standard block size in primal lattice attacks: second order, formally verified
Many proposals of lattice-based cryptosystems estimate security levels by following a recipe introduced in the New Hope proposal. This recipe, given a lattice dimension $n$, modulus $q$, and standard deviation $s$, outputs a "primal block size" $\beta$ and a security level growing linearly with $\beta$. This $\beta$ is minimal such that some $\kappa$ satisfies $((n+\kappa)s^2+1)^{1/2} < (d/\beta)^{1/2}\,\delta^{2\beta-d-1}\,q^{\kappa/d}$, where $d = n+\kappa+1$ and $\delta = (\beta(\pi\beta)^{1/\beta}/(2\pi e))^{1/(2(\beta-1))}$.
This paper identifies how $\beta$ grows with $n$, with enough precision to show the impact of adjusting $q$ and $s$ by constant factors. Specifically, this paper shows that if $\lg q$ grows as $Q_0 \lg n + Q_1 + o(1)$ and $\lg s$ grows as $S_0 \lg n + S_1 + o(1)$, where $0 \le S_0 \le 1/2 < Q_0 - S_0$, then $\beta/n$ grows as $z_0 + (z_1+o(1))/\lg n$, where $z_0 = 2Q_0/(Q_0-S_0+1/2)^2$ and $z_1$ has a formula given in the paper. The paper provides a traditional-format proof and a proof verified by the HOL Light proof assistant.
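The recipe can be evaluated mechanically; below is a minimal sketch that searches for the smallest such $\beta$, assuming the inequality exactly as stated above (the search bounds and example parameters are illustrative, not from the paper):

```python
import math

# Minimal sketch of the New Hope-style recipe stated above: given n, q, s,
# find the smallest "primal block size" beta such that some kappa satisfies
#   ((n+kappa)s^2+1)^(1/2) < (d/beta)^(1/2) * delta^(2beta-d-1) * q^(kappa/d)
# with d = n+kappa+1. All comparisons are done in log space to avoid overflow.
def log_delta(beta):
    return (math.log(beta) + math.log(math.pi * beta) / beta
            - math.log(2 * math.pi) - 1) / (2 * (beta - 1))

def primal_block_size(n, q, s, kappa_max=None):
    kappa_max = kappa_max or 2 * n
    for beta in range(60, n + kappa_max):   # starting point 60 is arbitrary
        for kappa in range(kappa_max):
            d = n + kappa + 1
            lhs = 0.5 * math.log((n + kappa) * s * s + 1)
            rhs = (0.5 * (math.log(d) - math.log(beta))
                   + (2 * beta - d - 1) * log_delta(beta)
                   + kappa * math.log(q) / d)
            if lhs < rhs:
                return beta
    return None

# Illustrative parameters, not taken from the paper:
print(primal_block_size(n=512, q=12289, s=3.0))
```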
Hash your Keys before Signing: BUFF Security of the Additional NIST PQC Signatures
In this work, we analyze the so-called Beyond UnForgeability Features (BUFF) security of the submissions to the current standardization process of additional signatures by NIST. The BUFF notions formalize security against maliciously generated keys and have various real-world use cases, where security can be guaranteed despite misuse potential on a protocol level. Consequently, NIST declared the security against the BUFF notions as desirable features. Despite NIST's interest, only $6$ out of $40$ schemes consider BUFF security at all, but none give a detailed analysis. We close this gap by analyzing the schemes based on codes, isogenies, lattices, and multivariate equations. The results vary from schemes that achieve neither notion (e.g., Wave) to schemes that achieve all notions (e.g., PROV). In particular, we dispute certain claims by SQUIRRELS and VOX regarding their BUFF security. Resulting from our analysis, we observe that three schemes (CROSS, HAWK and PROV) achieve BUFF security without having the hash of public key and message as part of the signature, as BUFF transformed schemes would have. HAWK and PROV essentially use the lighter PS-3 transform by Pornin and Stern (ACNS'05). We further point out whether this transform suffices for the other schemes to achieve the BUFF notions, with both positive and negative results.
Revisiting the Security of Fiat-Shamir Signature Schemes under Superposition Attacks
The Fiat-Shamir transformation is a widely employed technique in constructing signature schemes, known as Fiat-Shamir signature schemes (FS-SIG), derived from secure identification (ID) schemes. However, the existing security proof only takes into account classical signing queries and does not consider superposition attacks, where the signing oracle is quantum-accessible to the adversaries. Alagic et al. proposed a security model called blind unforgeability (BUF, Eurocrypt'20), regarded as a preferable notion under superposition attacks.
In this paper, we conduct a thorough security analysis of FS-SIGs in the BUF model. First, we propose a special property for ID schemes called quantum special honest-verifier zero-knowledge (qsHVZK), which is stronger than classical HVZK. We prove that qsHVZK is a sufficient property for BUF (with implicit rejection) of the resulting FS-SIG in the quantum random oracle model (QROM). Next, we give an efficient construction of (a weaker variant of) a qsHVZK ID scheme based on the quantum hardness of LWE problems.
To avoid enhancing the requirement of HVZK, we then progress to the deterministic FS-SIG (DFS) for more efficient constructions. We show that if the pseudorandom function is quantum-access-secure (QPRF), then we can prove the BUF security of the resulting DFS only with the requirement of the standard (multi-)HVZK in the QROM. A similar result can be extended to the hedged version of FS-SIG.
Blind-Folded: Simple Power Analysis Attacks using Data with a Single Trace and no Training
Side-Channel Attacks target the recovery of key material in cryptographic implementations by measuring physical quantities such as power consumption during the execution of a program. Simple Power Attacks consist in deducing secret information from a trace using a single or a few samples, as opposed to differential attacks which require many traces. Software cryptographic implementations now all contain a data-independent execution path, but often do not consider variations in power consumption associated with data. In this work, we show that a technique commonly used to select a value from different possible values in a control-independent way leads to significant power differences depending on the value selected. This difference is so pronounced that a single sample can be considered for attacking one condition, and no training on other traces is required. We exploit this finding to propose the first single-trace attack without any knowledge gained from previous executions, using trace folding. We target the two modular exponentiation implementations in Libgcrypt, obtaining respectively 100% and 99.98% correct bits on average over 30 executions using 2,048-bit exponents. We also use this technique to attack the scalar multiplication in ECDSA, successfully recovering all secret nonces across 1,000 executions. Finally, the insights we gained from this work allow us to show that a countermeasure proposed in the literature for the safe loading of precomputed operands in the context of windowed implementations can be attacked as well.
Digital Signatures for Authenticating Compressed JPEG Images
We construct a digital signature scheme for images that allows the image to be compressed without invalidating the signature. More specifically, given a JPEG image signed with our signature scheme, a third party can compress the image using JPEG compression and, as long as the quantization tables only include powers of two, derive a valid signature for the compressed image, without access to the secret signing key and without interaction with the signer. Our scheme is constructed using a standard digital signature scheme and a hash function as building blocks. This form of signature, which allows image compression, could be useful in mitigating some of the threats posed by generative AI and fake news, without interfering with all uses of generative AI.
Taking inspiration from related signature schemes, we define a notion of unforgeability and prove our construction to be secure. Additionally, we show that our signatures have size 32.5 kb under standard parameter choices. Using image quality assessment metrics, we show that JPEG compression with parameters as specified by our scheme does not result in perceivably reduced visual fidelity compared to standard JPEG compression.
Hidden $\Delta$-fairness: A Novel Notion for Fair Secure Two-Party Computation
Secure two-party computation allows two mutually distrusting parties to compute a joint function over their inputs, guaranteeing properties such as input privacy or correctness.
For many tasks, such as joint computation of statistics, it is important that when one party receives the result of the computation, the other party also receives the result.
Unfortunately, this property, which is called fairness, is unattainable in the two-party setting for arbitrary functions, so weaker variants have been proposed.
One such notion, proposed by Pass et al. (EUROCRYPT 2017), is called $\Delta$-fairness.
Informally, it guarantees that if a corrupt party receives the output in round $r$ and stops participating in the protocol, then the honest party receives the output by round $\Delta(r)$.
This notion is achieved by using so-called secure enclaves.
In many settings, $\Delta$-fairness is not sufficient, because a corrupt party is guaranteed to receive its output before the honest party, giving the corrupt party an advantage in further interaction.
Worse, as $\Delta$ is known to the corrupt party, it can abort the protocol when it is most advantageous.
We extend the concept of $\Delta$-fairness by introducing a new fairness notion, called hidden $\Delta$-fairness, which addresses these problems.
First of all, under our new notion, a corrupt party may not benefit from aborting, because with probability $\frac{1}{2}$ it does not learn the result first.
Moreover, $\Delta$ and other parameters are sampled according to a given distribution and remain unknown to the participants in the computation.
We propose a 2PC protocol that achieves hidden $\Delta$-fairness, also using secure enclaves, and prove its security in the Generalized Universal Composability (GUC) framework.
Encryption Based Covert Channel for Large Language Models
Transformer neural networks have gained significant traction since their introduction, becoming pivotal across diverse domains. Particularly in large language models like Claude and ChatGPT, the transformer architecture has demonstrated remarkable efficacy. This paper provides a concise overview of transformer neural networks and delves into their security considerations, focusing on covert channel attacks and their implications for the safety of large language models. We present a covert channel utilizing encryption and demonstrate its efficacy in circumventing Claude.ai's security measures. Our experiment reveals that Claude.ai appears to log our queries and blocks our attack within two days of our initial successful breach. This raises two concerns within the community: (1) The extensive logging of user inputs by large language models could pose privacy risks for users. (2) It may deter academic research on the security of such models due to the lack of experiment repeatability.
A Complete Beginner Guide to the Number Theoretic Transform (NTT)
The Number Theoretic Transform (NTT) is a powerful mathematical tool that has become increasingly important in developing Post Quantum Cryptography (PQC) and Homomorphic Encryption (HE). Its ability to efficiently calculate polynomial multiplication using the convolution theorem, with quasi-linear complexity $O(n \log{n})$ instead of $O(n^2)$ when implemented with Fast Fourier Transform-style algorithms, has made it a key component in modern cryptography. The FFT-style NTT algorithm, or fast-NTT, is particularly useful in lattice-based cryptography. In this short note, we briefly introduce the basic concepts of linear, cyclic, and negacyclic convolutions via traditional schoolbook algorithms, the traditional NTT, its inverse (INTT), and the FFT-like versions of NTT/INTT. We then provide consistent toy examples across these different concepts and algorithms to convey the basics of the NTT.
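To give a flavor of these concepts, here is a minimal sketch of a cyclic convolution computed via a naive $O(n^2)$ NTT; the parameters are toy values, and the $O(n \log n)$ butterfly version is what the note calls fast-NTT:

```python
# Toy parameters: n = 4, q = 17 (n divides q-1), omega = 4 is a primitive
# n-th root of unity mod q since 4^4 = 1 (mod 17) and 4^2 = 16 != 1.
q, n, omega = 17, 4, 4

def ntt(a, w):  # naive O(n^2) transform, written for clarity over speed
    return [sum(a[j] * pow(w, i * j, q) for j in range(n)) % q
            for i in range(n)]

def intt(A):    # inverse transform: use omega^-1 and scale by n^-1
    res = ntt(A, pow(omega, -1, q))
    n_inv = pow(n, -1, q)
    return [x * n_inv % q for x in res]

a, b = [1, 2, 3, 4], [5, 6, 7, 8]
# Pointwise product in the NTT domain equals cyclic convolution in time.
c_ntt = intt([x * y % q for x, y in zip(ntt(a, omega), ntt(b, omega))])
# Schoolbook cyclic convolution, for comparison.
c_ref = [sum(a[j] * b[(i - j) % n] for j in range(n)) % q for i in range(n)]
assert c_ntt == c_ref
```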
Efficient Implementations of Square-root Vélu's Formulas
In the implementation of isogeny-based schemes, V\'{e}lu's formulas are essential for constructing and evaluating odd degree isogenies.
Bernstein et al. proposed an approach known as $\surd$\'{e}lu, which computes an $\ell$-isogeny at a cost of $\tilde{\mathcal{O}}(\sqrt{\ell})$ finite field operations. This paper presents two key improvements that enhance the efficiency of the implementation of $\surd$\'{e}lu: optimizing the partition involved in $\surd$\'{e}lu, and speeding up the computations of the sums of products used in polynomial multiplications over a finite field $\mathbb{F}_p$ with large prime characteristic $p$. To optimize the partition, we adjust it to enhance the utilization of $x$-coordinates and eliminate computational redundancy, which ultimately reduces the number of $\mathbb{F}_p$-multiplications. The speedup of the sums of products employs two techniques: lazy reduction (abbreviated as LZYR) and generalized interleaved Montgomery multiplication (abbreviated as INTL). These techniques aim to minimize the underlying operations such as $\mathbb{F}_p$-reductions and assembly memory instructions. We present an optimized C and assembly code implementation of $\surd$\'{e}lu for the CTIDH512 instantiation. In terms of $\ell$-isogeny computations in CTIDH512, the performance in clock cycles when applying the new partition + INTL (resp. new partition + LZYR) offers an improvement of up to $16.05\%$ (resp. $10.96\%$) compared to the previous work.
A Note on Quantum Algorithms for Lattice Problems
Recently, a paper by Chen (eprint 2024/555) has claimed to construct a quantum polynomial-time algorithm that solves the Learning With Errors Problem (Regev, JACM 2009), for a range of parameters. As a byproduct of Chen's result, it follows that Chen's algorithm solves the Gap Shortest Vector Problem, for gap $g(n) = \tilde{O}\left( n^{4.5} \right)$. In this short note we point to an error in the claims of Chen's paper.
Improved Alternating-Moduli PRFs and Post-Quantum Signatures
We revisit the alternating-moduli paradigm for constructing symmetric-key primitives with a focus on constructing efficient protocols to evaluate them using secure multi-party computation (MPC). The alternating-moduli paradigm of Boneh, Ishai, Passelègue, Sahai, and Wu (TCC 2018) enables the construction of various symmetric-key primitives with the common characteristic that the inputs are multiplied by two linear maps over different moduli.
The first contribution focuses on efficient two-party evaluation of alternating-moduli pseudorandom functions (PRFs), effectively building an oblivious PRF. We present a generalized alternating-moduli PRF construction along with methods to lower the communication and computation. We then provide several variants of our protocols with different computation and communication tradeoffs for evaluating the PRF. Most of our protocols are in the hybrid model, while one is based on specialized garbling. Our most efficient protocol is effectively about $3\times$ faster and requires $1.3\times$ less communication.
Our next contribution is the efficient evaluation of the one-way function (OWF) $f(x)=\mathbf{B}\cdot_3 (\mathbf{A} \cdot_2 x)$ proposed by Dinur, Goldfeder, Halevi, Ishai, Kelkar, Sharma, and Zaverucha (CRYPTO 21) where $\mathbf{A} \in \mathbb{F}^{m\times n}_2, \mathbf{B}\in\mathbb{F}^{t\times m}_3$, and $\cdot_p$ is multiplication mod $p$. This surprisingly simple OWF can be evaluated within MPC by secret sharing $[\![x]\!]$ over $\mathbb{F}_2$, locally computing $[\![v]\!]=\mathbf{A}\cdot_2 [\![x]\!]$, performing a modulus switching protocol to $\mathbb{F}_3$ shares, followed by locally computing the output shares $[\![y]\!]=\mathbf{B}\cdot_3 [\![v]\!]$.
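In the clear, this OWF is only a few lines; a minimal sketch with illustrative dimensions (not the parameters from the paper):

```python
import numpy as np

# Sketch of the alternating-moduli OWF f(x) = B ._3 (A ._2 x) described
# above. The dimensions n, m, t below are illustrative placeholders only.
rng = np.random.default_rng(0)
n, m, t = 128, 192, 96
A = rng.integers(0, 2, size=(m, n))   # public matrix A over F_2
B = rng.integers(0, 3, size=(t, m))   # public matrix B over F_3
x = rng.integers(0, 2, size=n)        # secret input over F_2

v = (A @ x) % 2                       # multiply mod 2
y = (B @ v) % 3                       # re-embed bits into F_3, multiply mod 3
```

Inside MPC, as the abstract describes, the only non-local step is converting the mod-2 shares of $v$ into mod-3 shares; both matrix multiplications are local on shares.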
We design a bespoke MPC-in-the-Head (MPCitH) signature scheme that evaluates the aforementioned OWF, achieving state-of-the-art performance. The resulting signature has a size ranging from $4.0$ to $5.5$ KB, a $2\text{-}3\times$ reduction compared to prior work. To the best of our knowledge, this is only $\approx 5\%$ larger than the smallest signature based on symmetric-key primitives, including the latest NIST post-quantum cryptography competition submissions. We also show that our core techniques can be extended to build very small post-quantum ring signatures for rings of small to medium size, which are competitive with state-of-the-art lattice-based schemes. Our techniques are in fact more generally applicable to set membership in MPCitH.
Fault Attack on SQIsign
In this paper, we introduce the first fault attack on SQIsign. By injecting a fault into the ideal generator during the commitment phase, we demonstrate a meaningful probability of inducing the generation of order $\mathcal{O}_0$. The probability is bounded by one parameter, the degree of commitment isogeny. We also show that the probability can be reasonably estimated by assuming uniform randomness of a random variable, and provide empirical evidence supporting the validity of this approximation. In addition, we identify a loop-abort vulnerability due to the iterative structure of the isogeny operation. Exploiting these vulnerabilities, we present key recovery fault attack scenarios for two versions of SQIsign---one deterministic and the other randomized. We then analyze the time complexity and the number of queries required for each attack. Finally, we discuss straightforward countermeasures that can be implemented against the attack.
Dynamic Decentralized Functional Encryptions from Pairings in the Standard Model
Dynamic Decentralized Functional Encryption (DDFE), introduced by Chotard et al. (CRYPTO'20), represents a robust generalization of (Multi-Client) Functional Encryption. It allows users to dynamically join and contribute private inputs to individually controlled joint functions without requiring a trusted authority.
Recently, Shi and Vanjani (PKC'23) proposed the first Multi-Client Functional Encryption scheme for function-hiding inner products (FH-IP) without relying on random oracles. Unfortunately, their construction still requires a trusted key authority, leaving open the question of whether a full-fledged FH-IP-DDFE can exist in the standard model.
In this work, we answer this question affirmatively by introducing Updatable Pseudorandom Zero Sharing, a novel concept that provides both the critical functionality and security properties needed to construct secure DDFE schemes in the standard model.
Our second contribution is a novel proof strategy, which preserves adaptive security when transforming any functional encryption scheme for FH-IP into FH-IP-DDFE. Together, these two techniques enable a modular construction of FH-IP-DDFE that is secure against adaptive message and key queries in the standard model.
Additionally, our pseudorandom zero-sharing scheme is highly versatile, enabling the first DDFE for attribute-weighted sums in the standard model, complementing the recent ROM-based construction by Agrawal et al. (CRYPTO'23).
Tight Multi-user Security of Ascon and Its Large Key Extension
The Ascon cipher suite has recently become the preferred standard in the NIST Lightweight Cryptography standardization process. Despite its prominence, the initial dedicated security analysis for the Ascon mode was conducted quite recently. This analysis demonstrated that the Ascon AEAD mode offers superior security compared to the generic Duplex mode, but it was limited to a specific scenario: single-user nonce-respecting, with a capacity strictly larger than the key size. In this paper, we eliminate these constraints and provide a comprehensive security analysis of the Ascon AEAD mode in the multi-user setting, where the capacity need not be larger than the key size. Regarding data complexity $D$ and time complexity $T$, our analysis reveals that Ascon achieves AEAD security when $T$ is bounded by $\min\{2^{\kappa}/\mu, 2^c\}$ (where $\kappa$ is the key size, and $\mu$ is the number of users), and $DT$ is limited to $2^b$ (with $b$ denoting the size of the underlying permutation, set at 320 for Ascon). Our results align with NIST requirements, showing that Ascon allows for a tag size as small as 64 bits while supporting a higher rate of 192 bits, provided the number of users remains within recommended limits. However, this security becomes compromised as the number of users increases significantly. To address this issue, we propose a variant of the Ascon mode called LK-Ascon, which enables doubling the key size. This adjustment allows for a greater number of users without sacrificing security, while possibly offering additional resilience against quantum key recovery attacks. We establish tight bounds for LK-Ascon, and furthermore show that both Ascon and LK-Ascon maintain authenticity security even when facing nonce-misuse adversaries.
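For intuition, the stated bound can be instantiated numerically; a minimal sketch in $\log_2$ scale (the user count $\mu$ and the rate choice are illustrative):

```python
import math

# Evaluate the multi-user bound stated above: security holds while
# T <= min(2^kappa / mu, 2^c) and D*T <= 2^b. All quantities in log2.
def max_log2_T(kappa, c, mu):
    return min(kappa - math.log2(mu), c)

def max_log2_D(b, log2_T):
    return b - log2_T

kappa, b = 128, 320        # key size and permutation width of Ascon
c = 320 - 192              # capacity when using the higher 192-bit rate
mu = 2**32                 # illustrative number of users

t = max_log2_T(kappa, c, mu)   # min(128 - 32, 128) = 96
d = max_log2_D(b, t)           # 320 - 96 = 224
print(t, d)
```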
Assessing the quality of Random Number Generators through Neural Networks
In this paper we address the use of Neural Networks (NN) for the assessment of the quality, and hence the safety, of several Random Number Generators (RNGs), focusing on the vulnerability of classical Pseudo Random Number Generators (PRNGs), such as Linear Congruential Generators (LCGs) and the RC4 algorithm, and extending our analysis to non-conventional data sources, such as Quantum Random Number Generators (QRNGs) based on Vertical-Cavity Surface-Emitting Lasers (VCSELs). Among the results found, we identified a classification of generators under different degrees of susceptibility, underlining the fundamental role of design decisions in enhancing the safety of PRNGs. The influence of network architecture design and of variations in the associated hyper-parameters was also explored, highlighting the effectiveness of longer sequence lengths and of convolutional neural networks in enhancing the discrimination of PRNGs against other RNGs. Moreover, in the prediction domain, the proposed model is able to deftly distinguish the raw data of our QRNG from truly random data, exhibiting a cross-entropy error of 0.52 on the test data-set used. All these findings reveal the potential of NNs to enhance the security of RNGs, while highlighting the robustness of certain QRNGs, in particular the VCSEL-based variants, for high-quality random number generation applications.
Determination of cryptographic tables and properties related to the revised boomerang and its application to a fundamental S-box
In symmetric cryptography, vectorial Boolean functions over finite fields $\mathbb{F}_{2^n}$ are used to derive strong S-boxes. To this end, an S-box should satisfy a list of tests to resist existing attacks, such as the differential, linear, and boomerang attacks and their variants. Several tables are employed to measure an S-box's resistance, such as the difference distribution table (DDT) and the boomerang connectivity table (BCT). Following the boomerang attacks recently revisited in terms of the boomerang switch effect, with an illustration highlighting the power of this technique, a tool called the Boomerang Difference Table (BDT), an alternative to the classical BCT, was introduced. Next, two novel tables were introduced, namely the Upper Boomerang Connectivity Table (UBCT) and the Lower Boomerang Connectivity Table (LBCT), which are considered improvements over the BCT while allowing systematic evaluation of boomerangs that return over multiple rounds.
This paper focuses on the new tools for measuring the revisited version of boomerang attacks and the related tables UBCT and LBCT, as well as the so-called Extended Boomerang Connectivity Table (EBCT). Specifically, we study the properties of these novel tools and investigate the corresponding tables. We also study their interconnections, their links to the DDT, and their values for affine-equivalent vectorial functions and for compositional inverses of permutations of $\mathbb{F}_{2^n}$. Moreover, we introduce the concept of nontrivial boomerang connectivity uniformity and determine the explicit values of all the entries of the UBCT, LBCT, and EBCT for the important cryptographic case of the inverse function.
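For readers new to these tables, the two basic ones mentioned above can be computed directly from their definitions; a minimal sketch for a 4-bit S-box, using the PRESENT S-box purely as an example:

```python
# Compute the DDT and BCT of a 4-bit S-box (PRESENT's, purely as an example).
S = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
     0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
Sinv = [S.index(y) for y in range(16)]

# DDT[a][b] = #{x : S(x ^ a) ^ S(x) = b}
DDT = [[sum(1 for x in range(16) if S[x ^ a] ^ S[x] == b)
        for b in range(16)] for a in range(16)]

# BCT[a][b] = #{x : S^-1(S(x) ^ b) ^ S^-1(S(x ^ a) ^ b) = a}
BCT = [[sum(1 for x in range(16)
            if Sinv[S[x] ^ b] ^ Sinv[S[x ^ a] ^ b] == a)
        for b in range(16)] for a in range(16)]

# Differential uniformity = largest nontrivial DDT entry (4 for PRESENT).
print(max(DDT[a][b] for a in range(1, 16) for b in range(16)))
```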
On the complexity of the problem of solving systems of tropical polynomial equations of degree two
In this paper, we investigate the computational complexity of the problem of solving a one-sided system of equations of degree two of a special form over the max-plus algebra. We also consider the asymptotic density of solvable systems of this form. Such systems have appeared in the analysis of some recently suggested tropical cryptography protocols. We show how this problem is related to the integer linear programming problem and prove that it is NP-complete. We show that the asymptotic density of solvable systems of this form, with some restrictions on the coefficients, the number of variables, and the number of equations, is 0. As a corollary, we prove that this problem (with some restrictions on the coefficients, the number of variables, and the number of equations) is generically decidable in polynomial time.
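For readers unfamiliar with the max-plus algebra, a minimal sketch of the arithmetic involved (the polynomial shape below is illustrative, not the exact special form studied in the paper):

```python
# In the max-plus ("tropical") semiring, "addition" is max and
# "multiplication" is ordinary +. A degree-two monomial x_i (x) x_j thus
# evaluates to x_i + x_j, so a tropical polynomial is a max of affine terms,
# which hints at the link to integer linear programming mentioned above.
def tropical_poly(coeffs, x):
    # coeffs maps a monomial (i, j) to its coefficient c_ij
    return max(c + x[i] + x[j] for (i, j), c in coeffs.items())

x = [3, 1, 4]
p = {(0, 1): 2, (1, 2): 0}
print(tropical_poly(p, x))  # max(2+3+1, 0+1+4) = max(6, 5) = 6
```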
Pairing Optimizations for Isogeny-based Cryptosystems
In isogeny-based cryptography, bilinear pairings are regarded as a powerful tool in various applications, including key compression, public-key validation and torsion basis generation. However, in most isogeny-based protocols, the performance of pairing computations is unsatisfactory due to the high computational cost of the Miller function. Reducing the computational expense of the Miller function is crucial for enhancing the overall performance of pairing computations in isogeny-based cryptography.
This paper addresses this efficiency bottleneck. To achieve this, we propose several techniques for a better implementation of pairings in isogeny-based cryptosystems. We use (modified) Jacobian coordinates and present new algorithms for Miller function computations to compute pairings of order $2^\bullet$ and $3^\bullet$. For pairings of arbitrary order, which are crucial for key compression in some SIDH-based schemes (such as M-SIDH and binSIDH), we combine Miller doublings with Miller additions/subtractions, leading to a considerable speedup. Moreover, the optimizations for pairing applications in CSIDH-based protocols are also considered in this paper. In particular, our approach for supersingularity verification in CSIDH is 15.3% faster than Doliskani's test, which is the state-of-the-art.
PoMMES: Prevention of Micro-architectural Leakages in Masked Embedded Software
Software solutions to address computational challenges are ubiquitous in our daily lives. One specific application area where software is often used is in embedded systems, which, like other digital electronic devices, are vulnerable to side-channel analysis attacks. Although masking is the most common countermeasure and provides a solid theoretical foundation for ensuring security, recent research has revealed a crucial gap between theoretical and real-world security. This shortcoming stems from the micro-architectural effects of the underlying micro-processor. Common security models used to formally verify masking schemes, such as the d-probing model, fully ignore the micro-architectural leakages by which a set of instructions unintentionally recombines the shares. Manual generation of masked assembly code that remains secure in the presence of such micro-architectural recombinations often involves trial and error, and is non-trivial even for experts.
Motivated by this, we present PoMMES, which enables inexperienced software developers to automatically compile masked functions written in a high-level programming language into assembly code, while preserving the theoretically proven security in practice. Compared to the state of the art, our scheme, based on a general model for micro-architectural effects, allows the generation of practically secure masked software at arbitrary security orders for in-order processors. The major contribution of PoMMES is its micro-architecture-aware register allocation algorithm, which is one of the crucial steps during the compilation process. In addition to simulation-based assessments conducted with open-source tools dedicated to evaluating masked software implementations, we confirm the effectiveness of the PoMMES-generated code through experimental analysis. We present the results of power-consumption-based leakage assessments of several case studies running on a Cortex M0+ micro-controller, which is commonly deployed in industry.
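For background, first-order Boolean masking and the share-recombination effect discussed above can be sketched in a few lines (illustrative only; PoMMES operates on real assembly code, not Python):

```python
import secrets

# First-order Boolean masking: split a secret byte s into two shares
# s0, s1 with s = s0 ^ s1; each share on its own is uniformly random.
s = 0xAB
s0 = secrets.randbits(8)
s1 = s ^ s0

# A micro-architectural effect that combines the shares, e.g. leakage
# proportional to the Hamming distance between two values held
# consecutively in the same register, exposes HW(s0 ^ s1) = HW(s),
# which is exactly what the masking was meant to hide.
hw = lambda v: bin(v).count("1")
leak = hw(s0 ^ s1)  # equals hw(s)
```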
Tokenised Multi-client Provisioning for Dynamic Searchable Encryption with Forward and Backward Privacy
Searchable Symmetric Encryption (SSE) has opened up an attractive avenue for privacy-preserved processing of outsourced data on untrusted cloud infrastructure. SSE aims to support efficient Boolean query processing with optimal storage and search overhead over large real databases. However, current constructions in the literature lack support for multi-client search and dynamic updates to the encrypted databases, which are essential requirements for the widespread deployment of SSE on real cloud infrastructures. Trivially extending a state-of-the-art single-client dynamic construction, such as ODXT (Patranabis et al., NDSS'21), incurs significant leakage that renders such an extension insecure in practice. Currently, no SSE construction in the literature offers efficient multi-client query processing and search with dynamic updates over large real databases while maintaining a benign leakage profile.
This work presents the first dynamic multi-client SSE scheme Nomos supporting efficient multi-client conjunctive Boolean queries over an encrypted database. Precisely, Nomos is a multi-reader-single-writer construction that allows only the gate-keeper (or the data-owner), a trusted entity in the Nomos framework, to update the encrypted database stored on the adversarial server. Nomos achieves forward and type-II backward privacy of dynamic SSE constructions while incurring less leakage than the trivial extension of ODXT to a multi-client setting. Furthermore, our construction is practically efficient and scalable, attaining linear encrypted storage and sublinear search overhead for conjunctive Boolean queries. We provide an experimental evaluation of a software implementation over an extensive real dataset containing millions of records. The results show that the performance of Nomos is comparable to state-of-the-art static conjunctive SSE schemes in practice.
Split Gröbner Bases for Satisfiability Modulo Finite Fields
Satisfiability modulo finite fields enables automated verification for cryptosystems. Unfortunately, previous solvers scale poorly for even some simple systems of field equations, in part because they build a full Gröbner basis (GB) for the system. We propose a new solver that uses multiple, simpler GBs instead of one full GB. Our solver, implemented within the cvc5 SMT solver, admits specialized propagation algorithms, e.g., for understanding bitsums. Experiments show that it solves important bitsum-heavy determinism benchmarks far faster than prior solvers, without introducing much overhead for other benchmarks.
MiniCast: Minimizing the Communication Complexity of Reliable Broadcast
We give a new protocol for reliable broadcast with improved communication complexity for long messages. Namely, to reliably broadcast a message $m$ over an asynchronous network to a set of $n$ parties, of which fewer than $n/3$ may be corrupt, our protocol achieves a communication complexity of $1.5 |m| n + O( \kappa n^2 \log(n) )$, where $\kappa$ is the output length of a collision-resistant hash function. This result improves on the previously best known bound for long messages of $2 |m| n + O( \kappa n^2 \log(n) )$.
Actively Secure Private Set Intersection in the Client-Server Setting
Private set intersection (PSI) allows two parties to compute the intersection of their sets without revealing anything else. In some applications of PSI, a server holds a large set and runs a PSI protocol with multiple clients, each with its own smaller set. In this setting, existing protocols fall short: they either achieve only semi-honest security, or else require the server to run the protocol from scratch for each execution.
We design an efficient protocol for this setting with simulation-based security against malicious adversaries. In our protocol, the server publishes a one-time, linear-size encoding of its set. Then, multiple clients can independently execute a PSI protocol with the server, with complexity linear in the size of each client’s set. To learn the intersection, a client can download the server’s encoding, which can be accelerated via content-distribution or peer-to-peer networks since the same encoding is used by all clients; alternatively, clients can fetch only the relevant parts of the encoding using verifiable private information retrieval. A key ingredient of our protocol is an efficient instantiation of an oblivious verifiable unpredictable function, which may be of independent interest.
Our implementation shows that our protocol is highly efficient. For a server holding $10^8$ elements and each client holding $10^3$ elements, the size of the server’s encoding is 800MB; an execution of the protocol uses 60MB of communication, runs in under 5s in a WAN network with 120 Mbps bandwidth, and costs only 0.017 USD when utilizing network-caching infrastructures, a 5× saving compared to a state-of-the-art PSI protocol.
An overview of symmetric fuzzy PAKE protocols
Fuzzy password authenticated key exchange (fuzzy PAKE) protocols enable two parties to securely exchange a session-key for further communication. The parties only need to share a low entropy password. The passwords do not even need to be identical, but can contain some errors. This may be due to typos, or because the passwords were created from noisy biometric readings. In this paper we provide an overview and comparison of existing fuzzy PAKE protocols. Furthermore, we analyze certain security properties of these protocols and argue that the protocols can be expected to be slightly more secure in practice than can be inferred from their theoretical guarantees.
Communication-Efficient Multi-Party Computation for RMS Programs
Despite much progress, general-purpose secure multi-party computation (MPC) with active security may still be prohibitively expensive in settings with large input datasets. This particularly applies to the secure evaluation of graph algorithms, where each party holds a subset of a large graph.
Recently, Araki et al. (ACM CCS '21) showed that dedicated solutions may provide significantly better efficiency if the input graph is sparse. In particular, they provide an efficient protocol for the secure evaluation of "message passing" algorithms, such as the PageRank algorithm. Their protocol's computation and communication complexity are both $\tilde{O}(M\cdot B)$ instead of the $O(M^2)$ complexity achieved by general-purpose MPC protocols, where $M$ denotes the number of nodes and $B$ the (average) number of incoming edges per node. On the downside, their approach achieves only a relatively weak security notion: $1$-out-of-$3$ malicious security with selective abort.
In this work, we show that PageRank can instead be captured efficiently as a restricted multiplication straight-line (RMS) program, and present a new actively secure MPC protocol tailored to handle RMS programs. In particular, we show that the local knowledge of the participants can be leveraged towards the first maliciously-secure protocol with communication complexity linear in $M$, independently of the sparsity of the graph. We present two variants of our protocol. In our communication-optimized protocol, going from semi-honest to malicious security only introduces a small communication overhead, but results in quadratic computation complexity $O(M^2)$. In our balanced protocol, we still achieve a linear communication complexity $O(M)$, although with worse constants, but a significantly better computational complexity scaling with $O(M\cdot B)$. Additionally, our protocols achieve security with identifiable abort and can tolerate up to $n-1$ corruptions.
Amortizing Circuit-PSI in the Multiple Sender/Receiver Setting
Private set intersection (PSI) is a cryptographic functionality for two parties to learn the intersection of their input sets, without leaking any other information. Circuit-PSI is a stronger PSI functionality where the parties learn only a secret-shared form of the desired intersection, thus without revealing the intersection directly. These secret shares can subsequently serve as input to a secure multiparty computation of any function on this intersection.
In this paper we consider several settings in which parties take part in multiple Circuit-PSI executions with the same input set, and aim to amortize communications and computations. To that end, we build up a new framework for Circuit-PSI around generalizations of oblivious (programmable) PRFs that are extended with offline setup phases. We present several efficient instantiations of this framework with new security proofs for this setting. As a side result, we obtain a slight improvement in communication and computation complexity over the state-of-the-art semi-honest Circuit-PSI protocol by Bienstock et al. (USENIX '23). Additionally, we present a novel Circuit-PSI protocol from a PRF with secret-shared outputs, which has linear communication and computation complexity in the parties' input set sizes, and is able to realize a stronger security notion. Lastly, we derive the potential amortizations over multiple protocol executions, and observe that each of the presented instantiations is favorable in at least one of the multiple-execution settings.
A Round-Optimal Near-Linear Third-Party Private Set Intersection Protocol
Third-party private set intersection (PSI) enables two parties, each holding a private set, to compute their intersection and reveal the result only to an inputless third party. In this paper, we present an efficient round-optimal third-party PSI protocol. Our work is motivated by real-world applications such as contact tracing, where speed is essential while concurrently preserving privacy. Our construction only requires $2$ communication rounds and attains a near-linear computational complexity of $O(n^{1+\varepsilon})$ for large dataset size $n$, where $\varepsilon>0$ is any fixed constant. Our improvements stem from algorithmic changes and the incorporation of new techniques along with precise parameter selections to achieve a tight asymptotic bound. Furthermore, we also present a third-party PSI cardinality protocol, which has not been explored in prior third-party PSI work. In the third-party PSI cardinality setting, only the third party obtains the size of the intersection and nothing else. Our construction achieving the cardinality functionality attains quasilinear computational complexity for the third party.
On the construction of quantum circuits for S-boxes with different criteria based on the SAT solver
The substitution box (S-box) is often the only nonlinear component in symmetric-key ciphers, so S-box circuits have a significant impact on the implementation performance of ciphers in both classical and quantum application scenarios. Taking the Pauli-X gate, the CNOT gate, and the Toffoli gate (i.e., the NCT gate set) as the underlying logic gates, this work investigates the quantum circuit implementation of S-boxes based on the SAT solver. Firstly, we propose encoding methods for the logic gates and the NCT-based circuit, based on which we construct STP models for implementing S-boxes. By applying the proposed models to the S-boxes of several well-known cryptographic algorithms, we construct optimal implementations with different criteria for the 4-bit S-boxes and provide implementation bounds under different criteria for the 5-bit S-boxes. Since S-boxes in the same affine equivalence class share most of the important properties, we then build STP models to further optimize S-box circuits based on affine equivalence. Applying these models shows that, for almost all the tested 4-bit S-boxes, there always exists an equivalent S-box that can be implemented with half the number of logic gates. Besides, we encode some important cryptographic properties and construct STP models to design S-boxes with given criteria configurations on implementation and properties. As an application, we find an S-box with the same cryptographic properties as the S-box of KECCAK that can be implemented with only 5 NCT gates, even though the application of our models indicates that implementing the KECCAK S-box itself requires more than 9 NCT gates. Notably, the inputs of the proposed models are tweakable, which gives the models some capabilities not currently available in public tools for constructing optimized NCT-based circuits for S-boxes.
Multiple Group Action Dlogs with(out) Precomputation
Let $\star: G \times X \rightarrow X$ be the action of a group $G$ of size $N=|G|$ on a set $X$. Let $y = g \star x \in X$ be a group action dlog instance, where our goal is to compute the unknown group element $g \in G$ from the known set elements $x,y \in X$.
The Galbraith-Hess-Smart (GHS) collision finding algorithm solves the group action dlog in $N^{\frac 1 2}$ steps with polynomial memory.
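To make this baseline concrete, here is a minimal sketch of birthday-style collision finding for a group action dlog; the shift action on $\mathbb{Z}_N$ is a runnable stand-in only, and the explicit tables use $O(\sqrt{N})$ memory where GHS uses distinguished-point random walks to stay polynomial:

```python
import secrets

# Birthday-style collision search for a group action dlog y = g * x:
# sample random a * x and b * y until the two sides collide; a collision
# a * x == b * y gives g = a - b. The additive shift action below is a
# toy stand-in; real instantiations (e.g. CSIDH) replace `act`.
N = 1 << 20
act = lambda g, x: (x + g) % N
x = 12345
g_secret = secrets.randbelow(N)      # unknown to the attacker
y = act(g_secret, x)

seen_x, seen_y = {}, {}              # set element -> sampled group element
while True:
    a, b = secrets.randbelow(N), secrets.randbelow(N)
    px, py = act(a, x), act(b, y)
    if px in seen_y:                 # a * x = b * y  =>  g = a - b
        g = (a - seen_y[px]) % N
        break
    if py in seen_x:
        g = (seen_x[py] - b) % N
        break
    seen_x[px], seen_y[py] = a, b

assert g == g_secret                 # expected after ~sqrt(N) samples
```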
We show that group action dlogs are suitable for precomputation attacks. More precisely, for any $s,t$ our precomputation algorithm computes within $st$ steps a hint of space complexity $s$, which allows solving any group action dlog in an online phase of only $t$ steps. A typical instantiation is $s=t=N^{\frac 1 3}$, which gives precomputation time $N^{\frac 2 3}$, space $N^{\frac 1 3}$, and online time only $N^{\frac 1 3}$.
Moreover, we show that solving multiple group action dlog instances $y_1, \ldots , y_m$ allows for speedups. Namely, our collision finding algorithm solves $m$ group action dlogs in $\sqrt{m}N^{\frac 1 2}$ steps, instead of the straightforward $mN^{\frac 1 2}$ steps required for running GHS $m$ times.
Interestingly, our multi instance algorithm (with precomputation) can be seen as a special case of our precomputation algorithm.
Our multiple instance approach can be freely combined with our precomputations, allowing for a variety of tradeoffs.
Technically, our precomputation and multiple instance group action dlog attacks are adaptations of the techniques from the standard dlog setting in abelian groups. While such an adaptation seems natural, it is per se unclear which techniques transfer from the dlog to the more general group action dlog setting, for which $X$ does not offer a group structure.
Our algorithms have direct implications for all group action based cryptosystems, such as CSIDH and its variants. We provide experimental evidence that our techniques work well in the CSIDH setting.
A Note on Related-Tweakey Impossible Differential Attacks
In this note we review the technique proposed at ToSC 2018 by Sadeghi et al. for attacks built upon several related-tweakey impossible differential trails. We show that the initial encryption queries are improper and lead the authors to misevaluate a filtering value in the key recovery phase. We identified 4 other papers (from Eurocrypt, DCC, and 2 from ToSC) that build on the results of Sadeghi et al., and in three of them the flawed technique was reused.
We thus present a careful analysis of these types of attacks and give generic complexity formulas similar to the ones proposed by Boura et al. at Asiacrypt 2014.
We apply these to the aforementioned papers and provide patched versions of their attacks. The main consequence is an increase in the memory complexity. We show that in many cases (a notable exception being quantum impossible differentials) it is possible to recover the numeric time estimates of the flawed analysis, and in all cases we were able to build a correct attack reaching the same number of rounds.
Practical Proofs of Parsing for Context-free Grammars
We present a scheme to prove, in zero-knowledge (ZK), the correct parsing of a string in a context-free grammar (CFG). This is a crucial step towards applications such as proving statements about web API responses in ZK.
To the best of our knowledge, this is the first ZK scheme to prove the correctness of CFG parsing with complexity linear in the length of the string. Further, our algorithm flexibly accommodates different ZK proof systems. We demonstrate this flexibility with multiple implementations using both non-interactive and interactive proof paradigms.
Given general-purpose ZK programming frameworks, our implementations are not only compact (e.g., around 200 lines of code for the non-interactive version) but also deliver competitive performance. In the non-interactive setting, proving the correct parsing of a $\approx 1$KB string takes 24 seconds, even for grammars with $2^{10}$ production rules. In the interactive setting the same proof takes just 1.6 seconds.
SQIAsignHD: SQIsignHD Adaptor Signature
Adaptor signatures can be viewed as a generalized form of standard digital signature schemes by linking message authentication to the disclosure of a secret value. As a recent cryptographic primitive, they have become essential for blockchain applications, including cryptocurrencies, by reducing on-chain costs, improving fungibility, and enabling off-chain payments in payment-channel networks, payment-channel hubs, and atomic swaps. However, existing adaptor signature constructions are vulnerable to quantum attacks due to Shor's algorithm. In this work, we introduce $\mathsf{SQIAsignHD}$, a new quantum-resistant adaptor signature scheme based on isogenies of supersingular elliptic curves, using SQIsignHD as the underlying signature scheme and exploiting the idea of artificial orientation on the supersingular isogeny Diffie-Hellman key exchange protocol, SIDH, to define the underlying hard relation. We furthermore provide a formal security proof for our proposed scheme.
Two-Party Decision Tree Training from Updatable Order-Revealing Encryption
Running machine learning algorithms on encrypted data is a way forward to marry functionality needs common in industry with the important concerns for privacy when working with potentially sensitive data. While there is already a growing field on this topic and a variety of protocols, mostly employing fully homomorphic encryption or performing secure multiparty computation (MPC), we are the first to propose a protocol that makes use of a specialized encryption scheme that allows secure comparisons on ciphertexts and allows these ciphertexts to be updated to encryptions of the same plaintexts under a new key. We call this notion Updatable Order-Revealing Encryption (uORE) and provide a secure construction using a key-homomorphic pseudorandom function.
In a second step, we use this scheme to construct an efficient three-round protocol between two parties to compute a decision tree (or forest) on labeled data provided by both parties. The protocol is in the passively-secure setting and has some leakage on the data that arises from the comparison function on the ciphertexts. We motivate how our protocol can be compiled into an actively-secure protocol with less leakage using secure enclaves, in a graceful degradation manner, e.g. falling back to the uORE leakage, if the enclave becomes fully transparent. We also analyze the leakage of this approach, giving an upper bound on the leaked information. Analyzing the performance of our protocol shows that this approach allows us to be much more efficient (especially w.r.t. the number of rounds) than current MPC-based approaches, hence allowing for an interesting trade-off between security and performance.
Convolution-Friendly Image Compression in FHE
Fully Homomorphic Encryption (FHE) is a powerful tool that brings privacy and security to all sorts of applications by allowing us to perform additions and multiplications directly on ciphertexts without the need of the secret key. Some applications of FHE that were previously overlooked but have recently been gaining traction are data compression and image processing. Practically, FHE enables applications such as private satellite searching, private object recognition, or even encrypted video editing.
We propose a practical FHE-friendly image compression and processing pipeline where an image can be compressed and encrypted on the client-side, sent to a server which decompresses it homomorphically and then performs image processing in the encrypted domain before returning the encrypted result to the client. Inspired by JPEG, our pipeline also relies on discrete cosine transforms and quantization to simplify the representation of an image in the frequency domain, making it possible to effectively use a compression algorithm. This pipeline is designed to be compatible with existing image-processing techniques in FHE, such as pixel-wise processing and convolutional filters. Using this technique, a high-definition ($1024\times1024$) image can be homomorphically decompressed, processed with a convolutional filter and re-compressed in under $24.7$s, while using ~8GB of memory.
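To recall the JPEG-style transform-and-quantize step the pipeline builds on, a minimal 1-D sketch (the paper works with 2-D blocks, homomorphically; values here are illustrative):

```python
import math

# 1-D DCT-II followed by uniform quantization: most high-frequency
# coefficients round to small values or 0, which is what makes the
# transformed data compressible.
def dct2(v):
    N = len(v)
    return [sum(v[i] * math.cos(math.pi * (i + 0.5) * k / N)
                for i in range(N)) for k in range(N)]

block = [52, 55, 61, 66, 70, 61, 64, 73]   # one row of pixel values
Q = 20                                      # quantization step (illustrative)
quantized = [round(c / Q) for c in dct2(block)]
print(quantized)  # index 0 carries the block average; the rest are sparse
```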
Scoring the predictions: a way to improve profiling side-channel attacks
Side-channel analysis is an important part of the security evaluations of hardware components, and more specifically of those that include cryptographic algorithms. Profiling attacks are among the most powerful attacks, as they assume the attacker has access to a clone of the device under attack. Using the clone device allows the attacker to build a profile of the physical leakages linked to the execution of algorithms. This work focuses on the characteristics of this profile and on the information that can be extracted from its application to the attack traces. More specifically, looking at unsuccessful attacks, it shows that by carefully ordering the attack traces used and limiting their number, better results can be achieved with the same profile. Using this method allows us to consider the classical attack method, i.e. one where the traces are randomly ordered, as the worst-case scenario; the best-case scenario is when the attacker is able to successfully order and select the best attack traces. A method for identifying an efficient ordering when using deep learning models as profiles is also provided. A new loss function, "Scoring loss", is dedicated to training machine learning models that assign a score to each attack prediction; this score can then be used to order the predictions.
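A toy sketch of the ordering step, using synthetic prediction scores in place of a trained model's output (shapes and the score function are illustrative):

```python
# Toy sketch of trace ordering with synthetic prediction scores (a trained
# model and its "Scoring loss" would supply real scores per trace).
import numpy as np

rng = np.random.default_rng(1)
n_traces, n_keys = 200, 16
probs = rng.dirichlet(np.ones(n_keys), size=n_traces)  # per-trace predictions

score = probs.max(axis=1)          # confidence score of each prediction
order = np.argsort(-score)         # most confident attack traces first

log_lik = np.zeros(n_keys)
for i in order[:50]:               # accumulate only the best-scored traces
    log_lik += np.log(probs[i])
best_key = int(np.argmax(log_lik))
```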
Permutation-Based Hash Chains with Application to Password Hashing
Hash chain based password systems are a useful way to guarantee authentication with one-time passwords. The core idea is specified in RFC 1760 as S/Key. At CCS 2017, Kogan et al. introduced T/Key, an improved password system where one-time passwords are only valid for a limited time period. They proved security of their construction in the random oracle model under a basic modeling of the adversary. In this work, we make various advances in the analysis and instantiation of hash chain based password systems. Firstly, we describe a slight generalization called U/Key that allows for more flexibility in the instantiation and analysis, and we develop a security model that refines the adversarial strength into offline and online complexity, that can be used beyond the random oracle model, and that allows one to argue multi-user security directly. Secondly, we derive a new security proof of U/Key in the random oracle model, as well as dedicated and tighter security proofs of U/Key instantiated with a sponge construction and a truncated permutation. These dedicated security proofs, in turn, resolve the problem of understanding the preimage resistance of a cascaded evaluation of the sponge construction. When applied to T/Key, these results improve significantly over the earlier ones: whereas the originally suggested instantiation using SHA-256 uses a compression function that maps 768 bits into 256 bits, with a truncated permutation construction one can generically achieve 128 bits of security already with a permutation of size 256 bits.
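For reference, the underlying S/Key-style mechanics can be sketched as follows (hash function and chain length illustrative; T/Key additionally binds each step to a time slot, and the paper's instantiations replace the compression function with a sponge or truncated permutation):

```python
# Minimal hash-chain one-time-password sketch in the S/Key style.
import hashlib

def h(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def chain(seed: bytes, n: int) -> bytes:
    x = seed
    for _ in range(n):
        x = h(x)
    return x

n = 1000
seed = b"user-secret-passphrase"
server_state = chain(seed, n)      # server stores the chain head h^n(seed)

otp = chain(seed, n - 1)           # client reveals h^{n-1}(seed)
assert h(otp) == server_state      # server verifies one application of h
server_state = otp                 # and rolls its state back one step
```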
Menhir: An Oblivious Database with Protection against Access and Volume Pattern Leakage
Analyzing user data while protecting the privacy of individuals remains a big challenge. Trusted execution environments (TEEs) are a possible solution as they protect processes and Virtual Machines (VMs) against malicious hosts. However, TEEs can leak access patterns to code and to the data being processed. Furthermore, when data is stored in a TEE database, the data volume required to answer a query is another unwanted side channel that contains sensitive information. Both types of information leaks, access patterns and volume patterns, allow for database reconstruction attacks.
In this paper, we present Menhir, an oblivious TEE database that hides access patterns with ORAM guarantees and volume patterns through differential privacy. The database allows range and point queries with SQL-like WHERE-clauses. It builds on the state-of-the-art oblivious AVL tree construction Oblix (S&P'18), which by itself does not protect against volume leakage. We show how volume leakage can be exploited in range queries and improve the construction to mitigate this type of attack. We prove the correctness and obliviousness of Menhir. Our evaluation shows that our approach is feasible and scales well with the number of rows and columns in the database.
Quantum Algorithms for Lattice Problems
We show a polynomial time quantum algorithm for solving the learning with errors problem (LWE) with certain polynomial modulus-noise ratios. Combining with the reductions from lattice problems to LWE shown by Regev [J.ACM 2009], we obtain polynomial time quantum algorithms for solving the decisional shortest vector problem (GapSVP) and the shortest independent vector problem (SIVP) for all $n$-dimensional lattices within approximation factors of $\tilde{\Omega}(n^{4.5})$. Previously, no polynomial or even subexponential time quantum algorithms were known for solving GapSVP or SIVP for all lattices within any polynomial approximation factors.
To develop a quantum algorithm for solving LWE, we mainly introduce two new techniques. First, we introduce Gaussian functions with complex variances in the design of quantum algorithms. In particular, we exploit the feature of the Karst wave in the discrete Fourier transform of complex Gaussian functions. Second, we use windowed quantum Fourier transform with complex Gaussian windows, which allows us to combine the information from both time and frequency domains. Using those techniques, we first convert the LWE instance into quantum states with purely imaginary Gaussian amplitudes, then convert purely imaginary Gaussian states into classical linear equations over the LWE secret and error terms, and finally solve the linear system of equations using Gaussian elimination. This gives a polynomial time quantum algorithm for solving LWE.
Leakage-Abuse Attacks Against Structured Encryption for SQL
Structured Encryption (StE) enables a client to securely store and query data stored on an untrusted server. Recent constructions of StE have moved beyond basic queries, and now support large subsets of SQL. However, the security of these constructions is poorly understood, and no systematic analysis has been performed.
We address this by providing the first leakage-abuse attacks against StE for SQL schemes. Our attacks can be run by a passive adversary on a server with access to some information about the distribution of underlying data, a common model in prior work. They achieve partial query recovery against select operations and partial plaintext recovery against join operations. We prove the optimality and near-optimality of two new attacks, in a Bayesian inference framework. We complement our theoretical results with an empirical investigation testing the performance of our attacks against real-world data and show they can successfully recover a substantial proportion of queries and plaintexts.
In addition to our new attacks, we prove the conditional optimality of a previously proposed leakage-abuse attack and show that inference against join operations is NP-hard in general.
Efficient Linkable Ring Signatures: New Framework and Post-Quantum Instantiations
In this paper, we introduce a new framework for constructing linkable ring signatures (LRS). Our framework is based purely on signatures of knowledge (SoK) which allows one to issue signatures on behalf of any NP-statement using the corresponding witness. Our framework enjoys the following advantages: (1) the security of the resulting LRS depends only on the security of the underlying SoK; (2) the resulting LRS naturally supports online/offline signing (resp. verification), where the output of the offline signing (resp. verification) can be re-used across signatures of the same ring. For a ring size $n$, our framework requires an SoK of the NP statement with size $\log n$.
To instantiate our framework, we adapt the well-known post-quantum secure non-interactive argument of knowledge (NIAoK), ethSTARK, into an SoK. This SoK is inherently post-quantum secure and has a signature size poly-logarithmic in the size of the NP statement. Thus, our resulting LRS has a signature size of $O(\text{polylog}(\log n))$. By comparison, existing post-quantum ring signatures, regardless of linkability considerations, have signature sizes of $O(\log n)$ at best. Furthermore, leveraging online/offline verification, part of the verification of signatures on the same ring can be shared, resulting in a state-of-the-art amortized verification cost of $O(\text{polylog}(\log n))$.
Our LRS also performs favourably against existing schemes in practical scenarios. Concretely, our scheme has the smallest signature size among all post-quantum linkable ring signatures with non-slanderability for ring size larger than $32$. In our experiment, at $128$-bit security and ring size of $1024$, our LRS has a size of $29$KB, and an amortized verification cost of $0.3$ ms, surpassing the state-of-the-art by a significant margin. Even without considering amortization, the verification time for a single signature is $128$ ms, comparable to those featuring linear signature size. A similar performance advantage can also be seen at signing. Furthermore, our LRS has extremely short public keys ($32$ bytes), while public keys of existing constructions are in the order of kilobytes.
Insights from building a blockchain-based metaverse
This paper presents an in-depth exploration of the development and deployment of a Layer 1 (L1) blockchain designed to underpin metaverse experiences. As the digital and physical realms become increasingly intertwined, the metaverse emerges as a frontier for innovation, demanding robust, scalable, and secure infrastructure. The core of our investigation centers around the challenges and insights gained from constructing a blockchain framework capable of supporting the vast, dynamic environments of the metaverse. Through the development process, we identified key areas of focus: interoperability, performance and scalability, cost, identity, privacy, security, and accessibility.
Our findings indicate that most challenges can be effectively addressed through the implementation of cryptography and subnets (i.e., Avalanche architecture), which allow for segmented, optimized environments within the broader metaverse ecosystem. This approach not only enhances performance but also provides a flexible framework for managing the diverse needs of metaverse applications.
Probabilistic Algorithms with applications to countering Fault Attacks on Lattice based Post-Quantum Cryptography
Fault attacks that exploit the propagation of effective/ineffective faults present a richer attack surface than Differential Fault Attacks, in the sense that the adversary depends on a single bit of information to eventually leak secret cryptographic material. In the recent past, a number of propagation-based fault attacks on lattice-based Key Encapsulation Mechanisms have been proposed, many of which have no known countermeasures. In this work, we propose an orthogonal countermeasure principle that does not follow ad hoc strategies (like shuffling operations on secret coefficients), but rather depends on cryptographically-backed guarantees to provide quantifiable defence against the aforementioned fault attacks. Concretely, we propose a framework that uses rejection sampling (which has traditionally been used as an alternative to trapdoors) to convert otherwise deterministic algorithms into probabilistic ones. Our specific goals allow careful selection of distributions such that our framework functions with a constant number of retries (around $2-3$) for unfaulted executions. In other words, should a fault be injected, the probability of success is negligible; for correct execution, however, the probability of success is overwhelmingly high. Using our framework, we thus enable probabilistic decryption in Kyber, NewHope, and Masked Kyber, and completely cut off fault propagation in known attacks on these constructions, allowing a sound defence against known fault attacks in the literature.
Fast Parallelizable Misuse-Resistant Authenticated Encryption: Low Latency (Decryption-Fast) SIV
MRAE security is an important goal for many AEAD applications where nonce uniqueness cannot be maintained and the security risks are significant. However, MRAE schemes can be quite expensive. Two of the state-of-the-art MRAE-secure schemes, Deoxys-II and AES-GCM-SIV, rely on internal parallelism and special instructions to achieve competitive performance. However, they both suffer from the same bottleneck: they have at least one call to the underlying primitive that cannot be parallelized with any other call. Romulus-M and LMDAE are two more recent MRAE-secure schemes based on TBCs that target low-area hardware. However, they are not parallelizable, so they are slower than their counterparts.
In this paper, we present two new AEAD modes and four instantiations based on Tweakable Block Ciphers. These new modes target equipping high-speed applications on parallel platforms with nonce-misuse-resistant AEAD (MRAE). The first mode, LLSIV, targets performance on single-core platforms similar to SCT-2, while eliminating the bottlenecks that make SCT-2 not fully parallelizable. The enhanced parallelism allows LLSIV to encrypt significantly more blocks on parallel platforms, compared to SCT-2, in the same amount of time. LLSIV is based on the NaT MAC, where each ciphertext block can itself be viewed as an instance of NaT when the plaintext is suitably prepended. The trade-off is that LLSIV requires the inverse function of the TBC. However, the inverse function is used only once per message, and we demonstrate that for parallel implementations it represents a very small overhead.
We give an instantiation of LLSIV based on the SKINNY-128-384 TBC, and a pruned scheme, dubbed pLLSIV, which targets enhanced performance compared to both SCT-2 and LLSIV on all platforms, while having reduced security claims. It relies on the recently popularized prove-then-prune methodology to take full advantage of the properties of LLSIV. This leads to a significant performance improvement, making pLLSIV even faster than online TBC-based schemes that are not MRAE-secure. Last but not least, we give an instantiation that uses the primitives used in AES-GCM-SIV: the PolyVal hash function and AES. Our instantiation is faster than AES-GCM-SIV on all platforms and has better bounds. On the other hand, it relies on the ideal cipher model, as it uses the ICE TBC proposed as part of the Remus AEAD design.
The second mode we describe is LLDFV. It combines ideas from LLSIV with the Decryption-Fast SIV (DFV) framework proposed recently by Minematsu. The goal is to reduce the number of calls to the TBC by one, while keeping the scheme as parallelizable as LLSIV. This makes the scheme faster than DFV on all platforms.
Integral Attack on the Full FUTURE Block Cipher
FUTURE is a recently proposed lightweight block cipher that achieves remarkable hardware performance due to careful design decisions. FUTURE is an Advanced Encryption Standard (AES)-like Substitution-Permutation Network (SPN) with 10 rounds, whose round function consists of four components, i.e., SubCell, MixColumn, ShiftRow and AddRoundKey. Unlike AES, it is a block cipher with a 64-bit block size and a 128-bit secret key, and the state can be arranged into 16 cells. Therefore, the operations of FUTURE, including its S-box, are defined over $\mathbb{F}_2^4$. Previous studies have shown that the integral properties of 4-bit S-boxes are usually weaker than those of larger S-boxes, so the number of rounds of FUTURE, i.e., only 10, might be too aggressive to provide enough resistance against integral cryptanalysis.
In this paper, we mount an integral cryptanalysis on FUTURE. With state-of-the-art detection techniques, we identify several integral distinguishers over 7 rounds of FUTURE. By extending one such 7-round distinguisher by 3 forward rounds, we manage to recover all 128 bits of the secret key of the full FUTURE cipher, without the full codebook, for the first time. To further achieve better time complexity, we also present a key recovery attack on full FUTURE with the full codebook. Both attacks have better time complexity than existing results.
Efficient isochronous fixed-weight sampling with applications to NTRU
We present a solution to the open problem of designing a linear-time, unbiased and timing attack-resistant shuffling algorithm for fixed-weight sampling. Although it can be implemented without timing leakages of secret data in any architecture, we illustrate with ARMv7-M and ARMv8-A implementations; for the latter, we take advantage of architectural features such as NEON and conditional instructions, which are representative of features available on architectures targeting similar systems, such as Intel. Our proposed algorithm improves asymptotically upon the current approach based on constant-time sorting networks ($O(n)$ versus $O(n \log^2 n)$), and an implementation of the new algorithm applied to NTRU is also faster in practice, by a factor of up to $6.91\ (591\%)$ on ARMv8-A cores and $12.89\ (1189\%)$ on the Cortex-M4; it also requires fewer uniform random bits. This translates into performance improvements for NTRU encapsulation, compared to state-of-the-art implementations, of up to 50\% on ARMv8-A cores and 72\% on the Cortex-M4, and small improvements to key generation (up to 2.7\% on ARMv8-A cores and 6.1\% on the Cortex-M4), with negligible impact on code size and a slight improvement in RAM usage for the Cortex-M4.
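The overall shape of a shuffle-based fixed-weight sampler can be sketched as follows (illustrative only: Python offers no timing guarantees, and the secret-dependent indexing below is exactly what a real isochronous implementation must engineer away):

```python
# Illustrative fixed-weight sampling by shuffling a canonical weight-w vector.
import secrets

def fixed_weight(n, w):
    v = [1] * w + [0] * (n - w)        # canonical weight-w vector
    for i in range(n - 1, 0, -1):      # Fisher-Yates shuffle
        j = secrets.randbelow(i + 1)   # unbiased index
        d = v[i] ^ v[j]                # branch-free swap via XOR masking
        v[i] ^= d
        v[j] ^= d
    return v

sample = fixed_weight(16, 5)
assert sum(sample) == 5
```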
Efficient Permutation Correlations and Batched Random Access for Two-Party Computation
In this work we formalize the notion of a two-party permutation correlation $(A, B), (C, \pi)$ s.t. $\pi(A)=B+C$ for a random permutation $\pi$ of $n$ elements and vectors $A,B,C\in \mathbb{F}^n$. This correlation can be viewed as an abstraction and generalization of the Chase et al. (Asiacrypt 2020) share translation protocol. We give a systematization of knowledge for how such a permutation correlation can be derandomized to allow the parties to perform a wide range of oblivious permutations of secret-shared data. This systematization immediately enables the translation of various popular honest-majority protocols to be efficiently instantiated in the two-party setting, e.g. collaborative filtering, sorting, database joins, graph algorithms, and many more.
We give two novel protocols for efficiently generating a random permutation correlation. The first uses MPC-friendly PRFs to generate a correlation of $n$ elements, each of size $\ell=\log|\mathbb{F}|$ bits, with $O(n\ell)$ bit-OTs, time, communication, and only 3 rounds including setup. Similar asymptotics previously required relatively expensive public-key cryptography, e.g. Paillier or LWE. Our protocol implementation for $n=2^{20},\ell=128$ requires just 7 seconds and $\sim2\ell n$ OTs of communication, a $40\times$ and $1.1\times$ improvement, respectively, on the LWE solution of Juvekar et al. (CCS 2018). The second protocol is based on pseudo-random correlation generators and achieves an overhead that is sublinear in the string length $\ell$, i.e. the communication and number of OTs is $O(n\log \ell)$. The latter protocol has larger hidden constants, and is therefore more efficient only when long strings are permuted, e.g. in graph algorithms.
Finally, we present a suite of highly efficient protocols based on permutations for performing various batched random access operations. These include the ability to extract a hidden subset of a secret-shared list. More generally, we give ORAM-like protocols for obliviously reading and writing from a list in a batched manner. We argue that this suite of batched random access protocols should be a first class primitive in the MPC practitioner's toolbox.
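To illustrate the correlation from the first paragraph, here is a minimal sketch of how $\pi(A)=B+C$ can be derandomized to permute secret-shared data with a single message (a toy, dealer-assisted setup; field size and message pattern are illustrative):

```python
# Sketch: derandomizing a permutation correlation pi(A) = B + C so that the
# parties obtain additive shares of pi(x) from shares of x with one message.
import random

q, n = 2**61 - 1, 8

# Correlation setup, played here by a trusted dealer for brevity.
pi = list(range(n)); random.shuffle(pi)                  # P1's permutation
A = [random.randrange(q) for _ in range(n)]              # held by P0
C = [random.randrange(q) for _ in range(n)]              # held by P1
B = [(A[pi[i]] - C[i]) % q for i in range(n)]            # P0: pi(A) = B + C

# The parties hold additive shares x0 + x1 = x and want shares of pi(x).
x0 = [random.randrange(q) for _ in range(n)]
x1 = [random.randrange(q) for _ in range(n)]

d = [(x0[i] - A[i]) % q for i in range(n)]               # P0 -> P1 (masked by A)
y1 = [(d[pi[i]] + x1[pi[i]] + C[i]) % q for i in range(n)]   # P1's output share
y0 = B                                                   # P0's share, no message
x = [(x0[i] + x1[i]) % q for i in range(n)]
assert all((y0[i] + y1[i]) % q == x[pi[i]] for i in range(n))
```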
Share with Care: Breaking E2EE in Nextcloud
Nextcloud is a leading cloud storage platform with more than 20 million users.
Nextcloud offers an end-to-end encryption (E2EE) feature that is claimed to be able “to keep extremely sensitive data fully secure even in case of a full server breach”.
They also claim that the Nextcloud server “has Zero Knowledge, that is, never has access to any of the data or keys in unencrypted form”.
Encryption and decryption are performed using file keys that are available only to Nextcloud clients, with those file keys being protected by a key hierarchy that ultimately relies on long passphrases known exclusively to the users.
We provide the first detailed documentation and security analysis of Nextcloud's E2EE feature.
Nextcloud's strong security claims motivate conducting the analysis in the setting where the server itself is considered malicious.
We present three distinct attacks against the E2EE security guarantees in this setting.
Each one enables the confidentiality and integrity of all user files to be compromised.
All three attacks are fully practical and we have built proof-of-concept implementations for each.
The vulnerabilities make it trivial for a malicious Nextcloud server to access and manipulate users' data.
We have responsibly disclosed the three vulnerabilities to Nextcloud. The second and third vulnerabilities have been remediated. The first was addressed by temporarily disabling file sharing in the E2EE feature until the feature can be redesigned.
We reflect on broader lessons that can be learned for designers of E2EE systems.
Optimal Asynchronous Byzantine Consensus with Fair Separability
Despite ensuring both consistency and liveness, state machine replication protocols remain vulnerable to adversaries who manipulate the transaction order. To address this, researchers have proposed order-fairness techniques that rely either on building dependency graphs between transactions, or on assigning sequence numbers to transactions. Existing protocols that handle dependency graphs suffer from sub-optimal performance, resilience, or security.
On the other hand, Pompe (OSDI '20) introduced the novel notion of ordering linearizability, which uses sequence numbers. However, Pompe's ordering only applies to committed transactions, opening the door to order-fairness violations under network delays and to performance downgrades under Byzantine attackers. A stronger notion, fair separability, was introduced to require ordering on all observed transactions. However, no implementation of fair separability exists.
In this paper, we introduce a protocol for state machine replication with fair separability ($\mathsf{SMRFS}$); moreover, our protocol has communication complexity $\mathcal{O}(n\ell+\lambda n^2)$, where $n$ is the number of processes, $\ell$ is the input (transaction) size, and $\lambda$ is the security parameter. This is optimal when $\ell\geq \lambda n$, while previous works have cubic communication. To the best of our knowledge, $\mathsf{SMRFS}$ is the first protocol to achieve fair separability, and the first implementation of fair ordering that has optimal communication complexity and optimal Byzantine resilience.
A post-quantum Distributed OPRF from the Legendre PRF
A distributed OPRF allows a client to evaluate a pseudorandom function on an input chosen by the client, using a distributed key shared among multiple servers. This primitive ensures that the servers learn nothing about the input or the output, while the client learns nothing about the key.
We present a post-quantum OPRF in a distributed server setting, which can be computed in a single round of communication between a client and the servers.
The only server-to-server communication occurs during a precomputation phase.
The algorithm is based on the Legendre PRF, which can be computed from a single MPC multiplication among the servers.
To this end, we propose two MPC approaches to evaluate the Legendre PRF, based on replicated and optimised secret sharing, respectively. Furthermore, we propose two methods that allow us to perform the MPC multiplication efficiently and that are of independent interest.
By employing the latter, we are able to evaluate the Legendre OPRF in a fashion that is quantum secure, verifiable and secure against malicious adversaries under a threshold assumption, as well as computable in a single round of interaction.
To the best of our knowledge, our proposed distributed OPRFs are the first post-quantum secure constructions offering such properties.
We also provide an implementation of our protocols, and benchmark it against existing OPRF constructions.
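For intuition, the Legendre PRF and the squared-mask trick that reduces its MPC evaluation to one opened product can be sketched as follows (toy parameters; the paper's protocols, sharing schemes and malicious-security machinery are not modeled):

```python
# Sketch of the Legendre PRF and the squared-mask evaluation trick.
import random

p = 2**61 - 1  # public prime

def legendre_bit(a: int) -> int:
    """Map the Legendre symbol of a mod p to a bit (0 for QR/zero, 1 for QNR)."""
    s = pow(a, (p - 1) // 2, p)
    return 0 if s <= 1 else 1

def oprf(k: int, x: int) -> int:
    return legendre_bit((k + x) % p)

# MPC view: servers hold shares of k; to evaluate on input x they jointly
# compute and open u = (k + x) * r^2 mod p for a shared random r != 0.
# Since r^2 is a square, Legendre(u) = Legendre(k + x), while u hides k + x
# up to its quadratic character.
k, x = random.randrange(p), random.randrange(p)
r = random.randrange(1, p)
u = (k + x) * pow(r, 2, p) % p
assert legendre_bit(u) == oprf(k, x)
```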
A Note on the Common Haar State Model
The common random string model is a popular model in classical cryptography, with many constructions proposed in it. We study a quantum analogue of this model called the common Haar state model, which was also studied in an independent work by Chen, Coladangelo and Sattath (arXiv 2024). In this model, every party in the cryptographic system receives many copies of one or more i.i.d. Haar states.
Our main result is the construction of a statistically secure PRSG with: (a) the output length of the PRSG is strictly larger than the key size, (b) the security holds even if the adversary receives $O\left(\frac{\lambda}{(\log(\lambda))^{1.01}} \right)$ copies of the pseudorandom state. We show the optimality of our construction by showing a matching lower bound. Our construction is simple and its analysis uses elementary techniques.
Breaking Bicoptor from S&P 2023 Based on Practical Secret Recovery Attack
At S&P 2023, a family of secure three-party computation protocols called Bicoptor was proposed by Zhou et al.; it is used to compute non-linear functions in privacy-preserving machine learning. In these protocols, two parties $P_0, P_1$ respectively hold the corresponding shares of the secret, while a third party $P_2$ acts as an assistant. The authors claimed that no party in Bicoptor can independently compromise the confidentiality of the input, intermediate, or output values. In this paper, we point out that this claim is incorrect. The assistant $P_2$ can recover the secret in the DReLU protocol, which is the basis of Bicoptor, and recovering this secret compromises the security of the remaining protocols in Bicoptor. Specifically, we provide two secret recovery attacks on the DReLU protocol. The first attack is a clever enumeration method, which mainly follows from the derivation of a modular equation relating the secret and its share. The key to the second attack lies in solving a small-integer-root problem for a modular equation; as the lattices involved are only 3- or 4-dimensional, the LLL algorithm works effectively. For the system settings selected by Bicoptor, our experiments show that the desired secret in the DReLU protocol can be recovered within one second on a personal computer. Therefore, when using cryptographic protocols in the field of privacy-preserving machine learning, it is important to pay attention not only to design overhead but also to potential security threats.
Dual Support Decomposition in the Head: Shorter Signatures from Rank SD and MinRank
The MPC-in-the-Head (MPCitH) paradigm is widely used for building post-quantum signature schemes, as it provides a versatile way to design proofs of knowledge based on hard problems. Over the years, the MPCitH landscape has changed significantly, with the most recent improvements coming from VOLE-in-the-Head (VOLEitH) and Threshold-Computation-in-the-Head (TCitH).
While a straightforward application of these frameworks already improves the existing MPCitH-based signatures, we show in this work that we can adapt the arithmetic constraints representing the underlying security assumptions (here called the modeling) to achieve smaller sizes using these new techniques.
More precisely, we explore existing modelings for the rank syndrome decoding (RSD) and MinRank problems and we introduce a new modeling, named dual support decomposition, which achieves better sizes with the VOLEitH and TCitH frameworks by minimizing the size of the witnesses.
While this modeling is naturally more efficient than the other ones for a large set of parameters, we show that it is possible to go even further and explore new areas of parameters. With this new modeling and these parameters, we obtain small witnesses, which drastically reduces the size of the arithmetic part of the signature.
We apply the TCitH and VOLEitH frameworks to our new modeling for both RSD and MinRank and compare our results to the NIST candidates RYDE, MiRitH, and MIRA (MPCitH-based schemes from RSD and MinRank). We also note that recent techniques optimizing the sizes of GGM trees are applicable to our schemes and further reduce the signature sizes by a few hundred bytes. We obtain signature sizes below 3.5 kB for 128 bits of security with N=256 parties (a.k.a. leaves in the GGM trees) and going as low as ≈ 2.8 kB with N=2048, for both RSD and MinRank. This represents an improvement of more than 2 kB compared to the original submissions to the 2023 NIST call for additional signatures.
Lattice-Based Timed Cryptography
Timed cryptography studies primitives that retain their security only for a predetermined amount of time, such as proofs of sequential work and time-lock puzzles. This feature has proven to be useful in a large number of practical applications, e.g. randomness generation, sealed-bid auctions, and fair multi-party computation. However, the current state of affairs in timed cryptography is unsatisfactory: Virtually all efficient constructions rely on a single sequentiality assumption, namely that repeated squaring in unknown order groups cannot be parallelised. This is a single point of failure in the classical setting and is even false against quantum adversaries.
In this work we put forward a new sequentiality assumption, which essentially says that a repeated application of the standard lattice-based hash function cannot be parallelised. We provide concrete evidence of the validity of this assumption and perform some initial cryptanalysis. We also propose a new template to construct proofs of sequential work, based on lattice techniques.
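The flavour of the candidate can be sketched as follows: iterate an Ajtai-style SIS hash, re-encoding each output in binary between steps (dimensions and the exact chaining are illustrative and differ from the paper's):

```python
# Sketch of an iterated lattice-based hash as a sequentiality candidate.
import numpy as np

q, n = 257, 8
m = n * 9                                # 9 bits suffice for values in [0, 256]
rng = np.random.default_rng(0)
A = rng.integers(0, q, (n, m))           # public random SIS matrix

def bits(v):
    """Binary-decompose each Z_q entry into 9 little-endian bits."""
    return np.array([(int(x) >> i) & 1 for x in v for i in range(9)])

def step(v):
    return A @ bits(v) % q               # one hash call; steps chain serially

v = rng.integers(0, q, n)
for _ in range(1000):                    # T sequential iterations
    v = step(v)
```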
Supersingular Hashing using Lattès Maps
In this note we propose a variant (with four sub-variants) of the Charles--Goren--Lauter (CGL) hash function using Lattès maps over finite fields. These maps define dynamical systems on the projective line. The underlying idea is that these maps ``hide'' the $j$-invariants in each step in the isogeny chain, similar to the Merkle--Damgård construction. This might circumvent the problem concerning the knowledge of the starting (or ending) curve's endomorphism ring, which is known to create collisions in the CGL hash function.
Let us, already in the abstract, preface this note by remarking that we have not done any explicit computer experiments and benchmarks (apart from a small test on the speed of computing the orbits), nor do we make any security claims. Part of the reason for this is the author's lack of competence in complexity theory and evaluation of security claims. Instead this note is only meant as a presentation of the main idea, the hope being that someone more competent will find it interesting enough to pursue further.
A comment on "Comparing the MOV and FR reductions in elliptic curve cryptography" from EUROCRYPT'99
In general, the discrete logarithm problem is a hard problem in elliptic curve cryptography, and the best known algorithms for solving it have exponential running time. There exists, however, a class of curves, namely supersingular elliptic curves, whose discrete logarithm problem admits a subexponential algorithm via the MOV attack. In 1999, the MOV reduction was still computationally expensive given the computing power then available. We analyze the cost of the MOV reduction and of the discrete logarithm problem for the curves in \cite{HSSI99} using Magma on an ordinary computer.
Confidential and Verifiable Machine Learning Delegations on the Cloud
With the growing adoption of cloud computing, the ability to store data and delegate computations to powerful and affordable cloud servers has become advantageous for both companies and individual users. However, the security of cloud computing has emerged as a significant concern. In particular, Cloud Service Providers (CSPs) cannot assure data confidentiality and computation integrity in mission-critical applications. In this paper, we propose a confidential and verifiable delegation scheme that advances and overcomes major performance limitations of existing Secure Multiparty Computation (MPC) and Zero-Knowledge Proof (ZKP) approaches. Secret-shared data and computations delegated to multiple cloud servers remain completely confidential as long as there is at least one honest MPC server. Moreover, results are guaranteed to be valid even if all the participating servers are malicious. Specifically, we design an efficient protocol based on interactive proofs, such that most of the computations generating the proof can be done locally on each server. In addition, we propose a special protocol for matrix multiplication where the overhead of generating the proof is asymptotically smaller than the time to evaluate the result in MPC. Experimental evaluation demonstrates that our scheme significantly outperforms prior work, with the online prover time being 1-2 orders of magnitude faster. Notably, in the matrix multiplication protocol, only a minimal 2% of the total time is spent on the proof generation. Furthermore, we conducted tests on machine learning inference tasks. We executed the protocol for a fully-connected neural network with 3 layers on the MNIST dataset, and it takes 2.6 seconds to compute the inference in MPC and generate the proof, 88× faster than prior work. We also tested the convolutional neural network LeNet, with 2 convolution layers and 3 dense layers, and the running time is less than 300 seconds across three servers.
Public-Algorithm Substitution Attacks: Subverting Hashing and Verification
Algorithm Substitution Attacks (ASAs) have traditionally targeted secretly-keyed algorithms (for example, symmetric encryption or signing) with the goal of undetectably exfiltrating the underlying key. We initiate work in a new direction, namely ASAs on algorithms that are public, meaning they contain no secret-key material. Examples are hash functions, and verification algorithms of signature schemes or non-interactive arguments. In what we call a PA-SA (Public-Algorithm Substitution Attack), the big-brother adversary replaces the public algorithm $f$ with a subverted algorithm, while retaining a backdoor to the latter. Since there is no secret key to exfiltrate, one has to ask what a PA-SA aims to do. We answer this with definitions that consider big-brother's goal for the PA-SA to be three-fold: it desires utility (it can break an $f$-using scheme or application), undetectability (outsiders can't detect the substitution) and exclusivity (nobody other than big-brother can exploit the substitution). We start with a general setting in which $f$ is arbitrary, formalizing strong definitions for the three goals, and then give a construction of a PA-SA that we prove meets them. We use this to derive, as applications, PA-SAs on hash functions, signature verification and verification of non-interactive arguments, exhibiting new and effective ways to subvert these. As a further application of the first two, we give a PA-SA on X.509 TLS certificates. Our constructions serve to help defenders and developers identify potential attacks by illustrating how they might be built.
NodeGuard: A Highly Efficient Two-Party Computation Framework for Training Large-Scale Gradient Boosting Decision Tree
The Gradient Boosting Decision Tree (GBDT) is a well-known machine learning algorithm, which achieves high performance and outstanding interpretability in real-world scenarios such as fraud detection, online marketing and risk management. Meanwhile, two data owners can jointly train a GBDT model without disclosing their private datasets by executing secure Multi-Party Computation (MPC) protocols. In this work, we propose NodeGuard, a highly efficient two-party computation (2PC) framework for large-scale GBDT training and inference. NodeGuard guarantees that no sensitive intermediate results are leaked during training and inference. Its efficiency advantage is achieved by applying a novel keyed bucket aggregation protocol, which optimizes the communication and computation complexity globally across the training. Additionally, we introduce a probabilistic approximate division protocol with an optimization for re-scaling when the divisor is publicly known. Finally, we compare NodeGuard to state-of-the-art frameworks and show that it is extremely efficient: it can improve privacy-preserving GBDT training performance by a factor of 5.0 to 131 in the LAN setting and 2.7 to 457 in the WAN setting.
CryptoVampire: Automated Reasoning for the Complete Symbolic Attacker Cryptographic Model
Cryptographic protocols are hard to design and prove correct, as witnessed by the ever-growing list of attacks even on protocol standards. Symbolic models of cryptography enable automated formal security proofs of such protocols against an idealized cryptographic model, which abstracts away from the algebraic properties of cryptographic schemes and thus misses attacks. Computational models of cryptography yield rigorous guarantees but support at present only interactive proofs and/or restricted classes of protocols (e.g., stateless ones). A promising approach is given by the computationally complete symbolic attacker (CCSA) model, formalized in the BC Logic, which aims at bridging and getting the best of the two worlds, obtaining cryptographic guarantees by symbolic protocol analysis. The BC Logic is supported by a recently developed interactive theorem prover, namely Squirrel, which enables machine-checked interactive security proofs, as opposed to automated ones, thus requiring expert knowledge both in the cryptographic space as well as on the reasoning side.
In this paper, we introduce the CryptoVampire cryptographic protocol verifier, which for the first time fully automates proofs of trace properties in the BC Logic. The key technical contribution is a first-order formalization of protocol properties with tailored handling of subterm relations. As such, we overcome the burden of interactive proving in higher-order logic and automatically establish soundness of cryptographic protocols using only first-order reasoning. Our first-order encoding of cryptographic protocols is challenging for various reasons. On the theoretical side, we restrict full first-order logic with cryptographic axioms to ensure that, by losing the expressivity of the higher-order BC Logic, we do not lose soundness of cryptographic protocols in our first-order encoding. On the practical side, CryptoVampire integrates dedicated proof techniques using first-order saturation algorithms and heuristics, which all together enable leveraging the state-of-the-art Vampire first-order automated theorem prover as the underlying proving engine of CryptoVampire. Our experimental results showcase the effectiveness of CryptoVampire as a standalone verifier as well as in terms of automation support for Squirrel.
HyCaMi: High-Level Synthesis for Cache Side-Channel Mitigation
Cache side-channels are a major threat to cryptographic implementations, particularly block ciphers. Traditional manual hardening methods transform block ciphers into Boolean circuits, a practice refined since the late 90s. The only existing automatic approach based on Boolean circuits achieves security but suffers from performance issues. This paper examines the use of Lookup Tables (LUTs) for automatic hardening of block ciphers against cache side-channel attacks. We present a novel method combining LUT-based synthesis with quantitative static analysis in our HyCaMi framework. Applied to seven block cipher implementations, HyCaMi shows significant improvement in efficiency, being 9.5$\times$ more efficient than previous methods, while effectively protecting against cache side-channel attacks. Additionally, for the first time, we explore balancing speed with security by adjusting LUT sizes, providing faster performance with slightly reduced leakage guarantees, suitable for scenarios where absolute security and speed must be balanced.
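A common software analogue of such hardening is the masked full-table scan, sketched below (illustrative; Python itself gives no timing guarantees, and HyCaMi operates on LUTs at the hardware-synthesis level):

```python
# Illustrative constant-time table lookup: scan the whole table and select the
# wanted entry with arithmetic masks, so the memory access pattern is
# independent of the secret index.
def ct_lookup(table, secret_index):
    result = 0
    for i, entry in enumerate(table):
        mask = -(i == secret_index) & 0xFF   # 0xFF when i matches, else 0x00
        result |= entry & mask
    return result

SBOX = list(range(255, -1, -1))              # stand-in 8-bit S-box
assert ct_lookup(SBOX, 7) == SBOX[7]
```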
Analysing Cryptography in the Wild - A Retrospective
We reflect on our experiences analysing cryptography deployed “in the wild” and give recommendations to fellow researchers about this process.
Avoiding Trusted Setup in Isogeny-based Commitments
In 2021, Sterner proposed a commitment scheme based on supersingular isogenies. For this scheme to be binding, one relies on a trusted party to generate a starting supersingular elliptic curve of unknown endomorphism ring. Indeed, knowledge of the endomorphism ring allows one to compute an endomorphism whose degree is a power of a given small prime. Such an endomorphism can then be split into two to obtain two different messages with the same commitment. This is the reason why one needs a curve of unknown endomorphism ring, and the only known ways to generate such supersingular curves are to rely on a trusted party or on some expensive multiparty computation. We observe that if the degree of the endomorphism in play is well chosen, then knowledge of the endomorphism ring is not sufficient to efficiently compute such an endomorphism; in some particular cases, one can even prove that endomorphisms of a certain degree do not exist. Leveraging these observations, we adapt Sterner's commitment scheme in such a way that the endomorphism ring of the starting curve can be known and public. This allows us to obtain isogeny-based commitment schemes which can be instantiated without trusted setup requirements.
An efficient key generation algorithm for GR-NTRU over dihedral group
In this article, we focus on deriving an easily implementable and efficient method of constructing units of the group ring of the dihedral group. We provide a necessary and sufficient condition that relates the units in the group ring of the dihedral group to the units in the group ring of the cyclic group. Using this relation and the methods available for inversion in the group ring of the cyclic group, we introduce an algorithm to construct units efficiently and check its performance experimentally.
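The bridge between the two group rings can be sketched as follows: writing an element of $R[D_n]$ as $a(r)+b(r)s$ with $a,b$ in the cyclic group ring, the relation $sr^k=r^{-k}s$ reduces multiplication to cyclic convolutions and an index-reversal involution (a minimal sketch with an illustrative modulus; the paper's unit-construction algorithm itself is not reproduced):

```python
# Sketch of arithmetic in the group ring R[D_n] with R = Z_q: an element is
# a pair (a, b) representing a(r) + b(r)s, where a, b live in R[C_n], i.e.
# polynomials modulo x^n - 1.
import random

q, n = 7681, 8

def conv(a, b):   # multiplication in R[C_n]: cyclic convolution
    c = [0] * n
    for i in range(n):
        for j in range(n):
            c[(i + j) % n] = (c[(i + j) % n] + a[i] * b[j]) % q
    return c

def star(a):      # the involution induced by r -> r^{-1}: index reversal
    return [a[(-i) % n] for i in range(n)]

def dmul(u, v):   # (a1 + b1 s)(a2 + b2 s) = (a1 a2 + b1 b2*) + (a1 b2 + b1 a2*) s
    (a1, b1), (a2, b2) = u, v
    add = lambda x, y: [(xi + yi) % q for xi, yi in zip(x, y)]
    return (add(conv(a1, a2), conv(b1, star(b2))),
            add(conv(a1, b2), conv(b1, star(a2))))

one = ([1] + [0] * (n - 1), [0] * n)   # the identity element of R[D_n]
u = ([random.randrange(q) for _ in range(n)],
     [random.randrange(q) for _ in range(n)])
assert dmul(one, u) == u and dmul(u, one) == u
```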
Fully Homomorphic Training and Inference on Binary Decision Tree and Random Forest
This paper introduces a new method for training decision trees and random forests using CKKS homomorphic encryption (HE) in cloud environments, enhancing data privacy from multiple sources. The innovative Homomorphic Binary Decision Tree (HBDT) method utilizes a modified Gini Impurity index (MGI) for node splitting in encrypted data scenarios. Notably, the proposed training approach operates in a single cloud security domain without the need for decryption, addressing key challenges in privacy-preserving machine learning.
We also propose an efficient method for inference utilizing only addition for path evaluation even when both models and inputs are encrypted, achieving O(1) multiplicative depth.
Experiments demonstrate that this method surpasses the previous study by Akavia et al. by a factor of at least 3.7 in inference speed. The study also extends to privacy-preserving random forests, with GPU acceleration ensuring practical performance in both training and inference.
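The addition-only path evaluation idea can be sketched in the clear as follows (toy tree; under HE the indicator bits are ciphertexts, their sums are homomorphic additions, and the final equality test is likewise performed homomorphically):

```python
# Cleartext sketch of addition-only path evaluation: a leaf is selected iff
# the sum of per-edge indicator bits along its root-to-leaf path equals the
# path length.
def indicator(x, feature, threshold, go_right):
    b = 1 if x[feature] >= threshold else 0
    return b if go_right else 1 - b

# Toy depth-2 tree: internal nodes (feature, threshold); 4 labeled leaves.
nodes = {"root": (0, 5.0), "l": (1, 2.0), "r": (1, 7.0)}
leaves = [  # (label, [(node, go_right), ...] along the path)
    ("A", [("root", 0), ("l", 0)]), ("B", [("root", 0), ("l", 1)]),
    ("C", [("root", 1), ("r", 0)]), ("D", [("root", 1), ("r", 1)]),
]

x = [3.0, 6.0]
for label, path in leaves:
    score = sum(indicator(x, *nodes[node], right) for node, right in path)
    if score == len(path):   # under HE this test is also done on ciphertexts
        print("reached leaf", label)
```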
The solving degrees for computing Gröbner bases of affine semi-regular polynomial sequences
Determining the complexity of computing Gr\"{o}bner bases is an important problem both in theory and in practice, and for that the solving degree plays a key role.
In this paper, we study the solving degrees for affine semi-regular sequences and their homogenized sequences.
Some of our results give mathematically rigorous proofs of the correctness of methods for computing Gr\"{o}bner bases of the ideal generated by an affine semi-regular sequence.
This paper is a sequel to the authors' previous work and gives additional results on the solving degrees and important behaviors of Gr\"{o}bner basis computation.
We also define the generalized degree of regularity for a sequence of homogeneous polynomials.
For the ideal generated by the homogenization of an affine semi-regular sequence, we relate its generalized degree of regularity with its maximal Gr\"{o}bner basis degree (i.e., the solving degree for the homogenized sequence).
We also give the definition of a generalized (cryptographic) semi-regular sequence, from which we derive a new cryptographic assumption for estimating the security of cryptosystems.
From our experimental observation, we raise a conjecture and some questions related to this generalized semi-regularity.
These definitions and our results provide a theoretical formulation of the (somewhat heuristic) discussions made so far in the cryptographic community.
Slice more? It leaks: Analysis on the paper ``On the Feasibility of Sliced Garbling''
Recent improvements to garbled circuits are mainly focused on reducing their size.
The state-of-the-art construction of Rosulek and Roy (Crypto 2021) requires $1.5\kappa$ bits for garbling AND gates in the free-XOR setting.
This is below the previously proven lower bound $2\kappa$ in the linear garbling model of Zahur, Rosulek, and Evans (Eurocrypt 2015).
Recently, Ashur, Hazay, and Satish (eprint 2024/389) proposed a scheme that requires $4/3\kappa + O(1)$ bits for garbling AND gates.
Precisely, they extended the idea of slicing, introduced by Rosulek and Roy, to garble 3-input gates of the form $g(u,v,w) := u(v+w)$.
By setting $w = 0$, this can be used to garble AND gates with improved communication cost.
However, in this paper, we observe that the scheme proposed by Ashur, Hazay, and Satish leaks information on the permute bits, thereby allowing the evaluator to learn information about the private inputs. To be precise, we show that in their garbling scheme, the evaluator can compute the bits $\alpha$ and $\beta + \gamma$, where $\alpha$, $\beta$, and $\gamma$ are the private permute bits of the input labels $A$, $B$, and $C$, respectively.
Optimizing and Implementing Fischlin's Transform for UC-Secure Zero-Knowledge
Fischlin's transform (CRYPTO 2005) is an alternative to the Fiat-Shamir transform that enables straight-line extraction when proving knowledge. In this work we focus on the problem of using the Fischlin transform to construct UC-secure zero-knowledge from Sigma protocols, since UC security, which guarantees security under general concurrent composition, requires straight-line (non-rewinding) simulators. We provide a slightly simplified transform that is much easier to understand, and present algorithmic and implementation optimizations that significantly improve the running time. It appears that the main obstacles to the use of Fischlin in practice are its computational cost and implementation complexity (with multiple parameters that need to be chosen). We provide clear guidelines and a simple methodology for choosing parameters, and show that with our optimizations the running time is far lower than expected. For just one example, on a 2023 MacBook, the cost of proving knowledge of a discrete log with Fischlin is only 0.41ms (on a single core). This is 15 times slower than plain Fiat-Shamir on the same machine, a significant multiple but objectively insignificant in many applications. We also extend the transform so that it can be applied to batch proofs, and show how this can be much more efficient than individually proving each statement. We hope that this paper will both encourage and help practitioners implement the Fischlin transform where relevant.
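A compact sketch of the transform applied to Schnorr's protocol conveys the idea (toy group and parameters of our choosing; the real transform also handles the rare event that no challenge in the search space passes):

```python
# Sketch of a Fischlin-style transform for Schnorr: for each repetition, grind
# over challenges until the transcript hash has B zero bits, making the proof
# non-interactive while allowing straight-line extraction.
import hashlib, random

p = 18446744073709551557          # a prime (2^64 - 59); illustrative group
g = 5
R, B, T = 8, 6, 12                # repetitions, zero hash bits, challenge bits

def hbits(*parts):
    digest = hashlib.sha256(repr(parts).encode()).digest()
    return int.from_bytes(digest, "big") & ((1 << B) - 1)

def prove(x):
    ks = [random.randrange(1, p - 1) for _ in range(R)]
    commits = tuple(pow(g, k, p) for k in ks)
    proof = []
    for i, k in enumerate(ks):
        for e in range(1 << T):   # grind challenges until B hash bits vanish
            z = (k + e * x) % (p - 1)
            if hbits(commits, i, e, z) == 0:
                proof.append((commits[i], e, z))
                break
    return commits, proof

def verify(y, commits, proof):
    return len(proof) == R and all(
        pow(g, z, p) == a * pow(y, e, p) % p and hbits(commits, i, e, z) == 0
        for i, (a, e, z) in enumerate(proof)
    )

x = random.randrange(1, p - 1)
y = pow(g, x, p)
assert verify(y, *prove(x))       # checks all R grinded repetitions
```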
Privacy Preserving Biometric Authentication for Fingerprints and Beyond
Biometric authentication eliminates the need for users to remember secrets and serves as a convenient mechanism for user authentication. Traditional implementations of biometric-based authentication store sensitive user biometry on the server, making the server an attractive target of attack and a source of large-scale unintended disclosure of biometric data. To mitigate the problem, we can resort to privacy-preserving computation and store only protected biometrics on the server. While a variety of secure computation techniques is available, our analysis of privacy-preserving biometric computation and biometric authentication constructions revealed that available solutions fall short of addressing the challenges of privacy-preserving biometric authentication. Thus, in this work we put forward new constructions to address them.
Our solutions employ a helper server and use strong threat models, where a client is always assumed to be malicious, while the helper server can be semi-honest or malicious. We also determined that standard secure multi-party computation security definitions are insufficient to properly demonstrate security in the two-phase (enrollment and authentication) entity authentication application. We thus extend the model and formally show security in the multi-phase setting, where information can flow from one phase to another and the set of participants can change between the phases. We implement our constructions and show that they exhibit practical performance for authentication in real time.
A Time-Space Tradeoff for the Sumcheck Prover
The sumcheck protocol is an interactive protocol for verifying the sum of a low-degree polynomial over a hypercube. This protocol is widely used in practice, where an efficient implementation of the (honest) prover algorithm is paramount. Prior work contributes highly-efficient prover algorithms for the notable special case of multilinear polynomials (and related settings). [CTY11] presents two algorithms: the first uses logarithmic space but runs in superlinear time; the second runs in linear time but uses linear space.
In this short note, we present a family of prover algorithms for the multilinear sumcheck protocol that offer new time-space tradeoffs. In particular, we recover the aforementioned algorithms as special cases.
Moreover, we provide an efficient implementation of the new algorithms, and our experiments show that the asymptotics translate into new concrete efficiency tradeoffs.
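The linear-time round structure underlying these algorithms can be sketched as follows (toy field, degree-1 rounds; the log-space variants recompute the folded table instead of storing it):

```python
# Sketch of the linear-time multilinear sumcheck prover: each round sends
# g_i(0), g_i(1) and folds the evaluation table with the verifier challenge.
import random

P = 2**61 - 1

def sumcheck_prover(table):
    """table: evaluations of a multilinear f on {0,1}^n, length 2**n."""
    transcript = []
    while len(table) > 1:
        half = len(table) // 2
        g0 = sum(table[:half]) % P          # g_i(0): sum over x_i = 0
        g1 = sum(table[half:]) % P          # g_i(1): sum over x_i = 1
        r = random.randrange(P)             # stand-in for the verifier challenge
        # fold: f(r, x') = (1 - r) f(0, x') + r f(1, x')
        table = [((1 - r) * a + r * b) % P
                 for a, b in zip(table[:half], table[half:])]
        transcript.append((g0, g1, r))
    return transcript, table[0]             # final value f(r_1, ..., r_n)

n = 4
table = [random.randrange(P) for _ in range(2**n)]
claimed = sum(table) % P
transcript, _ = sumcheck_prover(table)
g0, g1, _ = transcript[0]
assert (g0 + g1) % P == claimed             # verifier's first-round check
```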
Unbindable Kemmy Schmidt: ML-KEM is neither MAL-BIND-K-CT nor MAL-BIND-K-PK
In "Keeping up with the KEMs" Cremers et al. introduced various binding models for KEMs. The authors show that ML-KEM is LEAK-BIND-K-CT and LEAK-BIND-K-PK, i.e. binding the ciphertext and the public key in the case of an adversary having access, but not being able to manipulate the key material. They further conjecture that ML-KEM also has MAL-BIND-K-PK, but not MAL-BIND-K-CT, the binding of public key or ciphertext to the shared secret in the case of an attacker with the ability to manipulate the key material.
This short paper demonstrates that ML-KEM has neither MAL-BIND-K-CT nor MAL-BIND-K-PK, due to the attacker being able to produce malformed private keys; we give concrete examples for both. We also suggest mitigations, and sketch a proof for binding both ciphertext and public key when the attacker is not able to manipulate the private key as liberally.
Cryptanalysis of Secure and Lightweight Conditional Privacy-Preserving Authentication for Securing Traffic Emergency Messages in VANETs
In a recent paper, Wei et al. proposed a lightweight protocol for conditional privacy-preserving authentication in VANETs. The protocol aims to achieve ultra-low transmission delay and efficient system secret key (SSK) updating. It uses a signature scheme with message recovery to authenticate messages, a scheme that provides security against adaptively chosen-message attacks. However, our analysis reveals a critical vulnerability: the scheme is susceptible to replay attacks, meaning a malicious vehicle can replay a message multiple times at different timestamps. This undermines the overall effectiveness of conditional privacy. We suggest possible solutions to address these vulnerabilities and enhance the security of VANET communication.
LIT-SiGamal: An efficient isogeny-based PKE based on a LIT diagram
In this paper, we propose a novel isogeny-based public key encryption (PKE) scheme named LIT-SiGamal. This is based on a LIT diagram and SiGamal. SiGamal is an isogeny-based PKE scheme that uses a commutative diagram with an auxiliary point. LIT-SiGamal uses a LIT diagram which is a commutative diagram consisting of large-degree horizontal isogenies and relatively small-degree vertical isogenies, while the original SiGamal uses a CSIDH diagram.
A strength of LIT-SiGamal is efficient encryption and decryption. QFESTA is an isogeny-based PKE scheme proposed by Nakagawa and Onuki, which is relatively efficient among isogeny-based PKE schemes. In experiments with our proof-of-concept implementation, encryption in LIT-SiGamal is as efficient as in QFESTA, and decryption in LIT-SiGamal is about $5\times$ faster than in QFESTA.
A note on securing insertion-only Cuckoo filters
We describe a small tweak to Cuckoo filters that allows securing them under insertions using the techniques from Filić et al. (ACM CCS 2022), without the need for an outer PRF call.
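For context, the insertion-only cuckoo-filter mechanics look roughly as follows (sizes and hash choices illustrative; the note's tweak concerns how fingerprints are derived, not this insertion logic):

```python
# Minimal insertion-only cuckoo filter sketch with partial-key cuckoo hashing.
import hashlib, random

NBUCKETS, SLOTS, MAX_KICKS = 128, 4, 500   # NBUCKETS must be a power of two

def _h(data: bytes) -> int:
    return int.from_bytes(hashlib.sha256(data).digest()[:8], "big")

def fingerprint(item: bytes) -> int:
    return _h(b"fp" + item) % 255 + 1       # nonzero 8-bit fingerprint

def indices(item: bytes):
    f = fingerprint(item)
    i1 = _h(item) % NBUCKETS
    i2 = (i1 ^ _h(f.to_bytes(1, "big"))) % NBUCKETS  # alternate bucket
    return f, i1, i2

buckets = [[] for _ in range(NBUCKETS)]

def insert(item: bytes) -> bool:
    f, i1, i2 = indices(item)
    for i in (i1, i2):
        if len(buckets[i]) < SLOTS:
            buckets[i].append(f)
            return True
    i = random.choice((i1, i2))             # evict and relocate fingerprints
    for _ in range(MAX_KICKS):
        j = random.randrange(len(buckets[i]))
        f, buckets[i][j] = buckets[i][j], f
        i = (i ^ _h(f.to_bytes(1, "big"))) % NBUCKETS
        if len(buckets[i]) < SLOTS:
            buckets[i].append(f)
            return True
    return False                            # filter considered full

assert insert(b"hello")
```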
On implementation of Stickel's key exchange protocol over max-min and max-$T$ semirings
Given that the tropical Stickel protocol and its variants are all vulnerable to the generalized Kotov-Ushakov attack, we suggest employing the max-min semiring and, more generally, the max-$T$ semiring, where multiplication is based on a $T$-norm, as a framework in which to implement the Stickel protocol. While the Stickel protocol over a max-min or max-$T$ semiring remains susceptible to a form of the Kotov-Ushakov attack, we demonstrate that it exhibits significantly increased resistance against this attack when compared to the tropical (max-plus) implementation.
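A minimal sketch of Stickel's protocol over the max-min semiring, where $(X\otimes Y)_{ij}=\max_k \min(X_{ik},Y_{kj})$ (sizes, polynomial degrees and entries are illustrative):

```python
# Sketch of Stickel's key exchange over the max-min semiring: polynomials in
# the same matrix commute, so both parties derive the same key.
import random

N, TOP, BOT = 4, float("inf"), float("-inf")

def mm_mul(X, Y):
    return [[max(min(X[i][k], Y[k][j]) for k in range(N)) for j in range(N)]
            for i in range(N)]

def mm_add(X, Y):
    return [[max(X[i][j], Y[i][j]) for j in range(N)] for i in range(N)]

def scale(c, X):  # semiring scalar action: entrywise min with c
    return [[min(c, X[i][j]) for j in range(N)] for i in range(N)]

def poly(coeffs, A):  # evaluate a max-min polynomial at the matrix A
    I = [[TOP if i == j else BOT for j in range(N)] for i in range(N)]
    out, Ak = scale(coeffs[0], I), I
    for c in coeffs[1:]:
        Ak = mm_mul(Ak, A)
        out = mm_add(out, scale(c, Ak))
    return out

rand_mat = lambda: [[random.uniform(0, 100) for _ in range(N)] for _ in range(N)]
rand_poly = lambda: [random.uniform(0, 100) for _ in range(3)]
A, B, W = rand_mat(), rand_mat(), rand_mat()     # public parameters
pa, qa, pb, qb = rand_poly(), rand_poly(), rand_poly(), rand_poly()  # secrets

U = mm_mul(mm_mul(poly(pa, A), W), poly(qa, B))  # Alice -> Bob
V = mm_mul(mm_mul(poly(pb, A), W), poly(qb, B))  # Bob -> Alice
Ka = mm_mul(mm_mul(poly(pa, A), V), poly(qa, B))
Kb = mm_mul(mm_mul(poly(pb, A), U), poly(qb, B))
assert Ka == Kb   # polynomials in A (resp. B) commute with each other
```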
Software-Defined Cryptography: A Design Feature of Cryptographic Agility
Given the widespread use of cryptography in enterprise IT, migration to post-quantum cryptography (PQC) is not a drop-in replacement at all. Cryptographic agility, or crypto-agility, is a design feature that enables seamless updates to new cryptographic algorithms and standards without the need to modify or replace the surrounding infrastructure. This paper introduces a notion of software-defined cryptography as the desired design feature for crypto-agility, emphasizing the role of software in providing centralized governance for cryptography and automated enforcement of cryptographic policies, such as migration to PQC.
Fast pairings via biextensions and cubical arithmetic
Biextensions associated to line bundles on abelian varieties allow one to reinterpret the usual Weil, Tate, Ate, optimal Ate, \ldots, pairings as monodromy pairings. We introduce a cubical arithmetic, derived from the canonical cubical torsor structure of these line bundles, to obtain an efficient arithmetic of these biextensions.
This unifies and extends Miller's standard algorithm to compute pairings, along with other algorithms like elliptic nets and theta functions, and allows us to adapt these algorithms to pairings on any model of abelian varieties with a polarisation $\Phi_D$, as long as we have an explicit theorem of the square for $D$.
In particular, we give explicit formulas for the arithmetic of the biextension (and cubical torsor structure) associated to the divisor $D=2(0_E)$ on an elliptic curve. We derive very efficient pairing formulas on elliptic curves and Kummer lines. Notably, for generic pairings on Montgomery curves, our cubical biextension ladder algorithm to compute pairings costs only $15M$ per bit, which as far as I know is faster than any pairing doubling formula in the literature.
Similar Data is Powerful: Enhancing Inference Attacks on SSE with Volume Leakages
Searchable symmetric encryption (SSE) schemes provide users with the ability to perform keyword searches on encrypted databases without the need for decryption. While this functionality is advantageous, it introduces the potential for inadvertent information disclosure, thereby exposing SSE systems to various types of attacks. In this work, we introduce a new inference attack aimed at enhancing the query recovery accuracy of RefScore (presented at USENIX 2021). The proposed approach capitalizes on both similar data knowledge and an additional volume leakage as auxiliary information, facilitating the extraction of keyword matches from leaked data. Empirical evaluations conducted on multiple real-world datasets demonstrate a notable enhancement in query recovery accuracy, up to 19.5%. We also analyze the performance of the proposed attack in the presence of diverse countermeasures.
Inject Less, Recover More: Unlocking the Potential of Document Recovery in Injection Attacks Against SSE
Searchable symmetric encryption is vulnerable to inference attacks that rely on the uniqueness of leakage patterns. However, many keywords in real datasets lack distinctive leakage patterns, limiting the effectiveness of such attacks. File injection attacks, initially proposed by Cash et al. (CCS 2015), have shown impressive performance, with 100% accuracy and no prior-knowledge requirement. Nevertheless, these attacks fail to recover queries whose underlying keywords are not present in the injected files. To address these limitations, we introduce a novel attack strategy called the LEAP-Hierarchical Fusion Attack (LHFA), which combines the strengths of both file injection attacks and inference attacks. Before initiating keyword injection, we introduce a new approach for inert/active keyword selection. In the injected-keyword selection phase, we focus on keywords without unique leakage patterns and recover them, leveraging their presence for document recovery. Our goal is to achieve an amplified effect in query recovery. We demonstrate a minimum query recovery rate of 1.3 queries per injected keyword with a 10% data leakage of a real-life dataset, and initiate further research to overcome the challenges associated with non-distinctive keywords.
Zero-Knowledge Proof Vulnerability Analysis and Security Auditing
Zero-Knowledge Proof (ZKP) technology marks a revolutionary advancement in the field of cryptography, enabling the verification of certain information ownership without revealing any specific details. This technology, with its paradoxical yet powerful characteristics, provides a solid foundation for a wide range of applications, especially in enhancing the privacy and security of blockchain technology and other cryptographic systems. As ZKP technology increasingly becomes a part of the blockchain infrastructure, its importance for security and completeness becomes more pronounced. However, the complexity of ZKP implementation and the rapid iteration of the technology introduce various vulnerabilities, challenging the privacy and security it aims to offer.
This study focuses on the completeness, soundness, and zero-knowledge properties of ZKP to meticulously classify existing vulnerabilities, and deeply explores multiple categories of vulnerabilities, including completeness issues, soundness problems, information leakage, and non-standardized cryptographic implementations. Furthermore, we propose a set of defense strategies that include a rigorous security audit process and a robust distributed network security ecosystem. This audit strategy employs a divide-and-conquer approach, segmenting the project into different levels, from the application layer down to the platform-level infrastructure layer, using threat modelling, line-by-line auditing, and internal cross-review, among other means, aimed at comprehensively identifying vulnerabilities in ZKP circuits, revealing design flaws in ZKP applications, and accurately identifying inaccuracies in the integration process of ZKP primitives.
Quantum Implementation and Analysis of SHA-2 and SHA-3
Quantum computers have the potential to solve hard problems that are nearly impossible for classical computers; this has sparked a surge of research applying quantum technology and algorithms to cryptographic systems in order to evaluate their quantum resistance. In the process of selecting post-quantum standards, NIST categorizes security levels based on the complexity that quantum computers would require to crack AES encryption (levels 1, 3, and 5) and SHA-2 or SHA-3 (levels 2 and 4).
In assessing the security strength of cryptographic algorithms against quantum threats, accurate predictions of quantum resources are crucial. Following the work of Jaques et al. at Eurocrypt 2020, NIST estimated security levels 1, 3, and 5 based on the quantum circuit size for finding the key of AES-128, AES-192, and AES-256, respectively. This work has recently been followed up by Huang et al. (Asiacrypt'22) and Liu et al. (Asiacrypt'23), among others, though the most up-to-date results are available in the work by Jang et al. (ePrint'22). However, for levels 2 and 4, which relate to collision finding for the SHA-2 and SHA-3 hash functions, quantum attack complexities have arguably not been as well studied.
In this paper, we present novel techniques for optimizing the quantum circuit implementations for SHA-2 and SHA-3 algorithms in all the categories specified by NIST. After that, we evaluate the quantum circuits of target cryptographic hash functions for quantum collision search. Finally, we define the quantum attack complexity for levels 2 and 4, and comment on the security strength of the extended level. We present new concepts to optimize the quantum circuits at the component level and the architecture level.
Single Trace is All It Takes: Efficient Side-channel Attack on Dilithium
As the National Institute of Standards and Technology (NIST) concludes its post-quantum cryptography (PQC) competition, the winning algorithm, Dilithium, enters the deployment phase in 2024. This phase underscores the importance of conducting thorough practical security evaluations. Our study offers an in-depth side-channel analysis of Dilithium, showcasing the ability to recover the complete private key, ${s}_1$, within ten minutes using just two signatures and achieving a 60% success rate with a single signature. We focus on analyzing the polynomial addition in Dilithium, $z=y+{cs}_1$, by breaking down the attack into two main phases: the recovery of $y$ and ${cs}_1$ through side-channel attacks, followed by the resolution of a system of error-prone equations related to ${cs}_1$. Employing Linear Regression-based profiled attacks enables the successful recovery of the full $y$ value with a 40% success rate without the necessity for initial filtering. The extraction of ${cs}_1$ is further improved using a CNN model, which boasts an average success rate of 75%. A significant innovation of our research is the development of a constrained optimization-based residual analysis technique. This method efficiently recovers ${s}_1$ from a large set of error-containing equations concerning ${cs}_1$, proving effective even when only 10% of the equations are accurate. We conduct a practical attack on the Dilithium2 implementation on an STM32F4 platform, demonstrating that typically two signatures are sufficient for complete private key recovery, with a single signature sufficing in optimal conditions. Using a general-purpose PC, the full private key can be reconstructed in ten minutes.
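The following toy sketch (our illustration, with hypothetical dimensions and error rates; the paper works with the structured Dilithium equations $z = y + c s_1$ and coefficients bounded by Dilithium's $\eta$) conveys the residual-analysis idea: fit the over-determined system, discard the worst-fitting equations, and re-fit until the small secret can be rounded off.

```python
# Toy residual analysis: recover a small secret s in {-2,...,2}^n from
# linear equations A s ~ b in which a fraction of rows is grossly wrong.
import numpy as np

rng = np.random.default_rng(1)
n, m, eta, bad = 64, 512, 2, 0.3          # 30% of equations corrupted

s = rng.integers(-eta, eta + 1, size=n)
A = rng.integers(-5, 6, size=(m, n)).astype(float)
b = A @ s
wrong = rng.choice(m, size=int(bad * m), replace=False)
b[wrong] += rng.integers(100, 300, size=wrong.size)  # gross errors

# Iteratively solve least squares and discard the worst-fitting rows;
# with these toy parameters the clean equations dominate after a few trims.
keep = np.arange(m)
for _ in range(5):
    x, *_ = np.linalg.lstsq(A[keep], b[keep], rcond=None)
    resid = np.abs(A[keep] @ x - b[keep])
    keep = keep[resid <= np.quantile(resid, 0.85)]   # drop worst 15%

guess = np.clip(np.rint(x), -eta, eta).astype(int)
print("recovered:", np.array_equal(guess, s))
```

The paper's constrained-optimization technique is far more robust (it tolerates only 10% accurate equations), but the trim-and-refit loop above captures the core use of residuals to identify erroneous equations.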
A Black-box Attack on Fixed-Unitary Quantum Encryption Schemes
We show how fixed-unitary quantum encryption schemes can be attacked in a black-box setting. We use an efficient technique to invert a unitary transformation on a quantum computer to retrieve an encrypted secret quantum state $\ket{\psi}$. This attack has a success rate of 100% and can be executed in constant time. We name a vulnerable scheme and suggest how to improve it to invalidate this attack. The proposed attack highlights the importance of carefully designing quantum encryption schemes to ensure their security against quantum adversaries, even in a black-box setting.
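The paper's attack inverts the unitary on a quantum computer; the classical simulation below (our toy, not the paper's procedure) shows why a fixed encryption unitary is fatal in principle: oracle queries on computational basis states determine $U$ completely, after which $U^\dagger$ decrypts any ciphertext.

```python
# Classical toy simulation of a black-box attack on fixed-unitary encryption.
import numpy as np

rng = np.random.default_rng(0)
d = 8                                       # 3-qubit toy system

# Fixed-key encryption: |c> = U |psi>, the same U for every message.
Q, _ = np.linalg.qr(rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d)))
encrypt = lambda psi: Q @ psi               # black-box oracle

# Attack: query the oracle on each basis state to read off U's columns.
U_learned = np.column_stack([encrypt(np.eye(d)[:, j]) for j in range(d)])

psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
cipher = encrypt(psi)
recovered = U_learned.conj().T @ cipher     # U^dagger c = psi
print(np.allclose(recovered, psi))          # True: recovery always succeeds
```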
Snake-eye Resistant PKE from LWE for Oblivious Message Retrieval and Robust Encryption
Oblivious message retrieval (OMR) allows resource-limited recipients to outsource the message retrieval process without revealing which messages are pertinent to which recipient. Its realizations in recent works leave an open problem: can an OMR scheme be both practical and provably secure against spamming attacks from malicious senders (i.e., DoS-resistant) under standard assumptions?
In this paper, we first prove that a prior construction $\mathsf{OMRp2}$ is DoS-resistant under a standard LWE assumption, resolving an open conjecture of prior works. Then, we present $\mathsf{DoS\text{-}PerfOMR}$: a provably DoS-resistant OMR construction that is 12x faster than $\mathsf{OMRp2}$, and (almost) matches the performance of the state-of-the-art OMR scheme that is $\textit{not}$ DoS-resistant (as proven by the attacks we show).
To achieve this, we analyze the $\textit{snake-eye resistance}$ property for general PKE schemes (i.e., it is hard to encrypt an identical message under two keys). We construct a new lattice-based PKE scheme: $\mathsf{LWEmongrass}$, that is provably snake-eye resistant and has better efficiency than the PVW scheme underlying $\mathsf{OMRp2}$. We also show that the natural candidates (e.g., RingLWE PKE) are not snake-eye resistant.
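One plausible way to formalize the snake-eye game sketched above (our paraphrase; the paper's exact definition may differ in the adversary's powers and winning condition) is:

```latex
% A possible formalization of snake-eye resistance (our paraphrase):
% the adversary wins if a single ciphertext decrypts to the *same*
% valid plaintext under two honestly generated, independent key pairs.
\[
  \mathrm{Adv}^{\mathsf{snake\text{-}eye}}_{\mathsf{PKE}}(\mathcal{A})
  \;=\;
  \Pr\!\left[
    \begin{array}{l}
      (\mathsf{pk}_1,\mathsf{sk}_1) \leftarrow \mathsf{KeyGen}(1^\lambda);\;
      (\mathsf{pk}_2,\mathsf{sk}_2) \leftarrow \mathsf{KeyGen}(1^\lambda);\\
      c \leftarrow \mathcal{A}(\mathsf{pk}_1,\mathsf{pk}_2) \;:\;
      \mathsf{Dec}(\mathsf{sk}_1,c) = \mathsf{Dec}(\mathsf{sk}_2,c) \neq \bot
    \end{array}
  \right].
\]
```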
Furthermore, we show that a snake-eye resistant PKE scheme implies a robust PKE scheme, thus introducing the first robust lattice-based PKE scheme without relying on the KEM-DEM paradigm and its inherent inefficiencies.
Of independent interest, we introduce two variants of LWE with side information, as components towards proving the properties of $\mathsf{LWEmongrass}$, and reduce standard LWE to them for the parameters of interest.
Distribution of cycles in supersingular $\ell$-isogeny graphs
Recent work by Arpin, Chen, Lauter, Scheidler, Stange, and Tran counted the number of cycles of length $r$ in supersingular $\ell$-isogeny graphs. In this paper, we extend this work to count the number of cycles that occur along the spine. We provide formulas for both the number of such cycles, and the average number as $p \to \infty$, with $\ell$ and $r$ fixed. In particular, we show that when $r$ is not a power of $2$, cycles of length $r$ are disproportionately likely to occur along the spine. We provide experimental evidence that this result holds in the case that $r$ is a power of $2$ as well.
Secure Multi-Party Linear Algebra with Perfect Correctness
We present new secure multi-party computation protocols for linear algebra over a finite field, which improve the state-of-the-art in terms of security. We look at the case of \emph{unconditional security with perfect correctness}, i.e., information-theoretic security without errors. We notably propose an expected constant-round protocol for solving systems of $m$ linear equations in $n$ variables over $\mathbb{F}_q$ with expected complexity $O(k(n^{2.5} + m^{2.5}+n^2m^{0.5}))$ where $k > m(m+n)+1$ (complexity is measured in terms of the number of secure multiplications required). The previous proposals were not error-free: known protocols can indeed fail and thus reveal information with probability $\Omega(\textsf{poly}(m)/q)$.
Our protocols are simple and rely on existing computer-algebra techniques, notably the Preparata-Sarwate algorithm, a simple but poorly known ``baby-step giant-step'' method for computing the characteristic polynomial of a matrix, and techniques due to Mulmuley for error-free linear algebra in positive characteristic.
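To illustrate the baby-step giant-step idea in the clear (a plaintext toy, not an MPC protocol; in the actual protocol each matrix product is a secure multiplication), the sketch below computes the characteristic polynomial from the power traces $\mathrm{tr}(A^k)$ using roughly $2\sqrt{n}$ full matrix products plus Newton's identities, over a prime field large enough to divide by $1,\dots,n$:

```python
# Baby-step giant-step characteristic polynomial from power traces.
import math

q = 10007                                   # prime modulus, q > n

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) % q for j in range(n)]
            for i in range(n)]

def trace_of_product(X, Y):
    """tr(X*Y) in O(n^2), without forming the full product."""
    n = len(X)
    return sum(X[i][k] * Y[k][i] for i in range(n) for k in range(n)) % q

def power_traces(A):
    """[tr(A^1), ..., tr(A^n)] with O(sqrt(n)) matrix multiplications."""
    n = len(A)
    m = max(1, math.isqrt(n) + 1)
    baby = [None, A]                        # baby[j] = A^j, j = 1..m
    for j in range(2, m + 1):
        baby.append(mat_mul(baby[-1], A))
    I = [[int(i == j) for j in range(n)] for i in range(n)]
    traces, giant = {}, I                   # giant = A^(i*m)
    for i in range(0, n // m + 2):
        for j in range(1, m + 1):
            if 1 <= i * m + j <= n:
                traces[i * m + j] = trace_of_product(giant, baby[j])
        giant = mat_mul(giant, baby[m])
    return [traces[k] for k in range(1, n + 1)]

def char_poly(A):
    """Coefficients e_k with charpoly = sum_k (-1)^k e_k x^(n-k), via
    Newton's identities: k*e_k = sum_{i=1}^{k} (-1)^(i-1) e_{k-i} p_i."""
    n, p = len(A), power_traces(A)
    e = [1] + [0] * n
    for k in range(1, n + 1):
        acc = sum((-1) ** (i - 1) * e[k - i] * p[i - 1] for i in range(1, k + 1))
        e[k] = acc * pow(k, -1, q) % q
    return e

A = [[1, 2, 0], [0, 3, 4], [5, 0, 6]]
print(char_poly(A))  # [1, tr(A), ..., det(A)] mod q, here [1, 10, 27, 58]
```

The division by $k$ in Newton's identities is exactly what fails in small positive characteristic, which is where Mulmuley's technique, cited above, comes in.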
An Efficient SNARK for Field-Programmable and RAM Circuits
The advancement of succinct non-interactive arguments of knowledge (SNARKs) with constant proof size has significantly enhanced the efficiency and privacy of verifiable computation. Verifiable computation finds applications in distributed computing networks, particularly in scenarios where nodes cannot generally be trusted, such as blockchains. However, fully harnessing the efficiency of a SNARK becomes challenging when the computing targets in the network change frequently, as SNARK verification can involve untrusted preprocessing of the target, which is expected to be reproduced by other nodes. This problem can be addressed in two ways: one relieves the reproduction overhead by reducing the dimensionality of the preprocessing data; the other uses verifiable machine computation, which eliminates the dependency on preprocessing at the cost of increased overhead in SNARK proving and verification. In this paper, we propose a new SNARK with constant proof size applicable to both approaches. The proposed SNARK combines the efficiency of the Groth16 protocol, which lacks universality for new problems, with the universality of the PlonK protocol, which requires significantly larger preprocessing data. Consequently, we demonstrate that our proposed SNARK maintains both efficiency and universality while significantly reducing the dimensionality of the preprocessing data. Furthermore, our SNARK can be seamlessly applied to verifiable machine computation, requiring a proof size about four to ten times smaller than that of related works.
A Decentralized Federated Learning using Reputation
Federated learning (FL) is now established as one of the leading techniques for collaborative machine learning. It allows a set of clients to train a common model without disclosing their sensitive and private datasets to a coordination server, which is in charge of model aggregation. However, FL faces some problems regarding the security of updates, the integrity of computation, and the availability of the server.
In this paper, we combine new ideas, such as clients' reputation, with techniques such as secure aggregation using Homomorphic Encryption and verifiable secret sharing using Multi-Party Computation to design a decentralized FL system that addresses the issues of incentives, security, and availability, among others. One of the original contributions of this work is a new leader election protocol which uses a secure shuffling and is based on a proof of reputation. Indeed, we propose to select an aggregator among the clients participating in the FL training using their reputations: we estimate the reputation of each client at every FL iteration and then select the next round's aggregator from the set of clients with the best reputations. As such, we remove misbehaving clients (e.g., byzantine ones) from the list of clients eligible for the role of aggregation server, as sketched below.
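A minimal sketch of the reputation-gated election; this is our simplification, in which the paper's secure shuffling and proof of reputation are abstracted into a deterministic permutation keyed by jointly derived randomness, so every honest client computes the same leader without a coordinator.

```python
# Toy reputation-gated aggregator election (illustrative simplification).
import hashlib, random

clients = {"c1": 0.91, "c2": 0.15, "c3": 0.87, "c4": 0.78}  # id -> reputation
THRESHOLD, TOP_K = 0.5, 3

def elect(round_no: int, shared_seed: bytes) -> str:
    # Drop low-reputation (e.g., byzantine) clients, keep the top set.
    eligible = sorted((cid for cid, r in clients.items() if r >= THRESHOLD),
                      key=lambda cid: -clients[cid])[:TOP_K]
    # Deterministic "shuffle" keyed on the shared seed and round number,
    # standing in for the paper's secure shuffling.
    h = hashlib.sha256(shared_seed + round_no.to_bytes(4, "big")).digest()
    random.Random(h).shuffle(eligible)
    return eligible[0]

print(elect(7, b"agreed-randomness"))   # same output on every honest client
```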
RSA-Based Dynamic Accumulator without Hashing into Primes
A cryptographic accumulator is a compact data structure for representing a set of elements coming from some domain. It allows for a compact proof of membership and, in the case of a universal accumulator, non-membership of an element x in the data structure. A dynamic accumulator, furthermore, allows elements to be added to and deleted from the accumulator.
Previously known RSA-based dynamic accumulators were too slow in practice because they required that an element in the domain be represented as a prime number. Accumulators based on settings other than RSA had other drawbacks such as requiring a prohibitively large common reference string or a trapdoor, or not permitting deletions.
In this paper, we construct RSA-based dynamic accumulators that do not require that the accumulated elements be represented as primes. We also show how to aggregate membership and non-membership witnesses and batch additions and deletions. We demonstrate that, for 112-bit, 128-bit, and 192-bit security, the efficiency gains compared to previously known RSA-based accumulators are substantial, and, for the first time, make cryptographic accumulators a viable candidate for a certificate revocation mechanism as part of a WebPKI-type system. To achieve an efficient verification time for aggregated witnesses, we introduce a variant of Wesolowski’s proof of exponentiation (Journal of Cryptology 2020) that does not require hashing into primes.
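For contrast, here is the classical RSA accumulator design *with* prime representatives, i.e., exactly the hash-to-prime baseline this paper eliminates. The modulus is toy-sized for illustration only; a real instantiation needs a 2048+-bit RSA modulus of unknown factorization.

```python
# Classical RSA accumulator with prime representatives (the baseline).
N = 61 * 53                       # toy RSA modulus (trapdoor known here!)
g = 2                             # base element

def accumulate(primes):
    acc = g
    for x in primes:
        acc = pow(acc, x, N)      # acc = g^(product of the set) mod N
    return acc

def witness(primes, x):
    """Membership witness for x: g raised to the product of the *other* elements."""
    wit = g
    for y in primes:
        if y != x:
            wit = pow(wit, y, N)
    return wit

S = [3, 5, 7, 11]                 # elements already mapped to primes
acc = accumulate(S)
w = witness(S, 7)
assert pow(w, 7, N) == acc        # verification: w^x == acc (mod N)
```

The expensive step in practice is mapping arbitrary elements to the primes in `S`; removing that mapping, while keeping aggregation and batching, is precisely the paper's contribution.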
Polylogarithmic Proofs for Multilinears over Binary Towers
The use of small fields has come to typify the design of modern, efficient SNARKs. In recent work, Diamond and Posen (EUROCRYPT '25) break a key trace-length barrier by treating multilinear polynomials even over tiny fields—fields with fewer elements than the polynomial has coefficients. In this work, we make that advance applicable globally, by generically reducing the problem of tiny-field commitment to that of large-field commitment. We introduce a sumcheck-based technique—which we call "ring-switching"—which, given a multilinear polynomial commitment scheme over a large extension field, yields a further scheme over that field's ground field. The resulting tiny-field scheme, like Diamond and Posen's, lacks "embedding overhead", in the sense that its commitment cost is identical to that of the large-field scheme on each input size (measured in bits). Its evaluation protocol adds only a concretely small, additive polylogarithmic overhead to the cost of the underlying large-field scheme's.
Instantiating our compiler on the BaseFold (CRYPTO '24) large-field multilinear polynomial commitment scheme—or more precisely, on a characteristic-2 adaptation of that scheme, which we develop at length—we obtain an extremely efficient scheme for multilinears over tiny binary fields. Our scheme outperforms Diamond–Posen and represents a new state-of-the-art.
Weakly Super-Invertible Matrices and Constant Communication Dishonest Majority MPC
In recent years, there has been tremendous progress in improving the concrete communication complexity of dishonest majority MPC. In the sub-optimal corruption threshold setting where $t<(1-\varepsilon)\cdot n$ for some constant $0<\varepsilon\leq 1/2$, Sharing Transformation (Goyal $\textit{et al.}$, CRYPTO'22) and SuperPack (Escudero $\textit{et al.}$, EUROCRYPT'23) presented protocols with information-theoretic online phases requiring $O(1)$ field elements of total communication per multiplication gate. However, Sharing Transformation assumes that their offline phase is instantiated by a trusted party, while SuperPack instantiates their offline phase with large communication of $\Omega(n)$ per multiplication gate assuming oblivious linear evaluation (OLE) correlations. The main bottleneck in instantiating the offline phases of both protocols is generating random packed beaver triples of the form $[\mathbf{a}],[\mathbf{b}],[\mathbf{c}]$, for random $\mathbf{a},\mathbf{b}\in\mathbb{F}^k$, and $\mathbf{c}=\mathbf{a}*\mathbf{b}\in\mathbb{F}^k$, where $k=\Omega(n)$ is the $\textit{packing parameter}$.
To address this bottleneck, our main technical contribution is introducing and constructing $\textit{weakly}$ super-invertible matrices, a relaxation of super-invertible matrices in which sub-matrices have high (but not necessarily full) rank. This relaxation allows for matrices with only $\widetilde{O}(n)$ non-zero entries, enabling a first step towards generating packed beaver triples with $\widetilde{O}(1)$ total communication per underlying triple, assuming OLE correlations. As the second (and final) step, we use the efficient $\textit{triple extraction}$ protocol of (Choudhury and Patra, Trans. Inform. Theory '17).
We also implement our packed beaver triple protocol and provide experimental results. Our new protocol obtains up to 38% smaller communication and 9% reduction in runtime compared to SuperPack's triple protocol. Additionally, by instantiating SuperPack's offline phase with our new protocol, we obtain up to 16% communication reductions.
Finally, we use our packed beaver triple protocol to instantiate the offline phase of Sharing Transformation, yielding a dishonest majority MPC protocol with $\widetilde{O}(|C|)$ total communication across both the offline and online phases.
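For reference, the (standard) online phase that consumes a Beaver triple looks as follows; our toy uses plain two-party additive sharing over a prime field, whereas the paper's contribution is the cheap *generation* of packed vectors of such triples, not this consumption step.

```python
# Toy: multiplying secret-shared x, y using a Beaver triple (a, b, c = a*b).
import random

p = 2**61 - 1                     # a Mersenne prime field

def share(v):
    """Additive 2-party sharing of v over F_p."""
    r = random.randrange(p)
    return r, (v - r) % p

x, y = 123456, 987654
a, b = random.randrange(p), random.randrange(p)
c = a * b % p
x0, x1 = share(x); y0, y1 = share(y)
a0, a1 = share(a); b0, b1 = share(b); c0, c1 = share(c)

# Each party opens d = x - a and e = y - b; since a, b are uniform,
# these openings reveal nothing about x, y.
d = ((x0 - a0) + (x1 - a1)) % p
e = ((y0 - b0) + (y1 - b1)) % p

# z = c + d*b + e*a + d*e, computed locally on shares (d*e added once).
z0 = (c0 + d * b0 + e * a0 + d * e) % p
z1 = (c1 + d * b1 + e * a1) % p
assert (z0 + z1) % p == x * y % p
```

In the packed setting, $\mathbf{a}, \mathbf{b}, \mathbf{c}$ are length-$k$ vectors shared with packed secret sharing, so one triple serves $k=\Omega(n)$ multiplications at once.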
Best of Two Worlds: Efficient, Usable and Auditable Biometric ABC on the Blockchain
In [1], two generic constructions for biometric-based non-transferable Attribute Based Credentials (biometric ABC) are presented, offering different trade-offs between efficiency and trust assumptions. In this paper, we focus on the second scheme, denoted BioABC-ZK, which tries to remove the strong (and unrealistic) trust assumption on the Reader R, and show that BioABC-ZK has a security flaw against a colluding R and Verifier V. Moreover, BioABC-ZK lacks GDPR compliance, which requires secure processing of biometrics, for instance in the form of Fuzzy Extractors, as opposed to (i) storing the reference biometric template aBio in the user's mobile phone and (ii) processing biometrics using an external untrusted R, whose foreign manufacturers are unlikely to adjust their products to GDPR. The contributions of this paper are threefold. First, we review efficient biometric ABC schemes to identify privacy-by-design criteria for them. In view of these principles, we propose a new architecture for the biometric ABC of [2] by adapting the recently introduced core/helper setting of [3]. Briefly, a user in our modified setting is composed of a constrained core device (a SIM card) inside a helper device (a smartphone with dual SIM and face recognition), which, as opposed to [1], does not need to store aBio. This way, the new design provides Identity Privacy without the need for an external R and/or dedicated hardware per user, such as a biometric smart card reader or a tamper-proof smart card, as in current hardware-bound credential systems. Moreover, the new system maintains minimal hardware requirements on the SIM card, which is only responsible for storing the ABC and helper data, resulting in easy adoption and usability without losing efficiency, provided that the recently introduced key derivation scheme of [4] and the modified ABC scheme of [2] are employed together. As a result, we obtain a total overhead of 500 milliseconds over a showing of a comparable non-biometric ABC, instead of the 2.1 seconds of [1], in addition to removing the computationally expensive pairings. Finally, in contrast to [1], auditing is achieved via the Blockchain, instead of the user proving the actual biometric matching in zero-knowledge, in order to reveal malicious behavior of R and V.
Anonymous Revocable Identity-Based Encryption Supporting Anonymous Revocation
Anonymous identity-based encryption (AIBE) is an extension of identity-based encryption (IBE) that enhances the privacy of a ciphertext by providing ciphertext anonymity. In this paper, we introduce the concept of revocable IBE with anonymous revocation (RIBE-AR), which is capable of issuing an update key while hiding the revoked set of the update key, and which efficiently revokes the private keys of AIBE. We first define the security models of RIBE-AR and then propose an efficient RIBE-AR scheme in bilinear groups. Our RIBE-AR scheme is comparable to existing RIBE schemes in terms of efficiency, but is the first RIBE scheme to additionally provide ciphertext anonymity and revocation privacy. We show that our RIBE-AR scheme provides selective message privacy, selective identity privacy, and selective revocation privacy.