All papers in 2023 (Page 18 of 1971 results)

Last updated:  2024-07-22
Swoosh: Efficient Lattice-Based Non-Interactive Key Exchange
Phillip Gajland, Bor de Kock, Miguel Quaresma, Giulio Malavolta, and Peter Schwabe
The advent of quantum computers has sparked significant interest in post-quantum cryptographic schemes, as a replacement for currently used cryptographic primitives. In this context, lattice-based cryptography has emerged as the leading paradigm to build post-quantum cryptography. However, all existing viable replacements of the classical Diffie-Hellman key exchange require additional rounds of interaction, thus failing to achieve all the benefits of this protocol. Although earlier work has shown that lattice-based Non-Interactive Key Exchange (NIKE) is theoretically possible, it has been considered too inefficient for real-life applications. In this work, we challenge this folklore belief and provide the first evidence against it. We construct an efficient lattice-based NIKE whose security is based on the standard module learning with errors (M-LWE) problem in the quantum random oracle model. Our scheme is obtained in two steps: (i) a passively-secure construction that achieves a strong notion of correctness, coupled with (ii) a generic compiler that turns any such scheme into an actively-secure one. To substantiate our efficiency claim, we provide an optimised implementation of our passively-secure construction in Rust and Jasmin. Our implementation demonstrates the scheme's applicability to real-world scenarios, yielding public keys of approximately 220 KB. Moreover, the computation of shared keys takes fewer than 12 million cycles on an Intel Skylake CPU, offering a post-quantum security level exceeding 120 bits.
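The approximate agreement at the heart of any LWE-style NIKE can be sketched with a toy, completely insecure plain-LWE analogue (illustrative parameters only; this is not the Swoosh construction: there is no module structure, no reconciliation step, and no active security). Each party publishes a single noisy product with a public matrix, and both parties then derive nearly equal values without any further messages.

```python
import numpy as np

rng = np.random.default_rng(0)
n, q = 8, 2**15                    # toy dimensions, far below any real security level

A = rng.integers(0, q, size=(n, n))   # public matrix, e.g. derived from a common seed

def keygen(M):
    """Sample a small secret and error; publish a noisy product with M."""
    s = rng.integers(-1, 2, size=n)
    e = rng.integers(-1, 2, size=n)
    return s, (M @ s + e) % q

sA, pkA = keygen(A.T)              # Alice publishes pk_A = A^T s_A + e_A
sB, pkB = keygen(A)                # Bob publishes   pk_B = A   s_B + e_B

# Each side combines its own secret with the other's public key; no interaction.
kA = int(sA @ pkB % q)             # = s_A^T A s_B + <s_A, e_B>  (mod q)
kB = int(sB @ pkA % q)             # = s_A^T A s_B + <s_B, e_A>  (mod q)

# The two values differ only by the tiny cross terms, so rounding to the
# high-order bits (the omitted reconciliation step) would yield a shared key.
diff = min((kA - kB) % q, (kB - kA) % q)
```

With secrets and errors in {-1, 0, 1}, the two derived values differ by at most 2n, which is negligible relative to q.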
Last updated:  2023-02-23
Actively Secure Arithmetic Computation and VOLE with Constant Computational Overhead
Benny Applebaum, Niv Konstantini
We study the complexity of two-party secure arithmetic computation where the goal is to evaluate an arithmetic circuit over a finite field $F$ in the presence of an active (aka malicious) adversary. In the passive setting, Applebaum et al. (Crypto 2017) constructed a protocol that only makes a *constant* (amortized) number of field operations per gate. This protocol uses the underlying field $F$ as a black box, makes black-box use of (standard) oblivious transfer, and its security is based on arithmetic analogs of well-studied cryptographic assumptions. We present an actively-secure variant of this protocol that achieves, for the first time, all the above features. The protocol relies on the same assumptions and adds only a minor overhead in computation and communication. Along the way, we construct a highly-efficient Vector Oblivious Linear Evaluation (VOLE) protocol and present several practical and theoretical optimizations, as well as a prototype implementation. Our most efficient variant can achieve an asymptotic rate of $1/4$ (i.e., for vectors of length $w$ we send roughly $4w$ elements of $F$), which is only slightly worse than the passively-secure protocol whose rate is $1/3$. The protocol seems to be practically competitive over fast networks, even for relatively small fields $F$ and relatively short vectors. Specifically, our VOLE protocol has 3 rounds, and even for 10K-long vectors, it has an amortized cost per entry of fewer than 4 OTs and fewer than 300 arithmetic operations. Most of these operations (about 200) can be pre-processed locally in an offline non-interactive phase. (Better constants can be obtained for longer vectors.) Some of our optimizations rely on a novel intractability assumption regarding the non-malleability of noisy linear codes that may be of independent interest. Our technical approach employs two new ingredients.
First, we present a new information-theoretic construction of Conditional Disclosure of Secrets (CDS) and show how to use it in order to immunize the VOLE protocol of Applebaum et al. against active adversaries. Second, by using elementary properties of low-degree polynomials, we show that, for some simple arithmetic functionalities, one can easily upgrade Yao's garbled-circuit protocol to the active setting with a minor overhead while preserving the round complexity.
Last updated:  2023-02-24
Simple Two-Round OT in the Explicit Isogeny Model
Emmanuela Orsini, Riccardo Zanotto
In this work we apply the Type-Safe (TS) generic group model, recently introduced by Zhandry (2022), to the more general setting of group actions and extend it to the universal composability (UC) framework of Canetti (2000). We then relax this resulting model, which we call UC-TS, to define an algebraic action framework (UC-AA), where the adversaries can behave algebraically, similarly to the algebraic group model (AGM), but for group actions. Finally, we instantiate UC-AA with isogeny-based assumptions, obtaining the Explicit-Isogeny model, UC-EI, and show that, under certain assumptions, UC-EI is less restrictive than UC-AGM. We demonstrate the utility of our definitions by proving UC-EI security for the passively secure protocol described by Lai et al. (2021), hence providing the first concretely efficient two-round isogeny-based OT protocol in the random oracle model against malicious adversaries.
Last updated:  2023-09-12
Verifiable Decentralized Multi-Client Functional Encryption for Inner Product
Dinh Duy Nguyen, Duong Hieu Phan, and David Pointcheval
Joint computation on encrypted data is becoming increasingly crucial with the rise of cloud computing. In recent years, the development of multi-client functional encryption (MCFE) has made it possible to perform joint computation on private inputs, without any interaction. Well-settled solutions for linear functions have become efficient and secure, but there is still a shortcoming: if one user inputs incorrect data, the output of the function might become meaningless for all other users (while still useful for the malicious user). To address this issue, the concept of verifiable functional encryption was introduced by Badrinarayanan et al. at Asiacrypt ’16 (BGJS). However, their solution was impractical because of strong statistical requirements. More recently, Bell et al. introduced a related concept for secure aggregation, with their ACORN solution, but it requires multiple rounds of interactions between users. In this paper:
– we first propose a computational definition of verifiability for MCFE. Our notion covers the computational version of BGJS and extends it to handle any valid inputs defined by predicates. The BGJS notion corresponds to the particular case of a fixed predicate in our setting;
– we then introduce a new technique called Combine-then-Descend, which relies on the class group. It allows us to construct One-time Decentralized Sum (ODSUM) on verifiable private inputs. ODSUM is the building block for our final protocol of a verifiable decentralized MCFE for inner-product, where the inputs are within a range.
Our approach notably enables the efficient identification of malicious users, thereby addressing an unsolved problem in ACORN.
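The "decentralized sum" functionality can be illustrated with the classical pairwise-masking trick (a minimal hypothetical sketch; the actual ODSUM building block works in class groups and additionally makes the inputs verifiable, which this sketch omits). Users cancel each other's masks so that only the sum of the private inputs is revealed.

```python
import random

random.seed(7)
q = 2**31 - 1                      # computations are done modulo a public prime
inputs = {1: 12, 2: 30, 3: 5}      # private inputs of users 1..3
users = sorted(inputs)

# Pairwise masks: r[i][j] = -r[j][i] mod q, so all masks cancel in the sum.
r = {i: {} for i in users}
for i in users:
    for j in users:
        if i < j:
            m = random.randrange(q)
            r[i][j], r[j][i] = m, (-m) % q

# Each user publishes a single masked value; no further interaction is needed.
shares = {i: (inputs[i] + sum(r[i].values())) % q for i in users}

total = sum(shares.values()) % q   # masks cancel, leaving the plain sum 12+30+5
```

Each individual share is uniformly random, yet their sum equals the sum of the inputs.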
Last updated:  2024-03-25
Proteus: A Pipelined NTT Architecture Generator
Florian Hirner, Ahmet Can Mert, and Sujoy Sinha Roy
Number Theoretic Transform (NTT) is a fundamental building block in emerging cryptographic constructions like fully homomorphic encryption, post-quantum cryptography and zero-knowledge proofs. In this work, we introduce Proteus, an open-source parametric hardware generator that produces pipelined architectures for the NTT. For a given parameter set, including the polynomial degree and the size of the coefficient modulus, Proteus can generate Radix-2 NTT architectures using Single-path Delay Feedback (SDF) and Multi-path Delay Commutator (MDC) approaches. We also present a detailed analysis of NTT implementation approaches and use several optimizations to achieve the best NTT configuration. Our evaluations demonstrate performance gains of up to $1.8\times$ compared to SDF and MDC-based NTT implementations in the literature. Our SDF and MDC architectures use 1.75× and 6.5× fewer DSPs, and 3× and 10.5× fewer BRAMs, respectively, compared to state-of-the-art SDF and MDC-based NTT implementations.
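The transform that such generators implement in hardware can be stated compactly in software. The following is a minimal recursive radix-2 Cooley-Tukey NTT over the toy field Z_17 (parameters chosen for readability only, unrelated to Proteus's hardware parameter sets):

```python
def ntt(a, w, p):
    """Radix-2 NTT of a over Z_p; w must be a primitive len(a)-th root of unity."""
    n = len(a)
    if n == 1:
        return a[:]
    even = ntt(a[0::2], w * w % p, p)    # w^2 is a primitive (n/2)-th root of unity
    odd = ntt(a[1::2], w * w % p, p)
    out, t = [0] * n, 1
    for i in range(n // 2):
        out[i] = (even[i] + t * odd[i]) % p           # butterfly, top half
        out[i + n // 2] = (even[i] - t * odd[i]) % p  # butterfly, bottom half
        t = t * w % p                                 # next twiddle factor
    return out

def intt(a, w, p):
    """Inverse NTT: run the NTT with w^-1 and scale by n^-1 mod p."""
    n_inv = pow(len(a), -1, p)           # requires Python 3.8+
    return [x * n_inv % p for x in ntt(a, pow(w, -1, p), p)]

p, w = 17, 9    # 9 has order 8 modulo 17, i.e. it is a primitive 8th root of unity
```

A hardware pipeline such as an SDF or MDC architecture unrolls exactly these butterfly stages, one stage per pipeline level.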
Last updated:  2023-04-17
Do we need to change some things? Open questions posed by the upcoming post-quantum migration to existing standards and deployments
Panos Kampanakis, Tancrède Lepoint
Cryptographic algorithms are vital components ensuring the privacy and security of computer systems. They have constantly improved and evolved over the years following new developments, attacks, breaks, and lessons learned. A recent example is that of quantum-resistant cryptography, which has gained a lot of attention in the last decade and is leading to new algorithms being standardized today. These algorithms, however, present a real challenge: they come with strikingly different size and performance characteristics from their classical counterparts. At the same time, common foundational aspects of our transport protocols have lagged behind as the Internet remains a very diverse space in which different use-cases and parts of the world have different needs. This vision paper motivates more research and possible standards updates related to the upcoming quantum-resistant cryptography migration. It stresses the importance of addressing amplification reflection attacks and congestion control concerns in transport protocols and presents research and standardization takeaways for assessing the impact and the efficacy of potential countermeasures. It emphasizes the need to go beyond the standardization of key encapsulation mechanisms in order to address the numerous protocols and deployments of public-key encryption while avoiding pitfalls. Finally, it motivates the critical need for research in anonymous credentials and blind signatures at the core of numerous deployments and standardization efforts aimed at providing privacy-preserving trust signals.
Last updated:  2024-03-01
Software with Certified Deletion
James Bartusek, Vipul Goyal, Dakshita Khurana, Giulio Malavolta, Justin Raizes, and Bhaskar Roberts
Is it possible to prove the deletion of a computer program after having executed it? While this task is clearly impossible using classical information alone, the laws of quantum mechanics may admit a solution to this problem. In this work, we propose a new approach to answer this question, using quantum information. In the interactive setting, we present the first fully-secure solution for blind delegation with certified deletion, assuming post-quantum hardness of the learning with errors (LWE) problem. In the non-interactive setting, we propose a construction of obfuscation with certified deletion, assuming post-quantum iO and one-way functions. Our main technical contribution is a new deletion theorem for subspace coset states [Vidick and Zhang, EUROCRYPT'21, Coladangelo et al., CRYPTO'21], which enables a generic compiler that adds the certified deletion guarantee to a variety of cryptographic primitives. In addition to our main result, this allows us to obtain a host of new primitives, such as functional encryption with certified deletion and secure software leasing for an interesting class of programs. In fact, we are able for the first time to achieve a stronger notion of secure software leasing, where even a dishonest evaluator cannot evaluate the program after returning it.
Last updated:  2023-04-06
Public Key Encryption with Secure Key Leasing
Shweta Agrawal, Fuyuki Kitagawa, Ryo Nishimaki, Shota Yamada, Takashi Yamakawa
We introduce the notion of public key encryption with secure key leasing (PKE-SKL). Our notion supports the leasing of decryption keys so that a leased key achieves the decryption functionality but comes with the guarantee that if the quantum decryption key returned by a user passes a validity test, then the user has lost the ability to decrypt. Our notion is similar in spirit to the notion of secure software leasing (SSL) introduced by Ananth and La Placa (Eurocrypt 2021) but captures significantly more general adversarial strategies. In more detail, our adversary is not restricted to use an honest evaluation algorithm to run pirated software. Our results can be summarized as follows: 1. Definitions: We introduce the definition of PKE with secure key leasing and formalize a security notion that we call indistinguishability against key leasing attacks (IND-KLA security). We also define a one-wayness notion for PKE-SKL that we call OW-KLA security and show that an OW-KLA secure PKE-SKL scheme can be lifted to an IND-KLA secure one by using the (quantum) Goldreich-Levin lemma. 2. Constructing IND-KLA PKE with Secure Key Leasing: We provide a construction of OW-KLA secure PKE-SKL (which implies IND-KLA secure PKE-SKL as discussed above) by leveraging a PKE scheme that satisfies a new security notion that we call consistent or inconsistent security against key leasing attacks (CoIC-KLA security). We then construct a CoIC-KLA secure PKE scheme using 1-key Ciphertext-Policy Functional Encryption (CPFE) that in turn can be based on any IND-CPA secure PKE scheme. 3. Identity Based Encryption, Attribute Based Encryption and Functional Encryption with Secure Key Leasing: We provide definitions of secure key leasing in the context of advanced encryption schemes such as identity based encryption (IBE), attribute-based encryption (ABE) and functional encryption (FE). Then we provide constructions by combining the above PKE-SKL with standard IBE, ABE and FE schemes. 
Notably, our definitions allow the adversary to request distinguishing keys in the security game, namely, keys that distinguish the challenge bit by simply decrypting the challenge ciphertext, so long as it returns them (and they pass the validity test) before it sees the challenge ciphertext. All our constructions satisfy this stronger definition, albeit with the restriction that only a bounded number of such keys be allowed to the adversary in the IBE and ABE (but not FE) security games. Prior to our work, the notion of single decryptor encryption (SDE) has been studied in the context of PKE (Georgiou and Zhandry, Eprint 2020) and FE (Kitagawa and Nishimaki, Asiacrypt 2022) but all their constructions rely on strong assumptions including indistinguishability obfuscation. In contrast, our constructions do not require any additional assumptions, showing that PKE/IBE/ABE/FE can be upgraded to support secure key leasing for free.
Last updated:  2023-06-08
DualMS: Efficient Lattice-Based Two-Round Multi-Signature with Trapdoor-Free Simulation
Yanbo Chen
A multi-signature scheme allows multiple signers to jointly sign a common message. In recent years, two lattice-based two-round multi-signature schemes based on Dilithium-G were proposed: DOTT by Damgård, Orlandi, Takahashi, and Tibouchi (PKC'21) and MuSig-L by Boschini, Takahashi, and Tibouchi (CRYPTO'22). In this work, we propose a new lattice-based two-round multi-signature scheme called DualMS. Compared to DOTT, DualMS is likely to significantly reduce signature size, since it replaces an opening to a homomorphic trapdoor commitment with a Dilithium-G response in the signature. Compared to MuSig-L, concrete parameters show that DualMS has smaller public keys, smaller signatures, and lower communication, while the first round cannot be preprocessed offline as in MuSig-L. The main reason behind such improvements is a trapdoor-free "dual signing simulation" of our scheme. Signature simulation of DualMS is virtually the same as the normal signing procedure and does not use lattice trapdoors like DOTT and MuSig-L.
Last updated:  2023-03-02
Generic Attack on Duplex-Based AEAD Modes using Random Function Statistics
Henri Gilbert, Rachelle Heim Boissier, Louiza Khati, Yann Rotella
Duplex-based authenticated encryption modes with a sufficiently large key length are proven to be secure up to the birthday bound 2^(c/2), where c is the capacity. However, this bound is not known to be tight and the complexity of the best known generic attack, which is based on multicollisions, is much larger: it reaches (2^c)/α where α represents a small security loss factor. There is thus an uncertainty on the true extent of security beyond the bound 2^(c/2) provided by such constructions. In this paper, we describe a new generic attack against several duplex-based AEAD modes. Our attack leverages random function statistics and produces a forgery in time complexity O(2^(3c/4)) using negligible memory and no encryption queries. Furthermore, for some duplex-based modes, our attack recovers the secret key with a negligible amount of additional computation. Most notably, our attack breaks a security claim made by the designers of the NIST lightweight competition candidate Xoodyak. This attack is a step further towards determining the exact security provided by duplex-based constructions.
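The "random function statistics" in question are the classical rho-shape statistics of iterating a random function: starting from a random point on N states, the first repeated state appears after roughly sqrt(pi*N/2) steps. A small simulation (illustrative only, unrelated to the concrete attack parameters) makes this square-root behaviour visible:

```python
import math
import random

def rho_length(N, rng):
    """Steps until the first repeated state when iterating a fresh random function."""
    f = [rng.randrange(N) for _ in range(N)]   # a uniformly random function on N points
    seen, x, steps = set(), rng.randrange(N), 0
    while x not in seen:
        seen.add(x)
        x = f[x]                               # walk along the functional graph
        steps += 1
    return steps

rng = random.Random(2023)
N, trials = 4096, 300
avg = sum(rho_length(N, rng) for _ in range(trials)) / trials
expected = math.sqrt(math.pi * N / 2)          # ≈ 80 for N = 4096
```

The empirical average concentrates around sqrt(pi*N/2), far below N; the attack exploits such below-2^c statistics of the duplex's state-update function.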
Last updated:  2024-10-31
A Greedy Global Framework for Lattice Reduction Using Deep Insertions
Sanjay Bhattacherjee, Julio Hernandez-Castro, and Jack Moyler
LLL-style lattice reduction algorithms iteratively employ size reduction and reordering on ordered basis vectors to find progressively shorter, more orthogonal vectors. DeepLLL reorders the basis through deep insertions, yielding much shorter vectors than LLL. DeepLLL was introduced alongside BKZ; however, the latter has received greater attention and has emerged as the state-of-the-art. We first show that LLL-style algorithms work with a designated measure of basis quality and iteratively improve it; specifically, DeepLLL improves a sublattice measure based on the generalised Lovász condition. We then introduce a new generic framework X-GG for lattice reduction algorithms that work with a measure X of basis quality. X-GG globally searches for deep insertions that minimise X in each iteration. We instantiate the framework with two quality measures — basis potential (Pot) and squared sum (SS) — both of which have corresponding DeepLLL algorithms. We prove polynomial runtimes for our X-GG algorithms and also prove their output to be X-DeepLLL reduced. Our experiments on non-preprocessed bases show that X-GG produces better quality outputs whilst being much faster than the corresponding DeepLLL algorithms. We also compare SS-GG and the FPLLL implementation of BKZ with LLL-preprocessed bases. In small dimensions (40 to 210), SS-GG is significantly faster than BKZ with block sizes 8 to 12, while simultaneously also providing better output quality in most cases. In higher dimensions (250 and beyond), by varying the threshold $\delta$ for deep insertion, SS-GG offers new trade-offs between the output quality and runtime. On the one hand, it provides significantly better runtime than BKZ-5 with worse output quality; on the other hand, it is significantly faster than BKZ-21 while providing increasingly better output quality after around dimension 350.
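The greedy global idea can be sketched in a few lines for the SS measure (an illustrative brute-force toy, not the paper's algorithm: size reduction is omitted, and the Gram-Schmidt orthogonalisation is naively recomputed for every candidate insertion). In each round, every possible deep insertion is scored by the squared-sum measure and the best one is applied.

```python
import numpy as np

def ss(B):
    """Squared-sum measure: sum of squared Gram-Schmidt norms of the ordered basis B."""
    Bstar = np.array(B, dtype=float)
    for i in range(len(B)):
        for j in range(i):
            mu = (Bstar[j] @ Bstar[i]) / (Bstar[j] @ Bstar[j])
            Bstar[i] -= mu * Bstar[j]          # modified Gram-Schmidt step
    return float(sum(v @ v for v in Bstar))

def ss_gg(B):
    """Greedy global reduction: repeatedly apply the deep insertion that most decreases SS."""
    B = [list(map(int, row)) for row in B]
    while True:
        best, move = ss(B), None
        for k in range(1, len(B)):             # vector b_k to move forward
            for i in range(k):                 # candidate target position i < k
                C = B[:i] + [B[k]] + B[i:k] + B[k + 1:]
                if ss(C) < best - 1e-9:
                    best, move = ss(C), (i, k)
        if move is None:                       # no insertion improves SS: done
            return B
        i, k = move
        B = B[:i] + [B[k]] + B[i:k] + B[k + 1:]
```

Since only reorderings are applied and SS strictly decreases in every round, the loop terminates; the real algorithm interleaves size reduction and maintains the GSO incrementally.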
Last updated:  2023-09-12
Webb Protocol: A cross-chain private application and governance protocol.
Drew Stone
In this paper, we present the Webb Protocol, a system for building and governing cross-chain applications that can support shared anonymity set functionality across a set of identical bridged systems on compatible blockchains. The Webb Protocol is composed of two major protocols that deal with the storage, update, and validation of data and state changes that occur on a bridge and that are relevant to replicate on each connected chain. State is efficiently verifiable through the use of Merkle trees, and privacy is provided using zero-knowledge proofs of membership. Together, one can create applications leveraging distributed state with private property testing capabilities. Both financial and non-financial applications are described as motivating examples within the paper.
Last updated:  2023-02-23
A MIQCP-Based Automatic Search Algorithm for Differential-Linear Trails of ARX Ciphers (Long Paper)
Guangqiu Lv, Chenhui Jin, Ting Cui
Differential-linear (DL) cryptanalysis has undergone remarkable advancements since it was first proposed by Langford and Hellman \cite{langford1994differential} in 1994. At CRYPTO 2022, Niu et al. studied the (rotational) DL cryptanalysis of $n$-bit modulo additions with 2 inputs, i.e., $\boxplus_2$, and presented a technique for evaluating the (rotational) DL correlation of ARX ciphers. However, the problem of how to automatically search for good DL trails on ARX with solvers was left open, which is the focus of this work. In this paper, we solve this open problem through some techniques to reduce complexity and a technique for transforming a matrix multiplication chain into Mixed Integer Quadratically-Constrained Programs (MIQCP). First, the computational complexity of the DL correlation of $\boxplus_2$ is reduced to approximately one-eighth of the state of the art, and it can be computed by a $2\times2$ matrix multiplication chain of the same length as before. Some methods to further reduce complexity in special cases have been studied. Additionally, we present how to compute the extended (rotational) DL correlations of $\boxplus_k$ for $k\ge 2$, where the two output linear masks of the cipher pairs can be different. Second, to ensure that the existing solver Gurobi\footnote{The solver used in this paper is Gurobi, and some ready-made functions in Gurobi are also used, such as LOG\_2 and ABS. The source code is available at \url{https://}. } can compute DL correlations of $\boxplus_2$, we propose a method to transform an arbitrary matrix multiplication chain into a MIQCP, which forms the foundation of our automatic search of DL trails in ARX ciphers. Third, in ARX ciphers, we use a single DL trail under some explicit conditions to give a good estimate of the correlation, which avoids the exhaustion of intermediate differences. We then derive an automatic method for evaluating the DL correlations of ARX, which we apply to Alzette and some versions of SPECK.
Experimentally verified results confirm the validity of our method, with the predicted correlations being close to the experimental ones. To the best of our knowledge, this method finds the best DL distinguishers for these ARX primitives currently. Furthermore, we present the lowest time-complexity attacks against 12-14 rounds of SPECK32 to date.
Last updated:  2023-03-13
Privacy-Preserving Tree-Based Inference with Fully Homomorphic Encryption
Jordan Frery, Andrei Stoian, Roman Bredehoft, Luis Montero, Celia Kherfallah, Benoit Chevallier-Mames, Arthur Meyre
Privacy enhancing technologies (PETs) have been proposed as a way to protect the privacy of data while still allowing for data analysis. In this work, we focus on Fully Homomorphic Encryption (FHE), a powerful tool that allows arbitrary computations to be performed on encrypted data. FHE has received much attention in the past few years and has now reached realistic execution times while preserving correctness. More precisely, we explain in this paper how we apply FHE to tree-based models and obtain state-of-the-art solutions over encrypted tabular data. We show that our method is applicable to a wide range of tree-based models, including decision trees, random forests, and gradient boosted trees, and has been implemented within the Concrete-ML library, which is open-source at https://github.com/zama-ai/concrete-ml. With a selected set of use-cases, we demonstrate that our FHE version is very close to the unprotected version in terms of accuracy.
Last updated:  2023-02-22
Deep Neural Networks for Encrypted Inference with TFHE
Andrei Stoian, Jordan Frery, Roman Bredehoft, Luis Montero, Celia Kherfallah, Benoit Chevallier-Mames
Fully homomorphic encryption (FHE) is an encryption method that allows computation to be performed on encrypted data, without decryption. FHE preserves the privacy of the users of online services that handle sensitive data, such as health data, biometrics, credit scores and other personal information. A common way to provide a valuable service on such data is through machine learning and, at this time, Neural Networks are the dominant machine learning model for unstructured data. In this work we show how to construct Deep Neural Networks (DNNs) that are compatible with the constraints of TFHE, an FHE scheme that allows arbitrary-depth computation circuits. We discuss the constraints and show the architecture of DNNs for two computer vision tasks. We benchmark the architectures using the Concrete stack, an open-source implementation of TFHE.
Last updated:  2023-02-22
Traitor Tracing with N^(1/3)-size Ciphertexts and O(1)-size Keys from k-Lin
Junqing Gong, Ji Luo, Hoeteck Wee
We present a pairing-based traitor tracing scheme for $N$ users with $$|\mathsf{pk}| = |\mathsf{ct}| = O(N^{1/3}), \quad |\mathsf{sk}| = O(1).$$ This is the first pairing-based scheme to achieve ${|\mathsf{pk}|\cdot|\mathsf{sk}|\cdot|\mathsf{ct}|=o(N)}$. Our construction relies on the (bilateral) $k$-Lin assumption, and achieves private tracing and full collusion resistance. Our result simultaneously improves upon the sizes of $\mathsf{pk},\mathsf{ct}$ in Boneh–Sahai–Waters [Eurocrypt '06] and the size of $\mathsf{sk}$ in Zhandry [Crypto '20], while further eliminating the reliance on the generic group model in the latter work.
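For context, the claimed product bound follows directly from the stated sizes:

```latex
|\mathsf{pk}|\cdot|\mathsf{sk}|\cdot|\mathsf{ct}|
  = O(N^{1/3}) \cdot O(1) \cdot O(N^{1/3})
  = O(N^{2/3}) = o(N).
```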
Last updated:  2023-02-23
Exploiting Non-Full Key Additions: Full-Fledged Automatic Demirci-Selcuk Meet-in-the-Middle Cryptanalysis of SKINNY
Danping Shi, Siwei Sun, Ling Song, Lei Hu, Qianqian Yang
The Demirci-Selçuk meet-in-the-middle (DS-MITM) attack is a sophisticated variant of differential attacks. Due to its sophistication, it is hard to efficiently find the best DS-MITM attacks on most ciphers \emph{except} for AES. Moreover, the current automatic tools only capture the most basic version of DS-MITM attacks, and the critical techniques developed for enhancing the attacks (e.g., differential enumeration and key-dependent-sieve) still rely on manual work. In this paper, we develop a full-fledged automatic framework integrating all known techniques (differential enumeration, key-dependent-sieve, key bridging, etc.) for the DS-MITM attack that can produce key-recovery attacks directly rather than only search for distinguishers. Moreover, we develop a new technique that is able to exploit partial key additions to generate more linear relations beneficial to the attacks. We apply the framework to the SKINNY family of block ciphers and obtain significantly improved results. In particular, all known DS-MITM attacks on the respective versions of SKINNY are improved by at least 2 rounds, and the data, memory, or time complexities of some attacks are reduced even compared to previous best attacks penetrating fewer rounds.
Last updated:  2023-02-27
Mitigating Decentralized Finance Liquidations with Reversible Call Options
Kaihua Qin, Jens Ernstberger, Liyi Zhou, Philipp Jovanovic, Arthur Gervais
Liquidations in DeFi are both a blessing and a curse — whereas liquidations prevent lenders from capital loss, they simultaneously lead to liquidation spirals and system-wide failures. Since most lending and borrowing protocols assume liquidations are indispensable, there is an increased interest in alternative constructions that prevent immediate systemic failure under uncertain circumstances. In this work, we introduce reversible call options, a novel financial primitive that enables the seller of a call option to terminate it before maturity. We apply reversible call options to lending in DeFi and devise Miqado, a protocol for lending platforms to replace the liquidation mechanisms. To the best of our knowledge, Miqado is the first protocol that actively mitigates liquidations to reduce the risk of liquidation spirals. Instead of selling collateral, Miqado incentivizes external entities, so-called supporters, to top up a borrowing position and grant the borrower additional time to rescue the debt. Our simulation shows that Miqado reduces the amount of liquidated collateral by 89.82% in a worst-case scenario.
Last updated:  2025-03-07
XOCB: Beyond-Birthday-Bound Secure Authenticated Encryption Mode with Rate-One Computation (Full Version)
Zhenzhen Bao, Seongha Hwang, Akiko Inoue, Byeonghak Lee, Jooyoung Lee, and Kazuhiko Minematsu
We present a new block cipher mode of operation for authenticated encryption (AE), dubbed XOCB, that has the following features: (1) beyond-birthday-bound (BBB) security based on the standard pseudorandom assumption of the internal block cipher if the maximum block length is sufficiently smaller than the birthday bound, (2) rate-1 computation, and (3) supporting any block cipher with any key length. Namely, XOCB has effectively the same efficiency as the seminal OCB while having stronger quantitative security without any change in the security model or the required primitive in OCB. Although numerous studies have been conducted in the past, our XOCB is the first mode of operation to achieve these multiple goals simultaneously.
Last updated:  2023-11-19
Obfuscation of Pseudo-Deterministic Quantum Circuits
James Bartusek, Fuyuki Kitagawa, Ryo Nishimaki, and Takashi Yamakawa
We show how to obfuscate pseudo-deterministic quantum circuits in the classical oracle model, assuming the quantum hardness of learning with errors. Given the classical description of a quantum circuit $Q$, our obfuscator outputs a quantum state $\ket{\widetilde{Q}}$ that can be used to evaluate $Q$ repeatedly on arbitrary inputs. Instantiating the classical oracle using any candidate post-quantum indistinguishability obfuscator gives us the first candidate construction of indistinguishability obfuscation for all polynomial-size pseudo-deterministic quantum circuits. In particular, our scheme is the first candidate obfuscator for a class of circuits that is powerful enough to implement Shor's algorithm (SICOMP 1997). Our approach follows Bartusek and Malavolta (ITCS 2022), who obfuscate \emph{null} quantum circuits by obfuscating the verifier of an appropriate classical verification of quantum computation (CVQC) scheme. We go beyond null circuits by constructing a publicly-verifiable CVQC scheme for quantum \emph{partitioning} circuits, which can be used to verify the evaluation procedure of Mahadev's quantum fully-homomorphic encryption scheme (FOCS 2018). We achieve this by upgrading the one-time secure scheme of Bartusek (TCC 2021) to a fully reusable scheme, via a publicly-decodable \emph{Pauli functional commitment}, which we formally define and construct in this work. This commitment scheme, which satisfies a notion of binding against committers that can access the receiver's standard and Hadamard basis decoding functionalities, is constructed by building on techniques of Amos, Georgiou, Kiayias, and Zhandry (STOC 2020) introduced in the context of equivocal but collision-resistant hash functions.
Last updated:  2023-02-22
Hardware Root-of-Trust implementations in Trusted Execution Environments
Usman Ali, Hamza Omar, Chujiao Ma, Vaibhav Garg, Omer Khan
Hardware-based Root of Trust (HRT) is considered the gold standard for bootstrapping trust in secure computing. This paper analyzes HRT implementations across state-of-the-art TEEs and differentiates HRT implementations along two dimensions: 1) Security Properties & Threats and 2) Hardware Capabilities. It then analyzes and compares 1) Intel SGX, 2) ARM TrustZone, 3) NXP Trust Architecture, 4) AMD SEV, 5) Microsoft Pluton, and 6) Apple T2 HRTs in terms of threats, security properties, and capabilities.
Last updated:  2023-02-21
A Lower Bound on the Length of Signatures Based on Group Actions and Generic Isogenies
Dan Boneh, Jiaxin Guan, Mark Zhandry
We give the first black box lower bound for signature protocols that can be described as group actions, which include many based on isogenies. We show that, for a large class of signature schemes making black box use of a (potentially non-abelian) group action, the signature length must be $\Omega(\lambda^2/\log\lambda)$. Our class of signatures generalizes all known signatures that derive security exclusively from the group action, and our lower bound matches the state of the art, showing that the signature length cannot be improved without deviating from the group action framework.
Last updated:  2024-05-22
Anamorphic Encryption, Revisited
Fabio Banfi, Konstantin Gegier, Martin Hirt, Ueli Maurer, and Guilherme Rito
An anamorphic encryption scheme allows two parties who share a so-called double key to embed covert messages in ciphertexts of an established PKE scheme. This protects against a dictator who can force the receiver to reveal the secret keys for the PKE scheme, but who is oblivious to the existence of the double key. We identify two limitations of the original model by Persiano, Phan, and Yung (EUROCRYPT 2022). First, in their definition a double key can only be generated once, together with a key-pair. This has the drawback that a receiver who wants to use the anamorphic mode after a dictator comes to power needs to deploy a new key-pair, a potentially suspicious act. Second, a receiver cannot distinguish whether or not a ciphertext contains a covert message. In this work we propose a new model that overcomes these limitations. First, we allow multiple double keys to be associated with a key-pair after its deployment. This also enables deniability in case the double key only depends on the public key. Second, we propose a natural robustness notion, which guarantees that anamorphically decrypting a regularly encrypted message results in a special symbol indicating that no covert message is contained, which also eliminates certain attacks. Finally, to instantiate our new, stronger definition of anamorphic encryption, we provide generic and concrete constructions. Concretely, we show that ElGamal and Cramer-Shoup satisfy a new condition, selective randomness recoverability, which enables robust anamorphic extensions, and we also provide a robust anamorphic extension for RSA-OAEP.
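The general flavour of embedding a covert message in ordinary ciphertexts can be illustrated with a toy ElGamal instance. This is a hedged sketch of the folklore rejection-sampling embedding, not the paper's selective-randomness-recoverability construction, and all parameters (the safe prime `p`, the double key `dk`) are hypothetical toy choices:

```python
import hashlib, random

# Toy parameters -- far too small for real security.
p, g = 7727, 2                       # safe prime, so g has large order
rng = random.Random(7)
x = rng.randrange(2, p - 1)          # receiver's ElGamal secret key
h = pow(g, x, p)                     # receiver's public key
dk = b"shared-double-key"            # anamorphic double key

def tag(c1: int) -> int:
    """4-bit covert channel: keyed hash of the first ciphertext part."""
    return hashlib.sha256(dk + c1.to_bytes(2, "big")).digest()[0] % 16

def anamorphic_encrypt(m: int, covert: int):
    """Re-sample the randomness r until the tag of c1 equals the covert
    nibble (about 16 attempts on average); the ciphertext is a perfectly
    ordinary ElGamal ciphertext."""
    while True:
        r = rng.randrange(2, p - 1)
        c1 = pow(g, r, p)
        if tag(c1) == covert:
            return c1, m * pow(h, r, p) % p

def decrypt(c):
    """Regular ElGamal decryption; unaffected by the covert channel."""
    c1, c2 = c
    return c2 * pow(c1, p - 1 - x, p) % p   # c1^(-x) via Fermat

def anamorphic_decrypt(c):
    """Holder of dk recomputes the tag to read the covert nibble."""
    return tag(c[0])

ct = anamorphic_encrypt(1234, 11)
assert decrypt(ct) == 1234 and anamorphic_decrypt(ct) == 11
```

Note that this toy sender pays an exponential cost in the covert-message length, one of the inefficiencies that dedicated constructions such as those in the paper avoid.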
Last updated:  2023-02-21
Unique-Path Identity Based Encryption With Applications to Strongly Secure Messaging
Paul Rösler, Daniel Slamanig, Christoph Striecks
Hierarchical Identity Based Encryption (HIBE) is a well-studied, versatile tool used in many cryptographic protocols. Yet, since the performance of all known HIBE constructions is broadly considered prohibitive, some real-world applications avoid relying on HIBE at the expense of security. A prominent example of this is secure messaging: Strongly secure messaging protocols are provably equivalent to Key-Updatable Key Encapsulation Mechanisms (KU-KEMs; Balli et al., Asiacrypt 2020); so far, all KU-KEM constructions rely on adaptive unbounded-depth HIBE (Poettering and Rösler, Jaeger and Stepanovs, both CRYPTO 2018). By weakening security requirements for better efficiency, many messaging protocols dispense with using HIBE. In this work, we aim to gain better efficiency without sacrificing security. For this, we observe that applications like messaging only need a restricted variant of HIBE for strong security. This variant, which we call Unique-Path Identity Based Encryption (UPIBE), restricts HIBE by requiring that each secret key can delegate at most one subordinate secret key. However, in contrast to fixed secret key delegation in Forward-Secure Public Key Encryption, the delegation in UPIBE, as in HIBE, is uniquely determined by variable identity strings from an exponentially large space. We investigate this mild but surprisingly effective restriction and show that it offers substantial complexity and performance advantages. More concretely, we generically build bounded-depth UPIBE from only bounded-collusion IBE in the standard model; and we generically build adaptive unbounded-depth UPIBE from only selective bounded-depth HIBE in the random oracle model. These results significantly extend the range of underlying assumptions and efficient instantiations. We conclude with a rigorous performance evaluation of our UPIBE design.
Beyond solving challenging open problems by reducing the complexity and improving the efficiency of KU-KEM and strongly secure messaging protocols, we offer a new definitional perspective on the bounded-collusion setting.
Last updated:  2023-10-09
A New Sieving-Style Information-Set Decoding Algorithm
Qian Guo, Thomas Johansson, and Vu Nguyen
The problem of decoding random codes is a fundamental problem for code-based cryptography, including recent code-based candidates in the NIST post-quantum standardization process. In this paper, we present a novel sieving-style information-set decoding (ISD) algorithm, addressing the task of solving the syndrome decoding problem. Our approach involves maintaining a list of weight-$2p$ solution vectors to a partial syndrome decoding problem and then creating new vectors by identifying pairs of vectors that collide in $p$ positions. By gradually increasing the parity-check condition by one and repeating this process iteratively, we find the final solution(s). We show that our novel algorithm performs better than other ISD algorithms in the memory-restricted scenario when applied to McEliece. Notably, in the case of problem instances with very low relative weight, it seems to significantly outperform all previous algorithms. In particular, for the code-based candidate BIKE, the algorithm has lower bit complexity than the previous best results.
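The core merge operation described above — combining two weight-$2p$ vectors whose supports collide in exactly $p$ positions, so that their XOR is again a weight-$2p$ vector — can be sketched in isolation. This is a toy illustration of that single list operation only, with arbitrary parameters, not the full sieving ISD algorithm:

```python
import itertools, random

def random_support(n, w, rng):
    """Support set of a random weight-w binary vector of length n."""
    return frozenset(rng.sample(range(n), w))

def sieve_step(vectors, p):
    """One merge step: for every pair of weight-2p vectors whose supports
    overlap in exactly p positions, their XOR (symmetric difference of
    supports) has weight 2p + 2p - 2p = 2p again."""
    out = set()
    for a, b in itertools.combinations(vectors, 2):
        if len(a & b) == p:
            out.add(a ^ b)   # symmetric difference = XOR of supports
    return out

rng = random.Random(1)
p, n = 2, 30
vectors = {random_support(n, 2 * p, rng) for _ in range(200)}
new = sieve_step(vectors, p)
assert all(len(v) == 2 * p for v in new)
print(len(new), "new weight-4 vectors produced")
```

In the actual algorithm each surviving vector must additionally satisfy one more parity-check equation per iteration; that filtering is omitted here.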
Last updated:  2023-02-21
Fixing and Mechanizing the Security Proof of Fiat-Shamir with Aborts and Dilithium
Manuel Barbosa, Gilles Barthe, Christian Doczkal, Jelle Don, Serge Fehr, Benjamin Grégoire, Yu-Hsuan Huang, Andreas Hülsing, Yi Lee, Xiaodi Wu
We extend and consolidate the security justification for the Dilithium signature scheme. In particular, we identify a subtle but crucial gap that appears in several ROM and QROM security proofs for signature schemes that are based on the Fiat-Shamir with aborts paradigm, including Dilithium. The gap lies in the CMA-to-NMA reduction and was uncovered when trying to formalize a variant of the QROM security proof by Kiltz, Lyubashevsky, and Schaffner (Eurocrypt 2018). The gap was confirmed by the authors, and there seems to be no simple patch for it. We provide new, fixed proofs for the affected CMA-to-NMA reduction, both for the ROM and the QROM, and we perform a concrete security analysis for the case of Dilithium to show that the claimed security level is still valid after addressing the gap. Furthermore, we offer a fully mechanized ROM proof for the CMA-security of Dilithium in the EasyCrypt proof assistant. Our formalization includes several new tools and techniques of independent interest for future formal verification results.
Last updated:  2024-05-14
A Detailed Analysis of Fiat-Shamir with Aborts
Julien Devevey, Pouria Fallahpour, Alain Passelègue, Damien Stehlé, and Keita Xagawa
Lyubashevsky's signatures are based on the Fiat-Shamir with Aborts paradigm. It transforms an interactive identification protocol that has a non-negligible probability of aborting into a signature by repeating executions until a loop iteration does not trigger an abort. Interaction is removed by replacing the challenge of the verifier by the evaluation of a hash function, modeled as a random oracle in the analysis. The access to the random oracle is classical (ROM), resp. quantum (QROM), if one is interested in security against classical, resp. quantum, adversaries. Most analyses in the literature consider a setting with a bounded number of aborts (i.e., signing fails if no signature is output within a prescribed number of loop iterations), while practical instantiations (e.g., Dilithium) run until a signature is output (i.e., loop iterations are unbounded). In this work, we emphasize that combining random oracles with loop iterations induces numerous technicalities for analyzing correctness, run-time, and security of the resulting schemes, both in the bounded and unbounded case. As a first contribution, we highlight errors in all existing analyses. We then provide two detailed analyses in the QROM for the bounded case, adapted from Kiltz, Lyubashevsky, and Schaffner [EUROCRYPT'18] and from Grilo, Hövelmanns, Hülsing, and Majenz [ASIACRYPT'21]. In the process, we prove that the underlying $\Sigma$-protocol achieves a stronger zero-knowledge property than usually considered for $\Sigma$-protocols with aborts, which enables a corrected analysis. A further contribution is a detailed analysis of the case of unbounded aborts, which induces several additional subtleties.
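The abort loop at the heart of the paradigm can be made concrete with a toy, integer-based Schnorr-like scheme. All parameters here are hypothetical and chosen only to make the rejection step visible (this is not Dilithium): the response z = y + c*s is released only when every coordinate sits safely inside the masking range, so the published z leaks nothing about the small secret s.

```python
import hashlib, random

q, n, eta, B = 7681, 16, 1, 1 << 14               # toy parameters, insecure
rng = random.Random(0)
a = [rng.randrange(q) for _ in range(n)]          # public vector
s = [rng.randint(-eta, eta) for _ in range(n)]    # small secret
t = sum(ai * si for ai, si in zip(a, s)) % q      # public key

def challenge(msg, w):
    """Random-oracle challenge in [0, 256)."""
    return hashlib.sha256(f"{msg}|{w}".encode()).digest()[0]

def sign(msg):
    """Fiat-Shamir with aborts: retry until no coordinate of z leaks s."""
    while True:
        y = [rng.randint(-B, B) for _ in range(n)]     # masking vector
        w = sum(ai * yi for ai, yi in zip(a, y)) % q
        c = challenge(msg, w)
        z = [yi + c * si for yi, si in zip(y, s)]
        if all(abs(zi) <= B - 256 * eta for zi in z):  # abort otherwise
            return c, z

def verify(msg, sig):
    c, z = sig
    if any(abs(zi) > B - 256 * eta for zi in z):
        return False
    w = (sum(ai * zi for ai, zi in zip(a, z)) - c * t) % q
    return challenge(msg, w) == c

sig = sign("hello")
assert verify("hello", sig)
```

Verification recomputes w = <a,z> - c*t, which equals <a,y> exactly for honest signatures; tracking how the hash queries interact with the retry loop is precisely the source of the technicalities the abstract refers to.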
Last updated:  2023-09-19
Semi-Quantum Copy-Protection and More
Céline Chevalier, Paul Hermouet, and Quoc-Huy Vu
Properties of quantum mechanics have enabled the emergence of quantum cryptographic protocols achieving important goals which are proven to be impossible classically. Unfortunately, this usually comes at the cost of needing quantum power from every party in the protocol, while arguably a more realistic scenario would be a network of classical clients, classically interacting with a quantum server. In this paper, we focus on copy-protection, which is a quantum primitive that allows a program to be evaluated, but not copied, and which has attracted particular interest due to its links to other unclonable cryptographic primitives. Our main contribution is to show how to dequantize existing quantum copy-protection from hidden coset states, by giving a construction for classically-instructed remote state preparation for coset states. We then apply this dequantizer to obtain semi-quantum cryptographic protocols for copy-protection and tokenized signatures with strong unforgeability. In the process, we present the first secure copy-protection scheme for point functions in the plain model and a new direct product hardness property of coset states which immediately implies a strongly unforgeable tokenized signature scheme.
Last updated:  2024-08-25
Memory-Efficient Attacks on Small LWE Keys
Andre Esser, Arindam Mukherjee, and Santanu Sarkar
Combinatorial attacks on small max norm LWE keys suffer from enormous memory requirements, which render them inefficient in realistic attack scenarios. Therefore, more memory-efficient substitutes for these algorithms are needed. In this work, we provide new combinatorial algorithms for recovering small max norm LWE secrets outperforming previous approaches whenever the available memory is limited. We provide analyses of our algorithms for secret key distributions of current NTRU, Kyber and Dilithium variants, showing that our new approach outperforms previous memory-efficient algorithms. For instance, considering uniformly random ternary secrets of length $n$ we improve the best known time complexity for \emph{polynomial memory} algorithms from $2^{1.063n}$ down to $2^{0.926n}$. We obtain even larger gains for LWE secrets in $\{-m,\ldots,m\}^n$ with $m=2,3$ as found in Kyber and Dilithium. For example, for uniformly random keys in $\{-2,\ldots,2\}^n$, as is the case for Dilithium, we improve the previously best time under polynomial memory restriction from $2^{1.742n}$ down to $2^{1.282n}$. Finally, we provide novel time-memory trade-offs continuously interpolating between our polynomial memory algorithms and the best algorithms in the unlimited memory case (May, Crypto 2021).
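For background, the classical meet-in-the-middle baseline that such combinatorial attacks refine can be sketched on a noise-free toy instance with a ternary secret. The parameters are hypothetical toy choices; real attacks must handle noise and far larger dimensions, and it is exactly the 3^(n/2)-sized table below whose memory cost the paper's algorithms reduce:

```python
import itertools, random

rng = random.Random(3)
q, n = 97, 10                                    # toy parameters, noise-free
s = [rng.choice([-1, 0, 1]) for _ in range(n)]   # ternary secret
A = [[rng.randrange(q) for _ in range(n)] for _ in range(n)]
b = [sum(A[i][j] * s[j] for j in range(n)) % q for i in range(n)]

def mitm():
    """Split s = (s1 | s2): tabulate A1*s1, then look up b - A2*s2.
    Time and memory ~ 3^(n/2) instead of 3^n time for brute force."""
    h = n // 2
    table = {}
    for s1 in itertools.product([-1, 0, 1], repeat=h):
        key = tuple(sum(A[i][j] * s1[j] for j in range(h)) % q
                    for i in range(n))
        table[key] = s1
    for s2 in itertools.product([-1, 0, 1], repeat=n - h):
        key = tuple((b[i] - sum(A[i][h + j] * s2[j] for j in range(n - h))) % q
                    for i in range(n))
        if key in table:                         # collision => A*s' = b
            return list(table[key]) + list(s2)

sol = mitm()
assert [sum(A[i][j] * sol[j] for j in range(n)) % q for i in range(n)] == b
```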
Last updated:  2023-02-21
The propagation game: on simulatability, correlation matrices, and probing security
Vittorio Zaccaria
This work is intended for researchers in the field of side-channel attacks, countermeasure analysis, and probing security. It reports on a formalization of simulatability in terms of linear algebra properties, which we think will provide a useful tool in the practitioner's toolbox. The formalization allowed us to revisit some existing definitions (such as probe isolating non-interference) in a simpler way that corresponds to the propagation of erase morphisms. From a theoretical perspective, we shed light on probabilistic definitions of simulatability and matrix-based spectral approaches. This could mean, in practice, that potentially better tools can be built. Readers will find a different, and perhaps less contrived, definition of simulatability, which could enable new forms of reasoning. This work does not cover any practical implementation of the proposed tools, which is left for future work.
Last updated:  2023-02-21
Lynx: Family of Lightweight Authenticated Encryption Schemes based on Tweakable Blockcipher
Munawar Hasan, Donghoon Chang
The widespread deployment of low-power and handheld devices opens an opportunity to design lightweight authenticated encryption schemes. The schemes so proposed must also prove their resilience under various security notions. Romulus-N1, a primary variant of the Romulus-N family and a finalist in the NIST (National Institute of Standards and Technology) lightweight cryptography competition, is an authenticated encryption scheme with associated data based on a tweakable blockcipher; it provides beyond-birthday-bound integrity security in the nonce-respecting scenario but fails to provide integrity security in the nonce-misuse and RUP (release of unverified plaintext) scenarios. In this paper, we propose lynx, a family of $14$ one-pass, rate-1 lightweight authenticated encryption schemes with associated data based on a tweakable blockcipher, that provides birthday-bound integrity security in the nonce-respecting as well as the nonce-misuse and RUP scenarios, and birthday-bound privacy in the nonce-respecting scenario. To create such a family of schemes we propose a family of functions $\mathcal{F}$ that yields a total of $72$ cases, of which we show that only $14$ can be used to create authenticated encryption schemes. We provide an implementation of one member of the lynx family on six different hardware platforms and compare it with Romulus-N1. The comparison clearly shows that the lynx member outperforms Romulus-N1 on all six platforms.
Last updated:  2023-02-21
Pitfalls and Shortcomings for Decompositions and Alignment (Full Version)
Baptiste Lambin, Gregor Leander, Patrick Neumann
In this paper we study, for the first time, the question of under which circumstances a round function of a Substitution-Permutation Network can be decomposed uniquely. More precisely, we provide necessary and sufficient criteria on the non-linear layer for when a decomposition is unique. Our results in particular imply that, when cryptographically strong S-boxes are used, the decomposition is indeed unique. We then apply our findings to the notion of alignment, pointing out that the previous definition allows for primitives that are both aligned and unaligned simultaneously. As a second result, we present experimental data showing that alignment might only have a limited impact. For this, we compare aligned and unaligned versions of the cipher PRESENT.
Last updated:  2023-02-22
Improved Preimage Sampling for Lattices
Corentin Jeudy, Adeline Roux-Langlois, Olivier Sanders
Preimage Sampling is a fundamental process in lattice-based cryptography whose performance directly affects that of the cryptographic mechanisms that rely on it. In 2012, Micciancio and Peikert proposed a new way of generating trapdoors (and an associated preimage sampling procedure) with very interesting features. Unfortunately, in some applications such as digital signatures, the performance may not be as competitive as that of other approaches like Fiat-Shamir with Aborts. In this work we revisit the preimage sampling algorithm proposed by Micciancio and Peikert and make several contributions. We first propose a finer analysis of this procedure which results in drastic efficiency gains of up to 50% on the preimage sizes without affecting security. It can thus be used as a drop-in replacement in every construction resorting to it. We then propose a new preimage sampling method which still relies on the trapdoors of Micciancio and Peikert, but that also bridges to the Fiat-Shamir with Aborts signature paradigm by leveraging rejection sampling. It again leads to dramatic gains of up to 75% compared to the original sampling technique. This opens promising perspectives for the efficiency of advanced lattice-based constructions relying on such mechanisms. As an application of our new procedure, we give the first lattice-based aggregate signature supporting public aggregation and that achieves relevant compression compared to the concatenation of individual signatures. Our scheme is proven secure in the aggregate chosen-key model coined by Boneh et al. in 2003, based on the well-studied assumptions Module Learning With Errors and Module Short Integer Solution.
Last updated:  2023-02-21
Certifying Giant Nonprimes
Charlotte Hoffmann, Pavel Hubáček, Chethan Kamath, Krzysztof Pietrzak
GIMPS and PrimeGrid are large-scale distributed projects dedicated to searching for giant prime numbers, usually of special forms like Mersenne and Proth. The numbers in the current search-space are millions of digits large and the participating volunteers need to run resource-consuming primality tests. Once a candidate prime $N$ has been found, the only way for another party to independently verify the primality of $N$ used to be by repeating the expensive primality test. To avoid the need to recompute each primality test, these projects have recently adopted certifying mechanisms that enable efficient verification of performed tests. However, the mechanisms presently in place only detect benign errors and there is no guarantee against adversarial behavior: a malicious volunteer can mislead the project into rejecting a giant prime as non-prime. In this paper, we propose a practical, cryptographically-sound mechanism for certifying the non-primality of Proth numbers. That is, a volunteer can -- in parallel to running the primality test for $N$ -- generate an efficiently verifiable proof at little extra cost certifying that $N$ is not prime. The interactive protocol has statistical soundness and can be made non-interactive using the Fiat-Shamir heuristic. Our approach is based on a cryptographic primitive called Proof of Exponentiation (PoE) which, for a group $\mathbb{G}$, certifies that a tuple $(x,y,T)\in\mathbb{G}^2\times\mathbb{N}$ satisfies $x^{2^T}=y$ (Pietrzak, ITCS 2019 and Wesolowski, J. Cryptol. 2020). In particular, we show how to adapt Pietrzak's PoE at a moderate additional cost to make it a cryptographically-sound certificate of non-primality.
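The PoE building block can be sketched as a non-interactive (Fiat-Shamir) version of Pietrzak's halving protocol. This is an illustrative toy: the modulus below is a small RSA-style number chosen for demonstration, $T$ must be a power of two, and real deployments work in groups of unknown order with much larger parameters:

```python
import hashlib

N = 10007 * 10009        # toy modulus; real PoEs use groups of unknown order

def sq(x, k):
    """k repeated squarings: x^(2^k) mod N."""
    for _ in range(k):
        x = x * x % N
    return x

def challenge(x, y, mu, T):
    h = hashlib.sha256(f"{x},{y},{mu},{T}".encode()).digest()
    return int.from_bytes(h[:8], "big")

def prove(x, T):
    """Pietrzak-style proof that y = x^(2^T) mod N (T a power of two)."""
    y = sq(x, T)
    proof, cx, cy = [], x, y
    while T > 1:
        T //= 2
        mu = sq(cx, T)                 # midpoint value x^(2^(T/2))
        r = challenge(cx, cy, mu, T)   # Fiat-Shamir challenge
        cx = pow(cx, r, N) * mu % N    # fold the two half-size claims
        cy = pow(mu, r, N) * cy % N
        proof.append(mu)
    return y, proof

def verify(x, y, T, proof):
    cx, cy = x, y
    for mu in proof:
        T //= 2
        r = challenge(cx, cy, mu, T)
        cx = pow(cx, r, N) * mu % N
        cy = pow(mu, r, N) * cy % N
    return T == 1 and cx * cx % N == cy

y, proof = prove(5, 64)
assert verify(5, y, 64, proof)   # log T midpoints replace T squarings
```

The correctness invariant is that if cy = cx^(2^T) and mu = cx^(2^(T/2)), then the folded pair satisfies the half-size claim (cx^r * mu)^(2^(T/2)) = mu^r * cy, so the verifier only ever performs one squaring at the base case.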
Last updated:  2023-06-09
Fast Practical Lattice Reduction through Iterated Compression
Keegan Ryan, Nadia Heninger
We introduce a new lattice basis reduction algorithm with approximation guarantees analogous to the LLL algorithm and practical performance that far exceeds the current state of the art. We achieve these results by iteratively applying precision management techniques within a recursive algorithm structure and show the stability of this approach. We analyze the asymptotic behavior of our algorithm, and show that the heuristic running time is $O(n^{\omega}(C+n)^{1+\varepsilon})$ for lattices of dimension $n$, $\omega\in (2,3]$ bounding the cost of size reduction, matrix multiplication, and QR factorization, and $C$ bounding the log of the condition number of the input basis $B$. This yields a running time of $O\left(n^\omega (p + n)^{1 + \varepsilon}\right)$ for precision $p = O(\log \|B\|_{max})$ in common applications. Our algorithm is fully practical, and we have published our implementation. We experimentally validate our heuristic, give extensive benchmarks against numerous classes of cryptographic lattices, and show that our algorithm significantly outperforms existing implementations.
Last updated:  2024-03-29
Certified Everlasting Secure Collusion-Resistant Functional Encryption, and More
Taiga Hiroka, Fuyuki Kitagawa, Tomoyuki Morimae, Ryo Nishimaki, Tapas Pal, and Takashi Yamakawa
We study certified everlasting secure functional encryption (FE) and many other cryptographic primitives in this work. Certified everlasting security roughly means the following. A receiver possessing a quantum cryptographic object (such as a ciphertext) can issue a certificate showing that the receiver has deleted the cryptographic object and information included in the object (such as the plaintext) was lost. If the certificate is valid, the security is guaranteed even if the receiver becomes computationally unbounded after the deletion. Many cryptographic primitives are known to be impossible (or unlikely) to have information-theoretical security even in the quantum world. Hence, certified everlasting security is a nice compromise (intrinsic to quantum). In this work, we define certified everlasting secure versions of FE, compute-and-compare obfuscation, predicate encryption (PE), secret-key encryption (SKE), public-key encryption (PKE), receiver non-committing encryption (RNCE), and garbled circuits. We also present the following constructions: - Adaptively certified everlasting secure collusion-resistant public-key FE for all polynomial-size circuits from indistinguishability obfuscation and one-way functions. - Adaptively certified everlasting secure bounded collusion-resistant public-key FE for $\mathsf{NC}^1$ circuits from standard PKE. - Certified everlasting secure compute-and-compare obfuscation from standard fully homomorphic encryption and standard compute-and-compare obfuscation. - Adaptively (resp., selectively) certified everlasting secure PE from standard adaptively (resp., selectively) secure attribute-based encryption and certified everlasting secure compute-and-compare obfuscation. - Certified everlasting secure SKE and PKE from standard SKE and PKE, respectively. - Certified everlasting secure RNCE from standard PKE. - Certified everlasting secure garbled circuits from standard SKE.
Last updated:  2023-04-29
New Results on Machine Learning Based Distinguishers
Anubhab Baksi, Jakub Breier, Vishnu Asutosh Dasu, Xiaolu Hou, Hyunji Kim, Hwajeong Seo
Machine Learning (ML) is almost ubiquitously used in multiple disciplines nowadays. Recently, we have seen its usage in the realm of differential distinguishers for symmetric key ciphers. In this work, we explore ML-based distinguishers for a number of ciphers in various settings. We show new distinguishers on the unkeyed and round-reduced versions of SPECK-32, SPECK-128, ASCON, SIMECK-32, SIMECK-64 and SKINNY-128. We explore multiple avenues in the process. In summary, we use neural networks as well as support vector machines in various settings (such as varying the activation function), apart from experimenting with a number of input difference tuples. Among other results, we show a distinguisher of 8-round SPECK-32 that works with practical data complexity (most of the experiments take a few hours on a personal computer).
Last updated:  2023-02-20
Privately Puncturing PRFs from Lattices: Adaptive Security and Collusion Resistant Pseudorandomness
Rupeng Yang
A private puncturable pseudorandom function (PRF) enables one to create a constrained version of a PRF key, which can be used to evaluate the PRF at all but some punctured points. In addition, the constrained key reveals no information about the punctured points and the PRF values on them. Existing constructions of private puncturable PRFs are only proven to be secure against a restricted adversary that must commit to the punctured points before viewing any information. It is an open problem to achieve the more natural adaptive security, where the adversary can make all its choices on-the-fly. In this work, we solve the problem by constructing an adaptively secure private puncturable PRF from standard lattice assumptions. To achieve this goal, we present a new primitive called explainable hash, which allows one to reprogram the hash function on a given input. The new primitive may find further applications in constructing more cryptographic schemes with adaptive security. Besides, our construction has collusion resistant pseudorandomness, which requires that even given multiple constrained keys, no one can learn the values of the PRF at the punctured points. Private puncturable PRFs with collusion resistant pseudorandomness were previously only known from multilinear maps or indistinguishability obfuscation, and we provide the first solution from standard lattice assumptions.
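For context, the standard GGM-style puncturable PRF (the classical construction, not the paper's) can be sketched as follows. Note how the punctured key necessarily reveals the punctured point — exactly the leakage that *private* puncturable PRFs eliminate. A minimal sketch, assuming a SHA-256-based length-doubling PRG:

```python
import hashlib

def prg(seed: bytes):
    """Length-doubling PRG: seed -> (left child, right child)."""
    return (hashlib.sha256(seed + b"0").digest(),
            hashlib.sha256(seed + b"1").digest())

def eval_prf(key: bytes, x: int, n: int) -> bytes:
    """GGM PRF: walk the binary tree along the n bits of x (MSB first)."""
    s = key
    for i in reversed(range(n)):
        s = prg(s)[(x >> i) & 1]
    return s

def puncture(key: bytes, xstar: int, n: int):
    """Punctured key: the sibling seed at each level of the path to xstar.
    The key includes xstar itself -- GGM puncturing is NOT private."""
    copath, s = [], key
    for i in reversed(range(n)):
        bit = (xstar >> i) & 1
        children = prg(s)
        copath.append((i, children[1 - bit]))   # seed just off the path
        s = children[bit]
    return xstar, copath

def eval_punctured(pkey, x: int, n: int):
    xstar, copath = pkey
    if x == xstar:
        return None                 # the one point we cannot evaluate
    for i, seed in copath:          # highest level first
        if (x >> i) & 1 != (xstar >> i) & 1:
            s = seed                # first divergence: resume from sibling
            for j in reversed(range(i)):
                s = prg(s)[(x >> j) & 1]
            return s

key = hashlib.sha256(b"master").digest()
pkey = puncture(key, 181, 8)
assert all(eval_punctured(pkey, x, 8) == eval_prf(key, x, 8)
           for x in range(256) if x != 181)
assert eval_punctured(pkey, 181, 8) is None
```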
Last updated:  2023-02-24
Complete Characterization of Broadcast and Pseudo-Signatures from Correlations
Varun Narayanan, Vinod M. Prabhakaran, Neha Sangwan, Shun Watanabe
Unconditionally secure broadcast is feasible among parties connected by pairwise secure links only if there is a strict two-thirds majority of honest parties when no additional resources are available. This limitation may be circumvented when the parties have recourse to additional resources such as correlated randomness. Fitzi, Wolf, and Wullschleger (CRYPTO 2004) attempted to characterize the conditions on correlated randomness shared among three parties which would enable them to realize broadcast. Due to a gap in their impossibility argument, it turns out that their characterization is incorrect. Using a novel construction, we show that broadcast is feasible under a considerably larger class of correlations. In fact, we realize pseudo-signatures, which are information theoretic counterparts of digital signatures, using which unconditionally secure broadcast may be obtained. We also obtain a matching impossibility result, thereby characterizing the class of correlations on which three-party broadcast (and pseudo-signatures) can be based. Our impossibility proof, which extends the well-known argument of Fischer, Lynch and Merritt (Distr. Comp., 1986) to the case where parties observe correlated randomness, may be of independent interest.
Last updated:  2025-02-19
Crypto Dark Matter on the Torus: Oblivious PRFs from shallow PRFs and FHE
Martin R. Albrecht, Alex Davidson, Amit Deo, and Daniel Gardham
Partially Oblivious Pseudorandom Functions (POPRFs) are 2-party protocols that allow a client to learn pseudorandom function (PRF) evaluations on inputs of its choice from a server. The client submits two inputs, one public and one private. The security properties ensure that the server cannot learn the private input, and the client cannot learn more than one evaluation per POPRF query. POPRFs have many applications including password-based key exchange and privacy-preserving authentication mechanisms. However, most constructions are based on classical assumptions, and those with post-quantum security suffer from large efficiency drawbacks. In this work, we construct a novel POPRF from lattice assumptions and the “Crypto Dark Matter” PRF candidate (TCC’18) in the random oracle model. At a conceptual level, our scheme exploits the alignment of this family of PRF candidates, relying on mixed modulus computations, and programmable bootstrapping in the torus fully homomorphic encryption scheme (TFHE). We show that our construction achieves malicious client security based on circuit-private FHE, and client privacy from the semantic security of the FHE scheme. We further explore a heuristic approach to extend our scheme to support verifiability, based on the difficulty of computing cheating circuits in low depth. This would yield a verifiable (P)OPRF. We provide a proof-of-concept implementation and preliminary benchmarks of our construction. For the core online OPRF functionality, we require amortised 10.0KB communication per evaluation and a one-time per-client setup communication of 2.5MB.
Last updated:  2023-02-20
One Digit Checksum for Data Integrity Verification of Cloud-executed Homomorphic Encryption Operations
Mostefa Kara, Abdelkader Laouid, Omer Al dabbas, Mohammad Hammoudeh, Ahcène Bounceur
Homomorphic Encryption (HE) is used in many fields including information storage, data protection, privacy preservation, blockchain, and authentication. HE allows an untrusted third party to perform algebraic operations on encrypted data. Protecting the results of HE against accidental or malicious tampering attacks is still an open research challenge. In this paper, we introduce a lightweight technique that allows a data owner to verify the integrity of HE results performed in the cloud. The proposed method is fast, simple, and practical, as it relies on adding a single digit to the encrypted message before storing it in the cloud. This digit serves as a verification proof and is later used to make the HE verifiable. Our technique can be integrated with any HE scheme that uses encryption with non-isolated plaintext.
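The abstract does not spell out the encoding, but the flavour of a one-digit integrity check that survives homomorphic addition can be shown with a toy analogue (emphatically not the paper's scheme): a mod-9 check digit carried alongside the payload, with plain integer addition standing in for additively homomorphic ciphertext addition.

```python
def encode(m: int):
    """Attach one verification digit d = m mod 9 (mod 9 survives addition)."""
    return (m, m % 9)          # (payload 'ciphertext', check 'ciphertext')

def server_add(c1, c2):
    """Untrusted server adds componentwise; with an additively homomorphic
    scheme both components would remain encrypted throughout."""
    return (c1[0] + c2[0], c1[1] + c2[1])

def check(c) -> bool:
    """Data owner verifies: payload and check digit must agree mod 9."""
    m, d = c
    return m % 9 == d % 9

c = server_add(encode(123), encode(456))
assert c[0] == 579 and check(c)
assert not check((c[0] + 1, c[1]))      # off-by-one tampering is caught
```

The obvious limitation of any single-digit check is visible here too: tampering by a multiple of 9 would pass unnoticed, so such a digit offers only lightweight, probabilistic protection.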
Last updated:  2023-02-20
Attacking the IETF/ISO Standard for Internal Re-keying CTR-ACPKM
Orr Dunkelman, Shibam Ghosh, Eran Lambooij
Encrypting too much data using the same key is a bad practice from a security perspective. Hence, it is customary to perform re-keying after a given amount of data is transmitted. While in many cases the re-keying is done using a fresh execution of some key exchange protocol (e.g., in IKE or TLS), there are scenarios where internal re-keying, i.e., re-keying without an exchange of information, is performed, mostly for performance reasons. Internal re-keying was originally suggested by Abdalla and Bellare, and several proposals exist for how to perform it. For example, Liliya et al. offered the CryptoPro Key Meshing (CPKM) to be used together with GOST 28147-89 (known as the GOST block cipher). Later, ISO and the IETF adopted the Advanced CryptoPro Key Meshing (ACPKM) in ISO 10116 and RFC 8645, respectively. In this paper, we study the security of ACPKM and CPKM. We show that the internal re-keying suffers from an entropy loss in successive repetitions of the re-keying mechanism, and we present attacks based on this issue. The most prominent one has time and data complexities of $O(2^{\kappa/2})$ and a success rate of $O(2^{-\kappa/4})$ for a $\kappa$-bit key. Furthermore, we show that a malicious block cipher designer or a faulty implementation can exploit the ACPKM (or the original CPKM) mechanism to significantly hinder the security of a protocol employing ACPKM (or CPKM). Namely, we show that in such cases, the entropy of the re-keyed key can be greatly reduced.
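The entropy-loss phenomenon is easy to observe at small scale: iterating any fixed public re-keying map behaves like iterating a random function, whose image shrinks step after step. The following toy simulation on a 16-bit key space (a hypothetical stand-in, not the ACPKM key-derivation function) makes this visible:

```python
import hashlib

def rekey(k: int) -> int:
    """Toy stand-in for one internal re-keying step on 16-bit keys."""
    d = hashlib.sha256(k.to_bytes(2, "big")).digest()
    return int.from_bytes(d[:2], "big")

keys = set(range(1 << 16))     # every possible initial key
sizes = []
for _ in range(8):             # apply re-keying eight times
    keys = {rekey(k) for k in keys}
    sizes.append(len(keys))
print(sizes)                   # the reachable key set shrinks monotonically

assert sizes[-1] < sizes[0] < (1 << 16)
```

For a random function the image after one step is roughly (1 - 1/e) of the key space and keeps shrinking with further iterations; on a real kappa-bit key space this drift in the key distribution is the kind of structure the attacks described above exploit.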
Last updated:  2023-09-21
One-out-of-Many Unclonable Cryptography: Definitions, Constructions, and More
Fuyuki Kitagawa and Ryo Nishimaki
The no-cloning principle of quantum mechanics enables us to achieve amazing unclonable cryptographic primitives that are impossible in classical cryptography. However, the security definitions for unclonable cryptography are tricky. Achieving desirable security notions for unclonability is a challenging task. In particular, there is no indistinguishable-secure unclonable encryption and quantum copy-protection for single-bit output point functions in the standard model. To tackle this problem, we introduce and study relaxed but meaningful security notions for unclonable cryptography in this work. We call the new security notion one-out-of-many unclonable security. We obtain the following results. - We show that one-time strong anti-piracy secure secret key single-decryptor encryption (SDE) implies one-out-of-many indistinguishable-secure unclonable encryption. - We construct a one-time strong anti-piracy secure secret key SDE scheme in the standard model from the LWE assumption. - We construct one-out-of-many copy-protection for single-bit output point functions from one-out-of-many indistinguishable-secure unclonable encryption and the LWE assumption. - We construct one-out-of-many unclonable predicate encryption (PE) from one-out-of-many indistinguishable-secure unclonable encryption and the LWE assumption. Thus, we obtain one-out-of-many indistinguishable-secure unclonable encryption, one-out-of-many copy-protection for single-bit output point functions, and one-out-of-many unclonable PE in the standard model from the LWE assumption. In addition, our one-time SDE scheme is the first SDE scheme that does not rely on any oracle heuristics and strong assumptions such as indistinguishability obfuscation and witness encryption.
Last updated:  2023-02-20
Authenticated Continuous Key Agreement: Active MitM Detection and Prevention
Benjamin Dowling, Britta Hale
Current messaging protocols are incapable of detecting active man-in-the-middle threats. Even common continuous key agreement protocols such as Signal, which offers forward secrecy and post-compromise security, are dependent on the adversary being passive immediately following state compromise, and healing guarantees are lost if the attacker is not. This work offers the first solution for detecting active man-in-the-middle attacks on such protocols by extending authentication beyond the initial public keys and binding it to the entire continuous key agreement. In this way, any adversarial fork is identifiable to the protocol participants. We provide a modular construction generic enough for application with any continuous key agreement protocol, a specific construction for application to Signal, and a security analysis. The modularity of our solution enables it to be seamlessly adopted by any continuous key agreement protocol.
Last updated:  2023-02-20
A Novel Automatic Technique Based on MILP to Search for Impossible Differentials
Yong Liu, Zejun Xiang, Siwei Chen, Shasha Zhang, Xiangyong Zeng
Mixed Integer Linear Programming (MILP) is a common method for searching for impossible differentials (IDs). However, the optimality of the distinguisher should be confirmed by an exhaustive search over all input and output differences, which is clearly computationally infeasible due to the huge search space. In this paper, we propose a new technique that uses two-dimensional binary variables to model the input and output differences and characterize contradictions with constraints. In our model, the existence of IDs can be directly obtained by checking whether the model has a solution. In addition, our tool can detect any contradictions between input and output differences by changing the position of the contradictions. Our method is confirmed by applying it to several block ciphers, and our results show that we can find 6-, 13-, and 12-round IDs for Midori-64, CRAFT, and SKINNY-64, respectively, within a few seconds. Moreover, by carefully analyzing the key schedule of Midori-64, we propose an equivalent key transform technique and construct a complete MILP model for an 11-round impossible differential attack (IDA) on Midori-64 to search for the minimum number of keys to be guessed. Based on our automatic technique, we present a new 11-round IDA on Midori-64, where 23 nibbles of key need to be guessed, which reduces the time complexity compared to previous work. The time and data complexities of our attack are $2^{116.59}$ and $2^{60}$, respectively. To the best of our knowledge, this is the best IDA on Midori-64 at present.
Last updated:  2023-02-19
Impossibility of Indifferentiable Iterated Blockciphers from 3 or Less Primitive Calls
Chun Guo, Lei Wang, Dongdai Lin
Virtually all modern blockciphers are iterated. In this paper, we ask: to construct a secure iterated blockcipher "non-trivially", how many calls to random functions and permutations are necessary? When security means indistinguishability from a random permutation, optimality is achieved by the Even-Mansour scheme using 1 call to a public permutation. We seek the arguably strongest security guarantee, indifferentiability from an ideal cipher, a notion introduced by Maurer et al. (TCC 2004) and popularized by Coron et al. (JoC, 2014). We provide the first generic negative results/lower bounds: when the key is not too short, no iterated blockcipher making 3 calls is (statistically) indifferentiable. This proves optimality for the 4-call positive result of Guo et al. (Eprint 2016). Furthermore, using 1 or 2 calls, even indifferentiable iterated blockciphers with polynomial keyspace are impossible. To prove this, we develop an abstraction of idealized iterated blockciphers, establish various basic properties, and apply Extremal Graph Theory results to prove the existence of certain (generalized) non-random properties such as the boomerang and yoyo.
Last updated:  2023-02-19
A Post-Quantum Round-Optimal Oblivious PRF from Isogenies
Andrea Basso
An oblivious pseudorandom function, or OPRF, is an important primitive that is used to build many advanced cryptographic protocols. Despite its relevance, very few post-quantum solutions exist. In this work, we propose a novel OPRF protocol that is post-quantum, verifiable, round-optimal, and moderately compact. Our protocol is based on a previous SIDH-based construction by Boneh, Kogan, and Woo, which was later shown to be insecure due to an attack on its one-more unpredictability. We first propose an efficient countermeasure against this attack by redefining the PRF function to use irrational isogenies. This prevents a malicious user from independently evaluating the PRF. The SIDH-based construction by Boneh, Kogan, and Woo is also vulnerable to the recent attacks on SIDH. We thus demonstrate how to efficiently incorporate the countermeasures against such attacks to obtain a secure OPRF protocol. To achieve this, we also propose the first proof of isogeny knowledge that is compatible with masked torsion points, which may be of independent interest. Lastly, we design a novel non-interactive proof of knowledge of parallel isogenies, which reduces the number of communication rounds of the OPRF to the theoretically-optimal two. Putting everything together, we obtain the most compact post-quantum verifiable OPRF protocol.
Last updated:  2023-02-19
Improved Power Analysis Attacks on Falcon
Shiduo Zhang, Xiuhan Lin, Yang Yu, Weijia Wang
Falcon is one of the three post-quantum signature schemes selected for standardization by NIST. Due to its low bandwidth and high efficiency, Falcon is seen as an attractive option for quantum-safe embedded systems. In this work, we study Falcon's side-channel resistance by analysing its Gaussian samplers. Our results are mainly twofold. The first result is an improved key recovery exploiting the leakage within the base sampler investigated by Guerreau et al. (CHES 2022). Instead of resorting to the fourth moment as in former parallelepiped-learning attacks, we work with a second-order statistic, the covariance, and use its spectral decomposition to recover the secret information. Our approach substantially reduces the requirement for measurements and computation resources: $220\,000$ traces are sufficient to recover the secret key of Falcon-512 within half an hour with a probability of $\approx 25\%$. As a comparison, even with $10^6$ traces, the former attack still needs about 1000 CPU hours of lattice reduction for a full key recovery. In addition, our approach is robust to inaccurate leakage classification, which is another advantage over parallelepiped-learning attacks. Our second result is a practical power analysis targeting the integer Gaussian sampler of Falcon. The analysis relies on the leakage of the random sign flip within the integer Gaussian sampling. This leakage was exposed in 2018 by Kim and Hong, but it is not considered in Falcon's implementation and was unexploited for side-channel analysis until now. We identify the leakage within the reference implementation of Falcon on an ARM Cortex-M4 STM32F407IGT6 microprocessor. We also show that this single bit of leakage is in effect enough for practical key recovery: with $170\,000$ traces one can fully recover the key of Falcon-512 within half an hour. Furthermore, combining the sign leakage and the aforementioned leakage, one can recover the key with only $45\,000$ signature measurements in a short time. 
As a by-product, we also extend our power analysis to Mitaka which is a recent variant of Falcon. The same leakages exist within the integer Gaussian samplers of Mitaka, and they can also be used to mount key recovery attacks. Nevertheless, the key recovery in Mitaka requires much more traces than it does in Falcon, due to their different lattice Gaussian samplers.
Last updated:  2023-02-18
Classical and Quantum Security of Elliptic Curve VRF, via Relative Indifferentiability
Chris Peikert, Jiayu Xu
Verifiable random functions (VRFs) are essentially pseudorandom functions for which selected outputs can be proved correct and unique, without compromising the security of other outputs. VRFs have numerous applications across cryptography, and in particular they have recently been used to implement committee selection in the Algorand protocol. Elliptic Curve VRF (ECVRF) is an elegant construction, originally due to Papadopoulos et al., that is now under consideration by the Internet Research Task Force. Prior work proved that ECVRF possesses the main desired security properties of a VRF, under suitable assumptions. However, several recent versions of ECVRF include changes that make some of these proofs inapplicable. Moreover, the prior analysis holds only for *classical* attackers, in the random-oracle model (ROM); it says nothing about whether any of the desired properties hold against *quantum* attacks, in the quantumly accessible ROM. We note that certain important properties of ECVRF, like uniqueness, do *not* rely on assumptions that are known to be broken by quantum computers, so it is plausible that these properties could hold even in the quantum setting. This work provides a multi-faceted security analysis of recent versions of ECVRF, in both the classical and quantum settings. First, we motivate and formally define new security properties for VRFs, like non-malleability and binding, and prove that recent versions of ECVRF satisfy them (under standard assumptions). Second, we identify a subtle obstruction in proving that recent versions of ECVRF have *uniqueness* via prior indifferentiability definitions and theorems, even in the classical setting. Third, we fill this gap by defining a stronger notion called *relative indifferentiability*, and extend prior work to show that a standard domain extender used in ECVRF satisfies this notion, in both the classical and quantum settings. 
This final contribution is of independent interest and we believe it should be applicable elsewhere.
Last updated:  2023-02-18
A Lightweight Identification Protocol Based on Lattices
Samed Düzlü, Juliane Krämer, Thomas Pöppelmann, Patrick Struck
In this work we present a lightweight lattice-based identification protocol based on the CPA-secure public key encryption scheme Kyber. It is designed as a replacement for existing classical ECC- or RSA-based identification protocols in IoT, smart card applications, or for device authentication. The proposed protocol is simple and efficient, and implementations are expected to be easy to harden against side-channel attacks. Compared to standard constructions for identification protocols based on lattice-based KEMs, our construction achieves this by avoiding the Fujisaki-Okamoto transform and its impact on implementation security. Moreover, contrary to prior lattice-based identification protocols or standard constructions using signatures, our work does not require rejection sampling and can use more efficient parameters than signature schemes. We provide a generic construction from CPA-secure public key encryption schemes to identification protocols and give a security proof of the protocol in the ROM. Moreover, we instantiate the generic construction with Kyber, for which we use the proposed parameter sets for NIST security levels I, III, and V. To show that the protocol is suitable for constrained devices, we implemented one selected parameter set on an ARM Cortex-M4 microcontroller. As the protocol is based on existing algorithms for Kyber, we make use of existing SW components (e.g., fast NTT implementations) for our implementation.
Last updated:  2023-02-17
Bicorn: An optimistically efficient distributed randomness beacon
Kevin Choi, Arasu Arun, Nirvan Tyagi, Joseph Bonneau
We introduce Bicorn, an optimistically efficient distributed randomness protocol with strong robustness under a dishonest majority. Bicorn is a "commit-reveal-recover" protocol. Each participant commits to a random value; these values are combined to produce a random output. If any participants fail to open their commitments, recovery is possible via a single time-lock puzzle which can be solved by any party. In the optimistic case, Bicorn is a simple and efficient two-round protocol with no time-lock puzzle. In either case, Bicorn supports open, flexible participation, requires only a public bulletin board and no group-specific setup or PKI, and is guaranteed to produce random output assuming any single participant is honest. All communication and computation costs are (at most) linear in the number of participants with low concrete overhead.
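The optimistic commit-reveal path of such a protocol can be sketched with plain hash commitments and XOR combination. This is an illustrative stand-in, not Bicorn's actual commitment scheme or output derivation, and the time-lock recovery path is omitted entirely:

```python
import hashlib
import secrets

def commit(value: bytes):
    """Hash-based commitment to a 32-byte value; returns (commitment, opening)."""
    opening = secrets.token_bytes(16)
    return hashlib.sha256(opening + value).digest(), opening

def check(commitment: bytes, opening: bytes, value: bytes) -> bool:
    """Verify an opened commitment."""
    return hashlib.sha256(opening + value).digest() == commitment

def combine(values) -> bytes:
    """XOR-combine all revealed 32-byte values into a single output."""
    out = bytes(32)
    for v in values:
        out = bytes(a ^ b for a, b in zip(out, v))
    return out

# Optimistic run: every participant commits, then every participant opens.
values = [secrets.token_bytes(32) for _ in range(3)]
records = [commit(v) for v in values]
assert all(check(c, r, v) for (c, r), v in zip(records, values))
beacon = combine(values)
```

As long as one contributed value is uniformly random and independent of the others, the XOR output is uniformly random, which is the intuition behind the single-honest-participant guarantee.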
Last updated:  2023-02-17
Password-Authenticated TLS via OPAQUE and Post-Handshake Authentication
Julia Hesse, Stanislaw Jarecki, Hugo Krawczyk, Christopher Wood
OPAQUE is an Asymmetric Password-Authenticated Key Exchange (aPAKE) protocol being standardized by the IETF (Internet Engineering Task Force) as a more secure alternative to the traditional ``password-over-TLS'' mechanism prevalent in current practice. OPAQUE defends against a variety of vulnerabilities of password-over-TLS by dispensing with reliance on PKI and TLS security, and ensuring that the password is never visible to servers or anyone other than the client machine where the password is entered. In order to facilitate the use of OPAQUE in practice, integration of OPAQUE with TLS is needed. The main proposal for standardizing such integration uses the Exported Authenticators (TLS-EA) mechanism of TLS 1.3 that supports post-handshake authentication and allows for a smooth composition with OPAQUE. We refer to this composition as TLS-OPAQUE and present a detailed security analysis for it in the Universal Composability (UC) framework. Our treatment is general and includes the formalization of components that are needed in the analysis of TLS-OPAQUE but are of wider applicability as they are used in many protocols in practice. Specifically, we provide formalizations in the UC model of the notions of post-handshake authentication and channel binding. The latter, in particular, has been hard to implement securely in practice, resulting in multiple protocol failures, including major attacks against prior versions of TLS. Ours is the first treatment of these notions in a computational model with composability guarantees. We complement the theoretical work with a detailed discussion of practical considerations for the use and deployment of TLS-OPAQUE in real-world settings and applications.
Last updated:  2023-02-17
Sieving for large twin smooth integers using single solutions to Prouhet-Tarry-Escott
Knud Ahrens
In the isogeny-based track of post-quantum cryptography the signature scheme SQISign relies on primes $p$ such that $p\pm1$ is smooth. In 2021 a new approach to find those numbers was discovered using solutions to the Prouhet-Tarry-Escott (PTE) problem. With these solutions one can sieve for smooth integers $A$ and $B$ with a difference $|A-B|=C$ fixed by the solution. Then some $2A/C$ and $2B/C$ are smooth integers that hopefully enclose a prime. The authors took many different PTE solutions and combined them into a tree to process them more efficiently. But for bigger numbers there are fewer promising PTE solutions, so their advantage over the naive approach (checking a single solution at a time) fades. For a single PTE solution the search can be optimised for the corresponding $C$, making it possible to sieve only those integers that are divisible by $C$. In this work we investigate such optimisations and show a significant speed-up compared to the naive approach.
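For toy sizes, the divisible-by-$C$ search can be sketched directly. The helpers below (`is_smooth` via trial division, a trial-division `is_prime`, and `candidate_primes`) are illustrative stand-ins for the optimised sieve, not the paper's implementation, and only work for small numbers:

```python
def is_smooth(n: int, bound: int) -> bool:
    """True if every prime factor of n is <= bound (trial division; toy sizes only)."""
    for p in range(2, bound + 1):
        while n % p == 0:
            n //= p
    return n == 1

def is_prime(n: int) -> bool:
    """Trial-division primality test (toy sizes only)."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def candidate_primes(C: int, bound: int, limit: int):
    """Scan only multiples of C: if A and B = A + C are both smooth, then
    2A/C and 2B/C differ by 2, so p = 2A/C + 1 is a candidate prime with
    both p - 1 and p + 1 smooth."""
    primes = []
    for A in range(C, limit, C):
        if is_smooth(A, bound) and is_smooth(A + C, bound):
            p = 2 * A // C + 1
            if is_prime(p) and p not in primes:
                primes.append(p)
    return primes
```

For example, `candidate_primes(2, 5, 200)` returns small primes $p$ for which both $p-1$ and $p+1$ are 5-smooth.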
Last updated:  2023-06-16
On the Post-Quantum Security of Classical Authenticated Encryption Schemes
Nathalie Lang, Stefan Lucks
We study the post-quantum security of authenticated encryption (AE) schemes, designed with classical security in mind. Under superposition attacks, many CBC-MAC variants have been broken, and AE modes employing those variants, such as EAX and GCM, thus fail at authenticity. As we show, the same modes are IND-qCPA insecure, i.e., they fail to provide privacy under superposition attacks. However, a constrained version of GCM is IND-qCPA secure, and a nonce-based variant of the CBC-MAC is secure under superposition queries. Further, the combination of classical authenticity and classical chosen-plaintext privacy thwarts attacks with superposition chosen-ciphertext and classical chosen-plaintext queries, a security notion that we refer to as IND-qdCCA. Finally, nonce-based key derivation allows generically turning an IND-qdCCA secure scheme into an IND-qCCA secure scheme.
Last updated:  2023-02-17
Indifferentiability of the Sponge Construction with a Restricted Number of Message Blocks
Charlotte Lefevre
The sponge construction is a popular method for hashing. Quickly after its introduction, the sponge was proven to be tightly indifferentiable from a random oracle up to $ \approx 2^{c/2}$ queries, where $c$ is the capacity. However, this bound is not tight when the number of message blocks absorbed is restricted to $\ell <\lceil \frac{c}{2(b-c)}\rceil + 1$ (while an arbitrary number of blocks can still be squeezed). In this work, we show that this restriction leads to indifferentiability from a random oracle up to $\approx \min \left\{2^{b/2}, \max\left\{2^{c/2}, 2^{b- \ell \times (b-c)} \right\}\right\}$ queries, where $b>c$ is the permutation size. Depending on the parameters chosen, this result allows for enhanced security or for absorbing at a larger rate in applications that require a fixed-length input hash function.
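For concrete parameters, the bound above can be read as a security exponent (the base-2 logarithm of the query count). A minimal sketch; the parameter values in the example are our own illustrations, not taken from the paper:

```python
def sponge_security_exponent(b: int, c: int, ell: int) -> float:
    """log2 of the query bound min(2^{b/2}, max(2^{c/2}, 2^{b - ell*(b-c)})),
    for permutation size b, capacity c, and at most ell absorbed blocks."""
    assert b > c
    return min(b / 2, max(c / 2, b - ell * (b - c)))
```

For instance, with $(b, c) = (1600, 512)$ the restriction requires $\ell < \lceil 512/2176 \rceil + 1 = 2$, and with $\ell = 1$ the exponent improves from the generic $c/2 = 256$ to $512$.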
Last updated:  2024-03-07
Two-Round Stateless Deterministic Two-Party Schnorr Signatures From Pseudorandom Correlation Functions
Yashvanth Kondi, Claudio Orlandi, and Lawrence Roy
Schnorr signatures are a popular choice due to their simplicity, provable security, and linear structure that enables relatively easy threshold signing protocols. The deterministic variant of Schnorr (where the nonce is derived in a stateless manner using a PRF from the message and a long-term secret) is widely used in practice since it mitigates the threat of a faulty or poor randomness generator (which in Schnorr leads to catastrophic breaches of security). Unfortunately, threshold protocols for the deterministic variant of Schnorr have so far been quite inefficient, as they make non-black-box use of the PRF involved in the nonce generation. In this paper, we present the first two-party threshold protocol for Schnorr signatures where signing is stateless and deterministic and makes only black-box use of the underlying cryptographic algorithms. We present a protocol from general assumptions which achieves covert security, and a protocol that achieves full active security under standard factoring-like assumptions. Our protocols make crucial use of recent advances within the field of pseudorandom correlation functions (PCFs). As an additional benefit, only two rounds are needed to perform distributed signing in our protocol, connecting our work to a recent line of research on the trade-offs between round complexity and cryptographic assumptions for threshold Schnorr signatures.
Last updated:  2023-04-24
Formally verifying Kyber Episode IV: Implementation Correctness
José Bacelar Almeida, Manuel Barbosa, Gilles Barthe, Benjamin Grégoire, Vincent Laporte, Jean-Christophe Léchenet, Tiago Oliveira, Hugo Pacheco, Miguel Quaresma, Peter Schwabe, Antoine Séré, Pierre-Yves Strub
In this paper we present the first formally verified implementations of Kyber and, to the best of our knowledge, the first such implementations of any post-quantum cryptosystem. We give a (readable) formal specification of Kyber in the EasyCrypt proof assistant, which is syntactically very close to the pseudocode description of the scheme as given in the most recent version of the NIST submission. We present high-assurance open-source implementations of Kyber written in the Jasmin language, along with machine-checked proofs that they are functionally correct with respect to the EasyCrypt specification. We describe a number of improvements to the EasyCrypt and Jasmin frameworks that were needed for this implementation and verification effort, and we present detailed benchmarks of our implementations, showing that our code achieves performance close to existing hand-optimized implementations in C and assembly.
Last updated:  2023-09-14
DIPSAUCE: Efficient Private Stream Aggregation Without Trusted Parties
Joakim Brorsson and Martin Gunnarsson
Private Stream Aggregation (PSA) schemes are efficient protocols for distributed data analytics. In a PSA scheme, a set of data producers can encrypt data for a central party so that it learns the sum of all encrypted values, but nothing about each individual value. Thus, a trusted aggregator is avoided. However, all known PSA schemes still require a trusted party for key generation. In this paper we propose the first PSA scheme that does not rely on a trusted party. We argue its security against static and mobile malicious adversaries, and show its efficiency by implementing both our scheme and the previous state of the art on realistic IoT devices and comparing their performance. Our security and efficiency evaluations show that it is indeed possible to construct an efficient PSA scheme without a trusted central party. Surprisingly, our results also show that, as a side effect, our method for distributing the setup procedure also makes the encryption procedure more efficient than the state-of-the-art PSA schemes that rely on trusted parties.
Last updated:  2024-01-09
Deniable Authentication when Signing Keys Leak
Suvradip Chakraborty, Dennis Hofheinz, Ueli Maurer, and Guilherme Rito
Deniable Authentication is a highly desirable property for secure messaging protocols: it allows a sender Alice to authentically transmit messages to a designated receiver Bob in such a way that only Bob gets convinced that Alice indeed sent these messages. In particular, it guarantees that even if Bob tries to convince a (non-designated) party Judy that Alice sent some message, and even if Bob gives Judy his own secret key, Judy will not be convinced: as far as Judy knows, Bob could be making it all up! In this paper we study Deniable Authentication in the setting where Judy can additionally obtain Alice's secret key. Informally, we want that knowledge of Alice's secret key does not help Judy in learning whether Alice sent any messages, even if Bob does not have Alice's secret key and even if Bob cooperates with Judy by giving her his own secret key. This stronger flavor of Deniable Authentication was not considered before and is particularly relevant for Off-The-Record Group Messaging as it gives users stronger deniability guarantees. Our main contribution is a scalable ``MDRS-PKE'' (Multi-Designated Receiver Signed Public Key Encryption) scheme---a technical formalization of Deniable Authentication that is particularly useful for secure messaging for its confidentiality guarantees---that provides this stronger deniability guarantee. At its core lie new MDVS (Multi-Designated Verifier Signature) and PKEBC (Public Key Encryption for Broadcast) scheme constructions: our MDVS is not only secure with respect to the new deniability notions, but it is also the first to be tightly secure under standard assumptions; our PKEBC---which is also of independent interest---is the first with ciphertext sizes and encryption and decryption times that grow only linearly in the number of receivers. This is a significant improvement upon the construction given by Maurer et al. 
(EUROCRYPT '22), where ciphertext sizes and encryption and decryption times are quadratic in the number of receivers.
Last updated:  2023-02-17
Generating Secure Hardware using ChatGPT Resistant to CWEs
Madhav Nair, Rajat Sadhukhan, Debdeep Mukhopadhyay
The development of Artificial Intelligence (AI) based systems to automatically generate hardware systems has gained momentum, with the aim of accelerating the hardware design cycle with no human intervention. Recently, the AI-based system ChatGPT from OpenAI has made headlines and gone viral within a short span of time since its launch. This chatbot has the capability to interactively communicate with designers through a prompt to generate software and hardware code, write logic designs, and synthesize designs for implementation on a Field Programmable Gate Array (FPGA) or an Application Specific Integrated Circuit (ASIC). However, an unvetted ChatGPT prompt by a designer aiming to generate hardware code may lead to security vulnerabilities in the generated code. In this work, we systematically investigate the strategies a designer must adopt to enable ChatGPT to recommend secure hardware code generation. To perform this analysis, we prompt ChatGPT to generate code scenarios listed in the Common Weakness Enumerations (CWEs) under the hardware design (CWE-1194) view from MITRE. We first demonstrate how ChatGPT generates insecure code, given the diversity of prompts. Finally, we propose techniques for a designer to generate secure hardware code. In total, we create secure hardware code for $10$ noteworthy CWEs under the hardware design view listed on the MITRE site.
Last updated:  2023-02-17
Improved Low-depth SHA3 Quantum Circuit for Fault-tolerant Quantum Computers
Gyeongju Song, Kyungbae Jang, Hwajeong Seo
To build an efficient security system in the post-quantum era, one can determine the minimum security parameters for defending against a fault-tolerant quantum computer by estimating the quantum resources required for a quantum attack. In a fault-tolerant quantum computer, errors must be brought to an acceptable level through error detection and error correction, which consume additional quantum resources. As the depth of the quantum circuit increases, the computation time per qubit increases, and errors in quantum computers increase. Therefore, in terms of errors in quantum circuits, it is appropriate to reduce the depth by increasing the number of qubits. This paper proposes a low-depth quantum circuit implementation of SHA3 for fault-tolerant quantum computers to reduce errors. The proposed SHA3 quantum circuit is implemented so as to reduce the circuit depth through a trade-off between the number of qubits, quantum gates, and quantum depth in each function. Compared to state-of-the-art works, the proposed method decreases T-depth and full depth by 30.3\% and 80.05\%, respectively. We expect that this work will contribute to the establishment of minimum security parameters for SHA3 in the quantum era.
Last updated:  2023-02-17
New Generic Constructions of Error-Correcting PIR and Efficient Instantiations
Reo Eriguchi, Kaoru Kurosawa, Koji Nuida
A $b$-error-correcting $m$-server Private Information Retrieval (PIR) protocol enables a client to privately retrieve a data item of a database from $m$ servers even in the presence of $b$ malicious servers. List-decodable PIR is a generalization of error-correcting PIR to achieve a smaller number of servers at the cost of giving up unique decoding. Previous constructions of error-correcting and list-decodable PIR have exponential computational complexity in $m$ or cannot achieve sub-polynomial communication complexity $n^{o(1)}$, where $n$ is the database size. Recently, Zhang, Wang and Wang (ASIACCS 2022) presented a non-explicit construction of error-correcting PIR with $n^{o(1)}$ communication and polynomial computational overhead in $m$. However, their protocol requires the number of servers to be larger than the minimum one $m=2b+1$ and they left it as an open problem to reduce it. As for list-decodable PIR, there is no construction with $n^{o(1)}$ communication. In this paper, we propose new generic constructions of error-correcting and list-decodable PIR from any one-round regular PIR. Our constructions increase computational complexity only by a polynomial factor in $m$ while the previous generic constructions incur $\binom{m}{b}$ multiplicative overheads. Instantiated with the best-known protocols, our construction provides for the first time an explicit error-correcting PIR protocol with $n^{o(1)}$ communication, which reduces the number of servers of the protocol by Zhang, Wang and Wang (ASIACCS 2022). For sufficiently large $b$, we also show the existence of $b$-error-correcting PIR with $n^{o(1)}$ communication achieving the minimum number of servers, by allowing for two rounds of interaction. Furthermore, we show an extension to list-decodable PIR and obtain for the first time a protocol with $n^{o(1)}$ communication. 
Other instantiations improve the communication complexity of the state-of-the-art $t$-private protocols in which $t$ servers may collude. Along the way, we formalize the notion of \textit{locally surjective map families}, which generalize perfect hash families and may be of independent interest.
Last updated:  2023-02-23
Hiding in Plain Sight: Non-profiling Deep Learning-based Side-channel Analysis with Plaintext/Ciphertext
Lichao Wu, Guilherme Perin, Stjepan Picek
Deep learning-based profiling side-channel analysis is widely adopted in academia and industry thanks to the ability to reveal secrets protected with countermeasures. To leverage its capability, the adversary needs to have access to a clone of an attack device to obtain the profiling measurements. Moreover, the adversary needs to know secret information to label these measurements. Non-profiling attacks avoid those constraints by not relying on secret information to label data but rather by trying all key guesses and taking the most successful one. Deep learning approaches also form the basis of several non-profiling attacks. Unfortunately, such approaches suffer from high computational complexity and low generality when applied in practice. This paper proposes a novel non-profiling deep learning-based side-channel analysis technique. Our approach relies on the fact that there is (commonly) a bijective relationship between known information, such as plaintext and ciphertext, and secret information. We use this fact to label the leakage measurement with the known information and then mount attacks. Our results show that we reach at least $3\times$ better attack performance with negligible computational effort than existing non-profiling methods. Moreover, our non-profiling approach rivals the performance of state-of-the-art deep learning-based profiling attacks.
Last updated:  2023-04-15
zkTree: A Zero-Knowledge Recursion Tree with ZKP Membership Proofs
Sai Deng, Bo Du
We introduce zkTree, a general framework for constructing a tree by recursively verifying children's zero-knowledge proofs (ZKPs) in a parent ZKP node, while enabling the retrieval of membership proofs for user-supplied zk proofs. We also outline a construction pipeline that allows zkTree to be built and verified on-chain with constant gas cost and low data processing pipeline overhead. By aggregating a large number of user proofs into a single root proof, zkTree makes ZKP on-chain verification cost-effective. Once the root proof is verified, all user proofs can be verified by providing Merkle membership proofs. zkTree can be implemented using Plonky2, which combines PLONK and FRI, with its root proof recursively verified in Groth16. Furthermore, we demonstrate how to employ zkTree to verify the default signature scheme of Tendermint consensus by validating ed25519 signatures in a single proof within the Ethereum Virtual Machine (EVM).
Last updated:  2025-03-23
On Quantum Secure Compressing Pseudorandom Functions
Ritam Bhaumik, Benoît Cogliati, Jordan Ethan, and Ashwin Jha
In this paper we characterize all $2n$-bit-to-$n$-bit Pseudorandom Functions (PRFs) constructed with the minimum number of calls to $n$-bit-to-$n$-bit PRFs and an arbitrary number of linear functions. First, we show that all two-round constructions are either classically insecure or vulnerable to quantum period-finding attacks. Second, we categorize three-round constructions depending on their vulnerability to these types of attacks. This allows us to identify classes of constructions that could be proven secure. We then proceed to show the security of the following three candidates against any quantum distinguisher that asks at most $ 2^{n/4} $ (possibly superposition) queries \[ \begin{array}{rcl} \mathsf{TNT}(x_1,x_2) &:=& f_3(x_2 \oplus f_2(x_2 \oplus f_1(x_1)))\\ \mathsf{LRQ}(x_1,x_2) &:=& f_2(x_2) \oplus f_3(x_2 \oplus f_1(x_1))\\ \mathsf{LRWQ}(x_1,x_2) &:=& f_3( f_1(x_1) \oplus f_2(x_2)). \end{array} \] Note that the first construction is a classically secure tweakable block cipher due to Bao et al., and the third construction is shown to be a quantum-secure tweakable block cipher by Hosoyamada and Iwata with similar query limits. Of note is our proof framework, an adaptation of Chung et al.'s rigorous formulation of Zhandry's compressed oracle technique in the indistinguishability setup, which could be of independent interest. This framework gives very compact and mostly classical-looking proofs compared to Hosoyamada and Iwata's interpretation of Zhandry's compressed oracle.
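The three candidate constructions above can be written out directly. A minimal sketch, with truncated SHA-256 standing in for the underlying $n$-bit-to-$n$-bit PRFs $f_1, f_2, f_3$ (the keys and the 128-bit block size are illustrative assumptions, not from the paper):

```python
import hashlib

N = 16  # toy block size in bytes; stands in for n bits

def make_prf(key: bytes):
    """Toy n-bit-to-n-bit PRF built from truncated SHA-256 (illustrative only)."""
    return lambda x: hashlib.sha256(key + x).digest()[:N]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(u ^ v for u, v in zip(a, b))

f1, f2, f3 = make_prf(b"k1"), make_prf(b"k2"), make_prf(b"k3")

def TNT(x1: bytes, x2: bytes) -> bytes:
    # f3(x2 XOR f2(x2 XOR f1(x1)))
    return f3(xor(x2, f2(xor(x2, f1(x1)))))

def LRQ(x1: bytes, x2: bytes) -> bytes:
    # f2(x2) XOR f3(x2 XOR f1(x1))
    return xor(f2(x2), f3(xor(x2, f1(x1))))

def LRWQ(x1: bytes, x2: bytes) -> bytes:
    # f3(f1(x1) XOR f2(x2))
    return f3(xor(f1(x1), f2(x2)))
```

Each function maps two $n$-bit inputs to one $n$-bit output using exactly three PRF calls plus XORs, matching the shape of the constructions analysed in the paper.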
Last updated:  2024-05-10
Orca: FSS-based Secure Training and Inference with GPUs
Neha Jawalkar, Kanav Gupta, Arkaprava Basu, Nishanth Chandran, Divya Gupta, and Rahul Sharma
Secure Two-party Computation (2PC) allows two parties to compute any function on their private inputs without revealing their inputs to each other. In the offline/online model for 2PC, correlated randomness, which is independent of all inputs to the computation, is generated in a preprocessing (offline) phase, and this randomness is then utilized in the online phase once the inputs to the parties become available. Most 2PC works focus on optimizing the online time, as this overhead lies on the critical path. A recent paradigm for obtaining efficient 2PC protocols with low online cost is based on the cryptographic technique of function secret sharing (FSS). We build an end-to-end system ORCA to accelerate the computation of FSS-based 2PC protocols with GPUs. Next, we observe that the main performance bottleneck in such accelerated protocols is in storage (due to the large amount of correlated randomness), and we design new FSS-based 2PC protocols for several key functionalities in ML which reduce storage by up to 5×. Compared to prior state-of-the-art on secure training accelerated with GPUs in the same computation model (PIRANHA, Usenix Security 2022), we show that ORCA has 4% higher accuracy, 98× less communication, and is 26× faster on CIFAR-10. Moreover, maintaining training accuracy while using fixed-point arithmetic requires stochastic truncations, and all prior works on secure fixed-point training (including PIRANHA) use insecure protocols for it. We provide the first secure protocol for stochastic truncations and build on it to provide the first evaluation of training with end-to-end security. For secure ImageNet inference, ORCA achieves sub-second latency for VGG-16 and ResNet-50, and outperforms the state-of-the-art by 8 − 103×.
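For intuition on the stochastic-truncation primitive mentioned above, here is the plaintext (insecure) operation it computes on fixed-point values; the paper's contribution is a secure 2PC protocol for this operation, which the sketch below does not attempt. The helper name and parameters are illustrative, not from the paper.

```python
import random

def stochastic_truncate(x: int, f: int, rng=random) -> int:
    """Drop f fractional bits of a fixed-point integer x, rounding up
    with probability equal to the discarded fraction. This makes the
    result unbiased: E[output] = x / 2^f, which is what preserves
    training accuracy in fixed-point ML.
    Plaintext reference only; the paper gives a *secure* protocol."""
    q, r = divmod(x, 1 << f)
    return q + (1 if rng.random() < r / (1 << f) else 0)
```

With 4 fractional bits, truncating `(5 << 4) + 8` (i.e. 5.5) yields 5 or 6, each with probability 1/2, so repeated truncations average to 5.5.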
Last updated:  2023-02-15
DEFending Integrated Circuit Layouts
Jitendra Bhandari, Jayanth Gopinath, Mohammed Ashraf, Johann Knechtel, Ramesh Karri
The production of modern integrated circuits (ICs) requires a complex, outsourced supply chain involving computer-aided design (CAD) tools, expert knowledge, and advanced foundries. This complexity has led to various security threats, such as Trojans inserted by adversaries during outsourcing, and physical probing or manipulation of devices at run-time. Our proposed solution, DEFense, is an extensible CAD framework for evaluating and proactively mitigating threats to ICs at design time. Our goal with DEFense is to achieve “security closure” at the physical layout level of IC design, prioritizing security alongside traditional power, performance, and area (PPA) objectives. DEFense uses an iterative approach to assess and mitigate vulnerabilities in the IC layout, automating vulnerability assessments and identifying vulnerable active devices and wires. Using the quantified findings, DEFense guides CAD tools to re-arrange placement and routing and uses other heuristic means to “DEFend” the layouts. DEFense is independent of back-end CAD tools, as it works with the standard DEF format for physical layouts. It is a flexible and extensible scripting framework that requires no modifications to commercial CAD code bases. We are providing the framework to the community and have conducted a thorough experimental investigation into different threats and adversaries at various stages of the IC life-cycle, including Trojan insertion by an untrusted foundry, probing by an untrusted end-user, and intentionally introduced crosstalk by an untrusted foundry.
Last updated:  2023-02-15
TreePIR: Sublinear-Time and Polylog-Bandwidth Private Information Retrieval from DDH
Arthur Lazzaretti, Charalampos Papamanthou
In Private Information Retrieval (PIR), a client wishes to retrieve the value of an index $i$ from a public database of $N$ values without leaking information about the index $i$. In their recent seminal work, Corrigan-Gibbs and Kogan (EUROCRYPT 2020) introduced the first two-server PIR protocol with sublinear amortized server time and sublinear, $O(\sqrt{N}\log N)$ bandwidth. In a follow-up work, Shi et al. (CRYPTO 2021) reduced the bandwidth to polylogarithmic by proposing a construction based on privately puncturable pseudorandom functions, a primitive whose only construction known to date is based on heavy cryptographic primitives. Partly because of this, their PIR protocol does not achieve concrete efficiency. In this paper we propose TreePIR, a two-server PIR protocol with sublinear amortized server time and polylogarithmic bandwidth whose security can be based on just the DDH assumption. TreePIR can be partitioned into two phases, both sublinear: The first phase is remarkably simple and only requires pseudorandom generators. The second phase is a single-server PIR protocol on \emph{only} $\sqrt{N}$ indices, for which we can use the protocol by D\"ottling et al. (CRYPTO 2019) based on DDH, or, for practical purposes, the most concretely efficient single-server PIR protocol. Not only does TreePIR achieve better asymptotics than previous approaches while resting on weaker cryptographic assumptions, but it also outperforms existing two-server PIR protocols in practice. The crux of our protocol is a new cryptographic primitive that we call weak privately puncturable pseudorandom functions, which we believe can have further applications.
Last updated:  2023-02-15
A Different Base Approach for Better Efficiency on Range Proofs
Esra Günsay, Cansu Betin Onur, Murat Cenk
Zero-knowledge range proofs (ZKRPs) are commonly used to prove to another party, without revealing the secret, that a secret integer lies in an interval. In many ZKRPs, the secret is represented in binary and then committed via a suitable commitment scheme, or represented via an appropriate encryption scheme. This paper is an extended version of the conference paper presented at the 14th IEEE International Conference on Security of Information and Networks. To this end, we first analyze the proof proposed by Mao in 1998 in both the discrete-logarithm and elliptic-curve settings. Mao’s proof contains a bit commitment scheme with an OR construction as a sub-protocol. We extend Mao’s range proof to base-u with a modified OR-proof, and we investigate and compare the efficiency of different base approaches on Mao’s range proof. Later, we analyze the range proof proposed by Bootle et al. in both the finite-field and elliptic-curve settings. This proof contains a polynomial commitment with matrix row operations. We take as costs the number of modular exponentiations and the number of integers exchanged between the parties, and then generalize these costs for the base-u construction. We show that, compared with the base-2 representation, a different base approach provides savings in communication cost, computation cost, or both.
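The base-u representation at the heart of the extended proof is just digit decomposition: the secret is written as $k$ base-u digits, and each digit is committed with an OR-proof showing it lies in $\{0,\dots,u-1\}$. The sketch below shows only the decomposition step (the commitment and OR-proof machinery is omitted; function names are illustrative, not from the paper).

```python
def digits_base_u(x, u, k):
    """Write 0 <= x < u**k as k base-u digits, least significant first.
    In the range proof, each digit would be committed separately and an
    OR-proof would show it lies in {0, ..., u-1}; base-2 is the special
    case of bit commitments."""
    assert 0 <= x < u ** k
    ds = []
    for _ in range(k):
        x, d = divmod(x, u)
        ds.append(d)
    return ds

def recompose(ds, u):
    """Inverse of digits_base_u: sum of d_i * u^i."""
    return sum(d * u ** i for i, d in enumerate(ds))
```

Larger u means fewer digits (fewer commitments) but a bigger OR-proof per digit, which is exactly the communication/computation trade-off the paper quantifies.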
Last updated:  2023-02-15
SAT-aided Automatic Search of Boomerang Distinguishers for ARX Ciphers (Long Paper)
Dachao Wang, Baocang Wang, Siwei Sun
In Addition-Rotation-Xor (ARX) ciphers, the large domain size obstructs the application of the boomerang connectivity table. In this paper, we explore the problem of computing this table for modular addition and the automatic search of boomerang characteristics for ARX ciphers. We provide dynamic programming algorithms to efficiently compute this table and its variants; these algorithms are the most efficient to date. For the boomerang connectivity table, the execution time is $4^2(n − 1)$ simple operations, while the previous algorithm costs $8^2(n − 1)$ simple operations, which yields a smaller model in the searching phase. After rewriting these algorithms with Boolean expressions, we construct the corresponding Boolean Satisfiability Problem (SAT) models. Two automatic search frameworks are also proposed based on these models. This is the first time SAT-aided automatic search techniques have been applied to finding boomerang attacks on ARX ciphers. Finally, under these frameworks, we find the first verifiable 10-round boomerang trail for SPECK32/64, with probability $2^{-29.15}$, and a 12-round trail for SPECK48/72, with probability $2^{-44.15}$. These are the best such distinguishers so far. We also observe that the previous boomerang attacks on LEA were constructed with an incorrect computation of the boomerang connection probability; this is fixed by our frameworks.
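For reference, the boomerang connectivity table (BCT) the paper computes for modular addition is, for an ordinary invertible S-box, defined entry-wise as a count over all inputs. The naive computation below (for a small S-box) illustrates that definition and the exponential cost that makes it infeasible for the large domain of a modular addition; the paper's dynamic-programming algorithms are what replace this blow-up, and are not reproduced here.

```python
def bct(sbox):
    """Naive BCT for an invertible S-box given as a permutation list:
    BCT[di][do] = #{x : Sinv(S(x) ^ do) ^ Sinv(S(x ^ di) ^ do) = di}.
    Cost is O(2^(2n)) table entries times O(2^n) per entry, which is
    why this direct approach fails for n-bit modular addition."""
    n = len(sbox)
    inv = [0] * n
    for x, y in enumerate(sbox):
        inv[y] = x
    table = [[0] * n for _ in range(n)]
    for di in range(n):
        for do in range(n):
            table[di][do] = sum(
                inv[sbox[x] ^ do] ^ inv[sbox[x ^ di] ^ do] == di
                for x in range(n)
            )
    return table
```

For any permutation the (0, 0) entry equals the full domain size, reflecting that the boomerang always returns for trivial differences.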
Last updated:  2023-02-15
DenseQMC: an efficient bit-slice implementation of the Quine-McCluskey algorithm
Aleksei Udovenko
This note describes DenseQMC, a new efficient bit-slice implementation of the Quine-McCluskey algorithm for finding all prime implicants of a Boolean function in the dense case. It is practically feasible for n <= 23 when run on a common laptop, or for n <= 27 when run on a server with 1 TiB RAM. This note also outlines a very common mistake in implementations of the Quine-McCluskey algorithm that leads to a quadratic slowdown. An optimized, corrected implementation of the classic approach (called SparseQMC) is also given. The implementation is freely available at https://github.com/hellman/Quine-McCluskey .
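As background, the textbook Quine-McCluskey procedure repeatedly merges implicants that differ in exactly one fixed bit; whatever never merges is a prime implicant. The sketch below is that classic pairwise version for tiny functions, not the bit-sliced DenseQMC of the note (and it makes no attempt at the asymptotic fix the note discusses).

```python
from itertools import combinations

def prime_implicants(minterms):
    """Classic Quine-McCluskey prime-implicant search. An implicant is
    a pair (value, dashmask), where set bits of dashmask are "don't
    care" positions. Two implicants with equal dashmasks that differ
    in exactly one fixed bit merge into one implicant with that bit
    dashed; implicants that never merge are prime."""
    current = {(m, 0) for m in minterms}
    primes = set()
    while current:
        merged, used = set(), set()
        for a, b in combinations(current, 2):
            (v1, d1), (v2, d2) = a, b
            diff = v1 ^ v2
            if d1 == d2 and bin(diff).count("1") == 1:
                merged.add((v1 & ~diff, d1 | diff))
                used |= {a, b}
        primes |= current - used
        current = merged
    return primes
```

For the 2-variable function true on all of {0, 1, 2, 3}, the only prime implicant is the tautology, i.e. value 0 with both bits dashed.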
Last updated:  2023-08-02
Classical and quantum 3 and 4-sieves to solve SVP with low memory
Johanna Loyer, André Chailloux
The Shortest Vector Problem (SVP) is at the foundation of lattice-based cryptography. The fastest known method to solve SVP in dimension $d$ is lattice sieving, which runs in time $2^{td+o(d)}$ with $2^{md+o(d)}$ memory for constants $t,m \in \Theta(1)$. Searching for reducing vectors in the sieve reduces to the configuration problem, i.e., searching for $k$ vectors satisfying given constraints on their scalar products. In this work, we present a framework for $k$-sieve algorithms: we filter the input list of lattice vectors using a code structure modified from [BDGL16] to get lists centred around $k$ codewords summing to the null vector. Then, we solve a simpler instance of the configuration problem in the $k$ filtered lists. Based on this framework, we describe classical sieves for $k=3$ and $4$ that introduce new time-memory trade-offs. We also use the $k$-Lists algorithm [KMPM19] inside our framework, which improves the time for $k=3$ and gives new trade-offs for $k=4$.
Last updated:  2023-03-01
MixFlow: Assessing Mixnets Anonymity with Contrastive Architectures and Semantic Network Information
Reyhane Attarian, Esfandiar Mohammadi, Tao Wang, Emad Heydari Beni
Traffic correlation attacks have illustrated the challenges of protecting communication metadata, yet short flows, as in messaging applications like Signal, have been protected from prior traffic correlation attacks by practical Mixnets such as Loopix. This paper introduces a novel traffic correlation attack against short-flow applications like Signal that are tunneled through practical Mixnets like Loopix. We propose the MixFlow model, an approach for analyzing the unlinkability of communications through Mix networks, and carry out our analysis on Loopix as a prominent example. MixFlow is a contrastive model that looks for semantic relationships between entry and exit flows, even if the traffic is tunneled through Mixnets that protect metadata, as Loopix does via Poisson mixing delay and cover traffic. We use the MixFlow model to evaluate the resistance of Loopix Mix networks against an adversary that observes only the inflow and outflow of the Mixnet and tries to correlate communication flows. Our experiments indicate that the MixFlow model is exceptionally proficient in connecting end-to-end flows, even when the Poisson delay and cover traffic are increased. These findings challenge the conventional notion that adding Poisson mixing delay and cover traffic can obscure the metadata patterns and relationships between communicating parties. Despite the implementation of Poisson-mixing countermeasures in Mixnets, MixFlow is still capable of effectively linking end-to-end flows, enabling the extraction of meta-information and correlation between inflows and outflows. Our findings have important implications for existing Poisson-mixing techniques and open up new opportunities for analyzing the anonymity and unlinkability of communication protocols.
Last updated:  2023-05-04
Chopsticks: Fork-Free Two-Round Multi-Signatures from Non-Interactive Assumptions
Jiaxin Pan, Benedikt Wagner
Multi-signatures have been drawing lots of attention in recent years, due to their applications in cryptocurrencies. Most early constructions require three-round signing, and recent constructions have managed to reduce the round complexity to two. However, their security proofs are mostly based on non-standard, interactive assumptions (e.g. one-more assumptions) and come with a huge security loss, due to multiple uses of rewinding (aka the Forking Lemma). This renders the quantitative guarantees given by the security proof useless. In this work, we improve the state of the art by proposing two efficient two-round multi-signature schemes from the (standard, non-interactive) Decisional Diffie-Hellman (DDH) assumption. Both schemes are proven secure in the random oracle model without rewinding, and neither requires pairings. Our first scheme supports key aggregation but has a security loss linear in the number of signing queries, and our second scheme is the first tightly secure construction. A key ingredient in our constructions is a new homomorphic dual-mode commitment scheme for group elements that allows equivocation for messages of a certain structure. The definition and efficient construction of this commitment scheme are of independent interest.
Last updated:  2023-02-15
Flexible Password-Based Encryption: Securing Cloud Storage and Provably Resisting Partitioning-Oracle Attacks
Mihir Bellare, Laura Shea
We introduce flexible password-based encryption (FPBE), an extension of traditional password-based encryption designed to meet the operational and security needs of contemporary applications like end-to-end secure cloud storage. Operationally, FPBE supports nonces, associated data and salt reuse. Security-wise, it strengthens the usual privacy requirement, and, most importantly, adds an authenticity requirement, crucial because end-to-end security must protect against a malicious server. We give an FPBE scheme called DtE that is not only proven secure, but with good bounds. The challenge, with regard to the latter, is in circumventing partitioning-oracle attacks, which is done by leveraging key-robust (also called key-committing) encryption and a notion of authenticity with corruptions. DtE can be instantiated to yield an efficient and practical FPBE scheme for the target applications.
Last updated:  2023-02-15
On Two Factors Affecting the Efficiency of MILP Models in Automated Cryptanalyses
Shengyuan Xu, Xiutao Feng, Yongxing Wang
In recent years, mixed integer linear programming (MILP, in short) has gradually become a popular tool for automated cryptanalysis of symmetric ciphers; it can be used to search for differential characteristics and linear approximations with high probability/correlation. A key problem in the MILP method is how to build a proper model that can be solved efficiently by MILP solvers like Gurobi or Cplex. It is known that a MILP problem is NP-hard, and the numbers of variables and inequalities are two important measures of its scale and time complexity. Whilst the solution space and the variables in many MILP models built for symmetric cryptanalysis are fixed without introducing dummy variables, the cardinality, i.e., the number of inequalities, is a main factor that might affect the runtime of MILP models. We notice that the norm of a MILP model, i.e., the maximal absolute value of all coefficients in its inequalities, is also an important factor affecting its runtime. In this work we illustrate the effects of the two parameters, the cardinality and the norm of the inequalities, on the runtime of Gurobi through a large number of cryptanalysis experiments. We choose the popular MILP solver Gurobi and view it as a black box, construct a large number of MILP models with different cardinalities or norms by means of differential analyses and impossible-differential analyses of some classic block ciphers with an SPN structure, and observe their runtimes in Gurobi. As a result, our experiments show that although minimizing the number of inequalities and the norm of coefficients might not always minimize the runtime, it is still a better choice in most situations.
Last updated:  2023-02-14
A simpler alternative to Lucas–Lehmer–Riesel primality test
Pavel Atnashev
This paper investigates the application of Morrison's primality test to numbers of the form $k \cdot 2^n-1$ and finds a simple general formula, which is equivalent to the Lucas–Lehmer and Lucas–Lehmer–Riesel primality tests.
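For context, the special case $k = 1$ of the $k \cdot 2^n - 1$ family is the classical Lucas–Lehmer test for Mersenne numbers: iterate $s \mapsto s^2 - 2 \pmod{M_p}$ starting from 4, and $M_p$ is prime iff the $(p-2)$-th term vanishes. The Lucas–Lehmer–Riesel test uses the same iteration with a $k$-dependent starting value, whose selection is the delicate part that the paper's formula addresses; the sketch below covers only the $k = 1$ case.

```python
def lucas_lehmer(p: int) -> bool:
    """Lucas-Lehmer primality test for the Mersenne number M = 2**p - 1,
    where p is an odd prime. M is prime iff s_{p-2} == 0 where
    s_0 = 4 and s_{i+1} = s_i**2 - 2 (mod M). The LLR test for
    k * 2**n - 1 runs the same iteration from a k-dependent start."""
    if p == 2:
        return True  # M_2 = 3 is prime
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0
```

For example, $M_7 = 127$ and $M_{13} = 8191$ pass, while $M_{11} = 2047 = 23 \cdot 89$ fails.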
Last updated:  2023-02-14
Hull Attacks on the Lattice Isomorphism Problem
Léo Ducas, Shane Gibbons
The lattice isomorphism problem (LIP) asks one to find an isometry between two lattices. It has recently been proposed as a foundation for cryptography in two independent works [Ducas & van Woerden, EUROCRYPT 2022, Bennett et al. preprint 2021]. This problem is the lattice variant of the code equivalence problem, on which the notion of the hull of a code can lead to devastating attacks. In this work we study the cryptanalytic role of an adaptation of the hull to the lattice setting, namely, the $s$-hull. We first show that the $s$-hull is not helpful for creating an arithmetic distinguisher. More specifically, the genus of the $s$-hull can be efficiently predicted from $s$ and the original genus, and therefore carries no extra information. However, we also show that the hull can be helpful for geometric attacks: for certain lattices the minimal distance of the hull is relatively smaller than that of the original lattice, and this can be exploited. The attack cost remains exponential, but the constant in the exponent is halved. This second result gives a counterexample to the general hardness conjecture of LIP proposed by Ducas & van Woerden. Our results suggest that one should be very careful about the geometry of hulls when instantiating LIP for cryptography. They also point to unimodular lattices as attractive options, as they are equal to their duals and their hulls, leaving only the original lattice to an attacker. Remarkably, this is already the case in proposed instantiations, namely the trivial lattice $\mathbb{Z}^n$ and the Barnes-Wall lattices.
Last updated:  2023-08-23
Traceable Policy-Based Signatures with Delegation
Ismail Afia and Riham AlTawy
In PKC 2014, a policy-based signature (PBS) scheme was proposed by Bellare and Fuchsbauer, in which a signer can only sign messages conforming to some policy specified by an issuing authority. The PBS construction supports the delegation of signing policy keys with possible restrictions to the original policy. Although the PBS scheme is meant to restrict the signing privileges of the scheme’s users, signers could easily share their signing keys with others without being held accountable, since PBS does not have a tracing capability and a signing policy key defines a policy that need only be satisfied by the message. In this work, we build on PBS and propose a traceable policy-based signature scheme (TPBS), employing a rerandomizable signature scheme, a digital signature scheme, and a zero-knowledge proof system as building blocks. TPBS introduces the notion of anonymized identity keys that are used with the policy keys for signing; thus, it achieves traceability without compromising the delegatability feature of the PBS scheme. Additionally, TPBS ensures non-frameability under the assumption of a corrupted tracing authority. We define and formally prove the security notions of the generic TPBS scheme. Finally, we propose an instantiation of TPBS utilizing the Pointcheval–Sanders rerandomizable signature scheme, Abe et al.’s structure-preserving signature scheme, and the Groth-Sahai NIZK system, and analyze its efficiency.
Last updated:  2023-08-07
Faithful Simulation of Randomized BFT Protocols on Block DAGs
Hagit Attiya, Constantin Enea, Shafik Nassar
Byzantine Fault-Tolerant (BFT) protocols that are based on Directed Acyclic Graphs (DAGs) are attractive due to their many advantages in asynchronous blockchain systems. These DAG-based protocols can be viewed as a simulation of some BFT protocol on a DAG. Many DAG-based BFT protocols rely on randomization, since they are used for agreement and ordering of transactions, which cannot be achieved deterministically in asynchronous systems. Randomization is achieved either through local sources of randomness, or by employing shared objects that provide a common source of randomness, e.g., common coins. A DAG simulation of a randomized protocol should be faithful, in the sense that it precisely preserves the properties of the original BFT protocol, and in particular, their probability distributions. We argue that faithfulness is ensured by a forward simulation. We show how to faithfully simulate any BFT protocol that uses public coins and shared objects, like common coins.
Last updated:  2023-06-20
Beyond the Blockchain Address: Zero-Knowledge Address Abstraction
Sanghyeon Park, Jeong Hyuk Lee, Seunghwa Lee, Jung Hyun Chun, Hyeonmyeong Cho, MinGi Kim, Hyun Ki Cho, Soo-Mook Moon
Integrating traditional Internet (web2) identities with blockchain (web3) identities presents considerable obstacles. Conventional solutions typically employ a mapping strategy, linking web2 identities directly to specific blockchain addresses. However, this method can lead to complications such as fragmentation of identifiers across disparate networks. To address these challenges, we propose a novel scheme, Address Abstraction (AA), that circumvents the need for direct mapping. The AA scheme replaces the existing blockchain address system while maintaining essential properties, including a unique identifier, immutability of requests, and privacy preservation. This capability allows users to interact with the blockchain via their web2 identities, irrespective of the specific blockchain address, thereby eliminating limitations tied to a blockchain-specific address system. This mechanism fosters the seamless integration of web2 identities within web3 and promotes cross-chain compatibility. We also provide an application of AA, denoted zero-knowledge Address Abstraction (zkAA), which leverages zero-knowledge proofs to ensure the properties of AA. zkAA has been implemented as a smart contract, compatible with any existing contract-enabled blockchain. Our evaluation of zkAA on Ethereum demonstrates its efficiency. The average cost for registering an abstracted identity is approximately \$7.66, whereas publishing an abstracted transaction costs around \$4.75. In contrast, on Polygon, the associated costs are markedly lower: \$0.02 for registration and \$0.01 for publication, as of January 13, 2023. This empirical evaluation substantiates the feasibility of our proposed solution.
Last updated:  2023-08-28
Practical Security Analysis of Zero-Knowledge Proof Circuits
Hongbo Wen, Jon Stephens, Yanju Chen, Kostas Ferles, Shankara Pailoor, Kyle Charbonnet, Isil Dillig, and Yu Feng
As privacy-sensitive applications based on zero-knowledge proofs (ZKPs) gain increasing traction, there is a pressing need to detect vulnerabilities in ZKP circuits. This paper studies common vulnerabilities in Circom (the most popular domain-specific language for ZKP circuits) and describes a static analysis framework for detecting these vulnerabilities. Our technique operates over an abstraction called the circuit dependence graph (CDG) that captures key properties of the circuit and allows expressing semantic vulnerability patterns as queries over the CDG abstraction. We have implemented 9 different detectors using this framework and perform an experimental evaluation on over 258 circuits from popular Circom projects on Github. According to our evaluation, these detectors can identify vulnerabilities, including previously unknown ones, with high precision and recall.
Last updated:  2023-02-13
tlock: Practical Timelock Encryption from Threshold BLS
Nicolas Gailly, Kelsey Melissaris, Yolan Romailler
We present a practical construction and implementation of timelock encryption, in which a ciphertext is guaranteed to be decryptable only after some specified time has passed. We employ an existing threshold network, the League of Entropy, implementing threshold BLS [BLS01, B03] in the context of Boneh and Franklin's identity-based encryption (IBE) [BF01]. At present this threshold network broadcasts BLS signatures over each round number, equivalent to the current time interval, and as such can be considered a decentralised key holder periodically publishing private keys for the IBE where identities are the round numbers. A noticeable advantage of this scheme is that only the encryptors and decryptors are required to perform any additional cryptographic operations; the threshold network can remain unaware of the TLE and does not have to change to support the scheme. We also release an open-source implementation of our scheme and a live web page that can be used in production now relying on the existing League of Entropy network acting as a distributed public randomness beacon service using threshold BLS signatures.
Last updated:  2023-02-13
Cryptanalysis of a key agreement scheme using determinants and rectangular matrices
Daniel R. L. Brown
Hecht and Scolnik proposed key agreement using rectangular matrices and determinants. This report describes an attack.
Last updated:  2023-02-24
Towards Modular Foundations for Protocol Security
Lúcás Críostóir Meier
Universally composable (UC) security is the most widely used framework for analyzing the security of cryptographic protocols. Many variants and simplifications of the framework have been proposed and developed; nonetheless, many practitioners find UC proofs both difficult to construct and difficult to understand. We remedy this situation by proposing a new framework for protocol security. We believe that our framework yields proofs that are easier to write, more rigorous, and easier to understand. Our work is based on state-separable proofs, which allow for modular proofs by decomposing complicated protocols into simple components.
Last updated:  2023-02-13
Generic Models for Group Actions
Julien Duman, Dominik Hartmann, Eike Kiltz, Sabrina Kunzweiler, Jonas Lehmann, Doreen Riepel
We define the Generic Group Action Model (GGAM), an adaptation of the Generic Group Model to the setting of group actions (such as CSIDH). Compared to a previously proposed definition by Montgomery and Zhandry (ASIACRYPT'22), our GGAM more accurately abstracts the security properties of group actions. We are able to prove information-theoretic lower bounds in the GGAM for the discrete logarithm assumption, as well as for non-standard assumptions recently introduced in the setting of threshold and identification schemes on group actions. Unfortunately, in a natural quantum version of the GGAM, the discrete logarithm assumption does not hold. To this end we also introduce the weaker Quantum Algebraic Group Action Model (QAGAM), where every set element (in superposition) output by an adversary is required to have an explicit representation relative to known elements. In contrast to the Quantum Generic Group Action Model, in the QAGAM we are able to analyze the hardness of group action assumptions: We prove (among other things) the equivalence between the discrete logarithm assumption and non-standard assumptions recently introduced in the setting of QROM security for Password-Authenticated Key Exchange, Non-Interactive Key Exchange, and Public-Key Encryption.
Last updated:  2024-01-06
The Last Yard: Foundational End-to-End Verification of High-Speed Cryptography
Philipp G. Haselwarter, Benjamin Salling Hvass, Lasse Letager Hansen, Théo Winterhalter, Catalin Hritcu, and Bas Spitters
The field of high-assurance cryptography is quickly maturing, yet a unified foundational framework for end-to-end formal verification of efficient cryptographic implementations is still missing. To address this gap, we use the Coq proof assistant to formally connect three existing tools: (1) the Hacspec emergent cryptographic specification language; (2) the Jasmin language for efficient, high-assurance cryptographic implementations; and (3) the SSProve foundational verification framework for modular cryptographic proofs. We first connect Hacspec with SSProve by devising a new translation from Hacspec specifications to imperative SSProve code. We validate this translation by considering a second, more standard translation from Hacspec to purely functional Coq code and generate a proof of the equivalence between the code produced by the two translations. We further define a translation from Jasmin to SSProve, which allows us to formally reason in SSProve about efficient cryptographic implementations in Jasmin. We prove this translation correct in Coq with respect to Jasmin's operational semantics. Finally, we demonstrate the usefulness of our approach by giving a foundational end-to-end Coq proof of an efficient AES implementation. For this case study, we start from an existing Jasmin implementation of AES that makes use of hardware acceleration and prove that it conforms to a specification of the AES standard written in Hacspec. We use SSProve to formalize the security of the encryption scheme based on the Jasmin implementation of AES.
Last updated:  2023-06-06
Quantum Linear Key-recovery Attacks Using the QFT
André Schrottenloher
The Quantum Fourier Transform is a fundamental tool in quantum cryptanalysis. In symmetric cryptanalysis, hidden shift algorithms such as Simon's (FOCS 1994), which rely on the QFT, have been used to obtain structural attacks on some very specific block ciphers. The Fourier Transform is also used in classical cryptanalysis, for example in FFT-based linear key-recovery attacks introduced by Collard et al. (ICISC 2007). Whether such techniques can be adapted to the quantum setting has so far remained an open question. In this paper, we introduce a new framework for quantum linear key-recovery attacks using the QFT. These attacks loosely follow the classical method of Collard et al., in that they rely on the fast computation of a ``correlation state'' in which experimental correlations, rather than being directly accessible, are encoded in the amplitudes of a quantum state. The experimental correlation is a statistic that is expected to be higher for the good key, and under some conditions, the increased amplitude creates a speedup with respect to an exhaustive search of the key. The same method also yields a new family of structural attacks, and new examples of quantum speedups beyond quadratic using classical known-plaintext queries.
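The classical technique of Collard et al. referenced above rests on a fast Walsh-Hadamard transform: it converts a table of experimental counters into the correlations for all key guesses at once in $O(N \log N)$ operations. A minimal unnormalized FWHT looks as follows (the paper's quantum attacks encode the analogous quantities in amplitudes via the QFT; this sketch covers only the classical transform).

```python
def fwht(v):
    """Unnormalized fast Walsh-Hadamard transform of a list whose
    length is a power of two, in O(N log N) butterfly operations.
    Applying it twice multiplies every entry by N. In FFT-based
    linear key recovery it maps a counter table to the experimental
    correlations for all key guesses simultaneously."""
    v = list(v)
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            for j in range(i, i + h):
                x, y = v[j], v[j + h]
                v[j], v[j + h] = x + y, x - y
        h *= 2
    return v
```

A delta input spreads to a constant vector, and shifting the delta flips signs according to the Walsh characters, mirroring how correlations index key guesses.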
Last updated:  2023-07-27
Maravedí: A Secure and Practical Protocol to Trade Risk for Instantaneous Finality
Mario Larangeira, Maxim Jourenko
The efficiency of blockchain systems is often compared to that of popular credit card networks with respect to the transactions-per-second rate. This seems an unfair comparison, since these networks do not complete a transaction from beginning to end; rather, they buy the risk and settle it much later. Typically, transactions have only two players, the payer and the payee, and the settlement of a transaction requires time, since it depends on basic properties of the consensus protocol. In practice, the payee very often needs to wait for confirmation in order to ship the traded goods; alternatively, the payee, or merchant, can ship in the faith that the transaction will be confirmed. Our contribution, the Maravedí Protocol, introduces a third player to minimize the risk that the payee is left without the payment, even before consensus-layer confirmation. The main idea is that the third player can work similarly to a credit card company: it buys the risk from the merchant at a small discount and pays the merchant instantaneously via a payment-channel-like protocol. In parallel, the third player receives the regular payment transaction from the payer, which can be settled on-chain after the usual consensus/blockchain waiting time. Moreover, the on-chain transaction pays the full amount, allowing the third player to cash in the discount. Hence, on the merchant's side, our protocol puts forth instantaneous finality in a way that is, to the best of our knowledge, novel.
Last updated:  2024-09-18
CAPYBARA and TSUBAKI: Verifiable Random Functions from Group Actions and Isogenies
Yi-Fu Lai
In this work, we introduce two post-quantum Verifiable Random Function (VRF) constructions based on abelian group actions and isogeny group actions with a twist. The former relies on the standard group action Decisional Diffie-Hellman (GA-DDH) assumption. VRFs serve as cryptographic tools allowing users to generate pseudorandom outputs along with publicly verifiable proofs. Moreover, the residual pseudorandomness of VRFs ensures the pseudorandomness of unrevealed inputs, even when multiple outputs and proofs are disclosed. Our work aims at addressing the growing demand for post-quantum VRFs, as existing constructions based on elliptic curve cryptography (ECC) or classical DDH-type assumptions are vulnerable to quantum threats. In our contributions, our two VRF constructions, rooted in number-theoretic pseudorandom functions, are both simple and secure in the random oracle model. We introduce a new proof system for the factorization of group actions and set elements, serving as the proofs for our VRFs. The first proposal is based on the standard GA-DDH problem, and for its security proof, we introduce the master Decisional Diffie-Hellman problem over group actions, proving its equivalence to the standard GA-DDH problem. In the second construction, we leverage quadratic twists to enhance efficiency, reducing the key size and the proof size while expanding the input size. The scheme is based on the square GA-DDH problem. Moreover, we employ advanced techniques from the isogeny literature to optimize the proof size to 39 KB and 34 KB using CSIDH-512 without compromising VRF notions. The schemes feature fast evaluations but exhibit slower proof generation. To the best of our knowledge, these constructions represent the first two provably secure VRFs based on isogenies.
Last updated:  2023-02-13
Fully Automated Differential-Linear Attacks against ARX Ciphers
Emanuele Bellini, David Gerault, Juan Grados, Rusydi Makarim, Thomas Peyrin
In this paper, we present a fully automated tool for differential-linear attacks using Mixed-Integer Linear Programming (MILP) and Mixed-Integer Quadratic Constraint Programming (MIQCP) techniques, which is, to the best of our knowledge, the very first attempt to fully automate such attacks. We use this tool to improve the correlations of the best 9- and 10-round differential-linear distinguishers on Speck32/64, and reach 11 rounds for the first time. Furthermore, we improve the latest 14-round key-recovery attack against Speck32/64, using differential-linear distinguishers obtained with our MILP/MIQCP tool. The techniques we present are generic and can be applied to other ARX ciphers as well.
Last updated:  2023-05-29
Asymmetric Trapdoor Pseudorandom Generators: Definitions, Constructions, and Applications to Homomorphic Signatures with Shorter Public Keys
Jinpeng Hou, Yansong Gao, Anmin Fu, Jie Chen, Xiaofeng Chen, Yuqing Zhang, Willy Susilo, Josef Pieprzyk
We introduce a new primitive called the asymmetric trapdoor pseudorandom generator (ATPRG), which can be viewed either as a pseudorandom generator with two additional trapdoors (a public trapdoor and a secret trapdoor) or as a backdoor pseudorandom generator with one additional trapdoor (a secret trapdoor). Specifically, for users who know neither trapdoor, an ATPRG only generates public pseudorandom numbers $pr_1,\dots,pr_N$, so it behaves exactly like a pseudorandom generator. Users holding the public trapdoor, however, can use any single public pseudorandom number $pr_i$ to recover the whole $pr$ sequence, as in a backdoor pseudorandom generator. Further, users holding the secret trapdoor can use the $pr$ sequence to generate a sequence $sr_1,\dots,sr_N$ of secret pseudorandom numbers. ATPRG can help design more space-efficient protocols in which data/inputs/messages must respect a predefined (unchangeable) order to be processed correctly in a computation or malleable cryptographic system. As an application of ATPRG, we construct the first homomorphic signature scheme (in the standard model) whose public key size is only $O(T)$, independent of the dataset size. For comparison, the shortest previously known public key has size $O(\sqrt{N}+\sqrt{T})$, proposed by Catalano et al. (CRYPTO'15), where $N$ is the dataset size and $T$ is the dimension of the messages. In other words, we provide the first homomorphic signature scheme with $O(1)$-sized public keys for one-dimensional messages.
Last updated:  2023-02-13
TS-Hash: a lightweight cryptographic hash family based on Galois LFSRs
Itay Bookstein, Boaz Tsaban
We study a novel family of cryptographic hash functions based on Galois linear feedback shift registers (LFSRs), and identify initial guidelines for choosing secure parameters for this family. These hash functions are extremely simple, efficient, and suitable for implementation in constrained environments.
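To make the underlying primitive concrete, here is a minimal Python sketch of a right-shifting Galois LFSR. The 16-bit tap mask below is the classic maximal-length example (feedback polynomial $x^{16}+x^{14}+x^{13}+x^{11}+1$); it is purely illustrative and does not reproduce the paper's parameter choices.

```python
def galois_lfsr_step(state, taps):
    """One clock of a right-shifting Galois LFSR: output the low bit, shift,
    and XOR the tap mask into the state when the output bit is 1."""
    out = state & 1
    state >>= 1
    if out:
        state ^= taps
    return state, out

# Illustrative maximal-length 16-bit taps (x^16 + x^14 + x^13 + x^11 + 1),
# giving period 2^16 - 1; not the parameters studied in the paper.
TAPS = 0xB400

def keystream(seed, n):
    """Generate n output bits from the LFSR, starting from a nonzero seed."""
    state, bits = seed, []
    for _ in range(n):
        state, b = galois_lfsr_step(state, TAPS)
        bits.append(b)
    return bits
```

In Galois form the feedback is applied in parallel to several state bits at once, which is what makes the construction cheap in constrained hardware.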
Last updated:  2023-02-16
Rotational-XOR Differential Rectangle Cryptanalysis on Simon-like Ciphers
Siwei Chen, Mingming Zhu, Zejun Xiang, Runqing Xu, Xiangyong Zeng, Shasha Zhang
In this paper, we propose a rectangle-like method called \textit{rotational-XOR differential rectangle} attack to search for better distinguishers. It combines rotational-XOR cryptanalysis and differential cryptanalysis in the rectangle-based way. In particular, we put a rotational-XOR characteristic before a differential characteristic to construct a rectangle structure. By choosing appropriate rotational-XOR and differential characteristics, and by considering multiple differentials, longer distinguishers with probability greater than $2^{-2n}$ can be constructed effectively, where $n$ is the block size of the cipher. We apply this new method to several versions of the \textsc{Simon} and \textsc{Simeck} block ciphers. As a result, we obtain rotational-XOR differential rectangle distinguishers of up to 16, 16, 17, 16 and 21 rounds for \textsc{Simon}32/64, \textsc{Simon}48/72, \textsc{Simon}48/96, \textsc{Simeck}32 and \textsc{Simeck}48, respectively. Our distinguisher for \textsc{Simon}32/64 is longer than the best differential and rotational-XOR distinguishers. As for \textsc{Simon}48/96, the distinguisher is longer than the rotational-XOR distinguisher and as long as the best differential distinguisher. Also, our distinguisher for \textsc{Simeck}32 is longer than the best differential distinguisher (14 rounds) and has the full weak key space (i.e., $2^{64}$), whereas the 16-round rotational-XOR distinguisher has a weak key class of $2^{36}$. In addition, our distinguisher for \textsc{Simeck}48 has a larger weak key class ($2^{72}$ weak keys) than the 21-round rotational-XOR distinguisher ($2^{60}$ weak keys). To the best of our knowledge, this is the first work to combine rotational-XOR and differential cryptanalysis using the rectangle structure.
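For readers unfamiliar with the rotational-XOR setting, the RX-difference of a pair of words is the XOR of one word with a rotation of the other, and it propagates linearly through the XOR and rotation operations of ARX/AND-RX ciphers. A small illustrative sketch (16-bit words and rotation amount 1 are arbitrary choices, not parameters from the paper):

```python
def rotl(x, r, n=16):
    """Rotate an n-bit word x left by r positions."""
    return ((x << r) | (x >> (n - r))) & ((1 << n) - 1)

def rx_difference(x, x2, r=1, n=16):
    """Rotational-XOR difference of the pair (x, x'): rotl(x, r) XOR x'."""
    return rotl(x, r, n) ^ x2

# RX-differences pass linearly through XOR: if (x, x') has RX-difference a
# and (y, y') has RX-difference b, then (x ^ y, x' ^ y') has RX-difference
# a ^ b, because rotation distributes over XOR.
```

Nonlinear operations (modular addition, bitwise AND) only propagate RX-differences probabilistically, which is what the characteristic search has to account for.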
Last updated:  2023-05-13
The geometric interpretation of the Tate pairing and its applications
Damien Robert
While the Weil pairing is geometric, the Tate pairing is arithmetic: its value depends on the base field considered. Nevertheless, the étale topology allows one to interpret the Galois action in a geometric manner. In this paper, we discuss this point of view for the Tate pairing: its natural geometric interpretation is that it gives étale $\mu_n$-torsors. While well known to experts, this interpretation is perhaps less known in the cryptographic community. As an application, we explain how to use the Tate pairing to study the fibers of an isogeny, and we prove a conjecture by Castryck and Decru on multiradical isogenies.
Last updated:  2024-02-18
A New Algebraic Approach to the Regular Syndrome Decoding Problem and Implications for PCG Constructions
Pierre Briaud and Morten Øygarden
The Regular Syndrome Decoding (RSD) problem, a variant of the Syndrome Decoding problem with a particular error distribution, was introduced almost 20 years ago by Augot et al. In this problem, the error vector is divided into equally sized blocks, each containing a single noisy coordinate. The last five years have seen increased interest in this assumption due to its use in MPC and ZK applications. Generally referred to as "LPN with regular noise" in this context, the assumption allows for better efficiency compared to plain LPN. We present the first attack on RSD relying on Gröbner basis techniques. After a careful theoretical analysis of the underlying polynomial system, we propose concrete attacks that are able to take advantage of the regular noise distribution. In particular, we identify several examples of concrete parameters where our techniques outperform other algorithms.
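The regular noise distribution described above is easy to state in code: the length-$n$ error vector is split into equally sized blocks, and exactly one coordinate per block is noisy. A minimal sketch (sampler shape is illustrative; block count and length are free parameters, not values from the paper):

```python
import random

def regular_error_vector(n, blocks, seed=None):
    """Sample a regular-noise error vector of length n: the vector is split
    into `blocks` equally sized blocks, each holding exactly one nonzero
    coordinate (over GF(2), so the noisy entry is 1)."""
    assert n % blocks == 0, "block length must divide n"
    rng = random.Random(seed)
    size = n // blocks
    e = [0] * n
    for b in range(blocks):
        e[b * size + rng.randrange(size)] = 1
    return e
```

The structure shrinks the search space from all weight-$t$ vectors to one choice per block, which is exactly the regularity that both the efficient constructions and the attack above exploit.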
Last updated:  2023-02-12
Linear codes of Schubert type and quadratic public keys of Multivariate Cryptography
Vasyl Ustimenko
Studies of linear codes in terms of finite projective geometries form a traditional direction in Coding Theory, and some applications of projective geometries are known. Noncommutative groups and semigroups defined in terms of projective geometries can serve as platforms for protocols of Post-Quantum Cryptography. We introduce the idea of Multivariate Cryptography public keys given by quadratic public rules generated via walks on incidence substructures of a projective geometry with vertices from the two largest Schubert cells. It differs from the known algorithms of Code-Based Cryptography and can be considered the first attempt to combine ideas of that area with the approach of Multivariate Cryptography.
Last updated:  2023-02-12
Improved Heuristics for Low-latency Implementations of Linear Layers
Qun Liu, Zheng Zhao, Meiqin Wang
In many applications, low area and low latency are required for the chip-level implementation of cryptographic primitives, and low-cost implementations of linear layers usually play a crucial role for symmetric ciphers. Heuristic methods, such as the forward search and the backward search, minimize the number of XOR gates of the linear layer under the minimum-latency limitation. To further optimize such implementations, we put forward a new general search framework in this paper, incorporating the division optimization and extending-base techniques. In terms of both the number of XOR gates and the searching time, our new search algorithm outperforms the previous heuristics, including the forward search and the backward search, on the matrices they provide. We obtain an improved implementation of AES MixColumns requiring only 102 XORs under minimum latency, beating the previous best record, which was provided by the forward search.
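The baseline these heuristics improve on is the schoolbook XOR count of a binary matrix: each output row of Hamming weight $w$ costs $w-1$ XOR gates, and savings come from reusing common subexpressions. A small illustrative sketch (the toy matrix is an assumption for demonstration, not one from the paper):

```python
def naive_xor_count(matrix):
    """XOR gates for the schoolbook implementation of a binary matrix:
    each output row with Hamming weight w costs w - 1 two-input XORs."""
    return sum(sum(row) - 1 for row in matrix)

# Toy 3x4 matrix over GF(2); the first two rows share the subexpression x0 ^ x1.
M = [[1, 1, 1, 0],
     [1, 1, 0, 1],
     [0, 0, 1, 1]]
# Schoolbook cost: (3-1) + (3-1) + (2-1) = 5 XORs.
# Reusing t = x0 ^ x1: y0 = t ^ x2, y1 = t ^ x3, y2 = x2 ^ x3 -> 4 XORs,
# at the price of one extra gate level on the shared path, which is why
# latency-constrained searches like the one above are nontrivial.
```

Minimum-latency search must find such sharings without letting any output's gate depth exceed the depth bound.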
Last updated:  2023-11-22
Degree-$D$ Reverse Multiplication-Friendly Embeddings: Constructions and Applications
Daniel Escudero, Cheng Hong, Hongqing Liu, Chaoping Xing, and Chen Yuan
In the recent work of (Cheon & Lee, Eurocrypt'22), the concept of a degree-$D$ packing method was formally introduced, which captures the idea of embedding multiple elements of a smaller ring into a larger ring, so that element-wise multiplication in the former is somewhat "compatible" with the product in the latter. That work presented several optimal bounds and results, and furthermore generalized the concept from one multiplication to degrees larger than two. These packing methods encompass several constructions seen in the literature in contexts like secure multiparty computation and fully homomorphic encryption. One such construction is the concept of reverse multiplication-friendly embeddings (RMFEs), which are essentially degree-2 packing methods. In this work we generalize the notion of RMFEs to \emph{degree-$D$ RMFEs} which, in spite of being "more algebraic" than packing methods, turn out to be essentially equivalent. Then, we present a general construction of degree-$D$ RMFEs by generalizing the ideas on algebraic geometry used to construct traditional degree-$2$ RMFEs which, by the aforementioned equivalence, leads to explicit constructions of packing methods. Furthermore, our theory is given in a unified manner for general Galois rings, which include both rings of the form $\mathbb{Z}_{p^k}$ and fields like $\mathbb{F}_{p^k}$, which have been treated separately in prior works. We present multiple concrete sets of parameters for degree-$D$ RMFEs (including $D=2$), which can be useful for future works. Finally, we apply our RMFEs to the task of non-interactively generating high-degree correlations for secure multiparty computation protocols. This requires the use of Shamir secret sharing for a large number of parties, which is known to require large-degree Galois ring extensions.
Our RMFE enables the generation of such preprocessing data over small rings, without paying for the multiplicative overhead incurred by using Galois ring extensions of large degree. For our application we also construct along the way, as a side contribution of potential independent interest, a pseudo-random secret-sharing solution for non-interactive generation of packed Shamir sharings over Galois rings with structured secrets, inspired by the PRSS solutions from (Benhamouda et al., TCC 2021).
Last updated:  2024-03-14
Impossibility of Efficient Information-Theoretic Fuzzy Extraction
Benjamin Fuller
Fuzzy extractors convert noisy signals from the physical world into reliable cryptographic keys. Fuzzy min-entropy is an important measure of the ability of a fuzzy extractor to distill keys from a distribution: in particular, it bounds the length of the key that can be derived (Fuller, Reyzin, and Smith, IEEE Transactions on Information Theory 2020). In general, fuzzy min-entropy that is superlogarithmic in the security parameter is required for a noisy distribution to be suitable for key derivation. There is a wide gap between what is possible with respect to computational and information-theoretic adversaries. Under the assumption of general-purpose obfuscation, keys can be securely derived from all distributions with superlogarithmic entropy. Against information-theoretic adversaries, however, it is impossible to build a single fuzzy extractor that works for all distributions (Fuller, Reyzin, and Smith, IEEE Transactions on Information Theory 2020). A weaker information-theoretic goal is to build a fuzzy extractor for each particular probability distribution. This is the approach taken by Woodage et al. (Crypto 2017). Prior approaches use the full description of the probability mass function and are inefficient. We show this is inherent: for a quarter of distributions with fuzzy min-entropy and $2^k$ points, there is no secure fuzzy extractor that uses fewer than $2^{\Theta(k)}$ bits of information about the distribution. This result rules out the possibility of efficient, information-theoretic fuzzy extractors for many distributions with fuzzy min-entropy. We show an analogous result with stronger parameters for information-theoretic secure sketches. Secure sketches are frequently used to construct fuzzy extractors.