All papers in 2023 (Page 5 of 1971 results)

Last updated:  2023-10-11
Faulting Winternitz One-Time Signatures to forge LMS, XMSS, or SPHINCS+ signatures
Alexander Wagner, Vera Wesselkamp, Felix Oberhansl, Marc Schink, and Emanuele Strieder
Hash-based signature (HBS) schemes are an efficient method of guaranteeing the authenticity of data in a post-quantum world. The stateful schemes LMS and XMSS and the stateless scheme SPHINCS+ are already standardised or will be in the near future. The Winternitz one-time signature (WOTS) scheme is one of the fundamental building blocks used in all these HBS standardisation proposals. We present a new fault injection attack targeting WOTS that allows an adversary to forge signatures for arbitrary messages. The attack affects both the signing and verification processes of all current stateful and stateless schemes. Our attack renders the checksum calculation within WOTS useless. A successful fault injection allows at least an existential forgery attack and, in more advanced settings, a universal forgery attack. While checksum computation is clearly a critical point in WOTS, and thus in any of the relevant HBS schemes, its resilience against a fault attack has never been considered. To fill this gap, we theoretically explain the attack, estimate its practicability, and derive the brute-force complexity to achieve signature forgery for a variety of parameter sets. We analyse the reference implementations of LMS, XMSS and SPHINCS+ and pinpoint the vulnerable points. To harden these implementations, we propose countermeasures and evaluate their effectiveness and efficiency. Our work shows that exposed devices running signature generation or verification with any of these three schemes must have countermeasures in place.
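To make the checksum's role concrete, here is a minimal Python sketch of a WOTS-style signing flow (toy parameters and illustrative names, not the LMS/XMSS/SPHINCS+ reference code the paper analyzes); the closing comment indicates why a faulted checksum removes the forgery barrier:

```python
# Minimal WOTS-style sketch: illustrates why the checksum guards against forgery.
import hashlib

w = 16          # Winternitz parameter
n_msg = 4       # number of message chunks (toy size)

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def chain(x: bytes, steps: int) -> bytes:
    for _ in range(steps):
        x = H(x)
    return x

def checksum(chunks):
    # If an attacker increases any message chunk, the checksum decreases,
    # which would require inverting a hash chain -- hence unforgeability.
    return sum(w - 1 - c for c in chunks)

def sign(sk_chunks, msg_chunks):
    csum = checksum(msg_chunks)
    csum_chunks = [(csum >> 4) & 0xF, csum & 0xF]   # base-w digits of checksum
    all_chunks = msg_chunks + csum_chunks
    return [chain(sk, c) for sk, c in zip(sk_chunks, all_chunks)]

sk = [bytes([i]) * 32 for i in range(n_msg + 2)]    # toy secret chains
sig = sign(sk, [3, 7, 0, 15])
# If a fault forces `checksum` to a constant (e.g., always 0), an adversary
# holding one signature can advance the exposed chain values forward and
# forge signatures on any message with chunk-wise larger digits.
```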
Last updated:  2023-10-11
Key Filtering in Cube Attacks from the Implementation Aspect
Hao Fan, Yonglin Hao, Qingju Wang, Xinxin Gong, and Lin Jiao
In cube attacks, key filtering is a basic step of identifying the correct key candidates by referring to the truth tables of superpolies. When the terms of superpolies get massive, the truth table lookup complexity of key filtering increases significantly. In this paper, we propose the concept of implementation dependency, dividing all cube attacks into two categories: implementation dependent and implementation independent. Implementation dependent cube attacks are only feasible under the assumption that one encryption oracle query is more costly than one table lookup. On the contrary, implementation independent cube attacks remain feasible in the extreme case where encryption oracles are implemented in the full-codebook manner, making one encryption query equivalent to one table lookup. From this point of view, we scrutinize existing cube attack results on the stream ciphers Trivium, Grain-128AEAD, Acorn and Kreyvium. As a result, many of them turn out to be implementation dependent. Combined with the degree evaluation and divide-and-conquer techniques used for superpoly recovery, we further propose new cube attack results on Kreyvium reduced to 898, 899 and 900 rounds. These new results not only reach the highest number of rounds so far but are also implementation independent.
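As a toy illustration of the key-filtering step itself (with a random 3-bit Boolean function standing in for a recovered superpoly), the sketch below compares each candidate key's superpoly value against the observed cube sum, at the cost of one table lookup per candidate:

```python
# Toy key-filtering step of a cube attack (illustrative only).
import itertools, random

random.seed(1)
# Assume the superpoly p(k) over 3 key bits was recovered offline as a truth table.
truth_table = {k: random.randint(0, 1) for k in itertools.product((0, 1), repeat=3)}

true_key = (1, 0, 1)
observed_bit = truth_table[true_key]   # in practice: sum of cipher outputs over the cube

# Key filtering: keys whose superpoly value disagrees with the cube sum are discarded.
survivors = [k for k in truth_table if truth_table[k] == observed_bit]
print(f"{len(survivors)}/8 candidates survive; each superpoly filters ~1 bit")
```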
Last updated:  2024-08-30
Jackpot: Non-Interactive Aggregatable Lotteries
Nils Fleischhacker, Mathias Hall-Andersen, Mark Simkin, and Benedikt Wagner
In proof-of-stake blockchains, liveness is ensured by repeatedly selecting random groups of parties as leaders, who are then in charge of proposing new blocks and driving consensus forward. The lotteries that elect those leaders need to ensure that adversarial parties are not elected disproportionately often and that an adversary can not tell who was elected before those parties decide to speak, as this would potentially allow for denial-of-service attacks. Whenever an elected party speaks, it needs to provide a winning lottery ticket, which proves that the party did indeed win the lottery. Current solutions require all published winning tickets to be stored individually on-chain, which introduces undesirable storage overheads. In this work, we introduce non-interactive aggregatable lotteries and show how these can be constructed efficiently. Our lotteries provide the same security guarantees as previous lottery constructions, but additionally allow any third party to take a set of published winning tickets and aggregate them into one short digest. We provide a formal model of our new primitive in the universal composability framework. As one of our technical contributions, which may be of independent interest, we introduce aggregatable vector commitments with simulation-extractability and present a concretely efficient construction thereof in the algebraic group model in the presence of a random oracle. We show how these commitments can be used to construct non-interactive aggregatable lotteries. We have implemented our construction, called Jackpot, and provide benchmarks that underline its concrete efficiency.
Last updated:  2024-06-04
Advancing Scalability in Decentralized Storage: A Novel Approach to Proof-of-Replication via Polynomial Evaluation
Giuseppe Ateniese, Foteini Baldimtsi, Matteo Campanelli, Danilo Francati, and Ioanna Karantaidou
Proof-of-Replication (PoRep) plays a pivotal role in decentralized storage networks, serving as a mechanism to verify that provers consistently store retrievable copies of specific data. While PoRep’s utility is unquestionable, its implementation in large-scale systems, such as Filecoin, has been hindered by scalability challenges. Most existing PoRep schemes, such as Fisch’s (Eurocrypt 2019), face an escalating number of challenges and growing computational overhead as the number of stored files increases. This paper introduces a novel PoRep scheme distinctively tailored for expansive decentralized storage networks. At its core, our approach hinges on polynomial evaluation, diverging from the probabilistic checking prevalent in prior works. Remarkably, our design requires only a single challenge, irrespective of the number of files, ensuring both prover’s and verifier’s run-times remain manageable even as file counts soar. Our approach introduces a paradigm shift in PoRep designs, offering a blueprint for highly scalable and efficient decentralized storage solutions.
Last updated:  2024-06-12
Not Just Regular Decoding: Asymptotics and Improvements of Regular Syndrome Decoding Attacks
Andre Esser and Paolo Santini
Cryptographic constructions often base security on structured problem variants to enhance efficiency or to enable advanced functionalities. This led to the introduction of the Regular Syndrome Decoding (RSD) problem, which guarantees that a solution to the Syndrome Decoding (SD) problem follows a particular block-wise structure. Despite recent attacks exploiting that structure by Briaud and Øygarden (Eurocrypt ’23) and Carozza, Couteau and Joux (CCJ, Eurocrypt ’23), many questions about the impact of the regular structure on the problem hardness remain open. In this work we initiate a systematic study of the hardness of the RSD problem starting from its asymptotics. We classify different parameter regimes, revealing large regimes for which RSD instances are solvable in polynomial time and, on the other hand, regimes that lead to particularly hard instances. Contrary to previous perceptions, we show that a classification solely based on the uniqueness of the solution is not sufficient for isolating the worst-case parameters. Further, we provide an in-depth comparison between SD and RSD in terms of reducibility and computational complexity, identifying regimes in which RSD instances are actually harder to solve. We provide the first asymptotic analyses of the algorithms presented by CCJ, establishing their worst-case decoding complexities as $2^{0.141n}$ and $2^{0.135n}$, respectively. We then introduce regular-ISD algorithms by showing how to tailor the whole machinery of advanced Information Set Decoding (ISD) techniques from attacking SD to the RSD setting. The fastest regular-ISD algorithm improves the worst-case decoding complexity significantly to $2^{0.112n}$. Finally, we show that regular-ISD also outperforms previous approaches for the suggested parameters in most cases, reducing security levels by up to 30 bits.
Last updated:  2023-10-11
Depth-Optimized Quantum Implementation of ARIA
Yujin Yang, Kyungbae Jang, Yujin Oh, and Hwajeong Seo
The advancement of large-scale quantum computers poses a threat to the security of current encryption systems. In particular, symmetric-key cryptography is significantly impacted by generic attacks using Grover's search algorithm. In recent years, studies have been presented to estimate the complexity of Grover's key search for symmetric-key ciphers and assess their post-quantum security. In this paper, we propose a depth-optimized quantum circuit implementation for ARIA, a symmetric-key cipher included as a validation target in the Korean Cryptographic Module Validation Program (KCMVP). Our quantum circuit implementation for ARIA improves the depth by more than 88.2% and the Toffoli depth by more than 98.7% compared to the implementation presented in Chauhan et al.'s SPACE'20 paper. Finally, we present the cost of Grover's key search for our circuit and evaluate the post-quantum security strength of ARIA according to the relevant evaluation criteria provided by NIST.
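For readers who want the flavor of such cost estimates, the sketch below computes a rough Grover key-search cost from a circuit's depth and gate count; the input figures are placeholders, not the paper's ARIA numbers:

```python
# Rough Grover key-search cost estimate (illustrative figures only).
from math import pi, floor, log2

def grover_cost(key_bits: int, oracle_depth: int, oracle_gates: int):
    iters = floor((pi / 4) * 2 ** (key_bits / 2))   # optimal iteration count
    # Each Grover iteration runs the cipher oracle roughly twice
    # (compute and uncompute around the phase flip).
    total_depth = 2 * iters * oracle_depth
    total_gates = 2 * iters * oracle_gates
    return log2(total_depth), log2(total_gates)

d, g = grover_cost(key_bits=128, oracle_depth=2**12, oracle_gates=2**20)
print(f"log2(depth) ~ {d:.1f}, log2(gates) ~ {g:.1f}")  # compare to NIST MAXDEPTH
```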
Last updated:  2023-10-11
Optimized Quantum Implementation of SEED
Yujin Oh, Kyungbae Jang, Yujin Yang, and Hwajeong Seo
With the advancement of quantum computers, it has been demonstrated that Shor's algorithm enables public-key cryptographic attacks to be performed in polynomial time. In response, NIST conducted a Post-Quantum Cryptography Standardization competition. Additionally, since Grover's algorithm can reduce the complexity of symmetric-key cryptographic attacks to the square root of the classical cost, it is increasingly challenging to consider symmetric-key cryptography secure. In order to establish secure post-quantum cryptographic systems, the post-quantum security of cryptographic algorithms must be evaluated against quantum attacks. Consequently, NIST is estimating post-quantum security strengths, driving active research in quantum cryptanalysis. In this regard, this paper presents a depth-optimized quantum circuit implementation for SEED, a symmetric-key encryption algorithm included in the Korean Cryptographic Module Validation Program (KCMVP). Building upon our implementation, we conduct a thorough assessment of the post-quantum security of SEED. Ours is the first quantum circuit implementation for this cipher.
Last updated:  2023-10-11
Finding Shortest Vector Using Quantum NV Sieve on Grover
Hyunji Kim, Kyoungbae Jang, Yujin Oh, Woojin Seok, Wonhuck Lee, Kwangil Bae, Ilkwon Sohn, and Hwajeong Seo
Quantum computers, especially those with over 10,000 qubits, pose a potential threat to current public-key cryptography systems like RSA and ECC due to Shor's algorithm. Grover's search algorithm is another quantum algorithm that could significantly impact current cryptography, offering a quantum advantage in searching unsorted data. Therefore, with the advancement of quantum computers, it is crucial to analyze potential quantum threats. While many works focus on Grover's attacks in symmetric-key cryptography, there has been no research on the practical implementation of the quantum approach for lattice-based cryptography. Currently, only theoretical analyses apply Grover's search to various sieve algorithms. In this work, for the first time, we present a quantum NV Sieve implementation to solve SVP, posing a threat to lattice-based cryptography. Additionally, we implement an extended version of the quantum NV Sieve that handles larger dimensions and ranks of the lattice vector. Our extended implementation could be instrumental in pushing the upper limit of solvable SVP instances, which is currently a vital benchmark. Lastly, we estimate the quantum resources required for each specific implementation and for the application of Grover's search. In conclusion, our research lays the groundwork for the quantum NV Sieve to challenge lattice-based cryptography. In the future, we aim to conduct various experiments concerning the extended implementation and Grover's search.
Last updated:  2024-06-24
Fast Blind Rotation for Bootstrapping FHEs
Binwu Xiang, Jiang Zhang, Yi Deng, Yiran Dai, and Dengguo Feng
Blind rotation is one of the key techniques to construct fully homomorphic encryption with the best known bootstrapping algorithms running in less than one second. Currently, the two main approaches for realizing blind rotation, namely AP and GINX, were first introduced by Alperin-Sheriff and Peikert (CRYPTO 2014) and by Gama, Izabachène, Nguyen and Xie (EUROCRYPT 2016), respectively. In this paper, we propose a new blind rotation algorithm based on a GSW-like encryption from the NTRU assumption. Our algorithm has performance asymptotically independent of the key distributions, and outperforms AP and GINX in both evaluation key size and computational efficiency (especially for large key distributions). Using our blind rotation algorithm as a building block, we present new bootstrapping algorithms for both LWE and RLWE ciphertexts. We implement our bootstrapping algorithm for LWE ciphertexts, and compare its actual performance with two bootstrapping algorithms, namely FHEW/AP by Ducas and Micciancio (EUROCRYPT 2015) and TFHE/GINX by Chillotti, Gama, Georgieva and Izabachène (Journal of Cryptology 2020), as implemented in the OpenFHE library. For parameters with a ternary key distribution at 128-bit security, our bootstrapping only needs to store an evaluation key of size 18.65 MB for blind rotation, which is about 89.8 times smaller than FHEW/AP and 2.9 times smaller than TFHE/GINX. Moreover, our bootstrapping can be done in 112 ms on a laptop, which is about 3.2 times faster than FHEW/AP and 2.1 times faster than TFHE/GINX. Further improvements are available for large key distributions such as Gaussian distributions.
Last updated:  2023-10-17
Formal Analysis of Non-profiled Deep-learning Based Side-channel Attacks
Akira Ito, Rei Ueno, Rikuma Tanaka, and Naofumi Homma
This paper formally analyzes two major non-profiled deep-learning-based side-channel attacks (DL-SCAs): differential deep-learning analysis (DDLA) by Timon and collision DL-SCA by Staib and Moradi. These DL-SCAs leverage supervised learning in non-profiled scenarios. Although some intuitive descriptions of these DL-SCAs exist, formal analyses have rarely been conducted, which makes it unclear why and when the attacks succeed and how they can be improved. In this paper, we provide the first information-theoretical analysis of DDLA. We reveal its relation to mutual information analysis (MIA), and then present three theorems stating limitations and impossibility results for DDLA. Subsequently, we provide the first probability-theoretical analysis of collision DL-SCA. After formalizing it and proposing our own distinguisher for collision DL-SCA, we prove the distinguisher's optimality: the collision DL-SCA using our distinguisher theoretically maximizes the success rate if the neural network (NN) training is completely successful (namely, if the NN perfectly imitates the true conditional probability distribution). Accordingly, we propose an improvement of collision DL-SCA based on a dedicated NN architecture and a full-key recovery methodology using multiple neural distinguishers. Finally, we experimentally evaluate non-profiled (DL-)SCAs using a newly created dataset based on a publicly available first-order masked AES implementation. The existing public datasets of side-channel traces are insufficient to evaluate collision DL-SCAs due to a lack of traces for different key values. Our dataset enables a comprehensive evaluation of collision (DL-)SCAs, which clarifies the current state of non-profiled (DL-)SCAs.
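A minimal DDLA-style loop can be simulated in a few lines; the sketch below uses a 4-bit S-box, simulated Hamming-weight leakage, and training accuracy as the distinguishing metric, which is only a stand-in for the paper's formal treatment:

```python
# DDLA-style key ranking on simulated traces (illustrative: PRESENT's 4-bit
# S-box instead of AES, one leakage sample per trace).
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]
true_key, n = 7, 2000
pts = rng.integers(0, 16, n)
hw = lambda x: bin(x).count("1")
# one sample per trace: Hamming-weight leakage plus Gaussian noise
traces = np.array([[hw(SBOX[p ^ true_key]) + rng.normal(0, 1)] for p in pts])

scores = []
for guess in range(16):
    labels = np.array([SBOX[p ^ guess] & 1 for p in pts])  # hypothetical LSB labels
    net = MLPClassifier((8,), max_iter=300, random_state=0).fit(traces, labels)
    scores.append(net.score(traces, labels))  # correct guess trains best
print("best guess:", int(np.argmax(scores)), "(true key:", true_key, ")")
```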
Last updated:  2024-03-04
Generalized Implicit Factorization Problem
Yansong Feng, Abderrahmane Nitaj, and Yanbin Pan
The Implicit Factorization Problem (IFP) was first introduced by May and Ritzenhofen at PKC'09. It concerns the factorization of two RSA moduli $N_1=p_1q_1$ and $N_2=p_2q_2$, where $p_1$ and $p_2$ share a certain number of consecutive least significant bits. Since its introduction, many different variants of IFP have been considered, such as the cases where $p_1$ and $p_2$ share most significant bits or middle bits at the same positions. In this paper, we consider a more general case of IFP, in which the shared consecutive bits can be located at any positions in each prime, not necessarily the same positions as before. We propose a lattice-based algorithm to solve this problem under specific conditions, and also provide some experimental results to verify our analysis.
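A toy instance of this generalized setting is easy to generate; the sketch below (illustrative bit sizes, far from RSA-grade) plants the same 40-bit block at different positions in the two primes:

```python
# Toy Generalized-IFP instance: p1 and p2 share a 40-bit consecutive block
# planted at *different* bit positions (illustrative sizes only).
import random
from sympy import isprime, randprime

random.seed(42)
BLK = 40
shared = random.getrandbits(BLK)

def prime_with_block(block, shift, bits=128):
    mask = ((1 << BLK) - 1) << shift
    while True:
        p = random.getrandbits(bits) | (1 << (bits - 1)) | 1  # fix top/low bits
        p = (p & ~mask) | (block << shift)                    # plant shared block
        if isprime(p):
            return p

p1 = prime_with_block(shared, shift=16)
p2 = prime_with_block(shared, shift=48)   # same block, different position
q1 = randprime(2**127, 2**128)
q2 = randprime(2**127, 2**128)
N1, N2 = p1 * q1, p2 * q2                 # the attacker sees only (N1, N2)
M = (1 << BLK) - 1
assert (p1 >> 16) & M == (p2 >> 48) & M == shared
```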
Last updated:  2023-10-10
LLM for SoC Security: A Paradigm Shift
Dipayan Saha, Shams Tarek, Katayoon Yahyaei, Sujan Kumar Saha, Jingbo Zhou, Mark Tehranipoor, and Farimah Farahmandi
As the ubiquity and complexity of system-on-chip (SoC) designs increase across electronic devices, the task of incorporating security into an SoC design flow poses significant challenges. Existing security solutions are inadequate to provide effective verification of modern SoC designs due to their limitations in scalability, comprehensiveness, and adaptability. On the other hand, Large Language Models (LLMs) are celebrated for their remarkable success in natural language understanding, advanced reasoning, and program synthesis tasks. Recognizing an opportunity, our research delves into leveraging the emergent capabilities of Generative Pre-trained Transformers (GPTs) to address the existing gaps in SoC security, aiming for a more efficient, scalable, and adaptable methodology. By integrating LLMs into the SoC security verification paradigm, we open a new frontier of possibilities and challenges to ensure the security of increasingly complex SoCs. This paper offers an in-depth analysis of existing works, showcases practical case studies, demonstrates comprehensive experiments, and provides useful guidelines. We also present the achievements, prospects, and challenges of employing LLMs in different SoC security verification tasks.
Last updated:  2023-10-10
Check Alternating Patterns: A Physical Zero-Knowledge Proof for Moon-or-Sun
Samuel Hand, Alexander Koch, Pascal Lafourcade, Daiki Miyahara, and Léo Robert
A zero-knowledge proof (ZKP) allows a party to prove to another party that it knows some secret, such as the solution to a difficult puzzle, without revealing any information about it. We propose a physical zero-knowledge proof using only a deck of playing cards for solutions to a pencil puzzle called \emph{Moon-or-Sun}. In this puzzle, one is given a grid of cells on which rooms, marked by thick black lines surrounding a connected set of cells, may contain a number of cells with a moon or a sun symbol. The goal is to find a loop passing through all rooms exactly once, which in each room passes through either all cells with a moon or all cells with a sun symbol. Finally, whenever the loop passes from one room to another, it must go through all cells with a moon if in the previous room it passed through all cells with a sun, and vice versa. This last rule constitutes the main challenge for finding a physical zero-knowledge proof for this puzzle, as it must be verified without giving away through which borders the loop enters or leaves a given room. We design a card-based zero-knowledge proof of knowledge protocol for Moon-or-Sun solutions, together with an analysis of its properties. Our technique of verifying the alternation of a pattern along a non-disclosed path might be of independent interest for similar puzzles.
Last updated:  2024-03-05
AprèsSQI: Extra Fast Verification for SQIsign Using Extension-Field Signing
Maria Corte-Real Santos, Jonathan Komada Eriksen, Michael Meyer, and Krijn Reijnders
We optimise the verification of the SQIsign signature scheme. By using field extensions in the signing procedure, we are able to significantly increase the amount of available rational $2$-power torsion in verification, which yields a significant speed-up. This, moreover, enables several other speed-ups on the level of curve arithmetic. We show that the synergy between these high-level and low-level improvements makes verification $2.07$ times faster, or up to $3.41$ times faster when using size-speed trade-offs, compared to the state of the art, without majorly degrading the performance of signing.
Last updated:  2023-10-17
StaTI: Protecting against Fault Attacks Using Stable Threshold Implementations
Siemen Dhooghe, Artemii Ovchinnikov, and Dilara Toprakhisar
Fault attacks impose a serious threat to the practical implementations of cryptographic algorithms. Statistical Ineffective Fault Attacks (SIFA), exploiting the dependency between the secret data and the fault propagation, overcame many of the known countermeasures. Several countermeasures were later proposed to tackle this attack using error-detection methods. However, the efficiency of these countermeasures, in part governed by the number of error checks, remains a challenge. In this work, we propose a fault countermeasure, StaTI, based on threshold implementations and linear encoding techniques. The proposed countermeasure protects implementations of cryptographic algorithms against both side-channel and fault adversaries in a non-combined attack setting. We present a new composable notion, stability, to protect a threshold implementation against a formal gate/register-faulting adversary. Stability ensures fault propagation, so that a single error check of the output suffices. To illustrate the stability notion, we first provide stable encodings of the XOR and AND gates. Then, we present techniques to encode threshold implementations of S-boxes, and provide stable encodings of some quadratic S-boxes together with their security and performance evaluation. Additionally, we propose general encoding techniques to transform a threshold implementation of any function (e.g., non-injective functions) into a stable one. We then provide an encoding technique for symmetric primitives which encodes state elements together, significantly reducing the encoded state size. Finally, we used StaTI to implement a secure Keccak on FPGA and report on its efficiency.
Last updated:  2023-10-10
Revisit Two Memoryless State-Recovery Cryptanalysis Methods on A5/1
Yanbin Xu, Yonglin Hao, and Mingxing Wang
At ASIACRYPT 2019, Zhang proposed a near collision attack on A5/1, claiming to recover the 64-bit A5/1 state with a time complexity around $2^{32}$ cipher ticks and negligible memory requirements. Soon after its proposal, Zhang's near collision attack was severely challenged by Derbez et al., who claimed that Zhang's attack cannot have a time complexity lower than Golic's memoryless guess-and-determine attack dating back to EUROCRYPT 1997. In this paper, we study both the guess-and-determine and the near collision attacks for recovering A5/1 states with negligible memory complexity. Firstly, we propose a new guessing technique called the \emph{move guessing technique} that can construct linear equation filters in a more efficient manner. This technique can be applied to both guess-and-determine and collision attacks for efficiency improvements. Secondly, we take the filtering strength of the linear equation systems into account in the complexity analysis. This filtering strength is evaluated with practical experiments, making the complexity estimates more convincing. Based on these new techniques, we are able to give two new guess-and-determine attacks on A5/1: the first recovers the internal state $\vec{s}^0$ with time complexity $2^{43.92}$; the second recovers a different state $\vec{s}^1$ with complexity $2^{43.25}$. We also revisit Golic's guess-and-determine attack and Zhang's near collision attacks. According to our detailed analysis, the complexity of Golic's $\vec{s}^1$ recovery attack is no lower than $2^{46.04}$, higher than the previously believed $2^{43}$. On the other hand, Zhang's near collision attack recovers $\vec{s}^0$ with time complexity $2^{53.19}$; this complexity can be further lowered to $2^{50.78}$ with our move guessing technique.
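For readers who want to experiment with such state-recovery attacks, the sketch below implements the A5/1 keystream core from its standard published description (register lengths 19/22/23 with majority clocking; one common bit-ordering convention is assumed). Guessed partial states can be clocked forward and filtered against observed keystream:

```python
# A5/1 keystream core (standard published lengths, taps, and clocking bits).
def majority(a, b, c):
    return (a & b) | (a & c) | (b & c)

def clock(regs):
    r1, r2, r3 = regs
    m = majority((r1 >> 8) & 1, (r2 >> 10) & 1, (r3 >> 10) & 1)
    if ((r1 >> 8) & 1) == m:    # R1: taps 13,16,17,18, feedback into bit 0
        fb = ((r1 >> 13) ^ (r1 >> 16) ^ (r1 >> 17) ^ (r1 >> 18)) & 1
        r1 = ((r1 << 1) | fb) & (2**19 - 1)
    if ((r2 >> 10) & 1) == m:   # R2: taps 20,21
        fb = ((r2 >> 20) ^ (r2 >> 21)) & 1
        r2 = ((r2 << 1) | fb) & (2**22 - 1)
    if ((r3 >> 10) & 1) == m:   # R3: taps 7,20,21,22
        fb = ((r3 >> 7) ^ (r3 >> 20) ^ (r3 >> 21) ^ (r3 >> 22)) & 1
        r3 = ((r3 << 1) | fb) & (2**23 - 1)
    out = ((r1 >> 18) ^ (r2 >> 21) ^ (r3 >> 22)) & 1   # XOR of the three MSBs
    return (r1, r2, r3), out

state = (0b1010101010101010101, 0b0110011001100110011001, 0b10010010010010010010010)
ks = []
for _ in range(8):
    state, bit = clock(state)
    ks.append(bit)
print(ks)
```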
Last updated:  2023-11-03
Better Safe than Sorry: Recovering after Adversarial Majority
Srivatsan Sridhar, Dionysis Zindros, and David Tse
The security of blockchain protocols is a combination of two properties: safety and liveness. It is well known that no blockchain protocol can provide both to sleepy (intermittently online) clients under adversarial majority. However, safety is more critical in that a single safety violation can cause users to lose money. At the same time, liveness must not be lost forever. We show that, in a synchronous network, it is possible to maintain safety for all clients even during adversarial majority, and recover liveness after honest majority is restored. Our solution takes the form of a recovery gadget that can be applied to any protocol with certificates (such as HotStuff, Streamlet, Tendermint, and their variants).
Last updated:  2023-10-10
Polynomial IOPs for Memory Consistency Checks in Zero-Knowledge Virtual Machines
Yuncong Zhang, Shi-Feng Sun, Ren Zhang, and Dawu Gu
Zero-Knowledge Virtual Machines (ZKVMs) have gained traction in recent years due to their potential applications in a variety of areas, particularly blockchain ecosystems. Despite tremendous progress on ZKVMs in the industry, no formal definitions or security proofs have been established in the literature. Due to this lack of formalization, existing protocols exhibit significant discrepancies in terms of problem definitions and performance metrics, making it difficult to analyze and compare these advancements, or to trust the security of the increasingly complex ZKVM implementations. In this work, we focus on random-access memory, an influential and expensive component of ZKVMs. Specifically, we investigate the state-of-the-art protocols for validating the correct functioning of memory, which we refer to as the \emph{memory consistency checks}. Isolating these checks from the rest of the system allows us to formalize their definition and security notion. Furthermore, we summarize the state-of-the-art constructions using the Polynomial IOP model and formally prove their security. Observing that the bottleneck of existing designs lies in sorting the entire memory trace, we break away from this paradigm and propose a novel memory consistency check, dubbed $\mathsf{Permem}$. $\mathsf{Permem}$ bypasses this bottleneck by introducing a technique called the address cycle method, which requires fewer building blocks and---after instantiating the building blocks with state-of-the-art constructions---fewer online polynomial oracles and evaluation queries. In addition, we propose $\mathsf{gcq}$, a new construction for the lookup argument---a key building block of the memory consistency check, which costs fewer online polynomial oracles than the state-of-the-art construction $\mathsf{cq}$.
Last updated:  2024-01-31
Cornucopia: Distributed randomness beacons at scale
Miranda Christ, Kevin Choi, and Joseph Bonneau
We propose Cornucopia, a distributed randomness beacon protocol combining accumulators and verifiable delay functions. Cornucopia extends the Unicorn protocol of Lenstra and Wesolowski, utilizing an accumulator to enable efficient verification by each participant that their randomness contribution has been included in the beacon output. The output is unpredictable as long as at least one participant is honest, yielding a highly scalable distributed randomness beacon with strong security properties. The security of this construction reduces to a novel property of accumulators, insertion security. We first show that not all accumulators are insertion-secure. We then prove that common constructions (Merkle trees, RSA accumulators, and bilinear accumulators) are either naturally insertion-secure or can be made so with trivial modifications. Finally, we give two generic constructions for insertion-secure accumulators, from universal accumulators and vector commitments respectively.
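The overall data flow can be sketched in a few lines; below, a Merkle tree serves as the accumulator and a plain hash chain is only a placeholder for the VDF (a real deployment needs a VDF with a succinct proof):

```python
# Unicorn-style beacon with a Merkle-tree accumulator (sketch). Each user can
# verify inclusion of their contribution with a log-size Merkle proof.
import hashlib

H = lambda b: hashlib.sha256(b).digest()

def merkle_root(leaves):
    layer = [H(x) for x in leaves]
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])                 # duplicate odd leaf
        layer = [H(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
    return layer[0]

def slow_eval(seed: bytes, t: int = 100_000) -> bytes:
    for _ in range(t):          # placeholder for a real verifiable delay function
        seed = H(seed)
    return seed

contributions = [b"alice-randomness", b"bob-randomness", b"carol-randomness"]
root = merkle_root(contributions)   # accumulator digest over all contributions
beacon = slow_eval(root)            # unpredictable if any one contributor is honest
print(beacon.hex())
```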
Last updated:  2024-09-24
Adaptively Secure BLS Threshold Signatures from DDH and co-CDH
Sourav Das and Ling Ren
Threshold signatures are one of the most important cryptographic primitives in distributed systems. A popular choice of threshold signature scheme is the BLS threshold signature introduced by Boldyreva (PKC'03). Some attractive properties of Boldyreva's threshold signature are that the signatures are unique and short, the signing process is non-interactive, and the verification process is identical to that of non-threshold BLS. These properties have resulted in its practical adoption in several decentralized systems. However, despite its popularity and wide adoption, up until recently, the Boldyreva scheme had been proven secure only against a static adversary. Very recently, Bacho and Loss (CCS'22) presented the first proof of adaptive security for the Boldyreva scheme, but they had to rely on strong and non-standard assumptions such as the hardness of one-more discrete log (OMDL) and the Algebraic Group Model (AGM). In this paper, we present the first adaptively secure threshold BLS signature scheme that relies on the hardness of DDH and co-CDH in asymmetric pairing groups in the Random Oracle Model (ROM). Our signature scheme also has non-interactive signing, compatibility with non-threshold BLS verification, and practical efficiency like Boldyreva's scheme. These properties make our protocol a suitable candidate for practical adoption with the added benefit of provable adaptive security.
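The share-combination step common to threshold BLS schemes can be sketched with scalars: the snippet below computes Lagrange coefficients at zero and recombines Shamir shares (in BLS, the same coefficients are applied in the exponent to partial signatures; the modulus is illustrative, not a real pairing group order):

```python
# Threshold recombination via Lagrange interpolation at 0 (sketch).
q = 2**255 - 19   # illustrative prime modulus

def lagrange_at_zero(ids, j):
    num = den = 1
    for m in ids:
        if m != j:
            num = num * (-m) % q
            den = den * (j - m) % q
    return num * pow(den, -1, q) % q

# Shamir shares of secret s with threshold t = 2 (degree-1 polynomial):
s, a1 = 1234567, 42
shares = {i: (s + a1 * i) % q for i in (1, 2, 3)}

subset = (1, 3)   # any t shares suffice
recovered = sum(lagrange_at_zero(subset, j) * shares[j] for j in subset) % q
assert recovered == s   # in BLS: sigma = prod_j sigma_j ** lambda_j instead
```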
Last updated:  2024-11-19
Doubly Efficient Batched Private Information Retrieval
Xiuquan Ding, Giulio Malavolta, and Tianwei Zhang
Private information retrieval (PIR) allows a client to read data from a server, without revealing which information they are interested in. A PIR is doubly efficient if the server runtime is, after a one-time pre-processing, sublinear in the database size. A recent breakthrough result from Lin, Mook, and Wichs [STOC’23] proposed the first doubly efficient PIR with (online) server computation poly-logarithmic in the size of the database, assuming the hardness of the standard Ring-LWE problem. In this work, we consider the problem of doubly efficient batched PIR (DEBPIR), where the client wishes to download multiple entries. This problem arises naturally in many practical applications of PIR, or when the database contains large entries. Our main result is a construction of DEBPIR where the amortized communication and server computation overhead is $\tilde{O}(1)$, from the Ring-LWE problem. This represents an exponential improvement compared with known constructions, and it is optimal up to poly-logarithmic factors in the security parameter. Interestingly, the server’s online operations are entirely combinatorial and all algebraic computations are done in the pre-processing or delegated to the client.
Last updated:  2023-10-09
Evaluating GPT-4’s Proficiency in Addressing Cryptography Examinations
Vasily Mikhalev, Nils Kopal, and Bernhard Esslinger
In the rapidly advancing domain of artificial intelligence, ChatGPT, powered by the GPT-4 model, has emerged as a state-of-the-art interactive agent, exhibiting substantial capabilities across various domains. This paper aims to assess the efficacy of GPT-4 in addressing and solving problems found within cryptographic examinations. We devised a multi-faceted methodology, presenting the model with a series of cryptographic questions of varying complexities derived from real academic examinations. Our evaluation encompasses both classical and modern cryptographic challenges, focusing on the model's ability to understand, interpret, and generate correct solutions while discerning its limitations. The model was challenged with a spectrum of cryptographic tasks, earning 201 out of 208 points by solving fundamental queries inspired by an oral exam, 80.5 out of 90 points on a written Crypto 1 exam, and 287 out of 385 points on advanced exercises from the Crypto 2 course. The results demonstrate that while GPT-4 shows significant promise in grasping fundamental cryptographic concepts and techniques, certain intricate problems necessitate domain-specific knowledge that may sometimes lie beyond the model's general training. Insights from this study can provide educators, researchers, and examiners with a deeper understanding of how cutting-edge AI models can be both an asset and a potential concern in academic settings related to cryptology. To enhance the clarity and coherence of our work, we utilized ChatGPT-4 to help us in formulating sentences in this paper.
Last updated:  2023-10-09
A Thorough Evaluation of RAMBAM
Daniel Lammers, Amir Moradi, Nicolai Müller, and Aein Rezaei Shahmirzadi
The application of masking, widely regarded as the most robust and reliable countermeasure against Side-Channel Analysis (SCA) attacks, has been the subject of extensive research across a range of cryptographic algorithms, especially AES. However, the implementation cost associated with applying such a countermeasure can be significant and in some scenarios even infeasible, due to considerations such as area and latency overheads, as well as the need for fresh randomness to ensure the security properties of the resulting design. Most of these overheads originate from the ability to maintain security in the presence of physical defaults such as glitches and transitions. Among several schemes with a trade-off between such overheads, RAMBAM, presented at CHES 2022, offers ultra-low latency in terms of the number of clock cycles. It is dedicated to AES and utilizes redundant representations of the finite field elements to enhance protection against both passive and active physical attacks. In this paper, we take a deeper look at this technique and provide a comprehensive analysis. The original authors reported that the number of traces required to mount a successful attack increases exponentially with the size of the redundant representation. We, however, examine their scheme from a theoretical point of view. More specifically, we investigate the relationship between RAMBAM and well-established Boolean masking and, based on this, prove the insecurity of RAMBAM. Through examples and use cases, we assess the leakage of the scheme in practice and use verification tools to demonstrate that RAMBAM does not necessarily offer adequate protection against SCA attacks, either in theory or in practice. Confirmed by real-world experiments, we additionally highlight that -- if no dedicated facility is incorporated -- RAMBAM designs are susceptible to fault-injection attacks, despite providing some degree of protection against a sophisticated attack vector, i.e., SIFA.
Last updated:  2024-05-08
Signature-Free Atomic Broadcast with Optimal $O(n^2)$ Messages and $O(1)$ Expected Time
Xiao Sui, Xin Wang, and Sisi Duan
Byzantine atomic broadcast (ABC) is at the heart of permissioned blockchains and various multi-party computation protocols. We resolve a long-standing open problem in ABC, presenting the first information-theoretic (IT) and signature-free asynchronous ABC protocol that achieves optimal $O(n^2)$ messages and $O(1)$ expected time. Our ABC protocol adopts a new design, relying on a reduction from---perhaps surprisingly---a somewhat neglected primitive called multivalued Byzantine agreement (MBA).
Last updated:  2024-02-17
Cheater Identification on a Budget: MPC with Identifiable Abort from Pairwise MACs
Carsten Baum, Nikolas Melissaris, Rahul Rachuri, and Peter Scholl
Cheater identification in secure multi-party computation (MPC) allows the honest parties to agree upon the identity of a cheating party, in case the protocol aborts. In the context of a dishonest majority, this becomes especially critical, as it serves to thwart denial-of-service attacks and mitigate known impossibility results on ensuring fairness and guaranteed output delivery. In this work, we present a new, lightweight approach to achieving identifiable abort in dishonest majority MPC. We avoid all of the heavy machinery used in previous works, instead relying on a careful combination of lightweight detection mechanisms and techniques from state-of-the-art protocols secure with (non-identifiable) abort. At the core of our construction is a homomorphic, multi-receiver commitment scheme secure with identifiable abort. This commitment scheme can be constructed from cheap vector oblivious linear evaluation protocols based on learning parity with noise. To support cheater identification, we design a general compilation technique, similar to a compiler of Ishai et al. (Crypto 2014), but avoid its requirement for adaptive security of the underlying protocol. Instead, we rely on a different (and seemingly easier to achieve) property we call online extractability, which may be of independent interest. Our MPC protocol can be viewed as a version of the BDOZ MPC scheme (Bendlin et al., Eurocrypt 2011) based on pairwise information-theoretic MACs, enhanced to support cheater identification and a highly efficient preprocessing phase, essentially as efficient as the non-identifiable protocol of Le Mans (Rachuri & Scholl, Crypto 2022).
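A pairwise information-theoretic MAC of the BDOZ flavor is simple to sketch; the toy below (illustrative field, no protocol context) shows why an opened value can be checked by the key holder, which is the basis for blaming a cheater:

```python
# Pairwise information-theoretic MAC, BDOZ-style (toy sketch over a prime field).
# Party A holds (x, tag); party B holds the key (alpha, beta) and can verify
# any value A later opens.
import secrets

p = 2**61 - 1   # Mersenne prime field (illustrative)

def keygen():
    return secrets.randbelow(p), secrets.randbelow(p)     # (alpha, beta)

def mac(x, key):
    alpha, beta = key
    return (alpha * x + beta) % p

def check(x, tag, key):
    return mac(x, key) == tag

key = keygen()
x = 31337
tag = mac(x, key)
assert check(x, tag, key)
assert not check(x + 1, tag, key)   # forging a valid tag succeeds w.p. ~1/p
```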
Last updated:  2024-06-07
Further Improvements of the Estimation of Key Enumeration with Applications to Solving LWE
Alessandro Budroni and Erik Mårtensson
In post-quantum cryptography, Learning With Errors (LWE) is one of the dominant underlying mathematical problems. The dual attack is one of the main strategies for solving the LWE problem, and it has recently gathered significant attention within the research community. The attack strategy consists of a lattice reduction part and a distinguishing part. The latter includes an enumeration subroutine over a certain number of positions of the secret key. Our contribution consists of giving a precise and efficient approach for calculating the expected complexity of such an enumeration procedure, which was missing in the literature. This allows us to decrease the estimated cost of the whole dual attack, both classically and quantumly, on well-known protocols such as Kyber, Saber, and TFHE. In addition, we explore different enumeration strategies to investigate some potential further improvements. As our method of calculating the expected cost of enumeration is pretty general, it might be of independent interest in other areas of cryptanalysis or even in different research areas.
Last updated:  2023-10-09
Performance Evaluation of Machine Learning Algorithms for Intrusion Detection System
Sudhanshu Sekhar Tripathy and Bichitrananda Behera
The escalation of safety hazards and the hijacking of digital networks are among the most pressing challenges that must be addressed today. Numerous security procedures have been set up to track and recognize illicit activity on a network's infrastructure, and intrusion detection systems (IDSs) are the principal way to detect and resist intrusions on internet connections and digital technologies. To classify network traffic as normal or anomalous, Machine Learning (ML) classifiers are increasingly utilized, and an IDS with machine learning increases the accuracy with which security attacks are detected. This paper focuses on the analysis of intrusion detection systems using ML techniques. IDSs utilizing ML techniques are efficient and precise at identifying network attacks; in data with large dimensional spaces, however, the efficacy of these systems degrades. Correspondingly, it is essential to apply a feasible feature-reduction technique capable of removing characteristics that have little effect on the classification process. In this paper, we analyze the KDD CUP-'99' intrusion detection dataset used for training and validating ML models. We then implement ML classifiers, including Logistic Regression, Decision Tree, K-Nearest Neighbour, Naïve Bayes, Bernoulli Naïve Bayes, Multinomial Naïve Bayes, XG-Boost, Ada-Boost, Random Forest, SVM, the Rocchio classifier, Ridge, the Passive-Aggressive classifier, ANN, and Perceptron (PPN); the optimal classifiers are determined by comparing the results of Stochastic Gradient Descent and back-propagation neural networks for IDS. Conventional classification metrics, such as accuracy, precision, recall, and the F1-measure, are used to evaluate the performance of the ML classification algorithms.
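A sketch of the described evaluation loop using scikit-learn follows; `kdd_features.csv` is a placeholder path for a numerically encoded copy of the KDD data, and only three of the listed classifiers are shown:

```python
# Sketch of the IDS evaluation loop described above (placeholder data path).
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

df = pd.read_csv("kdd_features.csv")            # placeholder: preprocessed KDD CUP-'99'
X, y = df.drop(columns="label"), df["label"]    # label: normal vs. anomalous
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)
X_tr, X_te = scaler.transform(X_tr), scaler.transform(X_te)

models = {
    "LogReg": LogisticRegression(max_iter=1000),
    "DecisionTree": DecisionTreeClassifier(),
    "RandomForest": RandomForestClassifier(n_estimators=100),
}
for name, model in models.items():
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name,
          f"acc={accuracy_score(y_te, pred):.3f}",
          f"prec={precision_score(y_te, pred, average='weighted'):.3f}",
          f"rec={recall_score(y_te, pred, average='weighted'):.3f}",
          f"f1={f1_score(y_te, pred, average='weighted'):.3f}")
```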
Last updated:  2024-01-16
Exploiting Small-Norm Polynomial Multiplication with Physical Attacks: Application to CRYSTALS-Dilithium
Olivier Bronchain, Melissa Azouaoui, Mohamed ElGhamrawy, Joost Renes, and Tobias Schneider
We present a set of physical profiled attacks against CRYSTALS-Dilithium that accumulate noisy knowledge of secret keys over multiple signatures, finally leading to a full key-recovery attack. The methodology is composed of two steps. The first step consists of observing or inserting a bias in the posterior distribution of sensitive variables. The second step consists of an information-processing phase based on belief propagation, which allows effectively exploiting that bias. The proposed concrete attacks rely on side-channel information, fault injection, or a combination of the two. Interestingly, the adversary benefits from knowledge of the released signature, but is not dependent on it. We show that the combination of a physical attack with the binary knowledge of acceptance or rejection of a signature also leads to exploitable information on the secret key. Finally, we demonstrate that this approach is also effective against shuffled implementations of CRYSTALS-Dilithium.
Last updated:  2023-10-09
Arithmetic PCA for Encrypted Data
Jung Hee Cheon, Hyeongmin Choe, Saebyul Jung, Duhyeong Kim, Dah Hoon Lee, and Jai Hyun Park
Reducing the size of high-dimensional data is a critical task in machine learning (ML) that often involves using principal component analysis (PCA). In privacy-preserving ML, data confidentiality is of utmost importance, and reducing data size is a crucial way to cut overall costs. This work focuses on minimizing the number of normalization processes in the PCA algorithm, which is a costly procedure in encrypted PCA. By modifying Krasulina's algorithm, we eliminate all non-polynomial operations except for a single delayed normalization at the end. Our PCA algorithm demonstrated similar performance to conventional PCA algorithms in face recognition applications. We also implemented it using the CKKS (Cheon-Kim-Kim-Song) homomorphic encryption scheme and obtained the first 6 principal components of a 128$\times$128 real matrix in 7.85 minutes using 8 threads.
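For reference, a plaintext version of Krasulina's stochastic update with a single final normalization can look as follows (a sketch only: the in-loop division by $\|w\|^2$ shown here is precisely the non-polynomial step the encrypted variant must avoid, and convergence depends on the step size):

```python
# Plaintext Krasulina-style top-eigenvector iteration (sketch).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((500, 8)) * np.array([5, 2, 1, 1, 1, 1, 1, 1])
A -= A.mean(axis=0)                      # center the data

w = rng.standard_normal(8)
t = 0
for _ in range(5):                       # a few stochastic passes
    for x in A:
        t += 1
        lr = 1.0 / (50 + t)              # decaying step size
        proj = x @ w
        # the division by ||w||^2 is the costly non-polynomial operation
        # that an encrypted (CKKS) evaluation would need to eliminate
        w += lr * (proj * x - (proj ** 2 / (w @ w)) * w)

w /= np.linalg.norm(w)                   # single, delayed normalization
top = np.linalg.eigh(A.T @ A)[1][:, -1]  # reference top principal component
print("cosine similarity to true top PC:", abs(w @ top))
```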
Last updated:  2025-04-14
Switching the Top Slice of the Sandwich with Extra Filling Yields a Stronger Boomerang for NLFSR-based Block Ciphers
Amit Jana, Mostafizar Rahman, Prathamesh Ram, Dhiman Saha, and Goutam Paul
The Boomerang attack was one of the first attempts to visualize a cipher ($E$) as a composition of two sub-ciphers ($E_1\circ E_0$) in order to devise and exploit two high-probability (say $p,q$) shorter trails instead of relying on a single low-probability (say $s$) longer trail for differential cryptanalysis. The attack generally works whenever $p^2 \cdot q^2 > s$. However, it was later succeeded by the so-called ``sandwich attack'', which splits the cipher into three parts $E'_1\circ E_m \circ E'_0$, adding an additional middle layer ($E_m$) and yielding a distinguishing probability of $p^2\cdot r\cdot q^2$. It is primarily the generalization of a body of research in this direction that investigates the so-called switching activity and captures the dependencies and potential incompatibilities of the layers that the middle layer separates. This work revisits the philosophy of the sandwich attack over multiple rounds for NLFSR-based block ciphers and introduces a new method to find high-probability boomerang distinguishers. The approach formalizes boomerang attacks using only ladder and AND switches. The cipher is treated as $E = E_1 \circ E_m$, a specialized form of the sandwich attack which we call the ``open-sandwich attack''. The distinguishing probability for this attack configuration is $r \cdot q^2$. Using this approach, the study identifies a deterministic boomerang distinguisher for the keyed permutation of the TinyJambu cipher over 320 rounds. Additionally, a 640-round boomerang with probability $2^{-22}$ is presented with a 95% success rate. In the related-key setting, we unveil full-round boomerangs with probabilities of $2^{-19}$, $2^{-18}$, and $2^{-12}$ for all three variants, demonstrating a 99% success rate. Similarly, for Katan-32, a more effective related-key boomerang spanning 140 rounds with probability $2^{-15}$ is uncovered with a 70% success rate. Further, in the single-key setting, an 84-round boomerang with probability $2^{-30}$ is found with a success rate of 60%. This research deepens the understanding of boomerang attacks, enhancing the toolkit for cryptanalysts to develop efficient and impactful attacks on NLFSR-based block ciphers.
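A generic boomerang quartet test is easy to express in code; the sketch below (a toy 16-bit cipher, not TinyJambu or Katan) estimates the probability that decrypting $C_1\oplus\delta$ and $C_2\oplus\delta$ returns a plaintext pair with difference $\alpha$:

```python
# Empirical boomerang quartet test on a toy keyed permutation (sketch).
import random

random.seed(0)
MASK = 0xFFFF

def toy_enc(x, k, rounds=8):
    for i in range(rounds):
        x = (x + k[i % len(k)]) & MASK
        x = ((x << 3) | (x >> 13)) & MASK   # 16-bit rotate left by 3
        x ^= 0x9E37
    return x

def toy_dec(x, k, rounds=8):
    for i in reversed(range(rounds)):
        x ^= 0x9E37
        x = ((x >> 3) | (x << 13)) & MASK   # rotate right by 3
        x = (x - k[i % len(k)]) & MASK
    return x

def boomerang_prob(alpha, delta, trials=1 << 14):
    k = [random.getrandbits(16) for _ in range(4)]
    hits = 0
    for _ in range(trials):
        p1 = random.getrandbits(16)
        c1, c2 = toy_enc(p1, k), toy_enc(p1 ^ alpha, k)
        p3, p4 = toy_dec(c1 ^ delta, k), toy_dec(c2 ^ delta, k)
        hits += (p3 ^ p4) == alpha          # returning quartet
    return hits / trials

print(boomerang_prob(alpha=0x0001, delta=0x8000))
```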
Last updated:  2024-05-28
Don’t Forget Pairing-Friendly Curves with Odd Prime Embedding Degrees
Yu Dai, Fangguo Zhang, and Chang-an Zhao
Pairing-friendly curves with odd prime embedding degrees at the 128-bit security level, such as BW13-310 and BW19-286, have sparked interest in the field of public-key cryptography owing to the small sizes of their prime fields. However, compared to mainstream pairing-friendly curves at the same security level, i.e., BN446 and BLS12-446, the performance of pairing computations on BW13-310 and BW19-286 is usually considered inefficient. In this paper we investigate high-performance software implementations of pairing computation on BW13-310 and the corresponding building blocks used in pairing-based protocols, including hashing, group exponentiations and membership testings. Firstly, we propose efficient explicit formulas for pairing computation on this curve. Moreover, we also exploit state-of-the-art techniques to implement hashing in G1 and G2, group exponentiations and membership testings. In particular, for exponentiations in G2 and GT, we present new optimizations to speed up computational efficiency. Our implementation results on a 64-bit processor show that the gap in the performance of pairing computation between BW13-310 and BN446 (resp. BLS12-446) is only up to 4.9% (resp. 26%). More importantly, compared to BN446 and BLS12-446, BW13-310 is about 109.1%−227.3%, 100%−192.6%, 24.5%−108.5% and 68.2%−145.5% faster in terms of hashing to G1, exponentiations in G1, exponentiations in GT, and membership testing for GT, respectively. These results reveal that BW13-310 would be an interesting candidate in pairing-based cryptographic protocols.
Last updated:  2023-10-08
TMVP-based Polynomial Convolution for Saber and Sable on GPU using CUDA-cores and Tensor-cores
Muhammad Asfand Hafeez, Wai-Kong Lee, Angshuman Karmakar, and Seong Oun Hwang
Recently proposed lattice-based cryptography algorithms can protect IoT communication against the threat from quantum computers, but they are computationally heavy. In particular, polynomial multiplication is one of the most time-consuming operations in lattice-based cryptography. To achieve efficient implementation, the Number Theoretic Transform (NTT) algorithm is an ideal choice, but it places certain limitations on the parameters, so not all lattice-based schemes can employ it directly. Hence, alternative techniques have been proposed to accelerate polynomial multiplication in lattice-based schemes that cannot utilize the NTT directly. In this paper, we propose a parallel Toeplitz matrix-vector product (TMVP) approach to accelerate polynomial multiplication in PQC algorithms and implement it on a graphics processing unit (GPU). This is the first time a parallel TMVP version has been proposed and evaluated on different GPU cores (i.e., CUDA-cores and Tensor-cores). The effectiveness of the proposed solution is validated on the Saber (a NIST post-quantum standardization finalist) and Sable (an improved version of Saber) schemes. Experimental results show that TMVP-based polynomial convolution using CUDA-cores fails to exhibit a significant enhancement over the schoolbook CUDA-core method already proposed by Hafeez et al. (2023). However, when the TMVP technique is applied to Tensor-cores, it outperforms state-of-the-art implementations. The proposed Tensor-core approach outperforms the schoolbook Tensor-core method by up to 1.21×, and the dot-product-instructions method (Lee et al., 2022) by up to 3.63×. The proposed TMVP Tensor-core approach is also faster than the TMVP CUDA-core method by 13.76×.
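The underlying computation is easy to state on a CPU; the following numpy sketch (Saber-like but illustrative parameters) forms the negacyclic Toeplitz matrix of one operand and checks the TMVP result against a schoolbook reduction modulo $x^n+1$, which is what the GPU kernels parallelize:

```python
# Polynomial multiplication in Z_q[x]/(x^n + 1) as a Toeplitz matrix-vector
# product (TMVP) -- a CPU reference sketch, not the GPU implementation.
import numpy as np

n, q = 256, 8192

def negacyclic_toeplitz(a):
    T = np.zeros((n, n), dtype=np.int64)
    for i in range(n):
        for j in range(n):
            # coefficient of x^i contributed by a*x^j, with x^n = -1 (wrap)
            T[i, j] = a[i - j] if i >= j else -a[n + i - j]
    return T

def tmvp_mul(a, b):
    return (negacyclic_toeplitz(a) @ b) % q

rng = np.random.default_rng(1)
a, b = rng.integers(0, q, n), rng.integers(0, q, n)

# cross-check against schoolbook multiplication reduced mod x^n + 1
full = np.polymul(a[::-1], b[::-1])[::-1].astype(np.int64)
ref = (full[:n] - np.concatenate([full[n:], [0]])) % q
assert np.array_equal(tmvp_mul(a, b), ref)
```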
Last updated:  2023-10-07
A Note on ``a two-factor security authentication scheme for wireless sensor networks in IoT environments''
Zhengjun Cao and Lihua Liu
We show that the scheme [Neurocomputing, 2022 (500), 741-749] fails to keep anonymity, contrary to what is claimed. The scheme neglects the basic requirements for the bit-wise XOR operation and tries to encrypt data with this operator alone. This negligence results in some trivial equalities; an adversary can retrieve the user's identity from a single string captured on the open channel.
Last updated:  2023-10-07
ELCA: Introducing Enterprise-level Cryptographic Agility for a Post-Quantum Era
Dimitrios Sikeridis, David Ott, Sean Huntley, Shivali Sharma, Vasantha Kumar Dhanasekar, Megha Bansal, Akhilesh Kumar, Anwitha U N, Daniel Beveridge, and Sairam Veeraswamy
Given the importance of cryptography to modern security and privacy solutions, it is surprising how little attention has been given to the problem of \textit{cryptographic agility}, or frameworks enabling the transition from one cryptographic algorithm or implementation to another. In this paper, we argue that traditional notions of cryptographic agility fail to capture the challenges facing modern enterprises that will soon be forced to implement a disruptive migration from today’s public key algorithms (e.g., RSA, ECDH) to quantum-safe alternatives (e.g., CRYSTALS-KYBER). After discussing the challenge of real-world cryptographic transition at scale, we describe our work on enterprise-level cryptographic agility for secure communications based on orchestrated \textit{cryptographic providers}. Our policy-driven approach, prototyped in service mesh, provides a much-needed re-envisioning for cryptographic agility and highlights what’s missing today to enable disruptive cryptographic change at scale.
Last updated:  2024-09-25
Unclonable Commitments and Proofs
Vipul Goyal, Giulio Malavolta, and Justin Raizes
Non-malleable cryptography, proposed by Dolev, Dwork, and Naor (SICOMP '00), has numerous applications in protocol composition. In the context of proofs, it guarantees that an adversary who receives a proof cannot maul it into another valid proof. However, non-malleable cryptography (particularly in the non-interactive setting) suffers from an important limitation: An attacker can always copy the proof and resubmit it to another verifier (or even multiple verifiers). In this work, we prevent even the possibility of copying the proof as it is, by relying on quantum information. We call the resulting primitive unclonable proofs, making progress on a question posed by Aaronson. We also consider the related notion of unclonable commitments. We introduce formal definitions of these primitives that model security in various settings of interest. We also provide a near tight characterization of the conditions under which these primitives are possible, including a rough equivalence between unclonable proofs and public-key quantum money.
Last updated:  2023-10-20
DEFEND: Towards Verifiable Delay Functions from Endomorphism Rings
Knud Ahrens and Jens Zumbrägel
We present a verifiable delay function based on isogenies of supersingular elliptic curves, using the Deuring correspondence and the computation of endomorphism rings for the delay. For each input $x$ a verifiable delay function has a unique output $y$ and takes a predefined time to evaluate, even with parallel computing. Additionally, it generates a proof by which the output can efficiently be verified. In our approach the input is a path in the 2-isogeny graph and the output is the maximal order isomorphic to the endomorphism ring of the curve at the end of that path. This approach is presumably quantum-secure, does not require a trusted setup or special primes, and the verification is independent of the delay. It works completely within the isogeny setting and the computation of the proof causes no overhead. The efficient sampling of challenges, however, remains an open problem.
Last updated:  2025-02-11
Leaky McEliece: Secret Key Recovery From Highly Erroneous Side-Channel Information
Marcus Brinkmann, Chitchanok Chuengsatiansup, Alexander May, Julian Nowakowski, and Yuval Yarom
The McEliece cryptosystem is a strong contender for post-quantum schemes, including key encapsulation for confidentiality of key exchanges in network protocols. A McEliece secret key is a structured parity check matrix that is transformed via Gaussian elimination into an unstructured public key. We show that this transformation is highly critical with respect to side-channel leakage. We assume leakage of the elementary row operations during Gaussian elimination, motivated by McEliece implementations in the cryptographic libraries Classic McEliece and Botan. We propose a novel decoding algorithm to reconstruct a secret key from its public key with information from a Gaussian transformation leak. Even if the obtained side-channel leakage is extremely noisy, i.e., each bit is flipped with probability as high as τ ≈ 0.4, we succeed in recovering the secret key in a matter of minutes for all proposed (Classic) McEliece instantiations. Remarkably, for high-security McEliece parameters, our attack is more powerful in the sense that it can tolerate even larger τ. We demonstrate our attack on the constant-time reference implementation of Classic McEliece in a single-trace setting, using an STM32L592 ARM processor. Our result stresses the necessity of properly protecting highly structured code-based schemes such as McEliece against side-channel leakage.
Last updated:  2023-10-07
A Total Break of the 3WISE Digital Signature Scheme
Daniel Smith-Tone
A new batch of ``complete and proper'' digital signature scheme submissions has recently been published by NIST as part of its process for establishing post-quantum cryptographic standards. This note communicates an attack on the 3WISE digital signature scheme that the submitters did not wish to withdraw after NIST communicated it to them. While the 3WISE digital signature scheme is based on a collection of cubic maps which are naturally modeled as symmetric 3-tensors and 3-tensor rank is a difficult problem, the multivariate signature scheme is still vulnerable to MinRank attacks upon projection. We are able to break the NIST security level I parameters within a few seconds. Since the attack is polynomial time, there is no reparametrization resulting in a secure scheme.
Last updated:  2024-09-13
Evolving Secret Sharing Made Short
Danilo Francati and Daniele Venturi
Evolving secret sharing (Komargodski, Naor, and Yogev, TCC’16) generalizes the notion of secret sharing to the setting of evolving access structures, in which the share holders are added to the system in an online manner, and where the dealer knows neither the access structure nor the maximum number of parties in advance. Here, the main difficulty is to distribute shares to the new players without updating the shares of old players; moreover, one would like to minimize the share size as a function of the number of players. In this paper, we initiate a systematic study of evolving secret sharing in the computational setting, where the maximum number of parties is polynomial in the security parameter, but the dealer still does not know this value, nor does it know the access structure in advance. Moreover, the privacy guarantee only holds against computationally bounded adversaries corrupting an unauthorized subset of the players. Our main result is that for many interesting and practically relevant evolving access structures (including graph access structures, DNF and CNF formula access structures, monotone circuit access structures, and threshold access structures), under standard hardness assumptions, there exist efficient secret sharing schemes with computational privacy and in which the shares are succinct (i.e., much smaller than the size of a natural computational representation of the evolving access structure).
Last updated:  2025-02-25
On Linear Equivalence, Canonical Forms, and Digital Signatures
Tung Chou, Edoardo Persichetti, and Paolo Santini
Given two linear codes, the code equivalence problem asks to find an isometry mapping one code into the other. The problem can be described in terms of group actions and, as such, finds a natural application in signatures derived from a Zero-Knowledge Proof system. A recent paper, presented at Asiacrypt 2023, showed how a proof of equivalence can be significantly compressed by describing how the isometry acts only on an information set. Still, the resulting signatures are far from being optimal, as the size for a witness to this relation is still significantly larger than the theoretical lower bound, which is twice the security parameter. In this paper, we fill this gap and propose a new notion of equivalence, which leads to a drastically reduced witness size. For many cases, the resulting size is exactly the optimal one given by the lower bound. We achieve this by introducing the framework of canonical representatives, that is, representatives for classes of codes which are equivalent under some notion of equivalence. We propose new notions of equivalence which encompass and further extend all the existing ones: this allows us to identify broader classes of equivalent codes, for which the equivalence can be proved with a very compact witness. We associate these new notions to a specific problem, called Canonical Form Linear Equivalence Problem (CF-LEP), which we show to be as hard as the original one (when random codes are considered), providing reductions in both ways. As an added consequence, this reduction leads to a new solver for the code equivalence problem, which is the fastest solver when the finite field size is large enough. Finally, we show that our framework yields a remarkable reduction in signature size when compared to the LESS submission. Our variant is able to obtain very compact signatures, around 2 KB or less, which are among the smallest in the code-based setting.
Last updated:  2024-09-24
Unclonable Non-Interactive Zero-Knowledge
Ruta Jawale and Dakshita Khurana
A non-interactive ZK (NIZK) proof enables verification of NP statements without revealing secrets about them. However, an adversary that obtains a NIZK proof may be able to clone this proof and distribute arbitrarily many copies of it to various entities: this is inevitable for any proof that takes the form of a classical string. In this paper, we ask whether it is possible to rely on quantum information in order to build NIZK proof systems that are impossible to clone. We define and construct unclonable non-interactive zero-knowledge arguments (of knowledge) for NP, addressing a question first posed by Aaronson (CCC 2009). Besides satisfying the zero-knowledge and argument of knowledge properties, these proofs additionally satisfy unclonability. Very roughly, this ensures that no adversary can split an honestly generated proof of membership of an instance $x$ in an NP language $\mathcal{L}$ and distribute copies to multiple entities that all obtain accepting proofs of membership of $x$ in $\mathcal{L}$. Our result has applications to unclonable signatures of knowledge, which we define and construct in this work; these non-interactively prevent replay attacks.
Last updated:  2024-09-27
Towards Practical Transciphering for FHE with Setup Independent of the Plaintext Space
Pierrick Méaux, Jeongeun Park, and Hilder V. L. Pereira
Fully Homomorphic Encryption (FHE) is a powerful tool to achieve non-interactive privacy preserving protocols with optimal computation/communication complexity. However, the main disadvantage is that the actual communication cost (bandwidth) is high due to the large size of FHE ciphertexts. As a solution, a technique called transciphering (also known as Hybrid Homomorphic Encryption) was introduced to achieve almost optimal bandwidth for such protocols. However, all existing works require clients to fix a precision for the messages or a mathematical structure for the message space beforehand. This results in unwanted constraints on the plaintext size or underlying structure of FHE-based applications. In this article, we introduce, for the first time, a new approach to transciphering that does not require a fixed message precision decided by the client. In more detail, a client uses any kind of FHE-friendly symmetric cipher over $\{0,1\}$ to send its input data encrypted bit-by-bit; the server can then choose a precision $p$ depending on the application and homomorphically transform the encrypted bits into FHE ciphertexts encrypting integers in $\mathbb{Z}_p$. To illustrate our new technique, we evaluate a transciphering using the FiLIP cipher and adapt the most practical homomorphic evaluation technique [CCS'22] to keep the latency practical. As a result, our proof-of-concept implementation for $p$ from $2^2$ to $2^8$ takes only from $13$ ms to $137$ ms.
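At its core, the server-side step described above is a homomorphic evaluation of the map $(b_0, \dots, b_{k-1}) \mapsto \sum_i b_i 2^i \bmod p$. The sketch below mocks that step with a plaintext stand-in for the FHE backend; the MockHE class and its interface are our hypothetical illustration, not the paper's implementation.

def recombine_bits(enc_bits, p, HE):
    # Server side: combine bitwise encryptions into one ciphertext encrypting
    # an integer in Z_p, using only homomorphic additions and multiplications
    # by public constants.
    acc = HE.encrypt(0)
    for i, ct in enumerate(enc_bits):
        acc = HE.add(acc, HE.mul_const(ct, pow(2, i, p)))  # acc += b_i * 2^i
    return acc

class MockHE:
    # Plaintext mock so the sketch runs without an FHE library.
    def __init__(self, p): self.p = p
    def encrypt(self, x): return x % self.p
    def add(self, a, b): return (a + b) % self.p
    def mul_const(self, ct, c): return (ct * c) % self.p

p = 2**8
bits = [1, 0, 1, 1]  # client's message 0b1101 = 13, sent bit-by-bit
assert recombine_bits(bits, p, MockHE(p)) == 13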
Last updated:  2024-09-20
Proofs of Space with Maximal Hardness
Leonid Reyzin
In a proof of space, a prover performs a complex computation with a large output. A verifier periodically checks that the prover still holds the output. The security goal for a proof of space construction is to ensure that a prover who erases even a portion of the output has to redo a large portion of the complex computation in order to satisfy the verifier. In existing constructions of proofs of space, the computation that a cheating prover is forced to redo is a small fraction (vanishing or small constant) of the original complex computation. The only exception is a construction of Pietrzak (ITCS 2019) that requires extremely depth-robust graphs, which result in impractically high complexity of the initialization process. We present the first proof of space of reasonable complexity that ensures that the prover has to redo almost the entire computation (fraction arbitrarily close to 1) when trying to save even an arbitrarily small constant fraction of the space. Our construction is a generalization of an existing construction called SDR (Fisch, Eurocrypt 2019) deployed on the Filecoin blockchain. Our improvements, while general, also demonstrate that the already deployed construction has considerably better security than previously shown. Technically, our construction can be viewed as amplifying predecessor-robust graphs. These are directed acyclic graphs in which every subgraph of sufficient relative size $\pi$ contains a large single-sink connected component of relative size $\alpha_\pi$. We take a predecessor-robust graph with constant parameters $(\pi, \alpha_\pi)$, and build a bigger predecessor-robust graph with a near-optimal set of parameters and additional guarantees on sink placement, while increasing the degree only by a small additive constant.
Last updated:  2024-08-22
Shufflecake: Plausible Deniability for Multiple Hidden Filesystems on Linux
Elia Anzuoni and Tommaso Gagliardoni
We present Shufflecake, a new plausible deniability design to hide the existence of encrypted data on a storage medium, making it very difficult for an adversary to prove the existence of such data. Shufflecake can be considered a ``spiritual successor'' of tools such as TrueCrypt and VeraCrypt, but vastly improved: it works natively on Linux, it supports any filesystem of choice, and it can manage multiple volumes per device, so as to make deniability of the existence of hidden partitions truly plausible. Compared to ORAM-based solutions, Shufflecake is far faster and simpler, but it does not offer native protection against multi-snapshot adversaries. However, we discuss security extensions that are made possible by its architecture, and we present evidence why these extensions might be enough to thwart more powerful adversaries. We implemented Shufflecake as an in-kernel tool for Linux, adding useful features, and we benchmarked its performance, showing only a minor slowdown compared to a base encrypted system. We believe Shufflecake represents a useful tool for people whose freedom of expression is threatened by repressive authorities or dangerous criminal organizations, in particular: whistleblowers, investigative journalists, and activists for human rights in oppressive regimes.
Last updated:  2024-01-16
Unmodified Half-Gates is Adaptively Secure - So is Unmodified Three-Halves
Xiaojie Guo, Kang Yang, Xiao Wang, Yu Yu, and Zheli Liu
Adaptive security is a crucial property for garbling schemes in pushing the communication of garbled circuits to an offline phase when the input is unknown. In this paper, we show that the popular half-gates scheme by Zahur et al. (Eurocrypt'15), without any modification, is adaptively secure in the non-programmable random permutation model (npRPM). Since real implementations of selective-secure half-gates are already based on npRPM, our result shows that these implementations are already adaptively secure under the same condition where selective security is proven. Additionally, we expand our analysis to cover the recent three-halves construction by Rosulek and Roy (Crypto'21). As a byproduct, we discuss some optimizations and a separation when considering the programmable random permutation model instead.
Last updated:  2024-09-30
Adaptive Garbled Circuits and Garbled RAM from Non-Programmable Random Oracles
Cruz Barnum, David Heath, Vladimir Kolesnikov, and Rafail Ostrovsky
Garbled circuit techniques that are secure in the adaptive setting -- where inputs are chosen after the garbled program is sent -- are motivated by practice, but they are notoriously difficult to achieve. Prior adaptive garbling is either impractically expensive or encrypts the entire garbled program with the output of a programmable random oracle (PRO), a strong assumption. We present a simple framework for proving adaptive security of garbling schemes in the non-programmable random oracle (NPRO) model. NPRO is a milder assumption than PRO, and it is close to the assumption required by the widely used Free XOR extension. Our framework is applicable to a number of existing GC techniques, which are proved adaptively secure without modification. As our main application of the framework, we construct and prove adaptively secure a garbling scheme for tri-state circuits, a recently proposed circuit model that captures both Boolean circuits and RAM programs (Heath et al., Crypto 2023). For a given TSC $C$, our garbling of $C$ is at most $|C|\cdot \lambda$ bits long, where $\lambda$ is the security parameter. This implies both an adaptively secure garbled Boolean circuit scheme, as well as an adaptively secure garbled RAM scheme where the garbling of $T$ steps of a RAM program has size $O(T \cdot \log^3 T \cdot \log \log T \cdot \lambda)$ bits. Our scheme is concretely efficient: its Boolean circuit handling matches the performance of half-gates, and it is adaptively secure from NPRO.
Last updated:  2024-06-11
Polynomial Time Cryptanalytic Extraction of Neural Network Models
Isaac A. Canales-Martínez, Jorge Chavez-Saab, Anna Hambitzer, Francisco Rodríguez-Henríquez, Nitin Satpute, and Adi Shamir
Billions of dollars and countless GPU hours are currently spent on training Deep Neural Networks (DNNs) for a variety of tasks. Thus, it is essential to determine the difficulty of extracting all the parameters of such neural networks when given access to their black-box implementations. Many versions of this problem have been studied over the last 30 years, and the best current attack on ReLU-based deep neural networks was presented at Crypto’20 by Carlini, Jagielski, and Mironov. It resembles a differential chosen plaintext attack on a cryptosystem, which has a secret key embedded in its black-box implementation and requires a polynomial number of queries but an exponential amount of time (as a function of the number of neurons). In this paper, we improve this attack by developing several new techniques that enable us to extract with arbitrarily high precision all the real-valued parameters of a ReLU-based DNN using a polynomial number of queries and a polynomial amount of time. We demonstrate its practical efficiency by applying it to a full-sized neural network for classifying the CIFAR10 dataset, which has 3072 inputs, 8 hidden layers with 256 neurons each, and about 1.2 million neuronal parameters. An attack following the approach by Carlini et al. requires an exhaustive search over $2^{256}$ possibilities. Our attack replaces this with our new techniques, which require only 30 minutes on a 256-core computer.
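A basic primitive in this line of attacks is locating inputs where a single ReLU toggles (``critical points'' of the piecewise-linear network) using only black-box queries. The sketch below is a simplified textbook version of that primitive, assuming exactly one kink in the search interval of a one-dimensional input line; it is not the paper's full attack.

def find_critical_point(f, x0, x1, eps=1e-7, tol=1e-9):
    # Binary search for the kink of a piecewise-linear f on [x0, x1]:
    # if the left half has one constant slope, the kink is in the right half.
    while x1 - x0 > eps:
        mid = (x0 + x1) / 2
        quarter = x0 + (mid - x0) / 2
        slope_full = (f(mid) - f(x0)) / (mid - x0)
        slope_half = (f(quarter) - f(x0)) / (quarter - x0)
        if abs(slope_full - slope_half) < tol:
            x0 = mid  # left half is linear; kink lies to the right
        else:
            x1 = mid  # slope changes inside the left half
    return (x0 + x1) / 2

relu = lambda t: max(t, 0.0)
f = lambda x: 2.0 * relu(x - 0.3) + 0.5 * x  # one hidden ReLU, kink at 0.3
print(find_critical_point(f, 0.0, 1.0))      # ~0.3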
Last updated:  2025-01-07
Committing AE from Sponges: Security Analysis of the NIST LWC Finalists
Juliane Krämer, Patrick Struck, and Maximiliane Weishäupl
Committing security has gained considerable attention in the field of authenticated encryption (AE). This can be traced back to a line of recent attacks, which entail that AE schemes used in practice should not only provide confidentiality and authenticity, but also committing security. Roughly speaking, a committing AE scheme guarantees that ciphertexts will decrypt only for one key. Despite the recent research effort in this area, the finalists of the NIST lightweight cryptography standardization process have not yet been considered. We close this gap by providing an analysis of these schemes with respect to their committing security. Despite the structural similarities the finalists exhibit, our results are of a quite heterogeneous nature: We break four of the schemes with effectively no costs, while for two schemes our attacks are costlier, yet still efficient. For the remaining three schemes ISAP, Ascon, and (a slightly modified version of) Schwaemm, we give formal security proofs. Our analysis reveals that sponges are well-suited for building committing AE schemes. Furthermore, we show several negative results when applying the zero-padding method to the NIST finalists.
Last updated:  2023-10-06
SoK: Signatures With Randomizable Keys
Sofía Celi, Scott Griffy, Lucjan Hanzlik, Octavio Perez Kempner, and Daniel Slamanig
Digital signature schemes with specific properties have recently seen various real-world applications with a strong emphasis on privacy-enhancing technologies. They have been extensively used to develop anonymous credentials schemes and to achieve an even more comprehensive range of functionalities in the decentralized web. Substantial work has been done to formalize different types of signatures where an allowable set of transformations can be applied to message-signature pairs to obtain new related pairs. Most of the previous work focused on transformations with respect to the message being signed, but little has been done to study what happens when transformations apply to the signing keys. A first attempt to thoroughly formalize such aspects was carried out by Derler and Slamanig (ePrint '16, Designs, Codes and Cryptography '19), followed by the more recent efforts by Backes et al. (ASIACRYPT '18) and Eaton et al. (ePrint '23). However, the literature on the topic is vast and different terminology is used across contributions, which makes it difficult to compare related works and understand the range of applications covered by a given construction. In this work, we present a unified view of signatures with randomizable keys and revisit their security properties. We focus on state-of-the-art constructions and related applications, identifying existing challenges. Our systematization allows us to highlight gaps, open questions and directions for future research on signatures with randomizable keys.
Last updated:  2025-01-14
On the Privacy of Sublinear-Communication Jaccard Index Estimation via Min-hash Sketching
Seung Geol Choi, Dana Dachman-Soled, Mingyu Liang, Linsheng Liu, and Arkady Yerukhimovich
The min-hash sketch is a well-known technique for low-communication approximation of the Jaccard index between two input sets. Moreover, there is a folklore belief that min-hash sketch based protocols protect the privacy of the inputs. In this paper, we investigate this folklore to quantify the privacy of the min-hash sketch. We begin our investigation by considering the privacy of min-hash in a centralized setting where the hash functions are chosen by the min-hash functionality and are unknown to the participants. We show that in this case the min-hash output satisfies the standard definition of differential privacy (DP) without any additional noise. This immediately yields a privacy-preserving sublinear-communication semi-honest 2-PC protocol based on FHE where the hash function is evaluated homomorphically. To improve the efficiency of this protocol, we next consider an implementation in the random oracle model. Here, the protocol participants jointly sample public prefixes for domain separation of the random oracle, and locally evaluate the resulting hash functions on their input sets. Unfortunately, we show that in this public hash function setting, the min-hash output is no longer DP. We therefore consider the notion of distributional differential privacy (DDP) introduced by Bassily et al. (FOCS 2013). We show that if the honest party's set has sufficiently high min-entropy then the output of the min-hash functionality achieves DDP, again without any added noise. This yields a more efficient semi-honest two-party protocol in the random oracle model, where parties first locally hash their input sets and then perform a 2PC for comparison. By proving that our protocols satisfy DP and DDP respectively, our results formally confirm and qualify the folklore belief that min-hash based protocols protect the privacy of their inputs.
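For readers unfamiliar with the primitive, the sketch below (our illustration; the hash and prefix choices are hypothetical) computes min-hash sketches with public domain-separation prefixes, as in the random-oracle protocol above, and estimates the Jaccard index as the fraction of agreeing coordinates.

import hashlib

def minhash(items, prefixes):
    # One sketch entry per public prefix: the minimum of a domain-separated
    # hash over the input set.
    def h(prefix, x):
        return int.from_bytes(hashlib.sha256(prefix + x).digest(), "big")
    return [min(h(pref, x) for x in items) for pref in prefixes]

def jaccard_estimate(s1, s2):
    # Pr[min-hashes agree] equals the Jaccard index |A∩B| / |A∪B|.
    return sum(a == b for a, b in zip(s1, s2)) / len(s1)

prefixes = [bytes([j]) for j in range(256)]  # jointly sampled in the protocol
A = {b"apple", b"pear", b"plum"}
B = {b"apple", b"pear", b"fig"}
print(jaccard_estimate(minhash(A, prefixes), minhash(B, prefixes)))  # ~0.5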
Last updated:  2023-10-06
cuML-DSA: Optimized Signing Procedure and Server-Oriented GPU Design for ML-DSA
Shiyu Shen, Hao Yang, Wenqian Li, and Yunlei Zhao
The threat posed by quantum computing has precipitated an urgent need for post-quantum cryptography. Recently, the post-quantum digital signature draft FIPS 204 has been published, delineating the details of ML-DSA, which is derived from CRYSTALS-Dilithium. Despite these advancements, server environments, especially those equipped with GPU devices necessitating high-throughput signing, remain entrenched in classical schemes. A conspicuous void exists in the realm of GPU implementations and server-specific designs for ML-DSA. In this paper, we propose the first server-oriented GPU design tailored to the ML-DSA signing procedure in high-throughput servers. We introduce several innovative theoretical optimizations to bolster performance, including depth-prior sparse ternary polynomial multiplication, the branch elimination method, and the rejection-prioritized checking order. Furthermore, exploiting server-oriented features, we propose a comprehensive GPU hardware design, augmented by a suite of GPU implementation optimizations to further amplify performance. Additionally, we present variants for sampling sparse polynomials, thereby streamlining our design. The deployment of our implementation on both server-grade and commercial GPUs demonstrates significant speedups, ranging from 170.7× to 294.2× over the CPU baseline, and an improvement of up to 60.9% compared to related work, affirming the effectiveness and efficiency of the proposed GPU architecture for the ML-DSA signing procedure.
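To give the flavor of the first optimization: the ML-DSA challenge polynomial has only a few nonzero coefficients, all in $\{-1, +1\}$, so its product with a polynomial in $\mathbb{Z}_q[x]/(x^n + 1)$ reduces to a handful of shifted, sign-adjusted additions instead of a full NTT multiplication. A minimal reference sketch of that idea (our illustration, not the paper's GPU kernel):

def sparse_ternary_mul(c_idx, c_sgn, s, q, n):
    # Multiply s by the sparse ternary polynomial with nonzero coefficients
    # c_sgn[k] at positions c_idx[k], in Z_q[x]/(x^n + 1).
    r = [0] * n
    for i, sgn in zip(c_idx, c_sgn):
        for j in range(n):
            k, term = i + j, sgn * s[j]
            if k >= n:        # x^n = -1: wrap around with a sign flip
                k, term = k - n, -term
            r[k] = (r[k] + term) % q
    return r

q = 8380417                                    # the ML-DSA modulus
print(sparse_ternary_mul([1], [1], [1, 2, 3, 4], q, n=4))
# x * (1 + 2x + 3x^2 + 4x^3) = -4 + x + 2x^2 + 3x^3 (mod x^4 + 1)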
Last updated:  2023-10-11
A reduced set of submatrices for a faster evaluation of the MDS property of a circulant matrix with entries that are powers of two
Dragan Lambić
In this paper, a reduced set of submatrices for a faster evaluation of the MDS property of a circulant matrix with entries that are powers of two is proposed. A proposition is made that, under the condition that all entries of a t × t circulant matrix are powers of 2, it is sufficient to check only its 2 × 2 submatrices in order to evaluate the MDS property in a prime field. Although there is no theoretical proof to support this proposition at this point, experiments conducted on a sample of 100,000 randomly generated matrices indicate that the proposition is true. The proposed MDS test benefits the efficiency of search methods for the generation of circulant MDS matrices regardless of the correctness of this proposition. However, if the proposition is correct, its impact on the speed of search methods for circulant MDS matrices will be substantial, enabling the generation of MDS matrices of large sizes. A modified version of the make_binary_powers function is also presented. Based on this modified function and the proposed MDS test, some examples of efficient 16 × 16 MDS matrices are presented. Examples of efficient 24 × 24 matrices are also generated, whose MDS property should be further validated.
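As a concrete illustration of the proposed test (our sketch, not the paper's code), the following checks every 2 × 2 submatrix of a circulant matrix for nonsingularity modulo a prime; a full MDS check would instead examine square submatrices of every size.

from itertools import combinations

def circulant(first_row):
    t = len(first_row)
    return [first_row[-i:] + first_row[:-i] for i in range(t)]

def passes_2x2_test(M, p):
    # True iff every 2 x 2 submatrix of M is nonsingular mod the prime p.
    t = len(M)
    for r1, r2 in combinations(range(t), 2):
        for c1, c2 in combinations(range(t), 2):
            if (M[r1][c1] * M[r2][c2] - M[r1][c2] * M[r2][c1]) % p == 0:
                return False
    return True

p = 2**31 - 1  # example prime field
print(passes_2x2_test(circulant([1, 2, 8, 128]), p))  # True
print(passes_2x2_test(circulant([1, 2, 4, 8]), p))    # False: [[2, 4], [1, 2]] is singular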
Last updated:  2024-04-09
Kirby: A Robust Permutation-Based PRF Construction
Charlotte Lefevre, Yanis Belkheyar, and Joan Daemen
We present a construction, called Kirby, for building a variable-input-length pseudorandom function (VIL-PRF) from a $b$-bit permutation. For this construction we prove a tight bound of $b/2$ bits of security on the PRF distinguishing advantage in the random permutation model and in the multi-user setting. Similar to full-state keyed sponge/duplex, it supports full-state absorbing and additionally supports full-state squeezing, while the sponge/duplex can squeeze at most $b-c$ bits per permutation call, for a security level of $c$ bits. This advantage is especially relevant on constrained platforms when using a permutation with small width $b$. For instance, for $b=256$ at equal security strength the squeezing rate of Kirby is twice that of keyed sponge/duplex. This construction could be seen as a generalization of the construction underlying the stream cipher family Salsa. Furthermore, we define a simple mode on top of Kirby that turns it into a deck function with parallel expansion. This is similar to Farfalle but it has a much smaller memory footprint. Finally, we prove that in the Kirby construction, the leakage of intermediate states does not allow recovering the key or earlier states.
Last updated:  2024-08-24
Accountable Decryption made Formal and Practical
Rujia Li, Yuanzhao Li, Qin Wang, Sisi Duan, Qi Wang, and Mark Ryan
With the increasing scale and complexity of online activities, accountability, as an after-the-fact mechanism, has become an effective complementary approach to ensure system security. Decades of research have delved into the notion of accountability; however, practical accountability of decryption has not been achieved. This paper seeks to address this gap. We consider the scenario where a client (called encryptor, her) encrypts her data and then chooses a delegate (a.k.a. decryptor, him) that stores data for her. If the decryptor initiates an illegitimate decryption on the encrypted data, there is a non-negligible probability that this behavior will be detected, thereby holding the decryptor accountable for his decryption. We make three contributions. First, we review key definitions of accountability known so far. Based on extensive investigations, we formalize new definitions of accountability specifically targeting the decryption process, denoted accountable decryption, and discuss the (im)possibilities of capturing this concept. We also define the corresponding security goals. Second, we present a novel Trusted Execution Environment (TEE)-assisted solution aligned with these definitions. Instead of fully trusting the TEE, we take a further step, making the TEE work in the "trust, but verify" model where we trust the TEE and use its service, but empower users (i.e., decryptors) to detect the potentially compromised state of TEEs. Third, we implement a full-fledged system and conduct a series of evaluations. The results demonstrate that our solution is efficient. Even in a scenario involving $300,000$ log entries, the decryption process concludes in approximately $5.5$ ms, and malicious decryptors can be identified within $69$ ms.
Last updated:  2024-06-03
Lookup Arguments: Improvements, Extensions and Applications to Zero-Knowledge Decision Trees
Matteo Campanelli, Antonio Faonio, Dario Fiore, Tianyu Li, and Helger Lipmaa
Lookup arguments allow one to prove that the elements of a committed vector come from a (bigger) committed table. They enable novel approaches to reduce the prover complexity of general-purpose zkSNARKs, implementing ``non-arithmetic operations'' such as range checks, XOR and AND more efficiently. We extend the notion of lookup arguments along two directions and improve their efficiency: (1) we extend vector lookups to matrix lookups (where we can prove that a committed matrix is a submatrix of a committed table); (2) we consider the notion of zero-knowledge lookup argument that keeps the privacy of both the sub-vector/sub-matrix and the table; (3) we present new zero-knowledge lookup arguments, dubbed cq+, zkcq+ and cq++, more efficient than the state of the art, namely the recent work by Eagen, Fiore and Gabizon named cq. Finally, we give a novel application of the zero-knowledge matrix lookup argument to the domain of zero-knowledge decision trees, where the model provider releases a commitment to a decision tree and can prove zero-knowledge statistics over the committed data structure. Our scheme, based on lookup arguments, has succinct verification, prover time complexity asymptotically better than the state of the art, and is secure in a strong security model where the commitment to the decision tree can be malicious.
Last updated:  2023-10-05
Threshold Implementations with Non-Uniform Inputs
Siemen Dhooghe and Artemii Ovchinnikov
Modern block ciphers designed for hardware and masked with Threshold Implementations (TIs) provide provable security against first-order attacks. However, the application of TIs leaves designers to deal with a trade-off between security and cost, for example, the cost of generating the required random bits. This generation cost comes with an increased overhead in terms of area and latency. Decreasing the number of random bits used for the masking reduces this overhead. We propose reducing the randomness used to mask the secrets, such as the plaintext. For that purpose, we suggest relaxing the requirement for the uniformity of the input shares and reusing randomness for their masking in first-order TIs. We apply our countermeasures to first-order TIs of the Prince and Midori64 ciphers with three shares. Since the designs with non-uniform masks are no longer perfectly first-order probing secure, we provide further analysis by calculating bounds on the advantage of a noisy threshold-probing adversary. We then make use of the PROLEAD tool, which implements statistical tests verifying robust probing security, to compare its output with our estimates. Finally, we evaluate the designs on an FPGA to highlight the practical security of our solution. We observe that their security holds while requiring four times less randomness than uniform TIs.
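To fix ideas, a first-order TI with three shares represents a secret $x$ as $(x \oplus r_1 \oplus r_2, r_1, r_2)$; the relaxation above amounts to reusing $r_1, r_2$ across several secrets instead of drawing them fresh each time, so the input sharing is no longer uniform. A minimal sketch of the sharing itself (our illustration):

import secrets

def share3(x, r1, r2):
    # Boolean 3-sharing of x; reusing r1, r2 for several secrets is exactly
    # the randomness reuse whose security the paper quantifies.
    return (x ^ r1 ^ r2, r1, r2)

r1, r2 = secrets.randbits(64), secrets.randbits(64)
pt_shares  = share3(0xDEADBEEF, r1, r2)   # plaintext shares
key_shares = share3(0xC0FFEE,   r1, r2)   # same randomness, reused
assert pt_shares[0] ^ pt_shares[1] ^ pt_shares[2] == 0xDEADBEEF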
Last updated:  2024-09-13
On the Viability of Open-Source Financial Rails: Economic Security of Permissionless Consensus
Jacob D. Leshno, Rafael Pass, and Elaine Shi
Bitcoin demonstrated the possibility of a financial ledger that operates without the need for a trusted central authority. However, concerns persist regarding its security and considerable energy consumption. We assess the consensus protocols that underpin Bitcoin’s functionality, questioning whether they can ensure economically meaningful security while maintaining a permissionless design that allows free entry of operators. We answer this affirmatively by constructing a protocol that guarantees economic security and preserves Bitcoin's permissionless design. This protocol's security does not depend on monetary payments to miners or immense electricity consumption, which our analysis suggests are ineffective. Our framework integrates economic theory with distributed systems theory, and highlights the role of the protocol's user community.
Last updated:  2024-08-15
OPTIKS: An Optimized Key Transparency System
Julia Len, Melissa Chase, Esha Ghosh, Kim Laine, and Radames Cruz Moreno
Key Transparency (KT) refers to a public key distribution system with transparency mechanisms proving its correct operation, i.e., proving that it reports consistent values for each user's public key. While prior work on KT systems has offered new designs to tackle this problem, relatively little attention has been paid to the issue of scalability. Indeed, it is not straightforward to actually build a scalable and practical KT system from existing constructions, which may be too complex, inefficient, or non-resilient against machine failures. In this paper, we present OPTIKS, a full-featured and optimized KT system that focuses on scalability. Our system is simpler and more performant than prior work, supporting smaller storage overhead while still meeting strong notions of security and privacy. Our design also incorporates a crash-tolerant and scalable server architecture, which we demonstrate by presenting extensive benchmarks. Finally, we address several real-world problems in deploying KT systems that have received limited attention in prior work, including account decommissioning and user-to-device mapping.
Last updated:  2025-01-06
Leakage-Free Probabilistic Jasmin Programs
José Bacelar Almeida, Denis Firsov, Tiago Oliveira, and Dominique Unruh
This paper presents a semantic characterization of leakage-freeness through timing side-channels for Jasmin programs. Our characterization covers probabilistic Jasmin programs that are not constant-time. In addition, we provide a characterization in terms of probabilistic relational Hoare logic and prove the equivalence between both definitions. We also prove that our new characterizations are compositional and relate our new definitions to existing ones from prior work, which could only be applied to deterministic programs. To provide practical evidence, we use the Jasmin framework to develop a rejection sampling algorithm and provide an EasyCrypt proof that ensures the algorithm's implementation is leakage-free while not being constant-time.
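The rejection sampler mentioned last is a canonical example of a program that is leakage-free without being constant-time: its running time varies, but only as a function of randomness that is rejected and discarded, so the timing trace is independent of the value finally returned. A toy Python rendering of that behavior (our illustration; the paper's artifact is a verified Jasmin implementation):

import secrets

def sample_uniform_mod_q(q):
    # Draw uniform k-bit candidates until one lands in [0, q).
    # The number of iterations depends only on discarded candidates.
    k = (q - 1).bit_length()
    while True:
        candidate = secrets.randbits(k)
        if candidate < q:  # accept; runtime so far reveals nothing about it
            return candidate

print(sample_uniform_mod_q(3329))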
Last updated:  2024-01-12
Making an Asymmetric PAKE Quantum-Annoying by Hiding Group Elements
Marcel Tiepelt, Edward Eaton, and Douglas Stebila
The KHAPE-HMQV protocol is a state-of-the-art highly efficient asymmetric password-authenticated key exchange protocol that provides several desirable security properties, but has the drawback of being vulnerable to quantum adversaries due to its reliance on discrete logarithm-based building blocks: solving a single discrete logarithm allows the attacker to perform an offline dictionary attack and recover the password. We show how to modify KHAPE-HMQV to make the protocol quantum-annoying: a classical adversary who has the additional ability to solve discrete logarithms can only break the protocol by solving a discrete logarithm for each guess of the password. While not fully resistant to attacks by quantum computers, a quantum-annoying protocol could offer some resistance to quantum adversaries for whom discrete logarithms are relatively expensive. Our modification to the protocol is small: encryption (using an ideal cipher) is added to one message. Our analysis uses the same ideal cipher model assumption as the original analysis of KHAPE, and quantum annoyingness is modelled using an extension of the generic group model which gives a classical adversary a discrete logarithm oracle.
Last updated:  2023-10-03
List Oblivious Transfer and Applications to Round-Optimal Black-Box Multiparty Coin Tossing
Michele Ciampi, Rafail Ostrovsky, Luisa Siniscalchi, and Hendrik Waldner
In this work we study the problem of minimizing the round complexity for securely evaluating multiparty functionalities while making black-box use of polynomial time assumptions. In Eurocrypt 2016, Garg et al. showed that, assuming all parties have access to a broadcast channel, at least four rounds of communication are required to securely realize non-trivial functionalities in the plain model. A sequence of works followed up on the result of Garg et al., matching this lower bound under a variety of assumptions. Unfortunately, none of these works makes black-box use of the underlying cryptographic primitives. In Crypto 2021, Ishai, Khurana, Sahai, and Srinivasan came closer to matching the four-round lower bound, obtaining a five-round protocol that makes black-box use of oblivious transfer and PKE with pseudorandom public keys. In this work, we show how to realize any input-less functionality (e.g., coin-tossing, generation of key-pairs, and so on) in four rounds while making black-box use of two-round oblivious transfer. As an additional result, we construct the first four-round MPC protocol for generic functionalities that makes black-box use of the underlying primitives, achieving security against non-aborting adversaries. Our protocols are based on a new primitive called list two-party computation. This primitive offers relaxed security compared to the standard notion of secure two-party computation. Despite this relaxation, we argue that this tool suffices for our applications. List two-party computation is of independent interest, as we argue it can also be used for the generation of setups, like oblivious transfer correlated randomness, in three rounds. Prior to our work, generating such a setup required at least four rounds of interaction or a trusted third party.
Last updated:  2023-10-03
Lower bound of costs of formulas to compute image curves of $3$-isogenies in the framework of generalized Montgomery coordinates
Tomoki Moriya, Hiroshi Onuki, Yusuke Aikawa, and Tsuyoshi Takagi
In 2022, Moriya, Onuki, Aikawa, and Takagi proposed a new framework named generalized Montgomery coordinates to treat one-coordinate type formulas to compute isogenies. This framework generalizes some already known one-coordinate type formulas of elliptic curves. Their result shows that a formula to compute image points under isogenies is unique in the framework of generalized Montgomery coordinates; however, a formula to compute image curves is not unique. This raises a question: which formula is the most efficient for computing image curves in the framework of generalized Montgomery coordinates? In this paper, we analyze the costs of formulas to compute image curves of $3$-isogenies in the framework of generalized Montgomery coordinates. From our result, the lower bound of the costs is $1\mathbf{M}+1\mathbf{S}$ for a formula whose output and input are in affine coordinates, $2\mathbf{S}$ for an affine formula whose output is projective, and $2\mathbf{M}+3\mathbf{S}$ for a projective formula.
Last updated:  2024-01-12
Towards Practical Doubly-Efficient Private Information Retrieval
Hiroki Okada, Rachel Player, Simon Pohmann, and Christian Weinert
Private information retrieval (PIR) protocols allow clients to access database entries without revealing the queried indices. They have many real-world applications, including privately querying patent-, compromised credential-, and contact databases. While existing PIR protocols that have been implemented perform reasonably well in practice, they all have suboptimal asymptotic complexities. A line of work has explored so-called doubly-efficient PIR (DEPIR), which refers to single-server PIR protocols with optimal asymptotic complexities. Recently, Lin, Mook, and Wichs (STOC 2023) presented the first protocol that completely satisfies the DEPIR constraints and can be rigorously proven secure. Unfortunately, their proposal is purely theoretical in nature. It is even speculated that such protocols are completely impractical, and hence no implementation of any DEPIR protocol exists. In this work, we challenge this assumption. We propose several optimizations for the protocol of Lin, Mook, and Wichs that improve both asymptotic and concrete running times, as well as storage requirements, by orders of magnitude. Furthermore, we implement the resulting protocol and show that for batch queries it outperforms state-of-the-art protocols.
Last updated:  2023-10-03
Efficient and Usable Coercion-Resistant E-Voting on the Blockchain
Neyire Deniz Sarier
In [1], Sarier presents a practical biometric-based non-transferable credential scheme that maintains the efficiency of the underlying Brands credential. In this paper, we design a new Blockchain-Based E-Voting (BBEV) scheme that combines the system of [1] with encrypted Attribute Based Credentials for a non-transferable code-voting approach to achieve an efficient, usable, anonymous, transparent, auditable, verifiable, receipt-free and coercion-resistant remote voting system for small/medium scale elections. To the best of our knowledge, the system is the first user-centric BBEV scheme that depends on the one-show Brands credential both for biometric authentication and pre-encrypted ballot generation, leading to natural prevention of double voting. Even though the system is instantiated with the Bitcoin Blockchain due to its prevalence and the various coin mixers available for anonymity, the system is designed to be generic, i.e., independent of the cryptocurrency. Thus, the new BBEV scheme can be extended to large-scale elections on public Blockchains with higher throughput/cheaper transaction fees. Finally, a cost analysis based on the last USA presidential election data shows that the new BBEV is advantageous over the traditional one if implemented for three consecutive elections.
Last updated:  2024-02-21
Provable Dual Attacks on Learning with Errors
Amaury Pouly and Yixin Shen
Learning with Errors (LWE) is an important problem for post-quantum cryptography (PQC) that underlies the security of several NIST PQC selected algorithms. Several recent papers have claimed improvements on the complexity of so-called dual attacks on LWE. These improvements make dual attacks comparable to or even better than primal attacks in certain parameter regimes. Unfortunately, those improvements rely on a number of untested and hard-to-test statistical assumptions. Furthermore, a recent paper claims that the whole premise of those improvements might be incorrect. The goal of this paper is to improve the situation by proving the correctness of a dual attack without relying on any statistical assumption. Although our attack is greatly simplified compared to the recent ones, it shares many important technical elements with those attacks and can serve as a basis for the analysis of more advanced attacks. We provide some rough estimates on the complexity of our simplified attack on Kyber using a Markov chain Monte Carlo discrete Gaussian sampler. Our main contribution is to clearly identify a set of parameters under which our attack (and presumably other recent dual attacks) can work. Furthermore, our analysis completely departs from the existing statistics-based analysis and is instead rooted in geometry. We also compare the regime in which our algorithm works to the ``contradictory regime'' of [Ducas and Pulles, 2023]. We observe that those two regimes are essentially complementary. Finally, we give a quantum version of our algorithm to speed up the computation. The algorithm is inspired by [Albrecht and Shen, 2022] but is completely formal and does not rely on any heuristics.
Last updated:  2023-10-02
Efficient Agreement Over Byzantine Gossip
Ran Cohen, Julian Loss, and Tal Moran
Byzantine agreement (BA) asks for a set of parties to reach agreement in an adversarial setting. A central question is how to construct efficient BA protocols that scale well with the number of parties. In particular, the communication complexity is a critical barrier for large-scale implementations. State-of-the-art, scalable BA protocols typically work by sampling a small, unpredictable committee of parties that will send messages in each round. These messages must reach all honest parties, to allow the next round's committee to function. In practice, this is usually accomplished by propagating messages over a gossip network, implemented over a partial communication graph. Most formulations of gossip networks have an ideal guarantee that every message delivered to any honest party will be delivered to every other honest party. Unfortunately, realizing this guarantee necessarily makes the protocol vulnerable to denial-of-service attacks, since an adversary can flood the network with many messages that the protocol must deliver to all parties. In this paper, we make several contributions towards realizing the goal of efficient, scalable Byzantine agreement over a gossip network: 1. We define ``gossip with abort,'' a relaxed gossip model that can be efficiently realized with minor modifications to existing gossip protocols, yet allows for significant savings in communication compared to the full point-to-point model. 2. Our protocols work in a graded PKI model, in which honest parties only have partial agreement about the set of participants in the protocol. This model arises naturally in settings without trusted setup, such as the ``permissionless'' setting underlying many blockchain protocols. 3. We construct a new, player-replaceable BA protocol in the graded PKI model. The concrete communication complexity of our protocol, for typical parameter values, is more than 25 times better than the current state-of-the-art BA protocols in the honest-majority setting.
Last updated:  2024-02-26
IS-CUBE: An isogeny-based compact KEM using a boxed SIDH diagram
Tomoki Moriya
Isogeny-based cryptography is one of the candidates for post-quantum cryptography. One of the benefits of using isogeny-based cryptography is its compactness. In particular, the key exchange scheme SIDH allowed us to use a $4\lambda$-bit prime for the security parameter $\lambda$. Unfortunately, SIDH was broken in 2022 by a series of attacks. Since then, several isogeny-based key exchange and public key encryption schemes have been proposed; however, most of these schemes use primes whose sizes are not guaranteed to be linearly related to the security parameter $\lambda$. As far as we know, the remaining schemes have not been implemented due to the computation of isogenies of high-dimensional abelian varieties, or they need to use a ``weak'' curve (i.e., a curve whose endomorphism ring is known) as the starting curve. In this study, we propose a novel compact isogeny-based key encapsulation mechanism named IS-CUBE via Kani's theorem and a $3$-dimensional SIDH diagram. A prime used in IS-CUBE is of size about $8\lambda$ bits, and we can use a random supersingular elliptic curve for the starting curve. The public key of IS-CUBE is about $3$ times larger than that of SIKE, and the ciphertext of IS-CUBE is about $4$ times larger than that of SIKE by theoretical estimation. In practice, compared to FESTA, the public key of IS-CUBE is slightly larger and its ciphertext is slightly smaller. The core idea of IS-CUBE comes from the hardness of some already known computational problems and a novel computational problem (the Long Isogeny with Torsion (LIT) problem), which is the problem of computing a hidden isogeny from two given supersingular elliptic curves and information about torsion points of relatively small order.
Last updated:  2024-01-10
PQ.V.ALU.E: Post-Quantum RISC-V Custom ALU Extensions on Dilithium and Kyber
Konstantina Miteloudi, Joppe Bos, Olivier Bronchain, Björn Fay, and Joost Renes
This paper explores the challenges of implementing the recommended upcoming post-quantum cryptography standards (the CRYSTALS-Dilithium and CRYSTALS-Kyber algorithms) on resource-constrained devices, and potential solutions. The high computational cost of polynomial operations, fundamental to cryptography based on ideal lattices, presents significant challenges for an efficient implementation. This paper proposes a hardware/software co-design strategy using RISC-V extensions to optimize resource utilization and speed up the number-theoretic transformations (NTTs). The primary contributions include a lightweight custom arithmetic logic unit (ALU), integrated into a 4-stage pipeline 32-bit RISC-V processor. This ALU is tailored to the NTT computations and supports modular arithmetic as well as NTT butterfly operations. Furthermore, an extension to the RISC-V instruction set is introduced, with ten new instructions accessing the custom ALU to perform the necessary operations. The new instructions reduce the cycle count of the Kyber and Dilithium NTTs by more than 80% compared to optimized assembly, while being more lightweight than other works in the literature.
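For reference, the inner operation those custom instructions accelerate is the NTT butterfly: a modular multiplication by a twiddle factor feeding an addition/subtraction pair. A minimal sketch of the arithmetic (our illustration, not the RISC-V datapath; the Kyber modulus is used as an example):

Q = 3329  # Kyber's modulus; Dilithium's is 8380417

def ct_butterfly(a, b, zeta, q=Q):
    # One Cooley-Tukey butterfly: (a, b) -> (a + zeta*b, a - zeta*b) mod q.
    t = (zeta * b) % q
    return (a + t) % q, (a - t) % q

print(ct_butterfly(5, 7, 17))  # (124, 3215)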
Last updated:  2023-10-02
Algebraic Group Model with Oblivious Sampling
Helger Lipmaa, Roberto Parisella, and Janno Siim
In the algebraic group model (AGM), an adversary has to return with each group element a linear representation with respect to input group elements. In many groups, it is easy to sample group elements obliviously without knowing such linear representations. Since the AGM does not model this, it can be used to prove the security of spurious knowledge assumptions. We show that several well-known zk-SNARKs use such assumptions. We propose AGM with oblivious sampling (AGMOS), an AGM variant where the adversary can access an oracle that allows sampling group elements obliviously from some distribution. We show that AGM and AGMOS are different by studying the family of ``total knowledge-of-exponent'' assumptions, showing that they are all secure in the AGM, but most are not secure in the AGMOS if the discrete logarithm assumption holds. We show an important separation in the case of the KZG commitment scheme. We show that many known AGM reductions go through also in the AGMOS, assuming a novel falsifiable assumption TOFR. We prove that TOFR is secure in a version of GGM with oblivious sampling.
Last updated:  2023-10-02
zk-Bench: A Toolset for Comparative Evaluation and Performance Benchmarking of SNARKs
Jens Ernstberger, Stefanos Chaliasos, George Kadianakis, Sebastian Steinhorst, Philipp Jovanovic, Arthur Gervais, Benjamin Livshits, and Michele Orrù
Zero-Knowledge Proofs (ZKPs), especially Succinct Non-interactive ARguments of Knowledge (SNARKs), have garnered significant attention in modern cryptographic applications. Given the multitude of emerging tools and libraries, assessing their strengths and weaknesses is nuanced and time-consuming. Often, claimed results are generated in isolation, and omissions in details render them irreproducible. The lack of comprehensive benchmarks, guidelines, and support frameworks to navigate the ZKP landscape effectively is a major barrier in the development of ZKP applications. In response to this need, we introduce zk-Bench, the first benchmarking framework and estimator tool designed for performance evaluation of public-key cryptography, with a specific focus on practical assessment of general-purpose ZKP systems. To simplify navigating the complex set of metrics and qualitative properties, we offer a comprehensive open-source evaluation platform, which enables the rigorous dissection and analysis of tools for ZKP development to uncover their trade-offs throughout the entire development stack; from low-level arithmetic libraries, to high-level tools for SNARK development. Using zk-Bench, we (i) collect data across $13$ different elliptic curves implemented across $9$ libraries, (ii) evaluate $5$ tools for ZKP development and (iii) provide a tool for estimating cryptographic protocols, instantiated for the $\mathcal{P}\mathfrak{lon}\mathcal{K}$ proof system, achieving an accuracy of 6–32% for ZKP circuits with up to millions of gates. By evaluating zk-Bench for various hardware configurations, we find that certain tools for ZKP development favor compute-optimized hardware, while others benefit from memory-optimized hardware. We observed performance enhancements of up to 40% for memory-optimized configurations and 50% for compute-optimized configurations, contingent on the specific ZKP development tool utilized.
Last updated:  2023-10-02
(In)security of stream ciphers against quantum annealing attacks on the example of the Grain 128 and Grain 128a ciphers
Michał Wroński, Elżbieta Burek, and Mateusz Leśniak
The security level of a cipher is a key parameter. While general-purpose quantum computers significantly threaten modern symmetric ciphers, other quantum approaches like quantum annealing have been less concerning. However, this paper argues that a quantum annealer specifically designed to attack the Grain 128 and Grain 128a ciphers could soon be technologically feasible. Such an annealer would require 5,751 (6,751) qubits and 77,496 (94,708) couplers, with a qubit connectivity of 225 (249). Notably, the forthcoming D-Wave Advantage 2 with Zephyr topology will feature over 7,000 qubits and 60,000 couplers and a qubit connectivity of 20. This work also shows that modern stream ciphers like Grain 128 and Grain 128a could be vulnerable to quantum annealing attacks. Although the exact complexity of quantum annealing is unknown, heuristic estimates suggest a $\sqrt{N}$ exponential advantage over simulated annealing for problems with $N$ variables. We detail how to transform algebraic attacks on Grain ciphers into the QUBO problem, making our attack potentially more efficient than classical brute-force methods. We also show that applying our attack to rescaled versions of the Grain cipher, namely the Grain $l$ and Grain $la$ versions, where $l$ represents the key size, leads to our attack overtaking both brute-force and Grover's attacks for sufficiently large $l$. This is true provided that quantum annealing has an exponential advantage over simulated annealing; specifically, it is sufficient for the time complexity of quantum annealing for problems with $N$ variables to be $e^{N^{\beta}}$ for any $\beta < 1$. Finally, given the general nature of our attack method, all new ciphers should be scrutinized for vulnerability to quantum annealing attacks and should at least match the AES cipher in terms of security level.
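The standard route from such algebraic attack equations to a QUBO is via penalty gadgets; for instance, Rosenberg's gadget enforces $z = x \wedge y$ with a quadratic penalty that vanishes exactly on consistent assignments. A minimal sketch of that textbook gadget (shown for illustration; the paper's transformation has its own details):

def and_gadget(x, y, z):
    # Quadratic penalty that is 0 iff z == x AND y, and >= 1 otherwise.
    return x * y - 2 * x * z - 2 * y * z + 3 * z

assert all((and_gadget(x, y, z) == 0) == (z == (x & y))
           for x in (0, 1) for y in (0, 1) for z in (0, 1))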
Last updated:  2024-06-26
Space-Efficient and Noise-Robust Quantum Factoring
Seyoon Ragavan and Vinod Vaikuntanathan
We provide two improvements to Regev's quantum factoring algorithm (arXiv:2308.06572), addressing its space efficiency and its noise-tolerance. Our first contribution is to improve the quantum space efficiency of Regev's algorithm while keeping the circuit size the same. Our main result constructs a quantum factoring circuit using $O(n \log n)$ qubits and $O(n^{3/2} \log n)$ gates. We achieve the best of Shor and Regev (up to a logarithmic factor in the space complexity): on the one hand, Regev's circuit requires $O(n^{3/2})$ qubits and $O(n^{3/2} \log n)$ gates, while Shor's circuit requires $O(n^2 \log n)$ gates but only $O(n)$ qubits. As with Regev, to factor an $n$-bit integer $N$, we run our circuit independently $\approx \sqrt{n}$ times and apply Regev's classical postprocessing procedure. Our optimization is achieved by implementing efficient and reversible exponentiation with Fibonacci numbers in the exponent, rather than the usual powers of 2, adapting work by Kaliski (arXiv:1711.02491) from the classical reversible setting to the quantum setting. This technique also allows us to perform quantum modular exponentiation that is efficient in both space and size without requiring significant precomputation, a result that may be useful for other quantum algorithms. A key ingredient of our exponentiation implementation is an efficient circuit for a function resembling in-place quantum-quantum modular multiplication. This implementation works with only black-box access to any quantum circuit for out-of-place modular multiplication, which we believe is yet another result of potentially broader interest. Our second contribution is to show that Regev's classical postprocessing procedure can be modified to tolerate a constant fraction of the quantum circuit runs being corrupted by errors. In contrast, Regev's analysis of his classical postprocessing procedure requires all $\approx \sqrt{n}$ runs to be successful. In a nutshell, we achieve this using lattice reduction techniques to detect and filter out corrupt samples.
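The exponentiation trick is easy to state classically: Fibonacci-indexed powers satisfy $a^{F_k} = a^{F_{k-1}} \cdot a^{F_{k-2}}$, so one can walk up the sequence with out-of-place multiplications of two stored registers and no in-place squarings. A classical toy rendering of the idea (our illustration; the paper's contribution is the reversible, quantum version):

def fib_pow(a, n, N):
    # Compute a^(F_n) mod N, with F_1 = F_2 = 1, using only products of
    # two stored registers: (a^(F_{k-1}), a^(F_k)) -> (a^(F_k), a^(F_{k+1})).
    x, y = a % N, a % N
    for _ in range(n - 2):
        x, y = y, (x * y) % N
    return y if n >= 2 else x

assert fib_pow(3, 10, 1000003) == pow(3, 55, 1000003)  # F_10 = 55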
Last updated:  2023-10-02
Holographic SNARGs for P and Batch-NP from (Polynomially Hard) Learning with Errors
Susumu Kiyoshima
A succinct non-interactive argument (SNARG) is called holographic if the verifier runs in time sub-linear in the input length when given oracle access to an encoding of the input. We present holographic SNARGs for P and Batch-NP under the learning with errors (LWE) assumption. Our holographic SNARG for P has a verifier that runs in time $\mathsf{poly}(\lambda, \log T, \log n)$ for $T$-time computations and $n$-bit inputs ($\lambda$ is the security parameter), while our holographic SNARG for Batch-NP has a verifier that runs in time $\mathsf{poly}(\lambda, T, \log k)$ for $k$ instances of $T$-time computations. Before this work, constructions with the same asymptotic efficiency were known in the designated-verifier setting or under the sub-exponential hardness of the LWE assumption. We obtain our holographic SNARGs (in the public-verification setting under the polynomial hardness of the LWE assumption) by constructing holographic SNARGs for certain hash computations and then applying known/trivial transformations. As an application, we use our holographic SNARGs to weaken the assumption needed for a recent public-coin 3-round zero-knowledge (ZK) argument [Kiyoshima, CRYPTO 2022]. Specifically, we use our holographic SNARGs to show that a public-coin 3-round ZK argument exists under the same assumptions as the state-of-the-art private-coin 3-round ZK argument [Bitansky et al., STOC 2018].
Last updated:  2023-10-01
Linearly-Homomorphic Signatures for Short Randomizable Proofs of Subset Membership
David Pointcheval
Electronic voting is one of the most interesting applications of modern cryptography, as it involves many innovative tools (such as homomorphic public-key encryption, non-interactive zero-knowledge proofs, and distributed cryptography) to guarantee several a priori contradictory security properties: the integrity of the tally and the privacy of the individual votes. While many efficient solutions exist for honest-but-curious voters, that follow the official procedure but try to learn more than just the public result, preventing attacks from malicious voters is much more complex: when voters may have incentive to send biased ballots, the privacy of the ballots is much harder to satisfy, whereas this is the crucial security property for electronic voting. We present a new technique to prove that an ElGamal ciphertext contains a message from a specific subset (quasi-adaptive NIZK of subset membership), using linearly-homomorphic signatures. The proofs are both quite efficient to generate, allowing the use of low-power devices to vote, and randomizable, which is important for the strong receipt-freeness property. They are well-suited to prevent vote-selling and replay attacks, which are the main threats against the privacy in electronic voting, with security proofs in the generic group model and the random oracle model.
Last updated:  2024-10-06
LWE with Quantum Amplitudes: Algorithm, Hardness, and Oblivious Sampling
Yilei Chen, Zihan Hu, Qipeng Liu, Han Luo, and Yaxin Tu
The learning with errors problem (LWE) is one of the most important building blocks for post-quantum cryptography. To better understand the quantum hardness of LWE, it is crucial to explore quantum variants of LWE. To this end, Chen, Liu, and Zhandry [Eurocrypt 2022] defined S|LWE> and C|LWE> problems by encoding the error of LWE samples into quantum amplitudes, and showed efficient quantum algorithms for a few interesting amplitudes. However, algorithms or hardness results of the most interesting amplitude, Gaussian, were not addressed before. In this paper, we show new algorithms, hardness results and applications for S|LWE> and C|LWE> with real Gaussian, Gaussian with linear or quadratic phase terms, and other related amplitudes. Let $n$ be the dimension of LWE samples. Our main results are 1. There is a $2^{\tilde{O}(\sqrt{n})}$-time algorithm for S|LWE> with Gaussian amplitude with known phase, given $2^{\tilde{O}(\sqrt{n})}$ many quantum samples. The algorithm is modified from Kuperberg's sieve, and in fact works for more general amplitudes as long as the amplitudes and phases are completely known. 2. There is a polynomial time quantum algorithm for solving S|LWE> and C|LWE> for Gaussian with quadratic phase amplitudes, where the sample complexity is as small as $\tilde{O}(n)$. As an application, we give a quantum oblivious LWE sampler where the core quantum sampler requires only quasi-linear sample complexity. This improves upon the previous oblivious LWE sampler given by Debris-Alazard, Fallahpour, Stehlé [STOC 2024], whose core quantum sampler requires $\tilde{O}(nr)$ sample complexity, where $r$ is the standard deviation of the error. 3. There exist polynomial time quantum reductions from standard LWE or worst-case GapSVP to S|LWE> with Gaussian amplitude with small unknown phase, and arbitrarily many samples. Compared to the first two items, the appearance of the unknown phase term places a barrier in designing an efficient quantum algorithm for solving standard LWE via S|LWE>.
Last updated:  2023-10-01
A note on ``authenticated key agreement protocols for dew-assisted IoT systems''
Zhengjun Cao and Lihua Liu
We show that the key agreement scheme [J. Supercomput., 78:12093-12113, 2022] is flawed. (1) It neglects the representation of a point over an elliptic curve and the basic requirement for bit-wise XOR, which results in a trivial equality. Using this equality, an adversary can recover a target device's identity, meaning the scheme fails to provide anonymity. (2) It falsely requires the central server to share its master secret key with each dew server. (3) The specified certificate is almost nonsensical.
Last updated:  2023-09-30
A Privacy-preserving Central Bank Ledger for Central Bank Digital Currency
Chan Wang Mong Tikvah
Central banks around the world are actively exploring the issuance of retail central bank digital currency (rCBDC), which is widely seen as a key upgrade of the monetary system in the 21st century. However, privacy concerns are the main impediment to rCBDC’s development and roll-out. A central bank, as the issuer of rCBDC, would typically need to keep a digital ledger to record all the balances and transactions of citizens. These data, when combined with other data, could disclose the spending habits of all citizens. On the one hand, people’s legitimate right to keep their transactions private should be protected, including against central bank surveillance. On the other hand, the central bank needs to ensure that no over-issuance of money or other fraud occurs, which necessarily demands some knowledge of rCBDC transactions to safeguard against malicious users who create counterfeit money or double-spend. This work investigates cryptographic tools and privacy-enhancing technology with the aim of crafting a scalable solution that strikes a balance between user privacy and transaction verifiability. In contrast to the current mainstream thinking among central banks, it assumes that the central bank maintains a ledger recording all balances and transactions of citizens, but in a concealed form. Specifically, this work focuses on rCBDC architectures based on the unspent transaction output (UTXO) data model and tackles the research problem of preserving a sufficient degree of privacy for UTXO transaction records while allowing the central bank to verify their correctness. While UTXO-based rCBDC architectures have been widely tested by major central banks, user privacy is not adequately addressed. The adoption of evolving public keys as pseudonyms to hide the real identities of users is the most advanced privacy design for UTXO-based rCBDC, but it solves the privacy issue only partially: some information can still leak. This work investigates techniques to address the shortcomings of the pseudonym approach. First, a Pedersen commitment scheme is applied to hide the transaction values of a UTXO transaction while allowing the central bank to verify that no over-issuance of rCBDC has occurred in the transaction. Contrary to the conventional approach, which applies a zero-knowledge proof of no over-issuance, this work uses a Schnorr signature, which not only reduces the overheads but also yields a non-interactive proof. Then, Coinjoin is applied to aggregate UTXO transactions from different users into one larger UTXO transaction, obfuscating the payer-payee relationship while preserving the correctness of the amount of money flow. This work applies a well-developed notion from database research, k-anonymity, to analyse the privacy guarantee of Coinjoin. By modelling transaction traffic as a Poisson process, the trade-off between anonymity and the transaction confirmation time of Coinjoin is analysed.
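A minimal sketch of the commitment mechanics described above (toy parameters, not the work's actual instantiation): Pedersen commitments are additively homomorphic, so the bank can check that a transaction creates no money by verifying that the excess commitment is a pure blinding term, whose discrete log the payer can attest to, e.g. with a Schnorr signature as the abstract suggests.

```python
import secrets

p, q = 23, 11            # toy order-11 subgroup of Z_23*; real systems use curves
g, h = 4, 9              # two subgroup generators, mutual dlog assumed unknown

def commit(v, r):
    return pow(g, v, p) * pow(h, r, p) % p

# A payer spends an input worth 7 into outputs worth 3 and 4.
r_in, r1, r2 = (secrets.randbelow(q) for _ in range(3))
c_in = commit(7, r_in)
c_out = commit(3, r1) * commit(4, r2) % p

# Balanced values cancel, leaving a pure blinding term h^(r_in - r1 - r2);
# proving knowledge of its discrete log base h convinces the bank that
# no rCBDC was created in the transaction.
excess = c_in * pow(c_out, p - 2, p) % p
assert excess == pow(h, (r_in - r1 - r2) % q, p)
```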
Last updated:  2023-10-06
Key Committing Security Analysis of AEGIS
Takanori Isobe and Mostafizar Rahman
Recently, there has been a surge of interest in the security of authenticated encryption with associated data (AEAD) within the context of key commitment frameworks. Security within this framework ensures that a ciphertext chosen by an adversary does not decrypt to two different sets of key, nonce, and associated data. Despite this increasing interest, the security of several widely deployed AEAD schemes has not been thoroughly examined within this framework. In this work, we assess the key committing security of AEGIS, which emerged as a winner in the Competition for Authenticated Encryption: Security, Applicability, and Robustness (CAESAR). A recent assertion, made in an IETF document, suggests that there are no known attacks on AEGIS in the key committing setting and that AEGIS qualifies as a fully committing AEAD scheme. Contrary to this claim, we propose a novel O(1) attack applicable to all variants of AEGIS, demonstrating a key committing attack in the FROB game setting, which is known to be one of the most stringent key committing frameworks. This implies that our attacks remain valid in other, more relaxed frameworks, such as CMT-1 and CMT-4.
Last updated:  2023-09-29
Committing authenticated encryption based on SHAKE
Joan Daemen, Silvia Mella, and Gilles Van Assche
Authenticated encryption is a cryptographic mechanism that allows communicating parties to protect the confidentiality and integrity of messages exchanged over a public channel, provided they share a secret key. Some applications require committing authenticated encryption schemes, a security notion that is not covered by the classical requirements of confidentiality and integrity given a secret key. An authenticated encryption (AE) scheme is committing in the strongest sense when it is impossible to generate the same ciphertext for different $(K, [N, ]A, P)$ tuples, with $K$ the key, $N$ the nonce, $A$ the associated data and $P$ the plaintext. In this work, we present authenticated encryption schemes for which we provably reduce their confidentiality, integrity and commitment security to the security of an underlying sponge function. In particular, we instantiate them with SHAKE128 and SHAKE256, offering 128 and 256 bits of security strength based on the security claim in the SHA-3 standard FIPS 202. Cryptanalysis of reduced-round versions of the SHA-3 and SHAKE functions suggests that the number of rounds can be halved without noticeable security degradation, which has led to the definition of TurboSHAKE128 and TurboSHAKE256; hence we also instantiate our schemes with these functions, offering the same security strength at twice the speed. The AE schemes we propose therefore have the unique advantages that 1) their security is based on a security claim that has received a large amount of public scrutiny and 2) they make use of the standard Keccak-p permutation, which has dedicated hardware support on more and more CPUs. In more detail, we build two online AE modes on top of a sponge function, in multiple layers. At the bottom layer, we use a variant of the duplex construction, referred to as overwrite duplex (OD for short), that uses an overwrite operation leading to a smaller state footprint. Our first AE mode is nonce-based, built using a variant of the SpongeWrap mode on top of OD, and is security-equivalent to it. Our second AE mode makes use of the Deck-BO mode published at Asiacrypt 2022, an online version of a Synthetic Initial Value (SIV) authenticated encryption scheme. It requires a deck function that we build on top of the OD, again security-equivalent to it.
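The sketch below is not the paper's OD or SpongeWrap-style mode; it only illustrates the committing intuition behind XOF-based designs: if the tag is bound to the entire $(K, N, A, C)$ tuple through SHAKE128, then a single ciphertext-tag pair valid under two different tuples would exhibit a collision in the XOF itself.

```python
import hashlib

def tag(key: bytes, nonce: bytes, ad: bytes, ct: bytes) -> bytes:
    xof = hashlib.shake_128()
    # Length-prefix each field so that distinct tuples give distinct XOF inputs.
    for field in (key, nonce, ad, ct):
        xof.update(len(field).to_bytes(8, "big") + field)
    return xof.digest(32)   # 256-bit tag, roughly 128-bit collision resistance

t1 = tag(b"k" * 16, b"n" * 12, b"ad", b"ciphertext")
t2 = tag(b"K" * 16, b"n" * 12, b"ad", b"ciphertext")
assert t1 != t2             # a different key yields a different tag
```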
Last updated:  2023-10-03
Measuring the Concentration of Control in Contemporary Ethereum
Simon Brown
Ethereum is undergoing significant changes to its architecture as it evolves. These changes include its switch to PoS consensus and the introduction of significant infrastructural changes that do not require a change to the core protocol, but that fundamentally affect the way users interact with the network. These changes represent an evolution toward a more modular architecture, in which there exist new exogenous vectors for centralization. This paper builds on previous studies of the decentralization of Ethereum to reflect these recent significant changes and Ethereum’s new modular paradigm.
Last updated:  2025-04-04
A Quantum Approach For Reducing Communications in Classical Secure Computations with Long Outputs
Jiayu Zhang
How could quantum cryptography help us achieve what is not achievable in classical cryptography? In this work we study the classical cryptographic problem in which two parties would like to perform secure computations with long outputs. As a basic primitive and example, we first consider the following problem, which we call secure function sampling with long outputs: suppose $f:\{0,1\}^n\rightarrow \{0,1\}^m$ is a public, efficient classical function, where $m$ is big; Alice would like to sample $x$ from its domain and send $f(x)$ to Bob; what Bob learns should be no more than $f(x)$, even if he behaves maliciously. Classical cryptography, like FHE and succinct arguments [Gen09,Kil92,HW15], allows us to achieve this task with communication complexity $O(n+m)$; could we achieve it with communication complexity independent of $m$? In this work, we first design a quantum cryptographic protocol that achieves secure function sampling with approximate security, within $O(n)$ communication (omitting the dependency on the security parameter and error tolerance). We also prove the classical impossibility using techniques from [HW15], which means that our protocol indeed achieves a type of quantum advantage. Building on the secure function sampling protocol, we further construct protocols for general secure two-party computations [Yao86,GB01] with approximate security, with communication complexity depending only on the input length and the targeted security. In terms of assumptions, we construct protocols for these problems assuming only the existence of collapsing hash functions [Unr16]; what's more, we also construct a classical-channel protocol for these problems additionally assuming the existence of noisy trapdoor claw-free functions [BCMVV,BKVV].
Last updated:  2023-09-29
Subversion-Resilient Signatures without Random Oracles
Pascal Bemmann, Sebastian Berndt, and Rongmao Chen
In the aftermath of the Snowden revelations in 2013, concerns about the integrity and security of cryptographic systems have grown significantly. As adversaries with substantial resources might attempt to subvert cryptographic algorithms and undermine their intended security guarantees, the need for subversion-resilient cryptography has become paramount. In subversion-resilient schemes, security properties are preserved even if the adversary supplies the implementation of the scheme used in the security experiment. This paper addresses this pressing concern by introducing novel constructions of subversion-resilient signatures and hash functions while proving the subversion-resilience of existing cryptographic primitives. Our main contribution is the first construction of subversion-resilient signatures under complete subversion in the offline watchdog model (with trusted amalgamation) without relying on random oracles. We demonstrate that one-way permutations naturally yield subversion-resilient one-way functions, thereby enabling us to establish the subversion-resilience of Lamport signatures, assuming a trusted comparison is available. Additionally, we develop subversion-resilient target-collision-resistant hash functions using a trusted XOR. By leveraging this approach, we expand the arsenal of cryptographic tools that can withstand potential subversion attacks. Our research builds upon previous work in the offline watchdog model with trusted amalgamation (Russell et al., ASIACRYPT'16) and subversion-resilient pseudo-random functions (Bemmann et al., ACNS'23), culminating in a formal proof of subversion-resilience for the classical Naor-Yung signature construction.
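For concreteness, here is the textbook Lamport one-time signature the paper builds on, sketched with SHA-256 standing in for the one-way function (the paper's trusted-comparison and watchdog machinery is omitted):

```python
import secrets, hashlib

H = lambda x: hashlib.sha256(x).digest()
N = 256                                   # bits in the message digest

# Key generation: two lists of random preimages; the public key is their hashes.
sk = [[secrets.token_bytes(32) for _ in range(N)] for _ in range(2)]
pk = [[H(s) for s in row] for row in sk]

def bits(msg: bytes):
    d = H(msg)
    return [(d[i // 8] >> (7 - i % 8)) & 1 for i in range(N)]

def sign(msg: bytes):
    # Reveal one preimage per digest bit; the key must never be reused.
    return [sk[b][i] for i, b in enumerate(bits(msg))]

def verify(msg: bytes, sig) -> bool:
    return all(H(s) == pk[b][i] for (i, b), s in zip(enumerate(bits(msg)), sig))

sig = sign(b"hello")
assert verify(b"hello", sig) and not verify(b"world", sig)
```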
Last updated:  2024-11-06
Formulations and Constructions of Remote State Preparation with Verifiability, with Applications
Jiayu Zhang
Remote state preparation with verifiability (RSPV) is an important quantum cryptographic primitive [GV19,Zha22]. In this primitive, a client would like to prepare a quantum state (sampled or chosen from a state family) on the server side, such that ideally the client knows its full description, while the server holds and only holds the state itself. In this work we make several contributions to its formulations, constructions and applications. In more detail:
- We first work on the definitions and abstract properties of the RSPV problem. We select and compare different variants of definitions [GV19,GMP22,Zha22], and study their basic properties (like composability and amplification).
- We also study a closely related question of how to certify the server's operations (instead of solely the states). We introduce a new notion named remote operator application with verifiability (ROAV). We compare this notion with related existing definitions [SW87,MY04,MV21,NZ23], study its abstract properties and leave its concrete constructions for future work.
- Building on the abstract properties and existing results [BGKPV23], we construct a series of new RSPV protocols. Our constructions not only simplify existing results [GV19] but also cover new state families, for example, states of the form $\frac{1}{\sqrt{2}}(|0\rangle|x_0\rangle+|1\rangle|x_1\rangle)$. All these constructions rely only on the existence of weak NTCF [BKVV,AMR22], without additional requirements like the adaptive hardcore bit property [BCMVV,AMR22].
- As a further application, we show that the classical verification of quantum computations (CVQC) problem [ABEM,Mah18] can be constructed from assumptions on group actions [ADMP20]. This is achieved by combining our results on RSPV with a group-action-based instantiation of weak NTCF [AMR22], and then with the quantum-gadget-assisted quantum verification protocol [FKD18].
Last updated:  2023-09-29
To Broadcast or Not to Broadcast: Decision-Making Strategies for Mining Empty Blocks
Chon Kit Lao, Rui Jiang, Luyao Zhang, Fan Zhang, and Ye Wang
Resource efficiency in blockchain systems remains a pivotal concern in their design. While Ethereum often experiences network congestion, leading to rewarding opportunities for miners through transaction inclusion, a significant amount of block space remains underutilized. Remarkably, instances of entirely unutilized blocks contribute to resource wastage within the Ethereum ecosystem. This study delves into the incentives driving miners to produce empty blocks. We ascertain that the immediate rewards of mining empty blocks often lead miners to forgo potential benefits from transaction inclusion. Moreover, our investigation reveals a marked reduction in empty blocks after Ethereum's Merge, highlighting that the Proof-of-Stake (PoS) consensus mechanism enhances block space efficiency in the blockchain sphere.
Last updated:  2024-04-15
SCALLOP-HD: group action from 2-dimensional isogenies
Mingjie Chen, Antonin Leroux, and Lorenz Panny
We present SCALLOP-HD, a novel group action that builds upon the recent SCALLOP group action introduced by De Feo, Fouotsa, Kutas, Leroux, Merz, Panny and Wesolowski in 2023. While our group action uses the same action of the class group $\textnormal{Cl}(\mathfrak{O})$ on $\mathfrak{O}$-oriented curves, where $\mathfrak{O} = \mathbb{Z}[f\sqrt{-1}]$ for a large prime $f$, as SCALLOP, we introduce a different orientation representation: the new representation embeds an endomorphism generating $\mathfrak{O}$ in a $2^e$-isogeny between abelian varieties of dimension $2$ via Kani's Lemma, and this representation comes with a simple algorithm to compute the class group action. Our new approach considerably simplifies the SCALLOP framework, potentially surpassing it in efficiency, a claim supported by preliminary implementation results in SageMath. Additionally, our approach streamlines parameter selection. The new representation allows us to efficiently select a class group $\textnormal{Cl}(\mathfrak{O})$ of smooth order, enabling polynomial-time generation of the lattice of relations and hence enhancing scalability in contrast to SCALLOP. To instantiate our SCALLOP-HD group action, we introduce a new technique to apply Kani's Lemma in dimension 2 with an isogeny diamond obtained from commuting endomorphisms. This method allows one to represent arbitrary endomorphisms with isogenies in dimension 2, and may be of independent interest.
Last updated:  2023-09-29
A Novel Mathematical Formal Proof in Unreliability Protocol with XOR in Two's Complement System
Chenglian Liu and Sonia Chien-I Chen
Exclusive OR (XOR), a common Boolean logical operation, is an operation on two operands whose result is true if and only if one operand is true and the other is false. A simple way to state this is ``one or the other, but not both''. Using this logical operation, a text string can be encrypted by applying the XOR operator to every character using a ``key''. To decrypt the output, simply reapply the key, and the resulting output will be the original message.
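The scheme the note describes fits in a few lines; applying the same key twice returns the original message because XOR is an involution:

```python
def xor_bytes(data: bytes, key: bytes) -> bytes:
    # XOR every byte with the repeating key; the same call also decrypts.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

msg = b"attack at dawn"
key = b"secret"
ct = xor_bytes(msg, key)
assert xor_bytes(ct, key) == msg   # reapplying the key restores the message
```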
Last updated:  2024-08-22
RC4OK. An improvement of the RC4 stream cipher
Khovayko O. and Schelkunov D.
In this paper we present an improved version of the classical RC4 stream cipher. The improvements make it possible to build a lightweight, high-performance, cryptographically strong random number generator suitable for use in IoT devices and as a corresponding component of operating systems. The criterion for high performance is twofold: a high speed of generating the stream of random numbers, and low overhead for adding entropy from physical events to the state of the generator.
Last updated:  2023-09-28
How to Physically Hold Your Bitcoins?
Houda Ferradi, Antoine Houssais, and David Naccache
The rise of virtual currencies has revolutionized the way we conduct financial transactions. These digital assets, governed by intricate online protocols, have rapidly gained prominence as a viable medium of exchange, offering convenience and security. However, as we delve deeper into the digital realm, a challenge persists: How can we bridge the gap between the virtual and the physical? This paper tackles this challenge by proposing a way to materialize virtual coins and make them physically exchangeable offline at the cost of some plausible trust assumptions.
Last updated:  2023-09-28
Blind signatures from Zero knowledge in the Kummer variety
Paulo L. Barreto, Devin D. Reich, Marcos A. Simplicio Jr., and Gustavo H. M. Zanon
We show how to apply the BZ methodology (Blind signatures from Zero knowledge) to obtain blind signatures in the Kummer varieties defined by Montgomery curves. We also describe specially-tailored arithmetic algorithms to facilitate their efficient implementation. The result can be proved secure under appropriate assumptions, appears to resist even the ROS attack (to which most elliptic-curve blind signature schemes succumb), and is arguably one of the most efficient among those proposals that offer similar security guarantees.
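Arithmetic on the Kummer variety of a Montgomery curve works with x-coordinates only. As a flavour of the kind of routine involved (a generic textbook projective doubling, shown on curve25519's parameters, not the paper's specially-tailored algorithms):

```python
P = 2**255 - 19                      # field of curve25519
A24 = (486662 + 2) // 4              # (A + 2)/4 for the doubling formula

def xdbl(X, Z):
    # Projective x-only doubling on the Montgomery curve (Kummer line):
    # X' = (X+Z)^2 (X-Z)^2,  Z' = 4XZ ((X-Z)^2 + A24 * 4XZ).
    aa = (X + Z) * (X + Z) % P
    bb = (X - Z) * (X - Z) % P
    c = (aa - bb) % P                # equals 4*X*Z
    return aa * bb % P, c * (bb + A24 * c) % P

X2, Z2 = xdbl(9, 1)                  # double the point with x = 9 (the base point)
print(X2 * pow(Z2, P - 2, P) % P)    # affine x-coordinate of [2]P
```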
Last updated:  2023-09-28
Lower Bounds on Anonymous Whistleblowing
Willy Quach, LaKyah Tyner, and Daniel Wichs
Anonymous transfer, recently introduced by Agrikola, Couteau and Maier [ACM22] (TCC '22), allows a sender to leak a message anonymously by participating in a public non-anonymous discussion where everyone knows who said what. This opens up the intriguing possibility of using cryptography to ensure strong anonymity guarantees in a seemingly non-anonymous environment. The work of [ACM22] presented a lower bound on anonymous transfer, ruling out constructions with strong anonymity guarantees (where the adversary's advantage in identifying the sender is negligible) against arbitrary polynomial-time adversaries. They also provided a (heuristic) upper bound, giving a scheme with weak anonymity guarantees (the adversary's advantage in identifying the sender is inversely proportional to the number of rounds) against fine-grained adversaries whose run-time is bounded by some fixed polynomial that exceeds the run-time of the honest users. This leaves a large gap between the lower bound and the upper bound, raising the intriguing possibility that one may be able to achieve weak anonymity against arbitrary polynomial-time adversaries, or strong anonymity against fine-grained adversaries. In this work, we present improved lower bounds on anonymous transfer that rule out both of the above possibilities:
- We rule out the existence of anonymous transfer with any non-trivial anonymity guarantees against general polynomial-time adversaries.
- Even if we restrict ourselves to fine-grained adversaries whose run-time is essentially equivalent to that of the honest parties, we cannot achieve strong anonymity, or even quantitatively improve over the inverse-polynomial anonymity guarantees (heuristically) achieved by [ACM22].
Consequently, constructions of anonymous transfer can only provide security against fine-grained adversaries, and even in that case they achieve at most weak quantitative forms of anonymity.
Last updated:  2024-02-26
Twinkle: Threshold Signatures from DDH with Full Adaptive Security
Renas Bacho, Julian Loss, Stefano Tessaro, Benedikt Wagner, and Chenzhi Zhu
Sparkle is the first threshold signature scheme in the pairing-free discrete logarithm setting (Crites, Komlo, Maller, Crypto 2023) to be proven secure under adaptive corruptions. However, without using the algebraic group model, Sparkle's proof imposes an undesirable restriction on the adversary. Namely, for a signing threshold $t<n$, the adversary is restricted to corrupt at most $t/2$ parties. In addition, Sparkle's proof relies on a strong one-more assumption. In this work, we propose Twinkle, a new threshold signature scheme in the pairing-free setting which overcomes these limitations. Twinkle is the first pairing-free scheme to have a security proof under up to $t$ adaptive corruptions without relying on the algebraic group model. It is also the first such scheme with a security proof under adaptive corruptions from a well-studied non-interactive assumption, namely, the Decisional Diffie-Hellman (DDH) assumption. We achieve our result in two steps. First, we design a generic scheme based on a linear function that satisfies several abstract properties and prove its adaptive security under a suitable one-more assumption related to this function. In the context of this proof, we also identify a gap in the security proof of Sparkle and develop new techniques to overcome this issue. Second, we give a suitable instantiation of the function for which the corresponding one-more assumption follows from DDH.
Last updated:  2023-09-27
A Total Break of the Scrap Digital Signature Scheme
Daniel Smith-Tone
Recently a completely new post-quantum digital signature scheme was proposed using the so called ``scrap automorphisms''. The structure is inherently multivariate, but differs significantly from most of the multivariate literature in that it relies on sparsity and rings containing zero divisors. In this article, we derive a complete and total break of Scrap, performing a key recovery in not much more time than verifying a signature. We also generalize the result, breaking unrealistic instances of the scheme for which there is no particularly efficient signing algorithm and key sizes are unmanageable.
Last updated:  2023-10-17
The Pre-Shared Key Modes of HPKE
Joël Alwen, Jonas Janneck, Eike Kiltz, and Benjamin Lipp
The Hybrid Public Key Encryption (HPKE) standard was recently published as RFC 9180 by the Crypto Forum Research Group (CFRG) of the Internet Research Task Force (IRTF). The RFC specifies an efficient public key encryption scheme, combining asymmetric and symmetric cryptographic building blocks. Out of HPKE’s four modes, two have already been formally analyzed by Alwen et al. (EUROCRYPT 2021). This work considers the remaining two modes: HPKE_PSK and HPKE_AuthPSK. Both of them are “pre-shared key” modes that assume the sender and receiver hold a symmetric pre-shared key. We capture the schemes with two new primitives which we call pre-shared key public-key encryption (pskPKE) and pre-shared key authenticated public-key encryption (pskAPKE). We provide formal security models for pskPKE and pskAPKE and prove (via general composition theorems) that the two modes HPKE_PSK and HPKE_AuthPSK offer active security (in the sense of insider privacy and outsider authenticity) under the Gap Diffie-Hellman assumption. We furthermore explore possible post-quantum secure instantiations of the HPKE standard and propose new solutions based on lattices and isogenies. Moreover, we show how HPKE’s basic HPKE_PSK and HPKE_AuthPSK modes can be used black-box in a simple way to build actively secure post-quantum/classic-hybrid (authenticated) encryption schemes. Our hybrid constructions provide a cheap and easy path towards a practical post-quantum secure drop-in replacement for the basic HPKE modes HPKE_Base and HPKE_Auth.
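As a rough sketch of the idea behind the pre-shared key modes (illustrative HKDF-style code, not RFC 9180's exact labelled key schedule): the session key is derived from both the KEM/DH shared secret and the PSK, so compromising either input alone does not reveal it.

```python
import hmac, hashlib

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int = 32) -> bytes:
    t, okm, i = b"", b"", 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + bytes([i]), hashlib.sha256).digest()
        okm += t
        i += 1
    return okm[:length]

dh_secret = b"\x11" * 32   # stand-in for the KEM/DH shared secret (hypothetical)
psk       = b"\x22" * 32   # the symmetric pre-shared key
# Mix both secrets: an attacker lacking either input cannot derive the key.
key = hkdf_expand(hkdf_extract(psk, dh_secret), b"sketch: hpke psk mode")
```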
Last updated:  2023-09-27
Rational Broadcast Protocols against Timid Adversaries
Keigo Yamashita and Kenji Yasunaga
We present a constant-round deterministic broadcast protocol against timid adversaries in the synchronous authenticated setting. A timid adversary is a game-theoretically rational adversary who tries to attack the protocol but prefers the actions to be undetected. Our protocol is secure against such an adversary corrupting t out of n parties for any t < n. The round complexity is 5 for timid adversaries and is at most t + 5 for general malicious adversaries. Our results demonstrate that game-theoretic rationality enables us to circumvent the impossibility of constructing constant-round deterministic broadcast protocols for t = ω(1).
Last updated:  2023-11-22
Succinct Proofs and Linear Algebra
Alex Evans and Guillermo Angeris
The intuitions behind succinct proof systems are often difficult to separate from some of the deep cryptographic techniques used in their construction. In this paper, we show that, using some simple abstractions, a number of commonly used tools in the construction of succinct proof systems may be viewed as basic consequences of linear algebra over finite fields. We introduce notation which considerably simplifies these constructions and slowly build a toolkit of useful techniques that can be combined to create different protocols. We also present a simple 'probabilistic calculus' which specifies how to combine these tools and gives bounds on the resulting security. To show the power of these abstractions and the toolkit, we give a short proof of the security of the FRI protocol. Along the way, we discuss some natural generalizations of protocols in the literature and propose a conjecture related to proximity testing using linear error-correcting codes that is of independent interest.
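One elementary tool of this kind fits in a few lines: over a large finite field, a random evaluation point distinguishes distinct low-degree polynomials with overwhelming probability (Schwartz-Zippel), which is what lets a verifier spot-check polynomial identities. A sketch, with an assumed Mersenne-prime field:

```python
import secrets

P = 2**61 - 1                         # a Mersenne prime, an assumed field choice

def evaluate(coeffs, x):
    # coeffs[i] is the coefficient of x^i; evaluate via Horner's rule.
    acc = 0
    for c in reversed(coeffs):
        acc = (acc * x + c) % P
    return acc

f = [3, 0, 5, 1]                      # 3 + 5x^2 + x^3
g = [3, 0, 5, 1, 0]                   # the same polynomial, padded with a zero
r = secrets.randbelow(P)
assert evaluate(f, r) == evaluate(g, r)   # identical polynomials always agree

h = [4, 0, 5, 1]                      # f - h is the nonzero constant -1,
assert evaluate(f, r) != evaluate(h, r)   # so these disagree at every point
```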
Last updated:  2023-11-13
G+G: A Fiat-Shamir Lattice Signature Based on Convolved Gaussians
Julien Devevey, Alain Passelègue, and Damien Stehlé
We describe an adaptation of Schnorr's signature to the lattice setting which relies on Gaussian convolution, rather than the flooding or rejection sampling of previous approaches. It does not involve any abort, can be proved secure in the ROM and QROM using existing analyses of the Fiat-Shamir transform, and enjoys smaller signature sizes (both asymptotically and for concrete security levels).
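The elementary fact behind the title, shown here in its continuous form: the sum of independent Gaussians is Gaussian with the variances adding, which is what lets the signature distribution be shaped by convolution instead of aborts.

```python
import random, statistics

# Convolve two centered Gaussians by adding independent samples.
s1, s2 = 3.0, 4.0
samples = [random.gauss(0, s1) + random.gauss(0, s2) for _ in range(200_000)]

# The empirical standard deviation approaches sqrt(s1^2 + s2^2) = 5.0.
print(statistics.stdev(samples))
```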
Last updated:  2023-09-26
Auditable Obfuscation
Shalini Banerjee and Steven D. Galbraith
We introduce a new variant of malicious obfuscation. Our formalism is incomparable to the existing definitions by Canetti and Varia (TCC 2010), Canetti et al. (EUROCRYPT 2022) and Badrinarayanan et al. (ASIACRYPT 2016). We show that this concept is natural and applicable to obfuscation-as-a-service platforms. We next define a new notion called auditable obfuscation which provides security against malicious obfuscation. Finally, we construct a proof of concept of the developed notions based on well-studied theoretical obfuscation proposals.
Last updated:  2024-01-17
Tropical cryptography III: digital signatures
Jiale Chen, Dima Grigoriev, and Vladimir Shpilrain
We use tropical algebras as platforms for a very efficient digital signature protocol. Security relies on computational hardness of factoring one-variable tropical polynomials; this problem is known to be NP-hard. We also offer countermeasures against recent attacks by Panny and by Brown and Monico.
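For readers new to the platform: in the min-plus convention (conventions vary; some works use max-plus), tropical addition is min and tropical multiplication is ordinary addition, so a one-variable tropical polynomial is a minimum of affine functions. A tiny evaluation sketch; the scheme's hardness rests on factoring such polynomials, not on evaluating them:

```python
def tropical_eval(coeffs, x):
    # p(x) = min_i (coeffs[i] + i*x): the tropical sum (min) of the tropical
    # monomials a_i * x^i, which become a_i + i*x in the min-plus semiring.
    return min(a + i * x for i, a in enumerate(coeffs))

p = [2, 5, 1]                                  # "2 + 5x + 1x^2" tropically
print([tropical_eval(p, x) for x in range(-3, 4)])
# The graph is piecewise-linear and concave; recovering a factorization of p
# is the NP-hard problem the abstract refers to.
```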
Last updated:  2024-02-28
Efficacy and Mitigation of the Cryptanalysis on AIM
Seongkwang Kim, Jincheol Ha, Mincheol Son, and Byeonghak Lee
Recent advancements in post-quantum cryptography have highlighted signature schemes based on the MPC-in-the-Head (MPCitH) framework, since their security relies only on the one-wayness of the underlying primitive, opening up a diverse set of assumptions about the hardness of post-quantum cryptographic problems. In this context, Kim et al. proposed $\mathsf{AIM}$, an MPCitH-compatible one-way function, presented at CCS 2023. This function is distinguished by its large algebraic S-boxes and parallel architecture, which contribute to the reduced size of signatures. However, $\mathsf{AIM}$ has faced several cryptanalytic challenges, which have potentially weakened its security by up to 15 bits. This paper provides a comprehensive overview of these cryptanalytic methods and proposes $\mathsf{AIM2}$, an enhanced version that addresses the identified vulnerabilities. We conduct an extensive analysis of its resilience to algebraic attacks and detail the changes in its efficiency.
Last updated:  2024-03-14
Cicada: A framework for private non-interactive on-chain auctions and voting
Noemi Glaeser, István András Seres, Michael Zhu, and Joseph Bonneau
Auction and voting schemes play a crucial role in the Web3 ecosystem. Yet currently deployed implementations either lack privacy or require at least two rounds, hindering usability and security. We introduce Cicada, a general framework for using linearly homomorphic time-lock puzzles (HTLPs) to enable provably secure, non-interactive private auction and voting protocols. We instantiate our framework with an efficient new HTLP construction and novel packing techniques that enable succinct ballot correctness proofs independent of the number of candidates. We demonstrate the practicality of our approach by implementing our protocols for the Ethereum Virtual Machine (EVM).
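A sketch of the sequential-squaring time-lock idea underlying HTLPs (toy parameters; Cicada's puzzles are additionally linearly homomorphic, which this sketch omits): the puzzle creator shortcuts the delay using the factorization of N, while a solver must perform T squarings one after another.

```python
# Toy RSW-style time-lock puzzle; real parameters use a ~2048-bit modulus.
p_, q_ = 999983, 1000003              # two primes (the factorization is the trapdoor)
N, phi = p_ * q_, (p_ - 1) * (q_ - 1)
T = 10_000                            # delay parameter: number of squarings
x = 2

# Creator's shortcut: compute x^(2^T) mod N by reducing the exponent mod phi(N).
fast = pow(x, pow(2, T, phi), N)

# Solver's inherently sequential path: T modular squarings.
y = x
for _ in range(T):
    y = y * y % N
assert y == fast

ballot = 42
puzzle = (ballot + fast) % N          # blinded value; opens after the T squarings
```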