All papers in 2023 (Page 7 of 1971 results)

Last updated:  2023-09-15
Cryptographic Key Exchange: An Innovation Outlook
Gideon Samid
This article evaluates the innovation landscape facing the challenge of generating fresh shared randomness for cryptographic key exchange and various cyber-security protocols. It discusses the main innovation thrust today, focused on quantum entanglement and on efficient engineering solutions to BB84 and its related alternatives. This innovation outlook highlights non-quantum solutions and describes NEPSAR, a mechanical-complexity-based solution that is applicable to any number of key-sharing parties. Short-lived secret keys are also mentioned, as well as emerging innovation routes based on Richard Feynman’s observation that “there is plenty of room at the bottom”: extracting plenty of digital randomness from tiny amounts of matter, which yields very many measurable attributes (nanotechnology).
Last updated:  2023-10-18
Oracle Recording for Non-Uniform Random Oracles, and its Applications
Minki Hhan and Aaram Yun
In Crypto 2019, Zhandry showed how to define compressed oracles, which record quantum superposition queries to the quantum random oracle. In this paper, we extend Zhandry's compressed oracle technique to non-uniformly distributed functions with independently sampled outputs. We define two quantum oracles $\mathsf{CStO}_D$ and $\mathsf{CPhsO}_D$, which are indistinguishable from the non-uniform quantum random oracle where quantum access is given to a random function $H$ whose images $H(x)$ are sampled from a probability distribution $D$ independently for each $x$. We show that these compressed oracles record the adversarial quantum superposition queries. Also, we re-prove the optimality of Grover search and the collision resistance of non-uniform random functions, using our extended compressed oracle technique.
Last updated:  2023-09-13
Ideal-SVP is Hard for Small-Norm Uniform Prime Ideals
Joël Felderhoff, Alice Pellet-Mary, Damien Stehlé, and Benjamin Wesolowski
The presumed hardness of the Shortest Vector Problem for ideal lattices (Ideal-SVP) has been a fruitful assumption to understand other assumptions on algebraic lattices and as a security foundation of cryptosystems. Gentry [CRYPTO'10] proved that Ideal-SVP enjoys a worst-case to average-case reduction, where the average-case distribution is the uniform distribution over the set of inverses of prime ideals of small algebraic norm (below $d^{O(d)}$ for cyclotomic fields, here $d$ refers to the field degree). De Boer et al. [CRYPTO'20] obtained another random self-reducibility result for an average-case distribution involving integral ideals of norm $2^{O(d^2)}$. In this work, we show that Ideal-SVP for the uniform distribution over inverses of small-norm prime ideals reduces to Ideal-SVP for the uniform distribution over small-norm prime ideals. Combined with Gentry's reduction, this leads to a worst-case to average-case reduction for the uniform distribution over the set of \emph{small-norm prime ideals}. Using the reduction from Pellet-Mary and Stehl\'e [ASIACRYPT'21], this notably leads to the first distribution over NTRU instances with a polynomial modulus whose hardness is supported by a worst-case lattice problem.
Last updated:  2023-09-16
Ramp hyper-invertible matrices and their applications to MPC protocols
Hongqing Liu, Chaoping Xing, Yanjiang Yang, and Chen Yuan
Beerliová-Trubíniová and Hirt introduced the hyper-invertible matrix technique to construct the first perfectly secure MPC protocol tolerating the maximal number of malicious corruptions $\lfloor \frac{n-1}{3} \rfloor$ with linear communication complexity per multiplication gate [5]. This matrix allows an MPC protocol to generate correct shares of uniformly random secrets in the presence of a malicious adversary. Moreover, the amortized communication complexity of generating each sharing is linear. Due to this prominent feature, the hyper-invertible matrix plays an important role in the construction of MPC protocols and zero-knowledge proof protocols where the randomness needs to be jointly generated. However, the downside of this matrix is that the size of its base field is linear in the size of the matrix. This means that if we construct an $n$-party MPC protocol over $\mathbb{F}_q$ via a hyper-invertible matrix, $q$ is at least $2n$. In this paper, we propose the ramp hyper-invertible matrix, which can be seen as a generalization of the hyper-invertible matrix. Our ramp hyper-invertible matrix can be defined over a constant-size field regardless of the size of the matrix. Similar to arithmetic secret sharing schemes, to apply our ramp hyper-invertible matrix to a perfectly secure MPC protocol, the maximum number of corruptions has to be lowered to $\frac{(1-\epsilon)n}{3}$. As a consequence, we present the first perfectly secure MPC protocol in the presence of $\frac{(1-\epsilon)n}{3}$ malicious corruptions with constant communication complexity. Besides presenting this variant of the hyper-invertible matrix, we overcome several obstacles in the construction of this MPC protocol. Our arithmetic secret sharing scheme over a constant-size field is compatible with the player elimination technique, i.e., it supports dynamic changes of the number of parties and of corrupted parties. Moreover, we rewrite the public reconstruction protocol to support sharings over a constant-size field. Putting these together leads to the constant-size-field variant of the celebrated MPC protocol in [5]. We note that although it was widely acknowledged that an MPC protocol with constant communication complexity can be obtained by replacing the Shamir secret sharing scheme with an arithmetic secret sharing scheme, no reference seriously describes such a protocol in detail. Our work fills in the missing details by providing an MPC primitive for any application relying on an MPC protocol with constant communication complexity. As an application of our perfectly secure MPC protocol, which implies perfect robustness in the MPC-in-the-Head framework, we present a constant-rate zero-knowledge proof with $3$ communication rounds. The previous work achieves constant rate with $5$ communication rounds [32] due to the statistical robustness of their MPC protocol. Another application of our ramp hyper-invertible matrix is information-theoretic multi-verifier zero-knowledge for circuit satisfiability [43]. We manage to remove the dependence of the share size on the circuit size and the security parameter.
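For context, the following recalls the standard (non-ramp) hyper-invertibility notion that the ramp variant above relaxes; this recap is ours and follows the definition of Beerliová-Trubíniová and Hirt, not the paper's text.

```latex
% Hyper-invertibility (standard notion, recalled for context): an r x c matrix
% M over a field F is hyper-invertible if every square submatrix is invertible:
\[
  \forall\, R \subseteq \{1,\dots,r\},\ C \subseteq \{1,\dots,c\}
  \ \text{with}\ |R| = |C| \ge 1 :\qquad
  \det\bigl(M_{R,C}\bigr) \neq 0 ,
\]
% where M_{R,C} denotes the submatrix of M restricted to rows R and columns C.
% The known construction interpolates: pick r + c distinct points
% \alpha_1,\dots,\alpha_c, \beta_1,\dots,\beta_r in F and let M map the evaluations
% (f(\alpha_1),\dots,f(\alpha_c)) of a degree-(c-1) polynomial f to
% (f(\beta_1),\dots,f(\beta_r)); this needs |F| >= r + c, hence the large field.
```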
Last updated:  2024-07-24
Towards post-quantum secure PAKE - A tight security proof for OCAKE in the BPR model
Nouri Alnahawi, Kathrin Hövelmanns, Andreas Hülsing, Silvia Ritsch, and Alexander Wiesmaier
We revisit OCAKE (ACNS 23), a generic recipe that constructs password-based authenticated key exchange (PAKE) from key encapsulation mechanisms (KEMs), to allow instantiations with post-quantum KEMs like KYBER. The ACNS23 paper left it as an open problem to argue security against quantum attackers, since its security proof is in the universal composability (UC) framework. This is common for PAKE; however, at the time of this submission’s writing, it was not known how to prove (computational) UC security against quantum adversaries. Doing this becomes even more involved if the proof uses idealizations like random oracles or ideal ciphers. To pave the way towards post-quantum security proofs, we therefore resort to a (still classical) game-based security proof in the BPR model (EUROCRYPT 2000). We consider this a crucial stepping stone towards a fully satisfying post-quantum security proof. We also hope that a game-based proof is easier to (potentially formally) verify. We prove security of (a minor variation of) OCAKE, assuming the underlying KEM satisfies notions of ciphertext indistinguishability, anonymity, and (computational) public-key uniformity. Using multi-user variants of these properties, we achieve tight security bounds. We provide a full detailed proof – something often omitted in publications on game-based security of PAKE. As a side contribution, we demonstrate in detail how to handle password guesses, which is something we were unable to find in the existing literature at the time of writing. Finally, we discuss which current PQC KEMs can be plugged into the proposed protocol and provide a concrete instantiation, accompanied by a proof-of-concept implementation and respective run-time benchmarks.
Last updated:  2024-02-26
Practical Constructions for Single Input Functionality against a Dishonest Majority
Zhelei Zhou, Bingsheng Zhang, Hong-Sheng Zhou, and Kui Ren
Single Input Functionality (SIF) is a special case of MPC, where only one distinguished party, called the dealer, holds the secret input. SIF allows the dealer to complete a computation task and send the other parties their respective outputs without revealing any additional information about its secret input. SIF has many applications, including multiple-verifier zero-knowledge and verifiable relation sharing. Recently, several works have been devoted to round-efficient realizations of SIF, achieving 2-round communication in the honest majority setting (Applebaum et al., Crypto 2022; Baum et al., CCS 2022; Yang and Wang, Asiacrypt 2022). In this work, we focus on concrete efficiency and propose \emph{the first} practical construction for SIF against \emph{a dishonest majority} in the preprocessing model; moreover, the online phase of our protocol is only 2-round and is highly efficient, as it requires no cryptographic operations and achieves information-theoretic security. For SIF among 5 parties, our scheme takes 152.34ms (total) to evaluate an AES-128 circuit, with 7.36ms online time. Compared to the state-of-the-art (honest majority) solution (Baum et al., CCS 2022), our protocol is roughly 2$\times$ faster in the online phase, although more preprocessing time is needed. Compared to the state-of-the-art generic MPC against a dishonest majority (Wang et al., CCS 2017; Cramer et al., Crypto 2018), our protocol outperforms them with respect to both total running time and online running time.
Last updated:  2023-09-25
Compact Frequency Estimators in Adversarial Environments
Sam A. Markelon, Mia Filić, and Thomas Shrimpton
Count-Min Sketch (CMS) and HeavyKeeper (HK) are two realizations of a compact frequency estimator (CFE). These are a class of probabilistic data structures that maintain a compact summary of (typically) high-volume streaming data and provide approximately correct estimates of the number of times any particular element has appeared. CFEs are often the base structure in systems looking for the highest-frequency elements (i.e., top-$K$ elements, heavy hitters, elephant flows). Traditionally, probabilistic guarantees on the accuracy of frequency estimates are proved under the implicit assumption that stream elements do not depend upon the internal randomness of the structure. Said another way, they are proved in the presence of data streams that are created by non-adaptive adversaries. Yet in many practical use-cases, this assumption is not well-matched with reality, especially in applications where malicious actors are incentivized to manipulate the data stream. We show that the CMS and HK structures can be forced to make significant estimation errors, by concrete attacks that exploit adaptivity. We analyze these attacks analytically and experimentally, with tight agreement between the two. Sadly, these negative results seem unavoidable for (at least) sketch-based CFEs with parameters that are reasonable in practice. On the positive side, we give a new CFE (Count-Keeper) that can be seen as a composition of the CMS and HK structures. Count-Keeper estimates are typically more accurate (by at least a factor of two) than CMS for ``honest" streams; our attacks against CMS and HK are less effective (and more resource intensive) when used against Count-Keeper; and Count-Keeper has a native ability to flag estimates that are suspicious, which neither CMS nor HK (nor any other CFE, to our knowledge) admits.
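To make the object under attack concrete, here is a minimal, illustrative Count-Min Sketch in Python; it is our own sketch, not the paper's code or its Count-Keeper construction, and the width/depth parameterization and hash choice are textbook defaults rather than values from the paper.

```python
import hashlib
import math

class CountMinSketch:
    """Minimal Count-Min Sketch: w = ceil(e/eps) columns, d = ceil(ln(1/delta)) rows."""
    def __init__(self, eps=0.01, delta=0.01, seed=0):
        self.w = math.ceil(math.e / eps)
        self.d = math.ceil(math.log(1.0 / delta))
        self.seed = seed
        self.table = [[0] * self.w for _ in range(self.d)]

    def _index(self, row, item):
        h = hashlib.sha256(f"{self.seed}:{row}:{item}".encode()).digest()
        return int.from_bytes(h[:8], "big") % self.w

    def update(self, item, count=1):
        for row in range(self.d):
            self.table[row][self._index(row, item)] += count

    def estimate(self, item):
        # Over-estimates only (for non-negative counts): take the minimum over rows.
        return min(self.table[row][self._index(row, item)] for row in range(self.d))

if __name__ == "__main__":
    cms = CountMinSketch()
    for x in ["a"] * 100 + ["b"] * 5:
        cms.update(x)
    print(cms.estimate("a"), cms.estimate("b"))  # ~100, ~5 (never under the true counts)
```

Intuitively, the adaptive attacks discussed above exploit that query responses leak information about the structure's internal randomness, something a non-adaptive stream, by definition, cannot do.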
Last updated:  2023-09-12
On The Black-Box Complexity of Correlation Intractability
Nico Döttling and Tamer Mour
Correlation intractability is an emerging cryptographic paradigm that enabled several recent breakthroughs in establishing soundness of the Fiat-Shamir transform and, consequently, basing non-interactive zero-knowledge proofs and succinct arguments on standard cryptographic assumptions. In a nutshell, a hash family is said to be \emph{correlation intractable} for a class of relations $\mathcal{R}$ if, for any relation $R\in\mathcal{R}$, it is hard given a random hash function $h\gets H$ to find an input $z$ s.t. $(z,h(z))\in R$, namely a correlation. Despite substantial progress in constructing correlation intractable hash functions, all constructions known to date are based on highly-structured hardness assumptions and, further, are of complexity scaling with the circuit complexity of the target relation class. In this work, we initiate the study of the barriers for building correlation intractability. Our main result is a lower bound on the complexity of any black-box construction of CIH from collision resistant hash (CRH), or one-way permutations (OWP), for any sufficiently expressive relation class. In particular, any such construction for a class of relations with circuit complexity $t$ must make at least $\Omega(t)$ invocations of the underlying building block. We see this as a first step in developing a methodology towards broader lower bounds.
Last updated:  2024-10-23
Convex Consensus with Asynchronous Fallback
Andrei Constantinescu, Diana Ghinea, Roger Wattenhofer, and Floris Westermann
Convex Consensus (CC) allows a set of parties to agree on a value $v$ inside the convex hull of their inputs with respect to a predefined abstract convexity notion, even in the presence of byzantine parties. In this work, we focus on achieving CC in the best-of-both-worlds paradigm, i.e., simultaneously tolerating at most $t_s$ corruptions if communication is synchronous, and at most $t_a \leq t_s$ corruptions if it is asynchronous. Our protocol is randomized, which is a requirement under asynchrony, and we prove that it achieves optimal resilience. In the process, we introduce communication primitives tailored to the network-agnostic model. These are a deterministic primitive allowing parties to obtain intersecting views (Gather), and a randomized primitive leading to identical views (Agreement on a Core-Set). Our primitives provide stronger guarantees than previous counterparts, making them of independent interest.
Last updated:  2023-09-12
Amortized NISC over $\mathbb{Z}_{2^k}$ from RMFE
Fuchun Lin, Chaoping Xing, Yizhou Yao, and Chen Yuan
Reversed multiplication friendly embedding (RMFE) amortization has been playing an active role in the state-of-the-art constructions of MPC protocols over rings (in particular, the ring $\mathbb{Z}_{2^k}$). As far as we know, this powerful technique has NOT been able to find applications in the crown jewel of two-party computation, the non-interactive secure computation (NISC), where the requirement of the protocol being non-interactive constitutes a formidable technical bottleneck. We initiate such a study focusing on statistical NISC protocols in the VOLE-hybrid model. Our study begins with making the {\em decomposable affine randomized encoding (DARE)} based semi-honest NISC protocol compatible with RMFE techniques, which together with known techniques for forcing a malicious sender Sam to honestly follow DARE already yield a secure amortized protocol, assuming both parties follow RMFE encoding. Achieving statistical security in the full malicious setting is much more challenging, as applying known techniques for enforcing compliance with RMFE incurs interaction. To solve this problem, we put forward a new notion dubbed non-malleable RMFE (NM-RMFE), which is a randomized RMFE such that, once one party deviates from the encoding specification, the randomness injected by the other party will randomize the output, preventing information from being leaked. NM-RMFE simultaneously forces both parties to follow RMFE encoding, offering a desired {\em non-interactive} solution to amortizing NISC. We believe that NM-RMFE is on its own an important primitive that has applications in secure computation and beyond, interactive and non-interactive alike. With an asymptotically good instantiation of our NM-RMFE, we obtain the first {\em statistical} reusable NISC protocols in the VOLE-hybrid model with {\em constant communication overhead} for arithmetic branching programs over $\mathbb{Z}_{2^k}$. As side contributions, we consider computational security and present two concretely efficient NISC constructions in the random oracle model from conventional RMFEs.
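As background (our recap in standard field notation, not the paper's, which works over Galois rings for $\mathbb{Z}_{2^k}$ and introduces the non-malleable variant): an RMFE is a pair of linear maps that lets one coordinate-wise multiplication of vectors be emulated by a single multiplication in an extension field.

```latex
% RMFE over fields (standard notion, recalled for context). A (k, m)_q-RMFE is a
% pair of F_q-linear maps
\[
  \phi : \mathbb{F}_q^{k} \to \mathbb{F}_{q^{m}}, \qquad
  \psi : \mathbb{F}_{q^{m}} \to \mathbb{F}_q^{k},
\]
% such that for all x, y in F_q^k,
\[
  \psi\bigl(\phi(x)\cdot\phi(y)\bigr) \;=\; x \ast y ,
\]
% where * denotes the coordinate-wise (Schur) product. The amortization comes from
% packing k coordinates into one extension-field element per multiplication.
```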
Last updated:  2023-09-12
Comments on certain past cryptographic flaws affecting fully encrypted censorship circumvention protocols
David Fifield
This article presents three retrospective case studies of cryptography-related flaws in censorship circumvention protocols: a decryption oracle in Shadowsocks “stream cipher” methods, non-uniform Elligator public key representatives in obfs4, and a replay-based active distinguishing attack exploiting malleability in VMess. These three protocols come from the family of “fully encrypted” circumvention protocols: their traffic in both directions is indistinguishable from a uniformly random stream of bytes (or at least, is supposed to be). Some of the flaws are fixable implementation errors; others are rooted in more fundamental design errors. Their consequences range from enabling passive probabilistic detection to complete loss of confidentiality. All have been fixed, mitigated, or superseded since their discovery. My primary purpose is to provide an introduction of circumvention threat models to specialists in cryptography, and to make the point that while cryptography is a necessary tool in circumvention, it is not the sole or even most important consideration. Secondarily, I want to furnish a few instructive examples of cryptographic design and implementation errors in uncontrived, deployed protocols. While the flaws I discuss affected systems of significant social importance with millions of collective users, they are not well-known outside a small circle of specialists in circumvention.
Last updated:  2023-09-11
Let's Go Eevee! A Friendly and Suitable Family of AEAD Modes for IoT-to-Cloud Secure Computation
Amit Singh Bhati, Erik Pohle, Aysajan Abidin, Elena Andreeva, and Bart Preneel
IoT devices collect privacy-sensitive data, e.g., in smart grids or in medical devices, and send this data to cloud servers for further processing. In order to ensure confidentiality as well as authenticity of the sensor data in the untrusted cloud environment, we consider a transciphering scenario between embedded IoT devices and multiple cloud servers that perform secure multi-party computation (MPC). Concretely, the IoT devices encrypt their data with a lightweight symmetric cipher and send the ciphertext to the cloud servers. To obtain the secret shares of the cleartext message for further processing, the cloud servers engage in an MPC protocol to decrypt the ciphertext in a distributed manner. This way, the plaintext is never exposed to the individual servers. As an important building block in this scenario, we propose a new, provably secure family of lightweight modes for authenticated encryption with associated data (AEAD), called Eevee. The Eevee family has fully parallel decryption, making it suitable for MPC protocols for which the round complexity depends on the complexity of the function they compute. Further, our modes use the lightweight forkcipher primitive that offers fixed-length output expansion and a compact yet parallelizable internal structure. All Eevee members improve substantially over the few available state-of-the-art (SotA) MPC-friendly modes and other standard solutions. We benchmark the Eevee family on a microcontroller and in MPC. Our proposed mode Jolteon (when instantiated with ForkSkinny) provides 1.85x to 3.64x speedup in IoT-encryption time and 3x to 4.5x speedup in both MPC-decryption time and data for very short queries of 8 bytes, and 1.55x to 3.04x and 1.23x to 2.43x speedups, respectively, in MPC-decryption time and data for queries up to 500 bytes, when compared against SotA MPC-friendly modes instantiated with SKINNY. We also provide two advanced modes, Umbreon and Espeon, that show a favorable performance-security trade-off with stronger security guarantees such as nonce-misuse security. Additionally, all Eevee members have full $n$-bit security (where $n$ is the block size of the underlying primitive), use a single primitive and require smaller state and HW area when compared with the SotA modes under their original security settings.
Last updated:  2023-09-11
Payment Splitting in Lightning Network as a Mitigation Against Balance Discovery Attacks
Gijs van Dam
Bitcoin has a low throughput of around 7 transactions per second. The Lightning Network (LN) is a solution meant to improve that throughput while also improving privacy. LN is a Payment Channel Network (PCN) that runs as a peer-to-peer network on top of Bitcoin and improves scalability by keeping most transactions off-chain without sacrificing the trustless character of Bitcoin. Prior work showed that LN is susceptible to the Balance Discovery Attack (BDA), which allows individual channel balances to be revealed, threatening users' privacy. In this work we introduce Payment Splitting and Switching (PSS), a way of splitting up payments in LN at intermediary hops along the payment path. PSS drastically reduces the information an attacker can obtain through a BDA. Using real-world data in an LN simulator, we demonstrate that the information gain for the attacker drops by up to 62% when PSS is deployed. Apart from its potential as a mitigation against BDAs, PSS also shows promise for increased LN throughput and as a mitigation against jamming attacks.
Last updated:  2023-09-14
Automated Meet-in-the-Middle Attack Goes to Feistel
Qingliang Hou, Xiaoyang Dong, Lingyue Qin, Guoyan Zhang, and Xiaoyun Wang
The Feistel network and its generalizations (GFN) are important building blocks for constructing hash functions, e.g., Simpira v2, Areion, and the ISO standard Lesamnta-LW. The Meet-in-the-Middle (MitM) attack is a general paradigm for building preimage and collision attacks on hash functions, and it has been automated in several papers. However, those automatic tools mostly focus on hash functions with a Substitution-Permutation Network (SPN) as the building block; only one, by Schrottenloher and Stevens (CRYPTO 2022), targets the Feistel network. In this paper, we introduce a new automatic model for MitM attacks on Feistel networks by generalizing the traditional direct or indirect partial matching strategies and also Sasaki’s multi-round matching strategy. Besides, we find that equivalent transformations of Feistel and GFN can significantly simplify the MILP model. Based on our automatic model, we improve the preimage attacks on Feistel-SP-MMO, Simpira-2/-4-DM, and Areion-256/-512-DM by 1-2 rounds or significantly reduce their complexities. Furthermore, we fill the gap left by Schrottenloher and Stevens at CRYPTO 2022 on attacks against large-branch (b > 4) Simpira-b and propose the first 11-round attack on Simpira-6. Besides, we significantly improve the collision attack on the ISO standard hash Lesamnta-LW by increasing the number of attacked rounds from the previous 11 to 17.
Last updated:  2023-09-11
The Locality of Memory Checking
Weijie Wang, Yujie Lu, Charalampos Papamanthou, and Fan Zhang
Motivated by the extended deployment of authenticated data structures (e.g., Merkle Patricia Tries) for verifying massive amounts of data in blockchain systems, we begin a systematic study of the I/O efficiency of such systems. We first explore the fundamental limitations of memory checking, a previously-proposed abstraction for verifiable storage, in terms of its locality---a complexity measure that we introduce for the first time and is defined as the number of non-contiguous memory regions a checker must query to verifiably answer a read or a write query. Our central result is an $\Omega(\log n /\log \log n)$ lower bound for the locality of any memory checker. Then we turn our attention to (dense and sparse) Merkle trees, one of the most celebrated memory checkers, and provide stronger lower bounds for their locality. For example, we show that any dense Merkle tree layout will have average locality at least $\frac 13 \log n$. Furthermore, if we allow node duplication, we show that if any write operation has at most polylog complexity, then the read locality cannot be less than $\log n/\log \log n$. Our lower bounds help us construct two new locality-optimized authenticated data structures (DupTree and PrefixTree) which we implement and evaluate on random operations and real workloads, and which are shown to outperform traditional Merkle trees, especially as the number of leaves increases.
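As a small illustration of the locality measure (ours, not the paper's DupTree or PrefixTree layouts), the sketch below lists the array positions a checker touches when verifiably reading one leaf of a dense Merkle tree stored in the usual implicit heap layout; the roughly $\log n$ touched positions are scattered across non-contiguous memory regions, which is exactly what the lower bounds above quantify.

```python
def merkle_read_path_indices(leaf_index: int, num_leaves: int):
    """Array indices (1-based heap layout) touched when verifiably reading one leaf
    of a dense Merkle tree: the leaf plus the sibling on every level up to the root."""
    assert num_leaves and (num_leaves & (num_leaves - 1)) == 0, "power-of-two leaves"
    node = num_leaves + leaf_index          # leaves occupy indices n .. 2n-1
    touched = [node]
    while node > 1:
        touched.append(node ^ 1)            # sibling needed to recompute the parent hash
        node //= 2
    touched.append(1)                       # root
    return sorted(touched)

# Example: with n = 2**20 leaves, ~21 nodes are touched, spread over ~log n distinct
# regions of the array, matching the locality measure discussed above.
print(merkle_read_path_indices(5, 8))   # [1, 2, 7, 12, 13]
```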
Last updated:  2023-09-11
Multimixer-128: Universal Keyed Hashing Based on Integer Multiplication
Koustabh Ghosh, Parisa Amiri Eliasi, and Joan Daemen
In this paper we introduce a new keyed hash function based on 32-bit integer multiplication that we call Multimixer-128. In our approach, we follow the key-then-hash parallel paradigm: we first add a variable-length input message to a secret key and split the result into blocks. A fixed-length public function based on integer multiplication is then applied to each block, and the results are added to form the digest. We prove an upper bound of $2^{-127}$ on the universality of Multimixer-128 by means of the differential probability and image probability of the underlying public function. Many CPUs offer vector instructions for fast 32-bit integer multiplication, and on such platforms Multimixer-128 is very efficient. We compare our implementation of Multimixer-128 with the NH hash function family, which offers similar levels of security, and with the two fastest NIST LWC candidates. To the best of our knowledge, NH is the fastest keyed hash function in software, and Multimixer-128 outperforms NH while providing the same level of security.
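The toy Python sketch below illustrates only the key-then-hash parallel paradigm described above (add the key, split into blocks, apply a public multiplication-based function per block, add the outputs); the block function here is invented for illustration and is emphatically not the Multimixer-128 public function, so it carries none of its universality claims.

```python
MASK32 = 0xFFFFFFFF
MASK64 = 0xFFFFFFFFFFFFFFFF

def toy_public_function(block):
    """Toy public function on a block of four 32-bit words, built from integer
    multiplication only (NOT the actual Multimixer-128 public function)."""
    a, b, c, d = block
    return [(a * c) & MASK64, (b * d) & MASK64, ((a + b) * (c + d)) & MASK64]

def toy_key_then_hash(message_words, key_words):
    """Key-then-hash paradigm: add the key to the message, split into fixed-length
    blocks, apply the public function per block (parallelizable), sum the outputs."""
    assert len(key_words) >= len(message_words)
    keyed = [(m + k) & MASK32 for m, k in zip(message_words, key_words)]
    digest = [0, 0, 0]
    for i in range(0, len(keyed), 4):                # blocks of 4 words
        block = (keyed[i:i + 4] + [0, 0, 0, 0])[:4]  # zero-pad the last block
        out = toy_public_function(block)
        digest = [(x + y) & MASK64 for x, y in zip(digest, out)]
    return digest

print(toy_key_then_hash([1, 2, 3, 4, 5], [7] * 8))
```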
Last updated:  2024-07-10
Small Private Key Attack Against a Family of RSA-like Cryptosystems
George Teseleanu and Paul Cotan
Let $N=pq$ be the product of two balanced prime numbers $p$ and $q$. Elkamchouchi, Elshenawy and Shaban presented in 2002 an interesting RSA-like cryptosystem that uses the key equation $ed - k (p^2-1)(q^2-1) = 1$, instead of the classical RSA key equation $ed - k (p-1)(q-1) = 1$. The authors claimed that their scheme is more secure than RSA. Unfortunately, the common attacks developed against RSA can be adapted for Elkamchouchi \emph{et al.}'s scheme. In this paper, we introduce a family of RSA-like encryption schemes that uses the key equation $ed - k (p^n-1)(q^n-1) = 1$, where $n>0$ is an integer. Then, we show that regardless of the choice of $n$, there exists an attack based on continued fractions that recovers the secret exponent.
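For readers who want to see the classical case concretely, below is a small Python sketch of the continued-fraction (Wiener-style) key recovery for the standard RSA equation $ed - k(p-1)(q-1) = 1$, i.e. roughly the $n=1$ setting of the family above; it is our illustration with toy-sized parameters, not the paper's generalized attack.

```python
from math import isqrt

def continued_fraction(num, den):
    """Continued fraction coefficients of num/den."""
    coeffs = []
    while den:
        q = num // den
        coeffs.append(q)
        num, den = den, num - q * den
    return coeffs

def convergents(coeffs):
    """Yield successive convergents k/d of a continued fraction."""
    k0, k1, d0, d1 = 0, 1, 1, 0
    for a in coeffs:
        k0, k1 = k1, a * k1 + k0
        d0, d1 = d1, a * d1 + d0
        yield k1, d1

def wiener_attack(e, N):
    """Classical Wiener attack: recover a small d from (e, N) via convergents of e/N."""
    for k, d in convergents(continued_fraction(e, N)):
        if k == 0 or (e * d - 1) % k:
            continue
        phi = (e * d - 1) // k
        s = N - phi + 1                      # equals p + q if the guess is right
        disc = s * s - 4 * N                 # equals (p - q)^2 if the guess is right
        if disc >= 0 and isqrt(disc) ** 2 == disc:
            return d
    return None

# Toy demo: d = 31 is below the Wiener bound N^(1/4)/3 for these parameters.
p, q, d = 10007, 10009, 31
phi = (p - 1) * (q - 1)
e = pow(d, -1, phi)
print(wiener_attack(e, p * q))  # expected: 31
```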
Last updated:  2023-09-11
Security Proofs for Key-Alternating Ciphers with Non-Independent Round Permutations
Liqing Yu, Yusai Wu, Yu Yu, Zhenfu Cao, and Xiaolei Dong
This work studies the key-alternating ciphers (KACs) whose round permutations are not necessarily independent. We revisit existing security proofs for key-alternating ciphers with a single permutation (KACSPs), and extend their method to an arbitrary number of rounds. In particular, we propose new techniques that can significantly simplify the proofs, and also remove two unnatural restrictions in the known security bound of 3-round KACSP (Wu et al., Asiacrypt 2020). With these techniques, we prove the first tight security bound for t-round KACSP, which was an open problem. We stress that our techniques apply to all variants of KACs with non-independent round permutations, as well as to the standard KACs.
Last updated:  2023-09-11
Privacy Preserving Feature Selection for Sparse Linear Regression
Adi Akavia, Ben Galili, Hayim Shaul, Mor Weiss, and Zohar Yakhini
Privacy-Preserving Machine Learning (PPML) provides protocols for learning and statistical analysis of data that may be distributed amongst multiple data owners (e.g., hospitals that own proprietary healthcare data), while preserving data privacy. The PPML literature includes protocols for various learning methods, including ridge regression. Ridge regression controls the $L_2$ norm of the model, but does not aim to strictly reduce the number of non-zero coefficients, namely the $L_0$ norm of the model. Reducing the number of non-zero coefficients (a form of feature selection) is important for avoiding overfitting, and for reducing the cost of using learnt models in practice. In this work, we develop a first privacy-preserving protocol for sparse linear regression under $L_0$ constraints. The protocol addresses data contributed by several data owners (e.g., hospitals). Our protocol outsources the bulk of the computation to two non-colluding servers, using homomorphic encryption as a central tool. We provide a rigorous security proof for our protocol, where security is against semi-honest adversaries controlling any number of data owners and at most one server. We implemented our protocol, and evaluated performance with nearly a million samples and up to 40 features.
Last updated:  2023-09-11
Automatic Search Model for Related-Tweakey Impossible Differential Cryptanalysis
Huiqin Chen, Yongqiang Li, Xichao Hu, Zhengbin Liu, Lin Jiao, and Mingsheng Wang
The design and analysis of dedicated tweakable block ciphers constitute a dynamic and relatively recent research field in symmetric cryptanalysis. The assessment of security in the related-tweakey model is of utmost importance owing to the existence of a public tweak. This paper proposes an automatic search model for identifying related-tweakey impossible differentials based on the propagation of states under specific constraints, which is inspired by the research of Hu et al. in ASIACRYPT 2020. Our model is universally applicable to block ciphers, but its search efficiency may be limited in some cases. To address this issue, we introduce the Locality Constraint Analysis (LCA) technique to impossible differential cryptanalysis and propose a generalized automatic search model. Technically, we transform our models into Satisfiability Modulo Theories (SMT) problems and solve them using the STP solver. We have applied our tools to several tweakable block ciphers, such as Joltik-BC, SKINNY, QARMA, and CRAFT, to evaluate their effectiveness and practicality. Specifically, we have discovered 7-round related-tweakey impossible differentials for Joltik-BC-192, and 12-round related-tweak impossible differentials, as well as 15-round related-tweakey impossible differentials for CRAFT for the first time. Based on the search results, we demonstrate that the LCA technique can be effectively performed when searching and determining the contradictory positions for the distinguisher with long trails or ciphers with large sizes in impossible differential cryptanalysis.
Last updated:  2023-09-11
ACE-HoT: Accelerating an extreme amount of symmetric Cipher Evaluations for High-Order avalanche Tests
Emanuele Bellini, Juan Grados, Mohamed Rachidi, Nitin Satpute, Joan Daemen, and Solane Elhirch
In this work, we tackle the problem of estimating the security of iterated symmetric ciphers in an efficient manner, with tests that do not require a deep analysis of the internal structure of the cipher. This is particularly useful during the design phase of these ciphers, especially for quickly testing several combinations of possible parameters defining several cipher design variants. We consider a popular statistical test that allows us to determine the probability of flipping each cipher output bit, given a small variation in the input of the cipher. From these probabilities, one can compute three measurable metrics related to the well-known full diffusion, avalanche and strict avalanche criteria. This highly parallelizable testing process scales linearly with the number of samples, i.e., cipher inputs, to be evaluated and the number of design variants to be tested. However, the number of design variants might grow exponentially with respect to some parameters. The high cost of CPUs makes them a bad candidate for this kind of parallelization. As a main contribution, we propose a framework, ACE-HoT, to parallelize the testing process using multiple GPUs. Our implementation does not perform any intermediate CPU-GPU data transfers. The diffusion and avalanche criteria can be seen as an application of discrete first-order derivatives. As a secondary contribution, we generalize these criteria to their high-order version. Our generalization requires an exponentially larger number of samples, in order to compute sufficiently accurate probabilities. As a case study, we apply ACE-HoT on most of the finalists of the NIST lightweight standardization process, with a special focus on the winner ASCON.
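Below is a minimal single-threaded sketch of the statistical test described above (estimating bit-flip probabilities under single-bit input differences); it is our illustration only, with a made-up toy "cipher", and does not reflect the ACE-HoT multi-GPU implementation or its high-order generalization.

```python
import secrets

def avalanche_probabilities(cipher, block_bits, samples=1000):
    """Estimate, for each (input bit i, output bit j), the probability that flipping
    input bit i flips output bit j. Full diffusion/avalanche asks these to be ~1/2."""
    counts = [[0] * block_bits for _ in range(block_bits)]
    for _ in range(samples):
        x = secrets.randbits(block_bits)
        y = cipher(x)
        for i in range(block_bits):
            diff = y ^ cipher(x ^ (1 << i))
            for j in range(block_bits):
                counts[i][j] += (diff >> j) & 1
    return [[c / samples for c in row] for row in counts]

# Made-up 16-bit mixing function, just to exercise the test (not a real cipher).
def toy_cipher(x, rounds=4):
    for r in range(rounds):
        x = (x * 0x9E37 + r) & 0xFFFF
        x ^= x >> 7
        x = ((x << 5) | (x >> 11)) & 0xFFFF
    return x

probs = avalanche_probabilities(toy_cipher, 16, samples=500)
print(round(sum(map(sum, probs)) / (16 * 16), 3))  # average flip probability, ideally ~0.5
```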
Last updated:  2023-09-11
Bicameral and Auditably Private Signatures
Khoa Nguyen, Partha Sarathi Roy, Willy Susilo, and Yanhong Xu
This paper introduces Bicameral and Auditably Private Signatures (BAPS) -- a new privacy-preserving signature system with several novel features. In a BAPS system, given a certified attribute $\mathbf{x}$ and a certified policy $P$, a signer can issue a publicly verifiable signature $\Sigma$ on a message $m$ as long as $(m, \mathbf{x})$ satisfies $P$. A noteworthy characteristic of BAPS is that both attribute $\mathbf{x}$ and policy $P$ are kept hidden from the verifier, yet the latter is convinced that these objects were certified by an attribute-issuing authority and a policy-issuing authority, respectively. By considering bicameral certification authorities and requiring privacy for both attributes and policies, BAPS generalizes the spirit of existing advanced signature primitives with fine-grained controls on signing capabilities (e.g., attribute-based signatures, predicate signatures, policy-based signatures). Furthermore, BAPS provides an appealing feature named auditable privacy, allowing the signer of $\Sigma$ to verifiably disclose various pieces of partial information about $P$ and $\mathbf{x}$ when asked by auditor(s)/court(s) at later times. Auditable privacy is intrinsically different from and can be complementary to the notion of accountable privacy traditionally incorporated in traceable anonymous systems such as group signatures. Equipped with these distinguished features, BAPS can potentially address interesting application scenarios for which existing primitives do not offer a direct solution. We provide rigorous security definitions for BAPS, following a ``sim-ext'' approach. We then demonstrate a generic construction based on commonly used cryptographic building blocks, which employs a sign-then-commit-then-prove design. Finally, we present a concrete instantiation of BAPS, that is proven secure in the random oracle model under lattice assumptions. The scheme can handle arbitrary policies represented by polynomial-size Boolean circuits and can address quadratic disclosing functions. In the construction process, we develop a new technical building block that could be of independent interest: a zero-knowledge argument system allowing to prove the satisfiability of a certified-and-hidden Boolean circuit on certified-and-committed inputs.
Last updated:  2023-09-10
On the Security of KZG Commitment for VSS
Atsuki Momose, Sourav Das, and Ling Ren
The constant-sized polynomial commitment scheme by Kate, Zaverucha, and Goldberg (Asiacrypt 2010), also known as the KZG commitment, is an essential component in designing bandwidth-efficient verifiable secret-sharing (VSS) protocols. We point out, however, that the KZG commitment is missing two important properties that are crucial for VSS protocols. First, the KZG commitment has not been proven to be degree binding in the standard adversary model without idealized group assumptions. In other words, the committed polynomial is not guaranteed to have the claimed degree, which is supposed to be the reconstruction threshold of VSS. Without this property, shareholders in VSS may end up reconstructing different secrets depending on which shares are used. Second, the KZG commitment does not support polynomials with different degrees at once with a single setup. If the reconstruction threshold of the underlying VSS protocol changes, the protocol must redo the setup, which involves an expensive multi-party computation known as the powers of tau setup. In this work, we augment the KZG commitment to address both of these limitations. Our scheme is degree-binding in the standard model under the strong Diffie-Hellman (SDH) assumption. It supports any degree $0 < d \le m$ under a powers-of-tau common reference string with $m+1$ group elements generated by a one-time setup.
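For orientation, here is our recap of the unmodified KZG scheme over a pairing group; the scheme in this paper augments it with degree binding and variable-degree support, which the equations below do not capture.

```latex
% Unmodified KZG (recalled for context).
% Setup: powers-of-tau string (g, g^\tau, g^{\tau^2}, \dots, g^{\tau^m}).
\begin{align*}
  C   &= g^{\phi(\tau)} && \text{commitment to } \phi,\ \deg\phi \le m \\
  w_a &= g^{q(\tau)}, \quad q(X) = \frac{\phi(X)-\phi(a)}{X-a} && \text{opening proof at } a \\
      & e\!\left(C \cdot g^{-\phi(a)},\, g\right) \stackrel{?}{=} e\!\left(w_a,\; g^{\tau} \cdot g^{-a}\right) && \text{verifier's pairing check}
\end{align*}
```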
Last updated:  2023-09-10
Communication Lower Bounds of Key-Agreement Protocols via Density Increment Arguments
Mi-Ying (Miryam) Huang, Xinyu Mao, Guangxu Yang, and Jiapeng Zhang
Constructing key-agreement protocols in the random oracle model (ROM) is a viable method to assess the feasibility of developing public-key cryptography within Minicrypt. Unfortunately, as shown by Impagliazzo and Rudich (STOC 1989) and Barak and Mahmoody (Crypto 2009), such protocols can only guarantee limited security: any $\ell$-query protocol can be attacked by an $O(\ell^2)$-query adversary. This quadratic gap matches the key-agreement protocol proposed by Merkle (CACM 78), known as Merkle's Puzzles. Besides query complexity, the communication complexity of key-agreement protocols in the ROM is also an interesting question in the realm of fine-grained cryptography, even though only limited security is achievable. Haitner et al. (ITCS 2019) first observed that in Merkle's Puzzles, to obtain secrecy against an eavesdropper with $O(\ell^2)$ queries, the honest parties must exchange $\Omega(\ell)$ bits. Therefore, they conjectured that high communication complexity is unavoidable, i.e., any $\ell$-query protocol with $c$ bits of communication could be attacked by an $O(c\cdot \ell)$-query adversary. This, if true, will suggest that Merkle's Puzzles is also optimal regarding communication complexity. Building upon techniques from communication complexity, Haitner et al. (ITCS 2019) confirmed this conjecture for two types of key agreement protocols with certain natural properties. This work affirms the above conjecture for all non-adaptive protocols with perfect completeness. Our proof uses a novel idea called density increment argument. This method could be of independent interest as it differs from previous communication lower bounds techniques (and bypasses some technical barriers).
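For intuition on the protocol whose communication is being bounded, here is a toy Python sketch of Merkle's Puzzles (our illustration, with parameters far below anything meaningful): Alice publishes $\ell$ weak puzzles (hence $\Omega(\ell)$ bits of communication), Bob opens one with about $\ell$ work, and an eavesdropper who does not know which puzzle Bob chose expects about $\ell^2$ work.

```python
import hashlib
import secrets

def H(*parts):
    data = b"|".join(p.encode() if isinstance(p, str) else p for p in parts)
    return hashlib.sha256(data).hexdigest()

def alice_make_puzzles(ell, puzzle_bits):
    """Each puzzle hides (ident, key) behind a weak `puzzle_bits`-bit secret, so
    opening one puzzle costs ~2^puzzle_bits brute-force work (toy construction)."""
    table, puzzles = {}, []
    for _ in range(ell):
        weak = secrets.randbits(puzzle_bits)
        ident = secrets.token_hex(4)             # 8 hex chars, later sent in the clear
        key = secrets.token_hex(12)              # 24 hex chars, the shared secret
        table[ident] = key
        pad = bytes.fromhex(H(str(weak), "pad"))                 # 32-byte toy pad
        ct = bytes(a ^ b for a, b in zip(pad, (ident + key).encode()))
        puzzles.append((H(str(weak), "check"), ct))
    return table, puzzles                         # Alice sends all ell puzzles to Bob

def bob_solve_one(puzzles, puzzle_bits):
    check, ct = secrets.choice(puzzles)
    for weak in range(2 ** puzzle_bits):          # ~2^puzzle_bits work for one puzzle
        if H(str(weak), "check") == check:
            pad = bytes.fromhex(H(str(weak), "pad"))
            pt = bytes(a ^ b for a, b in zip(pad, ct)).decode()
            return pt[:8], pt[8:]                 # (ident, key); Bob returns ident only
    raise RuntimeError("puzzle not solved")

table, puzzles = alice_make_puzzles(ell=2 ** 8, puzzle_bits=8)
ident, key_bob = bob_solve_one(puzzles, puzzle_bits=8)
assert table[ident] == key_bob    # Eve, not knowing Bob's choice, expects ~ell^2 work
```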
Last updated:  2023-09-11
Adaptively Secure (Aggregatable) PVSS and Application to Distributed Randomness Beacons
Renas Bacho and Julian Loss
Publicly Verifiable Secret Sharing (PVSS) is a fundamental primitive that allows sharing a secret $S$ among $n$ parties via a publicly verifiable transcript $T$. Existing (efficient) PVSS schemes are only proven secure against static adversaries, who must choose whom to corrupt ahead of a protocol execution. As a result, any protocol (e.g., a distributed randomness beacon) that builds on top of such a PVSS scheme inherits this limitation. To overcome this barrier, we revisit the security of PVSS under adaptive corruptions and show that, surprisingly, many protocols from the literature already achieve it in a meaningful way: - We propose a new security definition for aggregatable PVSS, i.e., schemes that allow homomorphically combining multiple transcripts into one compact aggregate transcript $AT$ that shares the sum of their individual secrets. Our notion captures that if the secret shared by $AT$ contains at least one contribution from an honestly generated transcript, it should not be predictable. We then prove that several existing schemes satisfy this notion against adaptive corruptions in the algebraic group model. - To motivate our new notion, we show that it implies the adaptive security of two recent random beacon protocols, SPURT (S&P '22) and OptRand (NDSS '23), which build on top of aggregatable PVSS schemes satisfying our notion of unpredictability. For a security parameter $\lambda$, our result improves the communication complexity of the best known adaptively secure random beacon protocols to $O(\lambda n^2)$ for synchronous networks with $t<n/2$ corruptions and partially synchronous networks with $t<n/3$ corruptions.
Last updated:  2024-05-07
Decentralised Repeated Modular Squaring Service Revisited: Attack and Mitigation
Aydin Abadi
Repeated modular squaring plays a crucial role in various time-based cryptographic primitives, such as Time-Lock Puzzles and Verifiable Delay Functions. At ACM CCS 2021, Thyagarajan et al. introduced “OpenSquare”, a decentralised protocol that lets a client delegate the computation of repeated modular squaring to third-party servers while ensuring that these servers are compensated only if they deliver valid results. In this work, we unveil a significant vulnerability in OpenSquare, which enables servers to receive payments without fulfilling the delegated task. To tackle this issue, we present a series of mitigation measures.
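As a reminder of the delegated computation itself (this is just the textbook operation, not OpenSquare, the attack, or the mitigations), the sequential work is $T$ modular squarings, which only a party holding the factorization of $N$ can shortcut:

```python
def repeated_squaring(x, T, N):
    """Compute x^(2^T) mod N by T sequential modular squarings (no known shortcut
    without the factorization of N)."""
    y = x % N
    for _ in range(T):
        y = (y * y) % N
    return y

def repeated_squaring_with_trapdoor(x, T, p, q):
    """With the trapdoor (N = p*q for distinct primes p, q and gcd(x, N) = 1), reduce
    the exponent 2^T modulo phi(N) first, turning T squarings into one fast pow."""
    phi = (p - 1) * (q - 1)
    return pow(x, pow(2, T, phi), p * q)

p, q, T = 65537, 10007, 10_000
assert repeated_squaring(5, T, p * q) == repeated_squaring_with_trapdoor(5, T, p, q)
```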
Last updated:  2023-09-09
Street Rep: A Privacy-Preserving Reputation Aggregation System
Christophe Hauser, Shirin Nilizadeh, Yan Shoshitaishvili, Ni Trieu, Srivatsan Ravi, Christopher Kruegel, and Giovanni Vigna
Over the last decade, online reputation has become a central aspect of our digital lives. Most online services and communities assign a reputation score to users, based on feedback from other users about various criteria such as how reliable, helpful, or knowledgeable a person is. While many online services compute reputation based on the same set of such criteria, users currently do not have the ability to use their reputation scores across services. As a result, users face trouble establishing themselves on new services or trusting each other on services that do not support reputation tracking. Existing systems that aggregate reputation scores, unfortunately, provide no guarantee in terms of user privacy, and their use makes user accounts linkable. Such a lack of privacy may result in embarrassment, or worse, place users in danger. In this paper, we present StreetRep, a practical system for aggregating user reputation scores in a privacy-preserving manner. StreetRep makes it possible for users to provide their aggregated scores over multiple services without revealing their respective identities on each service. We discuss our novel approach for tamper-proof, privacy-preserving score aggregation from multiple sources by combining existing techniques such as blind signatures, homomorphic signatures, and private information retrieval. We discuss its practicality and resiliency against different types of attacks. We also built a prototype implementation of StreetRep. Our evaluation demonstrates that StreetRep (a) performs efficiently and (b) practically scales to a large user base.
Last updated:  2023-09-08
Experimenting with Zero-Knowledge Proofs of Training
Sanjam Garg, Aarushi Goel, Somesh Jha, Saeed Mahloujifar, Mohammad Mahmoody, Guru-Vamsi Policharla, and Mingyuan Wang
How can a model owner prove they trained their model according to the correct specification? More importantly, how can they do so while preserving the privacy of the underlying dataset and the final model? We study this problem and formulate the notion of zero-knowledge proof of training (zkPoT), which formalizes rigorous security guarantees that should be achieved by a privacy-preserving proof of training. While it is theoretically possible to design zkPoT for any model using generic zero-knowledge proof systems, this approach results in extremely impractical proof generation times. Towards designing a practical solution, we propose the idea of combining techniques from the MPC-in-the-head and zkSNARKs literature to strike an appropriate trade-off between proof size and proof computation time. We instantiate this idea and propose a concretely efficient, novel zkPoT protocol for logistic regression. Crucially, our protocol is streaming-friendly and does not require RAM proportional to the size of the circuit being trained and, hence, can be adapted to the requirements of available hardware. We expect the techniques developed in this paper to also generally be useful for designing efficient zkPoT protocols for other relatively more sophisticated ML models. We implemented and benchmarked prover/verifier runtimes and proof sizes for training a logistic regression model using mini-batch gradient descent on a 4~GB dataset of 262,144 records with 1024 features. We divide our protocol into three phases: (1) a data-independent offline phase, (2) a data-dependent phase that is independent of the model, and (3) an online phase that depends both on the data and the model. The total proof size (across all three phases) is less than $10\%$ of the data set size ($<350$~MB). In the online phase, the prover and verifier times are under 10 minutes and half a minute respectively, whereas in the data-dependent phase, they are close to one hour and a few seconds respectively.
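To fix ideas about the statement being proven, the following plain Python/NumPy sketch is the kind of mini-batch logistic-regression training a zkPoT would attest to; it is our illustration, contains none of the MPC-in-the-head or zkSNARK machinery, and all hyperparameters are made up.

```python
import numpy as np

def train_logistic_regression(X, y, batch_size=64, lr=0.1, epochs=5, seed=0):
    """Plain mini-batch gradient descent for logistic regression: the computation
    whose faithful execution a zkPoT would attest to (no ZK machinery here)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for idx in np.array_split(rng.permutation(n), max(1, n // batch_size)):
            z = X[idx] @ w + b
            p = 1.0 / (1.0 + np.exp(-z))                  # sigmoid
            grad_w = X[idx].T @ (p - y[idx]) / len(idx)
            grad_b = float(np.mean(p - y[idx]))
            w -= lr * grad_w
            b -= lr * grad_b
    return w, b

# Tiny synthetic run (the paper's benchmark uses 262,144 records with 1024 features).
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 16))
y = (X @ rng.normal(size=16) > 0).astype(float)
w, b = train_logistic_regression(X, y)
print(np.mean(((X @ w + b) > 0) == y))                    # training accuracy
```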
Last updated:  2023-11-23
Analyzing the Real-World Security of the Algorand Blockchain
Fabrice Benhamouda, Erica Blum, Jonathan Katz, Derek Leung, Julian Loss, and Tal Rabin
The Algorand consensus protocol is interesting both in theory and in practice. On the theoretical side, to achieve adaptive security, it introduces the novel idea of player replaceability, where each step of the protocol is executed by a different randomly selected committee whose members remain secret until they send their first and only message. The protocol provides consistency under arbitrary network conditions and liveness under intermittent network partitions. On the practical side, the protocol is used to secure the Algorand cryptocurrency, whose total value is approximately $850M at the time of writing. The Algorand protocol in use differs substantially from the protocols described in the published literature on Algorand. Despite its significance, it lacks a formal analysis. In this work, we describe and analyze the Algorand consensus protocol as deployed today in Algorand’s ecosystem. We show that the overall protocol framework is sound by characterizing network conditions and parameter settings under which the protocol can be proven secure.
Last updated:  2023-09-08
Universally Composable Auditable Surveillance
Valerie Fetzer, Michael Klooß, Jörn Müller-Quade, Markus Raiber, and Andy Rupp
User privacy is becoming increasingly important in our digital society. Yet, many applications face legal requirements or regulations that prohibit unconditional anonymity guarantees, e.g., in electronic payments where surveillance is mandated to investigate suspected crimes. As a result, many systems have no effective privacy protections at all, or have backdoors, e.g., stored at the operator side of the system, that can be used by authorities to disclose a user’s private information (e.g., lawful interception). The problem with such backdoors is that they also enable silent mass surveillance within the system. To prevent such misuse, various approaches have been suggested which limit possible abuse or ensure it can be detected. Many works consider auditability of surveillance actions but do not enforce that traces are left when backdoors are retrieved. A notable exception which offers retrospective and silent surveillance is the recent work on misuse-resistant surveillance by Green et al. (EUROCRYPT’21). However, their approach relies on extractable witness encryption, which is a very strong primitive with no known efficient and secure implementations. In this work, we develop a building block for auditable surveillance. In our protocol, backdoors or escrow secrets of users are protected in multiple ways: (1) Backdoors are short-term and user-specific; (2) they are shared between trustworthy parties to avoid a single point of failure; and (3) backdoor access is given conditionally. Moreover (4) there are audit trails and public statistics for every (granted) backdoor request; and (5) surveillance remains silent, i.e., users do not know they are surveilled. Concretely, we present an abstract UC-functionality which can be used to augment applications with auditable surveillance capabilities. Our realization makes use of threshold encryption to protect user secrets, and is concretely built in a blockchain context with committee-based YOSO MPC. As a consequence, the committee can verify that the conditions for backdoor access are given, e.g., that law enforcement is in possession of a valid surveillance warrant (via a zero-knowledge proof). Moreover, access leaves an audit trail on the ledger, which allows an auditor to retrospectively examine surveillance decisions. As a toy example, we present an Auditably Sender-Traceable Encryption scheme, a PKE scheme where the sender can be deanonymized by law enforcement. We observe and solve problems posed by retrospective surveillance via a special non-interactive non-committing encryption scheme which allows zero-knowledge proofs over message, sender identity and (escrow) secrets.
Last updated:  2024-04-18
Modular Sumcheck Proofs with Applications to Machine Learning and Image Processing
David Balbás, Dario Fiore, Maria Isabel González Vasco, Damien Robissout, and Claudio Soriente
Cryptographic proof systems provide integrity, fairness, and privacy in applications that outsource data processing tasks. However, general-purpose proof systems do not scale well to large inputs. At the same time, ad-hoc solutions for concrete applications - e.g., machine learning or image processing - are more efficient but lack modularity, hence they are hard to extend or to compose with other tools of a data-processing pipeline. In this paper, we combine the performance of tailored solutions with the versatility of general-purpose proof systems. We do so by introducing a modular framework for verifiable computation of sequential operations. The main tool of our framework is a new information-theoretic primitive called Verifiable Evaluation Scheme on Fingerprinted Data (VE) that captures the properties of diverse sumcheck-based interactive proofs, including the well-established GKR protocol. Thus, we show how to compose VEs for specific functions to obtain verifiability of a data-processing pipeline. We propose a novel VE for convolution operations that can handle multiple input-output channels and batching, and we use it in our framework to build proofs for (convolutional) neural networks and image processing. We realize a prototype implementation of our proof systems, and show that we achieve up to $5 \times$ faster proving time and $10 \times$ shorter proofs compared to the state-of-the-art, in addition to asymptotic improvements.
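As background on the sumcheck-based proofs that the VE abstraction captures (our recap of the standard sumcheck protocol, not the paper's definitions): to prove a claimed sum $H$ of an $n$-variate polynomial $g$ over the Boolean hypercube, the prover and verifier run $n$ rounds as follows.

```latex
% Standard sumcheck (recalled for context). Claim: H = \sum_{b \in \{0,1\}^n} g(b).
% In round i (challenges r_1, ..., r_{i-1} already fixed) the prover sends
\[
  g_i(X) \;=\; \sum_{b_{i+1},\dots,b_n \in \{0,1\}} g\bigl(r_1,\dots,r_{i-1},\,X,\,b_{i+1},\dots,b_n\bigr),
\]
% the verifier checks g_i(0) + g_i(1) = g_{i-1}(r_{i-1}) (with g_0(r_0) := H) and
% replies with a fresh random r_i; after round n it checks g_n(r_n) = g(r_1,...,r_n)
% with a single evaluation query to g.
```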
Last updated:  2023-09-08
Combined Private Circuits - Combined Security Refurbished
Jakob Feldtkeller, Tim Güneysu, Thorben Moos, Jan Richter-Brockmann, Sayandeep Saha, Pascal Sasdrich, and François-Xavier Standaert
Physical attacks are well-known threats to cryptographic implementations. While countermeasures against passive Side-Channel Analysis (SCA) and active Fault Injection Analysis (FIA) exist individually, protecting against their combination remains a significant challenge. A recent attempt at achieving joint security has been published at CCS 2022 under the name CINI-MINIS. The authors introduce relevant security notions and aim to construct arbitrary-order gadgets that remain trivially composable in the presence of a combined adversary. Yet, we show that all CINI-MINIS gadgets at any order are susceptible to a devastating attack with only a single fault and probe due to a lack of error correction modules in the compression. We explain the details of the attack, pinpoint the underlying problem in the constructions, propose an additional design principle, and provide new (fixed) provably secure and composable gadgets for arbitrary order. Luckily, the changes in the compression stage help us to save correction modules and registers elsewhere, making the resulting Combined Private Circuits (CPC) more secure and more efficient than the original ones. We also explain why the discovered flaws have been missed by the associated formal verification tool VERICA (TCHES 2022) and propose fixes to remove its blind spot. Finally, we explore alternative avenues to repair the compression stage without additional corrections based on non-completeness, i.e., constructing a compression that never recombines any secret. Yet, while this approach could have merit for low-order gadgets, it is, for now, hard to generalize and scales poorly to higher orders. We conclude that our refurbished arbitrary order CINI gadgets provide a solid foundation for further research.
Last updated:  2023-09-12
Methods for Masking CRYSTALS-Kyber Against Side-Channel Attacks
Sıla ÖZEREN and Oğuz YAYLA
In the context of post-quantum secure algorithms like CRYSTALS-Kyber, the importance of protecting sensitive polynomial coefficients from side-channel attacks is increasingly recognized. Our research introduces two alternative masking methods to enhance the security of the compression function in Kyber. Prior to this, the topic had been addressed by only one other research study. The "Double and Check" method integrates arithmetic sharing and symmetry adjustments, introducing a layer of obfuscation by determining coefficient values based on modular overflows. In contrast, the Look-Up-Table (LUT) integration method employs arithmetic-to-Boolean conversions, augmented by a pre-computed table for efficient value verifications. Furthermore, by leveraging the alternative prime 7681, we propose a novel masked compression function. This prime, 7681, is also notable as the smallest prime suitable for fast NTT multiplication. While both algorithms prioritize data protection and streamlined processing, they also underscore the inherent challenges of balancing computational speed with the potential vulnerabilities to side-channel attacks.
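For reference, the unmasked compression that the masking methods above protect is the standard coefficient-wise rounding map below (our sketch, using Kyber's modulus $q = 3329$; the paper additionally considers the alternative prime 7681). The masked variants must compute the same map without ever reconstructing the sensitive coefficient in the clear.

```python
Q = 3329  # Kyber's modulus (the paper also considers the alternative prime 7681)

def compress(x, d, q=Q):
    """Compress_q(x, d) = round((2^d / q) * x) mod 2^d, on an unmasked coefficient."""
    return ((x * (1 << d) + q // 2) // q) % (1 << d)

def decompress(y, d, q=Q):
    """Decompress_q(y, d) = round((q / 2^d) * y)."""
    return (y * q + (1 << (d - 1))) >> d

# Round-trip error (centered mod q) stays within at most ceil(q / 2^(d+1)), e.g. d = 10:
d = 10
worst = max(abs(((decompress(compress(x, d), d) - x + Q // 2) % Q) - Q // 2) for x in range(Q))
print(worst, -(-Q // (1 << (d + 1))))   # observed worst-case error vs. the bound
```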
Last updated:  2023-12-30
FlexiRand: Output Private (Distributed) VRFs and Application to Blockchains
Aniket Kate, Easwar Vivek Mangipudi, Siva Mardana, and Pratyay Mukherjee
Web3 applications based on blockchains regularly need access to randomness that is unbiased, unpredictable, and publicly verifiable. For Web3 gaming applications, this becomes a crucial selling point to attract more users by providing credibility to the "random reward" distribution feature. A verifiable random function (VRF) protocol satisfies these requirements naturally, and there is a tremendous rise in the use of VRF services. As most blockchains cannot maintain the secret keys required for VRFs, Web3 applications interact with external VRF services via a smart contract where a VRF output is exchanged for a fee. While this smart contract-based plain-text exchange offers the much-needed public verifiability immediately, it severely limits the way the requester can employ the VRF service: the requests cannot be made in advance, and the output cannot be reused. This introduces significant latency and monetary overhead. This work overcomes this crucial limitation of the VRF service by introducing a novel privacy primitive Output Private VRF (Pri-VRF) and thereby adds significantly more flexibility to the Web3-based VRF services. We call our framework FlexiRand. While maintaining the pseudo-randomness and public verifiability properties of VRFs, FlexiRand ensures that the requester alone can observe the VRF output. The smart contract and anybody else can only observe a blinded-yet-verifiable version of the output. We formally define Pri-VRF, put forward a practically efficient design, and provide provable security analysis in the universal composability (UC) framework (in the random oracle model) using a variant of one-more Diffie-Hellman assumption over bilinear groups. As the VRF service, with its ownership of the secret key, becomes a single point of failure, it is realized as a distributed VRF with the key secret-shared across distinct nodes in our framework. We develop our distributed Pri-VRF construction by combining approaches from Distributed VRF and Distributed Oblivious PRF literature. We provide provable security analysis (in UC), implement it and compare its performance with existing distributed VRF schemes. Our distributed Pri-VRF only introduces a minimal computation and communication overhead for the VRF service, the requester, and the contract.
Last updated:  2023-09-07
Lanturn: Measuring Economic Security of Smart Contracts Through Adaptive Learning
Kushal Babel, Mojan Javaheripi, Yan Ji, Mahimna Kelkar, Farinaz Koushanfar, and Ari Juels
We introduce Lanturn: a general-purpose adaptive learning-based framework for measuring the cryptoeconomic security of composed decentralized-finance (DeFi) smart contracts. Lanturn discovers strategies consisting of concrete transactions for extracting economic value from smart contracts interacting with a particular transaction environment. We formulate the strategy discovery as a black-box optimization problem and leverage a novel adaptive learning-based algorithm to address it. Lanturn features three key properties. First, it needs no contract-specific heuristics or reasoning, due to our black-box formulation of cryptoeconomic security. Second, it utilizes a simulation framework that operates natively on blockchain state and smart contract machine code, such that transactions returned by Lanturn’s learning-based optimization engine can be executed on-chain without modification. Finally, Lanturn is scalable in that it can explore strategies comprising a large number of transactions that can be reordered or subject to insertion of new transactions. We evaluate Lanturn on the historical data of the biggest and most active DeFi Applications: Sushiswap, UniswapV2, UniswapV3, and AaveV2. Our results show that Lanturn not only rediscovers existing, well-known strategies for extracting value from smart contracts, but also discovers new strategies that were previously undocumented. Lanturn also consistently discovers higher value than evidenced in the wild, surpassing a natural baseline computed using value extracted by bots and other strategic agents.
Last updated:  2023-09-07
SoK: Public Key Encryption with Openings
Carlo Brunetta, Hans Heum, and Martijn Stam
When modelling how public key encryption can enable secure communication, we should acknowledge that secret information, such as private keys or the randomness used for encryption, could become compromised. Intuitively, one would expect unrelated communication to remain secure, yet formalizing this intuition has proven challenging. Several security notions have appeared that aim to capture said scenario, ranging from the multi-user setting with corruptions, via selective opening attacks (SOA), to non-committing encryption (NCE). Remarkably, how the different approaches compare has not yet been systematically explored. We provide a novel framework that maps each approach to an underlying philosophy of confidentiality: indistinguishability versus simulatability based, each with an a priori versus an a posteriori variant, leading to four distinct philosophies. In the absence of corruptions, these notions are largely equivalent; yet, in the presence of corruptions, they fall into a hierarchy of relative strengths, from IND-CPA and IND-CCA at the bottom, via indistinguishability SOA and simulatability SOA, to NCE at the top. We provide a concrete treatment for the four notions, discuss subtleties in their definitions and asymptotic interpretations and identify limitations of each. Furthermore, we re-cast the main implications of the hierarchy in a concrete security framework, summarize and contextualize other known relations, identify open problems, and close a few gaps. We end on a survey of constructions known to achieve the various notions. We identify and name a generic random-oracle construction that has appeared in various guises to prove security in seemingly different contexts. It hails back to Bellare and Rogaway's seminal work on random oracles (CCS'93) and, as previously shown, suffices to meet one of the strongest notions of our hierarchy (single-user NCE with bi-openings).
Last updated:  2023-09-07
Riggs: Decentralized Sealed-Bid Auctions
Nirvan Tyagi, Arasu Arun, Cody Freitag, Riad Wahby, Joseph Bonneau, and David Mazières
We introduce the first practical protocols for fully decentralized sealed-bid auctions using timed commitments. Timed commitments ensure that the auction is finalized fairly even if all participants drop out after posting bids or if $n-1$ bidders collude to try to learn the $n^{th}$ bidder’s bid value. Our protocols rely on a novel non-malleable timed commitment scheme which efficiently supports range proofs to establish that bidders have sufficient funds to cover a hidden bid value. This allows us to penalize users who abandon bids for exactly the bid value, while supporting simultaneous bidding in multiple auctions with a shared collateral pool. Our protocols are concretely efficient and we have implemented them in an Ethereum-compatible smart contract which automatically enforces payment and delivery of an auctioned digital asset.
Last updated:  2023-10-03
Antrag: Annular NTRU Trapdoor Generation
Thomas Espitau, Thi Thu Quyen Nguyen, Chao Sun, Mehdi Tibouchi, and Alexandre Wallet
In this paper, we introduce a novel trapdoor generation technique for Prest's hybrid sampler over NTRU lattices. Prest's sampler is used in particular in the recently proposed Mitaka signature scheme (Eurocrypt 2022), a variant of the Falcon signature scheme, one of the candidates selected by NIST for standardization. Mitaka was introduced to address Falcon's main drawback, namely the fact that the lattice Gaussian sampler used in its signature generation is highly complex, difficult to implement correctly, to parallelize or protect against side-channels, and to instantiate over rings of dimension not a power of two to reach intermediate security levels. Prest's sampler is considerably simpler and solves these various issues, but when applying the same trapdoor generation approach as Falcon, the resulting signatures have far lower security in equal dimension. The Mitaka paper showed how certain randomness-recycling techniques could be used to mitigate this security loss, but the resulting scheme is still substantially less secure than Falcon (by around 20 to 50 bits of CoreSVP security depending on the parameters), and has much slower key generation. Our new trapdoor generation technique solves all of those issues satisfactorily: it gives rise to a much simpler and faster key generation algorithm than Mitaka's (achieving similar speeds to Falcon), and is able to comfortably generate trapdoors reaching the same NIST security levels as Falcon as well. It can also be easily adapted to rings of intermediate dimensions, in order to support the same versatility as Mitaka in terms of parameter selection. All in all, this new technique combines all the advantages of both Falcon and Mitaka (and more) with none of the drawbacks.
Last updated:  2023-09-07
A Generic Construction of Tightly Secure Password-based Authenticated Key Exchange
Jiaxin Pan and Runzhi Zeng
We propose a generic construction of password-based authenticated key exchange (PAKE) from key encapsulation mechanisms (KEM). Assuming that the KEM is one-way secure against plaintext-checkable attacks (OW-PCA), we prove that our PAKE protocol is \textit{tightly secure} in the Bellare-Pointcheval-Rogaway model (EUROCRYPT 2000). Our tight security proofs require ideal ciphers and random oracles. The OW-PCA security is relatively weak and can be implemented tightly with the Diffie-Hellman assumption, which generalizes the work of Liu et al. (PKC 2023), and ``almost'' tightly with lattice-based assumptions, which tightens the security loss of the work of Beguinet et al. (ACNS 2023) and allows more efficient practical implementation with Kyber. Beyond these, it opens up the possibility of constructing tight PAKE based on various assumptions.
Last updated:  2023-09-07
Neutrosophic Boolean Function and Rejection Sampling in Post Quantum Cryptography
Shashi Kant Pandey
The use of random seeds in a deterministic random bit generator to generate uniform random samples has been applied multiple times in post-quantum algorithms. The finalists Dilithium and Kyber use SHAKE and AES to generate the random sequence at multiple stages of the algorithm. Here we characterize one of the sampling techniques available in Dilithium for a random sequence of length 256 with the help of the neutrosophic Boolean function. The idea of the neutrosophic Boolean function came from the theory of neutrosophy and it is useful to study any ternary distributions. We present the non-existence of neutrobalanced bent functions specifically with respect to the sampling named SampleInBall in Dilithium.
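For reference, the SampleInBall procedure referred to above expands a seed into a length-256 sequence with exactly $\tau$ nonzero entries in $\{-1,+1\}$. The Python sketch below follows the structure of the Dilithium specification but simplifies the exact byte layout; the parameters shown are illustrative.

    import hashlib

    def sample_in_ball(seed: bytes, tau: int = 39, n: int = 256):
        # Simplified SampleInBall-style sampler: Fisher-Yates style placement of tau
        # signed entries, driven by SHAKE-256 output (exact byte layout simplified).
        stream = hashlib.shake_256(seed).digest(1024)
        signs = int.from_bytes(stream[:8], "little")      # tau sign bits
        pos = 8
        c = [0] * n
        for i in range(n - tau, n):
            while True:                                   # rejection-sample j uniform in [0, i]
                j = stream[pos]; pos += 1
                if j <= i:
                    break
            c[i] = c[j]
            c[j] = 1 - 2 * (signs & 1)                    # +1 or -1 from the next sign bit
            signs >>= 1
        return c                                          # exactly tau coefficients are nonzero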
Last updated:  2024-07-19
Abuse-Resistant Location Tracking: Balancing Privacy and Safety in the Offline Finding Ecosystem
Harry Eldridge, Gabrielle Beck, Matthew Green, Nadia Heninger, and Abhishek Jain
Location tracking accessories (or "tracking tags") such as those sold by Apple, Samsung, and Tile, allow owners to track the location of their property via offline finding networks. The tracking protocols were designed to ensure that no entity (including the vendor) can use a tag's broadcasts to surveil its owner. These privacy guarantees, however, seem to be at odds with the phenomenon of $\textit{tracker-based stalking}$, where attackers use these very tags to monitor a target's movements. Numerous such criminal incidents have been reported, and in response, manufacturers have chosen to substantially weaken privacy guarantees in order to allow users to detect stalker tags. This compromise has been adopted in a recent IETF draft jointly proposed by Apple and Google. We put forth the notion of $\textit{abuse-resistant offline finding protocols}$ that aim to achieve a better balance between user privacy and stalker detection. We present an efficient protocol that achieves stalker detection under realistic conditions without sacrificing honest user privacy. At the heart of our result, and of independent interest, is a new notion of $\textit{multi-dealer secret sharing}$ which strengthens standard secret sharing with novel privacy and correctness guarantees. We show that this primitive can be instantiated efficiently on edge devices using variants of Interleaved Reed-Solomon codes combined with new lattice-based decoding algorithms.
Last updated:  2023-09-06
Pantheon: Private Retrieval from Public Key-Value Store
Ishtiyaque Ahmad, Divyakant Agrawal, Amr El Abbadi, and Trinabh Gupta
Consider a cloud server that owns a key-value store and provides a private query service to its clients. Preserving client privacy in this setting is difficult because the key-value store is public, and a client cannot encrypt or modify it. Therefore, privacy in this context implies hiding the access pattern of a client. Pantheon is a system that cryptographically allows a client to retrieve the value corresponding to a key from a public key-value store without allowing the server or any adversary to know any information about the key or value accessed. Pantheon devises a single-round retrieval protocol which reduces server-side latency by refining its cryptographic machinery and massively parallelizing the query execution workload. Using these novel techniques, Pantheon achieves a $93\times$ improvement for server-side latency over a state-of-the-art solution.
Last updated:  2024-01-07
Notes on Small Private Key Attacks on Common Prime RSA
Mengce Zheng
We point out critical deficiencies in lattice-based cryptanalysis of common prime RSA presented in ``Remarks on the cryptanalysis of common prime RSA for IoT constrained low power devices'' [Information Sciences, 538 (2020) 54--68]. To rectify these flaws, we carefully scrutinize the relevant parameters involved in the analysis when solving a specific trivariate integer polynomial equation. Additionally, we offer a synthesized attack illustration of small private key attacks on common prime RSA.
Last updated:  2023-09-06
Layered Symbolic Security Analysis in DY$^\star$
Karthikeyan Bhargavan, Abhishek Bichhawat, Pedram Hosseyni, Ralf Kuesters, Klaas Pruiksma, Guido Schmitz, Clara Waldmann, and Tim Würtele
While cryptographic protocols are often analyzed in isolation, they are typically deployed within a stack of protocols, where each layer relies on the security guarantees provided by the protocol layer below it, and in turn provides its own security functionality to the layer above. Formally analyzing the whole stack in one go is infeasible even for semi-automated verification tools, and impossible for pen-and-paper proofs. The DY$^\star$ protocol verification framework offers a modular and scalable technique that can reason about large protocols, specified as a set of F$^\star$ modules. However, it does not support the compositional verification of layered protocols since it treats the global security invariants monolithically. In this paper, we extend DY$^\star$ with a new methodology that allows analysts to modularly analyze each layer in a way that composes to provide security for a protocol stack. Importantly, our technique allows a layer to be replaced by another implementation, without affecting the proofs of other layers. We demonstrate this methodology on two case studies. We also present a verified library of generic authenticated and confidential communication patterns that can be used in future protocol analyses and is of independent interest.
Last updated:  2023-09-22
Optimizing HE operations via Level-aware Key-switching Framework
Intak Hwang, Jinyeong Seo, and Yongsoo Song
In lattice-based Homomorphic Encryption (HE) schemes, the key-switching procedure is a core building block of non-linear operations but also a major performance bottleneck. The computational complexity of the operation is primarily determined by the so-called gadget decomposition, which transforms a ciphertext entry into a tuple of small polynomials before being multiplied with the corresponding evaluation key. However, previous studies such as Halevi et al. (CT-RSA 2019) and Han and Ki (CT-RSA 2020) fix a decomposition function in the setup phase which is applied commonly across all ciphertext levels, resulting in suboptimal performance. In this paper, we introduce a novel key-switching framework for leveled HE schemes. We aim to allow the use of different decomposition functions during the evaluation phase so that the optimal decomposition method can be utilized at each level to achieve the best performance. A naive solution might generate multiple key-switching keys corresponding to all possible decomposition functions, and send them to an evaluator. However, our solution can achieve the goal without such communication overhead since it allows an evaluator to dynamically derive other key-switching keys from a single key-switching key depending on the choice of gadget decomposition. We implement our framework at a proof-of-concept level to provide concrete benchmark results. Our experiments show that we achieve the optimal performance at every level while maintaining the same computational capability and communication costs.
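As background, the gadget decomposition mentioned above splits each coefficient into small digits in some base. A minimal Python sketch (with an illustrative base, not the paper's parameters) is:

    import math

    def gadget_decompose(coeffs, q, B):
        # Write each coefficient x (mod q) as sum_i d_i * B^i with digits 0 <= d_i < B;
        # each digit vector is later multiplied with one component of the evaluation key.
        num_digits = math.ceil(math.log(q, B))
        digits = [[] for _ in range(num_digits)]
        for x in coeffs:
            x %= q
            for i in range(num_digits):
                digits[i].append(x % B)
                x //= B
        return digits

    # A smaller base B gives more digits (more multiplications in key switching) but
    # smaller noise growth; choosing this trade-off per ciphertext level is the point
    # of a level-aware framework.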
Last updated:  2023-09-06
Fine-Grained Secure Attribute-Based Encryption
Yuyu Wang, Jiaxin Pan, and Yu Chen
Fine-grained cryptography is about constructing cryptosystems in a setting where an adversary's resources are a priori bounded and an honest party has fewer resources than the adversary. Currently, only simple forms of encryption schemes, such as secret-key and public-key encryption, are constructed in this setting. In this paper, we enrich the available tools in fine-grained cryptography by proposing the first fine-grained secure attribute-based encryption (ABE) scheme. Our construction is adaptively secure under the widely accepted worst-case assumption, NC$^1 \subsetneq \oplus$L/poly, and it is presented in a generic manner using the notion of predicate encodings (Wee, TCC’14). By properly instantiating the underlying encoding, we can obtain different types of ABE schemes, including identity-based encryption. Previously, all of these schemes were unknown in fine-grained cryptography. Our main technical contribution is constructing ABE schemes without using pairing or the Diffie-Hellman assumption. Hence, our results show that, even if one-way functions do not exist, we still have ABE schemes with meaningful security. As a further application of our techniques, we construct an efficient (quasi-adaptive) non-interactive zero-knowledge (QA-NIZK) proof system.
Last updated:  2023-09-06
Accio: Variable-Amount, Optimized-Unlinkable and NIZK-Free Off-Chain Payments via Hubs
Zhonghui Ge, Jiayuan Gu, Chenke Wang, Yu Long, Xian Xu, and Dawu Gu
Payment channel hubs (PCHs) serve as a promising solution to achieving quick off-chain payments between pairs of users. They work by using an untrusted tumbler to relay the payments between the payer and payee and enjoy the advantages of low cost and high scalability. However, the most recent privacy-preserving payment channel hub solution that supports variable payment amounts suffers from limited unlinkability, e.g., being vulnerable to the abort attack. Moreover, this solution utilizes non-interactive zero-knowledge proofs, which incur high costs in both computation time and communication overhead. Therefore, how to design PCHs that support variable-amount payments and unlinkability, while reducing the use of costly cryptographic tools as much as possible, is significant for the large-scale practical application of off-chain payments. In this paper, we propose Accio, a variable-amount payment channel hub solution with optimized unlinkability, by deepening research on unlinkability and constructing a new cryptographic tool. We provide the detailed Accio protocol and formally prove its security and privacy under the Universally Composable framework. Our prototype demonstrates its feasibility and the evaluation shows that Accio outperforms the other state-of-the-art works in both communication and computation costs.
Last updated:  2023-09-05
The Grant Negotiation and Authorization Protocol: Attacking, Fixing, and Verifying an Emerging Standard
Florian Helmschmidt, Pedram Hosseyni, Ralf Kuesters, Klaas Pruiksma, Clara Waldmann, and Tim Würtele
The Grant Negotiation and Authorization Protocol (GNAP) is an emerging authorization and authentication protocol which aims to consolidate and unify several use-cases of OAuth 2.0 and many of its common extensions while providing a higher degree of security. OAuth 2.0 is an essential cornerstone of the security of authorization and authentication for the Web, IoT, and beyond, and is used, among others, by many global players, like Google, Facebook, and Microsoft. Because of historically grown limitations and issues of OAuth 2.0 and its various extensions, prominent members of the OAuth community decided to create GNAP, a new and completely redesigned authorization and authentication protocol. Given GNAP's advantages over OAuth 2.0 and its support within the OAuth community, GNAP is expected to become at least as important as OAuth 2.0. In this paper, we present the first formal security analysis of GNAP. We build a detailed formal model of GNAP, based on the Web Infrastructure Model (WIM) of Fett, Küsters, and Schmitz. Based on this model, we provide formal statements of the key security properties of GNAP, namely, authorization, authentication, and session integrity for both authorization and authentication. In the process of trying to prove these properties, we have discovered several attacks on GNAP. We present these attacks as well as modifications to the protocol that prevent them. These modifications have been incorporated into the GNAP specification after discussion with the GNAP working group. We give the first formal security guarantees for GNAP, by proving that GNAP, with our modifications applied, satisfies the mentioned security properties. GNAP was still an early draft when we started our analysis, but is now on track to be adopted as an IETF standard. Hence, our analysis is just in time to help ensure the security of this important emerging standard.
Last updated:  2023-09-05
Fine-Grained Proxy Re-Encryption: Definitions & Constructions from LWE
Yunxiao Zhou, Shengli Liu, Shuai Han, and Haibin Zhang
Proxy re-encryption (PRE) allows a proxy with a re-encryption key to translate a ciphertext intended for Alice (delegator) to another ciphertext intended for Bob (delegatee) without revealing the underlying message. However, with PRE, Bob can obtain the whole message from the re-encrypted ciphertext, and Alice cannot take flexible control of the extent of the message transmitted to Bob. In this paper, we propose a new variant of PRE, called Fine-Grained PRE (FPRE), to support fine-grained re-encryptions. An FPRE is associated with a function family F, and each re-encryption key rk_{A→B}^f is associated with a function f ∈ F. With FPRE, Alice now can authorize re-encryption power to proxy by issuing rk_{A→B}^f to it, with f chosen by herself. Then the proxy can translate ciphertext encrypting m to Bob's ciphertext encrypting f(m) with such a fine-grained re-encryption key, and Bob only obtains a function of message m. In this way, Alice can take flexible control of the message spread by specifying functions. For FPRE, we formally define its syntax and formalize security notions including CPA security, ciphertext pseudo-randomness, unidirectionality, non-transitivity, collusion-safety under adaptive corruptions in the multi-user setting. Moreover, we propose a new security notion named ciphertext unlinkability, which blurs the link between a ciphertext and its re-encrypted ciphertext to hide the proxy connections between users. We establish the relations between those security notions. As for constructions, we propose two FPRE schemes, one for bounded linear functions and the other for deletion functions, based on the learning-with-errors (LWE) assumption. Our FPRE schemes achieve all the aforementioned desirable securities under adaptive corruptions in the standard model. As far as we know, our schemes provide the first solution to PRE with security under adaptive corruptions in the standard model.
Last updated:  2023-09-10
MAFIA: Protecting the Microarchitecture of Embedded Systems Against Fault Injection Attacks
Thomas Chamelot, Damien Couroussé, and Karine Heydemann
Fault injection attacks represent an effective threat to embedded systems. Recently, Laurent et al. have reported that fault injection attacks can leverage faults inside the microarchitecture. However, state-of-the-art counter-measures, hardware-only or with hardware support, do not consider the integrity of microarchitecture control signals that are the target of these faults. We present MAFIA, a microarchitecture protection against fault injection attacks. MAFIA ensures integrity of pipeline control signals through a signature-based mechanism, and ensures fine-grained control-flow integrity with a complete indirect branch support and code authenticity. We analyse the security properties of two different implementations with different security/overhead trade-offs: one with a CBC-MAC/Prince signature function, and another one with a CRC32. We present our implementation of MAFIA in a RISC-V processor, supported by a dedicated compiler toolchain based on LLVM/Clang. We report a hardware area overhead of 23.8% and 6.5% for the CBC-MAC/Prince and CRC32 respectively. The average code size and execution time overheads are 29.4% and 18.4% respectively for the CRC32 implementation and are 50% and 39% for the CBC-MAC/Prince.
Last updated:  2024-05-21
Boosting the Performance of High-Assurance Cryptography: Parallel Execution and Optimizing Memory Access in Formally-Verified Line-Point Zero-Knowledge
Samuel Dittmer, Karim Eldefrawy, Stéphane Graham-Lengrand, Steve Lu, Rafail Ostrovsky, and Vitor Pereira
Despite the notable advances in the development of high-assurance, verified implementations of cryptographic protocols, such implementations typically face significant performance overheads, particularly due to the penalties induced by formal verification and automated extraction of executable code. In this paper, we address some core performance challenges facing computer-aided cryptography by presenting a formal treatment for accelerating such verified implementations based on multiple generic optimizations covering parallelism and memory access. We illustrate our techniques for addressing such performance bottlenecks using the Line-Point Zero-Knowledge (LPZK) protocol as a case study. Our starting point is a new verified implementation of LPZK that we formalize and synthesize using EasyCrypt; our first implementation is developed to reduce the proof effort and without considering the performance of the extracted executable code. We then show how such (automatically) extracted code can be optimized in three different ways to obtain a 3000x speedup, thus matching the performance of the manual implementation of LPZK. We obtain such performance gains by first modifying the algorithmic specifications, then by adopting a provably secure parallel execution model, and finally by optimizing the memory access structures. All optimizations are first formally verified inside EasyCrypt, and then executable code is automatically synthesized from each step of the formalization. For each optimization, we analyze performance gains resulting from it and also address challenges facing the computer-aided security proofs thereof, and challenges facing automated synthesis of executable code with such an optimization.
Last updated:  2023-09-05
Generic Constructions of Compact and Tightly Selective-Opening Secure Public-key Encryption Schemes
Jiaxin Pan, Benedikt Wagner, and Runzhi Zeng
We propose two generic constructions of public-key encryption (PKE) with tight simulation-based selective-opening security against chosen-ciphertext attacks (SIM-SO-CCA) in the random oracle model. Our constructions can be instantiated with a small constant number of elements in the ciphertext, ignoring smaller contributions from symmetric-key encryption. That is, they have compact ciphertexts. Furthermore, three of our instantiations have compact public keys as well. Known (almost) tightly SIM-SO-CCA secure PKE schemes are due to the work of Lyu et al. (PKC 2018) and Libert et al. (Crypto 2017). They have either linear-size ciphertexts or linear-size public keys. Moreover, they only achieve almost tightness, namely, with security loss depending on the security parameter. In contrast to them, our schemes are the first ones achieving both tight SIM-SO-CCA security and compactness. More precisely, our two generic constructions are: - From Pseudorandom KEM: Our first generic construction is from a key encapsulation mechanism (KEM) with pseudorandom ciphertexts against plaintext-checking attacks. Such a KEM can be constructed directly from the Strong Diffie-Hellman (StDH), Computational DH (CDH), and Decisional DH assumptions. Both their ciphertexts and public keys are compact. Their security loss is a small constant. Interestingly, our CDH-based construction is the first scheme achieving all these advantages based on a weak search assumption. Furthermore, we also give a generic construction of such a KEM, which yields an efficient tightly SIM-SO-CCA PKE from lattices. - From Lossy Encryption: Our second scheme is the well-known Fujisaki-Okamoto transformation. We show that it can turn a lossy encryption scheme into a tightly SIM-SO-CCA secure PKE. This transformation preserves both tightness and compactness of the underlying lossy encryption, which is in contrast to the non-tight proof of Heuer et al. (PKC 2015).
Last updated:  2023-09-05
Practical Privacy-Preserving Machine Learning using Fully Homomorphic Encryption
Michael Brand and Gaëtan Pradel
Machine learning is a widely-used tool for analysing large datasets, but increasing public demand for privacy preservation and the corresponding introduction of privacy regulations have severely limited what data can be analysed, even when this analysis is for societal benefit. Homomorphic encryption, which allows computation on encrypted data, is a natural solution to this dilemma, allowing data to be analysed without sacrificing privacy. Because homomorphic encryption is computationally expensive, however, current solutions are mainly restricted to use it for inference and not training. In this work, we present a practically viable approach to privacy-preserving machine learning training using fully homomorphic encryption. Our method achieves fast training speeds, taking less than 45 seconds to train a binary classifier over thousands of samples on a single mid-range computer, significantly outperforming state-of-the-art results.
Last updated:  2023-09-05
On the Black-Box Separation Between Ring Signatures and Public Key Encryptions
Kyosuke Yamashita and Keisuke Hara
In this paper, we show that it is impossible to construct a public key encryption scheme (PKE) from a ring signature scheme in a black-box fashion in the standard model. Such an impossibility is highly non-trivial because, to the best of our knowledge, known generic constructions of ring signature schemes are either based on public key cryptosystems or in the random oracle model. Technically, we introduce a new cryptographic primitive named indistinguishable multi-designated verifiers signature (IMDVS), and prove that (i) IMDVS is equivalent to PKE, and (ii) it is impossible to construct IMDVS from a ring signature scheme in a generic way. Our result suggests an essential gap between ring signatures and group signatures, as it is known that group signatures imply PKE.
Last updated:  2024-06-10
Two-Round Threshold Lattice-Based Signatures from Threshold Homomorphic Encryption
Kamil Doruk Gur, Jonathan Katz, and Tjerand Silde
Much recent work has developed efficient protocols for threshold signatures, where $n$ parties share a signing key and some threshold $t$ of those parties must interact to produce a signature. Yet efficient threshold signatures with post-quantum security have been elusive, with the state-of-the-art being a two-round scheme by Damgård et al. (PKC'21) based on lattices that supports only the full threshold case (i.e., $t=n$). We show here a two-round threshold signature scheme based on standard lattice assumptions that supports arbitrary thresholds $t\leq n$. Estimates of our scheme's performance at the $128$-bit security level show that in the 3-out-of-5 case, we obtain signatures of size $46.6$ KB and public keys of size $13.6$ KB. We achieve $\approx 5\times$ improved parameters if only a small number of signatures are ever issued with the same key. As an essential building block and independent contribution, we construct an actively secure threshold (linearly) homomorphic encryption scheme that supports arbitrary thresholds $t \leq n$.
Last updated:  2023-09-04
Pisces: Private and Compliable Cryptocurrency Exchange
Ya-Nan Li, Tian Qiu, and Qiang Tang
Cryptocurrency exchange platforms such as Coinbase and Binance enable users to purchase and sell cryptocurrencies conveniently, just like trading stocks/commodities. However, because of the nature of blockchain, when a user withdraws coins (i.e., transfers coins to an external on-chain account), all future transactions can be learned by the platform. This is in sharp contrast to conventional stock exchanges, where all external activities of users are always hidden from the platform. Since the platform knows highly sensitive user private information such as passport numbers and bank information, linking all (on-chain) transactions raises a serious privacy concern about the potential disastrous data breach in those cryptocurrency exchange platforms. In this paper, we propose a cryptocurrency exchange that restores user anonymity for the first time. To our surprise, the seemingly well-studied privacy/anonymity problem has several new challenges in this setting. Since the public blockchain and internal transaction activities naturally provide many non-trivial leakages to the platform, internal privacy is not only useful in the usual sense but also becomes necessary for regaining the basic anonymity of user transactions. We also ensure that the user cannot double spend, and the user has to properly report accumulated profit for tax purposes, even in the private setting. We give a careful modeling and efficient construction of the system that achieves constant computation and communication overhead (with only simple cryptographic tools and rigorous security analysis); we also implement our system and evaluate its practical performance.
Last updated:  2023-09-04
Communication Lower Bounds for Cryptographic Broadcast Protocols
Erica Blum, Elette Boyle, Ran Cohen, and Chen-Da Liu-Zhang
Broadcast protocols enable a set of $n$ parties to agree on the input of a designated sender, even facing attacks by malicious parties. In the honest-majority setting, a fruitful line of work harnessed randomization and cryptography to achieve low-communication broadcast protocols with sub-quadratic total communication and with "balanced" sub-linear communication cost per party. However, comparatively little is known in the dishonest-majority setting. Here, the most communication-efficient constructions are based on the protocol of Dolev and Strong (SICOMP '83), and sub-quadratic broadcast has not been achieved even using randomization and cryptography. On the other hand, the only nontrivial $\omega(n)$ communication lower bounds are restricted to deterministic protocols, or against strong adaptive adversaries that can perform "after the fact" removal of messages. We provide new communication lower bounds in this space, which hold against arbitrary cryptography and setup assumptions, as well as a simple protocol showing near tightness of our first bound. 1) We demonstrate a tradeoff between resiliency and communication for randomized protocols secure against $n-o(n)$ static corruptions. For example, $\Omega(n\cdot {\sf polylog}(n))$ messages are needed when the number of honest parties is $n/{\sf polylog}(n)$; $\Omega(n\sqrt{n})$ messages are needed for $O(\sqrt{n})$ honest parties; and $\Omega(n^2)$ messages are needed for $O(1)$ honest parties. Complementarily, we demonstrate broadcast with $O(n\cdot{\sf polylog}(n))$ total communication facing any constant fraction of static corruptions. 2) Our second bound considers $n/2 + k$ corruptions and a weakly adaptive adversary that cannot remove messages "after the fact." We show that any broadcast protocol within this setting can be attacked to force an arbitrary party to send messages to $k$ other parties. Our bound rules out, for example, broadcast facing $51\%$ corruptions, in which all non-sender parties have sublinear communication locality.
Last updated:  2023-09-08
LedgerLocks: A Security Framework for Blockchain Protocols Based on Adaptor Signatures
Erkan Tairi, Pedro Moreno-Sanchez, and Clara Schneidewind
The scalability and interoperability challenges in current cryptocurrencies have motivated the design of cryptographic protocols that enable efficient applications on top and across widely used cryptocurrencies such as Bitcoin or Ethereum. Examples of such protocols include (virtual) payment channels, atomic swaps, oracle-based contracts, deterministic wallets, and coin mixing services. Many of these protocols are built upon minimal core functionalities supported by a wide range of cryptocurrencies. Most prominently, adaptor signatures (AS) have emerged as a powerful tool for constructing blockchain protocols that are (mostly) agnostic to the specific logic of the underlying cryptocurrency. Even though AS-based protocols are built upon the same cryptographic principles, there exists no modular and faithful way for reasoning about their security. Instead, all the works analyzing such protocols focus on reproving how adaptor signatures are used to cryptographically link transactions while considering highly simplified blockchain models that do not capture security-relevant aspects of transaction execution in blockchain-based consensus. To address this, we present LedgerLocks, a framework for the secure design of AS-based blockchain applications in the presence of a realistic blockchain. LedgerLocks defines the concept of AS-locked transactions, transactions whose publication is bound to the knowledge of a cryptographic secret. We argue that AS-locked transactions are the common building block of AS-based blockchain protocols and we define $\mathcal{G}_{\mathsf{LedgerLocks}}$, a realistic ledger model in the Universal Composability framework with built-in support for AS-locked transactions. As LedgerLocks abstracts from the cryptographic realization of AS-locked transactions, it allows protocol designers to focus on the blockchain-specific security considerations instead.
Last updated:  2023-09-03
Cryptanalysis of HALFLOOP Block Ciphers: Destroying HALFLOOP-24
Gregor Leander, Shahram Rasoolzadeh, and Lukas Stennes
HALFLOOP is a family of tweakable block ciphers that are used for encrypting automatic link establishment (ALE) messages in high-frequency radio, a technology commonly used by the military, other government agencies, and industries that require high robustness in long-distance communications. Recently, it was shown in [DDLS22] that the smallest version of the cipher, HALFLOOP-24, can be attacked within a practical time and memory complexity. However, in the real-world ALE setting, it turns out that this attack requires waiting more than 500 years to collect the necessary amount of plaintext-tweak-ciphertext pairs fulfilling the conditions of the attack. In this paper, we present real-world practical attacks against HALFLOOP-24 which are based on a probability-one differential distinguisher. In our attacks, we significantly reduce the data complexity to three differential pairs in the chosen-plaintext (CPA) setting which is optimal in the sense that even a brute force attack needs at least six plaintext-tweak-ciphertext pairs to uniquely identify the correct key. Considering the same ALE setting as [DDLS22], this translates to a reduction from 541 years to 2 hours worth of intercepted traffic. Besides, we provide the first non-generic public cryptanalysis of HALFLOOP-48 and HALFLOOP-96. More precisely, we present Demirci-Selçuk meet-in-the-middle attacks against full-round HALFLOOP-48 and round-reduced HALFLOOP-96 to recover the complete master key in a CPA setting. However, unlike the attacks on HALFLOOP-24, our attacks on the larger versions are only theoretical. Moreover, for HALFLOOP-96 the known generic time-memory trade-off attack, based on a flawed tweak handling, remains the strongest attack vector. In conclusion, we iterate what was already stated in [DDLS22]: HALFLOOP does not provide adequate protection and should not be used.
Last updated:  2023-09-03
Hashing into quadratic residues modulo a safe prime composite
Sietse Ringers
For $n = pq$ a product of two safe primes, we construct and prove security of a cryptographic hash function $H$ mapping into the quadratic residues $QR_n \subset (\mathbb{Z}/n\mathbb{Z})^*$, by squaring the output of an ordinary cryptographic hash function $h$ of sufficiently long output.
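A minimal sketch of this squaring construction (illustrative only; SHA-512 stands in for the ordinary hash $h$ with sufficiently long output, and the caller supplies the modulus):

    import hashlib

    def hash_to_qr(msg: bytes, n: int) -> int:
        # Square the output of an ordinary hash to land in QR_n.
        h = int.from_bytes(hashlib.sha512(msg).digest(), "big")
        return pow(h, 2, n)          # an element of QR_n whenever gcd(h, n) = 1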
Last updated:  2023-10-21
Efficient Multiplicative-to-Additive Function from Joye-Libert Cryptosystem and Its Application to Threshold ECDSA
Haiyang Xue, Man Ho Au, Mengling Liu, Kwan Yin Chan, Handong Cui, Xiang Xie, Tsz Hon Yuen, and Chengru Zhang
Threshold ECDSA has received considerable interest lately due to its widespread adoption in blockchain applications. A common building block of all leading constructions involves a secure conversion of multiplicative shares into additive ones, which is called the multiplicative-to-additive (MtA) function. MtA dominates the overall complexity of all existing threshold ECDSA constructions. Specifically, $O(n^2)$ invocations of MtA are required in the case of $n$ active signers. Hence, improvement of MtA leads directly to significant improvements for all state-of-the-art threshold ECDSA schemes. In this paper, we design a novel MtA by revisiting the Joye-Libert (JL) cryptosystem. Specifically, we revisit JL encryption and propose a JL-based commitment, then give efficient zero-knowledge proofs for JL cryptosystem which are the first to have standard soundness. Our new MtA offers the best time-space complexity trade-off among all existing MtA constructions. It outperforms state-of-the-art constructions from Paillier by a factor of $1.85$ to $2$ in bandwidth and $1.2$ to $1.7$ in computation. It is $7\times$ faster than those based on Castagnos-Laguillaumie encryption only at the cost of $2\times$ more bandwidth. While our MtA is slower than OT-based constructions, it saves $18.7\times$ in bandwidth requirement. In addition, we also design a batch version of MtA to further reduce the amortised time and space cost by another $25$%.
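To clarify what the MtA functionality computes (independently of the paper's Joye-Libert-based realization), the toy sketch below converts multiplicative shares $a, b$ into additive shares using a tiny Paillier instance as a stand-in additively homomorphic encryption. The primes, modulus and omission of all zero-knowledge proofs are illustrative assumptions, not the paper's construction.

    import random
    from math import gcd, lcm

    # toy Paillier keypair (tiny primes, for illustration only; not secure)
    p, q = 1000003, 1000033
    n, n2 = p * q, (p * q) ** 2
    lam, g = lcm(p - 1, q - 1), p * q + 1

    def enc(m):
        r = random.randrange(1, n)
        while gcd(r, n) != 1:
            r = random.randrange(1, n)
        return (pow(g, m, n2) * pow(r, n, n2)) % n2

    def dec(c):
        L = lambda u: (u - 1) // n
        mu = pow(L(pow(g, lam, n2)), -1, n)
        return (L(pow(c, lam, n2)) * mu) % n

    Q = 65537                          # small stand-in for the ECDSA group order
    a, b = random.randrange(Q), random.randrange(Q)    # multiplicative shares

    c_a   = enc(a)                                     # Alice -> Bob: Enc(a)
    beta  = random.randrange(Q)                        # Bob's additive share is -beta mod Q
    c_res = (pow(c_a, b, n2) * enc(beta)) % n2         # Enc(a*b + beta), computed homomorphically
    alpha = dec(c_res) % Q                             # Alice's additive share

    assert (alpha - beta) % Q == (a * b) % Q           # alpha + (-beta) = a*b mod Q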
Last updated:  2023-09-03
Are continuous stop-and-go mixnets provably secure?
Debajyoti Das, Claudia Diaz, Aggelos Kiayias, and Thomas Zacharias
This work formally analyzes the anonymity guarantees of continuous stop-and-go mixnets and attempts to answer the above question. Existing mixnet-based anonymous communication protocols that aim to provide provable anonymity guarantees rely on round-based communication models, which require synchronization among all the nodes and clients and are difficult to achieve in practice. Continuous stop-and-go mixnets (e.g., Loopix and Nym) provide a nice alternative by adding a random delay for each message on every hop, independent of all other hops and all other messages. The core anonymization technique of continuous mixnets, combined with the fact that the messages are sent by the clients to the mixnet at different times, makes it a difficult problem to formally prove security for such mixnet protocols; all existing analyses for such designs provide only experimental evaluations of anonymity. We are the first to close that gap and provide a formal analysis. We provide two indistinguishability-based definitions (of sender anonymity), namely pairwise unlinkability and user unlinkability, tuned specifically for continuous stop-and-go mixnets. We derive the adversarial advantage as a function of the protocol parameters for the two definitions. We show that there is a fundamental lower bound on the adversarial advantage $\delta$ for pairwise unlinkability; however, strong user unlinkability (negligible adversarial advantage) can be achieved if the users' message rate ($\lambda_u$) is proportional to the message processing rate ($\lambda$) on the nodes.
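The stop-and-go delay mechanism referred to above can be sketched as follows (a simplified illustration: each hop holds a message for an independently sampled random delay; the exponential distribution and rate used here are illustrative choices):

    import random

    def forwarding_times(send_time: float, num_hops: int = 3, mu: float = 2.0):
        # Each mix node delays the message by an independent, per-message random
        # amount before forwarding it; no synchronized rounds are needed.
        t, times = send_time, []
        for _ in range(num_hops):
            t += random.expovariate(mu)
            times.append(t)
        return times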
Last updated:  2024-06-18
FHEDA: Efficient Circuit Synthesis with Reduced Bootstrapping for Torus FHE
Animesh Singh, Smita Das, Anirban Chakraborty, Rajat Sadhukhan, Ayantika Chatterjee, and Debdeep Mukhopadhyay
Fully Homomorphic Encryption (FHE) schemes are widely used cryptographic primitives for performing arbitrary computations on encrypted data. However, FHE incorporates a computationally intensive mechanism called bootstrapping, which resets the noise in the ciphertext to a lower level, allowing the computation of circuits of arbitrary depth. This process can take significant time, ranging from several minutes to hours. To address the above issue, in this work, we propose an Electronic Design Automation (EDA) framework FHEDA that generates efficient Boolean representations of circuits compatible with the Torus-FHE (ASIACRYPT 2020) scheme. To the best of our knowledge, this is the first work in the EDA domain of FHE. We integrate logic synthesis and gate optimization techniques into our FHEDA framework for reducing the total number of bootstrapping operations in a Boolean circuit, which leads to a significant (up to 50%) reduction in homomorphic computation time. Our FHEDA is built upon the observation that in Torus-FHE two consecutive Boolean gate evaluations over fresh encryptions require only one bootstrapping instead of two, based on appropriate parameter choices. By integrating this observation with logic replacement techniques into FHEDA, we could reduce the total number of bootstrapping operations along with the circuit depth. This eventually reduces the homomorphic evaluation time of Boolean circuits. In order to verify the efficacy of our approach, we assess the performance of the proposed EDA flow on a diverse set of representative benchmarks including privacy-preserving machine learning and different symmetric key block ciphers.
Last updated:  2023-09-02
A Lattice-based Publish-Subscribe Communication Protocol using Accelerated Homomorphic Encryption Primitives
Anes Abdennebi and Erkay Savaş
Key-policy attribute-based encryption (KP-ABE) uses a set of attributes as public keys for encryption. It allows homomorphic evaluation of ciphertext into another ciphertext of the same message, which can be decrypted if a certain access policy based on the attributes is satisfied. A lattice-based KP-ABE scheme is reported in several works in the literature, and its software implementation is available in an open-source library called PALISADE. However, as the cryptographic primitives in KP-ABE are overly involved, non-trivial hardware acceleration is needed for its adoption in practical applications. In this work, we provide GPU-based algorithms for accelerating KP-ABE encryption and homomorphic evaluation functions, seamlessly integrated into the open-source library with minor additional build changes needed to run the GPU kernels. Using GPU algorithms, we perform both homomorphic encryption and homomorphic evaluation operations 2.1× and 13.2× faster, respectively, than the CPU implementations reported in the literature on an Intel i9. Furthermore, our implementation supports up to 128 attributes for encryption and homomorphic evaluation with fixed and changing access policies, unlike the GPU-based homomorphic operations reported in the literature, which support only up to 32 attributes and give only estimates for a higher number of attributes. We also propose a GPU-based KP-ABE scheme for publish/subscribe messaging applications, in which end-to-end security of the messages is guaranteed. Here, while the exchanged messages are encrypted with as many as 128 attributes by publishers, fewer attributes are needed for homomorphic evaluation. Our fast and memory-efficient GPU implementations of KP-ABE encryption and homomorphic evaluation operations demonstrate that the KP-ABE scheme can be used for practicable publish/subscribe messaging applications.
Last updated:  2024-05-21
How to Recover a Cryptographic Secret From the Cloud
David Adei, Chris Orsini, Alessandra Scafuro, and Tanner Verber
Clouds have replaced most local backup systems as they offer strong availability and reliability guarantees. Clouds, however, are not (and should not be) used as backup for cryptographic secrets. Cryptographic secrets might control financial assets (e.g., crypto wallets), hence, storing such secrets on the cloud corresponds to sharing ownership of the financial assets with the cloud, and makes the cloud a more attractive target for insider attacks. Can we have the best of the two worlds, where a user, Alice, can conveniently store a copy of her cryptographic secrets on the cloud and she is the only one who can recover them (without trusting any entity)? Can she do so even when she loses her devices and forgets all credentials, while at the same time retaining full ownership of her secrets? In this paper, we provide a cloud-based secret-recovery mechanism where confidentiality is always guaranteed when Alice has not lost her credentials, even in the presence of a malicious cloud. If Alice loses all her credentials, she can still recover her secrets (in most circumstances). This is in contrast with all previous work that relies on the assumption that Alice remembers some authentication secret. We prove our system secure in the Universally Composable framework. Further, we implement our protocols and evaluate their performance.
Last updated:  2023-09-01
Constant-Round Private Decision Tree Evaluation for Secret Shared Data
Nan Cheng, Naman Gupta, Aikaterini Mitrokotsa, Hiraku Morita, and Kazunari Tozawa
Decision tree evaluation is extensively used in machine learning to construct accurate classification models. Often, in the cloud-assisted communication paradigm, cloud servers execute remote evaluations of classification models using clients’ data. In this setting, the need for private decision tree evaluation (PDTE) has emerged to guarantee no leakage of information about the client’s input or the service provider’s trained model, i.e., the decision tree. In this paper, we propose a private decision tree evaluation protocol based on the three-party replicated secret sharing (RSS) scheme. This enables us to securely classify inputs without any leakage of the provided input or the trained decision tree model. Our protocol only requires constant rounds of communication among servers, which is useful in a network with longer delays. Ma et al. (NDSS 2021) presented a lightweight PDTE protocol with sublinear communication cost but linear round complexity in the size of the input data. This protocol works well in low-latency networks such as LANs, while its total execution time increases unfavourably in the WAN setting. In contrast, Tsuchida et al. (ProvSec 2020) constructed a constant-round PDTE protocol at the cost of communication complexity, which works well in the WAN setting. Although their construction still requires 25 rounds, it showed a possible direction on how to make constant-round PDTE protocols. Ji et al. (IEEE Transactions on Dependable and Secure Computing) presented a simplified PDTE with constant rounds using function secret sharing (FSS) at the cost of communication complexity. Our proposed protocol only requires five rounds among the employed three servers executing secret sharing schemes, which is comparable to previously proposed protocols that are based on garbled circuits and homomorphic encryption. To further demonstrate the efficiency of our protocol, we evaluated it using real-world classification datasets. The evaluation results indicate that our protocol provides better concrete performance in the WAN setting that has a large network delay.
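For background, the three-party replicated secret sharing (RSS) underlying the protocol can be sketched as follows (a minimal illustration with an arbitrary modulus; the actual protocol's share arithmetic and communication are not shown):

    import random

    Q = 2 ** 32   # illustrative modulus

    def rss_share(x):
        # Split x into three additive shares; party i holds the pair (x_i, x_{i+1}).
        x1, x2 = random.randrange(Q), random.randrange(Q)
        x3 = (x - x1 - x2) % Q
        return [(x1, x2), (x2, x3), (x3, x1)]

    def rss_reconstruct(share_p0, share_p1):
        # Any two parties jointly hold all three additive shares.
        return (share_p0[0] + share_p0[1] + share_p1[1]) % Q

    shares = rss_share(123456789)
    assert rss_reconstruct(shares[0], shares[1]) == 123456789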
Last updated:  2024-09-02
Single-query Quantum Hidden Shift Attacks
Xavier Bonnetain and André Schrottenloher
Quantum attacks using superposition queries are known to break many classically secure modes of operation. While these attacks do not necessarily threaten the security of the modes themselves, since they rely on a strong adversary model, they help us to draw limits on the provable security of these modes. Typically these attacks use the structure of the mode (stream cipher, MAC or authenticated encryption scheme) to embed a period-finding problem, which can be solved with a dedicated quantum algorithm. The hidden period can be recovered with a few superposition queries (e.g., $O(n)$ for Simon's algorithm), leading to state or key-recovery attacks. However, this strategy breaks down if the period changes at each query, e.g., if it depends on a nonce. In this paper, we focus on this case and give dedicated state-recovery attacks on the authenticated encryption schemes Rocca, Rocca-S, Tiaoxin-346 and AEGIS-128L. These attacks rely on a procedure to find a Boolean hidden shift with a single superposition query, which overcomes the change of nonce at each query. As they crucially depend on such queries, we stress that they do not break any security claim of the authors, and do not threaten the schemes if the adversary only makes classical queries.
Last updated:  2023-09-01
About “$k$-bit security” of MACs based on hash function Streebog
Vitaly Kiryukhin
Various message authentication codes (MACs), including HMAC-Streebog and Streebog-K, are based on the keyless hash function Streebog. Under the assumption that the compression function of Streebog is resistant to the related key attacks, the security proofs of these algorithms were recently presented at CTCrypt 2022. We carefully detail the resources of the adversary in the related key settings, revisit the proof, and obtain tight security bounds. Let $n$ be the bit length of the hash function state. If the amount of processed data is less than about $2^{n-k}$ blocks, then for HMAC-Streebog-512 and Streebog-K, the only effective method of forgery (or distinguishing) is guessing the $k$-bit secret key or the tag if it is shorter than the key. So, we can speak about ``$k$-bit security'' without specifying the amount of material, if the key length is no longer than half of a state. The bound for HMAC-Streebog-256 is worse and equal to $2^{\frac{n}{2}-k}$ blocks.
Last updated:  2023-10-10
Homomorphic polynomial evaluation using Galois structure and applications to BFV bootstrapping
Hiroki Okada, Rachel Player, and Simon Pohmann
BGV and BFV are among the most widely used fully homomorphic encryption (FHE) schemes. Both schemes have a common plaintext space, with a rich algebraic structure. Our main contribution is to show how this structure can be exploited to more efficiently homomorphically evaluate polynomials. Namely, using Galois automorphisms, we present an algorithm to homomorphically evaluate a polynomial of degree $d$ in only $3\log(d)$ (in some cases only $2\log(d)$) many ciphertext-ciphertext multiplications and automorphism evaluations, where $d$ is bounded by the ring degree. In other words, as long as the degree of the polynomial is bounded, we achieve an exponential speedup compared to the state of the art. In particular, the approach also improves on the theoretical lower bound of $2\sqrt{d}$ many ciphertext-ciphertext multiplications, which would apply if automorphisms were not available. We investigate how to apply our improved polynomial evaluation to the bootstrapping procedure for BFV, and show that we are able to significantly improve its performance. We demonstrate this by providing an implementation of our improved BFV bootstrapping using the Microsoft SEAL library. More concretely, we obtain a $1.6\times$ speed up compared to the prior implementation given by Chen and Han (Eurocrypt 2018). The techniques are independent of, and can be combined with, the more recent optimisations presented by Geelen et al. (Eurocrypt 2023). As an additional contribution, we show how the bootstrapping approach used in schemes such as FHEW and TFHE can be applied in the BFV context. In particular, we demonstrate that programmable bootstrapping can be achieved for BFV. Moreover, we show how this bootstrapping approach can be improved in the BFV context to make better use of the Galois structure. However, we estimate that its complexity is around three orders of magnitude slower than the classical approach to BFV bootstrapping.
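To put the multiplication counts above in context, the sketch below shows a standard baby-step/giant-step (Paterson-Stockmeyer style) evaluation, the $O(\sqrt{d})$-multiplication baseline that the Galois-automorphism technique improves to $O(\log d)$. Plain modular integers stand in for ciphertexts, and only the multiplications that would be ciphertext-ciphertext are counted; this is background, not the paper's algorithm.

    import math

    def evaluate_poly(coeffs, x, modulus):
        # coeffs[i] is the coefficient of x^i; returns (p(x) mod modulus, count of
        # multiplications that would be ciphertext-ciphertext in the HE setting).
        d = len(coeffs) - 1
        k = max(1, math.isqrt(d) + 1)
        mults = 0
        powers = [1, x % modulus]                 # baby steps: x^0 .. x^k
        for _ in range(2, k + 1):
            powers.append(powers[-1] * x % modulus); mults += 1
        result, giant, xk = 0, 1, powers[k]       # giant steps: powers of x^k
        for j in range(0, d + 1, k):
            block = sum(c * powers[i] for i, c in enumerate(coeffs[j:j + k])) % modulus
            result = (result + block * giant) % modulus; mults += 1
            giant = giant * xk % modulus; mults += 1
        return result, mults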
Last updated:  2023-09-01
On security aspects of CRISP
Vitaly Kiryukhin
Using the provable security approach, we analyze CRISP – a standardized Russian cryptographic protocol that aims to ensure confidentiality, integrity of transmitted messages, as well as protection against replay attacks. The protocol is considered as a specific mode of authenticated encryption with associated data (AEAD). We take into account that one key can be used by many protocol participants and in different cipher suites. We impose requirements on the set of the cipher suites used in the protocol and show that the existing ones meet them. Estimates of the maximum allowable amount of data processed using a single key are also given.
Last updated:  2023-09-01
Revisiting the Differential Meet-In-The-Middle Cryptanalysis
Ling Song, Qianqian Yang, and Huimin Liu
The differential meet-in-the-middle (MITM) attack is a new cryptanalysis technique proposed at Crypto 2023 recently. It led to greatly improved attacks on round-reduced SKINNY-128-384 and AES-256. In this paper, we revisit the differential MITM attack and propose several variants by absorbing techniques widely used in the classical differential attack. In particular, we present a new differential MITM attack that generalizes the basic differential MITM attack in several aspects. As for applications, we make refinements to the 24-round attack on SKINNY-128-384; on 12-round AES-256, we show that the classical differential attack and the generalized differential MITM attack perform better than the basic differential MITM attack.
Last updated:  2023-12-28
Short Paper: Accountable Safety Implies Finality
Joachim Neu, Ertem Nusret Tas, and David Tse
Motivated by proof-of-stake (PoS) blockchains such as Ethereum, two key desiderata have recently been studied for Byzantine-fault tolerant (BFT) state-machine replication (SMR) consensus protocols: Finality means that the protocol retains consistency, as long as less than a certain fraction of validators are malicious, even in partially-synchronous environments that allow for temporary violations of assumed network delay bounds. Accountable safety means that in any case of inconsistency, a certain fraction of validators can be identified to have provably violated the protocol. Earlier works have developed impossibility results and protocol constructions for these properties separately. We show that accountable safety implies finality, thereby unifying earlier results.
Last updated:  2023-08-31
Device-Oriented Group Messaging: A Formal Cryptographic Analysis of Matrix’ Core
Martin R. Albrecht, Benjamin Dowling, and Daniel Jones
Focusing on its cryptographic core, we provide the first formal description of the Matrix secure group messaging protocol. Observing that no existing secure messaging model in the literature captures the relationships (and shared state) between users, their devices and the groups they are a part of, we introduce the Device-Oriented Group Messaging model to capture these key characteristics of the Matrix protocol. Utilising our new formalism, we determine that Matrix achieves the basic security notions of confidentiality and authentication, provided it introduces authenticated group membership. On the other hand, while the state sharing functionality in Matrix conflicts with advanced security notions in the literature – forward and post-compromise security – it enables features such as history sharing and account recovery, provoking broader questions about how such security notions should be conceptualised.
Last updated:  2023-08-31
A New RSA Variant Based on Elliptic Curves
Maher Boudabra and Abderrahmane Nitaj
We propose a new scheme based on ephemeral elliptic curves over the ring $\mathbb{Z}/n\mathbb{Z}$ where $n=pq$ is an RSA modulus with $p=u_p^2+v_p^2$, $q=u_q^2+v_q^2$, $u_p\equiv u_q\equiv 3\pmod 4$. The new scheme is a variant of both the RSA and the KMOV cryptosystems. The scheme can be used for both signature and encryption. We study the security of the new scheme and show that it is immune against factorization attacks, discrete logarithm problem attacks, sum of two squares attacks, sum of four squares attacks, isomorphism attacks, and homomorphism attacks. Moreover, we show that the private exponents can be much smaller than the ordinary exponents for RSA and KMOV, which makes the decryption phase in the new scheme more efficient.
Last updated:  2023-08-31
NEV: Faster and Smaller NTRU Encryption using Vector Decoding
Jiang Zhang, Dengguo Feng, and Di Yan
In this paper, we present NEV -- a faster and smaller NTRU Encryption using Vector decoding, which is provably IND-CPA secure in the standard model under the decisional NTRU and RLWE assumptions over the cyclotomic ring $R_q = \mathbb{Z}_q[X]/(X^n+1)$. Our main technique is a novel and non-trivial way to integrate a previously known plaintext encoding and decoding mechanism into the provably IND-CPA secure NTRU variant by Stehl\'e and Steinfeld (Eurocrypt 2011). Unlike the original NTRU encryption and its variants which encode the plaintext into the least significant bits of the coefficients of a message polynomial, we encode each plaintext bit into the most significant bits of multiple coefficients of the message polynomial, so that we can use a vector of noised coefficients to decode each plaintext bit in decryption, and significantly reduce the size of $q$ with a reasonably negligible decryption failure. Concretely, we can use $q = 769$ to obtain public keys and ciphertexts of 615 bytes with decryption failure $\leq 2^{-138}$ at NIST level 1 security, and 1229 bytes with decryption failure $\leq 2^{-152}$ at NIST level 5 security. By applying the Fujisaki-Okamoto transformation in a standard way, we obtain an IND-CCA secure KEM from our basic PKE scheme. Compared to NTRU and Kyber in the NIST Round 3 finalists at the same security levels, our KEM is 33-48% more compact and 5.03-29.94X faster than NTRU in the round-trip time of ephemeral key exchange, and is 21% more compact and 1.42-1.74X faster than Kyber. We also give an optimized encryption scheme NEV' with better noise tolerance (and slightly better efficiency) based on a variant of the RLWE problem, called Subset-Sum Parity RLWE problem, which we show is polynomially equivalent to the standard decisional RLWE problem (with different parameters), and may be of independent interest.
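As a toy illustration of the redundant most-significant-bit encoding described above (and not NEV itself, whose parameters and decoder differ), the sketch below spreads one plaintext bit over several coefficients modulo a small $q$ and decodes it from the whole noised vector with a soft-decision rule; the modulus $q=769$ is taken from the abstract, while the redundancy, noise bound and decision rule are illustrative assumptions.

```python
# Toy redundant MSB encoding of one bit into several noised coefficients mod q,
# decoded from the whole vector rather than from a single coefficient.
import random

q, t = 769, 8                       # small modulus from the abstract; t redundant coefficients

def center(a):
    """Map a mod q to the centered representative in (-q/2, q/2]."""
    a %= q
    return a - q if a > q // 2 else a

def encode(bit, noise_bound):
    """Each coefficient carries bit * floor(q/2) plus independent noise."""
    return [(bit * (q // 2) + random.randint(-noise_bound, noise_bound)) % q
            for _ in range(t)]

def decode(vec):
    """Soft-decision vector decode: is the vector closer to 0 or to floor(q/2)?"""
    dist0 = sum(abs(center(c)) for c in vec)
    dist1 = sum(abs(center(c - q // 2)) for c in vec)
    return 1 if dist1 < dist0 else 0

random.seed(1)
trials = 10_000
noise_bound = 230                   # large enough that single coefficients may decode wrongly
errors = sum(decode(encode(b, noise_bound)) != b
             for _ in range(trials) for b in (0, 1))
print(f"decoding errors: {errors}/{2 * trials}")
```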
Last updated:  2023-08-31
Entropic Quasigroup Based Secret Agreement Using Large Order Automorphisms
Daniel Nager
In this paper a method to build Secret Agreement algorithms is presented, which only requires an abelian group and at least one automorphism of the operator of this group. An example of such an algorithm is also presented. Knowledge of entropic quasigroups and the Bruck-Murdoch-Toyoda theorem on how to build a quasigroup with these two elements is assumed.
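For readers unfamiliar with the assumed background, the sketch below illustrates the Bruck-Murdoch-Toyoda construction referred to above: over the abelian group $(\mathbb{Z}_p,+)$ with commuting automorphisms $x\mapsto ax$ and $y\mapsto by$, the operation $x*y = ax+by+c$ is an entropic (medial) quasigroup. The concrete parameters are arbitrary illustrative choices, not taken from the paper.

```python
# Toyoda-form entropic quasigroup over (Z_p, +): x*y = a*x + b*y + c.
import random

p = 2**31 - 1                # a prime, so nonzero scalars are automorphisms of (Z_p, +)
a, b, c = 48271, 16807, 12345

def op(x, y):
    """Entropic quasigroup operation in Toyoda form over Z_p."""
    return (a * x + b * y + c) % p

# The medial (entropic) identity: (x*y)*(u*v) == (x*u)*(y*v) for all inputs.
random.seed(0)
for _ in range(1000):
    x, y, u, v = (random.randrange(p) for _ in range(4))
    assert op(op(x, y), op(u, v)) == op(op(x, u), op(y, v))
print("medial identity holds on all sampled tuples")
```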
Last updated:  2023-08-31
A note on ``blockchain-assisted authentication and key agreement scheme for fog-based smart grid''
Zhengjun Cao and Lihua Liu
We show that the scheme [Clust. Comput. 25(1): 451-468, 2022] fails to keep anonymity, contrary to what is claimed. The scheme simply equates user anonymity with protecting the target user's true identity against exposure, even though its long-term pseudo-identity can be exposed. We want to clarify that true anonymity means that an adversary cannot attribute different sessions to the same target user, even if the adversary cannot recover the true identifier from the long-term pseudo-identifier. We also clarify some misunderstandings in the scheme.
Last updated:  2023-08-31
Towards Minimizing Non-linearity in Type-II Generalized Feistel Networks
Yuqing Zhao, Chun Guo, and Weijia Wang
Recent works have revisited blockcipher structures to achieve MPC- and ZKP-friendly designs. In particular, Albrecht et al. (EUROCRYPT 2015) first pioneered the use of a novel structure, SP networks with partial non-linear layers (P-SPNs), and then (ESORICS 2019) repopularized the use of multi-line generalized Feistel networks (GFNs). In this paper, we continue exploring symmetric cryptographic constructions that are conducive to applications such as MPC. In order to study the minimization of non-linearity in Type-II Generalized Feistel Networks, we generalize the (extended) GFN by replacing the bit-wise shuffle in a GFN with the stronger linear layer in P-SPN and introducing the key in each round. We call this scheme Generalized Extended Generalized Feistel Network (GEGFN). When the block-functions (or S-boxes) are public random permutations or (domain-preserving) functions, we prove CCA security for the 5-round GEGFN. Our results also hold when the block-functions are over the prime fields $\mathbb{F}_p$, yielding blockcipher constructions over $(\mathbb{F}_p)^*$.
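For orientation, the sketch below shows one round of a plain Type-II generalized Feistel network, i.e. the baseline structure whose bit-wise branch rotation GEGFN replaces with a stronger linear layer and a per-round key; the round function and all constants are illustrative stand-ins, not the paper's construction.

```python
# One round of a standard Type-II generalized Feistel network on 16-bit branches.
def F(x, k):
    """Toy keyed round function on 16-bit words (illustrative only)."""
    x = (x + k) & 0xFFFF
    x ^= (x << 5) & 0xFFFF
    return (x >> 3) | ((x << 13) & 0xFFFF)   # rotate right by 3

def type2_gfn_round(branches, round_keys):
    """branches: an even number of 16-bit words; one key per even-indexed branch."""
    out = list(branches)
    for i in range(0, len(out), 2):           # F on even branches, XOR into odd neighbours
        out[i + 1] ^= F(out[i], round_keys[i // 2])
    return out[1:] + out[:1]                  # cyclic left rotation of the branches

state = [0x0123, 0x4567, 0x89AB, 0xCDEF]      # 4 branches (2 Feistel lines)
keys = [0x1111, 0x2222]
print([hex(w) for w in type2_gfn_round(state, keys)])
```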
Last updated:  2023-08-29
PrivMail: A Privacy-Preserving Framework for Secure Emails
Gowri R Chandran, Raine Nieminen, Thomas Schneider, and Ajith Suresh
Emails have improved our workplace efficiency and communication. However, they are often processed unencrypted by mail servers, leaving them open to data breaches on a single service provider. Public-key based solutions for end-to-end secured email, such as Pretty Good Privacy (PGP) and Secure/Multipurpose Internet Mail Extensions (S/MIME), are available but are not widely adopted due to usability obstacles and also hinder processing of encrypted emails. We propose PrivMail, a novel approach to secure emails using secret sharing methods. Our framework utilizes Secure Multi-Party Computation techniques to relay emails through multiple service providers, thereby preventing any of them from accessing the content in plaintext. Additionally, PrivMail supports private server-side email processing similar to IMAP SEARCH, and eliminates the need for cryptographic certificates, resulting in better usability than public-key based solutions. An important aspect of our framework is its capability to enable third-party searches on user emails while maintaining the privacy of both the email and the query used to conduct the search. We integrate PrivMail into the current email infrastructure and provide a Thunderbird plugin to enhance user-friendliness. To evaluate our solution, we benchmarked transfer and search operations using the Enron Email Dataset and demonstrate that PrivMail is an effective solution for enhancing email security.
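The core sharing step can be illustrated with a minimal sketch, assuming two relaying providers and plain XOR secret sharing; this is not PrivMail's full protocol (and in particular not its MPC-based search), and the function names are ours. Each provider sees only one share, which on its own is uniformly random.

```python
# XOR secret sharing of an email body across two independent providers.
import secrets

def share(message: bytes):
    """Split message into two XOR shares; each share alone is uniformly random."""
    r = secrets.token_bytes(len(message))
    return r, bytes(m ^ s for m, s in zip(message, r))

def reconstruct(share_a: bytes, share_b: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(share_a, share_b))

body = b"Quarterly numbers attached. Please keep confidential."
s1, s2 = share(body)          # s1 relayed via provider A, s2 via provider B
assert reconstruct(s1, s2) == body
```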
Last updated:  2023-08-29
Applications of Finite non-Abelian Simple Groups to Cryptography in the Quantum Era
María Isabel González Vasco, Delaram Kahrobaei, and Eilidh McKemmie
The theory of finite simple groups is a (rather unexplored) area likely to provide interesting computational problems and modelling tools useful in a cryptographic context. In this note, we review some applications of finite non-abelian simple groups to cryptography and discuss different scenarios in which this theory is clearly central, providing the relevant definitions to make the material accessible to both cryptographers and group theorists, in the hope of stimulating further interaction between these two (non-disjoint) communities. In particular, we look at constructions based on various group-theoretic factorization problems, review group theoretical hash functions, and discuss fully homomorphic encryption using simple groups. The Hidden Subgroup Problem is also briefly discussed in this context.
Last updated:  2023-08-29
Enhancing Data Security: A Study of Grain Cipher Encryption using Deep Learning Techniques
Payal, Pooja, and Girish Mishra
Data security has become a paramount concern in the age of data-driven applications, necessitating the deployment of robust encryption techniques. This paper presents an in-depth investigation into the strength and randomness of the keystream generated by the Grain cipher, a widely employed stream cipher in secure communication systems. To achieve this objective, we propose the construction of sophisticated deep learning models for keystream prediction and evaluation. This research deepens our understanding of the encryption robustness offered by the Grain cipher by harnessing deep learning models for cryptanalysis. The insights garnered from this study hold significant promise for guiding the development of more resilient encryption algorithms, thereby reinforcing the security of data transmission across diverse applications.
Last updated:  2023-08-29
On the Invalidity of LV16/Lin17 Obfuscation Schemes Revisited
Yupu Hu, Siyue Dong, Baocang Wang, and Xingting Dong
LV16/Lin17 IO schemes are well-known advances towards simplifying the obfuscation mechanism. In fact, these two schemes only constructed two compact functional encryption (CFE) algorithms, while the remaining components were taken from the AJ15 IO frame or the BV15 IO frame. CFE algorithms are inserted into the AJ15 IO frame or BV15 IO frame to form a complete IO scheme. We previously stated the invalidity of LV16/Lin17 IO schemes: more specifically, under the reasonable assumption of “real white box (RWB)”, LV16/Lin17 CFE algorithms inserted into the AJ15 IO frame are insecure. In this paper, we continue to state the invalidity of LV16/Lin17 IO schemes. The conclusion of this paper is that LV16/Lin17 CFE algorithms inserted into the BV15 IO frame are insecure. The reasoning of this paper is composed of the following three steps. First, when LV16/Lin17 CFE algorithms are inserted into the BV15 IO frame, all secret random numbers must be changed into secret constants. Second, when all secret random numbers are changed into secret constants, the component functions in LV16/Lin17 CFE algorithms become cryptologically weak functions, and the shapes of these component functions can be easily obtained by chosen values of the independent variables. Finally, the shapes of these component functions include parameters of the original function, therefore the IO scheme is insecure.
Last updated:  2023-08-28
Comparative Analysis of ResNet and DenseNet for Differential Cryptanalysis of SPECK 32/64 Lightweight Block Cipher
Ayan Sajwan and Girish Mishra
This research paper explores the vulnerabilities of the lightweight block cipher SPECK 32/64 through the application of differential analysis and deep learning techniques. The primary objectives of the study are to investigate the cipher’s weaknesses and to compare the effectiveness of ResNet, as used by Aron Gohr at Crypto 2019, and DenseNet. The methodology involves conducting an analysis of differential characteristics to identify potential weaknesses in the cipher’s structure. Experimental results and analysis demonstrate the efficacy of both approaches in compromising the security of SPECK 32/64.
Last updated:  2023-08-28
Fully Tally-Hiding Verifiable E-Voting for Real-World Elections with Seat-Allocations
Carmen Wabartha, Julian Liedtke, Nicolas Huber, Daniel Rausch, and Ralf Kuesters
Modern e-voting systems provide what is called verifiability, i.e., voters are able to check that their votes have actually been counted despite potentially malicious servers and voting authorities. Some of these systems, called tally-hiding systems, provide increased privacy by revealing only the actual election result, e.g., the winner of the election, but no further information that is supposed to be kept secret. However, due to these very strong privacy guarantees, supporting complex voting methods at a real-world scale has proven to be very challenging for tally-hiding systems. A widespread class of elections, and at the same time one of the most involved ones, is parliamentary elections with party-based seat allocation. These elections are performed for millions of voters, dozens of parties, and hundreds of individual candidates competing for seats; they also use very sophisticated multi-step algorithms to compute the final assignment of seats to candidates based on, e.g., party lists, hundreds of electoral constituencies, possibly additional votes for individual candidates, overhang seats, and special exceptions for minorities. So far, it has not been investigated whether and to what extent such elections can be performed in a verifiable tally-hiding manner. In this work, we design and implement the first verifiable (fully) tally-hiding e-voting system for an election from this class, namely, for the German parliament (Bundestag). As part of this effort, we propose several new tally-hiding building blocks that are of independent interest. We perform benchmarks based on actual election data, which show, perhaps surprisingly, that our proposed system is practical even at a real-world scale. Our work thus serves as a foundational feasibility study for this class of elections.
Last updated:  2023-08-28
An erf Analog for Discrete Gaussian Sampling
Nicolas Gama, Anand Kumar Narayanan, Ryder LiuLin, and Dongze Yue
Most current lattice-based cryptosystems rely on finding Gaussian samples from a lattice that are close to a given target. To that end, two popular distributions have been historically defined and studied: the Rounded Gaussian distribution and the Discrete Gaussian distribution. The first one is nearly trivial to sample: simply round the coordinates of continuous Gaussian samples to their nearest integer. Unfortunately, the security of the resulting cryptosystems is not as well understood. In contrast, the second distribution is only implicitly defined by a restriction of the support of the continuous Gaussian distribution to the discrete lattice points. Thus, algorithms to sample from this distribution are more involved, even in dimension one. The justification for exerting this computational effort is that the resulting lattice-based cryptographic schemes are validated by rigorous security proofs, often by leveraging the fact that the distribution is radial and discrete Gaussians behave well under convolutions, enabling arithmetic between samples, as well as decomposition across dimensions. In this work, we unify both worlds. We construct, out of infinite series, the cumulative distribution function of a new continuous distribution that acts as a surrogate for the cumulative distribution of the discrete Gaussian. If $\mu$ is a center and $x$ a sample of this distribution, then rounding $\mu+x$ yields a faithful Discrete Gaussian sample. This new sampling algorithm naturally splits into a pre-processing/offline phase and a very efficient online phase. The online phase is simple and has a trivial constant time implementation. Modulo the offline phase, our algorithm offers both the efficiency of rounding and the security guarantees associated with discrete Gaussian sampling.
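A minimal sketch of the two one-dimensional distributions contrasted above, assuming a truncated support for the discrete case; it only illustrates why rounding is trivial while discrete Gaussian sampling takes more work, and is not the paper's erf-analog sampler.

```python
# Rounded Gaussian vs. discrete Gaussian in dimension one.
import math
import random

def rounded_gaussian(mu, sigma):
    """Round a continuous Gaussian sample to the nearest integer."""
    return round(random.gauss(mu, sigma))

def discrete_gaussian(mu, sigma, tau=12):
    """Inverse-CDF sampling, proportional to exp(-(x-mu)^2 / (2*sigma^2)) on integers,
    over the truncated support [mu - tau*sigma, mu + tau*sigma]."""
    lo, hi = math.floor(mu - tau * sigma), math.ceil(mu + tau * sigma)
    weights = [math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) for x in range(lo, hi + 1)]
    u, acc = random.random() * sum(weights), 0.0
    for x, w in zip(range(lo, hi + 1), weights):
        acc += w
        if u <= acc:
            return x
    return hi

random.seed(0)
samples = [discrete_gaussian(mu=0.37, sigma=3.2) for _ in range(10_000)]
print(sum(samples) / len(samples))   # empirical mean, close to the center 0.37
```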
Last updated:  2024-02-29
To extend or not to extend: Agile Masking Instructions for PQC
Markus Krausz, Georg Land, Florian Stolz, Dennis Naujoks, Jan Richter-Brockmann, Tim Güneysu, and Lucie Kogelheide
Splitting up sensitive data into multiple shares – termed masking – has proven an effective countermeasure against various types of Side-Channel Analysis (SCA) on cryptographic implementations. However, in software this approach not only leads to dramatic performance overheads for non-linear operations, but also suffers from microarchitectural leakage, which is hard to avoid. Both problems can be addressed with one solution: masked hardware accelerators. In this context, Gao et al. [GGM+ 21] presented a RISC-V Instruction Set Extension (ISE) with masked Boolean and arithmetic instructions to accelerate masked software implementations of block ciphers. In this work, we demonstrate how this ISE can be applied to and extended for Post-Quantum Cryptography (PQC) components, forming a crypto-agile solution. We provide masked implementations based on three different ISE constellations for multiple highly relevant components, including Cumulative Distribution Table (CDT) sampling and polynomial rotation, which, to the best of our knowledge, have not been masked before. With the masked instructions, we measure speedups of more than one order of magnitude compared to sophisticated bitsliced implementations and even up to two orders of magnitude for non-bitsliced implementations. We assert the first-order security of our implementation with a practical evaluation.
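To make the cost asymmetry concrete, here is a minimal sketch of generic first-order Boolean masking (textbook material, not the paper's ISE): XOR is computed share-wise for free, whereas an AND gadget needs fresh randomness and cross terms, which is precisely the kind of non-linear operation the masked instructions accelerate.

```python
# First-order Boolean masking of 32-bit words: x is held as (x0, x1) with x = x0 XOR x1.
import secrets

MASK = 0xFFFFFFFF

def mask(x):
    r = secrets.randbits(32)
    return r, (x ^ r) & MASK

def masked_xor(a, b):                # linear: share-wise, no fresh randomness needed
    return (a[0] ^ b[0]) & MASK, (a[1] ^ b[1]) & MASK

def masked_and(a, b):                # non-linear: fresh randomness plus cross terms
    r = secrets.randbits(32)
    c0 = (a[0] & b[0]) ^ r
    c1 = (a[1] & b[1]) ^ r ^ (a[0] & b[1]) ^ (a[1] & b[0])
    return c0 & MASK, c1 & MASK

def unmask(a):
    return (a[0] ^ a[1]) & MASK

x, y = 0xDEADBEEF, 0x0BADF00D
xs, ys = mask(x), mask(y)
assert unmask(masked_xor(xs, ys)) == x ^ y
assert unmask(masked_and(xs, ys)) == x & y
```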
Last updated:  2023-09-13
Quantum Attacks on Hash Constructions with Low Quantum Random Access Memory
Xiaoyang Dong, Shun Li, Phuong Pham, and Guoyan Zhang
At ASIACRYPT 2022, Benedikt, Fischlin, and Huppert proposed the quantum herding attacks on iterative hash functions for the first time. Their attack needs exponential quantum random access memory (qRAM), more precisely $2^{0.43n}$ quantum accessible classical memory (QRACM). As the existence of large qRAM is questionable, Benedikt et al. leave an open question on building low-qRAM quantum herding attacks. In this paper, we answer this open question by building a quantum herding attack, where the time complexity is slightly increased from Benedikt et al.'s $2^{0.43n}$ to our $2^{0.46n}$, but it does not need qRAM anymore (abbreviated as no-qRAM). Besides, we also introduce various low-qRAM or no-qRAM quantum attacks on hash concatenation combiner, hash XOR combiner, Hash-Twice, and Zipper hash functions.
Last updated:  2023-10-18
Waffle: An Online Oblivious Datastore for Protecting Data Access Patterns
Sujaya Maiyya, Sharath Vemula, Divyakant Agrawal, Amr El Abbadi, and Florian Kerschbaum
We present Waffle, a datastore that protects an application’s data access patterns from a passive persistent adversary. Waffle achieves this without prior knowledge of the input data access distribution, making it the first of its kind to adaptively handle input sequences under a passive persistent adversary. Waffle maintains a constant bandwidth and client-side storage overhead, which can be adjusted to suit the application owner’s preferences. This flexibility allows the owner to fine-tune system parameters and strike a balance between security and performance. Our evaluation, utilizing the Yahoo! Cloud Serving Benchmark (YCSB) benchmark and Redis as the backend storage, demonstrates promising results. The insecure baseline outperforms Waffle by a mere 5-6x, whereas Waffle outperforms Pancake—a state-of-the-art oblivious datastore under passive persistent adversaries—by 45-57%, and a concurrent ORAM system, TaoStore, by 102x.
Last updated:  2025-02-20
Improving logarithmic derivative lookups using GKR
Shahar Papini and Ulrich Haböck
In this informal note, we instantiate the Goldwasser-Kalai-Rothblum (GKR) protocol to prove fractional sumchecks as present in lookup arguments based on logarithmic derivatives, with the following impact on the prover cost of logUp (IACR eprint 2022/1530): When looking up $M\geq 1$ columns in a (for the sake of simplicity) single column table, the prover has to commit only to a single extra column, i.e. the multiplicities of the table entries. In order to carry over the GKR fractional sumcheck to the univariate setting, we furthermore introduce a simple, yet (as far as we know) novel transformation for turning a univariate polynomial commitment scheme into a multilinear one. The transformation complements existing approaches and might be of independent interest for its elegant way to prove arbitrary powers of the lexicographic shift over the Boolean hypercube.
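For context, the fractional sums in question come from the logarithmic-derivative lookup identity, which the following toy sketch checks numerically at a random field point (assuming all looked-up values lie in the table; this is only the identity behind logUp, not the GKR protocol of the note).

```python
# If every looked-up value w_i appears in the table {t_j}, then as rational functions
#   sum_i 1/(X - w_i) == sum_j m_j/(X - t_j),
# where m_j is the number of times t_j is looked up; so the identity holds at a
# random field point with overwhelming probability (Schwartz-Zippel).
import random
from collections import Counter

p = 2**61 - 1                                   # a Mersenne prime serving as a toy field

def frac_sum(values, point, mults=None):
    mults = mults or {v: 1 for v in values}
    return sum(mults[v] * pow((point - v) % p, p - 2, p) for v in values) % p

random.seed(42)
table = list(range(256))                                  # the lookup table
lookups = [random.randrange(256) for _ in range(1000)]    # the column being looked up
multiplicities = Counter(lookups)

x = random.randrange(1, p)                      # random evaluation point (w.h.p. not in the table)
assert frac_sum(lookups, x) == frac_sum(table, x, multiplicities)
```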
Last updated:  2024-07-15
A Univariate Attack against the Limited-Data Instance of Ciminion
Augustin Bariant
With the increasing interest in advanced protocols for Multi Party Computation, Fully-Homomorphic Encryption or Zero Knowledge proofs, a need for cryptographic algorithms with new constraints has emerged. These algorithms, called Arithmetization-Oriented ciphers, seek to minimize the number of field multiplications in large finite fields $\mathbb{F}_{2^n}$ or $\mathbb{F}_{p}$. Among them, Ciminion is an encryption algorithm proposed by Dobraunig et al. at Eurocrypt 2021. In this paper, we present a new univariate modelling of a variant of Ciminion proposed by the designers. This instance restricts the attacker to at most $2^{s/2}$ data, where $s$ is the security level. Because the designers chose to reduce the number of rounds in that specific attacker model, we are able to attack the cipher for large security levels. We also propose some slight modifications of Ciminion that would overcome this vulnerability.
Last updated:  2024-07-11
Proof-Carrying Data from Multi-folding Schemes
Zibo Zhou, Zongyang Zhang, Zhiyu Zhang, and Jin Dong
Proof-carrying data (PCD) is a cryptographic primitive enabling mutually distrustful parties to perform distributed computations on directed acyclic graphs with efficient and incremental verification. Key performance metrics include the prover cost at each step and the recursion overhead, which measures the additional cost beyond proving the original computation. Despite substantial advancements in constructing efficient PCD schemes, these metrics continue to be bottlenecks hindering their widespread application. In this paper, we advance the research by constructing a new PCD scheme based on a new generalized construction of multi-folding schemes. Compared with the state-of-the-art PCD scheme by Bünz et al. (CRYPTO'21), our scheme reduces the prover cost at each step from $4r+6$ multi-scalar multiplications (MSMs) of size $O(|C|)$ to $1$ MSM of the same size, and the recursion overhead from $6$ MSMs of size $2r-1$, $1$ MSM of size $6r-3$ to $1$ MSM of size $2r-1$, where $r$ is the number of incoming edges at a certain step and $|C|$ is the proving computation size. Additionally, our PCD scheme supports a more expressive constraint system for encoding computations, namely the customizable constraint system, which supports high-degree constraints, in contrast to the rank-1 constraint system adopted by existing PCD schemes that only supports quadratic constraints. We implement our PCD scheme and report the concrete recursion overhead and practical efficiency for different values of $r$ and $|C|$. Compared with Bünz et al. (CRYPTO'21), our PCD scheme achieves a $2.5$ times lower recursion overhead when $r=2$ and $|C|=2^{20}$. Additionally, when $r=2$ and the proving computation size (excluding recursion overhead) is $2^{24}$, it takes $49$ seconds to generate a PCD proof at each step. Using a SNARK to compress the proof reduces the proof size from $1031$ MB to $13$ KB, with a tradeoff in the verifier time, which increases from $11$ seconds to $22$ seconds.
Last updated:  2023-08-25
Leveraging Machine Learning for Bidding Strategies in Miner Extractable Value (MEV) Auctions
Christoffer Raun, Benjamin Estermann, Liyi Zhou, Kaihua Qin, Roger Wattenhofer, Arthur Gervais, and Ye Wang
The emergence of blockchain technologies as central components of financial frameworks has amplified the extraction of market inefficiencies, such as arbitrage, through Miner Extractable Value (MEV) from Decentralized Finance smart contracts. Exploiting these opportunities often requires fee payment to miners and validators, colloquially termed bribes. The recent development of centralized MEV relayers has led to these payments shifting from the public transaction pool to private channels, with the objective of mitigating information leakage and curtailing execution risk. This transition instigates highly competitive first-price auctions for MEV. However, effective bidding strategies for these auctions remain unclear. This paper examines the bidding behavior of MEV bots using Flashbots' private channels, shedding light on the opaque dynamics of these auctions. We gather and analyze transaction data for the entire operational period of Flashbots, providing an extensive view of the current Ethereum MEV extraction landscape. Additionally, we engineer machine learning models that forecast winning bids whilst increasing profitability, capitalizing on our comprehensive transaction data analysis. Given our unique status as an adaptive entity, the findings reveal that our machine learning models can secure victory in more than 50% of Flashbots auctions, consequently yielding superior returns in comparison to current bidding strategies in arbitrage MEV auctions. Furthermore, the study highlights the relative advantages of adaptive constant bidding strategies in sandwich MEV auctions.
Last updated:  2023-08-31
Quantum Security of TNT
Shuping Mao, Zhiyu Zhang, Lei Hu, Luying Li, and Peng Wang
Many classical secure structures are broken by quantum attacks. Evaluating the quantum security of a structure and providing a tight security bound is a challenging research area. As a tweakable block cipher structure based on block ciphers, $\mathsf{TNT}$ was proven to have $O(2^{3n/4})$ CPA and $O(2^{n/2})$ CCA security in the classical setting. We prove that $\mathsf{TNT}$ is a quantum-secure tweakable block cipher with a bound of $O(2^{n/6})$. In addition, we show the tight quantum PRF security bound of $O(2^{n/3})$ when $\mathsf{TNT}$ is based on random functions, which is better than $O(2^{n/4})$ given by Bhaumik et al. and solves their open problem. Our proof uses the recording standard oracle with errors technique of Hosoyamada and Iwata based on Zhandry’s compressed oracle technique.
Last updated:  2023-08-29
General Non-interactive Quantum Commitments Are Compatible with Quantum Rewinding
Jun Yan
In this work, we show that general non-interactive quantum commitments (allowing quantum computation and communication) to classical messages are compatible with currently known quantum-rewinding techniques. Specifically, we first propose a definition of collapse-binding of quantum commitments which generalizes from its post-quantum counterpart and is shown to work well with quantum rewinding. Then we show that thus defined collapse-binding is equivalent to the conceivably minimal unique-message-binding. This in particular implies that canonical quantum bit commitments are collapse-binding and can be used to instantiate many cryptographic applications. Additionally, we rephrase the flavor conversion of canonical quantum bit commitments as a hardness conversion, which then can be used to establish a stronger quantum indistinguishability that works well with quantum rewinding just like in the post-quantum setting. Such indistinguishability allows us to establish the security of the Goldreich-Kahan construction of constant-round zero-knowledge proofs for NP instantiated with canonical quantum bit commitments. We thus for the first time construct a constant-round (actually, four-round) quantum computational zero-knowledge proof for NP based on the minimum complexity assumption that is needed for the complexity-based quantum cryptography.
Last updated:  2023-08-24
Compositional Formal Verification of Zero-Knowledge Circuits
Alessandro Coglio, Eric McCarthy, Eric Smith, Collin Chin, Pranav Gaddamadugu, and Michel Dellepere
We provide a preliminary report of our ongoing work in formally defining and verifying, in a compositional way, the R1CS gadgets generated by Aleo's snarkVM. The approach is applicable to other systems that generate gadgets in a similar manner, and that may use non-R1CS representations.
Last updated:  2023-10-23
Dually Computable Cryptographic Accumulators and Their Application to Attribute Based Encryption
Anaïs Barthoulot, Olivier Blazy, and Sébastien Canard
In 1993, Benaloh and De Mare introduced the cryptographic accumulator, a primitive that allows the representation of a set of values by a short object (the accumulator) and offers the possibility to prove that some input values are in the accumulator. For this purpose, so-called asymmetric accumulators require the creation of an additional cryptographic object, called a witness. Through the years, several instantiations of accumulators were proposed either based on number-theoretic assumptions, hash functions, bilinear pairings or more recently lattices. In this work, we present the first instantiation of an asymmetric cryptographic accumulator that allows private computation of the accumulator but public witness creation. This is obtained thanks to our unique combination of the pairing-based accumulator of Nguyen with dual pairing vector spaces. We moreover introduce the new concept of dually computable cryptographic accumulators, in which we offer two ways to compute the representation of a set: either privately (using a dedicated secret key) or publicly (using only the scheme's public key), while there is a unique witness creation for both cases. All our constructions of accumulators have constant-size accumulated values and witnesses, and satisfy the accumulator security property of collision resistance, meaning that it is not possible to forge a witness for an element that is not in the accumulated set. As a second contribution, we show how our new concept of dually computable cryptographic accumulator can be used to build a Ciphertext Policy Attribute Based Encryption (CP-ABE). Our resulting scheme permits policies expressed as disjunctions of conjunctions (without ``NO'' gates), and is adaptively secure in the standard model. This is the first CP-ABE scheme having both constant-size user secret keys and ciphertexts (i.e. independent of the number of attributes in the scheme, or the policy size). For the first time, we provide a way to use cryptographic accumulators for both key management and the encryption process.
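To make the accumulate/witness/verify interface described in the opening sentences concrete, here is a hedged toy in the classic RSA-style flavour of Benaloh and De Mare, not the paper's pairing-based, dually computable construction; the tiny modulus and the prime-valued elements are illustrative shortcuts (real schemes use a large RSA modulus and hash-to-prime).

```python
# Toy RSA-style accumulator: acc = g^(product of set elements) mod N,
# witness for x = g^(product of the other elements), verified as w^x == acc.
from math import prod

p_, q_ = 2003, 2011                   # toy primes; insecure, for illustration only
N, g = p_ * q_, 3

elements = [101, 103, 107, 109]       # set members, kept prime so exponents are unambiguous

def accumulate(S):
    return pow(g, prod(S), N)

def witness(S, x):
    return pow(g, prod(e for e in S if e != x), N)

def verify(acc, x, w):
    return pow(w, x, N) == acc

acc = accumulate(elements)
w = witness(elements, 103)
assert verify(acc, 103, w)            # membership proof for 103
assert not verify(acc, 113, w)        # the same witness does not vouch for a non-member
```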
Last updated:  2023-08-24
Witness Authenticating NIZKs and Applications
Hanwen Feng and Qiang Tang
We initiate the study of witness authenticating NIZK proof systems (waNIZKs), in which one can use a witness $w$ of a statement $x$ to identify whether a valid proof for $x$ is indeed generated using $w$. Such a new identification functionality enables more diverse applications, and it also puts new requirements on soundness: (1) no adversary can generate a valid proof that will not be identified by any witness; (2) no adversary can forge a proof using some valid witness to frame others. To work around the obvious obstacle towards conventional zero-knowledgeness, we define entropic zero-knowledgeness, which requires the proof to leak no partial information if the witness has sufficient computational entropy. We give a formal treatment of this new primitive. The modeling turns out to be quite involved: multiple subtle points arise and particular care is required. We present general constructions from standard assumptions. We also demonstrate three applications in non-malleable (perfectly one-way) hash functions, group signatures with verifier-local revocation, and plaintext-checkable public-key encryption. Our waNIZK provides a new tool to advance the state of the art in all these applications.
Last updated:  2024-09-23
Post-Quantum Asynchronous Remote Key Generation for FIDO2 Account Recovery
Jacqueline Brendel, Sebastian Clermont, and Marc Fischlin
The Fast IDentity Online (FIDO) Alliance has developed the widely adopted FIDO2 protocol suite that allows for passwordless online authentication. Cryptographic keys stored on a user's device (e.g. their smartphone) are used as credentials to authenticate to services by performing a challenge-response protocol. Yet, this approach leaves users unable to access their accounts in case their authenticator is lost. The device manufacturer Yubico thus proposed a FIDO2-compliant mechanism that allows users to easily create backup authenticators. Frymann et al. (CCS 2020) first analyzed the cryptographic core of this proposal by introducing the new primitive of Asynchronous Remote Key Generation (ARKG) and accompanying security definitions. Later works instantiated ARKG both from classical and post-quantum assumptions (ACNS 2023, EuroS&P 2023). As we will point out in this paper, the security definitions put forward and used in these papers do not adequately capture the desired security requirements in FIDO2-based authentication and recovery. This issue was also identified in independent and concurrent work by Stebila and Wilson (AsiaCCS 2024), who proposed a new framework for the analysis of account recovery mechanisms, along with a secure post-quantum instantiation from KEMs and key-blinding signature schemes. In this work, we propose alternative security definitions for the primitive ARKG when used inside an account recovery mechanism in FIDO2. We give a secure instantiation from KEMs and standard signature schemes, which may in particular provide post-quantum security. Our solution strikes a middle ground between the compact, but (for this particular use case) inadequate security notions put forward by Frymann et al., and the secure, but more involved and highly tailored model introduced by Stebila and Wilson.
Last updated:  2023-08-24
ACABELLA: Automated (Crypt)analysis of Attribute-Based Encryption Leveraging Linear Algebra
Antonio de la Piedra, Marloes Venema, and Greg Alpár
Attribute-based encryption (ABE) is a popular type of public-key encryption that enforces access control cryptographically, and has spurred the proposal of many use cases. To satisfy the requirements of the setting, tailor-made schemes are often introduced. However, designing secure schemes---as well as verifying that they are secure---is notoriously hard. Several of these schemes have turned out to be broken, making them dangerous to deploy in practice. To overcome these shortcomings, we introduce ACABELLA. ACABELLA simplifies generating and verifying security proofs for pairing-based ABE schemes. It consists of a framework for security proofs that are easy to verify manually and an automated tool that efficiently generates these security proofs. Creating such security proofs generally takes no more than a few seconds. The output is easy to understand, and the proofs can be verified manually. In particular, the verification of a security proof generated by ACABELLA boils down to performing simple linear algebra. The ACABELLA tool is open source and also available via a web interface. With its help, experts can simplify their proof process by verifying or refuting the security claims of their schemes and practitioners can get an assurance that the ABE scheme of their choice is secure.
Last updated:  2023-11-24
Fait Accompli Committee Selection: Improving the Size-Security Tradeoff of Stake-Based Committees
Peter Gaži, Aggelos Kiayias, and Alexander Russell
We study the problem of committee selection in the context of proof-of-stake consensus mechanisms or distributed ledgers. These settings determine a family of participating parties---each of which has been assigned a non-negative "stake"---and are subject to an adversary that may corrupt a subset of the parties. The challenge is to select a committee of participants that accurately reflects the proportion of corrupt and honest parties, as measured by stake, in the full population. The trade-off between committee size and the probability of selecting a committee that over-represents the corrupt parties is a fundamental factor in both security and efficiency of proof-of-stake consensus, as well as committee-run layer-two protocols. We propose and analyze several new committee selection schemes that improve upon existing techniques by adopting low-variance assignment of certain committee members that hold significant stake. These schemes provide notable improvements to the size--security trade-off arising from the stake distributions of many deployed ledgers.
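As a quantitative illustration of the size-security trade-off discussed above, the sketch below computes, for the simplest selection rule in which each seat is drawn i.i.d. proportionally to stake, the probability that an adversary holding 25% of the stake reaches one third of the committee seats; this is the baseline scheme, not the paper's low-variance fait accompli selection, and the parameter choices are illustrative.

```python
# With i.i.d. stake-proportional seat sampling, the number of corrupt seats is
# Binomial(n, p); we compute the exact tail probability of over-representation.
from math import ceil, comb

def over_representation_prob(n, p, threshold=1/3):
    """P[Binomial(n, p) >= ceil(threshold * n)], i.e. corrupt seats reach the threshold."""
    k_min = ceil(threshold * n)
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_min, n + 1))

p = 0.25                                  # adversary controls 25% of the total stake
for n in (100, 200, 400):
    print(f"committee size {n}: P[corrupt seats >= n/3] = {over_representation_prob(n, p):.2e}")
```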