LARP: A Lightweight Auto-Refreshing Pseudonym Protocol for V2X
Vehicle-to-everything (V2X) communication is the key enabler for emerging intelligent transportation systems. Applications built on top of V2X require both authentication and privacy protection for the vehicles. The common approach to meeting both requirements is to use pseudonyms, which are short-term identities. However, neither industrial standards nor state-of-the-art research designs are suited to resource-constrained environments. In addition, they make strong assumptions about the security of the vehicle's on-board computation units.
In this paper, we propose LARP, a lightweight auto-refreshing pseudonym protocol for V2X. LARP supports efficient operations on resource-constrained devices, and provides security even when parts of the vehicle are compromised. We give a formal proof that the protocol is secure, and we conduct experiments on a Raspberry Pi 4. The results demonstrate that LARP is feasible and practical.
Group Time-based One-time Passwords and its Application to Efficient Privacy-Preserving Proof of Location
Time-based One-Time Password (TOTP) provides a strong second factor for user authentication. In TOTP, a prover authenticates to a verifier by using the current time and a secret key to generate an authentication token (or password) which is valid for a short time period. Our goal is to extend TOTP to the group setting, and to provide both authentication and privacy. To this end, we introduce a new authentication scheme, called Group TOTP (GTOTP), that allows the prover to prove that it is a member of an authenticated group without revealing its identity. We propose a novel construction that transforms any asymmetric TOTP scheme into a GTOTP scheme. Our approach combines Merkle tree and Bloom filter to reduce the verifier's states to constant sizes.
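For reference, single-user TOTP token generation can be sketched in a few lines. This follows the conventional RFC 6238 flavor (HMAC-SHA1, a 30-second step, and 6 digits are illustrative defaults); it is not the GTOTP construction itself:

```python
import hashlib, hmac, struct, time

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """RFC 6238-style token: HMAC of the current time step under the secret."""
    counter = int(time.time()) // step
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                      # RFC 4226 dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Prover and verifier share `secret`; a token is valid for one time step.
print(totp(b"shared-secret-key"))
```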
As a promising application of GTOTP, we show that GTOTP can be used to construct an efficient privacy-preserving Proof of Location (PoL) scheme. We utilize a commitment protocol, a privacy-preserving location proximity scheme, and our GTOTP scheme to build the PoL scheme, in which GTOTP is used not only for user authentication but also as a tool to glue up other building blocks. In the PoL scheme, with the help of some witnesses, a user can prove its location to a verifier, while ensuring the identity and location privacy of both the prover and witnesses. Our PoL scheme outperforms the alternatives based on group digital signatures. We evaluate our schemes on Raspberry Pi hardware, and demonstrate that they achieve practical performance. In particular, the password generation and verification time are in the order of microseconds and milliseconds, respectively, while the computation time of proof generation is less than $1$ second.
Improved Neural Distinguishers with Multi-Round and Multi-Splicing Construction
In CRYPTO 2019, Gohr successfully applied deep learning to differential cryptanalysis of the NSA block cipher Speck32/64, achieving higher accuracy than traditional differential distinguishers. Since then, improving neural differential distinguishers has been a mainstream research direction in neural-aided cryptanalysis. However, the current training-data formats for neural distinguishers present two barriers: (1) the source of data features is limited to linear combinations of ciphertexts, which does not provide the training samples with additional learnable features for improving the neural distinguishers; (2) there has been no breakthrough in constructing data formats for network training from the deep-learning perspective. In this paper, considering both domain knowledge about deep learning and information from differential cryptanalysis, we use the output features of the penultimate round to propose a two-dimensional and non-realistic input-data generation method for neural differential distinguishers. We then validate through experiments and theoretical analysis that the proposed input-data format has excellent features. Moreover, combining the idea of multiple ciphertext pairs, we generate two specific models for input-data construction: MRMSP (Multiple Rounds Multiple Splicing Pairs) and MRMSD (Multiple Rounds Multiple Splicing Differences), and build new neural distinguishers against the Speck and Simon families, which effectively improve performance compared with previous works. To the best of our knowledge, our neural distinguishers achieve the longest rounds and the highest accuracy for the NSA block ciphers Speck and Simon.
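For concreteness, the baseline data format in this line of work can be sketched as follows: label-1 samples are Speck32/64 ciphertext pairs whose plaintexts differ by the fixed difference (0x0040, 0) used by Gohr, and label-0 samples are pairs of unrelated encryptions. The MRMSP/MRMSD splicing of penultimate-round features proposed above is not reproduced in this sketch.

```python
import numpy as np

MASK = 0xFFFF  # Speck32/64 operates on 16-bit words
rol = lambda x, r: ((x << r) | (x >> (16 - r))) & MASK
ror = lambda x, r: ((x >> r) | (x << (16 - r))) & MASK

def enc_round(x, y, k):
    """One Speck32/64 round: rotate, add, xor key; rotate, xor."""
    x = ((ror(x, 7) + y) & MASK) ^ k
    y = rol(y, 2) ^ x
    return x, y

def expand_key(k, rounds):
    """Key schedule for a 64-bit key given as four 16-bit words."""
    ks, l = [k[3]], [k[2], k[1], k[0]]
    for i in range(rounds - 1):
        l[i % 3], nk = enc_round(l[i % 3], ks[i], i)
        ks.append(nk)
    return ks

def encrypt(x, y, ks):
    for k in ks:
        x, y = enc_round(x, y, k)
    return x, y

def make_data(num, rounds=5, diff=(0x0040, 0x0000)):
    """Binary feature vectors for ciphertext pairs, labeled real/random."""
    rng = np.random.default_rng(1)
    X, y = np.zeros((num, 64), dtype=np.uint8), rng.integers(0, 2, num)
    for s in range(num):
        ks = expand_key([int(w) for w in rng.integers(0, 1 << 16, 4)], rounds)
        p0 = (int(rng.integers(0, 1 << 16)), int(rng.integers(0, 1 << 16)))
        if y[s]:  # real pair: second plaintext at the fixed input difference
            p1 = (p0[0] ^ diff[0], p0[1] ^ diff[1])
        else:     # random pair: unrelated second plaintext
            p1 = (int(rng.integers(0, 1 << 16)), int(rng.integers(0, 1 << 16)))
        words = (*encrypt(*p0, ks), *encrypt(*p1, ks))
        X[s] = [(w >> b) & 1 for w in words for b in range(15, -1, -1)]
    return X, y

X, y = make_data(1000)  # feed X, y to the classifier of your choice
```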
Fast Evaluation of S-boxes with Garbled Circuits
Garbling schemes are vital primitives for privacy-preserving protocols and secure two-party computation. In projective garbling schemes, $n$ values are assigned to each wire in the circuit. Current state-of-the-art schemes project two values.
This paper presents a projective garbling scheme that assigns $2^n$ values to wires in a circuit comprising XOR and unary projection gates. A generalization of FreeXOR allows the XOR of wires with $2^n$ values to be very efficient. We then analyze the performance of our scheme by evaluating substitution-permutation ciphers. Using our proposal, we measure high-speed evaluation of the ciphers with a moderately increased cost in garbling and bandwidth. Theoretical analysis suggests that for evaluating the nine examined ciphers, one can expect a 4- to 70-fold improvement in evaluation performance with, at most, a 4-fold increase in garbling cost and, at most, an 8-fold increase in communication cost compared to state-of-the-art garbling schemes. In an offline/online setting, such as secure function evaluation as a service, the circuit garbling and communication to the evaluator can proceed before the input phase. Thus, our scheme offers a fast online phase. Furthermore, we present efficient Boolean circuits for the S-boxes of TWINE and Midori64 ciphers. To our knowledge, our formulas give the smallest number of AND gates for the S-boxes of these two ciphers.
Compact GF(2) systemizer and optimized constant-time hardware sorters for Key Generation in Classic McEliece
Classic McEliece is a code-based quantum-resistant public-key scheme characterized by relatively high encapsulation/decapsulation speed and small ciphertexts, backed by an in-depth analysis of its security. However, its slow key generation and large public key size make it hard to deploy more widely. Based on this observation, a high-throughput hardware key generator is proposed to accelerate key generation in Classic McEliece through algorithm-hardware co-design. Meanwhile, the storage overhead caused by the large keys is also minimized. First, a compact large-size GF(2) Gaussian elimination is presented, adopting a naive processing array, early abort based on singular-matrix detection, and a memory-friendly scheduling strategy. Second, an optimized constant-time hardware sorter is proposed to support regular memory accesses with fewer comparators and less storage. Third, algorithm-level pipelining enables high-throughput processing, allowing concurrent key generation by decoupling data access from computation.
Second-Order Low-Randomness $d+1$ Hardware Sharing of the AES
In this paper, we introduce a second-order masking of the AES using the minimal number of shares and a total of 1268 bits of randomness, including the sharing of the plaintext and key. The masking of the S-box is based on the tower-field decomposition of the inversion over bytes, where the changing-of-the-guards technique is used to re-mask the middle branch of the decomposition. The sharing of the S-box is carefully crafted so that it achieves first-order probing security without the use of randomness and so that the sharing of its output is uniform. Multi-round security is achieved by re-masking the state, where we use a theoretical analysis based on the propagation of probed information to reduce the demand for fresh randomness per round. The result is a second-order masked AES which competes with the state of the art in terms of latency and area, but reduces the randomness complexity more than eightfold compared to previously known works. In addition to the corresponding theoretical analysis and proofs for the security of our masked design, it has been implemented on FPGA and evaluated via lab analysis.
DiAE: Re-rolling the DiSE
The notion of distributed authenticated encryption was formally introduced by Agrawal et al. in ACM CCS 2018. In their work, they propose the DiSE construction building upon a distributed PRF (DPRF), a commitment scheme and a PRG. We show that most of their constructions do not meet some of the claimed security guarantees. In fact, all the concrete instantiations of DiSE, as well as multiple follow-up papers (one accepted at ACM CCS 2021), fail to satisfy their strongly-secure definitions. We give simple fixes for these constructions and prove their security. We also propose a new construction DiAE using an encryptment instead of a commitment. This modification dispenses with the need to buffer the entire message throughout the encryption protocol, which in turn enables implementations with constant RAM footprint and online message encryption. This is particularly interesting for constrained IoT devices. Finally, we implement and benchmark DiAE and show that it performs similarly to the original DiSE construction.
Self Masking for Hardening Inversions
The question of whether one-way functions (i.e., functions that are easy to compute but hard to invert) exist is arguably one of the central problems in complexity theory, from both theoretical and practical perspectives. While proving that such functions exist may be hard, there have been quite a few attempts to provide functions which are one way "in practice", namely, they are easy to compute, but there are no known polynomial-time algorithms that compute their (generalized) inverse (or computing their inverse is as hard as notoriously difficult tasks, like factoring very large integers).
In this paper we study a different approach. We provide a simple heuristic, called self masking, which converts a given polynomial time computable function $f$ into a self masked version $[{f}]$, which satisfies the following: for a random input $x$, $[{f}]^{-1}([{f}](x))=f^{-1}(f(x))$ w.h.p., but a part of $f(x)$, which is essential for computing $f^{-1}(f(x))$ is masked in $[{f}](x)$. Intuitively, this masking makes it hard to convert an efficient algorithm which computes $f^{-1}$ to an efficient algorithm which computes $[{f}]^{-1}$, since the masked parts are available to $f$ but not to $[{f}]$.
We apply this technique to variants of the subset sum problem that were studied in the context of one-way functions, and obtain functions which, to the best of our knowledge, cannot be inverted in polynomial time by published techniques.
A Conjecture From a Failed Cryptanalysis
This note describes an observation discovered during a failed cryptanalysis attempt.
Let $P(x,y)$ be a bivariate polynomial with coefficients in $\mathbb{C}$. Form the $n\times n$ matrices $L(n)$ whose entries are defined by $L(n)_{i,j}=P(i,j)$. Define the matrices $M(n)=L(n)-\mathrm{Id}_n$.
It appears that $\mu(n)=(-1)^n\det(M(n))$ is a polynomial in $n$ that we did not characterize.
We provide a numerical example.
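The observation is easy to reproduce for one's own choice of $P$ (the polynomial below is our illustrative pick, not the note's example): tabulate $\mu(n)$, interpolate a polynomial through the first few values, and check that it predicts the rest.

```python
import sympy as sp

def mu(n, P):
    """mu(n) = (-1)^n * det(L(n) - Id_n), where L(n)[i, j] = P(i, j)."""
    M = sp.Matrix(n, n, lambda a, b: P(a + 1, b + 1)) - sp.eye(n)
    return (-1) ** n * M.det()

P = lambda x, y: x + 2 * y                  # our illustrative choice of P
vals = [mu(n, P) for n in range(1, 9)]

# Interpolate a polynomial through the first six values, then check that it
# also predicts mu(7) and mu(8).
t = sp.symbols('t')
poly = sp.interpolate(list(zip(range(1, 7), vals[:6])), t)
print(sp.expand(poly))
print(all(poly.subs(t, n) == vals[n - 1] for n in range(7, 9)))  # True
```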
PPAD is as Hard as LWE and Iterated Squaring
One of the most fundamental results in game theory is that every finite strategic game has a Nash equilibrium, an assignment of (randomized) strategies to players with the stability property that no individual player can benefit from deviating from the assigned strategy. It is not known how to efficiently compute such a Nash equilibrium --- the computational complexity of this task is characterized by the class PPAD, but the relation of PPAD to other problems and well-known complexity classes is not precisely understood. In recent years there has been mounting evidence, based on cryptographic tools and techniques, showing the hardness of PPAD.
We continue this line of research by showing that PPAD is as hard as learning with errors (LWE) and the iterated squaring (IS) problem, two standard problems in cryptography. Our work improves over prior hardness results that relied either on (1) sub-exponential assumptions, or (2) ``obfustopia,'' which can currently be based on a particular combination of three assumptions. Our work additionally establishes public-coin hardness for PPAD (computational hardness for a publicly sampleable distribution of instances) that seems out of reach of the obfustopia approach.
Following the work of Choudhuri et al. (STOC 2019) and subsequent works, our hardness result is obtained by constructing an unambiguous and incrementally-updatable succinct non-interactive argument for IS, whose soundness relies on polynomial hardness of LWE. The result also implies a verifiable delay function with unique proofs, which may be of independent interest.
Privacy-preserving Federated Singular Value Decomposition
Modern Singular Value Decomposition (SVD) computation dates back to the 1960s, when the basis for the eigensystem package and linear algebra package routines was created. Since then, SVD has gained traction and been widely applied in various scenarios, such as recommendation systems and principal component analyses. Federated SVD has recently emerged, where different parties can collaboratively compute SVD without exchanging raw data. Beyond its inherent privacy protection, noise injection can be utilized to further increase the privacy guarantee of this privacy-friendly technique. This paper advances the state of the art by improving an existing Federated SVD scheme with two-fold contributions. First, we revise its privacy guarantee in terms of Differential Privacy, the de-facto data privacy standard of the 21st century. Second, we increase its utility by reducing the added noise, which is achieved by employing Secure Aggregation, a cryptographic technique that prevents information leakage. Using a recommendation-system use case with real-world data, we demonstrate that our scheme outperforms the state-of-the-art Federated SVD solution.
Typing High-Speed Cryptography against Spectre v1
The current gold standard of cryptographic software is to write efficient libraries with systematic protections against timing attacks. In order to meet this goal, cryptographic engineers increasingly use high-assurance cryptography tools. These tools guide programmers and provide rigorous guarantees that can be verified independently by library users. However, high-assurance tools reason about overly simple execution models that elide micro-architectural leakage. Thus, implementations validated by high-assurance cryptography tools remain potentially vulnerable to micro-architectural attacks such as Spectre or Meltdown. Moreover, proposed countermeasures are not used in practice due to performance overhead.
We propose, analyze, implement and evaluate an approach for writing efficient cryptographic implementations that are protected against Spectre v1 attacks. Our approach ensures speculative constant-time, an information flow property which guarantees that programs are protected against Spectre v1. Speculative constant-time is enforced by means of a (value-dependent) information flow type system. The type system tracks security levels depending on whether execution is misspeculating.
We implement our approach in the Jasmin framework for high assurance cryptography, and use it for protecting all implementations of an experimental cryptographic library that includes highly optimized implementations of symmetric primitives, of elliptic-curve cryptography, and of Kyber, a lattice-based KEM recently selected by NIST for standardization. The performance impact of our protections is very low; for example, less than 1% for Kyber and essentially zero for X25519.
Collusion-Resistant Functional Encryption for RAMs
In recent years, functional encryption (FE) has established itself as one of the fundamental primitives in cryptography. The choice of model of computation to represent the functions associated with the functional keys plays a critical role in the complexity of the algorithms of an FE scheme. Historically, the functions are represented as circuits. However, this results in the decryption time of the FE scheme growing proportional to not only the worst case running time of the function but also the size of the input, which in many applications can be quite large.
In this work, we present the first construction of a public-key collusion-resistant FE scheme, where the functions, associated with the keys, are represented as random access machines (RAMs). We base the security of our construction on the existence of: (i) public-key collusion-resistant FE for circuits and, (ii) public-key doubly-efficient private-information retrieval [Boyle et al., Canetti et al., TCC 2017]. Our scheme enjoys many nice efficiency properties, including input-specific decryption time.
We also show how to achieve FE for RAMs in the bounded-key setting with weaker efficiency guarantees from laconic oblivious transfer, which can be based on standard cryptographic assumptions. En route to achieving our result, we present conceptually simpler constructions of succinct garbling for RAMs [Canetti et al., Chen et al., ITCS 2016] from weaker assumptions.
Cryptographic Role-Based Access Control, Reconsidered
A significant shortcoming of traditional access control mechanisms is their heavy reliance on reference monitors. Being single points of failure, monitors need to run in protected mode and have permanent online presence in order to handle all access requests. Cryptographic access control offers an alternative solution that provides better scalability and deployability. It relies on security guarantees of the underlying cryptographic primitives and the appropriate key distribution/management in the system. In order to rigorously study security guarantees that a cryptographic access control system can achieve, providing formal security definitions for the system is of great importance, since the security guarantee of the underlying cryptographic primitives cannot be directly translated into those of the system.
In this paper, we follow the line of existing studies on the cryptographic enforcement of Role-Based Access Control (RBAC). Building on a study of the relation between the existing security definitions for such systems, we identify two types of attacks not described in the existing works. We therefore propose two new security definitions that appropriately model the cryptographic enforcement of RBAC policies, and we study the relation between our new definitions and the existing ones. In addition, we show that the cost of supporting dynamic policy updates is inherently expensive by presenting two lower bounds for such systems that guarantee correctness and secure access.
High-precision Leveled Homomorphic Encryption with Batching
In most homomorphic encryption schemes based on RLWE, the native plaintexts are represented as polynomials in a ring $\mathbb{Z}_t[x]/(x^N+1)$, where $t$ is a plaintext modulus and $x^N+1$ is a cyclotomic polynomial whose degree $N$ is a power of two. An encoding scheme is used to transform natural data types (such as integers and rational numbers) into polynomials in the ring. After a homomorphic computation on the polynomial is finished, a decoding procedure is invoked to obtain the result. However, the conditions for correct decoding are rather strict. For example, overflow of the computation modulo either the plaintext modulus $t$ or the cyclotomic polynomial $x^N+1$ results in an unexpected decoding result, because the part discarded by the modular reduction does not decode to 0.
We combine the number theoretic transform with Hensel codes to construct a scheme in which, intuitively, the discarded part decodes to 0, so these limitations are overcome naturally. In addition, rational numbers can be handled with high precision in parallel.
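A toy demonstration of the overflow pitfall described above, with illustrative parameters (not those of the proposed scheme): two integers encoded as constant polynomials in $\mathbb{Z}_t[x]/(x^N+1)$ are multiplied, and the product overflows the plaintext modulus $t$, so it decodes to the wrong integer.

```python
import numpy as np

t, N = 257, 8                  # illustrative plaintext modulus and ring degree

def poly_mul_mod(a, b):
    """Multiply in Z_t[x]/(x^N + 1): negacyclic convolution, coeffs mod t."""
    full = np.convolve(a, b)
    res = np.zeros(N, dtype=np.int64)
    for k, c in enumerate(full):
        if k < N:
            res[k] += c
        else:
            res[k - N] -= c    # reduce with x^N = -1
    return res % t

x = np.zeros(N, dtype=np.int64); x[0] = 20   # encode the integer 20
y = np.zeros(N, dtype=np.int64); y[0] = 30   # encode the integer 30
print(poly_mul_mod(x, y)[0])   # prints 86 = 600 mod 257, not 600: overflow
```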
Asymptotically Free Broadcast in Constant Expected Time via Packed VSS
Broadcast is an essential primitive for secure computation. We focus in this paper on optimal resilience (i.e., when the number of corrupted parties $t$ is less than a third of the computing parties $n$), and with no setup or cryptographic assumptions.
While broadcast with worst case $t$ rounds is impossible, it has been shown [Feldman and Micali STOC'88, Katz and Koo CRYPTO'06] how to construct protocols with expected constant number of rounds in the private channel model. However, those constructions have large communication complexity, specifically $\mathcal{O}(n^2L+n^6\log n)$ expected number of bits transmitted for broadcasting a message of length $L$. This leads to a significant communication blowup in secure computation protocols in this setting.
In this paper, we substantially improve the communication complexity of broadcast in constant expected time. Specifically, the expected communication complexity of our protocol is $\mathcal{O}(nL+n^4\log n)$. For messages of length $L=\Omega(n^3 \log n)$, our broadcast has no asymptotic overhead (up to expectation), as each party has to send or receive $\mathcal{O}(n^3 \log n)$ bits. We also consider parallel broadcast, where $n$ parties wish to broadcast $L$ bit messages in parallel. Our protocol has no asymptotic overhead for $L=\Omega(n^2\log n)$, which is a common communication pattern in perfectly secure MPC protocols. For instance, it is common that all parties share their inputs simultaneously at the same round, and verifiable secret sharing protocols require the dealer to broadcast a total of $\mathcal{O}(n^2\log n)$ bits.
Of independent interest, our broadcast is achieved via packed verifiable secret sharing, a new notion that we introduce. We show a protocol that verifies $\mathcal{O}(n)$ secrets simultaneously at the same cost as verifying just a single secret. This improves the state of the art by a factor of $n$.
Universal Ring Signatures in the Standard Model
Ring signatures allow a user to sign messages on behalf of an ad hoc set of users - a ring - while hiding her identity. The original motivation for ring signatures was whistleblowing [Rivest et al. ASIACRYPT'01]: a high government employee can anonymously leak sensitive information while certifying that it comes from a reliable source, namely by signing the leak. However, essentially all known ring signature schemes require the members of the ring to publish a structured verification key that is compatible with the scheme. This creates somewhat of a paradox since, if a user does not want to be framed for whistleblowing, they will stay clear of signature schemes that support ring signatures.
In this work, we formalize the concept of universal ring signatures (URS). A URS enables a user to issue a ring signature with respect to a ring of users, independently of the signature schemes they are using. In particular, none of the verification keys in the ring need to come from the same scheme. Thus, in principle, URS presents an effective solution for whistleblowing.
The main goal of this work is to study the feasibility of URS, especially in the standard model (i.e. no random oracles or common reference strings). We present several constructions of URS, offering different trade-offs between assumptions required, the level of security achieved, and the size of signatures:
* Our first construction is based on superpolynomial hardness assumptions of standard primitives. It achieves compact signatures. That means the size of a signature depends only logarithmically on the size of the ring and on the number of signature schemes involved.
* We then proceed to study the feasibility of constructing URS from standard polynomially-hard assumptions only. We construct a non-compact URS from witness encryption and additional standard assumptions.
* Finally, we show how to modify the non-compact construction into a compact one by relying on indistinguishability obfuscation.
Rotatable Zero Knowledge Sets: Post Compromise Secure Auditable Dictionaries with application to Key Transparency
Key Transparency (KT) systems allow end-to-end encrypted service providers (messaging, calls, etc.) to maintain an auditable directory of their users’ public keys, producing proofs that all participants have a consistent view of those keys, and allowing each user to check updates to their own keys. KT has lately received a lot of attention, in particular its privacy preserving variants, which also ensure that users and auditors do not learn anything beyond what is necessary to use the service and keep the service provider accountable.
Abstractly, the problem of building such systems reduces to constructing so-called append-only Zero-Knowledge Sets (aZKS). Unfortunately, existing aZKS (and KT) solutions do not allow one to adequately restore the privacy guarantees after a server compromise, a form of Post-Compromise Security (PCS), while maintaining the auditability properties. In this work we address this concern through the formalization of an extension of aZKS called Rotatable ZKS (RZKS). In addition to providing PCS, our notion of RZKS has several other attractive features, such as a stronger (extractable) soundness notion, and the ability for a communication party with out-of-date data to efficiently “catch up” to the current epoch while ensuring that the server did not erase any of the past data.
Of independent interest, we also introduce a new primitive called a Rotatable Verifiable Random Function (VRF), and show how to build RZKS in a modular fashion from a rotatable VRF, ordered accumulator, and append-only vector commitment schemes.
Steganography-Free Zero-Knowledge
We revisit the well-studied problem of preventing steganographic communication in multi-party communications. While this is known to be a provably impossible task, we propose a new model that allows circumventing this impossibility. In our model, the parties first publish a single message during an honest non-interactive pre-processing phase and then later interact in an execution phase. We show that in this model, it is indeed possible to prevent any steganographic communication in zero-knowledge protocols. Our solutions rely on standard cryptographic assumptions.
Vectorized Batch Private Information Retrieval
This paper studies Batch Private Information Retrieval (BatchPIR), a variant of private information retrieval (PIR) where the client wants to retrieve multiple entries from the server in one batch.
BatchPIR matches the use case of many practical applications and holds the potential for substantial efficiency improvements over PIR in terms of amortized cost per query.
Existing BatchPIR schemes have achieved decent computation efficiency but have not been able to improve communication efficiency at all.
Using vectorized homomorphic encryption, we present the first BatchPIR protocol that is efficient in both computation and communication for a variety of database configurations. Specifically, to retrieve a batch of 256 entries from a database with one million entries of 256 bytes each, the communication cost of our scheme is 7.5x to 98.5x better than state-of-the-art solutions.
Breaking RSA Generically is Equivalent to Factoring, with Preprocessing
We investigate the relationship between the classical RSA and factoring problems when preprocessing is considered. In such a model, adversaries can use an unbounded amount of precomputation to produce an “advice” string to then use during the online phase, when a problem instance becomes known. Previous work (e.g., [Bernstein, Lange ASIACRYPT ’13]) has shown that preprocessing attacks significantly improve the runtime of the best-known factoring algorithms. Due to these improvements, we ask whether the relationship between factoring and RSA fundamentally changes when preprocessing is allowed. Specifically, we investigate whether there is a superpolynomial gap between the runtime of the best attack on RSA with preprocessing and on factoring with preprocessing.
Our main result rules this out with respect to algorithms in a careful adaptation of the generic ring model [Aggarwal and Maurer, Eurocrypt 2009] to the preprocessing setting. In particular, in this setting we show that any generic-ring attack on RSA with preprocessing yields a factoring algorithm with polynomially related parameters, for any setting of RSA parameters.
On Committing Authenticated Encryption
We provide a strong definition for committing authenticated encryption (cAE), as well as a framework that encompasses earlier and weaker definitions. The framework attends not only to what is committed but also to the extent to which the adversary knows or controls keys. We slot into our framework strengthened cAE attacks on GCM and OCB. Our main result is a simple and efficient construction, CTX, that makes a nonce-based AE (nAE) scheme committing. The transformed scheme achieves the strongest security notion in our framework. Just the same, the added computational cost (on top of the nAE scheme's cost) is a single hash over a short string, a cost independent of the plaintext's length. And there is no increase in ciphertext length compared to the base nAE scheme. That such a thing is possible, let alone easy, upends the (incorrect) intuition that you can't commit to a plaintext or ciphertext without hashing one or the other. And it motivates a simple and practical tweak to AE schemes to make them committing.
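As we read the construction, the committing transform replaces the nAE tag with a hash binding the key, nonce, associated data, and tag. The sketch below uses AES-GCM and SHA-256 as illustrative stand-ins, not the paper's exact specification:

```python
import hashlib, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def committing_encrypt(key: bytes, nonce: bytes, ad: bytes, msg: bytes):
    ct = AESGCM(key).encrypt(nonce, msg, ad)
    body, tag = ct[:-16], ct[-16:]               # GCM appends a 16-byte tag
    ctag = hashlib.sha256(key + nonce + ad + tag).digest()  # binds the key
    # In the actual transform the plain tag would not be transmitted (so the
    # ciphertext length is unchanged); the receiver recomputes it during
    # decryption. We return it here only because the high-level AESGCM API
    # does not expose the tag computation separately.
    return body, tag, ctag

key, nonce = AESGCM.generate_key(bit_length=128), os.urandom(12)
body, tag, ctag = committing_encrypt(key, nonce, b"ad", b"hello")
assert ctag == hashlib.sha256(key + nonce + b"ad" + tag).digest()
msg = AESGCM(key).decrypt(nonce, body + tag, b"ad")
```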
Horizontal racewalking using radical isogenies
We address three main open problems concerning the use of radical isogenies, as presented by Castryck, Decru and Vercauteren at Asiacrypt 2020, in the computation of long chains of isogenies of fixed, small degree between elliptic curves over finite fields. Firstly, we present an interpolation method for finding radical isogeny formulae in a given degree $N$, which by-passes the need for factoring division polynomials over large function fields. Using this method, we are able to push the range for which we have formulae at our disposal from $N \leq 13$ to $N \leq 37$ (where in the range $18 \leq N \leq 37$ we have restricted our attention to prime powers). Secondly, using a combination of known techniques and ad-hoc manipulations, we derive optimized versions of these formulae for $N \leq 19$, with some instances performing more than twice as fast as their counterparts from 2020. Thirdly, we solve the problem of understanding the correct choice of radical when walking along the surface between supersingular elliptic curves over $\mathbb{F}_p$ with $p \equiv 7 \bmod 8$; this is non-trivial for even $N$ and was settled for $N = 2$ and $N = 4$ only, in the latter case by Onuki and Moriya at PKC 2022. We give a conjectural statement for all even $N$ and prove it for $N \leq 14$. The speed-ups obtained from these techniques are substantial: using $16$-isogenies, the computation of long chains of $2$-isogenies over $512$-bit prime fields can be accelerated by a factor $3$, and the previous implementation of CSIDH using radical isogenies can be sped up by about $12\%$.
Tightly Secure Chameleon Hash Functions in the Multi-User Setting and Their Applications
We define the security notion of (strong) collision resistance for chameleon hash functions in the multi-user setting ((S-)MU-CR security). We also present three constructions, CHF_dl, CHF_rsa and CHF_fac, and prove their tight S-MU-CR security based on the discrete logarithm, RSA and factoring assumptions, respectively. In applications, our tightly S-MU-CR secure chameleon hash functions help us to lift a signature scheme from (weak) unforgeability to strong unforgeability in the multi-user setting, and the security reduction is tightness preserving. Furthermore, they can also be used to construct tightly secure online/offline signatures, chameleon signatures and proxy signatures, etc., in the multi-user setting.
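For concreteness, the classical discrete-log chameleon hash that underlies constructions like CHF_dl can be sketched as follows (the textbook construction, with toy parameters of our choosing; not necessarily the paper's exact CHF_dl):

```python
import secrets

# Toy group: the order-11 subgroup of Z_23^* generated by g = 2.
p, q, g = 23, 11, 2

x = secrets.randbelow(q - 1) + 1          # trapdoor
h = pow(g, x, p)                          # public hash key

def ch(m: int, r: int) -> int:
    """CH(m, r) = g^m * h^r mod p, i.e. g^(m + x*r) in the exponent."""
    return (pow(g, m, p) * pow(h, r, p)) % p

def collide(m: int, r: int, m_new: int) -> int:
    """Trapdoor collision: solve m + x*r = m_new + x*r' (mod q) for r'."""
    return (r + (m - m_new) * pow(x, -1, q)) % q

m, r, m_new = 3, 7, 9
r_new = collide(m, r, m_new)
assert ch(m, r) == ch(m_new, r_new)       # a collision, found via the trapdoor
```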
One-Time Programs from Commodity Hardware
One-time programs, originally formulated by Goldwasser et al. [CRYPTO'08], are a powerful cryptographic primitive with compelling applications. Known solutions for one-time programs, however, require specialized secure hardware that is not widely available (or, alternatively, access to blockchains and very strong cryptographic tools).
In this work we investigate the possibility of realizing one-time programs from a recent and now more commonly available hardware functionality: the counter lockbox. A counter lockbox is a stateful functionality that protects an encryption key under a user-specified password, and enforces a limited number of incorrect guesses. Counter lockboxes have become widely available in consumer devices and cloud platforms.
We show that counter lockboxes can be used to realize one-time programs for general functionalities. We develop a number of techniques to reduce the number of counter lockboxes required for our constructions, which may be of independent interest.
EvalRound Algorithm in CKKS Bootstrapping
Homomorphic encryption (HE) has opened up an entirely new world for the privacy-preserving use of sensitive data, by enabling computation on encrypted data. Among the many HE schemes targeting computation in various contexts, the Cheon--Kim--Kim--Song (CKKS) scheme is distinguished by allowing computation on encrypted real-number data, which has great impact on real-world applications.
The CKKS scheme is a levelled homomorphic encryption scheme, consuming one level for each homomorphic multiplication. When the levels run out, a special computational circuit called bootstrapping is required in order to conduct further multiplications. The algorithm proposed by Cheon et al. has been regarded as the standard way to bootstrap in the CKKS scheme, and it consists of the following four steps: ModRaise, CoeffToSlot, EvalMod and SlotToCoeff. However, these steps themselves consume a number of levels, and optimizing this extra consumption has been a major focus of recent research.
Of the total levels consumed in the bootstrapping steps, about half are spent in the CoeffToSlot and SlotToCoeff steps to scale up the real-number components of the DFT matrices and round them to the nearest integers. Each scale-up factor is so large that it takes one level to rescale it down. The scale-up factors can be taken smaller to save levels, but the rounding error would then be transmitted to EvalMod and eventually corrupt the accuracy of bootstrapping.
EvalMod aims to remove the superfluous $qI$ term from a plaintext $pt + qI$ resulting from ModRaise, where $q$ is the bottom modulus and $I$ is a polynomial with small integer coefficients. We refer to its counterpart, which obtains $qI$ instead, as EvalRound. We introduce a novel bootstrapping algorithm consisting of ModRaise, CoeffToSlot, EvalRound and SlotToCoeff, which allows for smaller scale-up factors without damage from rounding errors.
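A toy integer analogue of the relation above (illustrative numbers, not CKKS parameters): ModRaise yields $pt + qI$; EvalMod recovers $pt$ directly, while EvalRound recovers $qI$ and subtracts.

```python
q = 97                         # "bottom modulus" stand-in
pt, I = 13, 5                  # small plaintext and small ModRaise multiple
raised = pt + q * I            # what ModRaise produces

eval_mod = raised % q                    # EvalMod route: recover pt directly
eval_round = q * round(raised / q)       # EvalRound route: recover q*I ...
assert eval_mod == pt and raised - eval_round == pt   # ... then subtract
```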
PLUME: An ECDSA Nullifier Scheme for Unique Pseudonymity within Zero Knowledge Proofs
ZK-SNARKs (Zero-Knowledge Succinct Non-interactive ARguments of Knowledge) are one of the most promising new applied cryptography tools: proofs allow anyone to prove a property about some data without revealing that data. Largely spurred by the adoption of cryptographic primitives in blockchain systems, ZK-SNARKs are rapidly becoming computationally practical in real-world settings, as shown by, e.g., tornado.cash and rollups. These advances have enabled ideation for new identity applications based on anonymous proof-of-ownership. One of the primary technologies that would enable the jump from existing apps to such systems is the development of deterministic nullifiers.
Nullifiers are used as a public commitment to a specific anonymous account, to forbid actions like double spending or to allow a consistent identity between anonymous actions. We identify a new deterministic signature algorithm that both uniquely identifies the keypair and keeps the account identity secret. In this work, we define the full DDH-VRF construction and prove uniqueness, secrecy, and existential unforgeability. We also demonstrate a proof of concept of our Pseudonymously Linked Unique Message Entity (PLUME) scheme.
Protecting the most significant bits in scalar multiplication algorithms
The Montgomery ladder is widely used for implementing scalar multiplication in elliptic-curve cryptographic designs. This algorithm is efficient and provides natural robustness against (simple) side-channel attacks. Previous works, however, showed that implementations of the Montgomery ladder using Lopez-Dahab projective coordinates easily leak the value of the most significant bits of the secret scalar, which led to a full key recovery in an attack known as LadderLeak. In light of such leakage, we analyse further popular methods for implementing the Montgomery ladder. We first consider open-source software implementations of the X25519 protocol which implement the Montgomery ladder based on the ladderstep algorithm from Düll et al. [15]. We confirm via power measurements that these implementations also easily leak the most significant scalar bits, even when implementing Z-coordinate randomisations. We thus propose simple modifications of the algorithm and its handling of the most significant bits, and show the effectiveness of our modifications via experimental results. In particular, our redesigns of the algorithm do not incur significant efficiency penalties. As a second case study, we consider open-source hardware implementations of the Montgomery ladder based on the complete addition formulas for prime-order elliptic curves, where we observe the exact same leakage. As we explain, the most significant bits in implementations of the complete addition formulas can be protected in an analogous way as we do for Curve25519 in our first case study.
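For orientation, the ladder's bit-scanning order is what makes the most significant bits the exposed ones: the first iterations process the MSBs. A group-agnostic sketch (integer addition stands in for the curve group law; this is the textbook ladder, not the Lopez-Dahab or complete-formula variants studied above):

```python
def montgomery_ladder(k: int, P):
    """Textbook Montgomery ladder; R1 - R0 == P is invariant throughout."""
    R0, R1 = 0, P                      # 0 stands in for the group identity
    for i in reversed(range(k.bit_length())):   # MSB first
        if (k >> i) & 1:
            R0, R1 = R0 + R1, R1 + R1  # "add" then "double"
        else:
            R0, R1 = R0 + R0, R0 + R1
    return R0

assert montgomery_ladder(45, 7) == 45 * 7  # scalar multiplication in (Z, +)
```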
A Modular Approach to the Incompressibility of Block-Cipher-Based AEADs
Incompressibility is one of the most fundamental security goals in white-box cryptography.
Given recent advances in the design of efficient and incompressible block ciphers such as SPACE, SPNbox and WhiteBlock,
we demonstrate the feasibility of reducing incompressible AEAD modes to incompressible block ciphers.
We first observe that several existing AEAD modes of operation, including CCM, GCM(-SIV), and OCB, would all be insecure against white-box adversaries even when used with an incompressible block cipher.
This motivates us to revisit and formalize incompressibility-based security definitions for AEAD schemes and for block ciphers, so that we can design modes and reduce their security to that of the underlying ciphers.
Our new security notion for AEAD, which we name whPRI, is an extension of the pseudo-random injection security in the black-box setting.
Similar security notions are also defined for other cryptosystems such as privacy-only encryption schemes.
We emphasize that whPRI ensures quite strong authenticity against white-box adversaries:
existential unforgeability beyond leakage.
This contrasts sharply with previous notions which have ensured either no authenticity or only universal unforgeability.
For the underlying ciphers we introduce a new notion of whPRP, which extends that of PRP in the black-box setting.
Interestingly, our incompressibility reductions follow from a variant of public indifferentiability.
In particular, we show that a practical whPRI-secure AEAD mode can be built from a whPRP-secure block cipher: We present a SIV-like composition of the sponge construction (utilizing a block cipher as its underlying primitive) with the counter mode and prove that such a construction is (in the variant sense) public indifferentiable from a random injection.
To instantiate such an AEAD scheme, we propose a 256-bit variant of SPACE, based on our conjecture that SPACE should be a whPRP-secure cipher.
Functional Encryption with Secure Key Leasing
Secure software leasing is a quantum cryptographic primitive that enables us to lease software to a user by encoding it into a quantum state. Secure software leasing has a mechanism that verifies whether returned software is valid or not. The security notion guarantees that once a user returns the software in a valid form, the user can no longer use it.
In this work, we introduce the notion of secret-key functional encryption (SKFE) with secure key leasing, where a decryption key can be securely leased in the sense of secure software leasing. We also instantiate it with standard cryptographic assumptions. More specifically, our contribution is as follows.
- We define the syntax and security definitions for SKFE with secure key leasing.
- We achieve a transformation from standard SKFE into SKFE with secure key leasing without using additional assumptions. In particular, we obtain bounded collusion-resistant SKFE for $\mathsf{P/poly}$ with secure key leasing based on post-quantum one-way functions, since bounded collusion-resistant SKFE for $\mathsf{P/poly}$ can be instantiated from that assumption.
Some previous secure software leasing schemes capture only pirate software that runs on an honest evaluation algorithm (on a legitimate platform). However, our secure key leasing notion captures arbitrary attack strategies and does not have such a limitation.
As an additional contribution, we introduce the notion of single-decryptor FE (SDFE), where each functional decryption key is copy-protected. Since copy-protection is a stronger primitive than secure software leasing, this notion can be seen as a stronger cryptographic primitive than FE with secure key leasing. More specifically, our additional contribution is as follows.
- We define the syntax and security definitions for SDFE.
- We achieve collusion-resistant single-decryptor PKFE for $\mathsf{P/poly}$ from post-quantum indistinguishability obfuscation and quantum hardness of the learning with errors problem.
Flashproofs: Efficient Zero-Knowledge Arguments of Range and Polynomial Evaluation with Transparent Setup
We propose Flashproofs, a new type of efficient special honest verifier zero-knowledge arguments with a transparent setup in the discrete logarithm (DL) setting. First, we put forth gas-efficient range arguments that achieve $O(N^{\frac{2}{3}})$ communication cost, and involve $O(N^{\frac{2}{3}})$ group exponentiations for verification and a slightly sub-linear number of group exponentiations for proving with respect to the range $[0, 2^N-1]$, where $N$ is the bit length of the range. For typical confidential transactions on blockchain platforms supporting smart contracts, verifying our range arguments consumes only 234K and 315K gas for 32-bit and 64-bit ranges, which is comparable to the 220K gas incurred by verifying the most efficient zkSNARK with a trusted setup (EUROCRYPT 16) at present. Besides, the aggregation of multiple arguments can yield further efficiency improvements. Second, we present polynomial evaluation arguments based on the techniques of Bayer & Groth (EUROCRYPT 13). We provide two zero-knowledge arguments, which are optimised for lower-degree ($D \in [3, 2^9]$) and higher-degree ($D > 2^9$) polynomials, where $D$ is the polynomial degree. Our arguments yield a non-trivial improvement in the overall efficiency. Notably, the number of group exponentiations for proving drops from $8\log D$ to $3(\log D+\sqrt{\log D})$. The communication cost and the number of group exponentiations for verification decrease from $7\log D$ to $(\log D + 3\sqrt{\log D})$. To the best of our knowledge, our arguments instantiate the most communication-efficient arguments of membership and non-membership in the DL setting among those not requiring trusted setups. More importantly, our techniques enable a significant asymptotic improvement in the efficiency of communication and verification (group exponentiations) from $O(\log D)$ to $O(\sqrt{\log D})$ when multiple arguments satisfying different polynomials with the same degree and inputs are aggregated.
Eureka: A General Framework for Black-box Differential Privacy Estimators
Differential privacy (DP) is a key tool in privacy-preserving data analysis. Yet it remains challenging for non-privacy-experts to prove the DP of their algorithms. We propose a methodology for domain experts with limited data privacy background to empirically estimate the privacy of an arbitrary mechanism. Our Eureka moment is a new link---which we prove---between the problems of DP parameter-estimation and Bayes optimal classifiers in ML, which we believe can be of independent interest. Our estimator uses this link to achieve two desirable properties: (1) black-box, i.e., it does not require knowledge of the underlying mechanism, and (2) it has a theoretically-proven accuracy, depending on the underlying classifier used, allowing plug-and-play use of different classifiers.
More concretely, motivated by the impossibility of the above task for unrestricted input domains (which we prove), we introduce a natural, application-inspired relaxation of DP which we term relative DP. Intuitively, relative DP defines a mechanism's privacy relative to an input set $T$, circumventing the above impossibility when $T$ is finite. Importantly, it preserves the key intuitive privacy guarantee of DP while enjoying a number of desirable DP properties---scalability, composition, and robustness to post-processing. We then devise a black-box poly-time $(\epsilon,\delta)$-relative DP estimator for any poly-size $T$---the first privacy estimator to support mechanisms with large output spaces while having tight accuracy bounds. As a result of independent interest, we generalize our theory to develop the first Distributional Differential Privacy (DDP) estimator.
We benchmark our estimator in a proof-of-concept implementation. First, using kNN as the classifier we show that our method (1) produces a tight, analytically computed $(\epsilon, \delta)$-DP trade-off of low-dimensional Laplace and Gaussian mechanisms---the first to do so, (2) accurately estimates the privacy spectrum of DDP mechanisms, and (3) can verify a DP mechanism's implementations, e.g., Sparse Vector Technique, Noisy Histogram, and Noisy max. Our implementation and experiments demonstrate the potential of our framework, and highlight its computational bottlenecks in estimating DP, e.g., in terms of the size of $\delta$ and the data dimensionality. Our second, neural-network-based instantiation makes a first step in showing that our method can be extended to mechanisms with high-dimensional outputs.
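To illustrate the classifier link in its simplest form, here is a hedged toy sketch: sample a one-dimensional Laplace mechanism on two neighboring inputs, train a kNN classifier to tell the outputs apart, and convert held-out accuracy into a pure-$\epsilon$ lower-bound estimate via $\mathrm{acc} \le e^\epsilon/(1+e^\epsilon)$. The mechanism, parameters, and accuracy-to-$\epsilon$ conversion are our illustrative choices; the paper's estimator handles $(\epsilon,\delta)$ trade-offs and comes with accuracy guarantees that this sketch omits.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n, eps_true = 50_000, 1.0
lap = lambda loc: rng.laplace(loc, 1.0 / eps_true, n)  # Laplace mech., sensitivity 1

X = np.concatenate([lap(0.0), lap(1.0)]).reshape(-1, 1)  # neighboring inputs 0, 1
y = np.concatenate([np.zeros(n), np.ones(n)])
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.5, random_state=0)

acc = KNeighborsClassifier(n_neighbors=51).fit(Xtr, ytr).score(Xte, yte)
eps_hat = np.log(acc / (1.0 - acc))  # pure-DP bound: acc <= e^eps / (1 + e^eps)
print(f"accuracy={acc:.3f}, epsilon lower-bound estimate={eps_hat:.2f} (true={eps_true})")
```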
On Rejection Sampling in Lyubashevsky's Signature Scheme
Lyubashevsky’s signatures are based on the Fiat-Shamir with aborts paradigm, whose central ingredient is the use of rejection sampling to transform secret-dependent signature samples into samples from (or close to) a secret-independent target distribution. Several choices for the underlying distributions and for the rejection sampling strategy can be considered. In this work, we study Lyubashevsky’s signatures through the lens of rejection sampling, and aim to minimize signature size given signing runtime requirements. Several of our results concern rejection sampling itself and could have other applications.
We prove lower bounds for compactness of signatures given signing runtime requirements, and for expected runtime of perfect rejection sampling strategies. We also propose a Rényi-divergence-based analysis of Lyubashevsky’s signatures which allows for larger deviations from the target distribution, and show hyperball uniforms to be a good choice of distributions: they asymptotically reach our compactness lower bounds and offer interesting features for practical deployment. Finally, we propose a different rejection sampling strategy which circumvents the expected runtime lower bound and provides a worst-case runtime guarantee.
Fully-Secure MPC with Minimal Trust
The task of achieving full security (with guaranteed output delivery) in secure multiparty computation (MPC) is a long-studied problem. Known impossibility results (Cleve, STOC 86) rule out general solutions in the dishonest majority setting. In this work, we consider solutions that use an external trusted party (TP) to bypass the impossibility results, and study the minimal requirements needed from this trusted party. In particular, we restrict ourselves to the extreme setting where the size of the TP is independent of the size of the functionality to be computed (called “small” TP) and this TP is invoked only once during the protocol execution. We present several positive and negative results for fully-secure MPC in this setting.
-- For a natural class of protocols, specifically, those with a universal output decoder, we show that the size of the TP must necessarily be exponential in the number of parties. This result holds irrespective of the computational assumptions used in the protocol. The class of protocols to which our lower bound applies is broad enough to capture prior results in the area, implying that the prior techniques necessitate the use of an exponential-sized TP. We additionally rule out the possibility of achieving information-theoretic full security (without the restriction of using a universal output decoder) using a “small” TP in the plain model (i.e., without any setup).
-- In order to get around the above negative result, we consider protocols without a universal output decoder. The main positive result in our work is a construction of such a fully-secure MPC protocol assuming the existence of a succinct Functional Encryption scheme. We also give evidence that such an assumption is likely to be necessary for fully-secure MPC in certain restricted settings.
-- Finally, we explore the possibility of achieving full-security with a semi-honest TP that could collude with other malicious parties (which form a dishonest majority). In this setting, we show that even fairness is impossible to achieve regardless of the “small TP” requirement.
Peek into the Black-Box: Interpretable Neural Network using SAT Equations in Side-Channel Analysis
Deep neural networks (DNNs) have become a significant threat to the security of cryptographic implementations with regard to side-channel analysis (SCA), as they automatically combine the leakages without any preprocessing needed, leading to more efficient attacks. However, these DNNs for SCA remain mostly black-box algorithms that are very difficult to interpret. Benamira \textit{et al.} recently proposed an interpretable neural network called Truth Table Deep Convolutional Neural Network (TT-DCNN), which is both expressive and easier to interpret. In particular, a TT-DCNN has a transparent inner structure that can entirely be transformed into SAT equations after training.
In this work, we analyze the SAT equations extracted from a TT-DCNN when applied in SCA context, eventually obtaining the rules and decisions that the neural networks learned when retrieving the secret key from the cryptographic primitive (i.e., exact formula). As a result, we can pinpoint the critical rules that the neural network uses to locate the exact Points of Interest (PoIs). We validate our approach first on simulated traces for higher-order masking. However, applying TT-DCNN on real traces is not straightforward. We propose a method to adapt TT-DCNN for application on real SCA traces containing thousands of sample points. Experimental validation is performed on software-based ASCADv1 and hardware-based AES\_HD\_ext datasets. In addition, TT-DCNN is shown to be able to learn the exact countermeasure in a best-case setting.
Identity-Based Matchmaking Encryption from Standard Assumptions
In this work, we propose the first identity-based matchmaking encryption (IB-ME) scheme that is secure under standard assumptions in the standard model. The scheme is proven secure under the symmetric external Diffie-Hellman (SXDH) assumption in prime-order bilinear pairing groups. In our IB-ME scheme, all parameters consist of a constant number of group elements and are simpler than those of previous constructions. Previous works are either in the random oracle model or based on q-type assumptions, while ours is built directly in the standard model from static assumptions and does not rely on other crypto tools.
More concretely, our IB-ME is constructed from a variant of two-level anonymous IBE. We observe that this two-level IBE with anonymity and unforgeability provides the same functionality as IB-ME, and that its security properties neatly meet the two requirements of IB-ME (privacy and authenticity). The privacy property of IB-ME relies on the anonymity of the two-level IBE, while the authenticity property corresponds to unforgeability at the second level. This variant of two-level IBE is built from dual pairing vector spaces, and both security reductions rely on dual system encryption.
On Generalizations of the Lai-Massey Scheme
In this paper, we re-investigate the Lai-Massey scheme, originally proposed in the cipher IDEA. Due to its similarity with Feistel networks, and due to the existence of invariant subspace attacks, as originally pointed out by Vaudenay at FSE 1999, the Lai-Massey scheme has received little attention from the community. As a first contribution, we propose two new generalizations of the scheme that are not (extended-)affine equivalent to any generalized Feistel network proposed in the literature so far. Then, inspired by the recent Horst construction, we propose the Amaryllises structure as a generalization of the Lai-Massey scheme, in which the linear combination of the Lai-Massey scheme can be replaced by a non-linear one. Besides proposing concrete examples of the Amaryllises construction, we analyze its cryptographic properties and compare them with those of other existing schemes/constructions in the literature. Our results show that the Amaryllises construction could have concrete advantages, especially in the context of MPC-/FHE-/ZK-friendly primitives.
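For orientation, here is a minimal sketch of the classical (XOR-based) Lai-Massey round: both halves are offset by $F$ applied to their difference, and an orthomorphism on one half blocks the trivial invariant. The round function and orthomorphism choices are illustrative stand-ins, not the Amaryllises construction.

```python
import hashlib

def F(z: int, k: int) -> int:
    """Illustrative keyed round function; any PRF-like map would do here."""
    return int.from_bytes(hashlib.sha256((z ^ k).to_bytes(4, "big")).digest()[:4], "big")

def sigma(x: int) -> int:
    """Orthomorphism on 32 bits, (l, r) -> (r, l ^ r): both sigma and
    sigma(x) ^ x are bijections."""
    l, r = x >> 16, x & 0xFFFF
    return (r << 16) | (l ^ r)

def lai_massey_round(x: int, y: int, k: int):
    t = F(x ^ y, k)                    # F of the XOR-difference of the halves
    return sigma(x ^ t), y ^ t         # without sigma, x ^ y would be invariant

x, y = 0xDEADBEEF, 0x01234567
for rk in (11, 22, 33):                # three rounds with toy round keys
    x, y = lai_massey_round(x, y, rk)
print(hex(x), hex(y))
```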
A Modular Approach to the Security Analysis of Two-Permutation Constructions
Constructions based on two public permutation calls are very common in today’s cryptographic community. However, each time a new construction is introduced, a dedicated proof must be carried out to study the security of the construction. In this work, we propose a new tool to analyze the security of these constructions in a modular way. This tool is built on the idea of the classical mirror theory for block cipher based constructions, such that it can be used for security proofs in the ideal permutation model. We present different variants of this public permutation mirror theory such that it is suitable for different security notions.
We also present a framework to use the new techniques, which provides the bad events that need to be excluded in order to apply the public permutation mirror theory. Furthermore, we showcase the new technique on three examples: the Tweakable Even-Mansour cipher by Cogliati et al. (CRYPTO ’15), the two permutation variant of the pEDM PRF by Dutta et al. (ToSC ’21(2)), and the two permutation variant of the nEHtM\(_p\) MAC algorithm by Dutta and Nandi (AFRICACRYPT ’20). With this new tool we prove the multi-user security of these constructions in a considerably simplified way.
Hybrid scalar/vector implementations of Keccak and SPHINCS+ on AArch64
This paper presents two new techniques for the fast implementation of the Keccak permutation on the A-profile of the Arm architecture: first, the elimination of explicit rotations in the Keccak permutation through barrel shifting, applicable to scalar AArch64 implementations of Keccak-f1600; second, the construction of hybrid implementations concurrently leveraging both the scalar and the Neon instruction sets of AArch64. The resulting performance improvements are demonstrated on the example of the hash-based signature scheme SPHINCS+, one of the recently announced winners of the NIST post-quantum cryptography project: we achieve up to 1.89× performance improvements compared to the state of the art. Our implementations target the Arm Cortex-{A55,A510,A78,A710,X1,X2} processors common in client devices such as mobile phones.
Data Protection Law and Multi-Party Computation: Applications to Information Exchange between Law Enforcement Agencies
Pushes to increase the power of Law Enforcement (LE) for data retention and centralized storage have resulted in legal challenges under data protection law and in the courts, and in possible violations of the right to privacy. These pushes are motivated by a desire for better cooperation and exchange between LE Agencies (LEAs), which is difficult due to data protection regulations, has been identified as a main factor in major public security failures, and is a frequent criticism of LE.
Secure Multi-Party Computation (MPC) is often seen as a technological means to solve privacy conflicts where actors want to exchange and analyze data that needs to be protected due to data protection laws. In this interdisciplinary work, we investigate the problem of private information exchange between LEAs from both a legal and technical angle. We give a legal analysis of secret-sharing based MPC techniques in general and, as a particular application scenario, consider the case of matching LE databases for lawful information exchange between LEAs. We propose a system for lawful information exchange between LEAs using MPC and private set intersection and show its feasibility by giving a legal analysis for data protection and a technical analysis for workload complexity. Towards practicality, we present insights from qualitative feedback gathered within exchanges with a major European LEA.
Continued Fractions Applied to a Family of RSA-like Cryptosystems
Let $N=pq$ be the product of two balanced prime numbers $p$ and $q$. Murru and Saettone presented in 2017 an interesting RSA-like cryptosystem that uses the key equation $ed - k (p^2+p+1)(q^2+q+1) = 1$, instead of the classical RSA key equation $ed - k (p-1)(q-1) = 1$. The authors claimed that their scheme is immune to Wiener's continued fraction attack. Unfortunately, Nitaj \emph{et al.} developed exactly such an attack. In this paper, we introduce a family of RSA-like encryption schemes that uses the key equation $ed - k [(p^n-1)(q^n-1)]/[(p-1)(q-1)] = 1$, where $n>1$ is an integer. Then, we show that regardless of the choice of $n$, there exists an attack based on continued fractions that recovers the secret exponent.
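For intuition, here is a minimal Python sketch of the continued-fraction (Wiener-style) attack on the classical RSA key equation $ed - k (p-1)(q-1) = 1$, the case this family generalizes; the toy parameters and the known-plaintext check with message 2 are our illustrative choices, not taken from the paper.

def contfrac(a, b):
    # Continued fraction expansion of a/b.
    while b:
        q = a // b
        yield q
        a, b = b, a - q * b

def convergents(cf):
    # Successive convergents k/d of a continued fraction.
    num1, num0, den1, den0 = 1, 0, 0, 1
    for q in cf:
        num1, num0 = q * num1 + num0, num1
        den1, den0 = q * den1 + den0, den1
        yield num1, den1

def wiener_style_attack(e, N):
    # If d is small enough, k/d appears among the convergents of e/N;
    # test each candidate d with a known-plaintext check on message 2.
    for k, d in convergents(contfrac(e, N)):
        if d > 0 and pow(pow(2, e, N), d, N) == 2:
            return d
    return None

# Toy demonstration with an artificially small private exponent:
p, q = 104729, 104723
N, phi = p * q, (p - 1) * (q - 1)
d = 101                      # d < N^(1/4)/3, so Wiener's bound applies
e = pow(d, -1, phi)
assert wiener_style_attack(e, N) == d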
Sherlock Holmes Zero-Knowledge Protocols
We present two simple zero-knowledge interactive proofs that can be instantiated with many of the standard decisional or computational hardness assumptions. Compared with traditional zero-knowledge proofs, in our protocols the verifier starts first by emitting a challenge, and the prover then answers the challenge.
Improving Bounds on Elliptic Curve Hidden Number Problem for ECDH Key Exchange
Elliptic Curve Hidden Number Problem (EC-HNP) was first introduced by Boneh, Halevi and Howgrave-Graham at Asiacrypt 2001. To rigorously assess the bit security of the Diffie--Hellman key exchange with elliptic curves (ECDH), the Diffie--Hellman variant of EC-HNP, regarded as an elliptic curve analogy of the Hidden Number Problem (HNP), was presented at PKC 2017. This variant can also be used for practical cryptanalysis of ECDH key exchange in the situation of side-channel attacks.
In this paper, we revisit the Coppersmith method for solving the involved modular multivariate polynomials in the Diffie--Hellman variant of EC-HNP and demonstrate that, for any given positive integer $d$, a given sufficiently large prime $p$, and a fixed elliptic curve over the prime field $\mathbb{F}_p$, if there is an oracle that outputs about $\frac{1}{d+1}$ of the most (least) significant bits of the $x$-coordinate of the ECDH key, then one can give a heuristic algorithm to compute all the bits within polynomial time in $\log_2 p$. When $d>1$, the heuristic result $\frac{1}{d+1}$ significantly outperforms both the rigorous bound $\frac{5}{6}$ and the heuristic bound $\frac{1}{2}$. Due to the heuristics involved in the Coppersmith method, we do not obtain a rigorous bit-security result for ECDH on a fixed curve. However, we experimentally verify the effectiveness of the heuristics on NIST curves for small-dimension lattices.
Secure Quantum Bit Commitment
Bit commitment (BC) is one of the most important fundamental protocols in secure multi-party computation. However, it is generally believed that unconditionally secure bit commitment is impossible even with quantum resources. In this paper, we design a secure non-interactive bit commitment protocol by exploiting the no-communication theorem for quantum entangled states, whose security relies on the indistinguishability of whether the Bell states are measured or not. The proposed quantum bit commitment (QBC) is secure against classical adversaries with unlimited computing power, and the probability of a successful attack by quantum adversaries decreases exponentially as $n$ (the number of qubits in a group) increases.
On the Worst-Case Inefficiency of CGKA
Continuous Group Key Agreement (CGKA) is the basis of modern Secure Group Messaging (SGM) protocols. At a high level, a CGKA protocol enables a group of users to continuously compute a shared (evolving) secret while members of the group add new members, remove other existing members, and perform state updates. The state updates allow CGKA to offer desirable security features such as forward secrecy and post-compromise security.
CGKA is regarded as a practical primitive in the real world. Indeed, there is an IETF Messaging Layer Security (MLS) working group devoted to developing a standard for SGM protocols, including the CGKA protocol at their core. Though known CGKA protocols seem to perform relatively well on natural sequences of group operations, there are no formal guarantees on their efficiency, other than the $O(n)$ bound that trivial protocols achieve, where $n$ is the number of group members. In this context, we ask the following questions and provide negative answers.
1. Can we have CGKA protocols that are efficient in the worst case?
We start by answering this basic question in the negative. First, we show that a natural primitive that we call Compact Key Exchange (CKE) is at the core of CGKA, and thus tightly captures CGKA's worst-case communication cost. Intuitively, CKE requires that: first, $n$ users non-interactively generate key pairs and broadcast their public keys, then, some other special user securely communicates to these $n$ users a shared key. Next, we show that CKE with communication cost $o(n)$ by the special user cannot be realized in a black-box manner from public-key encryption, thus implying the same for CGKA, where $n$ is the corresponding number of group members. Surprisingly, this impossibility holds even in an offline setting, where parties have access to the sequence of group operations in advance.
2. Can we realize one CGKA protocol that works as well as possible in all cases?
Here again, we present negative evidence showing that no such protocol based on black-box use of public-key encryption exists. Specifically, we show two distributions over sequences of group operations such that no CGKA protocol obtains optimal communication costs on both sequences.
Rate-1 Non-Interactive Arguments for Batch-NP and Applications
We present a rate-$1$ construction of a publicly verifiable non-interactive argument system for batch-$\mathsf{NP}$ (also called a BARG), under the LWE assumption. Namely, a proof corresponding to a batch of $k$ NP statements, each with an $m$-bit witness, has size $m + \mathsf{poly}(\lambda,\log k)$.
In contrast, prior work either relied on non-standard knowledge assumptions, or produced proofs of size $m \cdot \mathsf{poly}(\lambda,\log k)$ (Choudhuri, Jain, and Jin, STOC 2021, following Kalai, Paneth, and Yang 2019).
We show how to use our rate-$1$ BARG scheme to obtain the following results, all under the LWE assumption in the standard model:
- A multi-hop BARG scheme for $\mathsf{NP}$.
- A multi-hop aggregate signature scheme.
- An incrementally verifiable computation (IVC) scheme for arbitrary $T$-time deterministic computations with proof size $\mathsf{poly}(\lambda,\log T)$.
Prior to this work, multi-hop BARGs were only known under non-standard knowledge assumptions or in the random oracle model; aggregate signatures were only known under indistinguishability obfuscation (and RSA) or in the random oracle model; IVC schemes with proofs of size $\mathsf{poly}(\lambda,T^{\epsilon})$ were known under a bilinear map assumption, and with proofs of size $\mathsf{poly}(\lambda,\log T)$ were only known under non-standard knowledge assumptions or in the random oracle model.
QCCA-Secure Generic Transformations in the Quantum Random Oracle Model
The post-quantum security of cryptographic schemes assumes that the quantum adversary only receives the classical result of computations with the secret key. Furthermore, it is unknown whether post-quantum secure schemes remain secure if the adversary can obtain a superposition of the results.
In this paper, we formalize a class of public-key encryption schemes called oracle-masked schemes, and we define a plaintext extraction procedure for these schemes that simulates the quantum-accessible decryption oracle with a certain loss.
The construction of the plaintext extraction procedure does not need to take the secret key as input. Based on this property, we prove the IND-qCCA security of the Fujisaki-Okamoto (FO) transformation in the quantum random oracle model (QROM) and our security proof is tighter than the proof given by Zhandry (Crypto 2019). We also give the first IND-qCCA security proof of the REACT transformation in the QROM.
Furthermore, our formalization can be applied to prove the IND-qCCA security of key encapsulation mechanisms with explicit rejection. As an example, we present the IND-qCCA security proof of the $\textsf{T}_{\textsf{CH}}$ transformation, proposed by Huguenin-Dumittan and Vaudenay (Eurocrypt 2022), in the QROM.
Towards Tight Security Bounds for OMAC, XCBC and TMAC
OMAC, a single-keyed variant of CBC-MAC by Iwata and Kurosawa, is a widely used and standardized (NIST SP 800-38B, ISO/IEC 29167-10:2017) message authentication code (MAC) algorithm. The best security bound for OMAC is due to Nandi, who proved that OMAC's pseudorandom function (PRF) advantage is upper bounded by $O(q^2\ell/2^n)$, where $n$, $q$, and $\ell$ denote the block size of the underlying block cipher, the number of queries, and the maximum permissible query length (in terms of $n$-bit blocks), respectively. In contrast, there is no attack with a matching lower bound. Indeed, the best known attack on OMAC is the folklore birthday attack achieving a lower bound of $\Omega(q^2/2^n)$. In this work, we close this gap for a large range of message lengths. Specifically, we show that OMAC's PRF security is upper bounded by $O(q^2/2^n + q\ell^2/2^n)$. In practical terms, this means that for a $128$-bit block cipher and message lengths up to $64$ gigabytes, OMAC can process up to $2^{64}$ messages before rekeying (same as the birthday bound). In comparison, the previous bound only allows $2^{48}$ messages. As a side effect of our proof technique, we also derive similar tight security bounds for XCBC (by Black and Rogaway) and TMAC (by Kurosawa and Iwata). As a direct consequence of this work, we have established tight security bounds (in a wide range of $\ell$) for all the CBC-MAC variants, except for the original CBC-MAC.
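As a reference point for the object being analyzed, the following Python sketch implements the OMAC1/CMAC tagging flow, instantiated with AES from the pycryptodome package (an assumed dependency); it shows the single-key CBC chain and the doubling-based subkeys, not the paper's proof.

from Crypto.Cipher import AES  # pycryptodome, assumed available

def _dbl(block):
    # Doubling in GF(2^128) modulo the standard CMAC polynomial.
    x = int.from_bytes(block, "big") << 1
    if x >> 128:
        x ^= 0x87
    return (x & ((1 << 128) - 1)).to_bytes(16, "big")

def _xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def omac_tag(key, msg):
    ecb = AES.new(key, AES.MODE_ECB)
    k1 = _dbl(ecb.encrypt(bytes(16)))  # subkey for a complete final block
    k2 = _dbl(k1)                      # subkey for a padded final block
    if msg and len(msg) % 16 == 0:
        blocks = [msg[i:i + 16] for i in range(0, len(msg), 16)]
        blocks[-1] = _xor(blocks[-1], k1)
    else:
        msg = msg + b"\x80" + bytes((15 - len(msg) % 16) % 16)
        blocks = [msg[i:i + 16] for i in range(0, len(msg), 16)]
        blocks[-1] = _xor(blocks[-1], k2)
    state = bytes(16)                  # CBC chain under a single AES key
    for b in blocks:
        state = ecb.encrypt(_xor(state, b))
    return state                       # the MAC tag

# tag = omac_tag(bytes(16), b"message to authenticate")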
Forward-Secure Encryption with Fast Forwarding
Forward-secure encryption (FSE) allows communicating parties to refresh their keys across epochs, in a way that compromising the current secret key leaves all prior encrypted communication secure. We investigate a novel dimension in the design of FSE schemes: fast-forwarding (FF). This refers to the ability of a stale communicating party, one that is "stuck" in an old epoch, to efficiently "catch up" to the newest state, and it frequently arises in practice. While this dimension was not explicitly considered in prior work, we observe that one can augment prior FSEs, both in the symmetric- and public-key settings, to support fast-forwarding that is sublinear in the number of epochs. However, the resulting schemes have disadvantages: the symmetric-key scheme is slower than any conventional stream cipher by a factor of the security parameter, while the public-key scheme inherits the inefficiencies of HIBE-based forward-secure PKE.
To address these inefficiencies, we look at the common real-life situation which we call the bulletin board model, where communicating parties rely on some infrastructure -- such as an application provider -- to help them store and deliver ciphertexts to each other. We then define and construct FF-FSE in the bulletin board model, which addresses the above-mentioned disadvantages. In particular,
* Our FF-stream-cipher in the bulletin-board model has: (a) constant state size; (b) constant-time normal (no fast-forward) operation; and (c) logarithmic-time fast-forwarding. This essentially matches the efficiency of non-fast-forwardable stream ciphers, at the cost of constant communication complexity with the bulletin board per update. (A generic illustration of logarithmic fast-forwarding is sketched after this list.)
* Our public-key FF-FSE avoids HIBE-based techniques by instead using so-called updatable public-key encryption (UPKE), introduced in several recent works (and more efficient than public-key FSEs). Our UPKE-based scheme uses a novel type of "update graph" that we construct in this work. Our graph has constant in-degree, logarithmic diameter, and logarithmic "cut property" which is essential for the efficiency of our schemes. Combined with recent UPKE schemes, we get two FF-FSEs in the bulletin board model, under the DDH and the LWE assumptions.
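The logarithmic fast-forward property in the first bullet can be illustrated with a standard GGM-style key tree, sketched below in Python; this is a generic textbook illustration assuming an HMAC-SHA256 PRF, not the paper's bulletin-board construction (which additionally needs constant state and forward security, hence the custom update graph).

import hmac, hashlib

def _prf(key, label):
    return hmac.new(key, label, hashlib.sha256).digest()

def epoch_key(root, epoch, depth=32):
    # Descend from the tree root to the leaf indexed by `epoch`:
    # O(depth) = O(log #epochs) PRF calls, independent of how far
    # behind the fast-forwarding party is.
    k = root
    for i in reversed(range(depth)):
        k = _prf(k, b"right" if (epoch >> i) & 1 else b"left")
    return k

# A party stuck at epoch 5 derives the key for epoch 1_000_000 with
# 32 PRF calls rather than ~10^6 sequential ratchet steps. (For forward
# security, a real scheme keeps only co-path keys and erases the root,
# which this toy sketch does not model.)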
The Abe-Okamoto Partially Blind Signature Scheme Revisited
Partially blind signatures, an extension of ordinary blind signatures, are a primitive with wide applications in e-cash and electronic voting. One of the most efficient schemes to date is the one by Abe and Okamoto (CRYPTO 2000), whose underlying idea - the OR-proof technique - has served as the basis for several works.
We point out several subtle flaws in the original proof of security and provide a new detailed and rigorous proof, achieving similar bounds as the original work. We believe our insights on the proof strategy will prove useful in the security analyses of other OR-proof-based schemes.
Continuously Non-Malleable Codes against Bounded-Depth Tampering
Non-malleable codes (Dziembowski, Pietrzak and Wichs, ICS 2010 & JACM 2018) allow protecting arbitrary cryptographic primitives against related-key attacks (RKAs). Even when using codes that are guaranteed to be non-malleable against a single tampering attempt, one obtains RKA security against poly-many tampering attacks at the price of assuming perfect memory erasures. In contrast, continuously non-malleable codes (Faust, Mukherjee, Nielsen and Venturi, TCC 2014) do not suffer from this limitation, as the non-malleability guarantee holds against poly-many tampering attempts.
Unfortunately, there are only a handful of constructions of continuously non-malleable codes, while standard non-malleable codes are known for a large variety of tampering families including, e.g., NC0 and decision-tree tampering, AC0, and recently even bounded polynomial-depth tampering. We change this state of affairs by providing the first constructions of continuously non-malleable codes in the following natural settings:
- Against decision-tree tampering, where, in each tampering attempt, every bit of the tampered codeword can be set arbitrarily after adaptively reading up to $d$ locations within the input codeword. Our scheme is in the plain model, can be instantiated assuming the existence of one-way functions, and tolerates tampering by decision trees of depth $d = O(n^{1/8})$, where $n$ is the length of the codeword. Notably, this class includes NC0.
- Against bounded polynomial-depth tampering, where in each tampering attempt the adversary can select any tampering function that can be computed by a circuit of bounded polynomial depth (and unbounded polynomial size). Our scheme is in the common reference string model, and can be instantiated assuming the existence of time-lock puzzles and simulation-extractable (succinct) non-interactive zero-knowledge proofs.
Group Action Key Encapsulation and Non-Interactive Key Exchange in the QROM
In the context of quantum-resistant cryptography, cryptographic group actions offer an abstraction of isogeny-based cryptography in the Commutative Supersingular Isogeny Diffie-Hellman (CSIDH) setting. In this work, we revisit the security of two previously proposed natural protocols: the Group Action Hashed ElGamal key encapsulation mechanism (GA-HEG KEM) and the Group Action Hashed Diffie-Hellman non-interactive key-exchange (GA-HDH NIKE) protocol. The latter protocol has already been considered for use in practical protocols such as Post-Quantum WireGuard (S&P '21) and OPTLS (CCS '20).
We prove that active security of the two protocols in the Quantum Random Oracle Model (QROM) inherently relies on very strong variants of the Group Action Strong CDH problem, where the adversary is given arbitrary quantum access to a DDH oracle. That is, quantum accessible Strong CDH assumptions are not only sufficient but also necessary to prove active security of the GA-HEG KEM and the GA-HDH NIKE protocols.
Furthermore, we propose variants of the protocols with QROM security from the classical Strong CDH assumption, i.e., CDH with classical access to the DDH oracle. Our first variant uses key confirmation and can therefore only be applied in the KEM setting. Our second but considerably less efficient variant is based on the twinning technique by Cash et al. (EUROCRYPT '08) and in particular yields the first actively secure isogeny-based NIKE with QROM security from the standard CDH assumption.
Cumulatively All-Lossy-But-One Trapdoor Functions from Standard Assumptions
Chakraborty, Prabhakaran, and Wichs (PKC'20) recently introduced a new tag-based variant of lossy trapdoor functions, termed cumulatively all-lossy-but-one trapdoor functions (CALBO-TDFs). Informally, CALBO-TDFs allow defining a public tag-based function with a (computationally hidden) special tag, such that the function is lossy for all tags except when the special secret tag is used. In the latter case, the function becomes injective and efficiently invertible using a secret trapdoor. This notion has been used to obtain advanced constructions of signatures with strong guarantees against leakage and tampering, and also by Dodis, Vaikuntanathan, and Wichs (EUROCRYPT'20) to obtain constructions of randomness extractors with extractor-dependent sources. While these applications are motivated by practical considerations, the only known instantiation of CALBO-TDFs so far relies on the existence of indistinguishability obfuscation.
In this paper, we propose the first two instantiations of CALBO-TDFs based on standard assumptions. Our constructions are based on the LWE assumption with a sub-exponential approximation factor and on the DCR assumption, respectively, and circumvent the use of indistinguishability obfuscation by relying on lossy modes and trapdoor mechanisms enabled by these assumptions.
SCARF: A Low-Latency Block Cipher for Secure Cache-Randomization
Randomized cache architectures have proven to significantly increase the complexity of contention-based cache side channel attacks and therefore present an important building block for side-channel-secure microarchitectures. By randomizing the address-to-cache-index mapping, attackers can no longer trivially construct minimal eviction sets, which are fundamental for contention-based cache attacks. At the same time, randomized caches maintain the flexibility of traditional caches, making them broadly applicable across various CPU types. This is a major advantage over cache partitioning approaches.
A large variety of randomized cache architectures has been proposed. However, the actual randomization function has received little attention and is often neglected in these proposals. Since the randomization operates directly on the critical path of the cache lookup, the function needs to have extremely low latency. At the same time, attackers must not be able to bypass the randomization, which would nullify the security benefit of the randomized mapping.
In this paper we propose SCARF (Secure CAche Randomization Function), the first dedicated cache randomization cipher which achieves low latency and is cryptographically secure in the cache attacker model. The design methodology for this dedicated cache cipher enters new territory in the field of block ciphers with a small 10-bit block length and heavy key-dependency in few rounds.
How to Sample a Discrete Gaussian (and more) from a Random Oracle
The random oracle methodology is central to the design of many practical cryptosystems. A common challenge faced in several systems is the need for a random oracle that outputs from a structured distribution $\mathcal{D}$, even though most heuristic implementations such as SHA-3 are best suited for outputting bitstrings.
Our work explores the problem of sampling from discrete Gaussian (and related) distributions in a manner that allows them to be programmed into random oracles. We make the following contributions:
-We provide a definitional framework for our results. We say that a sampling algorithm $\mathsf{Sample}$ for a distribution is explainable if there exists an algorithm $\mathsf{Explain}$ where, for an $x$ in the domain, we have that $\mathsf{Explain}(x) \rightarrow r \in \{0,1\}^n$ such that $\mathsf{Sample}(r)=x$. Moreover, if $x$ is sampled from $\mathcal{D}$, the explained distribution is statistically close to choosing $r$ uniformly at random. We consider a variant of this definition that allows the statistical closeness to be a "precision parameter" given to the $\mathsf{Explain}$ algorithm. We show that sampling algorithms which satisfy our explainability property can be programmed as a random oracle.
-We provide a simple algorithm for explaining \emph{any} sampling algorithm that works over distributions with polynomial-sized ranges. This includes discrete Gaussians with small standard deviations. (A toy sketch of this idea follows this list.)
-We show how to transform a (not necessarily explainable) sampling algorithm $\mathsf{Sample}$ for a distribution into a new $\mathsf{Sample}'$ that is explainable. The requirements for doing this are that (1) the probability density function is efficiently computable, and (2) it is possible to efficiently sample uniformly from all elements that have a probability density above a given threshold $p$. This shows the equivalence of random oracles for these distributions and random oracles for uniform bitstrings, and it covers a large class of distributions, including all discrete Gaussians.
-A potential drawback of the previous approach is that the transformation requires an additional computation of the density function. We provide a more customized approach showing that the Micciancio-Walter discrete Gaussian sampler is explainable as is. This suggests that other discrete Gaussian samplers in a similar vein might also be explainable as is.
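To make the explainability notion concrete, here is a toy Python sketch of one plausible reading of the "simple algorithm" bullet, for a truncated discrete Gaussian with a polynomial-sized support; all parameters are illustrative assumptions, not the paper's.

import math, secrets

N_BITS = 16                     # length of the oracle's random string r
SIGMA = 3.2                     # toy discrete Gaussian width
SUPPORT = list(range(-8, 9))    # polynomial-sized (truncated) range

# Cumulative distribution table for the truncated discrete Gaussian.
_w = [math.exp(-x * x / (2 * SIGMA * SIGMA)) for x in SUPPORT]
_tot = sum(_w)
_cdf, _acc = [], 0.0
for _x, _p in zip(SUPPORT, _w):
    _acc += _p / _tot
    _cdf.append((_x, _acc))

def sample(r):
    # Sample(r): inverse-CDF sampling driven by the n-bit string r.
    u = r / float(1 << N_BITS)
    for x, c in _cdf:
        if u < c:
            return x
    return SUPPORT[-1]

def explain(x):
    # Explain(x): draw r uniformly until Sample(r) = x. The expected
    # number of trials is 1/Pr[x], polynomial for this support. If x
    # was sampled from the distribution, the returned r is close to
    # uniform, which is what allows programming the random oracle.
    while True:
        r = secrets.randbelow(1 << N_BITS)
        if sample(r) == x:
            return r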
Algebraic Relation of Three MinRank Algebraic Modelings
We give algebraic relations among equations of three algebraic modelings for MinRank problem: support minors modeling, Kipnis–Shamir modeling and minors modeling.
Hybrid Post-Quantum Signatures in Hardware Security Keys
Recent advances in quantum computing are increasingly jeopardizing the security of cryptosystems currently in widespread use, such as RSA or elliptic-curve signatures. To address this threat, researchers and standardization institutes have accelerated the transition to quantum-resistant cryptosystems, collectively known as Post-Quantum Cryptography (PQC). These PQC schemes present new challenges due to their larger memory and computational footprints and their higher chance of latent vulnerabilities.
In this work, we address these challenges by introducing a scheme to upgrade the digital signatures used by security keys to PQC. We introduce a hybrid digital signature scheme based on two building blocks: a classically secure scheme, ECDSA, and a post-quantum secure one, Dilithium.
Our hybrid scheme maintains the guarantees of each underlying building block even if the other one is broken, thus being resistant to classical and quantum attacks.
We experimentally show that our hybrid signature scheme can successfully execute on current security keys, even though secure PQC schemes are known to require substantial resources.
We publish an open-source implementation of our scheme at https://github.com/google/OpenSK/releases/tag/hybrid-pqc so that other researchers can reproduce our results on an nRF52840 development kit.
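The sign-with-both / verify-both composition can be sketched in a few lines of Python; the ECDSA calls below use the real `cryptography` package API, while `dilithium` is a hypothetical stand-in module for any Dilithium binding (the actual scheme is in the authors' OpenSK release).

import dilithium  # hypothetical Dilithium binding, NOT a real module name
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def hybrid_keygen():
    ec_sk = ec.generate_private_key(ec.SECP256R1())
    pq_pk, pq_sk = dilithium.keygen()              # hypothetical API
    return (ec_sk, pq_sk), (ec_sk.public_key(), pq_pk)

def hybrid_sign(sk, msg):
    ec_sk, pq_sk = sk
    # Sign the same message under both schemes; the signature is the pair.
    return (ec_sk.sign(msg, ec.ECDSA(hashes.SHA256())),
            dilithium.sign(pq_sk, msg))            # hypothetical API

def hybrid_verify(pk, msg, sig):
    ec_pk, pq_pk = pk
    ec_sig, pq_sig = sig
    try:
        ec_pk.verify(ec_sig, msg, ec.ECDSA(hashes.SHA256()))
    except InvalidSignature:
        return False
    # Accept only if BOTH signatures verify: forging a hybrid signature
    # then requires breaking ECDSA and Dilithium simultaneously.
    return dilithium.verify(pq_pk, msg, pq_sig)    # hypothetical API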
From Plaintext-extractability to IND-CCA Security
We say a public-key encryption scheme is plaintext-extractable in the random oracle model if there exists an algorithm that, given access to all input/output queries to the random oracles, can simulate the decryption oracle. We argue that plaintext-extractability is enough to show the indistinguishability under chosen-ciphertext attack (IND-CCA) security of the OAEP+ transform (Shoup, Crypto 2001) when the underlying trapdoor permutation is one-way.
We extend the result to the quantum random oracle model (QROM) and show that OAEP+ is IND-CCA secure in QROM if the underlying trapdoor permutation is quantum one-way.
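The following toy Python sketch (our illustration, not OAEP+ itself) shows the mechanics of plaintext extraction in the classical random oracle model for a simple masked one-way-function encryption: the extractor answers decryption queries purely from the transcript of random-oracle queries, without the secret key.

import hashlib, secrets

def f(r):
    # Stand-in for a one-way (trapdoor) function; SHA-256 suffices here
    # precisely because the extractor never needs to invert it.
    return hashlib.sha256(b"f" + r).digest()

_g_log = {}  # transcript of every random-oracle query to G

def G(r):
    out = hashlib.sha256(b"G" + r).digest()
    _g_log[r] = out
    return out

def encrypt(m):                       # m: 32 bytes
    r = secrets.token_bytes(32)
    return f(r), bytes(a ^ b for a, b in zip(G(r), m))

def extract_plaintext(ct):
    # Scan the oracle transcript for an r consistent with the ciphertext,
    # then unmask. If G(r) was never queried, the mask is still uniformly
    # random from the querier's view, so rejecting loses little.
    c1, c2 = ct
    for r, g in _g_log.items():
        if f(r) == c1:
            return bytes(a ^ b for a, b in zip(g, c2))
    return None

# ct = encrypt(b"A" * 32); extract_plaintext(ct) == b"A" * 32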
Efficient Proofs of Software Exploitability for Real-world Processors
We consider the problem of proving in zero-knowledge the existence of vulnerabilities in executables compiled to run on real-world processors. We demonstrate that it is practical to prove knowledge of real exploits for real-world processor architectures without the need for source code and without limiting our consideration to narrow vulnerability classes. To achieve this, we devise a novel circuit compiler and a toolchain that produce highly optimized, non-interactive zero-knowledge proofs for programs executed on the MSP430, an ISA commonly used in embedded hardware. Our toolchain employs a number of novel optimizations to construct efficient proofs for program binaries. To demonstrate the capability of our system, we test our toolchain by constructing proofs for challenges in the Microcorruption capture-the-flag exercises.
Homomorphic Encryption on GPU
Homomorphic encryption (HE) is a cryptosystem that allows secure processing of encrypted data. One of the most popular HE schemes is Brakerski-Fan-Vercauteren (BFV), which supports somewhat homomorphic encryption (SWHE) and fully homomorphic encryption (FHE). Since the computation-heavy arithmetic operations of HE schemes are amenable to concurrent computation, GPU devices can be instrumental in facilitating the practical use of HE in real-world applications thanks to their superior parallel processing capacity.
This paper presents an optimized and highly parallelized GPU library to accelerate the BFV scheme. The library includes state-of-the-art implementations of the Number Theoretic Transform (NTT) and inverse NTT that minimize GPU kernel function calls. It makes efficient use of the GPU memory hierarchy and computes 128 NTT operations for ring dimension $2^{14}$ in only $176.1~\mu s$ on an RTX 3060Ti GPU. To the best of our knowledge, this is the fastest implementation in the literature. The library also improves the performance of the homomorphic operations of the BFV scheme. Although the library can be used independently, it is also fully integrated with the Microsoft SEAL library, a well-known HE library that also implements the BFV scheme. For one ciphertext multiplication, for ring dimension $2^{14}$ and a modulus bit size of $438$, our GPU implementation offers a $\mathbf{63.4}$ times speedup over the SEAL library running on a high-end CPU. The library compares favorably with other state-of-the-art GPU implementations of the NTT and the BFV operations. Finally, we implement a privacy-preserving application that classifies encrypted genome data for tumor types and achieve speedups of $42.98$ and $5.7$ over CPU implementations using single and 16 threads, respectively. Our results indicate that GPU implementations can facilitate the deployment of homomorphic cryptographic libraries in real-world privacy-preserving applications.
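Since the NTT is the workhorse here, a reference Python implementation of the iterative Cooley-Tukey NTT over $\mathbb{Z}_q$ may help fix ideas; this is the textbook radix-2 algorithm with toy parameters of our choosing, whereas the paper's contribution is its highly parallel GPU realization.

def ntt(a, q, root):
    # Iterative radix-2 Cooley-Tukey NTT over Z_q. len(a) must be a
    # power of two and `root` a primitive len(a)-th root of unity mod q.
    a, n = a[:], len(a)
    j = 0                               # bit-reversal permutation
    for i in range(1, n):
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:                  # butterfly passes
        w_len = pow(root, n // length, q)
        for start in range(0, n, length):
            w = 1
            for k in range(start, start + length // 2):
                u, v = a[k], a[k + length // 2] * w % q
                a[k], a[k + length // 2] = (u + v) % q, (u - v) % q
                w = w * w_len % q
        length <<= 1
    return a

# Toy example: q = 97, and 64 is a primitive 8th root of unity mod 97.
# The inverse NTT is the same routine run with root^-1, scaled by n^-1 mod q.
# y = ntt([1, 2, 3, 4, 0, 0, 0, 0], 97, 64)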
Multi-User Security of the Sum of Truncated Random Permutations (Full Version)
For several decades, constructing pseudorandom functions from pseudorandom permutations, the so-called Luby-Rackoff backward construction, has been a popular cryptographic problem. Two methods are well-known and comprehensively studied for this problem: summing two random permutations and truncating partial bits of the output from a random permutation. In this paper, by combining both summation and truncation, we propose new Luby-Rackoff backward constructions, dubbed SaT1 and SaT2. SaT2 is obtained by partially truncating output bits from the sum of two independent random permutations, and SaT1 is its single-permutation-based variant using domain separation. The distinguishing advantages against SaT1 and SaT2 are upper bounded by $O(\sqrt{\mu q_{\max}}/2^{n-0.5m})$ and $O(\sqrt{\mu} q_{\max}^{1.5}/2^{2n-0.5m})$, respectively, in the multi-user setting, where $n$ is the size of the underlying permutation, $m$ is the output size of the construction, $\mu$ is the number of users, and $q_{\max}$ is the maximum number of queries per user. We also prove that the distinguishing advantage against a variant of XORP[3] (studied by Bhattacharya and Nandi at Asiacrypt 2021) using independent permutations, dubbed SoP3-2, is upper bounded by $O(\sqrt{\mu} q_{\max}^2/2^{2.5n})$. In the multi-user setting with $\mu = O(2^{n-m})$, a truncated random permutation provides only birthday-bound security, while SaT1 and SaT2 are fully secure, i.e., allowing $O(2^n)$ queries for each user. This is the same security level as XORP[3] using three permutation calls, while SaT1 and SaT2 need only two permutation calls.
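For concreteness, the SaT2 data flow (sum two independent permutations, then truncate to $m$ bits) can be sketched as below, with the two random permutations instantiated by independently keyed AES from pycryptodome (our illustrative stand-in); this shows only the construction's shape, not its security analysis.

from Crypto.Cipher import AES  # pycryptodome; AES models the two PRPs

def sat2(k1, k2, x, m_bits):
    # SaT2: XOR the outputs of two independent 128-bit permutations on
    # the same input, then keep the leading m bits of the sum.
    y1 = AES.new(k1, AES.MODE_ECB).encrypt(x)
    y2 = AES.new(k2, AES.MODE_ECB).encrypt(x)
    s = int.from_bytes(bytes(a ^ b for a, b in zip(y1, y2)), "big")
    out = s >> (128 - m_bits)
    return out.to_bytes((m_bits + 7) // 8, "big")

# out = sat2(bytes(16), bytes(range(16)), b"\x00" * 16, 64)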
Permissionless Clock Synchronization with Public Setup
The permissionless clock synchronization problem asks how it is possible for a population of parties to maintain a system-wide synchronized clock, while their participation rate fluctuates, possibly very widely, over time. The underlying assumption is that parties experience the passage of time at roughly the same speed, but they may disengage and engage with the protocol following arbitrary (and even adversarially chosen) participation patterns. This (classical) problem has received renewed attention due to the advent of blockchain protocols, and recently it has been solved in the setting of proof of stake, i.e., when parties are assumed to have access to a trusted PKI setup [Badertscher et al., Eurocrypt '21].
In this work, we present the first proof-of-work (PoW)-based permissionless clock synchronization protocol. Our construction assumes a public setup (e.g., a CRS) and relies on an honest majority of computational power that, for the first time, is described in a fine-grained timing model that does not utilize a global clock that exports the current time to all parties. As a secondary result of independent interest, our protocol gives rise to the first PoW-based ledger consensus protocol that does not rely on an external clock for the time-stamping of transactions and adjustment of the PoW difficulty.
Anonymous Random Allocation and Its Applications
Random allocation, the random assignment of data to parties, is a well-studied topic in the analysis of medical or judicial data and in the context of resource distribution. Random allocation reduces the chance of bias or corruption in the relevant applications, which makes the results more reliable: it prevents a special or pre-planned assignment of the data that would skew the assessment toward desired results. This paper provides the first formal syntax and security notions for a random allocation scheme. Based on our new security notions of anonymity, confidentiality, and data-integrity, random allocation can cover further applications such as distributed audit systems, where the confidentiality of data and the anonymity of auditors are of paramount importance. Our protocol allows the parties to stay anonymous during concurrent executions of the protocol even if they have revealed themselves in a particular execution. The revelation property makes it possible for parties to claim certain advantages/faults at the end of a protocol execution (without breaking data privacy or anonymity in other protocol executions). We instantiate our syntax and prove security based on simple cryptographic components and assumptions such as the Diffie-Hellman assumption, in the random oracle model.
Stretching Cube Attacks: Improved Methods to Recover Massive Superpolies
Cube attacks exploit the algebraic properties of symmetric ciphers by recovering a special polynomial, the superpoly, and subsequently the secret key. When the algebraic normal forms of the corresponding Boolean functions are not available, the division-property-based approach makes it possible to recover the exact superpoly in a clever way. However, the computational cost of recovering the superpoly becomes prohibitive as the number of rounds of the cipher increases. For example, the nested monomial prediction (NMP) technique proposed at ASIACRYPT 2021 is stuck at round 845 for Trivium. To alleviate the bottleneck of the NMP technique, i.e., the unsolvable model due to the excessive number of monomial trails, we shift our focus to the so-called valuable terms of a specific middle round that contribute to the superpoly. Two new techniques are introduced, namely Non-zero Bit-based Division Property (NBDP) and Core Monomial Prediction (CMP), both of which result in a simpler MILP model than that of monomial prediction (MP). It can be shown that the CMP technique offers a substantial improvement over the monomial prediction technique in terms of the computational complexity of recovering valuable terms. Combining the divide-and-conquer strategy with these two new techniques, we capture the valuable terms more effectively and thus avoid wasting computational resources on intermediate terms contributing nothing to the superpoly. As an illustration of the power of our techniques, we apply our framework to Trivium, Grain, Kreyvium and Acorn. As a result, the computational cost of earlier attacks can be significantly reduced, and the exact ANFs of the superpolies for 846-, 847- and 848-round Trivium, 192-round Grain, 895-round Kreyvium and 776-round Acorn can be recovered in practical time, even though the superpoly of 848-round Trivium contains over 500 million terms; this corresponds to 3, 1, 1 and 1 rounds more than the previous best results, respectively. Moreover, by investigating the internal properties of the Möbius transformation, we show how to perform key recovery using superpolies involving full key bits, which leads to the best key recovery attacks on the targeted ciphers.
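The Möbius transformation mentioned at the end is the standard truth-table-to-ANF transform over GF(2); a compact Python version is below (a textbook routine included for reference, with a toy input of our choosing).

def mobius_transform(tt):
    # Fast Moebius transform: truth table -> ANF coefficients over GF(2).
    # Entry i of the output is the coefficient of the monomial whose
    # variable set is given by the bits of i.
    anf, n, step = tt[:], len(tt), 1
    while step < n:
        for i in range(0, n, 2 * step):
            for j in range(i, i + step):
                anf[j + step] ^= anf[j]
        step *= 2
    return anf

# Example: f(x1, x0) = 1 + x0 (i.e. NOT x0) has truth table [1, 0, 1, 0]
# (index = 2*x1 + x0) and ANF coefficients [1, 1, 0, 0]: monomials 1 and x0.
assert mobius_transform([1, 0, 1, 0]) == [1, 1, 0, 0]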
Privacy-Preserving Authenticated Key Exchange in the Standard Model
Privacy-Preserving Authenticated Key Exchange (PPAKE) provides protection both for the session keys and for the identity information of the involved parties. In this paper, we introduce the concept of robustness into PPAKE. Robustness enables each user to confirm whether it is the intended recipient of the first-round message in the protocol. With the help of robustness, a PPAKE protocol can avoid the heavy redundant communication and computation caused by the ambiguity about the communicants in existing PPAKE schemes, especially in broadcast channels.
We propose a generic construction of robust PPAKE from key encapsulation mechanism (KEM), digital signature (SIG), message authentication code (MAC), pseudo-random generator (PRG) and symmetric encryption (SE). By instantiating KEM, MAC, PRG from the DDH assumption and SIG from the CDH assumption, we obtain a specific robust PPAKE scheme in the standard model, which enjoys forward security for session keys, explicit authentication and forward privacy for user identities. Thanks to the robustness of our PPAKE, the number of broadcast messages per run and the computational complexity per user are constant, and in particular, independent of the number of users in the system.
A summary on the FRI low degree test
This document is an informal summary of the FRI low degree test [BSBHR18a], [BSCI+20], and DEEP algebraic linking from [BSGKS20]. Based on its most recent soundness analysis [BSCI+20], we discuss parameter settings for practical security levels, how FRI is turned into a polynomial commitment scheme, and the soundness of DEEP sampling in the list decoding regime. In particular, we illustrate the DEEP method applied to proving satisfiability of algebraic intermediate representations and prove a soundness error bound which slightly improves the one in [Sta21].
Continuous Authentication in Secure Messaging
Secure messaging schemes such as the Signal protocol rely on out-of-band channels to verify the authenticity of long-running communication. In practice, however, users only rarely actually perform such out-of-band checks.
In this paper, we propose a new method for performing continuous authentication during a secure messaging session, without the need for an out-of-band channel. Leveraging the users' long-term secrets, our Authentication Steps extension guarantees authenticity as long as long-term secrets are not compromised, strengthening Signal's post-compromise security. Our mechanism further makes it possible to detect a potential compromise of long-term secrets after the fact via an out-of-band channel.
Our protocol comes with a novel, formal security definition capturing continuous authentication, a general construction for Signal-like protocols, and a security proof for the proposed instantiation. We further provide a prototype implementation which seamlessly integrates on top of the official Signal Java library, together with bandwidth and storage overhead benchmarks.
Updatable NIZKs from Non-Interactive Zaps
In ASIACRYPT 2016, Bellare, Fuchsbauer, and Scafuro studied the security of NIZK arguments under a subverted Structured Reference String (SRS) and presented some positive and negative results. In their best positive result, they showed that by defining an SRS as a tuple from a knowledge assumption in bilinear groups (e.g. $g^a, g^b, g^{ab}$), and then using a Non-Interactive (NI) zap to prove that either there is a witness for the statement $\mathsf{x}$ or one knows the trapdoor of the SRS (e.g. $a$ or $b$), one can build NIZK arguments that achieve soundness and $\textit{subversion zero-knowledge}$ (zero-knowledge without trusting a third party; Sub-ZK). In this paper, we expand their idea and use NI zaps (of knowledge) to build NIZK arguments (of knowledge) with an $\textit{updatable}$, $\textit{universal}$, and $\textit{succinct}$ SRS. To this end, we first show that their proposed sound and Sub-ZK NIZK argument can also achieve $\textit{updatable}$ soundness, a more desirable notion than plain soundness. Updatable soundness allows the verifier to update the SRS one time and bypass the need for a trusted third party. Then, we show that using a similar OR language, given a NI zap (of knowledge) and a $\textit{key-updatable}$ signature scheme, one can build NIZK arguments that achieve Sub-ZK and $\textit{updatable}$ simulation soundness (resp. $\textit{updatable}$ simulation extractability). The proposed constructions are the first NIZK arguments that have an updatable and succinct SRS and do not require a random oracle. Our instantiations show that in the resulting NIZK arguments the computational cost for the parties to verify/update the SRS is minimal, namely, a few exponentiations and pairing checks. The run times of the prover and verifier, as well as the size of the proof, are asymptotically the same as those of the underlying NI zap.
Nostradamus goes Quantum
In the Nostradamus attack, introduced by Kelsey and Kohno (Eurocrypt 2006), the adversary has to commit to a hash value y of an iterated hash function H such that, when later given a message prefix P, the adversary is able to find a suitable "suffix explanation" S with H(P||S)=y. Kelsey and Kohno show a herding attack with $2^{2n/3}$ evaluations of the compression function of H (with n bits output and state), locating the attack between preimage attacks and collision search in terms of complexity. Here we investigate the security against Nostradamus attacks by quantum adversaries. We present a quantum herding algorithm for the Nostradamus problem making approximately $\sqrt[3]{n}\cdot 2^{3n/7}$ compression function evaluations, significantly improving over the classical bound. We also prove that quantum herding attacks cannot do better than $2^{3n/7}$ evaluations for random compression functions, showing that our algorithm is (essentially) optimal. We also discuss a slightly less tight bound of roughly $2^{3n/7-s}$ for general Nostradamus attacks against random compression functions, where s is the maximal block length of the adversarially chosen suffix S.
VoteXX: A Solution to Improper Influence in Voter-Verifiable Elections
We solve a long-standing challenge to the integrity of votes cast without the supervision of a voting booth: “improper influence,” which we define as any combination of vote buying and voter coercion. Our approach allows each voter, or their trusted agent(s), to cancel their vote in a way that is unstoppable, irrevocable, and forever unattributable to the voter. In particular, our approach enhances security of online, remote, public-sector elections, for which there is a growing need and the threat of improper influence is most acute. In this extended abstract, we introduce the new approach, compare it with previous methods, and concisely summarize the protocols. In our full paper, we give detailed cryptographic protocols, show how they can be applied to several voting settings, describe our implementation in a full voting system called VoteXX, and provide UC proofs. Our system protects against the strongest adversary considered in prior related work and is suitable for widespread use in public elections.
Arithmetization of Functional Program Execution via Interaction Nets in Halo 2
We sketch a method for creating a zero-knowledge proof of knowledge for the correct execution of a program within a model of higher-order, recursive, purely functional programming by leveraging Halo 2. To our knowledge, this is the first ZKP for general-purpose computation based on purely functional computation. This is an attractive alternative to using a zero-knowledge virtual machine based on a von Neumann architecture for verified computing of functional programs, as compilation will be more direct, making it more easily verifiable and potentially more efficient.
Interaction nets are a natural setting for recursive, higher-order functional programming where all computation steps are linear and local. Interaction nets are graphs and traces for such programs are hyper-graphs. Correctness of a trace is a simple syntactic check over the structure of the trace represented as a hyper-graph. We reformulate this syntactic check as a Halo 2 circuit which is universal over all traces.
On the Field-Based Division Property: Applications to MiMC, Feistel MiMC and GMiMC (Full Version)
Recent practical applications using advanced cryptographic protocols such as multi-party computation (MPC) and zero-knowledge proofs (ZKP) have prompted a range of novel symmetric primitives described over large finite fields, characterized as arithmetization-oriented (AO) ciphers. Such designs, aiming to minimize the number of multiplications over fields, have a high risk of being vulnerable to algebraic attacks, especially the higher-order differential attack. Thus, it is important to carefully evaluate the growth of their algebraic degree. However, degree estimation for AO ciphers has been a challenge for cryptanalysts due to the lack of general and accurate methods.
In this paper, we extend the division property, a state-of-the-art framework for finding upper bounds on the algebraic degree over binary fields, to the scope of $\mathbb{F}_{2^n}$. It is a generic method to detect the algebraic degree of AO ciphers, applicable even to Feistel ciphers, which have no better bounds than the trivial exponential one. In this general division property, our idea is to evaluate whether the polynomial representation of a block cipher contains some specific monomials. With a deep investigation of the arithmetical features, we introduce propagation rules of monomials for field-based operations, which can be efficiently modeled using the bit-vector theory of SMT. A new search tool for degree estimation can then be constructed, owing to the relationship between the algebraic degree and the exponents of monomials.
We apply our new framework to some important AO ciphers, including Feistel MiMC, GMiMC, and MiMC. For Feistel MiMC, we show that the algebraic degree grows significantly more slowly than the naive exponential bound. For the first time, we present a secret-key higher-order differential distinguisher for up to 124 rounds, much better than the 83-round distinguisher for the Feistel MiMC permutation proposed at CRYPTO 2020. We also exhibit a full-round zero-sum distinguisher with a data complexity of $2^{251}$. Our method can be further extended to the general Feistel structure with more branches, and we exhibit higher-order differential distinguishers against the practical instance of GMiMC for up to 50 rounds. For MiMC in SP-networks, our results correspond to the exact algebraic degree proved by Bouvier et al. We also point out that the number of rounds in MiMC's specification is not sufficient to guarantee security against the higher-order differential attack for MiMC-like schemes with different exponents. The investigation of different exponents provides some guidance on cipher design.
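For reference, the Feistel MiMC round function analyzed above is only a few lines; the sketch below (with a toy prime and round constants of our choosing) shows the cube map over $\mathbb{F}_p$ whose degree growth the paper bounds.

def feistel_mimc(xl, xr, k, p, constants):
    # Feistel MiMC: one branch is updated by the cube of the other,
    # shifted by the round key and a round constant, all over F_p.
    for c in constants:
        xl, xr = (xr + pow((xl + k + c) % p, 3, p)) % p, xl
    return xl, xr

# Toy parameters: p = 101, where gcd(3, p - 1) = 1 so x -> x^3 is a
# permutation of F_p. Real instances use a large prime and many rounds.
state = feistel_mimc(3, 5, k=7, p=101, constants=[0, 11, 42, 8])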
Puncturable Key Wrapping and Its Applications
We introduce puncturable key wrapping (PKW), a new cryptographic primitive that supports fine-grained forward security properties in symmetric key hierarchies. We develop syntax and security definitions, along with provably secure constructions for PKW from simpler components (AEAD schemes and puncturable PRFs). We show how PKW can be applied in two distinct scenarios. First, we show how to use PKW to achieve forward security for TLS 1.3 0-RTT session resumption, even when the server's long-term key for generating session tickets gets compromised. This extends and corrects a recent work of Aviram, Gellert, and Jager (Journal of Cryptology, 2021). Second, we show how to use PKW to build a protected file storage system with file shredding, wherein a client can outsource encrypted files to a potentially malicious or corrupted cloud server whilst achieving strong forward-security guarantees, relying only on local key updates.
Notes on Reusable Garbling
Garbling is a cryptographic primitive with many applications. It is mainly used in scenarios of limited authority, such as multi-party computation (MPC), attribute-based encryption (ABE), functional encryption (FE), indistinguishability obfuscation (IO), etc. Garbling schemes before 2013 were one-time garbling schemes. Goldwasser et al. and Agrawal presented a reusable garbling scheme, which uses a symmetric encryption scheme and an FE scheme as components.
In this paper we discuss the validity and the efficiency of the reusable garbling scheme. We present the following three notes on the scheme.
(1) The reusable garbling scheme does not provide new applications; it is still a one-time garbling scheme.
(2) Even when the reusable garbling scheme is taken as a one-time garbling scheme, it is sometimes not usable. More precisely, it can only be used for Basic Scene 2 and cannot be used for Basic Scene 1. For example, it cannot be used for MPC.
(3) Even when the reusable garbling scheme is taken as a one-time garbling scheme used for Basic Scene 2, there is no evidence that its efficiency is better than that of earlier one-time garbling schemes.
Attaining GOD Beyond Honest Majority With Friends and Foes
In the classical notion of multiparty computation (MPC), an honest party learning private inputs of others, either as part of the protocol specification or due to a malicious party's unspecified messages, is not considered a potential breach. Several works in the literature exploit this seemingly minor loophole to achieve the strongest security of guaranteed output delivery via a trusted third party, which nullifies the purpose of MPC. Alon et al. (CRYPTO 2020) presented the notion of Friends and Foes ($\mathtt{FaF}$) security, which accounts for such undesired leakage towards honest parties by modelling them as semi-honest (friends) who do not collude with malicious parties (foes). With real-world applications in mind, it is more realistic to assume parties are semi-honest rather than completely honest, hence it is imperative to design efficient protocols conforming to the $\mathtt{FaF}$ security model.
Our contributions are not only motivated by the practical viewpoint, but also consider the theoretical aspects of $\mathtt{FaF}$ security. We prove the necessity of semi-honest oblivious transfer for $\mathtt{FaF}$-secure protocols with optimal resiliency. On the practical side, we present QuadSquad, a ring-based 4PC protocol, which achieves fairness and GOD in the $\mathtt{FaF}$ model, with an optimal corruption of $1$ malicious and $1$ semi-honest party. QuadSquad is, to the best of our knowledge, the first practically efficient $\mathtt{FaF}$ secure protocol with optimal resiliency. Its performance is comparable to the state-of-the-art dishonest majority protocols while improving the security guarantee from abort to fairness and GOD. Further, QuadSquad elevates the security by tackling a stronger adversarial model over the state-of-the-art honest-majority protocols, while offering a comparable performance for the input-dependent computation. We corroborate these claims by benchmarking the performance of QuadSquad. We also consider the application of liquidity matching that deals with highly sensitive financial transaction data, where $\mathtt{FaF}$ security is apt. We design a range of $\mathtt{FaF}$ secure building blocks to securely realize liquidity matching as well as other popular applications such as privacy-preserving machine learning (PPML). Inclusion of these blocks makes QuadSquad a comprehensive framework.
On the Optimal Communication Complexity of Error-Correcting Multi-Server PIR
An $\ell$-server Private Information Retrieval (PIR) scheme enables a client to retrieve a data item from a database replicated among $\ell$ servers while hiding the identity of the item. It is called $b$-error-correcting if a client can correctly compute the data item even in the presence of $b$ malicious servers. It is known that $b$-error correction is possible if and only if $\ell>2b$. In this paper, we first prove that if error correction is perfect, i.e., the client always corrects errors, the minimum communication cost of $b$-error-correcting $\ell$-server PIR is asymptotically equal to that of regular $(\ell-2b)$-server PIR as a function of the database size $n$. Secondly, we formalize a relaxed notion of statistical $b$-error-correcting PIR, which allows non-zero failure probability. We show that as a function of $n$, the minimum communication cost of statistical $b$-error-correcting $\ell$-server PIR is asymptotically equal to that of regular $(\ell-b)$-server one, which is at most that of $(\ell-2b)$-server one. Our main technical contribution is a generic construction of statistical $b$-error-correcting $\ell$-server PIR for any $\ell>2b$ from regular $(\ell-b)$-server PIR. We can therefore reduce the problem of determining the optimal communication complexity of error-correcting PIR to determining that of regular PIR. In particular, our construction instantiated with the state-of-the-art PIR schemes and the previous lower bound for single-server PIR result in a separation in terms of communication cost between perfect and statistical error correction for any $\ell>2b$.
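As background for the reduction, recall the classic 2-server XOR-based PIR that such schemes build on; a minimal Python sketch follows (the textbook protocol, not the paper's error-correcting construction).

import secrets

def pir_query(n, i):
    # Client: a uniformly random 0/1 selection vector for server 1, and
    # the same vector with position i flipped for server 2.
    q1 = [secrets.randbelow(2) for _ in range(n)]
    q2 = q1[:]
    q2[i] ^= 1
    return q1, q2

def pir_answer(db, q):
    # Server: XOR of the database bits selected by the query vector.
    a = 0
    for bit, sel in zip(db, q):
        a ^= bit & sel
    return a

# The client reconstructs db[i] as the XOR of the two answers; each
# query vector alone is uniformly random, so neither server learns i.
db = [1, 0, 1, 1, 0, 1, 0, 0]
q1, q2 = pir_query(len(db), 5)
assert pir_answer(db, q1) ^ pir_answer(db, q2) == db[5]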
Accountable Light Client Systems for PoS Blockchains
A major challenge for blockchain interoperability is having an on-chain light client protocol that is both efficient and secure. We present a protocol that provides short proofs about the state of a decentralised consensus protocol while being able to detect misbehaving parties. To do this naively, a verifier would need to maintain an updated list of all participants' public keys which makes the corresponding proofs long. In general, existing solutions either lack accountability or are not efficient. We define and design a committee key scheme with short proofs that do not include any of the individual participants' public keys in plain. Our committee key scheme, in turn, uses a custom designed SNARK which has a fast prover time. Moreover, using our committee key scheme, we define and design an accountable light client system as the main cryptographic core for building bridges between proof of stake blockchains. Finally, we implement a prototype of our custom SNARK for which we provide benchmarks.
The Pseudorandom Oracle Model and Ideal Obfuscation
We introduce a new idealized model of hash functions, which we refer to as the *pseudorandom oracle* (${\mathrm{Pr}\mathcal{O}}$) model. Intuitively, it allows us to model cryptosystems that use the code of an ideal hash function in a non-black-box way. Formally, we model hash functions via a combination of a pseudorandom function (PRF) family and an ideal oracle. A user can initialize the hash function by choosing a PRF key $k$ and mapping it to a public handle $h$ using the oracle. Given the handle $h$ and some input $x$, the oracle can also be called to evaluate the PRF at $x$ with the corresponding key $k$. A user who chooses the PRF key $k$ therefore has a complete description of the hash function and can use its code in non-black-box constructions, while an adversary, who just gets the handle $h$, only has black-box access to the hash function via the oracle.
As our main result, we show how to construct ideal obfuscation in the ${\mathrm{Pr}\mathcal{O}}$ model, starting from functional encryption (FE), which in turn can be based on well-studied polynomial hardness assumptions. In contrast, we know that ideal obfuscation cannot be instantiated in the basic random oracle model under any assumptions. We believe our result provides heuristic justification for the following: (1) most natural security goals implied by ideal obfuscation can be achieved in the real world; (2) obfuscation can be constructed from FE at polynomial security loss.
We also discuss how to interpret our result in the ${\mathrm{Pr}\mathcal{O}}$ model as a construction of ideal obfuscation using simple hardware tokens or as a way to bootstrap ideal obfuscation for PRFs to that for all functions.
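A toy Python sketch of the oracle interface may help: Init registers a PRF key under a fresh random handle, and Eval answers PRF queries given only the handle; HMAC-SHA256 stands in for the PRF family (our illustrative choice, not the paper's formalism).

import hmac, hashlib, secrets

class PseudorandomOracle:
    def __init__(self):
        self._keys = {}               # handle -> registered PRF key

    def init(self, k):
        h = secrets.token_bytes(16)   # public handle; reveals nothing of k
        self._keys[h] = k
        return h

    def eval(self, h, x):
        # Black-box PRF evaluation for anyone who only holds the handle.
        return hmac.new(self._keys[h], x, hashlib.sha256).digest()

# The user who chose k can recompute hmac.new(k, x, ...) offline and use
# the PRF's code in non-black-box constructions; an adversary holding
# only the handle h is restricted to black-box eval(h, x) queries.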
On Module Unique-SVP and NTRU
The NTRU problem can be viewed as an instance of finding a short non-zero vector in a lattice, under the promise that it contains an exceptionally short vector. Further, the lattice under consideration has the structure of a rank-2 module over the ring of integers of a number field. Let us refer to this problem as the module unique Shortest Vector Problem, or mod-uSVP for short. We exhibit two reductions that together provide evidence that the NTRU problem is not just a particular case of mod-uSVP, but representative of it from a computational perspective.
First, we reduce worst-case mod-uSVP to worst-case NTRU. For this, we rely on an oracle for id-SVP, the problem of finding short non-zero vectors in ideal lattices. Using the worst-case id-SVP to worst-case NTRU reduction from Pellet-Mary and Stehlé [ASIACRYPT'21], this shows that worst-case NTRU is equivalent to worst-case mod-uSVP.
Second, we give a random self-reduction for mod-uSVP. We put forward a distribution D over mod-uSVP instances such that solving mod-uSVP with non-negligible probability on samples from D allows one to solve mod-uSVP in the worst case. Together with the first result, this gives a reduction from worst-case mod-uSVP to an average-case version of NTRU where the NTRU instance distribution is inherited from D. This worst-case to average-case reduction requires an oracle for id-SVP.
Disorientation faults in CSIDH
We investigate a new class of fault-injection attacks against the CSIDH family of cryptographic group actions. Our disorientation attacks effectively flip the direction of some isogeny steps. We achieve this by faulting a specific subroutine, connected to the Legendre symbol or Elligator computations performed during the evaluation of the group action. These subroutines are present in almost all known CSIDH implementations. Post-processing a set of faulty samples allows us to infer constraints on the secret key. The details are implementation specific, but we show that in many cases, it is possible to recover the full secret key with only a modest number of successful fault injections and modest computational resources. We provide full details for attacking the original CSIDH proof-of-concept software as well as the CTIDH constant-time implementation. Finally, we present a set of lightweight countermeasures against the attack and discuss their security.
Leakage Certification Made Simple
Side channel evaluations benefit from sound characterisations of adversarial leakage models, which are the determining factor for attack success. Two questions are of interest: can we define and estimate a quantity that captures the ideal adversary (who knows all the distributions that are involved in an attack), and can we define and estimate a quantity that captures a concrete adversary (represented by a given leakage model)?
Existing work has led to a proliferation of custom quantities to measure both types of adversaries, which can be data intensive to estimate in the ideal case, even for discrete side channels and especially when the number of dimensions in the side channel traces grows.
In this paper, we show how to define the mutual information between carefully chosen variables of interest and how to instantiate a recently suggested mutual information estimator for practical estimation. We apply our results to real-world data sets and are the first to provide a mutual information-based characterisation of ideal and concrete adversaries utilising up to 30 data points.
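As a toy illustration of the quantity involved, the following Python snippet computes a plug-in estimate of the mutual information between a key-dependent intermediate and a noisy leakage sample; the synthetic Hamming-weight leakage model and the histogram estimator are our assumptions, not the estimator instantiated in the paper:

```python
import numpy as np

def mutual_information(x: np.ndarray, l: np.ndarray) -> float:
    """Plug-in estimate of I(X; L) for discrete samples, in bits."""
    def entropy(labels):
        _, counts = np.unique(labels, return_counts=True)
        p = counts / counts.sum()
        return -(p * np.log2(p)).sum()
    joint = np.stack([x, l], axis=1)
    _, joint_counts = np.unique(joint, axis=0, return_counts=True)
    pj = joint_counts / joint_counts.sum()
    h_joint = -(pj * np.log2(pj)).sum()
    return entropy(x) + entropy(l) - h_joint  # I(X;L) = H(X)+H(L)-H(X,L)

# Synthetic leakage: Hamming weight of a 4-bit intermediate plus discrete noise.
rng = np.random.default_rng(0)
x = rng.integers(0, 16, size=100_000)
hw = np.array([bin(v).count("1") for v in x])
l = hw + rng.integers(-1, 2, size=x.size)  # noisy leakage samples
print(f"I(X; L) ~ {mutual_information(x, l):.3f} bits")
```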
SEEK: model extraction attack against hybrid secure inference protocols
Security concerns about a machine learning model used in prediction-as-a-service include the privacy of the model, the query, and the result. Secure inference solutions based on homomorphic encryption (HE) and/or multiparty computation (MPC) have been developed to protect all of this sensitive information. One of the most efficient types of solution utilizes HE for linear layers and MPC for non-linear layers. However, for such hybrid protocols with semi-honest security, an adversary can malleate the intermediate features in the inference process and extract model information more effectively than methods that target plaintext inference services. In this paper, we propose SEEK, a general extraction method for hybrid secure inference services that output only class labels. This method can extract each layer of the target model independently and is not affected by the depth of the model. For ResNet-18, SEEK can extract a parameter with less than 50 queries on average, with average error less than $0.03\%$.
Structure Evaluation of AES-like Ciphers against Mixture Differential Cryptanalysis
In ASIACRYPT 2017, Rønjom et al. analyzed AES with the yoyo attack. Inspired by their 4-round AES distinguisher, Grassi proposed mixture differential cryptanalysis as well as a key recovery attack on 5-round AES, which was shown to outperform the classical square attack in computational complexity.
After that, Bardeh et al. combined the exchange attack with the 4-round mixture differential distinguisher of AES, leading to the first secret-key chosen-plaintext distinguisher for 6-round AES. Unlike the attack on 5-round AES, the 6-round key-recovery attack on AES has extremely large complexity, which indicates a weakness of mixture differences to a certain extent. Our work aims at evaluating the security of AES-like ciphers against mixture differential cryptanalysis. We propose a new structure called a boomerang structure and show that a differential distinguisher of a boomerang structure corresponds exactly to a mixture differential distinguisher for AES-like ciphers. Based on the boomerang structure, we show that mixture differential cryptanalysis is not suitable for AES-like ciphers with a high number of rounds. Specifically, we associate the primitive index with our framework built on the boomerang structure and give an upper bound on the length of mixture differentials with probability 1 on AES-like ciphers. It can be directly deduced from our framework that there is no such mixture differential distinguisher for 6-round AES.
To Be, or Not to Be Stateful: Post-Quantum Secure Boot using Hash-Based Signatures
While research in post-quantum cryptography (PQC) has gained significant momentum, it is only slowly adopted for real-world products. This is largely due to concerns about practicability and maturity. The secure boot process of embedded devices is one scenario where such restraints can result in fundamental security problems. In this work, we present a flexible hardware/software co-design for hash-based signature (HBS) schemes which enables the move to a post-quantum secure boot today. These signature schemes stand out due to their straightforward security proofs and are on the fast track to standardisation. In contrast to previous works, we exploit the performance intensive similarities of the stateful LMS and XMSS schemes as well as the stateless SPHINCS+ scheme. Thus, we enable designers to use a stateful or stateless scheme depending on the constraints of each individual application. To show the feasibility of our approach, we compare our results with hardware accelerated implementations of classical asymmetric algorithms. Further, we lay out the usage of different HBS schemes during the boot process. We compare different schemes, show the importance of parameter choices, and demonstrate the performance gain with different levels of hardware acceleration.
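To illustrate why HBS schemes admit such straightforward security arguments, here is a toy one-time Lamport signature in Python; it is the simplest hash-based scheme and merely a conceptual ancestor of LMS, XMSS, and SPHINCS+, not part of the design in this work.

```python
import hashlib, os

H = lambda b: hashlib.sha256(b).digest()

def keygen():
    # One secret preimage pair per message bit; the public key is their hashes.
    sk = [(os.urandom(32), os.urandom(32)) for _ in range(256)]
    pk = [(H(s0), H(s1)) for s0, s1 in sk]
    return sk, pk

def sign(sk, msg: bytes):
    digest = H(msg)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]
    return [sk[i][b] for i, b in enumerate(bits)]  # reveal one preimage per bit

def verify(pk, msg: bytes, sig) -> bool:
    digest = H(msg)
    bits = [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]
    return all(H(sig[i]) == pk[i][b] for i, b in enumerate(bits))

# Secure-boot flavour: the verifier (e.g., boot ROM) only needs a hash function.
sk, pk = keygen()
image = b"stage-2 bootloader"
assert verify(pk, image, sign(sk, image))
```

Forging a signature requires finding hash preimages; the stateful/stateless question arises because a Lamport key pair may sign only once, which tree-based schemes manage either with a state counter (LMS, XMSS) or statelessly (SPHINCS+).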
On Squaring Modulo Mersenne Numbers
During the design of a new primitive inspired by Squash we accidentally stumbled on the observation described in this note.
Let $n$ be a $k$-bit Mersenne number whose factors are unknown. Consider an $\ell$-bit secret number $x=2^{k/2}a+b$. We observe that there are parameter configurations where a chunk of the value $b^2$ is leaked even if $k<2\ell$.
This observation does not endanger any known scheme and in particular not Squash.
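The observation can be read off the expansion of $x^2$ (our rendering of the setup above). Since $n = 2^k - 1$ gives $2^k \equiv 1 \pmod{n}$,
$$x^2 = 2^k a^2 + 2^{k/2+1}ab + b^2 \;\equiv\; a^2 + 2^{k/2+1}ab + b^2 \pmod{n}.$$
When $k \ge 2\ell$, the square $x^2 < 2^k$ is essentially not reduced at all, so the case $k < 2\ell$ is the interesting one: there, for suitable parameter configurations, the terms $a^2$ and $2^{k/2+1}ab$ leave some bit positions of $b^2$ uncovered, and those bits of $b^2$ appear in the clear in $x^2 \bmod n$.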
Embedded Identity Traceable Identity-Based IPFE from Pairings and Lattices
We present the first fully collusion resistant traitor tracing (TT) scheme for identity-based inner product functional encryption (IBIPFE) that directly traces user identities through an efficient tracing procedure. We call such a scheme embedded identity traceable IBIPFE (EI-TIBIPFE), where secret keys and ciphertexts are computed for vectors u and v respectively. Additionally, each secret key is associated with a user identification tuple (i, id, gid) that specifies the user index i, the user identity id, and an identity gid of a group to which the user belongs. The ciphertexts are generated under a group identity gid′ so that decryption recovers the inner product between the vectors u and v if the user is a member of the group gid′, i.e., gid = gid′. If some users linked to a particular group team up and create a pirate decoder capable of decrypting the content of the group, the tracing algorithm extracts at least one id from the team, given black-box access to the decoder.
In prior works, such TT schemes were built for ordinary public-key encryption. The only existing TIPFE scheme, proposed by Do, Phan, and Pointcheval [CT-RSA'20], can trace user indices but not the actual identities. Moreover, their scheme achieves selective security and private traceability, meaning that only the trusted authority is able to trace user indices. In this work, we present the following TT schemes with varying parameters and levels of security:
(1) We generically construct EI-TIBIPFE assuming the existence of IBIPFE. The scheme preserves the security level of the underlying IBIPFE.
(2) We build an adaptively secure EI-TIPFE scheme from bilinear maps. Note that EI-TIPFE is a particular case of EI-TIBIPFE, which does not consider group identities.
(3) Next, we construct a selectively secure EI-TIBIPFE from bilinear maps. As an intermediate step, we design the first IBIPFE scheme based on a target group assumption in the standard model.
(4) Finally, we provide a generic construction of selectively secure EI-TIBIPFE from lattices, namely under the standard Learning With Errors assumption.
Our pairing-based schemes support public traceability, and their ciphertext size grows with $\sqrt{n}$, whereas in the IBIPFE and lattice-based ones it grows linearly with $n$. The main technical difficulty is designing such an advanced TT scheme for an IBIPFE, a primitive that goes beyond IPFE and is more suitable for real-life applications.
A Deep Neural Differential Distinguisher for ARX based Block Cipher
Over the last few years, deep learning has become one of the most trending topics in the classical cryptanalysis of block ciphers. Differential cryptanalysis is one of the primary and most potent attacks on block ciphers. Here we apply deep learning techniques to model differential cryptanalysis more easily. In this paper, we report a generic tool called NDDT, using a deep neural classifier that assists in finding differential distinguishers for symmetric block ciphers with reduced rounds. We apply this approach to the differential cryptanalysis of the ARX-based encryption schemes HIGHT, LEA, SPARX and SAND. To the best of our knowledge, this is the first deep learning-based distinguisher for the mentioned ciphers. The results show that our deep learning-based distinguishers work with high accuracy for 14-round HIGHT, 13-round LEA, 11-round SPARX and 14-round SAND128. The relationship between the Hamming weight of the input difference of a neural distinguisher and the corresponding maximum round number of the cipher has been justified through exhaustive experimentation. The lower bounds on data complexity for differential cryptanalysis have also been improved.
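As a schematic of the general pipeline such tools automate, here is a toy distinguisher in Python on a miniature ARX round function; the cipher, input difference, data sizes, and network are all invented for illustration and have nothing to do with NDDT's actual targets or architecture.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

MASK = 0xFFFF  # toy 16-bit ARX word

def toy_arx(x, k, rounds=3):
    # Miniature ARX mixing: rotate, add key, XOR-shift (illustrative only).
    for r in range(rounds):
        x = ((x << 3) | (x >> 13)) & MASK
        x = (x + (k ^ r)) & MASK
        x ^= x >> 5
    return x & MASK

def make_data(n, delta=0x0040, rng=np.random.default_rng(1)):
    # Label 1: ciphertext pairs from plaintexts with a fixed input difference.
    # Label 0: ciphertext pairs from unrelated random plaintexts.
    p0 = rng.integers(0, 1 << 16, n)
    k = rng.integers(0, 1 << 16, n)
    y = rng.integers(0, 2, n)
    p1 = np.where(y == 1, p0 ^ delta, rng.integers(0, 1 << 16, n))
    c0 = np.array([toy_arx(a, b) for a, b in zip(p0, k)])
    c1 = np.array([toy_arx(a, b) for a, b in zip(p1, k)])
    bits = (np.stack([c0, c1], 1)[:, :, None] >> np.arange(16)) & 1
    return bits.reshape(n, -1).astype(np.float32), y

X, y = make_data(20_000)
Xt, yt = make_data(5_000)
clf = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=50).fit(X, y)
print("distinguisher accuracy:", clf.score(Xt, yt))  # > 0.5 => distinguishing
```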
Multi-Authority ABE from Lattices without Random Oracles
Attribute-based encryption (ABE) extends public-key encryption to enable fine-grained access control over encrypted data. However, this comes at the cost of needing a central trusted authority to issue decryption keys. A multi-authority ABE (MA-ABE) scheme decentralizes ABE and allows anyone to serve as an authority. Existing constructions of MA-ABE only achieve security in the random oracle model.
In this work, we develop new techniques for constructing MA-ABE for the class of subset policies (which captures policies such as conjunctions and DNF formulas) whose security can be based in the plain model without random oracles. We achieve this by relying on the recently-proposed "evasive" learning with errors (LWE) assumption by Wee (EUROCRYPT 2022) and Tsabary (CRYPTO 2022).
Along the way, we also provide a modular view of the MA-ABE scheme for DNF formulas by Datta et al. (EUROCRYPT 2021) in the random oracle model. We formalize this via a general version of a related-trapdoor LWE assumption by Brakerski and Vaikuntanathan (ITCS 2022), which can in turn be reduced to the plain LWE assumption. As a corollary, we also obtain an MA-ABE scheme for subset policies from plain LWE with a polynomial modulus-to-noise ratio in the random oracle model. This improves upon the Datta et al. construction which relied on LWE with a sub-exponential modulus-to-noise ratio. Moreover, we are optimistic that the generalized related-trapdoor LWE assumption will also be useful for analyzing the security of other lattice-based constructions.
Knowledge Encryption and Its Applications to Simulatable Protocols With Low Round-Complexity
We introduce a new notion of public key encryption, knowledge encryption, whose ciphertexts can be reduced to the public key: any algorithm that can break the ciphertext indistinguishability can be used to extract the (partial) secret key. We show that knowledge encryption can be built solely from any two-round oblivious transfer with game-based security, which is known under various standard (polynomial-hardness) assumptions, such as the DDH, Quadratic ($N^{th}$) Residuosity, or LWE assumption.
We use knowledge encryption to construct the first three-round (weakly) simulatable oblivious transfer. This protocol satisfies (fully) simulatable security for the receiver, and weakly simulatable security ($(T, \epsilon)$-simulatability) for the sender in the following sense: for any polynomial $T$ and any inverse polynomial $\epsilon$, there exists an efficient simulator such that the distinguishing gap of any distinguisher of size less than $T$ is at most $\epsilon$.
Equipped with these tools, we construct a variety of fundamental cryptographic protocols with low round-complexity, assuming only the existence of two-round oblivious transfer with game-based security. These protocols include a three-round delayed-input weak zero-knowledge argument, three-round weakly secure two-party computation, three-round concurrent weak zero knowledge in the BPK model, and a two-round commitment with weak security under selective opening attack. These results improve upon the assumptions required by the previous constructions. Furthermore, all our protocols enjoy the above $(T, \epsilon)$-simulatability (stronger than distinguisher-dependent simulatability), and are quasi-polynomial time simulatable under the same (polynomial hardness) assumption.
(Augmented) Broadcast Encryption from Identity Based Encryption with Wildcard
Several broadcast encryption (BE) constructions have been proposed since Fiat and Naor introduced the concept, some achieving short parameter sizes while others achieve better security. Since 1994, many alternatives to BE have also been proposed, such as the broadcast and trace (BT) primitive, which combines broadcast encryption and traitor tracing. Among the other variants of BE, the notion of augmented BE (AugBE), introduced by Boneh and Waters in 2006, corresponds to a BE scheme with the particularity that the encryption algorithm takes an index as an additional parameter. It has been proved that an AugBE scheme that is both message hiding and index hiding can generically be used to construct a secure BT scheme. Hence, any new result for the former gives an improvement to the latter. In this paper, we first show that both BE and AugBE can be obtained from an identity-based encryption scheme with wildcard (WIBE). We also introduce the new notion of anonymous AugBE, where the set of targeted users is hidden, and prove that it implies index hiding. We then provide two different WIBE constructions. The first one has constant-size ciphertexts and is used to construct a new constant-size ciphertext BE scheme with adaptive CPA security in the standard model (under the SXDH assumption). The second WIBE provides pattern hiding, a new notion we introduce, and serves as a basis for the first anonymous AugBE scheme in the literature (and subsequently a BT scheme, since our scheme is also index hiding by nature), with adaptive security in the standard model (under the XDLin assumption).
A New Framework for Quantum Oblivious Transfer
We present a new template for building oblivious transfer from quantum information that we call the ``fixed basis'' framework. Our framework departs from prior work (e.g., Crépeau and Kilian, FOCS '88) by fixing the correct choice of measurement basis used by each player, except for some hidden trap qubits that are intentionally measured in a conjugate basis.
We instantiate this template in the quantum random oracle model (QROM) to obtain simple protocols that implement, with security against malicious adversaries:
- Non-interactive random-input bit OT in a model where parties share EPR pairs a priori.
- Two-round random-input bit OT without setup, obtained by showing that the protocol above remains secure even if the (potentially malicious) OT receiver sets up the EPR pairs.
- Three-round chosen-input string OT from BB84 states without entanglement or setup. This improves upon natural variations of the CK88 template that require at least five rounds.
Along the way, we develop technical tools that may be of independent interest. We prove that natural functions like XOR enable seedless randomness extraction from certain quantum sources of entropy. We also use idealized (i.e. extractable and equivocal) bit commitments, which we obtain by proving security of simple and efficient constructions in the QROM.
Statistical Security in Two-Party Computation Revisited
We present a new framework for building round-optimal one-sided statistically secure two-party computation (2PC) protocols in the plain model. We demonstrate that a relatively weak notion of oblivious transfer (OT), namely a three-round elementary oblivious transfer $\textsf{eOT}$ with statistical receiver privacy, along with a non-interactive commitment scheme, suffices to build a one-sided statistically secure two-party computation protocol with black-box simulation. Our framework enables the first instantiations of round-optimal one-sided statistically secure 2PC protocols from the CDH assumption and certain families of isogeny-based assumptions.
As part of our compiler, we introduce the following new one-sided statistically secure primitives in the pre-processing model that might also be of independent interest:
1. Three-round statistically sender-private random OT where only the last OT message depends on the receiver's choice bit and the sender receives random outputs generated by the protocol.
2. Four-round delayed-input statistically sender-private conditional disclosure of secrets where the first two rounds of the protocol are independent of the inputs of the parties.
The above primitives are directly constructed from $\textsf{eOT}$ and hence we obtain their instantiations from the same set of assumptions as our 2PC.
CSI-SharK: CSI-FiSh with Sharing-friendly Keys
CSI-FiSh is one of the most efficient isogeny-based signature schemes, and it is proven secure in the Quantum Random Oracle Model (QROM). However, CSI-FiSh has a bottleneck in the threshold setting: its public key needs to be generated using $k-1$ secret keys. This leads to very inefficient threshold key generation protocols and also forces the parties to store $k-1$ secret shares. We present CSI-SharK, a new variant of $\textit{CSI}$-FiSh that has more $\textit{Shar}$ing-friendly $\textit{K}$eys and is as efficient as the original scheme. This is accomplished by modifying the public key of the ID protocol, used in the original CSI-FiSh, to an equal-length Structured Public Key (SPK), generated by a $\textit{single}$ secret key, and then proving that the modified ID protocol and the resulting signature scheme remain secure in the QROM. We translate existing CSI-FiSh-based threshold signatures and Distributed Key Generation (DKG) protocols to the CSI-SharK setting. We find that DKG schemes based on CSI-SharK outperform the state-of-the-art actively secure DKG protocols from the literature by a factor of about $3$, while also strongly reducing the communication cost between the parties. We also uncover and discuss a flaw in the key generation of the actively secure CSI-FiSh-based threshold signature $\textit{Sashimi}$ that can prevent parties from signing. Finally, we discuss how (distributed) key generation and signature schemes in the isogeny setting are strongly parallelizable: using $C$ independent CPU threads, the total runtime of such schemes can essentially be reduced by a factor of $C$. As multiple threads are standard in modern CPU architectures, this parallelizability is a strong incentive for using isogeny-based (distributed) key generation and signature schemes in practical scenarios.
High-order masking of NTRU
The main protection against side-channel attacks consists of computing every function on multiple shares via the masking countermeasure. While masking was originally developed for securing block ciphers such as AES, protecting lattice-based cryptosystems is often more challenging because of the diversity of the underlying algorithms. In this paper, we introduce new gadgets for the high-order masking of the NTRU cryptosystem, with security proofs in the classical ISW probing model. We then describe the first fully masked implementation of the NTRU Key Encapsulation Mechanism submitted to NIST, including key generation. To assess the practicality of our countermeasures, we provide a concrete implementation on the ARM Cortex-M3 architecture, and finally a t-test leakage evaluation.
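For readers unfamiliar with the countermeasure, a minimal Python sketch of order-$t$ additive masking follows (generic share/refresh/linear-gadget logic only; the paper's NTRU-specific gadgets and ISW-model proofs are far more involved):

```python
import secrets

Q = 2048  # toy modulus standing in for an NTRU coefficient ring

def share(x: int, t: int):
    """Split x into t+1 additive shares mod Q: x = sum(shares) mod Q."""
    shares = [secrets.randbelow(Q) for _ in range(t)]
    shares.append((x - sum(shares)) % Q)
    return shares

def unshare(shares):
    return sum(shares) % Q

def refresh(shares):
    """Re-randomize the shares without changing the masked value."""
    fresh = share(0, len(shares) - 1)
    return [(a + b) % Q for a, b in zip(shares, fresh)]

def masked_add(xs, ys):
    # Linear operations act share-wise, with no interaction between shares.
    return [(a + b) % Q for a, b in zip(xs, ys)]

t = 3  # masking order: any t probed shares are independent of the secret
xs, ys = share(1234, t), share(567, t)
assert unshare(masked_add(refresh(xs), ys)) == (1234 + 567) % Q
```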
Strongly Anonymous Ratcheted Key Exchange
Anonymity is an (abstract) security goal that is especially important to threatened user groups. Therefore, widely deployed communication protocols implement various measures to hide different types of information (i.e., metadata) about their users. Before actually defining anonymity, we consider an attack vector that targeted user groups have reason to be concerned about: continuous, temporary exposure of their secrets. Examples of this attack vector include intentionally planted viruses on victims' devices, as well as physical access when their users are detained.
Inspired by Signal's Double-Ratchet Algorithm, Ratcheted (or Continuous) Key Exchange (RKE) is a novel class of protocols that increase confidentiality and authenticity guarantees against temporary exposure of user secrets. For this, an RKE regularly renews user secrets such that the damage due to past and future exposures is minimized; this is called Post-Compromise Security and Forward-Secrecy, respectively.
With this work, we are the first to leverage the strength of RKE for achieving strong anonymity guarantees under temporary exposure of user secrets. We extend existing definitions for RKE to capture attacks that interrelate ciphertexts, seen on the network, with secrets, exposed from users' devices. Although, at first glance, strong authenticity (and confidentiality) conflicts with strong anonymity, our anonymity definition is as strong as possible without diminishing other goals.
We build strongly anonymity-, authenticity-, and confidentiality-preserving RKE and, along the way, develop new tools with applicability beyond our specific use-case: Updatable and Randomizable Signatures as well as Updatable and Randomizable Public Key Encryption. For both new primitives, we build efficient constructions.
Adversarial Correctness and Privacy for Probabilistic Data Structures
We study the security of Probabilistic Data Structures (PDS) for handling Approximate Membership Queries (AMQ); prominent examples of AMQ-PDS are Bloom and Cuckoo filters. AMQ-PDS are increasingly being deployed in environments where adversaries can gain benefit from carefully selecting inputs, for example to increase the false positive rate of an AMQ-PDS. They are also being used in settings where the inputs are sensitive and should remain private in the face of adversaries who can access an AMQ-PDS through an API or who can learn its internal state by compromising the system running the AMQ-PDS.
We develop simulation-based security definitions that speak to correctness and privacy of AMQ-PDS. Our definitions are general and apply to a broad range of adversarial settings. We use our definitions to analyse the behaviour of both Bloom filters and insertion-only Cuckoo filters. We show that these AMQ-PDS can be provably protected through replacement or composition of hash functions with keyed pseudorandom functions in their construction. We also examine the practical impact on storage size and computation of providing secure instances of Bloom and insertion-only Cuckoo filters.
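A minimal sketch of the hardening described above, with a Bloom filter whose index derivation is replaced by a keyed PRF (here HMAC-SHA256; the parameters and API are illustrative assumptions, not the paper's formal construction):

```python
import hmac, hashlib, os

class KeyedBloomFilter:
    """Bloom filter whose index derivation uses a secret PRF key, so an
    adversary without the key cannot predict which bits an input sets."""
    def __init__(self, m=1 << 16, k=7, key=None):
        self.m, self.k = m, k
        self.key = key or os.urandom(32)
        self.bits = bytearray(m // 8)

    def _indices(self, item: bytes):
        for i in range(self.k):
            # PRF with domain separation per hash index i.
            tag = hmac.new(self.key, bytes([i]) + item, hashlib.sha256).digest()
            yield int.from_bytes(tag[:8], "big") % self.m

    def insert(self, item: bytes):
        for idx in self._indices(item):
            self.bits[idx // 8] |= 1 << (idx % 8)

    def query(self, item: bytes) -> bool:
        return all(self.bits[i // 8] >> (i % 8) & 1 for i in self._indices(item))

f = KeyedBloomFilter()
f.insert(b"alice")
assert f.query(b"alice") and not f.query(b"bob")
```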
PEA: Practical private epistasis analysis using MPC
Due to the significant drop in prices for genome sequencing over the last decade, genome databases have been growing constantly. This has enabled genome analyses such as Genome-Wide Association Studies (GWAS), which study associations between a gene and a disease and help improve medical treatment. However, GWAS fails at the analysis of complex diseases caused by non-linear gene-gene interactions, such as sporadic breast cancer or type 2 diabetes. Epistasis Analysis (EA) is a more powerful approach that complements GWAS and considers non-linear interactions between multiple parts of the genome and the environment.
Statistical genome analyses require large, well-curated genomic datasets, which are difficult to obtain. Hence, the aggregation of multiple databases is often necessary, but the sharing of genomic data raises severe privacy concerns and is subject to extensive regulations (e.g., GDPR or HIPAA), requiring further privacy protection for collaborative analyses.
Although there has been work on private GWAS, Private EA (PEA) has received little attention. In this work, we design the first secure and accurate PEA protocol, with security against passive adversaries. Our efficient PEA protocol consists of two subprotocols: (1) (optional) feature selection, which filters noisy features to reduce the input size for better efficiency, and (2) finding relevant associations. For feature selection, we design two protocols based on Secure Multi-Party Computation (MPC) for Relief-F and TuRF. For finding associations, we design an MPC protocol for Multifactor Dimensionality Reduction (MDR).
Our private MDR protocol is based on two novel, efficient building blocks, arithmetic greater-than and arithmetic swap, which may be of independent interest. This approach avoids the need for expensive conversions between sharing types in private MDR and reduces the communication by two orders of magnitude compared to a naïve design using garbled circuits. Our private MDR protocol runs in three days (extrapolated) on a practical database with 10,000 features when testing all pairs of features, i.e., about 50 million combinations.
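To illustrate the arithmetic-swap building block, here is the underlying identity evaluated in the clear (our simplification; the actual protocol evaluates the same arithmetic on additively secret-shared values inside MPC):

```python
Q = 2**32  # arithmetic ring in which the shares live

def arith_swap(a: int, b: int, c: int):
    """Conditionally swap (a, b) using a selection bit c in {0, 1}, with
    multiplications and additions only -- no branch, no comparison --
    and hence directly evaluable on arithmetic secret shares."""
    d = (c * ((b - a) % Q)) % Q   # d = b - a if c == 1 else 0
    return (a + d) % Q, (b - d) % Q

assert arith_swap(3, 9, 0) == (3, 9)
assert arith_swap(3, 9, 1) == (9, 3)
```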
On digital signatures based on group actions: QROM security and ring signatures
Group action based cryptography was formally proposed in the seminal paper of Brassard and Yung (Crypto 1990). Based on a one-way group action, there is a well-known digital signature design based on the Goldreich–Micali–Wigderson (GMW) zero-knowledge protocol for the graph isomorphism problem and the Fiat–Shamir (FS) transformation. Recently, there has been a revival of activities on group action based cryptography and the GMW-FS design, as witnessed by the schemes SeaSign (Eurocrypt 2019), CSI-FiSh (Asiacrypt 2019), LESS (Africacrypt 2020), ATFE (Eurocrypt 2022), and MEDS (Africacrypt 2023).
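For orientation, a toy Python rendering of the GMW-FS design for an abstract group action follows; we instantiate the action with exponentiation in a small prime-order subgroup purely so the sketch runs, which makes it neither post-quantum nor secure, and none of the constants reflect any of the schemes above.

```python
import hashlib, secrets

# Toy group action: Z_q acts on the subgroup generated by g in Z_p* via
# a * x = x * g^a mod p.  (Discrete-log based, for runnability only.)
p, q, g = 467, 233, 4   # 467 = 2*233 + 1; g generates the order-233 subgroup
act = lambda a, x: (x * pow(g, a, p)) % p

ROUNDS = 32  # parallel repetitions; soundness error 2^-32

def keygen():
    sk = secrets.randbelow(q)
    x0 = 16                       # public origin of the orbit
    return sk, (x0, act(sk, x0))  # pk = (x0, x1 = sk * x0)

def sign(sk, pk, msg: bytes):
    x0, x1 = pk
    s = [secrets.randbelow(q) for _ in range(ROUNDS)]
    commits = [act(si, x0) for si in s]
    h = hashlib.sha256(msg + b"".join(c.to_bytes(4, "big") for c in commits))
    challenge = int.from_bytes(h.digest(), "big")  # Fiat-Shamir challenge bits
    # Per challenge bit, reveal a path from x0 (bit 0) or from x1 (bit 1).
    resp = [s[i] if (challenge >> i) & 1 == 0 else (s[i] - sk) % q
            for i in range(ROUNDS)]
    return commits, resp

def verify(pk, msg: bytes, sig) -> bool:
    x0, x1 = pk
    commits, resp = sig
    h = hashlib.sha256(msg + b"".join(c.to_bytes(4, "big") for c in commits))
    challenge = int.from_bytes(h.digest(), "big")
    return all(act(resp[i], x0 if (challenge >> i) & 1 == 0 else x1) == commits[i]
               for i in range(ROUNDS))

sk, pk = keygen()
assert verify(pk, b"hello", sign(sk, pk, b"hello"))
```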
The contributions of this paper are two-fold: the first is about the GMW-FS design in general, and the second is on the ATFE-GMW-FS scheme.
First, we study the QROM security and ring signatures of the GMW-FS design in the group action framework. We distil properties of the underlying group action for the GMW-FS design to be secure in the quantum random oracle model (QROM). We also show that this design supports a linkable ring signature construction following the work of Beullens, Katsumata and Pintore (Asiacrypt 2020).
Second, we apply the above results to prove the security of the ATFE-GMW-FS scheme in the QROM. We then describe a linkable ring signature scheme based on it and provide an implementation of the ring signature scheme. Preliminary experiments suggest that our scheme is competitive among existing post-quantum ring signatures.
Fast and Efficient Hardware Implementation of HQC
This work presents a hardware design for a constant-time implementation of the HQC (Hamming Quasi-Cyclic) code-based key encapsulation mechanism. HQC has been selected for the fourth round of NIST's Post-Quantum Cryptography standardization process, and this work presents the first hand-optimized design of HQC key generation, encapsulation, and decapsulation written in Verilog targeting FPGAs. The three modules share a common SHAKE256 hash module to reduce area overhead. All hardware modules are parametrizable at compile time so that designs for the different security levels can easily be generated. The design currently outperforms the other hardware designs for HQC, and many designs for other fourth-round candidates of the standardization process, with one of the best time-area products as well. For the combined HighSpeed design targeting the lowest security level, we show that the HQC design can perform key generation in 0.09 ms, encapsulation in 0.13 ms, and decapsulation in 0.21 ms when synthesized for a Xilinx Artix 7 FPGA. Our work shows that, when hardware performance is compared, HQC can be a competitive alternative candidate from the fourth round of the NIST PQC competition.
Machine-Checked Proofs of Privacy Against Malicious Boards for Selene & Co
Privacy is a notoriously difficult property to achieve in complicated systems, and especially in electronic voting schemes. Moreover, electronic voting schemes are a class of systems that require very high assurance. The literature contains a number of ballot privacy definitions along with security proofs for common systems. Some machine-checked security proofs have also appeared. We define a new ballot privacy notion that captures a larger class of voting schemes. This notion improves on the state of the art by taking into account that verification in many schemes will happen, or must happen, after the tally has been published, not before, as in previous definitions.
As a case study, we give a machine-checked proof of privacy for Selene, a remote electronic voting scheme that offers an attractive mix of security properties and usability. Prior to our work, the computational privacy of Selene had never been formally verified. Finally, we also prove that MiniVoting and Belenios satisfy our definition.