All papers in 2023
Security of Hybrid Key Establishment using Concatenation
In a hybrid key establishment system, multiple independent key establishment schemes are combined in a manner that also combines their security properties. Such constructions can combine systems that are secure in different settings and achieve the combined security of all systems. For example, classical and post-quantum systems can be combined in order to secure communication against current threats as well as future quantum adversaries. This paper describes machine-checked proofs of security for a commonly-used hybrid key establishment system that concatenates the secrets produced by other key establishment systems. Practical interpretation of these results is also provided in order to guide the use of this system in applications and standards.
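For intuition, here is a minimal sketch of such a concatenation combiner (our illustration, not necessarily the exact construction analyzed in the paper; the HMAC-based derivation step and all names are assumptions):

```python
import hashlib
import hmac

def combine_shared_secrets(classical_ss: bytes, pq_ss: bytes,
                           label: bytes = b"hybrid-combiner") -> bytes:
    # Concatenate the secrets produced by the two independent key
    # establishment schemes, then derive the final key with a PRF.
    # If either input secret stays hidden from the adversary, the
    # derived key remains pseudorandom.
    return hmac.new(label, classical_ss + pq_ss, hashlib.sha256).digest()
```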
Defining and Controlling Information Leakage in US Equities Trading
We present a new framework for defining information leakage in the setting of US equities trading, and construct methods for deriving trading schedules that stay within specified information leakage bounds. Our approach treats the stock market as an interactive protocol performed in the presence of an adversary, and draws inspiration from the related disciplines of differential privacy as well as quantitative information flow. We apply a linear programming solver using examples from historical trade and quote (TAQ) data for US equities and describe how this framework can inform actual algorithmic trading strategies.
A Note on Non-Interactive Zero-Knowledge from CDH
We build non-interactive zero-knowledge (NIZK) and ZAP arguments for all $\mathsf{NP}$ where soundness holds for infinitely-many security parameters, and against uniform adversaries, assuming the subexponential hardness of the Computational Diffie-Hellman (CDH) assumption. We additionally prove the existence of NIZK arguments with these same properties assuming the polynomial hardness of both CDH and the Learning Parity with Noise (LPN) assumption. In both cases, the CDH assumption does not require a group equipped with a pairing.
Infinitely-often uniform security is a standard byproduct of commonly used non-black-box techniques that build on disjunction arguments on the (in)security of some primitive. In the course of proving our results, we develop a new variant of this non-black-box technique that yields improved guarantees: we obtain explicit constructions (previous works generally only obtained existential results) where security holds for a relatively dense set of security parameters (as opposed to an arbitrary infinite set of security parameters). We demonstrate that our technique can have applications beyond our main results.
Revisiting the Nova Proof System on a Cycle of Curves
Nova is an efficient recursive proof system built from an elegant folding scheme for (relaxed) R1CS statements. The original Nova paper (CRYPTO'22) presented Nova using a single elliptic curve group of order $p$. However, for improved efficiency, the implementation of Nova alters the scheme to use a 2-cycle of elliptic curves. This altered scheme is only described in the code and has not been proven secure. In this work, we point out a soundness vulnerability in the original implementation of the 2-cycle Nova system. To demonstrate this vulnerability, we construct a convincing Nova proof for the correct evaluation of $2^{75}$ rounds of the Minroot VDF in only 1.46 seconds. We then present a modification of the 2-cycle Nova system and formally prove its security. The modified system also happens to be more efficient than the original implementation. In particular, the modification eliminates an R1CS instance-witness pair from the recursive proof. The implementation of Nova has now been updated to use our optimized and secure system. We also show that Nova's IVC proofs are malleable and discuss several mitigations.
SALSA VERDE: a machine learning attack on Learning with Errors with sparse small secrets
Learning with Errors (LWE) is a hard math problem used in post-quantum cryptography. Homomorphic Encryption (HE) schemes rely on the hardness of the LWE problem for their security, and two LWE-based cryptosystems were recently standardized by NIST for digital signatures and key exchange (KEM). Thus, it is critical to continue assessing the security of LWE and specific parameter choices. For example, HE uses secrets with small entries, and the HE community has considered standardizing small sparse secrets to improve efficiency and functionality. However, prior work, SALSA and PICANTE, showed that ML attacks can recover sparse binary secrets. Building on these, we propose VERDE, an improved ML attack that can recover sparse binary, ternary, and narrow Gaussian secrets. Using improved preprocessing and secret recovery techniques, VERDE can attack LWE with larger dimensions ($n=512$) and smaller moduli ($\log_2 q=12$ for $n=256$), using less time and power. We propose novel architectures for scaling. Finally, we develop a theory that explains the success of ML LWE attacks.
SoK: Data Sovereignty
Society appears to be on the verge of recognizing the need for control over sensitive data in modern web applications. Recently, many systems claim to give control to individuals, promising the preeminent goal of data sovereignty. However, despite recent attention, research and industry efforts are fragmented and lack a holistic system overview. In this paper, we provide the first transecting systematization of data sovereignty by drawing from a dispersed body of knowledge. We clarify the field by identifying its three main areas: (i) decentralized identity, (ii) decentralized access control and (iii) policy-compliant decentralized computation. We find that literature lacks a cohesive set of formal definitions. Each area is considered in isolation, and priorities in industry and academia are not aligned due to a lack of clarity regarding user control. To solve this issue, we propose formal definitions for each sub-area. By highlighting that data sovereignty transcends the domain of decentralized identity, we aim to guide future works to embrace a broader perspective on user control. In each section, we augment our definition with security and privacy properties, discuss the state of the art and proceed to identify open challenges. We conclude by highlighting synergies between areas, emphasizing the real-world benefit obtained by further developing data sovereign systems.
eLIMInate: a Leakage-focused ISE for Masked Implementation
Even given a state-of-the-art masking scheme, a masked software implementation of some cryptographic functionality can pose significant challenges stemming, e.g., from simultaneous requirements for efficiency and security. In this paper we design an Instruction Set Extension (ISE) to address a specific element of this challenge, namely the elimination of leakage stemming from architectural and micro-architectural overwriting. Conceptually, the ISE allows a leakage-focused behavioural hint to be communicated from software to the micro-architecture: using it informs how computation is realised when applied to masking-specific data, which then offers an opportunity to eliminate associated leakage. We develop prototype, latency- and area-optimised implementations of the ISE design based on the RISC-V Ibex core. Using them, we demonstrate that use of the ISE can close the gap between assumptions about and actual behaviour of a device, and thereby deliver an improved security guarantee.
Post-Quantum Secure Over-the-Air Update of Automotive Systems
With the announcement of the first winners of the NIST Post-Quantum Cryptography (PQC) competition in 2022, the industry now has a confirmed foundation to revisit established cryptographic algorithms applied in automotive use cases and replace them with quantum-safe alternatives. In this paper, we investigate the application of the NIST competition winner CRYSTALS-Dilithium to protect the integrity and authenticity of over-the-air update packages. We show how this post-quantum secure digital signature algorithm can be integrated into the AUTOSAR Adaptive Platform Update and Configuration Management framework, and evaluate our approach practically using the NXP S32G vehicle network processor. We discuss two implementation variants with respect to performance and resilience against relevant attacks, and conclude that PQC has little impact on the update process as a whole.
Lightweight Authentication of Web Data via Garble-Then-Prove
Transport Layer Security (TLS) establishes an authenticated and confidential channel to deliver data for almost all Internet applications. A recent work (Zhang et al., CCS'20) proposed a protocol to prove the TLS payload to a third party, without any modification of TLS servers, while ensuring the privacy and originality of the data in the presence of malicious adversaries. However, it required maliciously secure Two-Party Computation (2PC) for generic circuits, leading to significant computational and communication overhead.
This paper proposes the garble-then-prove technique to achieve the same security requirement without using any heavy mechanism like generic malicious 2PC. Our end-to-end implementation shows 14$\times$ improvement in communication and an order of magnitude improvement in computation over the state-of-the-art protocol. We also show worldwide performance when using our protocol to authenticate payload data from Coinbase and Twitter APIs. Finally, we propose an efficient gadget to privately convert the above authenticated TLS payload to additively homomorphic commitments so that the properties of the payload can be proven efficiently using zkSNARKs.
An invariant of the round function of QARMAv2-64
This note shows that there exists a nontrivial invariant for the unkeyed round function of QARMAv2-64. It is invariant under translation by a set of $2^{32}$ constants. The invariant does not extend over all rounds of QARMAv2-64 and probably does not lead to full-round attacks. Nevertheless, it might be of interest as it can be expected to give meaningful weak-key attacks on round-reduced instances when combined with other techniques such as integral cryptanalysis.
Access structures induced by polymatroids with extreme rank function
In this paper we consider multipartite access structures obtained from polymatroids with extreme rank function. They are proved to be ideal and partially hierarchical. It turns out that the family of structures induced by polymatroids with minimal rank function is a natural generalization of the class of disjunctive access structures considered by Simmons and the class of conjunctive access structures introduced by Tassa. The results are based on the connections between multipartite access structures and polymatroids discovered by Farràs, Martí-Farré and Padró.
Testudo: Linear Time Prover SNARKs with Constant Size Proofs and Square Root Size Universal Setup
We present $\mathsf{Testudo}$, a new FFT-less SNARK with a near linear-time prover, constant-time verifier, constant-size proofs and a square-root-size universal setup. $\mathsf{Testudo}$ is based on a variant of Spartan~\cite{C:Setty20}—and hence does not require FFTs—as well as a new, fast multivariate polynomial commitment scheme (PCS) with a square-root-sized trusted setup that is derived from PST (TCC 2013) and IPPs (Asiacrypt 2021). To achieve constant-size SNARK proofs in $\mathsf{Testudo}$ we then combine our PCS opening proofs recursively with a Groth16 SNARK. We also evaluate our construction and its building blocks: to compute a PCS opening proof for a polynomial of size $2^{25}$, our new scheme's opening procedure achieves a 110$\times$ speed-up compared to PST and 3$\times$ compared to Gemini (Eurocrypt 2022), since opening computations are heavily parallelizable and operate on smaller polynomials. Furthermore, a $\mathsf{Testudo}$ proof for a witness of size $2^{30}$ ($\approx 1$ GB) requires a setup of size only $2^{15}$ ($\approx$ tens of kilobytes). Finally, we show that a $\mathsf{Testudo}$ variant for proving data-parallel computations is almost 10$\times$ faster at verifying $2^{10}$ Poseidon-based Merkle tree opening proofs than the regular version.
Beyond-Full-Round Integral Distinguisher of NIST Lightweight Cryptography Competition Finalist TinyJAMBU
TinyJAMBU is one of the ten finalists of the NIST lightweight cryptography competition, announced in March 2021. It proposes a lightweight authenticated encryption scheme based on a lightweight 128-bit keyed permutation. TinyJAMBU supports three key lengths, 128, 192, and 256 bits, denoted by TinyJAMBU-128, TinyJAMBU-192, and TinyJAMBU-256, respectively. Both the scheme and the permutation have been well studied by the designers and third parties. The most relevant work to ours is the full-round zero-sum distinguisher under the known-key setting assumption published at Indocrypt 2022. In this work, we show that even without the known-key setting assumption, there are integral distinguishers not only for the full-round versions of the permutations of TinyJAMBU-128 and TinyJAMBU-192 but also for round-increased versions of them, up to 1273 rounds.
Randomness Recoverable Secret Sharing Schemes
It is well-known that randomness is essential for secure cryptography. The randomness used in cryptographic primitives is not necessarily recoverable even by the party who can, e.g., decrypt or recover the underlying secret/message. Several cryptographic primitives that support randomness recovery have turned out useful in various applications. In this paper, we study randomness recoverable secret sharing schemes (RR-SSS), in both information-theoretic and computational settings and provide two results. First, we show that while every access structure admits a perfect RR-SSS, there are very simple access structures (e.g., in monotone $\mathsf{AC}^0$) that do not admit efficient perfect (or even statistical) RR-SSS. Second, we show that the existence of efficient computational RR-SSS for certain access structures in monotone $\mathsf{AC}^0$ implies the existence of one-way functions. This stands in sharp contrast to (non-RR) SSS schemes for which no such results are known.
RR-SSS plays a key role in making advanced attribute-based encryption schemes randomness recoverable, which in turn have applications in the context of designated-verifier non-interactive zero knowledge.
Faster TFHE Bootstrapping with Block Binary Keys
Fully Homomorphic Encryption over the Torus (TFHE) is a homomorphic encryption scheme which supports efficient Boolean operations over encrypted bits. TFHE has a unique feature in that the evaluation of each binary gate is followed by a bootstrapping procedure to refresh the noise of a ciphertext. In particular, this gate bootstrapping involves two algorithms called the blind rotation and key-switching.
In this work, we introduce several optimization techniques for the TFHE bootstrapping. We first define a new key distribution, called the block binary distribution, where the secret key can be expressed as a concatenation of several vectors of Hamming weight at most one. We analyze the hardness of (Ring) LWE with a block binary secret and provide candidate parameter sets which are secure against the best-known attacks. Then, we use the block key structure to simplify the inner working of blind rotation and reduce its complexity. We also modify the RLWE key generation and the gadget decomposition method to improve the performance of the key-switching algorithm in terms of complexity and noise growth.
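As an illustration of the block binary distribution defined above, here is a small Python sampler (the parameter names and the uniform choice within each block are our assumptions, not necessarily the paper's exact distribution):

```python
import random

def sample_block_binary_secret(num_blocks: int, block_size: int) -> list[int]:
    # The secret is a concatenation of `num_blocks` vectors, each of
    # length `block_size` with Hamming weight at most one (the all-zero
    # block is allowed, encoded below by index == block_size).
    secret = []
    for _ in range(num_blocks):
        block = [0] * block_size
        pos = random.randrange(block_size + 1)
        if pos < block_size:
            block[pos] = 1
        secret.extend(block)
    return secret
```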
Finally, we use the TFHE library to implement our algorithms and report benchmarks.
Our experiments show that the execution time of TFHE bootstrapping is reduced from 10.5 ms to 6.4 ms at the same security level, and that the size of the bootstrapping key decreases from 109 MB to 60 MB.
BASS: Boolean Automorphisms Signature Scheme
We offer a digital signature scheme using Boolean automorphisms of a multivariate polynomial algebra over the integers. The verification part of this scheme is based on approximating the number of zeros of a multivariate Boolean function.
Speculative Denial-of-Service Attacks in Ethereum
Transaction fees compensate actors for resources expended on transactions and can only be charged from transactions included in blocks. However, the expressiveness of Turing-complete contracts implies that verifying whether transactions can be included requires executing them on the current blockchain state.
In this work, we show that adversaries can craft malicious transactions that decouple the work imposed on blockchain actors from the compensation offered in return. We introduce three attacks: (i) ConditionalExhaust, a conditional resource-exhaustion attack against blockchain actors. (ii) MemPurge, an attack for evicting transactions from actors' mempools. (iii) GhostTX, an attack on the reputation system used in Ethereum's proposer-builder separation ecosystem.
We evaluate our attacks on an Ethereum testnet and find that by combining ConditionalExhaust and MemPurge, adversaries can simultaneously burden victims' computational resources and clog their mempools to the point where victims are unable to include transactions in blocks. Thus, victims create empty blocks, thereby hurting the system's liveness. The attack's expected cost is $376, but becomes cheaper if adversaries are validators. For other attackers, costs decrease if censorship is prevalent in the network.
ConditionalExhaust and MemPurge are made possible by inherent features of Turing-complete blockchains, and potential mitigations may result in reducing a ledger's scalability.
Succinct Computational Secret Sharing
A secret-sharing scheme enables a dealer to share a secret $s$ among $n$ parties such that only authorized subsets of parties, specified by a monotone access structure $f:\{0,1\}^n\to\{0,1\}$, can reconstruct $s$ from their shares. Other subsets of parties learn nothing about $s$.
The question of minimizing the (largest) share size for a given $f$ has been the subject of a large body of work. However, in most existing constructions for general access structures $f$, the share size is not much smaller than the size of some natural computational representation of $f$, a fact that has often been referred to as the ``representation size barrier'' in secret sharing.
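To see where this barrier comes from, consider the folklore construction (in the spirit of Ito-Saito-Nishizeki) that additively shares $s$ once per minimal authorized set. The sketch below is our own illustration; its total share size visibly scales with the monotone DNF representation of $f$:

```python
import secrets as rnd

MOD = 2**128  # shares live in Z_MOD

def share(s: int, n: int, minimal_sets: list[set[int]]):
    # One fresh additive sharing of s per minimal authorized set:
    # the total share size grows with the number and size of those
    # sets, i.e., with the DNF representation of f.
    shares = {i: [] for i in range(n)}
    for aset in minimal_sets:
        members = sorted(aset)
        parts = [rnd.randbelow(MOD) for _ in members[:-1]]
        parts.append((s - sum(parts)) % MOD)
        for i, p in zip(members, parts):
            shares[i].append((tuple(members), p))
    return shares

def reconstruct(aset: set[int], shares) -> int:
    # A minimal authorized set sums its parts for its own sharing.
    key = tuple(sorted(aset))
    return sum(next(p for k, p in shares[i] if k == key)
               for i in aset) % MOD
```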
In this work, we initiate a systematic study of succinct computational secret sharing (SCSS), where the secrecy requirement is computational and the goal is to substantially beat the representation size barrier. We obtain the following main results.
(1) SCSS via Projective PRGs. We introduce the notion of a *projective PRG*, a pseudorandom generator for which any subset of the output bits can be revealed while keeping the other output bits hidden, using a *short* projective seed. We construct projective PRGs with different levels of succinctness under a variety of computational assumptions, and apply them towards constructing SCSS for graph access structures, monotone CNF formulas, and (less succinctly) useful subclasses of monotone circuits and branching programs. Most notably, under the sub-exponential RSA assumption, we obtain an SCSS scheme that, given an arbitrary access structure $f$, represented by a truth table of size $N=2^n$, produces shares of size $\mathrm{polylog}(N)=\mathrm{poly}(n)$ in time $\tilde O(N)$. For comparison, the share size of the best known information-theoretic schemes is $O(N^{0.58})$.
(2) SCSS via One-way Functions. Under the (minimal) assumption that one-way functions exist, we obtain a near-quadratic separation between the total share size of computational and information-theoretic secret sharing. This is the strongest separation one can hope for, given the state of the art in secret sharing lower bounds. We also construct SCSS schemes from one-way functions for useful classes of access structures, including forbidden graphs and monotone DNF formulas. This leads to constructions of fully-decomposable conditional disclosure of secrets (also known as privacy-free garbled circuits) for general functions, represented by a truth table of size $N=2^n$, with share size $\mathrm{polylog}(N)$ and computation time $\tilde O(N)$, assuming sub-exponentially secure one-way functions.
Zombies and Ghosts: Optimal Byzantine Agreement in the Presence of Omission Faults
Studying the feasibility of Byzantine Agreement (BA) in realistic fault models is an important question in the area of distributed computing and cryptography. In this work, we revisit the mixed fault model with Byzantine (malicious) faults and omission faults put forth by Hauser, Maurer, and Zikas (TCC 2009), who showed that BA (and MPC) is possible with $t$ Byzantine faults, $s$ send faults (whose outgoing messages may be dropped) and $r$ receive faults (whose incoming messages may be lost) if $n>3t+r+s$. We generalize their techniques and results by showing that BA is possible if $n>2t+r+s$, given the availability of a cryptographic setup. Our protocol is the first to match the recent lower bound of Eldefrawy, Loss, and Terner (ACNS 2022) for this setting.
Towards Generic MPC Compilers via Variable Instruction Set Architectures (VISAs)
In MPC, we usually represent programs as circuits. This is a poor fit for programs that use complex control flow, as it is costly to compile control flow to circuits. This motivated prior work to emulate CPUs inside MPC. Emulated CPUs can run complex programs, but they introduce high overhead due to the need to evaluate not just the program, but also the machinery of the CPU, including fetching, decoding, and executing instructions, accessing RAM, etc.
Thus, both circuits and CPU emulation seem a poor fit for general MPC. The former cannot scale to arbitrary programs; the latter incurs high per-operation overhead.
We propose variable instruction set architectures (VISAs), an approach that inherits the best features of both circuits and CPU emulation. Unlike a CPU, a VISA machine repeatedly executes entire program fragments, not individual instructions. By considering larger building blocks, we avoid most of the machinery associated with CPU emulation: we directly handle each fragment as a circuit.
We instantiated a VISA machine via garbled circuits (GC), yielding constant-round 2PC for arbitrary assembly programs. We use improved branching (Stacked Garbling, Heath and Kolesnikov, Crypto 2020) and recent Garbled RAM (GRAM) (Heath et al., Eurocrypt 2022). Composing these securely and efficiently is intricate, and is one of our main contributions.
We implemented our approach and ran it on common programs, including Dijkstra's and Knuth-Morris-Pratt. Our 2PC VISA machine executes assembly instructions at $300$Hz to $4000$Hz, depending on the target program. We significantly outperform the state-of-the-art CPU-based approach (Wang et al., ESORICS 2016, whose tool we re-benchmarked on our setup). We run in constant rounds, use $6\times$ less bandwidth, and run more than $40\times$ faster on a low-latency network. With $50$ms (resp. $100$ms) latency, we are $898\times$ (resp. $1585\times$) faster on the same setup.
While our focus is MPC, the VISA model also benefits CPU-emulation-based Zero-Knowledge proof compilers, such as ZEE and EZEE (Heath et al., Oakland'21 and Yang et al., EuroS&P'22).
Limits on Adaptive Security for Attribute-Based Encryption
This work addresses the long quest for proving full (adaptive) security for attribute-based encryption (ABE). We show that in order to prove full security in a black-box manner, the scheme must be ``irregular'' in the sense that it is impossible to ``validate'' secret keys to ascertain consistent decryption of ciphertexts. This extends a result of Lewko and Waters (Eurocrypt 2014) that was only applicable to straight-line proofs (without rewinding). Our work, therefore, establishes that it is impossible to circumvent the irregularity property using creative proof techniques, so long as the adversary is used in a black-box manner.
As a consequence, our work provides an explanation as to why some lattice-based ABE schemes cannot be proven fully secure, even though no known adaptive attacks exist.
Latency-First Smart Contract: Overclock the Blockchain for a while
Blockchain systems can become overwhelmed by a large number of transactions, leading to increased latency. As a consequence, latency-sensitive users must bid against each other and pay higher fees to ensure that their transactions are processed with priority. However, for most of a blockchain system's time (78% in Ethereum), there is still a lot of unused computational power, with few users sending transactions. To address this issue and reduce latency for users, we propose the latency-first smart contract model in this paper, which optimistically accepts committed transactions. This allows users to submit a commitment during times of high demand, and then complete the rest of the work at a lower priority. From the perspective of the blockchain, this temporarily “overclocks” the system. We have developed a programming tool for our model, and our experiments show that the proposed latency-first smart contract model can greatly reduce latency during periods of high demand.
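A hash-based commit-then-reveal flow conveys the idea (our sketch; the paper's actual contract model and interfaces may differ):

```python
import hashlib
import os

def commit(tx: bytes) -> tuple[str, bytes]:
    # During congestion, only this short digest needs to be accepted;
    # the full transaction is revealed and executed later, at lower priority.
    nonce = os.urandom(32)
    return hashlib.sha256(nonce + tx).hexdigest(), nonce

def verify_reveal(digest: str, tx: bytes, nonce: bytes) -> bool:
    return hashlib.sha256(nonce + tx).hexdigest() == digest

# Example: commit now, reveal when the chain is idle.
digest, nonce = commit(b"transfer(alice, bob, 10)")
assert verify_reveal(digest, b"transfer(alice, bob, 10)", nonce)
```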
A new approach based on quadratic forms to attack the McEliece cryptosystem
We introduce a novel algebraic approach for attacking the McEliece cryptosystem, which is currently in the fourth round of the NIST competition. The contributions of the article are twofold.
(1) We present a new distinguisher on alternant and Goppa codes working in a much broader range of parameters than \cite{FGOPT11}.
(2) With this approach we also provide a polynomial-time key recovery attack on alternant codes which are distinguishable with the distinguisher \cite{FGOPT11}.
These results are obtained by introducing a subspace of matrices representing quadratic forms. Those are associated with quadratic relations for the component-wise product in the dual of the Goppa (or alternant) code of the cryptosystem. It turns out that this subspace of matrices contains matrices of unusually small rank in the case of alternant or Goppa codes ($2$ or $3$ depending on the field characteristic) revealing the secret polynomial structure of the code.
MinRank solvers can then be used to recover the secret key of the scheme. We devise a dedicated algebraic modeling in characteristic $2$ where the Gröbner basis techniques to solve it can be analyzed.
This computation behaves differently when applied to the matrix space associated with a random code rather than with a Goppa or an alternant code, which yields a distinguisher for the latter code families. Contrary to the distinguisher proposed in \cite{FGOPT11}, which works only in a tiny parameter regime, ours works for code rates above $\frac{2}{3}$ and applies to most instantiations of the McEliece cryptosystem in the literature. It coincides with the one of \cite{FGOPT11} when the latter can be applied (and is therefore of polynomial complexity in this case). However, its complexity increases significantly when \cite{FGOPT11} no longer applies, but stays subexponential as long as the co-dimension of the code is sublinear in the length (with an asymptotic exponent below those of all known key recovery or message attacks). For the concrete parameters of the McEliece NIST submission \cite{ABCCGLMMMNPPPSSSTW20}, its complexity is far too high to threaten the cryptosystem, but it is smaller than that of known key recovery attacks for most parameters of the submission. This subspace of quadratic forms can also be used in a different manner to give a polynomial-time attack on the McEliece cryptosystem based on generic alternant codes or Goppa codes, provided that these codes are distinguishable by the method of \cite{FGOPT11}; in the Goppa case, we need the additional assumption that the degree is less than $q-1$, where $q$ is the alphabet size of the code.
Generalized word-oriented feedback shift registers
Word-oriented feedback shift registers (WFSRs) possess very attractive properties, as they take advantage of modern word-based processors and thus increase throughput. We provide a generalized form of the feedback function of a WFSR along with some special cases. Then, a necessary and sufficient condition for a nonsingular WFSR is discussed. We study different word-based cascade systems and derive the period of the sequences produced by these cascade systems. We provide experimental results on the avalanche property of the states of cascade systems and statistical results on the sequences produced by them. Finally, we present a cryptanalytic attack on cascade systems and suggest a countermeasure.
Compact Circuits for Efficient Möbius Transform
The Möbius transform is a linear circuit used to compute the evaluations of a Boolean function over all points of its input domain. The operation is very useful for finding the solutions of a system of polynomial equations over GF(2), for obvious reasons. However the operation, although linear, needs an exponential number of logic operations (around $n\cdot 2^{n-1}$ bit XORs) for an $n$-variable Boolean function. As such, the only known hardware circuit to efficiently compute the Möbius transform requires silicon area that is exponential in $n$. For Boolean functions whose algebraic degree is bounded by some parameter $d$, recursive definitions of the Möbius transform exist that require only $O(n^{d+1})$ space in software. However, converting the mathematical definition of this space-efficient algorithm into a hardware architecture is a non-trivial task, primarily because the recursive calls notionally lead to a depth-first search in a transition graph that requires a context switch at each call, for which a straightforward mapping to hardware is difficult. In this paper we look to overcome these very challenges in an engineering sense. We propose a space-efficient sequential hardware circuit for the Möbius transform that requires only polynomial circuit area (i.e., $O(n^{d+1})$) provided the algebraic degree of the Boolean function is limited to $d$. We show how this circuit can be used as a component to efficiently solve polynomial equations of degree at most $d$ by using fast exhaustive search. We propose three different circuit architectures for this, each of which uses the Möbius transform circuit as a core component. We show that, asymptotically, all the solutions of a system of $m$ polynomials in $n$ unknowns and algebraic degree $d$ over GF(2) can be found using a circuit of silicon area proportional to $m\cdot n^{d+1}$ and circuit depth proportional to $2\cdot\log_2(n-d)$.

In the second part of the paper we introduce a fourth hardware solver that additionally aims to achieve energy efficiency. The main idea is to reduce the solution space to a small enough size by parallel application of Möbius transform circuits over the first few equations of the system. This is done so that one can check individually, with lower power consumption, whether the vectors of this reduced solution space satisfy each of the remaining equations of the system. The new circuit also has area bounded by $m\cdot n^{d+1}$, with circuit depth proportional to $d\cdot \log_2 n$. We also show that further optimizations with respect to energy consumption may be obtained by using depth-bounded Möbius circuits that exponentially decrease run time at the cost of additional logic area and depth.
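For reference, the classical butterfly algorithm that such circuits implement, as a short Python sketch over a truth table of length $2^n$; it uses exactly the $n\cdot 2^{n-1}$ XORs quoted above:

```python
def mobius_transform(f: list[int]) -> list[int]:
    # In-place Moebius transform: maps the truth table of a Boolean
    # function to its algebraic-normal-form coefficients (and back,
    # since over GF(2) the transform is an involution).
    n = len(f).bit_length() - 1  # len(f) == 2**n
    step = 1
    while step < len(f):
        for base in range(0, len(f), 2 * step):
            for j in range(base, base + step):
                f[j + step] ^= f[j]  # n * 2**(n-1) XORs in total
        step *= 2
    return f
```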
Concrete Security from Worst-Case to Average-Case Lattice Reductions
A famous reduction by Regev shows that random instances of the Learning With Errors (LWE) problem are asymptotically at least as hard as a worst-case lattice problem. As such, by assuming that standard lattice problems are hard to solve, the asymptotic security of cryptosystems based on the LWE problem is guaranteed. However, it has not been clear to what extent, if any, this reduction provides support for the security of present concrete parametrizations.
In this work we therefore use Regev's reduction to parametrize a cryptosystem, providing a reference as to what parameters are required to actually claim security from this reduction. This requires us to account for the concrete performance of this reduction, allowing the first parametrization of a cryptosystem that is provably secure based only on a conservative hardness estimate for a standard lattice problem. Even though we attempt to optimize the reduction, our system still requires significantly larger parameters than typical LWE-based cryptosystems, highlighting the significant gap between parameters that are used in practice and those for which worst-case reductions actually are applicable.
Compressing Encrypted Data Over Small Fields
A recent work of Fleischhacker, Larsen, and Simkin (Eurocrypt 2023) shows how to efficiently compress encrypted sparse vectors.
Subsequently, Fleischhacker, Larsen, Obremski, and Simkin (ePrint 2023) improve upon their work and provide more efficient constructions solving the same problem.
Being able to efficiently compress such vectors is very useful in a variety of applications, such as private information retrieval, searchable encryption, and oblivious message retrieval.
Concretely, assume one is given a vector $(m_1, \dots, m_n)$ with at most $t$ distinct indices $i \in [n]$ such that $m_i \neq 0$, and assume $(\mathsf{Gen}, \mathsf{Enc}, \mathsf{Dec})$ is an additively homomorphic encryption scheme.
The authors show that one can compress $(\mathsf{Enc}(k, m_1), \dots, \mathsf{Enc}(k, m_n))$, without knowing the secret key $k$, into a digest whose size depends on the upper bound $t$ on the sparsity, but not on the total vector length $n$.
Unfortunately, all existing constructions either only work for encryption schemes that have sufficiently large plaintext spaces or require additional encrypted auxiliary information about the plaintext vector.
In this work, we show how to generalize the results of Fleischhacker, Larsen, and Simkin to encryption schemes with arbitrarily small plaintext spaces.
Our construction follows the same general ideas laid out in previous works but modifies them in a way that allows compressing the encrypted vectors correctly with high probability, independently of the size of the plaintext space.
One-Way Functions vs. TFNP: Simpler and Improved
Simon (1998) proved that it is impossible to construct collision-resistant hash functions from one-way functions using a black-box reduction. It is conjectured more generally that one-way functions do not imply, via a black-box reduction, the hardness of any total NP search problem (collision-resistant hash functions being just one such example). We make progress towards this conjecture by ruling out a large class of “single-query” reductions.
In particular, we improve over the prior work of Hubáček et al. (2020) in two ways: our result is established via a novel simpler combinatorial technique and applies to a broader class of semi black-box reductions.
BALoo: First and Efficient Countermeasure dedicated to Persistent Fault Attacks
Persistent fault analysis is a novel and efficient cryptanalysis method. Persistent fault attacks take advantage of a persistent fault injected into a non-volatile memory, which then remains on the device until it is rebooted. Contrary to classical physical fault injection, where differential analysis can be performed, persistent fault analysis requires new analyses and dedicated countermeasures. Persistent fault analysis relies on a persistent fault injected into the S-box such that the bijective character of the permutation function is lost. In particular, the analysis uses the resulting non-uniform distribution of the S-box values: one of the possible S-box values never appears, while another appears twice.
In this paper, we present the first dedicated protection to prevent persistent fault analysis. This countermeasure, called BALoo for Bijection Assert with Loops, checks the bijectivity of the S-box. We show that this countermeasure has 100% fault coverage for persistent fault analysis, with a very small software overhead (memory overhead) and a reasonable hardware overhead (logical resources, memory and performance). To evaluate the overhead of BALoo, we provide experimental results obtained with software and hardware (FPGA) implementations of AES-128.
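The asserted property itself is simple to state in software; a minimal (purely illustrative, unprotected) Python check that an S-box table is still a bijection:

```python
def sbox_is_bijective(sbox: list[int]) -> bool:
    # A k-bit lookup table is a permutation iff every output value
    # appears exactly once; a persistent fault that makes one value
    # vanish and another appear twice breaks this and is detected.
    return sorted(sbox) == list(range(len(sbox)))

# E.g., for a 256-entry AES S-box table `AES_SBOX` (hypothetical name),
# sbox_is_bijective(AES_SBOX) should hold before any fault injection.
```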
Correlated-Output Differential Privacy and Applications to Dark Pools
In the classical setting of differential privacy, a privacy-preserving query is performed on a private database, after which the query result is released to the analyst; a differentially private query ensures that the presence of a single database entry is protected from the analyst’s view. In this work, we contribute the first definitional framework for differential privacy in the trusted curator setting; clients submit private inputs to the trusted curator, which then computes individual outputs privately returned to each client. The adversary is more powerful than in the standard setting; it can corrupt up to $n-1$ clients and subsequently decide inputs and learn outputs of corrupted parties. In this setting, the adversary also obtains leakage from honest outputs that are correlated with corrupted outputs. Standard differentially private mechanisms protect client inputs but do not mitigate output correlation leaking arbitrary client information, which can forfeit client privacy completely. We initiate the investigation of a novel notion of correlated-output differential privacy to bound the leakage from output correlation in the trusted curator setting. We define the satisfaction of both standard and correlated-output differential privacy as round differential privacy and highlight the relevance of this novel privacy notion to all application domains in the trusted curator model.
We explore round differential privacy in traditional “dark pool” market venues, which promise privacy-preserving trade execution to mitigate front-running; privately submitted trade orders and trade execution are kept private by the trusted venue operator. We observe that dark pools satisfy neither classic nor correlated-output differential privacy; in markets with low trade activity, the adversary may trivially observe recurring, honest trading patterns, and anticipate and front-run future trades. In response, we present the first round differentially private market mechanisms that formally mitigate information leakage from all trading activity of a user. This is achieved with fuzzy order matching, inspired by the standard randomized response mechanism; however, this also introduces a liquidity mismatch as buy and sell orders are not guaranteed to execute pairwise, thereby weakening output correlation; this mismatch is compensated for by a round differentially private liquidity provider mechanism, which freezes a noisy amount of assets from the liquidity provider for the duration of a privacy epoch, but leaves trader balances unaffected. We propose oblivious algorithms for realizing our proposed market mechanisms with secure multi-party computation (MPC) and implement these in the Scale-Mamba Framework using Shamir Secret Sharing based MPC. We demonstrate practical, round differentially private trading with throughput comparable to prior work implementing (traditional) dark pool algorithms in MPC; our experiments demonstrate practicality for both traditional finance and decentralized finance settings.
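For context, here is the standard randomized response mechanism that inspires the fuzzy order matching, as a minimal Python sketch (the paper's market mechanism is considerably more involved):

```python
import math
import random

def randomized_response(bit: int, epsilon: float) -> int:
    # Answer truthfully with probability e^eps / (1 + e^eps), else flip.
    # The likelihood ratio between the two possible inputs is e^eps, so
    # a single binary report satisfies eps-differential privacy.
    p_truth = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if random.random() < p_truth else 1 - bit
```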
Proactive Secret Sharing with Constant Communication
This paper presents the first protocols for Proactive Secret Sharing (PSS) that only require constant (in the number of parties, $n$) communication per party per epoch. By harnessing the power of expander graphs, we are able to obtain strong guarantees about the security of the system. We present the following PSS protocols:
– A PSS protocol that provides privacy (but no robustness) against an adversary controlling $O(n)$ parties per epoch.
– A PSS protocol that provides robustness (but no privacy) against an adversary controlling $O(n)$ parties per epoch.
– A PSS protocol that provides privacy against an adversary controlling $O(n^{a})$ parties per epoch and provides robustness against an adversary controlling $O(n^{1-a})$ parties per epoch, for any constant $0 \leq a \leq 1$. Instantiating this with $a = \frac{1}{2}$ gives a PSS protocol that is proactively secure (private and robust) against an adversary controlling $O(\sqrt{n})$ parties per epoch.
Additionally, we discuss how secure channels, whose existence is usually assumed by PSS protocols, are challenging to create in the mobile adversary setting, and we present a method to instantiate them from a weaker assumption.
Constant Input Attribute Based (and Predicate) Encryption from Evasive and Tensor LWE
Constructing advanced cryptographic primitives such as obfuscation or broadcast encryption from standard hardness assumptions in the post quantum regime is an important area of research, which has met with limited success despite significant effort. It is therefore extremely important to find new, simple to state assumptions in this regime which can be used to fill this gap. An important step was taken recently by Wee (Eurocrypt '22) who identified two new assumptions from lattices, namely evasive ${\sf LWE}$ and tensor ${\sf LWE}$, and used these to construct broadcast encryption and ciphertext policy attribute based encryption for ${\sf P}$ with optimal parameters. Independently, Tsabary formulated a similar assumption and used it to construct witness encryption (Crypto '22). Following Wee's work, Vaikuntanathan, Wee and Wichs independently provided a construction of witness encryption (Asiacrypt '22).
In this work, we advance this line of research by providing the first construction of multi-input attribute based encryption (${\sf MIABE}$) for the function class ${\sf NC}_1$ for any constant arity from evasive ${\sf LWE}$. Our construction can be extended to support the function class ${\sf P}$ by using evasive ${\sf LWE}$ together with a suitable strengthening of tensor ${\sf LWE}$. In more detail, our construction supports $k$ encryptors, for any constant $k$, where each encryptor uses the master secret key ${\sf msk}$ to encode its input $(\mathbf{x}_i, m_i)$, the key generator computes a key ${\sf sk}_f$ for a function $f \in {\sf NC}_1$, and the decryptor can recover $(m_1,\ldots,m_k)$ if and only if $f(\mathbf{x}_1,\ldots,\mathbf{x}_k)=1$. The only known construction of ${\sf MIABE}$ for ${\sf NC}_1$, by Agrawal, Yadav and Yamada (Crypto '22), supports arity $2$ and relies on pairings in the generic group model (or on a non-standard knowledge assumption) in addition to ${\sf LWE}$. Furthermore, it is completely unclear how to go beyond arity $2$ using this approach due to the reliance on pairings.
Using a compiler from Agrawal, Yadav and Yamada (Crypto '22), our ${\sf MIABE}$ can be upgraded to multi-input predicate encryption for the same arity and function class. Thus, we obtain the first constructions for constant-arity predicate and attribute based encryption for a generalized class such as ${\sf NC}_1$ or ${\sf P}$ from simple assumptions that may be conjectured post-quantum secure. Along the way, we show that the tensor ${\sf LWE}$ assumption can be reduced to standard ${\sf LWE}$ in an important special case which was not known before. This adds confidence to the plausibility of the assumption and may be of wider interest.
CryptAttackTester: high-assurance attack analysis
Quantitative analyses of the costs of cryptographic attack algorithms play a central role in comparing cryptosystems, guiding the search for improved attacks, and deciding which cryptosystems to standardize. Unfortunately, these analyses often turn out to be wrong. Sometimes errors are not caught until years later.
This paper introduces CryptAttackTester (CAT), a software framework for high-assurance quantification of attack effectiveness. CAT enforces complete definitions of attack algorithms all the way down through the model of computation, enforces complete definitions of probability predictions and cost predictions all the way down through the cost metric, and systematically tests the predictions on small-scale inputs.
For example, CAT gives a fully defined meaning to the statement "the median cost of brute-force search for an AES-128 key is under $2^{141.89}$ bit operations", and provides clear, auditable reasons to believe that the statement is correct. This does not rule out all possible analysis errors, but with CAT it is no longer possible for bugs to hide inside ambiguous or untested security-level claims. The paper gives various examples of errors in the literature that survived typical informal testing practices and that would have been caught if CAT-enforced links had been in place.
As an important case study, the bulk of the current CAT release consists of full definitions of a broad spectrum of algorithms for information-set decoding (ISD), along with cost/probability predictions for each algorithm. ISD is the top attack strategy against the McEliece cryptosystem. The predictions cover interactions between (1) high-level search strategies from Prange, Lee–Brickell, Leon, Stern, Dumer, May–Meurer–Thomae, and Becker–Joux–May–Meurer; (2) random walks from Omura, Canteaut–Chabaud, Canteaut–Sendrier, and Bernstein–Lange–Peters; and (3) speedups in core subroutines such as linear algebra and sorting. The predictions also account for various attack overheads that were omitted from previous analyses. These gaps add up to roughly 10 bits, depending on parameters. CAT's tests catch much smaller errors than this.
The cost metric selected in CAT has a very simple definition, is a lower bound for the price-performance ratio of non-quantum special-purpose hardware (although the bound is loose for attacks bottlenecked by long-distance communication), and allows many optimization efforts to be shared with the design of cryptographic circuits.
Speeding up elliptic computations for Ethereum Account Abstraction
Account Abstraction is a powerful feature that will transform today's Web3 onboarding UX. This note describes an EVM (Ethereum Virtual Machine) implementation of the well-known secp256r1 and ed25519 curves, optimized for the specificities of the EVM environment. Our optimizations rely on an EVM-dedicated XYZZ elliptic coordinate system, hacked precomputations, and assembly tricks, cutting the cost from more than 1M down to 200K/62K (with or without precomputations).
Musketeer: Incentive-Compatible Rebalancing for Payment Channel Networks
In this work, we revisit the severely limited throughput problem of cryptocurrencies and propose a novel rebalancing approach for Payment Channel Networks (PCNs). PCNs are a popular solution for increasing blockchain throughput; however, their benefit depends on the overall users’ liquidity. Rebalancing mechanisms are the state-of-the-art approach to maintaining high liquidity in PCNs. However, existing opt-in rebalancing mechanisms exclude users that may assist in rebalancing for small service fees, leading to suboptimal solutions and under-utilization of the PCNs’ bounded liquidity.
We introduce the first rebalancing approach for PCNs that includes all users, following an “all for one and one for all” design philosophy that yields optimal throughput. The proposed approach introduces a double-auction rebalancing problem, which we term Musketeer, where users can participate as buyers (paying fees to rebalance) or sellers (charging fees to route transactions). The desired properties tailored to the unique characteristics of PCNs are formally defined, including the novel property of cyclic budget balance, which is a stronger variation of strong budget balance. Basic results derived from auction theory are presented, including an impossibility result and multiple mechanisms that either achieve all desiderata under a relaxed model or sacrifice one of the properties. We also propose a novel mechanism that leverages time delays as an additional cost to users. This mechanism is provably truthful, cyclic budget balanced, individually rational, and economically efficient, but only with respect to liquidity.
WESP: An encryption method that, as the key size increases, requires an exponentially growing time to break
WESP is a new encryption algorithm based on equation systems, in which the equations are generated using the values of tables that act as the encryption key, and the equations have features that make them suitable for cryptographic use. The algorithm is defined and its properties are discussed. Besides describing the algorithm, we also present reasons why the algorithm works the way it works. The key size in WESP can be altered and has no upper limit, and typically the key size is bigger than commonly used keys today.
A calculation is presented of how many bytes can be securely encrypted before the algorithm might start to repeat its sequence of encrypting bytes, and it is shown that this period can be adjusted to be arbitrarily large.
It is also shown that within this period the resulting stream of encrypting bytes is statistically uniformly distributed.
It is also shown that if the encryption tables are not known, the equations in the system of equations cannot be known, and it is demonstrated that the system of equations cannot be solved if the equations are not known; thus the encryption cannot be broken in closed form.
Then, for all symbols used in the algorithm, we calculate the minimum number of trials needed in order to be able to verify the trials. Since the algorithm is constantly updating key values, verification becomes impossible if the equations are not evaluated in order. The calculation shows that the minimum number of trials required, i.e., the time required to break the encryption, increases exponentially as the key size grows. Since there is no upper limit on the key size, there is no upper limit on the time required to break the encryption either.
Conditional Cube Key Recovery Attack on Round-Reduced Xoodyak
Since the announcement of the NIST call for a new lightweight cryptographic standard, a lot of schemes have been proposed in response. Xoodyak is one of these schemes and is among the finalists of the NIST competition, with a sponge structure very similar to the Keccak hash function – the winner of the SHA-3 NIST competition. In this paper, using the conditional cube attack technique, we fully recover the key of Xoodyak reduced to 6 and 7 rounds, with time complexities of $2^{42.58}$ and $2^{76.003}$, respectively, in the nonce-reusing scenario. In our attack setting, we introduce the cube variables in the associated-data absorption phase, which has a higher degree of freedom in comparison to the data absorption phase. We use an MILP tool to find enough cube variables to perform the conditional key recovery attack. The 6-round attack is practical and has been implemented. To the best of our knowledge, this is the first proposed attack on 7-round Xoodyak.
Stealthy Logic Misuse for Power Analysis Attacks in Multi-Tenant FPGAs (Extended Version)
FPGAs have been used in the cloud for several years, as accelerators for various workloads such as machine learning, database processing and security tasks. As for other cloud services, a highly desired feature is virtualization, in which multiple tenants can share a single FPGA to increase utilization and thereby efficiency. By solely using standard FPGA logic, an untrusted tenant can implement on-chip logic sensors that allow remote power analysis side-channel and covert channel attacks on a victim tenant. However, such sensors are implemented by unusual circuit constructions, such as ring oscillators, delay lines, or unusual interconnect configurations, which might be easily detected by bitstream and/or netlist checking. In this paper, we show that such structural checking methods are not universal solutions, as the attacks can make use of any normal circuits, which means they are ``benign-looking'' to any checking method. We indeed demonstrate that -- without any additional and suspicious implementation constraints -- standard circuits intended for legitimate tasks can be misused as a sensor monitoring instantaneous power consumption, and hence for conducting key-recovery attacks. This extremely stealthy attack is a threat that can originate from the application layers, i.e., through various high-level synthesis approaches.
To Pass or Not to Pass: Privacy-Preserving Physical Access Control
Anonymous or attribute-based credential (ABC) systems are a versatile and important cryptographic tool to achieve strong access control guarantees while simultaneously respecting the privacy of individuals. A major problem in the practical adoption of ABCs is their transferability, i.e., such credentials can easily be duplicated, shared or lent. One way to counter this problem is to tie ABCs to biometric features of the credential holder and to require biometric verification on every use. While this is certainly not a viable solution for all ABC use-cases, there are relevant and timely use-cases, such as vaccination credentials as widely deployed during the COVID-19 pandemic. In such settings, ABCs that are tied to biometrics, which we call Biometric-Bound Attribute-Based Credentials (bb-ABC), allow to implement scalable and privacy-friendly systems to control physical access to (critical) infrastructure and facilities.
While there are some previous works on bb-ABC in the literature, the state of affairs is not satisfactory. Firstly, in existing work the problem is treated in a very abstract way when it comes to the actual type of biometrics. Thus, it does not provide concrete solutions which allow for assessing their practicality when deployed in a real-world setting. Secondly, there is no formal model which rigorously captures bb-ABC systems and their security requirements, making it hard to assess their security guarantees. With this work we overcome these limitations and provide a rigorous formalization of bb-ABC systems. Moreover, we introduce two generic constructions which offer different trade-offs between efficiency and trust assumptions, and provide benchmarks from a concrete instantiation of such a system using facial biometrics. The latter represents a contact-less biometric feature that provides acceptable accuracy and seems particularly suitable to the above use-case.
More Efficient Lattice-Based Electronic Voting from NTRU
In recent years, there has been much focus on developing core cryptographic primitives based on lattice assumptions, driven by the NIST call for post-quantum key encapsulation and digital signature algorithms. However, more work must be conducted on efficient privacy-preserving protocols based on quantum-safe assumptions.
Electronic voting is one such privacy-preserving protocol whose adoption is increasing across the democratic world. E-voting offers both a fast and convenient alternative to postal voting whilst further ensuring cryptographic privacy of votes and offering full verifiability of the process. Owing to the sensitivity of voting and its infrastructure challenges, it is crucial to ensure security against quantum computers is baked into e-voting solutions.
We present an e-voting scheme from quantum-safe assumptions based on the hardness of the RLWE and NTRU lattice problems, providing concrete parameters and an efficient implementation. Our design achieves a factor $5.3 \times$ reduction in ciphertext size, $2.5 \times$ reduction in total communication cost, and $2 \times$ reduction in total computation time compared to the state-of-the-art lattice-based voting scheme by Aranha et al. (ACM CCS 2023). We argue that the efficiency of this scheme makes it suitable for real-world elections.
Our scheme makes use of non-ternary NTRU secrets to achieve optimal parameters. In order to compute the security of our design, we extend the ternary-NTRU work of Ducas and van Woerden (ASIACRYPT 2021) by determining the concrete fatigue point (for general secrets) of NTRU to be $q = 0.0058 \cdot \sigma^2 \cdot d^{2.484}$ (above which parameters become overstretched) for modulus $q$, ring dimension $d$, and secrets drawn from a Gaussian of parameter $\sigma$. We consider this relation to be of independent interest and demonstrate its significance by improving the efficiency of the (partially) blind signature scheme by del Pino and Katsumata (CRYPTO 2022).
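The reported fatigue-point relation is straightforward to evaluate; a small helper (the sample values below are ours, purely for illustration):

```python
def ntru_fatigue_point(sigma: float, d: int) -> float:
    # Concrete fatigue point for general NTRU secrets as stated above:
    # moduli q above this value make the parameters overstretched.
    return 0.0058 * sigma**2 * d**2.484

print(ntru_fatigue_point(sigma=3.2, d=1024))  # illustrative parameters
```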
On the (Im)possibility of Time-Lock Puzzles in the Quantum Random Oracle Model
Time-lock puzzles wrap a solution $\mathrm{s}$ inside a puzzle $\mathrm{P}$ in such a way that ``solving'' $\mathrm{P}$ to find $\mathrm{s}$ requires significantly more time than generating the pair $(\mathrm{s},\mathrm{P})$, even if the adversary has access to parallel computing; hence it can be thought of as sending a message $\mathrm{s}$ to the future. It is known [Mahmoody, Moran, Vadhan, Crypto'11] that when the source of hardness is only a random oracle, then any puzzle generator with $n$ queries can be (efficiently) broken by an adversary in $O(n)$ rounds of queries to the oracle.
In this work, we revisit time-lock puzzles in a quantum world by allowing the parties to use quantum computing and, in particular, access the random oracle in quantum superposition. An interesting setting is when the puzzle generator is efficient and classical, while the solver (who might be an entity developed in the future) is quantum powered and is supposed to need a long sequential time to succeed. We prove that in this setting there is no construction of time-lock puzzles solely from quantum (accessible) random oracles. In particular, for any $n$-query classical puzzle generator, our attack only asks $O(n)$ (also classical) queries to the random oracle, even though it does indeed run in quantum polynomial time if the honest puzzle solver needs quantum computing.
Assuming perfect completeness, we also show how to make the above attack run in exactly $n$ rounds while asking a total of $m\cdot n$ queries where $m$ is the query complexity of the puzzle solver. This is indeed tight in the round complexity, as we also prove that a classical puzzle scheme of Mahmoody et al. is also secure against quantum solvers who ask $n-1$ rounds of queries. In fact, even for the fully classical case, our attack quantitatively improves the total queries of the attack of Mahmoody et al. for the case of perfect completeness from $\Omega(mn \log n)$ to $mn$. Finally, assuming perfect completeness, we present an attack in the ``dual'' setting in which the puzzle generator is quantum while the solver is classical.
We then ask whether one can extend our classical-query attack to the fully quantum setting, in which both the puzzle generator and the solver could be quantum. We show a barrier for proving such results unconditionally. In particular, we show that if the folklore simulation conjecture, first formally stated by Aaronson and Ambainis [arXiv'2009], is false, then there is indeed a time-lock puzzle in the quantum random oracle model that cannot be broken by classical adversaries. This result improves the previous barrier of Austrin et al. [Crypto'22] regarding key agreements (which can have interaction in both directions) to time-lock puzzles (which only involve unidirectional communication).
Compact Identity Based Encryption Based on $n^{th}$-Residuosity Assumption
Practical Identity Based Encryption (IBE) schemes use the costly bilinear pairing computation. Clifford Cocks proposed an IBE based on quadratic residuosity in 2001 which does not use bilinear pairings, but it was not efficient in practice due to its large ciphertext size. In 2007, Boneh et al. proposed the first space-efficient IBE, also based on the quadratic residuosity problem. It improved on Cocks' scheme, but the time required for encryption was still quartic in the security parameter. In this paper, we propose a compact, space- and time-efficient identity based encryption scheme without pairings, based on a variant of the Paillier cryptosystem, and prove it to be CPA secure. We also propose a CCA-secure scheme built from the basic IBE scheme using the Fujisaki-Okamoto transformation. We prove both schemes secure in the random oracle model.
Lattice-Based Succinct Arguments for NP with Polylogarithmic-Time Verification
Succinct arguments that rely on the Merkle-tree paradigm introduced by Kilian (STOC 92) suffer from larger proof sizes in practice due to the use of generic cryptographic primitives. In contrast, succinct arguments with the smallest proof sizes in practice exploit homomorphic commitments. However, the latter are quantum-insecure, unlike succinct arguments based on the Merkle-tree paradigm.
A recent line of works seeks to address this limitation, by constructing quantum-safe succinct arguments that exploit lattice-based commitments. The eventual goal is smaller proof sizes than those achieved via the Merkle-tree paradigm. Alas, known constructions lack succinct verification.
In this paper, we construct the first interactive argument system for NP with succinct verification that, departing from the Merkle-tree paradigm, exploits the homomorphic properties of lattice-based commitments. For an arithmetic circuit with N gates, our construction achieves verification time polylog(N) based on the hardness of the Ring Short-Integer-Solution (RSIS) problem.
The core technique in our construction is a delegation protocol built from commitment schemes based on leveled bilinear modules, a new notion that we deem of independent interest. We show that leveled bilinear modules can be realized from pre-quantum and from post-quantum cryptographic assumptions.
The QARMAv2 Family of Tweakable Block Ciphers
We introduce the QARMAv2 family of tweakable block ciphers. It is a redesign of QARMA (from FSE 2017) to improve its security bounds and allow for longer tweaks, while keeping similar latency and area.
The wider tweak input caters to both specific use cases and the design of modes of operation with higher security bounds. This is achieved through new key and tweak schedules, revised S-Box and linear layer choices, and a more comprehensive security analysis. QARMAv2 offers competitive latency and area in fully unrolled hardware implementations.
Some of our results may be of independent interest. These include: new MILP models of certain classes of diffusion matrices; the comparative analysis of a full reflection cipher against an iterative half-cipher; our boomerang attack framework; and an improved approach to doubling the width of a block cipher.
On vectorial functions mapping strict affine subspaces of their domain into strict affine subspaces of their co-domain, and the strong D-property
Given three positive integers $n<N$ and $M$, we study those vectorial Boolean $(N,M)$-functions $\mathcal{F}$ which map an $n$-dimensional affine space $A$ into an $m$-dimensional affine space, where $m<M$ and possibly $m=n$. This provides $(n,m)$-functions $\mathcal{F}_A$ as restrictions of $\mathcal{F}$. We show that the nonlinearity of $\mathcal{F}$ cannot be too large for this to be possible, and we observe that if it is zero, then it is always possible. In this case, we show that the nonlinearity of the restriction may nevertheless be large.
We then focus on the case $M=N$ and $\mathcal{F}$ of the form $\psi(\mathcal{G}(x))$, where $\mathcal{G}$ is almost perfect nonlinear (APN) and $\psi$ is a linear function with a kernel of dimension $1$. We observe that the problem of determining the D-property of APN $(N-1,N)$-functions $\mathcal{G}_A$, where $A$ is a hyperplane, is related to the problem of constructing APN $(N-1,N-1)$-functions $\mathcal{F}_A$. For this reason, we introduce the strong D-property defined for $(N,N)$-functions $\mathcal{G}$. We give a characterization of this property for crooked functions and their compositional inverse (if it exists) by means of their ortho-derivatives, and we prove that the Gold APN function in odd dimension $N$ large enough has the strong D-property. We also prove, in a simpler way than Taniguchi (2023), that the strong D-property of the Gold APN function holds for even $N$ large enough. Then we give a partial result on the Dobbertin APN power function, and on the basis of this result, we conjecture that it has the strong D-property as well.
We then move our focus to two known infinite families of differentially 4-uniform $(N-1,N-1)$-permutations constructed as the restrictions of $(N,N)$-functions $\mathcal{F}(x)=\psi(\mathcal{G}(x))$ or $\mathcal{F}(x)=\psi(\mathcal{G}(x))+x$, where $\psi$ is linear with a kernel of dimension $1$ and $\mathcal{G}$ is an APN permutation. After a deeper investigation of these classes, we provide proofs (which were missing) that they are not APN in even dimension $n=N-1$. Then we present our own construction, obtained by relaxing some hypotheses on $\psi$ and $\mathcal{G}$.
Collision Entropy Estimation in a One-Line Formula
We address the unsolved question of how best to estimate the collision entropy, also called quadratic or second-order Rényi entropy. Integer-order Rényi entropies are synthetic indices useful for the characterization of probability distributions. In recent decades, numerous studies have been conducted to arrive at valid estimates of them starting from experimental data, so as to derive suitable classification methods for the underlying processes, but optimal solutions have not been reached yet. Limited to the estimation of collision entropy, a one-line formula is presented here. The results of some specific Monte Carlo experiments give evidence of the validity of this estimator, even for the very low densities of data spread in high-dimensional sample spaces. The method's strengths are unbiased consistency, generality, and minimal computational cost.
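The abstract does not reproduce the formula itself. As background, the standard unbiased estimator of the collision probability via pairwise coincidence counting yields a comparable one-line estimate; the sketch below is this textbook estimator, not necessarily the paper's formula:

```python
import math
import random
from collections import Counter

def collision_entropy_estimate(samples) -> float:
    # Unbiased estimate of the collision probability sum_x p(x)^2 via
    # pairwise coincidence counting, turned into a Renyi-2 entropy.
    n = len(samples)
    counts = Counter(samples)
    coincidences = sum(c * (c - 1) for c in counts.values())
    p_coll = coincidences / (n * (n - 1))
    return -math.log2(p_coll)

# A fair 8-sided die has collision entropy exactly 3 bits.
print(collision_entropy_estimate([random.randrange(8) for _ in range(100_000)]))
```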
Analysis of the security of the PSSI problem and cryptanalysis of the Durandal signature scheme
We present a new attack against the PSSI problem, one of the three problems at the root of the security of Durandal, an efficient rank-metric code-based signature scheme with a public key size of 15 kB and a signature size of 4 kB, presented at EUROCRYPT'19. Our attack recovers the private key using a leakage of information coming from several signatures produced with the same key. Our approach is to combine pairs of signatures and apply Cramer-like formulas in order to build subspaces containing a secret element. We break all existing parameters of Durandal: the two published sets of parameters claiming a security of 128 bits are broken in $2^{66}$ and $2^{73}$ elementary bit operations, respectively, and the number of signatures required to finalize the attack is 1,792 and 4,096, respectively. We implemented our attack and ran experiments that demonstrated its success with smaller parameters.
Homomorphic Indistinguishability Obfuscation and its Applications
In this work, we propose the notion of homomorphic indistinguishability obfuscation ($\mathsf{HiO}$) and present a construction based on subexponentially-secure $\mathsf{iO}$ and one-way functions. An $\mathsf{HiO}$ scheme allows us to convert an obfuscation of circuit $C$ to an obfuscation of $C'\circ C$, and this can be performed obliviously (that is, without knowing the circuit $C$). A naive solution would be to obfuscate $C' \circ \mathsf{iO}(C)$. However, if we do this for $k$ hops, then the size of the final obfuscation is exponential in $k$. $\mathsf{HiO}$ ensures that the size of the final obfuscation remains polynomial after repeated compositions. As an application, we show how to build function-hiding hierarchical multi-input functional encryption and homomorphic witness encryption using $\mathsf{HiO}$.
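The size blow-up of the naive solution is easy to see with a toy cost model. The sketch below assumes, purely for illustration, that each obfuscation call inflates circuit size by a constant factor; the naive nesting then grows exponentially in the number of hops, while an additive per-hop cost (the guarantee $\mathsf{HiO}$ provides) stays polynomial:

```python
BLOWUP = 4      # assumed constant size overhead of one obfuscation call
HOP_SIZE = 100  # size of each hop circuit C' (illustrative)

def naive_size(c_size: int, hops: int) -> int:
    # Re-obfuscating iO(C) at every hop multiplies the accumulated size
    # by BLOWUP each time, so the result grows like BLOWUP**hops.
    size = c_size
    for _ in range(hops):
        size = BLOWUP * (size + HOP_SIZE)
    return size

def hio_size(c_size: int, hops: int) -> int:
    # An HiO-style scheme pays only an additive, per-hop cost.
    return BLOWUP * (c_size + hops * HOP_SIZE)

for k in (1, 5, 10, 20):
    print(k, naive_size(1000, k), hio_size(1000, k))
```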
Generalized Initialization of the Duplex Construction
The duplex construction is already well analyzed, with many papers proving its security in the random permutation model. However, so far, the first phase of the duplex, where the state is initialized with a secret key and an initialization vector ($\mathit{IV}$), has typically been analyzed in a worst-case manner. In more detail, it is always assumed that the adversary is allowed to choose the $\mathit{IV}$ at will. In this paper, we analyze how the security changes if restrictions on the choice of the $\mathit{IV}$ are imposed, varying from the global nonce case, over the random $\mathit{IV}$ case, to the $\mathit{IV}$-on-key case. The last one, in particular, is the duplex analogue of the use of a nonce masked with a secret in AES-GCM in TLS 1.3. We apply our findings to duplex-based encryption and authenticated encryption, and discuss the practical applications of our results.
Video-Based Cryptanalysis: Extracting Cryptographic Keys from Video Footage of a Device’s Power LED
In this paper, we present video-based cryptanalysis, a new method to recover secret keys from a device by analyzing video footage of the device's power LED. We show that cryptographic computations performed by the CPU change the power consumption of the device, which affects the brightness of the device's power LED. Based on this observation, we show how attackers can exploit commercial video cameras (e.g., an iPhone 13's camera or an Internet-connected security camera) to recover secret keys from devices. This is done by obtaining video footage of a device's power LED (in which the frame is filled with the power LED) and exploiting the video camera's rolling shutter to increase the sampling rate by three orders of magnitude, from the FPS rate (60 measurements per second) to the rolling shutter speed (60K measurements per second in the iPhone 13 Pro Max). The frames of the video footage of the device's power LED are analyzed in the RGB space, and the associated RGB values are used to recover the secret key by inferring the power consumption of the device from the RGB values. We demonstrate the application of video-based cryptanalysis by performing two side-channel cryptanalytic timing attacks and recover: (1) a 256-bit ECDSA key from a smart card, by analyzing video footage of the power LED of a smart card reader via a hijacked Internet-connected security camera located 16 meters away from the smart card reader, and (2) a 378-bit SIKE key from a Samsung Galaxy S8, by analyzing video footage of the power LED of Logitech Z120 USB speakers that were connected to the same USB hub (used to charge the Galaxy S8) via an iPhone 13 Pro Max. Finally, we discuss countermeasures, limitations, and the future of video-based cryptanalysis in light of the expected improvements in video cameras' specifications.
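The three-orders-of-magnitude claim is simple arithmetic: a rolling shutter exposes each row of a frame at a slightly different time, so every row is a separate brightness sample. A minimal sketch, where the rows-per-frame value is inferred from the rates quoted in the abstract rather than stated explicitly:

```python
def rolling_shutter_rate(fps: int, rows_per_frame: int) -> int:
    # Each row of a rolling-shutter frame is exposed at a slightly
    # different time, so every row is an independent brightness sample.
    return fps * rows_per_frame

# Numbers from the abstract: 60 FPS and 60K samples/s imply ~1000
# usable rows per frame on the iPhone 13 Pro Max (inferred, not stated).
print(rolling_shutter_rate(fps=60, rows_per_frame=1000))  # -> 60000
```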
mR$_{\text{LWE}}$-CP-ABE: a revocable CP-ABE for Post-Quantum Cryptography
We address the problem of fast user revocation in lattice-based CP-ABE by extending the scheme originally introduced in [A ciphertext policy attribute-based encryption scheme without pairings. J. Zhang, Z. Zhang - ICISC 2011]. While much work exists on the construction of revocable pairing-based CP-ABE schemes, lattice-based ones are far less common; to the best of our knowledge, we introduce the first server-aided revocation scheme for a lattice-based CP-ABE scheme, hence providing post-quantum safety. In particular, we rely on semi-trusted "mediators" to provide a multi-step decryption capable of handling mediation without re-encryption.
We discuss the scheme and its applications, and we provide performance experiments on a prototype implementation in the ABE spin-off library of Palisade to evaluate the overhead compared with the original scheme.
Efficient Card-Based Millionaires' Protocols via Non-Binary Input Encoding
Comparison of integers, a traditional topic in secure multiparty computation since Yao's pioneering work on the "Millionaires' Problem" (FOCS 1982), is also well studied in card-based cryptography. For this problem, Miyahara et al. (Theoretical Computer Science, 2020) proposed a protocol using binary cards (i.e., cards with two kinds of symbols) that is highly efficient in terms of the numbers of cards and shuffles, as well as its extension to number cards (i.e., cards with distinct symbols). In this paper, with a different design strategy which we name the "Tug-of-War technique", we propose new protocols based on binary cards and on number cards. For binary cards, our protocol improves on the previous protocol asymptotically (in the bit length of the input integers) in terms of the numbers of cards and shuffles when adopting ternary encoding of the input integers. For number cards, at the cost of increasing the number of cards, our protocol improves on the number of shuffles of the previous protocol even with binary encoding, and more so with $q$-ary encoding where $q > 2$.
Beware Your Standard Cells! On Their Role in Static Power Side-Channel Attacks
Static or leakage power, which is especially prominent in advanced technology nodes, enables so-called static power side-channel attacks (S-PSCA). While countermeasures exist, they often incur considerable overheads. In addition, hardware Trojans represent another threat. Although the interplay between static power, the down-scaling of technology nodes, and the vulnerability to S-PSCA is already established, an important detail has not been covered yet: the role of the components at the heart of this sensitive interplay, the standard cells.
Here, we study this intricate relationship for two commercial 28nm and 65nm technologies, using a commercial-grade IC design setup, and under realistic PPA objectives. Specifically, we study how threshold-voltage (VT) tuning of standard cells impacts the resilience of representative AES and PRESENT cipher hardware, including versions with established countermeasures. Our proposed CAD framework enables a security-vs-PPA-aware design-space exploration. Contrary to the belief that high-performance designs are generally more vulnerable to S-PSCA, we find that timing constraints and the distribution of different VT cells are more pivotal factors. Furthermore, we discover that attackers can deploy highly effective and stealthy S-PSCA-based Trojans, all without any gate overheads or any timing violations.
Threshold Private Set Intersection with Better Communication Complexity
Given $\ell$ parties with sets $X_1, \dots, X_\ell$ of size $n$, we would like to securely compute the intersection $\cap_{i=1}^\ell X_i$, if it is larger than $n-t$ for some threshold $t$, without revealing any other additional information. It has previously been shown (Ghosh and Simkin, Crypto 2019) that this function can be securely computed with a communication complexity that only depends on $t$ and in particular does not depend on $n$. For small values of $t$, this results in protocols that have a communication complexity that is sublinear in the size of the inputs. Current protocols either rely on fully homomorphic encryption or have an at least quadratic dependency on the parameter $t$.
In this work, we construct protocols with a quasilinear dependency on $t$ from simple assumptions like additively homomorphic encryption and oblivious transfer. All existing approaches, including ours, rely on protocols for computing a single bit, which indicates whether the intersection is larger than $n-t$ without actually computing it. Our key technical contribution, which may be of independent interest, takes any such protocol with secret shared outputs and communication complexity $\mathcal{O}(\lambda \ell \cdot\mathrm{poly}(t))$, where $\lambda$ is the security parameter, and transforms it into a protocol with communication complexity $\mathcal{O}(\lambda^2 \ell t \cdot\mathrm{polylog}(t))$.
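To see what the transformation buys, one can compare the two communication bounds numerically. The sketch below is purely illustrative: it instantiates $\mathrm{poly}(t)$ as $t^2$ and $\mathrm{polylog}(t)$ as $\log^2 t$, which are assumptions, not values from the paper:

```python
import math

LAM, ELL = 128, 3  # security parameter and number of parties (illustrative)

def quadratic_cost(t: int) -> int:
    # Prior protocols: an at least quadratic dependency on t,
    # modeled here as lambda * ell * t^2.
    return LAM * ELL * t * t

def quasilinear_cost(t: int) -> int:
    # After the transformation: O(lambda^2 * ell * t * polylog(t));
    # polylog(t) is taken to be log2(t)^2 purely for illustration.
    return LAM * LAM * ELL * t * int(math.log2(t)) ** 2

# The extra lambda factor means the quasilinear bound only pays off
# for sufficiently large t.
for t in (2**10, 2**16, 2**22):
    print(t, quadratic_cost(t), quasilinear_cost(t))
```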
Invertible Bloom Lookup Tables with Less Memory and Randomness
In this work we study Invertible Bloom Lookup Tables (IBLTs) with small failure probabilities. IBLTs are highly versatile data structures that have found applications in set reconciliation protocols, error-correcting codes, and even the design of advanced cryptographic primitives. For storing $n$ elements and ensuring correctness with probability at least $1 - \delta$, existing IBLT constructions require $\Omega(n(\frac{\log(1/\delta)}{\log(n)}+1))$ space and they crucially rely on fully random hash functions.

We present new constructions of IBLTs that are simultaneously more space-efficient and require less randomness. For storing $n$ elements with a failure probability of at most $\delta$, our data structure only requires $\mathcal{O}\left(n + \log(1/\delta)\log\log(1/\delta)\right)$ space and $\mathcal{O}\left(\log(\log(n)/\delta)\right)$-wise independent hash functions. As a key technical ingredient, we show that hashing $n$ keys with any $k$-wise independent hash function $h:U \to [Cn]$, for some sufficiently large constant $C$, guarantees with probability $1 - 2^{-\Omega(k)}$ that at least $n/2$ keys will have a unique hash value. Proving this is non-trivial as $k$ approaches $n$. We believe that the techniques used to prove this statement may be of independent interest.

We apply our new IBLTs to the encrypted compression problem, recently studied by Fleischhacker, Larsen, and Simkin (Eurocrypt 2023). We extend their approach to work for a more general class of encryption schemes, and using our new IBLT we achieve an asymptotically better compression rate.
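The unique-hash-value claim above is easy to check empirically. A minimal Monte Carlo sketch, using a fully random hash as a stand-in for a $k$-wise independent one (with $C = 2$, each key is unique with probability roughly $e^{-1/2} \approx 0.61 > 1/2$):

```python
import random
from collections import Counter

def unique_hash_fraction(n: int, C: int = 2, trials: int = 100) -> float:
    # Empirical check of the claim: hashing n keys into C*n buckets leaves
    # at least n/2 keys with a unique hash value, with high probability.
    # A fully random hash stands in for the k-wise independent one.
    successes = 0
    for _ in range(trials):
        values = [random.randrange(C * n) for _ in range(n)]
        counts = Counter(values)
        uniques = sum(1 for v in values if counts[v] == 1)
        successes += uniques >= n // 2
    return successes / trials

print(unique_hash_fraction(n=10_000))  # expect 1.0
```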
Zeromorph: Zero-Knowledge Multilinear-Evaluation Proofs from Homomorphic Univariate Commitments
A multilinear polynomial is a multivariate polynomial of degree at most one in each variable. This paper introduces a new scheme to commit to multilinear polynomials and to later prove evaluations thereof. The scheme exponentially reduces the prover cost added by making evaluation proofs zero-knowledge.
The construction of the scheme is generic and relies only on the additive homomorphic property of any scheme to commit to univariate polynomials, and on a protocol to prove that committed polynomials satisfy public degree bounds. As the construction requires checking that several committed univariate polynomials do not exceed given, separate bounds, the paper also gives a method to batch executions of any degree-check protocol on homomorphic commitments.
For an $n$-linear polynomial, the instantiation of the scheme with a hiding version of KZG commitments (Kate, Zaverucha, and Goldberg, Asiacrypt 2010) leads to a scheme with an evaluation prover that performs only $n + 5$ extra first-group operations (i.e., compared to the variant of the same scheme that is not zero-knowledge) to achieve the zero-knowledge property. In contrast, previous constructions require an extra multi-scalar multiplication of size $2^n$. The instantiation does so without any concessions on the other performance measures compared to the state of the art.
Unlinkability and Interoperability in Account-Based Universal Payment Channels
Payment channels allow a sender to perform multiple transactions with a receiver without recording each individual transaction on-chain. While most of the current constructions for payment channels focus on UTXO-based cryptocurrencies with reduced scripting capabilities (e.g., Bitcoin or Monero), little attention has been given to the possible benefits of adapting such constructions to cryptocurrencies based on the account model and offering a Turing-complete language (e.g., Ethereum).
The focus of this work is to implement efficient payment channels tailored to the capabilities of account-based cryptocurrencies with Turing-complete language support, in order to provide scalable payments that are interoperable across different cryptocurrencies and unlinkable for third parties (e.g., payment intermediaries). More concretely, we continue the line of research on cryptocurrency universal payment channels (UPC), which facilitate interoperable payment channel transactions across different ledgers in a hub-and-spoke model, offering greater scalability than point-to-point architectures. Our design proposes two different versions, UPC and AUPC. For UPC, we formally describe the protocol ideas sketched in previous work and evaluate our proof-of-concept implementation. AUPC then further extends the concept of universal payment channels with payment unlinkability against the intermediary server.
Attribute-based Single Sign-On: Secure, Private, and Efficient
A Single Sign-On (SSO) system allows users to access different remote services while authenticating only once. SSO can greatly improve the usability and security of online activities by dispensing with the need to securely remember or store tens or hundreds of authentication secrets. On the downside, today's SSO providers can track users' online behavior, and collect personal data that service providers want to see asserted before letting a user access their resources.
In this work, we propose a new policy-based Single Sign-On service, i.e., a system that produces access tokens that are conditioned on the user's attributes fulfilling a specified policy. Our solution is based on multi-party computation and threshold cryptography, and generates access tokens of standardized format. The central idea is to distribute the role of the SSO provider among several entities, in order to shield user attributes and access patterns from each individual entity. We provide a formal security model and analysis in the Universal Composability framework, against proactive adversaries. Our implementation and benchmarking show the practicality of our system for many real-world use cases.
Limits in the Provable Security of ECDSA Signatures
Digital signatures are ubiquitous in modern computing. One of the most widely used digital signature schemes is ECDSA, due to its use in TLS, various blockchains such as Bitcoin and Ethereum, and many other applications. Yet the formal analysis of ECDSA is comparatively sparse. In particular, all known security results for ECDSA rely on some idealized model such as the generic group model or the programmable (bijective) random oracle model.
In this work, we study the question of whether these strong idealized models are necessary for proving the security of ECDSA. Specifically, we focus on the programmability of ECDSA's "conversion function", which maps an elliptic curve point to its $x$-coordinate modulo the group order. Unfortunately, our main results are negative. We establish, by means of a meta-reduction, that an algebraic security reduction for ECDSA can only exist if the security reduction is allowed to program the conversion function. As a consequence, a meaningful security proof for ECDSA is unlikely to exist without strong idealization.
Hidden Stream Ciphers and TMTO Attacks on TLS 1.3, DTLS 1.3, QUIC, and Signal
Transport Layer Security (TLS) 1.3 and the Signal protocol are very important and widely used security protocols. We show that the key update function in TLS 1.3 and the symmetric key ratchet in Signal can be modeled as non-additive synchronous stream ciphers. This means that the efficient Time Memory Tradeoff Attacks for stream ciphers can be applied. The implication is that TLS 1.3, QUIC, DTLS 1.3, and Signal offer a lower security level against TMTO attacks than expected from the key sizes. We provide detailed analyses of the key update mechanisms in TLS 1.3 and Signal, illustrate the importance of ephemeral key exchange, and show that the process that DTLS 1.3 and QUIC use to calculate AEAD limits is flawed. We provide many concrete recommendations for the analyzed protocols.
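The modeling observation can be made concrete with a toy ratchet: once the state evolves only by deterministic one-way updates, the sequence of epoch keys is a keystream determined entirely by the initial state, which is exactly what precomputation-based TMTO attacks exploit. The sketch below uses SHA-256 as a stand-in for the HKDF-based updates actually used by TLS 1.3 and Signal; the labels and structure are illustrative only:

```python
import hashlib

def toy_key_ratchet(initial_secret: bytes, steps: int):
    # Toy model of a symmetric key ratchet as a synchronous stream cipher:
    # the state evolves as s_{i+1} = H(s_i || "update") and each epoch's
    # traffic key is k_i = H(s_i || "key"). SHA-256 stands in for the
    # HKDF-based updates actually used by TLS 1.3 and Signal.
    state = initial_secret
    for _ in range(steps):
        yield hashlib.sha256(state + b"key").digest()
        state = hashlib.sha256(state + b"update").digest()

# The keystream (sequence of epoch keys) depends only on the initial state,
# so tables precomputed over initial states apply, as in classic TMTO.
for i, k in enumerate(toy_key_ratchet(b"\x00" * 32, 3)):
    print(i, k.hex()[:16])
```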
Randomness of random in Cisco ASA
It all started with duplicated ECDSA nonces and keys in a large number of X.509 certificates generated by Cisco ASA security gateways, detected through the analysis of TLS scanning campaigns.
After some statistics and black-box key recovery, it continued with the analysis of multiple firmware images for those hardware devices and virtual appliances, to unveil the root causes of these collisions. It ended up with keygens to recover RSA keys, ECDSA keys, and signature nonces.
This article describes our journey understanding Cisco ASA randomness issues through the years, leading to CVE-2023-20107 [CVE-2023-20107, CSCvm90511]. More generally, it also provides technical and practical feedback on what can and cannot be done regarding entropy sources in association with DRBGs and other random processing mechanisms.
General Results of Linear Approximations over Finite Abelian Groups
In recent years, progress in practical applications of secure multi-party computation (MPC), fully homomorphic encryption (FHE), and zero-knowledge proofs (ZK) has motivated the exploration of symmetric-key cryptographic algorithms, as well as the corresponding cryptanalysis techniques (such as differential and linear cryptanalysis), over general finite fields $\mathbb{F}$ or the additive group induced by $\mathbb{F}^n$. This line of investigation has led to breaks of some MPC/FHE/ZK-friendly symmetric-key primitives, of the United States format-preserving encryption standard FF3-1, and of the South Korean standards FEA-1 and FEA-2. In this paper, we revisit linear cryptanalysis and give general results on linear approximations over arbitrary finite Abelian groups. We consider the nonlinearity, denoted $\lambda_F$ and defined via the maximal non-trivial linear approximation, to characterize the resistance of a function against linear cryptanalysis. A lower bound on the nonlinearity of a function $F:G\rightarrow H$ over an arbitrary finite Abelian group was first given by Pott in 2004. However, that result was restricted to the case where the size of $G$ divides the size of $H$, due to its connection to relative difference sets. We complete the generalization from $\mathbb{F}_2^n$ to finite Abelian groups and give a lower bound on $\lambda_F$ in all the remaining cases. Our result is deduced from new links that we establish between linear cryptanalysis and differential cryptanalysis over general finite Abelian groups.
Amortized Functional Bootstrapping in less than 7ms, with $\tilde{O}(1)$ polynomial multiplications
Amortized bootstrapping offers a way to refresh multiple ciphertexts of a fully homomorphic encryption scheme in parallel, more efficiently than refreshing a single ciphertext at a time. Micciancio and Sorrell (ICALP 2018) first proposed the technique to bootstrap $n$ LWE ciphertexts simultaneously, reducing the total cost from $\tilde{O}(n^2)$ to $\tilde{O}(3^\epsilon n^{1+\frac{1}{\epsilon}})$ for arbitrary $\epsilon > 0$. Several recent works have further improved the asymptotic cost. Despite this impressive progress in theoretical efficiency, none of these works demonstrates the practicality of batched LWE ciphertext bootstrapping. Moreover, most of them support only limited functional bootstrapping, i.e., the evaluation of some specific type of function during bootstrapping.
In this work, we propose a construction that is not only asymptotically efficient (requiring only $\tilde{O}(n)$ polynomial multiplications for the bootstrapping of $n$ LWE ciphertexts) but also concretely efficient. We implement our scheme as a C++ library and show that it takes $< 5$ms per LWE ciphertext to bootstrap for a binary gate, which is an order of magnitude faster than the state-of-the-art C++ implementation of LWE ciphertext bootstrapping in OpenFHE. Furthermore, our construction supports batched arbitrary functional bootstrapping. For a 9-bit message space, our scheme takes ${\sim}6.7$ms per LWE ciphertext to evaluate an arbitrary function with bootstrapping, which is about two to three orders of magnitude faster than all existing schemes achieving a similar functionality and message space.
Efficient 3PC for Binary Circuits with Application to Maliciously-Secure DNN Inference
In this work, we focus on maliciously secure 3PC for binary circuits with an honest majority. While the state-of-the-art protocol (Boyle et al., CCS 2019) already achieves the same amortized communication as the best-known semi-honest protocol (Araki et al., CCS 2016), it suffers from a large computation overhead: compared with the best-known implementation result (Furukawa et al., Eurocrypt 2017), which requires $9\times$ the communication cost of Araki et al., the protocol by Boyle et al. is around $4.5\times$ slower than that of Furukawa et al.
In this paper, we design a maliciously secure 3PC protocol that matches the communication of Araki et al. with concrete efficiency comparable to Furukawa et al. To obtain our result, we manage to apply the distributed zero-knowledge proofs (Boneh et al., Crypto 2019) for verifying computations over $\mathbb{Z}_2$ by using \emph{prime} fields, and we explore the algebraic structure of prime fields to make the computation of our protocol friendly to native CPU computation.
Experimental results show that our protocol is around $3.5\times$ faster for AES circuits than that of Boyle et al. We also apply our protocol to the binary part (e.g., comparison and truncation) of secure deep neural network inference, and the results show that we can reduce the time cost of achieving malicious security in the binary part by more than $67\%$.
Besides our main contribution, we also find a hidden security issue in many of the current probabilistic truncation protocols, which may be of independent interest.
A Hardware-Software Co-Design for the Discrete Gaussian Sampling of FALCON Digital Signature
Sampling random values from a discrete Gaussian distribution with high precision is a major and computationally intensive operation of upcoming or existing cryptographic standards. FALCON is one such algorithm that the National Institute of Standards and Technology chose to standardize as a next-generation, quantum-secure digital signature algorithm. The discrete Gaussian sampling of FALCON has both flexibility and efficiency needs—it constitutes 72% of total signature generation in reference software and requires sampling from a variable mean and standard deviation. Unfortunately, there are no prior works on accelerating this complete sampling procedure.
In this paper, we propose a hardware-software co-design for accelerating FALCON’s discrete Gaussian sampling subroutine. The proposed solution handles the flexible computations for setting the variable parameters in software and executes core operations with low latency, parameterized, and custom hardware. The hardware parameterization allows trading off area vs. performance. On a Xilinx SoC FPGA Architecture, the results show that compared to the reference software, our solution can accelerate the sampling up to 9.83× and the full signature scheme by 2.7×. Moreover, we quantified that our optimized multiplier circuits can improve the throughput over a straightforward implementation by 60%.
Efficient Zero Knowledge for Regular Language
A succinct zero-knowledge proof for regular language membership, i.e., a proof that a secret string behind an encryption (or hash) belongs to a regular language, is useful, e.g., for asserting that an encrypted email is free of malware. The great challenge in practice is that the regular language used is often huge. We present zkreg, a distributed commit-and-prove system that handles such complexity. In zkreg, cryptographic operations are encoded using arithmetic circuits, and input acceptance is modeled as a zero-knowledge subset problem using Σ-protocols. We introduce a Feedback Commit-and-Prove (FB-CP) scheme, which connects Σ-protocols and the Groth16 system with O(1) proof size and verifier cost. We present a close-to-optimal univariate instantiation of zk-VPD, a zero-knowledge variation of the KZG polynomial commitment scheme, based on which an efficient zk-subset protocol is developed. We develop a 2-phase proof scheme to further exploit the locality of Aho-Corasick automata. To demonstrate the performance and scalability of zkreg, we prove that all ELF files (encrypted and hashed) in a Linux CentOS 7 distribution are malware-free. Applying the inner pairing product argument, we obtain an aggregated proof of 1.96 MB which can be verified in 6.5 seconds.
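The statement zkreg proves is ordinary regular-language membership, performed over a hidden string. For orientation only, the sketch below shows the cleartext computation that would be proven in zero knowledge: running a DFA (of the kind an Aho-Corasick automaton yields) over the input. The toy language and pattern are illustrative, not from the paper:

```python
def dfa_accepts(transitions, start, accepting, string) -> bool:
    # Plain DFA run; zkreg proves this acceptance in zero knowledge,
    # with the string hidden behind a commitment or hash.
    state = start
    for symbol in string:
        state = transitions.get((state, symbol))
        if state is None:
            return False
    return state in accepting

# Toy language: binary strings containing the substring "101".
pattern = "101"
trans = {}
for s in range(len(pattern) + 1):
    for c in "01":
        if s == len(pattern):
            trans[(s, c)] = s  # accepting state is absorbing
        else:
            cand = pattern[:s] + c
            while cand and not pattern.startswith(cand):
                cand = cand[1:]  # KMP-style fallback to the longest prefix
            trans[(s, c)] = len(cand)

print(dfa_accepts(trans, 0, {3}, "0011001"))  # False
print(dfa_accepts(trans, 0, {3}, "0010100"))  # True
```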
Optimal Broadcast Encryption and CP-ABE from Evasive Lattice Assumptions
We present a new, simple candidate broadcast encryption scheme for $N$ users with parameter size poly$(\log N)$. We prove security of our scheme under a non-standard variant of the LWE assumption where the distinguisher additionally receives short Gaussian pre-images, while avoiding zeroizing attacks. This yields the first candidate optimal broadcast encryption that is plausibly post-quantum secure, and enjoys a security reduction to a simple assumption. As a secondary contribution, we present a candidate ciphertext-policy attribute-based encryption (CP-ABE) scheme for circuits of a-priori bounded polynomial depth where the parameter size is independent of the circuit size, and prove security under an additional non-standard assumption.
$\mathsf{zkSaaS}$: Zero-Knowledge SNARKs as a Service
A decade of active research has led to practical constructions of zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARKs) that are now being used in a wide variety of applications. Despite this astonishing progress, overheads in proof generation time remain significant.
In this work, we envision a world where consumers with low computational resources can outsource the task of proof generation to a group of untrusted servers in a privacy-preserving manner. The main requirement is that these servers should be able to collectively generate proofs at a faster speed (than the consumer). Towards this goal, we introduce a framework called zk-SNARKs-as-a-service ($\mathsf{zkSaaS}$) for faster computation of zk-SNARKs. Our framework allows for distributing proof computation across multiple servers such that each server is expected to run for a shorter duration than a single prover. Moreover, the privacy of the prover's witness is ensured against any minority of colluding servers.
We design custom protocols in this framework that can be used to obtain faster runtimes for widely used zk-SNARKs, such as Groth16 [EUROCRYPT 2016], Marlin [EUROCRYPT 2020], and Plonk [EPRINT 2019]. We implement a proof-of-concept zkSaaS for the Groth16 and Plonk provers. In comparison to generating these proofs on commodity hardware, we show that not only can we generate proofs for a larger number of constraints (without memory exhaustion), but we also get an $\approx 22\times$ speed-up when running with 128 parties for $2^{25}$ constraints with Groth16 and $2^{21}$ gates with Plonk.
Pseudorandom Strings from Pseudorandom Quantum States
We study the relationship between notions of pseudorandomness in the quantum and classical worlds. A pseudorandom quantum state generator (PRSG), a pseudorandomness notion in the quantum world, is an efficient circuit that produces states that are computationally indistinguishable from Haar-random states. PRSGs have found applications in quantum gravity, quantum machine learning, quantum complexity theory, and quantum cryptography. Pseudorandom generators (PRGs), on the other hand, a pseudorandomness notion in the classical world, are ubiquitous in theoretical computer science. While some separation results are known between PRSGs (for some parameter regimes) and PRGs, their relationship has not been completely understood.
In this work, we show that a natural variant of pseudorandom generators called quantum pseudorandom generators (QPRGs) can be based on the existence of logarithmic-output-length PRSGs. Our result, along with the previous separations, gives a better picture of the relationship between the two notions. We also study the relationship between other notions, namely, pseudorandom function-like state generators and pseudorandom functions. We provide evidence that QPRGs can be as useful as PRGs by giving cryptographic applications of QPRGs such as commitments and encryption schemes.
Our primary technical contribution is a method for pseudodeterministically extracting uniformly random strings from Haar-random states.
Near-Optimal Oblivious Key-Value Stores for Efficient PSI, PSU and Volume-Hiding Multi-Maps
In this paper, we study oblivious key-value stores (OKVS) that enable encoding $n$ key-value pairs into length-$m$ encodings while hiding the input keys. The goal is to obtain a high rate, $n/m$, with efficient encoding and decoding algorithms. We present $\mathsf{RB\text{-}OKVS}$, built on random band matrices, which obtains near-optimal rates as high as 0.97, whereas prior works could only achieve rates up to 0.81 with similar encoding times.
Using $\mathsf{RB\text{-}OKVS}$, we obtain state-of-the-art protocols for private set intersection (PSI) and union (PSU). Our semi-honest PSI has up to 12% smaller communication and a 13% reduction in monetary cost, with slightly larger computation. We also obtain similar improvements for both malicious and circuit PSI. For PSU, our protocol obtains improvements of up to 22% in communication, 40% in computation, and 21% in monetary cost. In general, we obtain the most communication- and cost-efficient protocols for all of the above primitives.
Finally, we present the first connection between OKVS and volume-hiding encrypted multi-maps (VH-EMM), where the goal is to outsource storage of multi-maps while hiding the number of values associated with each key (i.e., the volume). We present $\mathsf{RB\text{-}MM}$ with 16% smaller storage, $5\times$ faster queries, and $8\times$ faster setup than prior works.
SublonK: Sublinear Prover PlonK
We propose SublonK, a new zero-knowledge succinct non-interactive argument of knowledge (zkSNARK). SublonK builds on PlonK [EPRINT'19], a popular state-of-the-art practical zkSNARK. Our new construction preserves all the great features of PlonK, i.e., it supports constant-size proofs, constant-time proof verification, a circuit-independent universal setup, as well as support for custom gates and lookup gates. Moreover, SublonK achieves improved prover running time over PlonK. In PlonK, the prover runtime grows with the circuit size. Instead, in SublonK, the prover runtime grows with the size of the "active part" of the circuit. For instance, consider circuits encoding conditional execution, where only a fraction of the circuit is exercised by the input. For such circuits, the prover runtime in SublonK grows only with the exercised execution path.
As an example, consider the zkRollup circuit. This circuit involves executing one of $n$ code segments $k$ times. For this case, using PlonK involves proving a circuit containing $n \cdot k$ code segments. In SublonK, the prover costs are close to those of proving a PlonK proof for a circuit of size roughly $k$ code segments. Concretely, based on our implementation, for parameter choices derived from rollup contracts on Ethereum, $n = 8$ and $k \in \{2^{10},\dots,2^{16}\}$, the SublonK prover is approximately $4.6\times$ faster than the PlonK prover. Proofs in SublonK are 2.4KB and can be verified in under 50ms.
Secure Multiparty Computation with Free Branching
We study secure multi-party computation (MPC) protocols for branching circuits that contain multiple sub-circuits (i.e., branches) and the output of the circuit is that of a single active branch. Crucially, the identity of the active branch must remain hidden from the protocol participants.
While such circuits can be securely computed by evaluating each branch and then multiplexing the output, such an approach incurs a communication cost linear in the size of the entire circuit. To alleviate this, a series of recent works have investigated the problem of reducing the communication cost of branching executions inside MPC (without relying on fully homomorphic encryption). Most notably, the stacked garbling paradigm [Heath and Kolesnikov, CRYPTO'20] yields garbled circuits for branching circuits whose size only depends on the size of the largest branch. Presently, however, it is not known how to obtain similar communication improvements for secure computation involving more than two parties.
In this work, we provide a generic framework for branching multi-party computation that supports any number of parties. The communication complexity of our scheme is proportional to the size of the largest branch and the computation is linear in the size of the entire circuit. We provide an implementation and benchmarks to demonstrate practicality of our approach.
What If Alice Wants Her Story Told?
In this paper, we pose the problem of defining technical privacy guarantees for anonymizing stories. This is a challenge that arises in practice in contexts like journalism and memoir writing, and can be crucial in determining whether a story gets told and how effectively it is presented. While recent decades have seen leaps and bounds in the development of approaches like differential privacy for numerical and categorical data sets, none of these techniques are directly applicable to settings where narrative structure is crucial to preserve. Here we begin an exploration of what kind of definitions could be achievable in such settings, and discuss the building blocks and interdisciplinary collaborations that could shape new solutions. Having scientific guarantees for anonymity in narrative forms could have far-reaching effects: in the peace of mind of potential tellers and subjects of stories, as well as in the legal sphere. This could ultimately expand the kinds of stories that can be told.
Practical Schnorr Threshold Signatures Without the Algebraic Group Model
Threshold signatures are digital signature schemes in which a set of $n$ signers specify a threshold $t$ such that any subset of size $t$ is authorized to produce signatures on behalf of the group. There has recently been a renewed interest in this primitive, largely driven by the need to secure highly valuable signing keys, e.g., DNSSEC keys or keys protecting digital wallets in the cryptocurrency ecosystem. Of special interest is FROST, a practical Schnorr threshold signature scheme, which is currently undergoing standardization in the IETF and whose security was recently analyzed at CRYPTO'22.
We continue this line of research by focusing on FROST's unforgeability when combined with a practical distributed key generation (DKG) algorithm. Existing proofs of this setup either use non-standard heuristics, idealized group models like the AGM, or idealized key generation. Moreover, existing proofs do not consider all practically relevant optimizations that have been proposed. We close this gap between theory and practice by presenting the Schnorr threshold signature scheme Olaf, which combines the most efficient known FROST variant FROST3 with a variant of Pedersen's DKG protocol (as commonly used for FROST), and by proving its unforgeability. Our proof relies on the AOMDL assumption (a weaker and falsifiable variant of the OMDL assumption) and, like proofs of regular Schnorr signatures, on the random oracle model.
Spilling-Cascade: an Optimal PKE Combiner for KEM Hybridization
Hybrid post-quantum cryptography is a cautious approach to guarding against the threat posed by quantum computers: Post-Quantum (PQ) and classical (i.e., pre-quantum) cryptosystems are used simultaneously, so that security is retained even if the post-quantum schemes in use should prove insecure.
Regarding the hybridization of Key Encapsulation Mechanisms (KEMs), most recent studies focus on safely combining the symmetric keys output by a parallel execution of classical and Post-Quantum KEMs. While this architecture is straightforward, it leaves room for bandwidth optimization.
Hence, we propose a novel method for hybridizing several KEMs more effectively, by combining the underlying Public-Key Encryption schemes (PKEs) in an innovative variant of the cascade composition that we call "spilling-cascade", before turning the hybrid PKE into a KEM with an FO transformation. We prove that this architecture constitutes a robust combiner for encryption schemes up to IND-CPA security, which ultimately yields an IND-CCA-secure KEM.
In terms of performance, our spilling-cascade scheme has a better communication cost than the commonly used parallel combination, with a bandwidth gain for its ciphertext ranging from 2.8% to 13% compared to the latter, depending on the number and the characteristics of the PKEs that are combined. Moreover, we prove that, for given PKEs to hybridize, the ciphertext communication cost of the spilling-cascade is optimal.
On the Impossibility of Algebraic NIZK In Pairing-Free Groups
Non-Interactive Zero-Knowledge proofs (NIZK) allow a prover to convince a verifier that a statement is true by sending only one message and without conveying any other information.
In the CRS model, many instantiations have been proposed from group-theoretic assumptions.
On the one hand, some of these constructions use the group structure in a black-box way but rely on pairings, an example being the celebrated Groth-Sahai proof system.
On the other hand, a recent line of research realized NIZKs from sub-exponential DDH in pairing-free groups using correlation-intractable hash functions, but at the price of making non-black-box use of the group.
As of today no construction is known to simultaneously reduce its security to pairing-free group problems and to use the underlying group in a black-box way.
This is indeed not a coincidence:
in this paper, we prove that, for a large class of NIZKs, either the pairing-free group is used in a non-black-box way by relying on element representations, or security reduces to external hardness assumptions.
More specifically, our impossibility applies to two incomparable cases.
The first covers Arguments of Knowledge (AoK), which prove that a preimage under a given one-way function is known.
The second covers NIZKs (not necessarily AoK) for hard subset problems, which capture relations such as DDH, Decision-Linear, and Matrix-DDH.
Improved Gadgets for the High-Order Masking of Dilithium
We present novel and improved high-order masking gadgets for Dilithium, a post-quantum signature scheme that has been standardized by the National Institute of Standards and Technology (NIST). Our proposed gadgets include the ShiftMod gadget, which is used for efficient arithmetic shifts and serves as a component in other masking gadgets. Additionally, we propose a new algorithm for Boolean-to-arithmetic masking conversion of a $\mu$-bit integer $x$ modulo any integer $q$, with a complexity that is independent of both $\mu$ and $q$. This algorithm is used in Dilithium to mask the generation of the random variable $y$ modulo $q$. Moreover, we describe improved techniques for masking the Decompose function in Dilithium. Our new gadgets are proven to be secure in the $t$-probing model.
We demonstrate the effectiveness of our countermeasures by presenting a complete high-order masked implementation of Dilithium that utilizes the improved gadgets described above. We provide practical results obtained from a C implementation and compare the performance improvements provided by our new gadgets with those of previous work.
ModHE: Modular Homomorphic Encryption Using Module Lattices: Potentials and Limitations
The promising field of homomorphic encryption enables functions to be evaluated on encrypted data, producing results that mimic the same computations done on plaintexts. It therefore comes as no surprise that many ventures at constructing homomorphic encryption schemes have come into the limelight in recent years. Most popular are those that rely on a hard lattice problem called the Ring Learning with Errors problem (RLWE). One major limitation of these homomorphic encryption schemes is that, in order to securely increase the maximum multiplicative depth, they need to increase the polynomial size, thereby also increasing the complexity of the design. We aim to bridge this gap by proposing ModHE, a homomorphic encryption (HE) scheme based on the Module Learning with Errors problem (MLWE) that allows us to break big computations into smaller ones. Given the popularity of module lattice-based post-quantum schemes, it is an evidently interesting research endeavor to also formulate module lattice-based homomorphic encryption schemes.
While our proposed scheme is general, as a case study we port the well-known RLWE-based CKKS scheme to the MLWE setting. The module version of the scheme avoids the polynomial-size blowup when aiming for a greater circuit depth. Additionally, it presents greater opportunities for designing flexible, reusable, and parallelizable hardware architectures. A hardware implementation is provided to support our claims. We also acknowledge that as we decrease the complexity of individual computations, the number of operations (such as relinearizations) increases. We hope that the potential and limitations of this hardware-friendly scheme will spark further research.
Differentially Private Selection from Secure Distributed Computing
Given a collection of vectors $\mathbf{x}^{(1)},\dots,\mathbf{x}^{(n)} \in \{0,1\}^d$, the selection problem asks to report the index of an "approximately largest" entry in $\mathbf{x}=\sum_{j=1}^n \mathbf{x}^{(j)}$. Selection abstracts a host of problems; in machine learning it can be used for hyperparameter tuning, feature selection, or to model empirical risk minimization. We study selection under differential privacy, where a released index guarantees privacy for individual vectors.

Though selection can be solved with an excellent utility guarantee in the central model of differential privacy, the distributed setting where no single entity is trusted to aggregate the data lacks solutions. Specifically, strong privacy guarantees with high utility are offered in high-trust settings, but not in low-trust settings. For example, in the popular shuffle model of distributed differential privacy, there are strong lower bounds suggesting that the utility of the central model cannot be obtained.

In this paper we design a protocol for differentially private selection in a trust setting similar to the shuffle model, with the crucial difference that our protocol tolerates corrupted servers while maintaining privacy. Our protocol uses techniques from secure multi-party computation (MPC) to implement a protocol that: (i) has utility on par with the best mechanisms in the central model, (ii) scales to large, distributed collections of high-dimensional vectors, and (iii) uses $k\geq 3$ servers that collaborate to compute the result, where the differential privacy guarantee holds assuming an honest majority. Since general-purpose MPC techniques are not sufficiently scalable, we propose a novel application of integer secret sharing, and evaluate the utility and efficiency of our protocol both theoretically and empirically. Our protocol is the first to demonstrate that large-scale differentially private selection is possible in a distributed setting.
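For orientation, the central-model baseline whose utility the protocol matches is the classical report-noisy-max mechanism. The sketch below shows only this central mechanism (the paper's contribution is realizing comparable utility across $k \geq 3$ servers via MPC); the noise calibration follows one standard analysis for sensitivity-1 counts and is not taken from the paper:

```python
import numpy as np

def private_selection(vectors: np.ndarray, epsilon: float) -> int:
    # Central-model report-noisy-max: add Laplace noise to each column sum
    # and return the index of the noisiest maximum. Each user contributes
    # one binary row, so each count has sensitivity 1; the scale 2/epsilon
    # follows one standard calibration of the mechanism.
    counts = vectors.sum(axis=0)
    noisy = counts + np.random.laplace(scale=2.0 / epsilon, size=counts.shape)
    return int(np.argmax(noisy))

# Example: 1000 users, 16 coordinates, coordinate 3 is clearly the largest.
rng = np.random.default_rng(0)
x = (rng.random((1000, 16)) < 0.1).astype(int)
x[:, 3] |= (rng.random(1000) < 0.5).astype(int)
print(private_selection(x, epsilon=1.0))  # -> almost surely 3
```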
Diversity Algorithms for Laser Fault Injection
Before third-party evaluation and certification, manufacturers often conduct internal security evaluations on secure hardware devices, including fault injection (FI). Within this process, FI aims to identify parameter combinations that reveal device vulnerabilities.
The impracticality of conducting an exhaustive search over FI parameters has prompted the development of advanced and guided algorithms. However, these proposed methods often focus on a specific, critical region, which is beneficial for attack scenarios requiring a single optimal FI parameter combination.
In this work, we introduce two novel metrics that align better with the goal of identifying multiple optima. These metrics consider the number of unique vulnerable locations and clusters (regions). Furthermore, we present two methods promoting diversity in the tested parameter combinations: the Grid Memetic Algorithm (GridMA) and an Evolution Strategy (ES). Our findings reveal that these diversity methods, though identifying fewer vulnerabilities overall than the Memetic Algorithm (MA), still outperform Random Search (RS), identifying at least $\approx8\times$ more vulnerabilities. Using our novel metrics, we observe that the number of distinct vulnerable locations is similar across all three evolutionary algorithms, with an $\approx30\%$ increase over RS. Importantly, ES and GridMA prove superior in discovering multiple vulnerable regions, with ES identifying $\approx55\%$ more clusters than the worst-performing MA.
Suboptimality in DeFi
The decentralized finance (DeFi) ecosystem has proven to be popular in facilitating financial operations, such as token exchange and lending. The public availability of DeFi platforms’ code, together with real-time data on all user interactions with them, has given rise to complex tools that find and seize profit opportunities on behalf of users.
In this work, we show that both users and the aforementioned tools sometimes act suboptimally: their profits can be increased by more than 100%, with the highest amount of revenue missed by a single suboptimal action reaching 428.14 ETH ($517K). To reach these findings, we examine core DeFi primitives used by over 850 platforms that are responsible for a daily volume of more than 100 million USD on Ethereum alone: (1) lending and borrowing funds, (2) using flashswaps to close arbitrage opportunities between decentralized exchanges (DEXs), and (3) liquidation of insolvent loans using flashswaps, which combines the previous two. The profit that can be made from each primitive is then cast as an optimization problem that can be solved. We show that missed opportunities to make a profit are noticed by others and are sometimes followed by back-running transactions that extract profits using similar actions. By analyzing these events, we find that some transactions are circumstantially tied to specific miners and hypothesize that they use their knowledge of private orderflow for profit. Essentially, this is an instance of miner-extractable value (MEV) "in action".
When is Slower Block Propagation More Profitable for Large Miners?
For years, Bitcoin miners have put little effort into adopting several widely acclaimed block acceleration techniques which, as some argued, would secure their revenues. Their indifference inspires a theory that slower block propagation is beneficial for some miners. In this study, we analyze and confirm this counterintuitive theory. Specifically, by modeling inadvertently slower blocks, we show that a mining coalition controlling more than a third of the total mining power can earn unfair revenue by propagating its blocks more slowly to outsiders. Afterward, we explore the strategies of an attacker that consciously exploits this phenomenon. The results indicate that an attacker with 45% of the total mining power can earn 58% of the total revenue. This attack is alarming as it is as fundamental as, but stealthier than, the well-known selfish mining attack. Finally, we discuss detection and defense mechanisms.
Efficient Evaluation of Frequency Test for Overlapping Vectors Statistic
Randomness testing is one of the most essential and easiest-to-use tools for evaluating cryptographic primitives. The faster we can test, the greater the volume of data that can be examined, and thus the more detailed the analysis that becomes possible.
This paper presents a range of observations made for a well-known frequency test for overlapping vectors in binary sequence testing. We show how to compute the precise chi-square statistic in $O\left(dt\,2^{dt}\right)$ time instead of $O\left(2^{2dt}\right)$, without precomputed tables.
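For intuition, a naive Python version of such a frequency test for overlapping windows is sketched below; the window width $w$ plays the role of $dt$ in the abstract, and the sketch simply counts windows and forms the chi-square statistic rather than reproducing the paper's fast algorithm. Because the windows overlap, the counts are correlated, which is part of what makes the precise statistic non-trivial to compute.

```python
# Naive illustration (not the paper's O(dt * 2^dt) algorithm): slide a
# w-bit window over the binary sequence, count each pattern, and compare
# the counts against the uniform expectation with a chi-square statistic.

import random

def overlapping_frequency_chi2(bits, w):
    n = len(bits)
    counts = [0] * (1 << w)
    window, mask = 0, (1 << w) - 1
    for i, b in enumerate(bits):
        window = ((window << 1) | b) & mask
        if i >= w - 1:           # window holds w bits from here on
            counts[window] += 1
    expected = (n - w + 1) / (1 << w)
    return sum((c - expected) ** 2 / expected for c in counts)

random.seed(0)
bits = [random.getrandbits(1) for _ in range(1 << 16)]
print(overlapping_frequency_chi2(bits, w=4))
```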
A note on ``LAKAF: lightweight authentication and key agreement framework for smart grid network''
We show that the key agreement scheme [J. Syst. Archit., 116: 102053, 2021] is flawed. It makes use of symmetric-key encryption to transfer data between the user and the server, but the symmetric key is easily retrieved by an adversary. This results in the loss of data confidentiality and makes the scheme vulnerable to impersonation attacks.
Further results on several classes of optimal ternary cyclic codes with minimum distance four
Cyclic codes are a subclass of linear codes and have applications in consumer electronics, data storage systems, and communication systems, as they have efficient encoding and decoding algorithms. In this paper, by analyzing the solutions of certain equations over $\mathbb{F}_{3^m}$ and using the multivariate method, we present three classes of optimal ternary cyclic codes in the case where $m$ is odd, as well as five classes of optimal ternary cyclic codes with explicit values of $e$. In addition, two classes of optimal ternary cyclic codes $C_{(u, v)}$ are given.
Pairwise and Parallel: Enhancing the Key Mismatch Attacks on Kyber and Beyond
Resilience against key mismatch attacks is a major concern for KEMs in the NIST PQC standardization process. In a key mismatch attack, the adversary aims to recover a reused key by sending ciphertexts of a special form to the target party and observing whether the shared key matches his guesses or not.
In this paper, we propose pairwise-parallel key mismatch attacks on Kyber and other lattice-based KEMs. The strategy is to recover partial information about multiple pairs of secret-key coefficients in parallel with each query. We realize the required multi-value key mismatch oracle in a simple key exchange scenario and experimentally validate our proposed attacks. Our attacks greatly reduce the number of queries required to recover the full secret key. Specifically, compared with state-of-the-art key mismatch attacks on CPA-secure Kyber, our attacks reduce the number of queries by 95% with computational complexity $2^{32}$.
We then employ post-processing with lattice reduction to further minimize the number of queries. The results show that we need only 78 queries to recover the full secret key with a lattice reduction cost of $2^{32}$. Moreover, our proposed pairwise-parallel attack method can be directly applied to enhance the PC oracle-based SCA against CCA-secure Kyber, reducing the number of queries/traces by 16.67%.
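To illustrate the query-counting logic (not the actual ciphertext crafting), here is a schematic Python sketch in which the mismatch oracle is simulated as a comparison against the true coefficient; in a real attack this comparison is induced by a malformed ciphertext. The coefficient range $\{-2,\dots,2\}$ mirrors Kyber's centered binomial noise with $\eta=2$; everything else is a hypothetical abstraction.

```python
# Schematic sketch of a key mismatch attack's query logic. The crafting of
# malformed ciphertexts is abstracted away: the oracle below simulates
# whether the target's decapsulated key matches, which a real adversary
# induces with a carefully chosen ciphertext.

import random

random.seed(1)
secret = [random.randint(-2, 2) for _ in range(256)]  # toy secret, eta = 2
queries = 0

def mismatch_oracle(index, threshold):
    """Simulated oracle: modeled as the comparison s[index] >= threshold."""
    global queries
    queries += 1
    return secret[index] >= threshold

def recover_coefficient(index):
    """Binary search over the 5 candidate values using the oracle."""
    lo, hi = -2, 2
    while lo < hi:
        mid = (lo + hi + 1) // 2
        if mismatch_oracle(index, mid):
            lo = mid
        else:
            hi = mid - 1
    return lo

recovered = [recover_coefficient(i) for i in range(256)]
assert recovered == secret
print(f"recovered 256 coefficients with {queries} oracle queries")
```

The pairwise-parallel idea in the paper goes further: a multi-value oracle yields information about pairs of coefficients per query, which is what drives the reported query reductions.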
Reductions from module lattices to free module lattices, and application to dequantizing module-LLL
In this article, we give evidence that free modules (i.e., modules which admit a basis) are no weaker than arbitrary modules, when it comes to solving cryptographic algorithmic problems (and when the rank of the module is at least 2). More precisely, we show that for three algorithmic problems used in cryptography, namely the shortest vector problem, the Hermite shortest vector problem and a variant of the closest vector problem, there is a reduction from solving the problem in any module of rank $n \geq 2$ to solving the problem in any free module of the same rank $n$. As an application, we show that this can be used to dequantize the LLL algorithm for module lattices presented by Lee et al. (Asiacrypt 2019).
Vectorized and Parallel Computation of Large Smooth-Degree Isogenies using Precedence-Constrained Scheduling
Strategies and their evaluations play important roles in speeding up the computation of large smooth-degree isogenies. The concept of optimal strategies for such computation was introduced by De Feo et al., and virtually all implementations of isogeny-based protocols have adopted this approach, which is provably optimal for single-core platforms. In spite of its inherently sequential nature, several recent works have studied ways of speeding up this isogeny computation by exploiting the rich parallelism available in vectorized and multi-core platforms. One obstacle to taking full advantage of this parallelism, however, is that De Feo et al.'s strategies are not necessarily optimal in multi-core environments. To illustrate how the speed of vectorized and parallel isogeny computation can be improved at the strategy level, we present two novel software implementations that utilize a state-of-the-art evaluation technique, called precedence-constrained scheduling (PCS), presented by Phalakarn et al., with our proposed strategies crafted for these environments. Our first implementation relies only on the parallelism provided by multi-core processors. The second implementation targets multi-core processors supporting the latest generation of Intel's Advanced Vector eXtensions (AVX) technology, commonly known as AVX-512IFMA instructions. To better handle the computational concurrency associated with PCS, we equip both implementations with extensive synchronization techniques. Our first implementation outperforms the implementation of Cervantes-Vazquez et al., yielding up to a 14.36% reduction in execution time when targeting platforms with two- to four-core processors. Our second implementation, equipped with four cores, achieves up to a 34.05% reduction in execution time compared to the single-core implementation of Cheng et al. from CHES 2022.
Near Collision Attack Against Grain v1
A near collision attack against the Grain v1 stream cipher was proposed by Zhang et al. at Eurocrypt 2018. The attack uses the fact that two internal states of the stream cipher with a very low Hamming distance between them produce similar keystream sequences, which can be identified by simple statistical tests. Once found, such internal states simplify the task of cryptanalysis for the attacker. However, this attack has recently come under heavy criticism from Derbez et al. at ToSC 2020:4, who claim that some of the assumptions made in the above paper were not correct. As a result, they concluded that the attack presented by Zhang et al., when implemented, would take more time than a brute-force search. In this paper, we take another look at the near collision attack against the Grain v1 stream cipher. We avoid the techniques of the above Eurocrypt paper that have come under criticism, and independently show that a near collision attack can still be applied to Grain v1.
Prouff & Rivain’s Formal Security Proof of Masking, Revisited: Tight Bounds in the Noisy Leakage Model
Masking is a counter-measure that can be incorporated into software and hardware implementations of block ciphers to provably secure them against side-channel attacks. The security of masking can be proven in different types of threat models. In this paper, we are interested in directly proving security in the most realistic threat model, the so-called noisy leakage adversary, which captures well how real-world side-channel adversaries operate. Direct proofs in this leakage model have been established by Prouff & Rivain at Eurocrypt 2013, Dziembowski et al. at Eurocrypt 2015, and Prest et al. at Crypto 2019. These proofs are complementary to each other, in the sense that the weaknesses of one proof are fixed in at least one of the others, and conversely. These weaknesses concern in particular the strong requirements on the noise level and the security parameter needed to obtain meaningful security bounds, as well as requirements on the type of adversary covered by the proof, i.e., chosen or random plaintexts. This suggested that the drawbacks of each security bound could actually be proof artifacts. In this paper, we solve these issues by revisiting Prouff & Rivain's approach.
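As a reminder of what the counter-measure itself looks like, here is a minimal Python sketch of Boolean masking, assuming XOR shares of a byte; real masked implementations additionally process the shares through the cipher without ever recombining them, and the noisy leakage model assumes the adversary observes a noisy function of each share.

```python
# Minimal sketch of Boolean masking: a secret byte is split into d+1
# random shares whose XOR equals the secret, so any d of the shares are
# statistically independent of the secret.

import secrets

def xor_all(shares):
    acc = 0
    for s in shares:
        acc ^= s
    return acc

def mask(x, d):
    """Split byte x into d+1 shares (security order d)."""
    shares = [secrets.randbits(8) for _ in range(d)]
    shares.append(x ^ xor_all(shares))  # last share fixes the XOR to x
    return shares

shares = mask(0xA5, d=3)        # 4 shares, security order d = 3
assert xor_all(shares) == 0xA5  # recombination recovers the secret
```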
Expand-Convolute Codes for Pseudorandom Correlation Generators from LPN
The recent development of pseudorandom correlation generators (PCG) holds tremendous promise for highly efficient MPC protocols. Among other correlations, PCGs allow for the efficient generation of oblivious transfer (OT) and vector oblivious linear evaluations (VOLE) with sublinear communication and concretely good computational overhead. This type of PCG makes use of a so-called LPN-friendly error-correcting code. That is, for large dimensions the code should have very efficient encoding and have high minimum distance.
We investigate existing LPN-friendly codes and find that several candidates are less secure than was believed. Beginning with the recent expand-accumulate codes, we find that for their aggressive parameters, aimed at good concrete efficiency, they achieve a smaller [pseudo] minimum distance than conjectured. This decreases the resulting security parameter of the PCG, but it remains unclear by how much. We additionally show that the recently proposed and extremely efficient silver codes achieve only a very small minimum distance, resulting in concretely efficient attacks on the corresponding PCG protocol. As such, silver codes should not be used.
We introduce a new LPN-friendly code which we call \emph{expand-convolute}. These codes have provably high minimum distance and faster encoding time than suitable alternatives, e.g., expand-accumulate codes. The main contribution of these codes is the introduction of a convolution step that dramatically increases the minimum distance. This in turn allows for a more efficient parameter selection, which results in improved concrete performance. In particular, we observe a threefold improvement in running time for a comparable security level.
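The following toy Python sketch (with hypothetical toy parameters, not the paper's construction or analysis) illustrates the two steps that give the codes their name: a sparse "expand" matrix multiplication over GF(2), followed by a "convolute" step, i.e., a convolution with a short fixed binary kernel.

```python
# Toy GF(2) sketch of an expand-then-convolute encoder. Dimensions, row
# weight, and the convolution kernel are illustrative assumptions only.

import numpy as np

rng = np.random.default_rng(0)
k, n, row_weight = 32, 128, 5

# Expand: each output bit is the XOR of `row_weight` random message bits.
expand = np.zeros((n, k), dtype=np.uint8)
for row in expand:
    row[rng.choice(k, size=row_weight, replace=False)] = 1

taps = np.array([1, 1, 0, 1, 1], dtype=np.uint8)  # short convolution kernel

def encode(msg):
    expanded = expand @ msg % 2                 # sparse expansion over GF(2)
    return np.convolve(expanded, taps)[:n] % 2  # convolution step over GF(2)

codeword = encode(rng.integers(0, 2, size=k, dtype=np.uint8))
print(codeword[:16])
```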
Strict Linear Lookup Argument
Given a table $\mathfrak{t} \in \mathbb{F}^N$ and a commitment to a polynomial $f(X) \in \mathbb{F}_{<n}[X]$ over a multiplicative subgroup $\mathbb{H} \subset \mathbb{F}$, the lookup argument asserts that $f|_{\mathbb{H}} \subset \mathfrak{t}$. We present a new lookup argument protocol that achieves strictly linear prover complexity after a pre-processing step of cost $O(N \log N)$.
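Concretely, the relation being proven can be checked naively as follows; the prime field, subgroup, table, and polynomial below are toy choices for illustration, and the paper's contribution is proving this relation succinctly with strictly linear prover work.

```python
# Naive check of the lookup relation f|_H ⊂ t over a toy prime field.
# All parameters here are illustrative assumptions.

p = 97                                   # toy prime field F_97
H = [pow(33, i, p) for i in range(8)]    # 33 has multiplicative order 8 mod 97
f = lambda x: (3 * x + 5) % p            # toy "committed" polynomial
t = {(3 * h + 5) % p for h in H} | {7, 11}   # table containing all values of f on H

assert all(f(h) in t for h in H)         # the relation f|_H ⊂ t holds
```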
On Active Attack Detection in Messaging with Immediate Decryption
The widely used Signal protocol provides protection against state exposure attacks through forward security (protecting past messages) and post-compromise security (for restoring security). It supports immediate decryption, allowing messages to be re-ordered or dropped at the protocol level without affecting correctness. In this work, we consider strong active attack detection for secure messaging with immediate decryption, where parties are able to immediately detect active attacks under certain conditions. We first consider in-band active attack detection, where participants who have been actively compromised but are still able to send a single message to their partner can detect the compromise. We propose two complementary notions to capture security, and present a compiler that provides security with respect to both notions. Our notions generalise existing work (RECOVER security), which only supported in-order messaging. We also study the related out-of-band attack detection problem by considering communication over out-of-band, authenticated channels, and propose analogous security notions. We prove that one of our two notions in each setting imposes a linear communication overhead in the number of sent messages and the security parameter, using an information-theoretic argument. This implies that each message must information-theoretically contain all previous messages, and that our construction, which essentially attaches the entire message history to every new message, is asymptotically optimal. We then explore ways to bypass this lower bound and highlight the feasibility of practical active attack detection compatible with immediate decryption.
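A minimal Python sketch of the history-attaching construction described above might look as follows; encryption, authentication, and the immediate-decryption bookkeeping of the real protocol are omitted, so this only conveys why the communication overhead grows linearly in the number of sent messages.

```python
# Sketch (under heavy simplification): every message carries the sender's
# entire view of the transcript; the receiver flags an active attack as
# soon as the attached history is inconsistent with its own log.

class Party:
    def __init__(self):
        self.log = []                         # all messages seen so far

    def send(self, plaintext):
        packet = (plaintext, list(self.log))  # attach full history
        self.log.append(plaintext)
        return packet

    def receive(self, packet):
        plaintext, history = packet
        # Inconsistency on any position both logs cover => active attack.
        if any(a != b for a, b in zip(history, self.log)):
            raise RuntimeError("active attack detected")
        self.log = max(self.log, history, key=len) + [plaintext]

alice, bob = Party(), Party()
bob.receive(alice.send("hi"))
alice.receive(bob.send("hello"))

forged = ("pwned", ["tampered"])   # attacker injects an inconsistent history
try:
    bob.receive(forged)
except RuntimeError as e:
    print(e)                       # -> active attack detected
```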
On cubic-like bent Boolean functions
Cubic bent Boolean functions (i.e., bent functions of algebraic degree at most 3) have the property that, for every nonzero element $a$ of $\mathbb{F}_2^n$, the derivative $D_af(x)=f(x)+f(x+a)$ of $f$ admits at least one second-order derivative $D_bD_af(x)=f(x)+f(x+a)+f(x+b)+f(x+a+b)$ that is equal to the constant function 1. We study the general class of Boolean functions having this property, which we call cubic-like bent. We study the properties of such functions and the structure of their constant second-order derivatives. We characterize them by means of their Walsh transform (that is, by their duals), by the Walsh transform of their derivatives, and by other means. We study them within the Maiorana-McFarland class of bent functions, providing characterizations and constructions, and showing the existence of cubic-like bent functions of any algebraic degree between 2 and $\frac{n}{2}$.
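The defining property is easy to verify exhaustively on a small example. The Python sketch below checks it for the quadratic inner-product bent function $f(x)=x_0x_1+x_2x_3+x_4x_5$ on $\mathbb{F}_2^6$ (degree 2, hence within the cubic-like bent class); the choice of function and dimension is illustrative.

```python
# Brute-force check of the defining property: every nonzero a admits some b
# with the second-order derivative D_b D_a f identically equal to 1.

n = 6

def f(x):
    bits = [(x >> i) & 1 for i in range(n)]
    return (bits[0] & bits[1]) ^ (bits[2] & bits[3]) ^ (bits[4] & bits[5])

def second_derivative_is_one(a, b):
    return all(f(x) ^ f(x ^ a) ^ f(x ^ b) ^ f(x ^ a ^ b) == 1
               for x in range(1 << n))

assert all(any(second_derivative_is_one(a, b) for b in range(1 << n))
           for a in range(1, 1 << n))
print("every nonzero a has a b with D_b D_a f = 1")
```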
Introducing two Low-Latency Cipher Families: Sonic and SuperSonic
For many latency-critical operations in computer systems, like memory reads and writes, adding encryption can have a big impact on performance. Hence, the existence of cryptographic primitives with good security properties and minimal latency is a key element in the widespread deployment of such security measures. In this paper, we introduce two new families of low-latency permutations/block ciphers called Sonic and SuperSonic, inspired by the Simon block ciphers.
Public-Key Encryption with Quantum Keys
In the framework of Impagliazzo's five worlds, a distinction is often made between two worlds, one where public-key encryption exists (Cryptomania), and one in which only one-way functions exist (MiniCrypt). However, the boundaries between these worlds can change when quantum information is taken into account. Recent work has shown that quantum variants of oblivious transfer and multi-party computation, both primitives that are classically in Cryptomania, can be constructed from one-way functions, placing them in the realm of quantum MiniCrypt (the so-called MiniQCrypt). This naturally raises the following question: Is it possible to construct a quantum variant of public-key encryption, which is at the heart of Cryptomania, from one-way functions or potentially weaker assumptions?
In this work, we initiate the formal study of the notion of quantum public-key encryption (qPKE), i.e., public-key encryption where keys are allowed to be quantum states. We propose new definitions of security and several constructions of qPKE based on the existence of one-way functions (OWF), or even weaker assumptions, such as pseudorandom function-like states (PRFS) and pseudorandom function-like states with proof of destruction (PRFSPD). Finally, to give a tight characterization of this primitive, we show that computational assumptions are necessary to build quantum public-key encryption. That is, we give a self-contained proof that no quantum public-key encryption scheme can provide information-theoretic security.
Circular Multiplicative Modular Exponentiation: A New Public Key Exchange Algorithm
The major objective of this paper is to present a theoretical model for an algorithm that has not been previously described in the literature, capable of generating a secret key through the transmission of data over a public channel. We explain how the method creates a shared secret key by attaining commutativity through the multiplication of the modular exponentiations of a minimum of two bases and an equal number of private exponents for each party involved in the exchange. In addition, we explore the relationship between CMME and the traditional Diffie-Hellman scheme. We only briefly discuss the algorithm's security, leaving its essential investigation to cryptanalysts, while we elucidate the algorithm's mechanism by illustrating some cases.
The Power of Undirected Rewindings for Adaptive Security
Existing proofs of adaptive security (e.g., in settings in which decryption keys are adaptively revealed) often rely on guessing arguments. Such guessing arguments can be simple (and, e.g., just involve guessing which keys are revealed), or can involve more complex ``partitioning'' arguments. Since guessing directly and negatively impacts the loss of the corresponding security reduction, this leads to black-box lower bounds for a number of cryptographic scenarios that involve adaptive security.
In this work, we provide an alternative to such guessing arguments: instead of guessing in a security reduction which adaptive choices an adversary A makes, we rewind A many times until we can successfully embed a given computational challenge. The main benefit of using rewindings is that these rewindings can be arranged sequentially, and the corresponding reduction loss only accumulates additively (instead of multiplicatively, as with guessing). The main technical challenge is to show that A's success is not negatively affected after (potentially many) rewindings. To this end, we develop a machinery for "undirected'' rewindings that preserve A's success across (potentially many) rewindings.
We use this strategy to show
- security of the "Logical Key Hierarchy'' protocol underlying the popular TreeKEM key management protocol, and
- security of the Goldreich-Goldwasser-Micali (GGM) pseudorandom function (PRF) as a prefix-constrained PRF.
In both cases, we provide the first polynomial reductions to standard assumptions (i.e., to IND-CPA and PRG security, respectively), and in case of the GGM PRF, we also circumvent an existing lower bound.
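For orientation, a minimal Python sketch of the GGM construction and its prefix-constrained keys is given below; the SHA-256-based length-doubling PRG is purely an illustrative stand-in for the abstract PRG assumed by the security proof.

```python
# Sketch of the GGM PRF: a length-doubling PRG defines a binary tree whose
# leaves are PRF outputs, and the node value for an input prefix acts as a
# constrained key for all inputs extending that prefix.

import hashlib

def prg(seed):
    """Length-doubling PRG stand-in: 32-byte seed -> two 32-byte halves."""
    left = hashlib.sha256(seed + b"\x00").digest()
    right = hashlib.sha256(seed + b"\x01").digest()
    return left, right

def ggm_eval(key, bits):
    """Walk the GGM tree: one PRG call per input bit."""
    node = key
    for bit in bits:
        node = prg(node)[bit]
    return node

key = b"\x42" * 32
x = [0, 1, 1, 0]

# The constrained key for prefix [0, 1] evaluates every input extending
# the prefix, but reveals nothing about the other subtrees.
constrained = ggm_eval(key, [0, 1])
assert ggm_eval(constrained, [1, 0]) == ggm_eval(key, x)
```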
Distributed Broadcast Encryption from Bilinear Groups
Distributed broadcast encryption (DBE) improves on the traditional notion of broadcast encryption by eliminating the key-escrow problem: in a DBE system, users generate their own secret keys non-interactively, without the help of a trusted party. Then anyone can broadcast a message for a subset $S$ of the users, in such a way that the resulting ciphertext size is sublinear in (and, ideally, independent of) $|S|$. Unfortunately, the only known constructions of DBE require heavy cryptographic machinery, such as general-purpose indistinguishability obfuscation, or come without a security proof.
In this work, we formally show that obfuscation is not necessary for DBE, and we present two practical DBE schemes from standard assumptions in prime-order bilinear groups. Our constructions are conceptually simple, satisfy the strong notion of adaptive security, and are concretely efficient. In fact, their performance, in terms of the number of group elements and the efficiency of the algorithms, is comparable with that of traditional (non-distributed) broadcast encryption schemes from bilinear groups.
Digital signature schemes using non-square matrices or scrap automorphisms
We offer two very transparent digital signature schemes: one using non-square matrices and the other using scrap automorphisms. The former can be easily converted to a public key encryption scheme.