All papers in 2023 (Page 16 of 1971 results)
Compact Aggregate Signature from Module-Lattices
We propose the first aggregate signature scheme such that: (1) its security is based on the standard lattice assumptions in the random oracle model; (2) the aggregate signature size is logarithmic; (3) it is not one-time; and (4) it supports non-interactive aggregation. To obtain such a scheme, we combine the most compact SNARK (Succinct Non-interactive ARgument of Knowledge) system and a SNARK-friendly signature scheme. As a result, our aggregated signature size is sufficiently compact. For example, the size required to aggregate $2^{20}$ signatures is only a few hundred kilobytes. This result shows that our scheme is superior to the existing lattice-based schemes in compressing many signatures.
GeT a CAKE: Generic Transformations from Key Encapsulation Mechanisms to Password Authenticated Key Exchanges
Password Authenticated Key Exchange (PAKE) protocols have become a key building block in many security products as they provide interesting efficiency/security trade-offs. Indeed, a PAKE allows one to dispense with heavy public key infrastructures, and its efficiency and portability make it well suited for applications such as the Internet of Things or e-passports.
With the emerging quantum threat and the effervescent development of post-quantum public key algorithms in the last five years, one may wonder how to modify existing password authenticated key exchange protocols, which currently rely on Diffie-Hellman problems, in order to include newly introduced and soon-to-be-standardized post-quantum key encapsulation mechanisms (KEMs).
A generic solution is desirable for maintaining modularity and adaptability with the many post-quantum KEMs that have been introduced.
In this paper, we propose two new generic and natural constructions proven in the Universal Composability (UC) model to transform, in a black-box manner, a KEM into a PAKE with very limited performance overhead: one or two extra symmetric encryptions. Behind the simplicity of the designs, establishing security proofs in the UC model is actually non-trivial and requires some additional properties on the underlying KEM like fuzziness and anonymity. Luckily, post-quantum KEM protocols often enjoy these two extra properties. As a demonstration, we prove that it is possible to apply our transformations to Crystals-Kyber, a lattice-based post-quantum KEM that will soon be standardized by the National Institute of Standards and Technology (NIST).
In a nutshell, this work opens up the possibility to securely include post-quantum cryptography in PAKE-based real-world protocols.
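To make the transform concrete, here is a minimal Python sketch of the two-encryption flavour. A toy Diffie-Hellman KEM stands in for Kyber, and a password-keyed pad stands in for the ideal-cipher encryption that the proofs rely on; every name and parameter below is illustrative, not taken from the paper.

```python
import hashlib
import secrets

# Toy DH-based KEM over a Mersenne prime (insecure; a stand-in for Kyber).
P = 2**127 - 1
G = 3

def kem_keygen():
    sk = secrets.randbelow(P - 2) + 1
    return pow(G, sk, P), sk

def kem_encap(pk):
    r = secrets.randbelow(P - 2) + 1
    shared = pow(pk, r, P)
    return pow(G, r, P), hashlib.sha256(shared.to_bytes(16, "big")).digest()

def kem_decap(sk, ct):
    shared = pow(ct, sk, P)
    return hashlib.sha256(shared.to_bytes(16, "big")).digest()

def pw_xor(pw, label, value):
    # Password-keyed one-time pad over 127-bit values; a stand-in for the
    # ideal-cipher encryption the generic transform wraps around KEM messages.
    pad = hashlib.sha512((pw + "|" + label).encode()).digest()
    return value ^ (int.from_bytes(pad, "big") >> (512 - 127))

# --- one PAKE run between two parties sharing the password pw ---
pw = "correct horse battery staple"

pk, sk = kem_keygen()
msg1 = pw_xor(pw, "pk", pk)                       # initiator -> responder

ct, key_bob = kem_encap(pw_xor(pw, "pk", msg1))   # responder recovers pk, encapsulates
msg2 = pw_xor(pw, "ct", ct)                       # responder -> initiator

key_alice = kem_decap(sk, pw_xor(pw, "ct", msg2))

def session_key(k):
    # Bind the session key to the full transcript.
    return hashlib.sha256(k + msg1.to_bytes(16, "big") + msg2.to_bytes(16, "big")).hexdigest()

assert session_key(key_alice) == session_key(key_bob)
print("shared session key:", session_key(key_alice)[:16], "...")
```

An eavesdropper who guesses the password wrong decrypts msg1 to a useless group element, which is where the fuzziness and anonymity requirements on the underlying KEM enter the security proof.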
Four Attacks and a Proof for Telegram
We study the use of symmetric cryptography in the MTProto 2.0 protocol, Telegram's equivalent of the TLS record protocol. We give both positive and negative results. On the one hand, we formally and in detail model a slight variant of Telegram's "record protocol" and prove that it achieves security in a suitable bidirectional secure channel model, albeit under unstudied assumptions; this model itself advances the state of the art for secure channels. On the other hand, we first motivate our modelling deviation from MTProto as deployed by giving two attacks – one of practical, one of theoretical interest – against MTProto without our modifications. We then also give a third attack exploiting timing side channels, of varying strength, in three official Telegram clients. On its own, this attack is thwarted by the secrecy of the salt and id fields that are established by Telegram's key exchange protocol. We chain the third attack with a fourth one against the implementation of the key exchange protocol on Telegram's servers. This fourth attack breaks the authentication properties of Telegram's key exchange, allowing a MitM attack. More mundanely, it also recovers the id field, reducing the cost of the plaintext recovery attack to guessing the 64-bit salt field. Taken together, our results provide the first comprehensive study of MTProto's use of symmetric cryptography and highlight weaknesses in its key exchange.
A new approach on IoT security: n-out-of-n
Internet of Things (IoT) has become an established part of our daily lives by interconnecting billions of devices in diverse areas such as health care, smart home technologies, agriculture, etc. However, IoT devices are limited in memory, energy and computational capabilities. This creates a great potential for security issues, since these constraints prevent producers from implementing mostly complex cryptographic algorithms in IoT devices. In this study, we propose a novel method to provide low-cost and secure communication for constrained IoT devices. The proposed method is based on an $n$-out-of-$n$ secret sharing scheme and mimics the idea of visual cryptography in a digital setup. Whenever an IoT device communicates with an outer party, it establishes the communication by itself or through an intermediary such as a central hub or gateway, where the latter often leads to a single point of failure. Our proposed method aims for a distributed environment in which IoT devices within a secure network collaborate with each other in order to send a message to a master device over an insecure channel.
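As a reference point for the abstract's main ingredient, here is the textbook $n$-out-of-$n$ XOR secret sharing scheme, the digital analogue of visual cryptography, in a minimal Python sketch (not the paper's exact protocol):

```python
import secrets
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def share(message: bytes, n: int) -> list[bytes]:
    # First n-1 shares are uniformly random; the last share is chosen so
    # that the XOR of all n shares equals the message.
    shares = [secrets.token_bytes(len(message)) for _ in range(n - 1)]
    shares.append(reduce(xor_bytes, shares, message))
    return shares

def reconstruct(shares: list[bytes]) -> bytes:
    return reduce(xor_bytes, shares)

msg = b"sensor reading: 23.5C"
parts = share(msg, 4)
assert reconstruct(parts) == msg
# Any n-1 shares are jointly uniform and reveal nothing about msg
# (so the check below succeeds except with negligible probability).
assert reconstruct(parts[:3]) != msg
```

Only XOR operations are needed, which is what makes such schemes attractive for devices too constrained to run complex cryptographic algorithms.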
Secure Floating-Point Training
Secure 2-party computation (2PC) of floating-point arithmetic is improving in performance, and recent work runs deep learning algorithms with it while being as numerically precise as commonly used machine learning (ML) frameworks like PyTorch. We find that existing 2PC libraries for floating-point arithmetic support generic computations but lack specialized support for ML training. Hence, their latency and communication costs for compound operations (e.g., dot products) are high. We provide novel specialized 2PC protocols for compound operations and prove their precision using numerical analysis. Our implementation BEACON outperforms state-of-the-art libraries for 2PC of floating-point arithmetic by over $6\times$.
Don't be Dense: Efficient Keyword PIR for Sparse Databases
In this paper, we introduce $\mathsf{SparsePIR}$, a single-server keyword private information retrieval (PIR) construction that enables querying over sparse databases. At its core, $\mathsf{SparsePIR}$ is based on a novel encoding algorithm that encodes sparse database entries as linear combinations while remaining compatible with important PIR optimizations including recursion. $\mathsf{SparsePIR}$ achieves response overhead that is half that of state-of-the-art keyword PIR schemes, without requiring long-term client storage of linear-sized mappings. We also introduce two variants, $\mathsf{SparsePIR}^g$ and $\mathsf{SparsePIR}^c$, that further reduce the size of the serving database at the cost of increased encoding time and small additional client storage, respectively. Our frameworks enable performing keyword PIR with, essentially, the same costs as standard PIR. Finally, we also show that $\mathsf{SparsePIR}$ may be used to build batch keyword PIR with halved response overhead without any client mappings.
RPU: The Ring Processing Unit
Ring-Learning-with-Errors (RLWE) has emerged as the foundation of many important techniques for improving security and privacy, including homomorphic encryption and post-quantum cryptography. While promising, these techniques have received limited use due to their extreme overheads when run on general-purpose machines. In this paper, we present a novel vector Instruction Set Architecture (ISA) and microarchitecture for accelerating the ring-based computations of RLWE. The ISA, named B512, is developed to meet the needs of ring processing workloads while balancing high performance and general-purpose programming support. Having an ISA rather than fixed hardware facilitates continued software improvement post-fabrication and the ability to support evolving workloads. We then propose the ring processing unit (RPU), a high-performance, modular implementation of B512. The RPU has native large-word modular arithmetic support, capabilities for very wide parallel processing, and a large-capacity high-bandwidth scratchpad to meet the needs of ring processing. We address the challenges of programming the RPU using a newly developed SPIRAL backend. A configurable simulator is built to characterize design tradeoffs and quantify performance. The best performing design was implemented in RTL and used to validate simulator performance. In addition to our characterization, we show that an RPU using 20.5mm² of GF 12nm can provide a speedup of 1485x over a CPU running a 64k, 128-bit NTT, a core RLWE workload.
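For readers unfamiliar with the workload, the following toy Python reference shows the number-theoretic transform at the heart of RLWE computations. The RPU targets far larger instances (e.g., the 64k-point, 128-bit NTT benchmarked above); these tiny parameters are only illustrative, and the schoolbook $O(N^2)$ version is the spec that any fast butterfly implementation must match.

```python
Q = 257                       # prime with N | Q - 1
N = 8
W = pow(3, (Q - 1) // N, Q)   # primitive N-th root of unity mod Q

def ntt(a):
    # Schoolbook forward transform: a_hat[i] = sum_j a[j] * W^(i*j) mod Q.
    return [sum(a[j] * pow(W, i * j, Q) for j in range(N)) % Q
            for i in range(N)]

def intt(b):
    # Inverse transform using W^-1 and N^-1 (via Fermat's little theorem).
    w_inv, n_inv = pow(W, Q - 2, Q), pow(N, Q - 2, Q)
    return [n_inv * sum(b[j] * pow(w_inv, i * j, Q) for j in range(N)) % Q
            for i in range(N)]

x = [3, 1, 4, 1, 5, 9, 2, 6]
assert intt(ntt(x)) == x
# In the NTT domain, coefficient-wise multiplication realises cyclic
# polynomial convolution, the workhorse of RLWE-based HE and PQC.
```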
A Generic Construction of an Anonymous Reputation System and Instantiations from Lattices
With an anonymous reputation system one can realize the process of rating sellers anonymously in an online shop.
While raters can stay anonymous, sellers still have the guarantee that they can only be reviewed by raters who bought their product.
We present the first generic construction of a reputation system from basic building blocks, namely digital signatures, encryption schemes, non-interactive zero-knowledge proofs, and linking indistinguishable tags.
We then show the security of the reputation system in a strong security model.
Among others, we instantiate the generic construction with building blocks based on lattice problems, leading to the first module lattice-based reputation system in the random oracle model.
Simplex Consensus: A Simple and Fast Consensus Protocol
We present a theoretical framework for analyzing the efficiency of consensus protocols, and apply it to analyze the optimistic and pessimistic confirmation times of state-of-the-art partially-synchronous protocols in the so-called "rotating leader/random leader" model of consensus (recently popularized in the blockchain setting).
We next present a new and simple consensus protocol in the partially synchronous setting, tolerating $f < n/3$ byzantine faults; in our eyes, this protocol is essentially as simple to describe as the simplest known protocols, but it also enjoys an even simpler security proof, while matching, and even improving, the efficiency of the state of the art (according to our theoretical framework).
As with the state-of-the-art protocols, our protocol assumes a (bare) PKI, a digital signature scheme, collision-resistant hash functions, and a random leader election oracle, which may be instantiated with a random oracle (or a CRS).
Overdrive LowGear 2.0: Reduced-Bandwidth MPC without Sacrifice
Some of the most efficient protocols for Multi-Party Computation (MPC) follow a two-phase approach where correlated randomness, in particular Beaver triples, is generated in the offline phase and then used to speed up the online phase. Recently, more complex correlations have been introduced to optimize certain operations even further, such as matrix triples for matrix multiplications. In this paper, our goal is to improve the efficiency of the triple generation in general and in particular for classical field values as well as matrix operations. To this end, we modify the Overdrive LowGear protocol to remove the costly sacrificing step and therewith reduce the round complexity and the bandwidth. We extend the state-of-the-art MP-SPDZ implementation with our new protocols and show that the new offline phase outperforms state-of-the-art protocols for the generation of Beaver triples and matrix triples. For example, we save 33% in bandwidth compared to Overdrive LowGear.
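For context, the following Python sketch shows how a Beaver triple generated in the offline phase is consumed in the online phase to multiply two additively secret-shared values. The modulus and the two-party dealer-based sharing are illustrative simplifications, not the SPDZ-family setting of the paper.

```python
import secrets

Q = 2**61 - 1   # prime modulus for additive secret sharing (toy choice)

def share(x):
    r = secrets.randbelow(Q)
    return (r, (x - r) % Q)

def reveal(s0, s1):
    return (s0 + s1) % Q

# Offline phase: a Beaver triple a*b = c, here produced by a dealer; in
# LowGear-style protocols it is generated with homomorphic encryption.
a, b = secrets.randbelow(Q), secrets.randbelow(Q)
a_sh, b_sh, c_sh = share(a), share(b), share(a * b % Q)

# Online phase: multiply secret-shared x and y at the cost of opening
# two masked values, d = x - a and e = y - b (they leak nothing about x, y).
x, y = 12345, 67890
x_sh, y_sh = share(x), share(y)
d = reveal(*[(x_sh[i] - a_sh[i]) % Q for i in range(2)])
e = reveal(*[(y_sh[i] - b_sh[i]) % Q for i in range(2)])

# Since x*y = c + e*a + d*b + d*e, each party computes its share locally
# (the public constant d*e is added by exactly one party).
z_sh = [(c_sh[i] + e * a_sh[i] + d * b_sh[i] + (d * e if i == 0 else 0)) % Q
        for i in range(2)]
assert reveal(*z_sh) == (x * y) % Q
```

The "sacrificing" step that the paper removes is a consistency check that conventionally burns one triple to authenticate another; avoiding it directly reduces bandwidth and rounds in the offline phase.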
Deep Learning based Differential Classifier of PRIDE and RC5
Deep learning-based cryptanalysis is one of the emerging trends in recent times. Differential cryptanalysis is one of the most potent approaches to classical cryptanalysis. Researchers are now modeling classical differential cryptanalysis by applying deep learning-based techniques. In this paper, we report deep learning-based differential distinguishers for the block ciphers PRIDE and RC5, utilizing deep learning models: CNN, LGBM and LSTM. We found distinguishers up to 23 rounds for PRIDE and nine rounds for RC5. To the best of our knowledge, this is the first deep learning-based differential classifier for the ciphers PRIDE and RC5.
A unified construction of weightwise perfectly balanced Boolean functions
At Eurocrypt 2016, Méaux et al. presented FLIP, a new family of stream ciphers that aimed to enhance the efficiency of homomorphic encryption frameworks. Motivated by FLIP, recent research has focused on the study of Boolean functions with good cryptographic properties when restricted to subsets of the space $\mathbb{F}_2^n$. If an $n$-variable Boolean function is balanced when restricted to each set of vectors with fixed Hamming weight between $1$ and $n-1$, it is a weightwise perfectly balanced (WPB) Boolean function. In the literature, a few algebraic constructions of WPB functions are known, some of which use an iterative method based on functions of low algebraic degree, namely 1, 2, or 4. In this paper, we generalize the iterative method and contribute a unified construction of WPB functions based on functions whose algebraic degrees can be any power of 2. For any given positive integer $d$ not larger than $m$, we first provide a class of $2^m$-variable Boolean functions with degree $2^{d-1}$. Utilizing these functions, we then present a construction of $2^m$-variable WPB functions $g_{m;d}$. In particular, $g_{m;d}$ includes four former classes of WPB functions as special cases when $d=1,2,3,m$. When $d$ takes other integer values, $g_{m;d}$ has never appeared before. In addition, we prove the algebraic degree of the constructed WPB functions and compare the weightwise nonlinearity of WPB functions known so far in 8 and 16 variables.
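The WPB property defined above is easy to state executably. The following Python sketch checks weightwise balancedness and exhaustively finds a WPB function in the smallest case $n = 2^m = 4$ (with the usual convention on the all-zero and all-one inputs); it is a restatement of the definition, not the paper's construction.

```python
def is_wpb(f, n):
    # f must take the value 1 on exactly half of the inputs of every
    # fixed Hamming weight k, for 1 <= k <= n - 1.
    for k in range(1, n):
        slice_inputs = [x for x in range(2 ** n) if bin(x).count("1") == k]
        if 2 * sum(f(x) for x in slice_inputs) != len(slice_inputs):
            return False
    return True

# Exhaustive search over all 2^16 truth tables finds WPB functions in the
# smallest case n = 4 (conventionally with f(0) = 0 and f(1...1) = 1).
n = 4
for table in range(2 ** (2 ** n)):
    f = lambda x, t=table: (t >> x) & 1
    if f(0) == 0 and f(2 ** n - 1) == 1 and is_wpb(f, n):
        print("WPB truth table:", format(table, "016b"))
        break
```

Note that WPB functions require $n = 2^m$ precisely so that every slice of weight $1 \le k \le n-1$ has even size $\binom{n}{k}$ and can be split in half.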
SCMA: Plaintext Classification Assisted Side Channel Spectral Modulation Attacks. Towards Noise-insensitive SCA Attacks...
Side-channel analysis (SCA) attacks pose a significant challenge to the security of cryptographic devices. At the same time, protecting against SCA is generally quite expensive (energy, area, performance, etc.). In this work we exhibit a significant paradigm shift for SCA attacks: our proposed attack is quite different from conventional SCA attacks, is able to filter out physical measurement noise and algorithmic noise, thwarts various countermeasures, and extracts information from the entire leakage waveform as a whole rather than only from points of interest. On measured devices we demonstrate breaks of masking schemes of orders 2 and 3 (supported by a model), as well as of modeled shuffling and dual-rail based countermeasures; all performed efficiently with the same methodology, with orders of magnitude fewer measurements and smaller computation time, underscoring the importance of this form of attack.
In essence, our attack assumes nothing beyond a standard side-channel attack, i.e., a known-plaintext scenario. However, we further group and classify leakages associated with specific subsets of plaintext bits. Grouping the leakages associated with specific (sub-)plaintexts, and then concatenating the leakages of these large groups in a predefined ordered sequence (modulation), enables far stronger attacks against both SCA-protected and unprotected designs. The evaluation domain, or modulation domain, is the frequency domain, in which it is possible to build a two-feature constellation diagram (amplitude and phase) per frequency and construct distinguishers over these diagrams.
On top of the methodological contribution of this new SCA, the main observation we push forward is that, in practice, such an attack is devastating for many countermeasures we used to consider secure to some level, such as masking or shuffling with a large permutation size. As an example, leakage from a third-order masked design can be detected with merely 100 leakage traces from the first statistical moment of the leakage, as compared to $15\cdot10^6$ traces with a conventional SCA leakage detection test at the third statistical order.
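On synthetic data, the classify-group-concatenate-transform pipeline described above can be rendered as the following Python sketch; the leakage model, grouping choice and parameters are invented for illustration, and the real attack differs in detail.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setting: 4000 traces of 100 samples; sample 40 leaks the
# Hamming weight of one plaintext byte under heavy Gaussian noise.
plaintexts = rng.integers(0, 256, size=4000)
hw = np.array([bin(p).count("1") for p in plaintexts])
traces = rng.normal(size=(4000, 100))
traces[:, 40] += 0.1 * hw

# Classification step: group traces by one plaintext bit and average
# within each class, suppressing noise.
bit = plaintexts & 1
means = [traces[bit == b].mean(axis=0) for b in (0, 1)]

# Modulation step: concatenate the class means in a predefined order,
# then evaluate in the frequency domain; each frequency yields a
# two-feature (amplitude, phase) constellation point over which
# distinguishers can be built.
waveform = np.concatenate(means)
spectrum = np.fft.rfft(waveform)
amplitude, phase = np.abs(spectrum), np.angle(spectrum)
print(amplitude[:4], phase[:4])
```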
Non-interactive Universal Arguments
In 2002, Barak and Goldreich introduced the notion of a universal argument and constructed an interactive universal argument for non-deterministic computations based on polynomially hard collision-resistant hash functions. Since then, and especially in recent years, there have been tremendous developments in the construction of non-interactive succinct arguments for deterministic computations under standard hardness assumptions. However, the constructed succinct arguments can be proven universal only under sub-exponential assumptions.
Assuming polynomially hard fully homomorphic encryption and a widely believed worst-case complexity assumption, we prove a general lifting theorem showing that all existing non-interactive succinct arguments can be made universal. The required complexity assumption is that non-uniformity does not allow arbitrary polynomial speedup. In the setting of uniform adversaries, this extra assumption is not needed.
Registered FE beyond Predicates: (Attribute-Based) Linear Functions and more
This paper introduces the first registered functional encryption (RFE) scheme tailored for linear functions. Distinctly different from classical functional encryption (FE), RFE addresses the key-escrow issue and negates the master-key exfiltration attack. Instead of relying on a centralized trusted authority, it introduces a “key curator”, a fully transparent entity that retains no secrets. In an RFE framework, users independently generate secret keys and subsequently register their respective public keys, along with their authorized functions, with the key curator. This curator consolidates public keys from various users into a unified, concise master public key. For decryption, users occasionally obtain helper decryption keys from the key curator, which they use in conjunction with their private keys. It is imperative that the aggregate public key, helper decryption keys, ciphertexts, and the times for encryption/decryption be polylogarithmic in the number of registered users.
All existing RFE designs were confined to predicates, where, given the correct credentials, a user can retrieve the entire payload from a ciphertext or otherwise gain no information about it. In contrast, our RFE scheme facilitates the computation of linear functions on encrypted content and the extraction of only the computation results. Recognizing potential leakage from linear functions, we further enhance our RFE by incorporating an attribute-based access control mechanism. The outcome is the first registered attribute-based linear FE (RABIPFE), which supports access policies depicted as linear secret sharing schemes (LSSS). Our proposed schemes are realized in the common reference string (CRS) model introduced by Hohenberger et al. [EUROCRYPT 2023] and employ simple tools and black-box methods. Specifically, our constructions operate in the asymmetric prime-order bilinear group setting and are proven secure in the generic bilinear group model. In line with all pre-existing black-box RFE designs in the CRS model, our schemes cater to a predetermined maximum number of users. A notable variant of our RABIPFE scheme also yields the first efficient registered ABE (RABE) system for LSSS access policies in asymmetric prime-order bilinear groups. Finally, demonstrating feasibility, we formulate an RFE blueprint that supports general functionalities and an unbounded number of users, leveraging indistinguishability obfuscation and one-way functions.
Generalised Asynchronous Remote Key Generation for Pairing-based Cryptosystems
Asynchronous Remote Key Generation (ARKG, introduced in ACM CCS 2020) allows a party to create public keys for which the corresponding private keys may later be computed only by another, intended party. ARKG can be composed with standard public-key cryptosystems and has been used to construct a new class of privacy-preserving proxy signatures. The original construction of ARKG, however, generates discrete logarithm key pairs of the form $(x, g^x)$.
In this paper we define a generic approach for building ARKG schemes which can be applied to a wide range of pairing-based cryptosystems. This construction is based on a new building block which we introduce and call Asymmetric Key Generation (AKG) along with its extension $\phi$-AKG where $\phi$ is a suitable mapping for capturing different key structures and types of pairings. We show that appropriate choice of $\phi$ allows us to create a secure ARKG scheme compatible with any key pair that is secure under the Uber assumption (EUROCRYPT 2004).
To demonstrate the extensive range of our general approach, we construct ARKG schemes for a number of popular pairing-based primitives: Boneh-Lynn-Shacham (JoC 2004), Camenisch-Lysyanskaya (CRYPTO 2004), Pointcheval-Sanders (CT-RSA 2016), Waters (EUROCRYPT 2005) signatures and structure-preserving signatures on equivalence classes (ASIACRYPT 2014). For each scheme we give an implementation and provide benchmarks that show the feasibility of our techniques.
Tri-State Circuits: A Circuit Model that Captures RAM
We introduce tri-state circuits (TSCs). TSCs form a natural model of computation that, to our knowledge, has not been considered by theorists. The model captures a surprising combination of simplicity and power. TSCs are simple in that they allow only three wire values ($0,1,$ and undefined - $\mathcal{Z}$) and three types of fan-in two gates; they are powerful in that their statically placed gates fire (execute) eagerly as their inputs become defined, implying orders of execution that depend on input. This behavior is sufficient to efficiently evaluate RAM programs.
We construct a TSC that emulates $T$ steps of any RAM program and that has only $O(T \cdot \log^3 T \cdot \log \log T)$ gates. Contrast this with the reduction from RAM to Boolean circuits, where the best approach scans all of memory on each access, incurring quadratic cost.
We connect TSCs with cryptography by using them to improve Yao's Garbled Circuit (GC) technique. TSCs capture the power of garbling far better than Boolean Circuits, offering a more expressive model of computation that leaves per-gate cost essentially unchanged.
As an important application, we construct authenticated Garbled RAM (GRAM), enabling constant-round maliciously-secure 2PC of RAM programs. Let $\lambda$ denote the security parameter. We extend authenticated garbling to TSCs; by simply plugging in our TSC-based RAM, we obtain authenticated GRAM running at cost $O(T \cdot \log^3 T \cdot \log \log T \cdot \lambda)$, outperforming all prior work, including prior semi-honest GRAM.
We also give semi-honest garbling of TSCs from a one-way function (OWF). This yields OWF-based GRAM at cost $O(T \cdot \log^3 T \cdot \log \log T \cdot \lambda)$, outperforming the best prior OWF-based GRAM by more than factor $\lambda$.
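To illustrate the model, here is a small Python simulation of eager tri-state evaluation. The exact gate set of the paper is not reproduced here; the XOR/buffer/join semantics below are an assumption, chosen only to show how statically placed gates fire in an input-dependent order, which is what lets TSCs express RAM-style routing.

```python
Z = None  # the undefined wire value

def xor(a, b):
    # Defined only once both inputs are defined.
    return (a ^ b) if a is not Z and b is not Z else Z

def buf(data, ctrl):
    # Tri-state buffer: drives its data input through only once ctrl = 1.
    return data if ctrl == 1 else Z

def join(a, b):
    # Merges two wires of which at most one is ever driven.
    return a if a is not Z else b

def run(gates, wires):
    # Eager evaluation: repeatedly fire any gate whose output becomes
    # defined; the firing order depends on the inputs, not the netlist.
    changed = True
    while changed:
        changed = False
        for out, op, i, j in gates:
            if wires[out] is Z:
                v = op(wires[i], wires[j])
                if v is not Z:
                    wires[out] = v
                    changed = True
    return wires

# A 2-entry "memory read": selector s routes exactly one stored bit to r.
wires = {"m0": 1, "m1": 0, "s": 1, "ns": 0, "a": Z, "b": Z, "r": Z}
gates = [("a", buf, "m0", "ns"),   # fires only if s == 0
         ("b", buf, "m1", "s"),    # fires only if s == 1
         ("r", join, "a", "b")]    # whichever branch fired
print(run(gates, wires)["r"])      # -> 0 (the contents of m1)
```

Only the selected branch ever fires, so the circuit pays for one access path rather than scanning all of memory, which is the intuition behind the quasi-linear RAM emulation.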
Wireless-channel Key Exchange
Wireless-channel key exchange (WiKE) protocols that leverage Physical Layer Security (PLS) techniques could become an alternative for establishing secure communication in settings such as vehicular ad-hoc networks, wireless IoT networks, or cross-layer protocols.
In this paper, we provide a novel abstraction of WiKE protocols and present the first game-based security model for WiKE. Our result enables the analysis of security guarantees offered by these cross-layer protocols and allows the study of WiKE's compositional aspects. Further, we address the potential problem of the slow-rate secret-key generation in WiKE due to inadequate environmental conditions that might render WiKE protocols impractical or undesirably slow. We explore a solution to such a problem by bootstrapping a low-entropy key coming as the output of WiKE using a Password Authenticated Key Exchange (PAKE). On top of the new security definition for WiKE and those which are well-established for PAKE, we build a compositional WiKE-then-PAKE model and define the minimum security requirements for the safe sequential composition of the two primitives in a black-box manner. Finally, we show the pitfalls of previous ad-hoc attempts to combine WiKE and PAKE.
An algebraic attack for forging signatures of MPPK/DS
We give an algebraic attack to forge signatures of the scheme MPPK/DS, which can be achieved by solving a linear system in 5 variables with coefficients in $\mathbb{Z}/2^x q\mathbb{Z}$ for some odd prime $q$ and $x \geq 1$.
Piano: Extremely Simple, Single-Server PIR with Sublinear Server Computation
We construct a sublinear-time single-server pre-processing Private Information Retrieval (PIR) scheme with optimal client storage and server computation (up to poly-logarithmic factors), relying only on the existence of one-way functions (OWFs). Our scheme achieves amortized $\tilde{O}(\sqrt{n})$ online server computation and client computation and $O(\sqrt{n})$ online communication per query, and requires $\widetilde{O}_\lambda(\sqrt{n})$ client storage. Unlike prior single-server PIR schemes that rely on heavy cryptographic machinery such as homomorphic encryption, our scheme only utilizes lightweight cryptography such as PRFs, which are easily instantiated in practice. To our knowledge, this is the first practical implementation of a single-server sublinear-time PIR scheme.
Compared to existing linear-time single-server solutions, our schemes are faster by $40$-$900\times$ and are comparable to the fastest two-server schemes. In particular, for a 100GB database of 1.6 billion entries, our experiments show that our scheme has 12ms online computation time on a single core.
Non-interactive VSS using Class Groups and Application to DKG
We put forward a non-interactive verifiable secret sharing (NI-VSS) scheme using class groups, which we call cgVSS. Our construction follows the standard framework of encrypting the shares to a set of recipients and generating a non-interactive proof of correct sharing. However, as opposed to prior works, such as Groth's [Eprint 2021] or Gentry et al.'s [Eurocrypt 2022], we do not require any range proof. This is possible due to the unique structure of class groups, which enables efficient encryption/decryption of large field elements in the exponent of an ElGamal-style encryption scheme. Importantly, this is possible without destroying the additively homomorphic structure, which is required to make the proof of correctness highly efficient. This approach not only substantially simplifies the NI-VSS process, but also significantly outperforms the state-of-the-art schemes. For example, our implementation shows that for a 150-node system cgVSS outperforms (a simplified implementation of) Groth's protocol in overall communication complexity by 5.6x, by about 9.3-9.7x in dealer time, and by 2.4-2.7x in receiver time per node.
Additionally, we formalize the notion of public verifiability, which enables anyone, possibly outside the participants, to verify the correctness of the dealing. In fact, we re-interpret the notion of public verifiability and extend it to the setting where potentially all recipients may be corrupt and yet cannot defy public verifiability; to distinguish this from the state of the art, we call it strong public verifiability. Our formalization uses the universal composability framework.
Finally, through a generic transformation, we obtain a non-interactive distributed key generation (NI-DKG) scheme for threshold systems, where the secret key is the discrete log of the public key. Our security analysis in the VSS-hybrid model uses a formalization that considers a (strong) public verifiability notion for DKG, even when more than a threshold of parties are corrupt. Instantiating with cgVSS we obtain an NI-DKG scheme from class groups, which we call cgDKG.
Unlocking doors from half a continent away: A relay attack against HID Seos
HID Global is a major vendor of physical access control systems. In 2012, it introduced Seos, its newest and most secure contactless RFID credential technology, successfully remediating known flaws in its predecessors iCLASS and Prox. Seos has been widely deployed to secure sensitive assets and facilities. To date, no published research has demonstrated a security flaw in Seos. We present a relay attack developed with inexpensive COTS hardware, including the Proxmark 3 RDV4. Our attack is capable of operating over extremely long ranges, as it uses the Internet as a communications backbone. We have tested multiple real-world attack scenarios and are able to unlock a door in our lab with a card approximately 1960 km away. Our attack is covert and does not require long-term access to the card. Further, our attack is generic and is potentially applicable to other protocols that, like Seos, use ISO/IEC 14443A to communicate. We discuss several mitigations capable of thwarting our attack that could be introduced in future credential systems or as an update to Seos-compatible readers' firmware; these rely on rejecting cards that take too long to reply.
Multidimensional Approximate Agreement with Asynchronous Fallback
Multidimensional Approximate Agreement considers a setting of $n$ parties, where each party holds a vector in $\mathbb{R}^D$ as input. The honest parties are required to obtain very close outputs in $\mathbb{R}^D$ that lie inside the convex hull of their inputs.
Existing Multidimensional Approximate Agreement protocols achieve resilience against $t_s < n / (D + 1)$ corruptions under a synchronous network where messages are delivered within some time $\Delta$, but become completely insecure as soon as a single message is further delayed. On the other hand, asynchronous solutions do not rely on any delay upper bound, but only achieve resilience up to $t_a < n / (D + 2)$ corruptions.
We investigate the feasibility of Multidimensional Approximate Agreement protocols that simultaneously achieve guarantees in both network settings: we want to tolerate $t_s$ corruptions when the network is synchronous, and also tolerate $t_a \leq t_s$ corruptions when the network is asynchronous. We provide a protocol that works as long as $(D + 1) \cdot t_s + t_a < n$, and matches several existing lower bounds.
Generalized Inverse Matrix Construction for Code Based Cryptography
The generalized inverses of systematic non-square binary matrices have applications in mathematics, channel coding and decoding, navigation signals, machine learning, data storage, and cryptography, including the McEliece and Niederreiter public-key cryptosystems.
A systematic non-square $(n-k) \times n$ matrix $H$, $n > k$, has $2^{k\times(n-k)}$ different generalized inverse matrices.
This paper presents an algorithm for generating these matrices and compares it with two well-known methods, namely Gauss-Jordan elimination and the Moore-Penrose method. A random generalized inverse matrix construction method is given which has a lower execution time than the Gauss-Jordan elimination and Moore-Penrose approaches.
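The count $2^{k\times(n-k)}$ has a short constructive explanation: writing the systematic matrix as $H = [A \mid I_{n-k}]$ over GF(2), every generalized inverse (equivalently, right inverse, since $H$ has full row rank) has the form $G = [X;\ I + AX]$ for an arbitrary $k \times (n-k)$ matrix $X$, giving exactly $2^{k\times(n-k)}$ choices. A small NumPy sketch with made-up example values:

```python
import numpy as np

# Systematic (n-k) x n matrix H = [A | I] over GF(2), with n = 7, k = 4.
A = np.array([[1, 1, 0, 1],
              [1, 0, 1, 1],
              [0, 1, 1, 1]], dtype=np.uint8)
H = np.concatenate([A, np.eye(3, dtype=np.uint8)], axis=1)

def generalized_inverse(A, X):
    # G = [[X], [I + A X]] (mod 2); X is a free k x (n-k) matrix, so H has
    # 2^{k(n-k)} generalized inverses, matching the count in the abstract.
    return np.concatenate([X, (np.eye(A.shape[0], dtype=np.uint8) + A @ X) % 2])

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(4, 3), dtype=np.uint8)   # random choice of X
G = generalized_inverse(A, X)
assert ((H @ G) % 2 == np.eye(3, dtype=np.uint8)).all()   # H G = I over GF(2)
```

Sampling $X$ uniformly at random is one natural way to realize a "random generalized inverse" construction; the paper's algorithm may differ in details.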
Provable Lattice Reduction of $\mathbb Z^n$ with Blocksize $n/2$
The Lattice Isomorphism Problem (LIP) is the computational task of recovering, assuming it exists, an orthogonal linear transformation sending one lattice to another. For cryptographic purposes, the case of the trivial lattice $\mathbb Z^n$ is of particular interest ($\mathbb Z$LIP). Heuristic analysis suggests that the BKZ algorithm with blocksize $\beta = n/2 + o(n)$ solves such instances (Ducas, Postlethwaite, Pulles, van Woerden, ASIACRYPT 2022).
In this work, I propose a provable version of this statement, namely, that $\mathbb Z$LIP can indeed be solved by making polynomially many calls to a Shortest Vector Problem (SVP) oracle in dimension at most $n/2 + 1$.
Phoenix: Hash-and-Sign with Aborts from Lattice Gadgets
Preimage sampling is a fundamental tool in lattice-based cryptography, and its performance directly impacts that of the cryptographic mechanisms relying on it. In 2012, Micciancio and Peikert proposed a new way of generating trapdoors (and an associated preimage sampling procedure) with very interesting features. Unfortunately, in some applications such as digital signatures, the performance may not be as competitive as that of other approaches like Fiat-Shamir with Aborts. In an effort to improve preimage sampling for Micciancio-Peikert (MP) trapdoors, Lyubashevsky and Wichs (LW) introduced a new sampler which leverages rejection sampling but suffers from strong parameter requirements that hamper performance. As a consequence, it seemed restricted to theoretical applications and has not, to our knowledge, been considered for real-world applications.
Our first contribution is to revisit the LW sampler by proposing an improved analysis which yields much more compact parameters. This leads to gains in preimage size of about 60% over the LW sampler, and up to 25% compared to the original MP sampling technique. It thus sheds new light on the LW sampler, opening promising perspectives for the efficiency of advanced lattice-based constructions relying on such mechanisms. To provide further improvements, we show that it combines perfectly with the approximate-trapdoor approach of Chen, Genise and Mukherjee, while achieving a smaller preimage error.
Building upon those results, we introduce a hash-and-sign signature scheme called Phoenix. The scheme is based on the M-LWE and M-SIS assumptions and features attractive public key and signature sizes which are even smaller than those of the most recent gadget-based construction Eagle of Yu, Jia and Wang (Crypto’23). Moreover, Phoenix is designed to be implementation-friendly, avoiding in particular complex Gaussian samplers that are often hard to protect.
Fully Adaptive Schnorr Threshold Signatures
We prove adaptive security of a simple three-round threshold Schnorr signature scheme, which we call Sparkle+. The standard notion of security for threshold signatures considers a static adversary – one who must declare which parties are corrupt at the beginning of the protocol. The stronger adaptive adversary can at any time corrupt parties and learn their state. This notion is natural and practical, yet not proven to be met by most schemes in the literature.
In this paper, we demonstrate that Sparkle+ achieves several levels of security based on different corruption models and assumptions. To begin with, Sparkle+ is statically secure under minimal assumptions: the discrete logarithm assumption (DL) and the random oracle model (ROM). If an adaptive adversary corrupts fewer than t/2 out of a threshold of t+1 signers, then Sparkle+ is adaptively secure under a weaker variant of the one-more discrete logarithm assumption (AOMDL) in the ROM. Finally, we prove that Sparkle+ achieves full adaptive security, with a corruption threshold of t, under AOMDL in the algebraic group model (AGM) with random oracles. Importantly, we show adaptive security without requiring secure erasures. Ours is the first proof achieving full adaptive security without exponential tightness loss for any threshold Schnorr signature scheme.
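As background, the base single-signer scheme is plain Schnorr; the toy Python sketch below (deliberately tiny, insecure parameters, for illustration only) shows the nonce $r$ and response $z$ that a threshold protocol such as Sparkle+ has the $t+1$ signers produce jointly.

```python
import hashlib
import secrets

# Toy Schnorr group: p = 2q + 1, G generates the order-q subgroup.
P, Q, G = 2039, 1019, 4

def H(*parts):
    data = "|".join(str(x) for x in parts).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def keygen():
    x = secrets.randbelow(Q - 1) + 1          # in threshold Schnorr, x is
    return x, pow(G, x, P)                    # secret-shared among signers

def sign(x, msg):
    r = secrets.randbelow(Q - 1) + 1          # per-signature nonce; produced
    R = pow(G, r, P)                          # jointly in the threshold setting
    c = H(R, msg)
    return R, (r + c * x) % Q                 # response z = r + c*x mod q

def verify(X, msg, sig):
    R, z = sig
    return pow(G, z, P) == R * pow(X, H(R, msg), P) % P

x, X = keygen()
assert verify(X, "hello", sign(x, "hello"))
```

An adaptive adversary may corrupt signers mid-protocol and learn their nonce and key shares, which is precisely what makes proofs for this interactive signing flow delicate.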
Compact Bounded-Collusion Identity-based Encryption via Group Testing
Bounded-collusion identity-based encryption (BC-IBE) is a variant of identity-based encryption where an adversary obtains user secret keys corresponding to at most $d$ identities. Existing work shows that BC-IBE can be constructed from public key encryption (PKE) with several properties. In particular, we focus on post-quantum PKE schemes submitted to the NIST PQC competition as the underlying PKE of BC-IBE schemes, because post-quantum cryptography is one of the most active research areas, owing to recent advances in the development of quantum computers. Hence, it is reasonable to consider converting such PKE schemes into encryption schemes with additional functionalities. Using existing generic constructions of BC-IBE, those post-quantum PKE schemes can be transformed into BC-IBE with non-compact public parameters.
In this paper, we propose generic constructions of BC-IBE whose public-parameter size is more compact, and many post-quantum PKE schemes secure against chosen-plaintext attacks can be plugged into our generic constructions. To this end, we construct BC-IBE schemes from a group-testing perspective, whereas existing ones employ error-correcting codes or cover-free families. As a result, we obtain BC-IBE schemes with more compact public parameters, constructed from the NIST PQC PKE schemes.
Abstraction Model of Probing and DFA Attacks on Block Ciphers
The threat of physical attacks that try to obtain secret information from cryptographic modules has been of academic and practical interest. One of the main concerns is determining attack efficiency, e.g., the number of attack trials needed to recover the secret key. However, accurate estimation of attack efficiency is generally expensive because of the complexity of physical attacks on a cryptographic algorithm. Against this background, in this study we propose a new abstraction model for evaluating the efficiency of probing and DFA attacks. The proposed model includes an abstracted attack target and attacker to determine the amount of information leaked in a single attack trial. The model can be adapted flexibly to various attack scenarios and yields the attack efficiency quickly and precisely. For the probing attack on AES, the difference in attack efficiency between the model and experimental values is only approximately 0.3%, whereas that of a previous model is approximately 16%. We also apply the probing attack to DES, and the results show that DES has high resistance to the probing attack. Moreover, the proposed model also works accurately for the DFA attack on AES.
Non-interactive privacy-preserving naive Bayes classifier using homomorphic encryption
In this paper, we propose a non-interactive privacy-preserving naive Bayes classifier from leveled fully homomorphic encryption schemes. The classifier runs on a server that is also the model's owner (modeler), whose input is the encrypted data from a client. The classifier produces encrypted classification results, which can only be decrypted by the client, while the modeler's model is only accessible to the server. Therefore, the classifier leaks no private information about either the server's model or the client's data and results. More importantly, the classifier does not require any interaction between the server and the client during the classification phase. The main technical ingredient is an algorithm that computes the maximum index of an encrypted array homomorphically without any interaction. The proposed classifier is implemented using HElib. Experiments show the accuracy and efficiency of our classifier. For instance, the average cost is about 34ms per sample for a real data set from the UCI Machine Learning Repository, with a security parameter of about 100 and accuracy of about 97%.
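As a plaintext reference for what the server computes under encryption: naive Bayes classification reduces to per-class additions of log-probabilities followed by an argmax, and it is that final argmax over an encrypted score array that the paper's key ingredient performs homomorphically. A minimal Python sketch with made-up model values:

```python
import math

# Bernoulli naive Bayes with two classes and two binary features; all
# probabilities here are invented for illustration.
log_prior = {"spam": math.log(0.4), "ham": math.log(0.6)}
log_lik = {  # log P(feature = 1 | class)
    "spam": [math.log(0.8), math.log(0.3)],
    "ham":  [math.log(0.2), math.log(0.5)],
}

def classify(features):
    scores = {}
    for cls in log_prior:
        s = log_prior[cls]
        for f, lp in zip(features, log_lik[cls]):
            # Add log p if the feature fires, log(1 - p) otherwise.
            s += lp if f else math.log(1 - math.exp(lp))
        scores[cls] = s
    # This argmax is the step performed homomorphically in the paper.
    return max(scores, key=scores.get)

print(classify([1, 0]))   # -> "spam"
```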
Unconditionally secure ciphers with a short key for a source with unknown statistics
We consider the problem of constructing an unconditionally secure cipher with a short key for the case where the probability distribution of encrypted messages is unknown. Note that unconditional security means that an adversary with no computational constraints can obtain only a negligible amount of information ("leakage") about an encrypted message (without knowing the key).
Here we consider the case of a priori (partially) unknown message source statistics. More specifically, the message source probability distribution belongs to a given family of distributions. We propose an unconditionally secure cipher for this case. As an example, one can consider constructing a single cipher for texts written in any of the languages of the European Union; that is, the message to be encrypted could be written in any of these languages.
On the Possibility of a Backdoor in the Micali-Schnorr Generator
In this paper, we study both the implications and potential impact of backdoored parameters for two RSA-based pseudorandom number generators: the ISO-standardized Micali-Schnorr generator and a closely related design, the RSA PRG. We observe, contrary to common understanding, that the security of the Micali-Schnorr PRG is not tightly bound to the difficulty of inverting RSA. We show that the Micali-Schnorr construction remains secure even if one replaces RSA with a publicly evaluatable PRG, or a function modeled as an efficiently invertible random permutation. This implies that any cryptographic backdoor must somehow exploit the algebraic structure of RSA, rather than an attacker's ability to invert RSA or the presence of secret keys. We exhibit two such backdoors in related constructions: a family of exploitable parameters for the RSA PRG, and a second vulnerable construction for a finite-field variant of Micali-Schnorr. We also observe that the parameters allowed by the ISO standard are incompletely specified, and allow insecure choices of exponent. Several of our backdoor constructions make use of lattice techniques, in particular multivariate versions of Coppersmith's method for finding small solutions to polynomials modulo integers.
Standard Model Time-Lock Puzzles: Defining Security and Constructing via Composition
The introduction of time-lock puzzles initiated the study of publicly “sending information into the future.” For time-lock puzzles, the underlying security-enabling mechanism is the computational complexity of the operations needed to solve the puzzle, which must be tunable to reveal the solution after a predetermined time, and not before. Time-lock puzzles are typically constructed via a commitment to a secret, paired with a reveal algorithm that sequentially iterates a basic function over such a commitment. One then shows that short-cutting the iterative process violates the cryptographic hardness of an underlying problem.
To date, and for more than twenty-five years, research on time-lock puzzles has relied heavily on iteratively applying well-structured algebraic functions. However, despite the tradition of cryptography to reason about primitives in a realistic model with standard hardness assumptions (often after initial idealized assumptions), most analysis of time-lock puzzles to date still models cryptography (in an ideal manner) as a random oracle function or a generic group function. Moreover, Mahmoody et al. showed that time-lock puzzles with a superpolynomial gap cannot be constructed from random oracles; yet current treatments generally use an algebraic trapdoor to efficiently construct a puzzle with a large time gap, and then apply random-oracle idealizations (inconsistent with respect to Mahmoody et al.) to analyze the solving process. Finally, little attention has been paid to the nuances of composing multi-party computation with timed puzzles that are solved as part of the protocol.
In this work, we initiate a study of time-lock puzzles in a model built upon a realistic (and falsifiable) computational framework. We present a new formal definition of residual complexity to characterize a realistic, gradual time-release for time-lock puzzles. We also present a general definition of timed multi-party computation (MPC) and both sequential and concurrent composition theorems for MPC in our model.
Minimal $p$-ary codes via the direct sum of functions, non-covering permutations and subspaces of derivatives
In this article, we propose several generic methods for constructing minimal linear codes over the field $\mathbb{F}_p$. The first construction uses the method of direct sum of an arbitrary function $f:\mathbb{F}_{p^r}\to \mathbb{F}_{p}$ and a bent function $g:\mathbb{F}_{p^s}\to \mathbb{F}_p$ to induce minimal codes with parameters $[p^{r+s}-1,r+s+1]$ and minimum distance larger than $p^r(p-1)(p^{s-1}-p^{s/2-1})$. For the first time, we provide a general construction of linear codes from a subclass of non-weakly regular plateaued functions, which partially answers an open problem posed in [22]. The second construction deals with a bent function $g:\mathbb{F}_{p^m}\to \mathbb{F}_p$ and a subspace of suitable derivatives $U$ of $g$, i.e., functions of the form $g(y+a)-g(y)$ for some $a\in \mathbb{F}_{p^m}^*$. We also provide a sound generalization of the recently introduced concept of non-covering permutations [45]. Some important structural properties of this class of permutations are derived in this context. The most remarkable observation is that the class of non-covering permutations contains the class of APN power permutations (characterized by having two-to-one derivatives). Finally, the last general construction combines the previous two methods (direct sum, non-covering permutations and subspaces of derivatives), using a bent function in the Maiorana-McFarland class to construct minimal codes (even those violating the Ashikhmin-Barg bound) with larger dimensions. This last method proves to be highly flexible since it can lead to several non-equivalent codes, depending to a great extent on the choice of the underlying non-covering permutation.
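For intuition, the first construction's direct sum, and the standard way a linear code is associated with a function, can be restated as follows (a sketch in the abstract's notation; the exact definition in the paper may differ in details). With $h(x,y)=f(x)+g(y)$ on $\mathbb{F}_{p^r}\times\mathbb{F}_{p^s}$, identified with $\mathbb{F}_{p^{r+s}}$ as an $\mathbb{F}_p$-vector space, one associates the code
$$\mathcal{C}_h=\left\{\mathbf{c}_{u,v}=\big(u\,h(z)+\mathrm{Tr}(vz)\big)_{z\in\mathbb{F}_{p^{r+s}}^{*}}\;:\;u\in\mathbb{F}_p,\ v\in\mathbb{F}_{p^{r+s}}\right\},$$
whose length $p^{r+s}-1$ and dimension $r+s+1$ match the parameters stated above; minimality is then derived from the weight distribution induced by $f$ and the bent function $g$.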
Interoperable Private Attribution: A Distributed Attribution and Aggregation Protocol
Measuring people's interactions that span multiple websites can provide unique insight that enables better products and improves people's experiences, but directly observing people's individual journeys creates privacy risks that conflict with the newly emerging privacy model for the web. We propose a protocol that uses a combination of multi-party computation and differential privacy to enable the processing of people's data such that only aggregate measurements are revealed, strictly limiting the information leakage about individual people. Our primary application of this protocol is measuring, in aggregate, the effectiveness of digital advertising without enabling cross-site tracking of individuals. In this paper we formalize our protocol, Interoperable Private Attribution (IPA), and analyze its security. IPA is proposed in the W3C's Private Advertising Technology Community Group (PATCG) [8]. We have implemented our protocol in the malicious, honest-majority MPC setting for three parties, where network costs dominate compute costs. For processing a query with 1M records it uses around 18GB of network traffic, which at \$0.08 per GB leads to a network cost of \$1.44.
SQISignHD: New Dimensions in Cryptography
We introduce SQIsignHD, a new post-quantum digital signature scheme inspired by SQIsign.
SQIsignHD exploits the recent algorithmic breakthrough underlying the attack on SIDH, which makes it possible to efficiently represent isogenies of arbitrary degrees as components of a higher-dimensional isogeny. SQIsignHD overcomes the main drawbacks of SQIsign. First, it scales well to high security levels, since the public parameters for SQIsignHD are easy to generate: the characteristic of the underlying field need only be of the form $2^{f}3^{f'}-1$. Second, the signing procedure is simpler and more efficient: our signing procedure implemented in C runs in 28 ms, a significant improvement over SQIsign. Third, the scheme is easier to analyse, allowing for a much more compelling security reduction. Finally, the signature sizes are even more compact than (the already record-breaking) SQIsign's, with compressed signatures as small as 109 bytes for the post-quantum NIST-1 security level.
These advantages may come at the expense of verification, which now requires the computation of an isogeny in dimension $4$, a task whose optimised cost is still uncertain, as it has received very little attention so far. Our experimental SageMath implementation of the verification runs in around 600 ms, indicating the potential cryptographic interest of dimension-$4$ isogenies after optimisations and low-level implementation.
Optimal Security Notion for Decentralized Multi-Client Functional Encryption
Research on (Decentralized) Multi-Client Functional Encryption ((D)MCFE) is very active, with interesting constructions, especially for the class of inner products. However, the security notions have evolved over time. While the adversary's target of distinguishing ciphertexts is clear, legitimate scenarios that do not consist of trivial attacks on the functionality are less obvious. In this paper, we ask whether previous security games excluded only trivial attacks. Unfortunately, this was not the case.
We then propose a stronger security notion, with a broad definition of admissible attacks, and prove it is optimal: any extension of the set of admissible attacks is actually a trivial attack on the functionality, not on the specific scheme. In addition, we show that all previous constructions are insecure with respect to this new security notion. Finally, we propose new DMCFE schemes for the class of inner products that provide the new features and achieve this stronger security notion.
The Self-Anti-Censorship Nature of Encryption: On the Prevalence of Anamorphic Cryptography
As part of the responses to the ongoing ``crypto wars,'' the notion of {\em Anamorphic Encryption} was put forth [Persiano-Phan-Yung Eurocrypt '22].
The notion allows private communication in spite of a dictator who (in violation of the usual normative conditions under which Cryptography is developed) is engaged in an extreme form of surveillance and/or censorship, asking for all private keys and knowing, and possibly even dictating, all messages.
The original work pointed out efficient ways to use two known schemes in the anamorphic mode, bypassing the draconian censorship and hiding information from the all-powerful dictator.
A question left open was whether these examples are outlier results or whether anamorphic mode is pervasive in existing systems.
Here we answer the above question: we develop new techniques, expand the notion, and show that the notion of Anamorphic Cryptography is, in fact, very much prevalent.
We first refine the notion of Anamorphic Encryption with respect to the nature of covert communication.
Specifically, we distinguish {\em Single-Receiver Encryption} for many-to-one communication, and {\em Multiple-Receiver Encryption} for many-to-many communication within the group of users conspiring against the dictator. We then show that Anamorphic Encryption can be embedded in the randomness used in the encryption, and give families of constructions that can be applied to numerous ciphers. In total, the families cover classical encryption schemes, some of which are in actual use (RSA-OAEP, Paillier, Goldwasser-Micali, ElGamal, Cramer-Shoup, and Smooth Projective Hash based systems). Among our examples is an anamorphic channel with much higher capacity than the regular channel.
In sum, the work shows the very large extent to which control and censorship over the use of strong encryption are potentially futile for the dictator (typical of, and even stronger than, governments engaging in the ongoing ``crypto wars''): while such limitations obviously hurt the utility that encryption typically brings to safety in computing systems, they are, essentially, not helping the dictator.
The actual implications of what we show here, and what it means in practice, require further policy and legal analyses and perspectives.
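To make the randomness-embedding idea concrete, here is a toy Python sketch of one way to realize it for ElGamal: rejection-sample the encryption randomness until a PRF, keyed by a "double key" shared with the covert receiver, of the first ciphertext component equals the covert bit. The parameters and the one-bit rate are illustrative; the paper's constructions are more general and include much higher-capacity channels.

```python
import hashlib
import secrets

# Toy ElGamal group: p = 2q + 1, G generates the order-q subgroup
# (tiny, insecure parameters for illustration only).
P, Q, G = 2039, 1019, 4

def prf(dk, R):
    # One covert bit derived from the first ciphertext component.
    return hashlib.sha256(f"{dk}|{R}".encode()).digest()[0] & 1

def anamorphic_encrypt(pk, m, dk, covert_bit):
    # Rejection-sample the encryption randomness r until the PRF of
    # R = G^r matches the covert bit; ~2 attempts on average.
    while True:
        r = secrets.randbelow(Q - 1) + 1
        R = pow(G, r, P)
        if prf(dk, R) == covert_bit:
            return R, (m * pow(pk, r, P)) % P

def covert_decrypt(dk, ct):
    return prf(dk, ct[0])

sk = secrets.randbelow(Q - 1) + 1
pk = pow(G, sk, P)
dk = "shared double key"
ct = anamorphic_encrypt(pk, m=5, dk=dk, covert_bit=1)
assert covert_decrypt(dk, ct) == 1
# The dictator, even holding sk, sees a perfectly ordinary ElGamal
# ciphertext of m: the covert bit hides in the randomness distribution.
```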
Efficiency of SIDH-based signatures (yes, SIDH)
In this note we assess the efficiency of a SIDH-based digital signature built on a weakened variant of a recent identification protocol proposed by Basso et al.
Despite the devastating attacks against (the mathematical problem underlying) SIDH, this identification protocol remains secure, as its security is backed by a different (and more standard) isogeny-finding problem.
We conduct our analysis by applying some known cryptographic techniques to decrease the signature size by about $70\%$ for all parameter sets (obtaining signatures of approximately 21 KB for SIKEp434).
Moreover, we propose a minor optimisation to compute many isogenies in parallel from the same starting curve.
Our assessment confirms that the problem of designing a practical isogeny-based signature scheme remains largely open. However, concretely determining the current state of the art, against which future optimisations can be compared, appears relevant for a problem that has witnessed only small steps towards a solution.
Practical key-recovery attack on MQ-Sign
In this paper we describe attacks on the UOV-based signature scheme MQ-Sign. MQ-Sign was submitted by Shim, Kim, and An as a first-round candidate for standardization in the (South) Korean post-quantum cryptography competition (KpqC). The scheme makes use of the sparseness of the secret central polynomials and an equivalent-key construction to reduce the size of the private key. The authors propose four variants exploiting different levels of sparsity: MQ-Sign-SS, MQ-Sign-RS, MQ-Sign-SR, and MQ-Sign-RR, with the last one being the standard UOV signature scheme.
We show that apart from the MQ-Sign-RR variant, all the others are insecure. Namely, we present a polynomial-time key-recovery attack on the variants MQ-Sign-SS and MQ-Sign-RS and a forgery attack on the variant MQ-Sign-SR below the claimed security level.
Our attack exploits exactly the techniques used for key reduction: the sparsity of the central polynomials in combination with the specific structure of the secret linear map $\mathbf{S}$.
We provide a verification script for the polynomial-time key-recovery attack that recovers the secret key in less than seven seconds for security level V. Furthermore, we provide an implementation of the non-guessing part of the forgery attack, confirming our complexity estimates.
Ruffle: Rapid 3-party shuffle protocols
Secure shuffle is an important primitive that finds use in several applications, such as secure electronic voting, oblivious RAMs, and secure sorting. For time-sensitive shuffle-based applications that demand a fast response time, it is essential to design a fast and efficient shuffle protocol. In this work, we design secure and fast shuffle protocols relying on the techniques of secure multiparty computation. We make several design choices that aid in achieving highly efficient protocols. Specifically, we consider a malicious 3-party computation setting with an honest majority and design robust ring-based protocols. Our shuffle protocols provide a fast online (i.e., input-dependent) phase compared to the state of the art for the considered setting.
To showcase the efficiency improvements brought in by our shuffle protocols, we consider two distinct applications: anonymous broadcast and secure graph computation via the GraphSC paradigm. In both cases, multiple shuffle invocations are required. Hence, going beyond standalone shuffle invocation, we identify two distinct scenarios of multiple invocations and provide customised protocols for each. Further, we show that our customized protocols not only provide a fast response time, but also an improved overall run time for multiple shuffle invocations. With respect to the applications, we not only improve efficiency, but also work towards providing improved security guarantees, thereby outperforming the respective state-of-the-art works. We benchmark our shuffle protocols and the considered applications to analyze the efficiency improvements with respect to various parameters.
QuantumCharge: Post-Quantum Cryptography for Electric Vehicle Charging
ISO 15118 enables charging and billing of Electric Vehicles (EVs) without user interaction by using locally installed cryptographic credentials that must remain secure over the long lifetime of vehicles. In the dawn of quantum computers, Post-Quantum Cryptography (PQC) needs to be integrated into the EV charging infrastructure. In this paper, we propose QuantumCharge, a PQC extension for ISO 15118, which includes concepts for migration, crypto-agility, verifiable security, and the use of PQC-enabled hardware security modules. Our prototypical implementation and practical evaluation demonstrate the feasibility of QuantumCharge, and our formal analysis shows its security, thus paving the way for secure EV charging infrastructures of the future.
CPU to FPGA Power Covert Channel in FPGA-SoCs
FPGA-SoCs are a popular platform for accelerating a wide range of applications due to their performance and flexibility. From a security point of view, these systems have been shown to be vulnerable to various attacks, especially side-channel attacks where an attacker can obtain the secret key of a cryptographic algorithm via laboratory measurement equipment or even remotely with sensors implemented inside the FPGA logic itself. Fortunately, a variety of countermeasures on the algorithmic level have been proposed to mitigate this threat. Beyond side-channel attacks, covert channels constitute another threat which enables communication through a hidden channel. In this work, we demonstrate the possibility of implementing a covert channel between the CPU and an FPGA by modulating the usage of the Power Distribution Network. We show that this resource is especially vulnerable since it can be easily controlled and observed, resulting in a stealthy communication and a high transmission data rate. The power usage is modulated using simple and inconspicuous instructions executed on the CPU. Additionally, we use Time-to-Digital Converter sensors to observe these power variations. The sensor circuits are programmed into the FPGA fabric using only standard logic components. Our covert channel achieves a transmission rate of up to 16.7 kbit/s combined with an error rate of 2.3%. Besides a good transmission quality, our covert channel is also stealthy and can be used as an activation function for a hardware trojan.
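To make the channel's operating principle concrete, here is a hedged toy sketch in Python of on-off-keying modulation over a noisy trace, the communication pattern such power covert channels follow; the sampling parameters and noise model are illustrative assumptions of ours, not measurements from the paper.

    import random

    # Toy model only: "high" power stands for executing power-hungry
    # instructions, "low" for idling; the receiver averages TDC-like
    # samples per bit and thresholds them. All parameters are illustrative.
    def modulate(bits, samples_per_bit=8, high=1.0, low=0.2, noise=0.05):
        trace = []
        for b in bits:
            level = high if b else low
            trace += [level + random.gauss(0, noise) for _ in range(samples_per_bit)]
        return trace

    def demodulate(trace, samples_per_bit=8, threshold=0.6):
        n = len(trace) // samples_per_bit
        out = []
        for i in range(n):
            chunk = trace[i * samples_per_bit:(i + 1) * samples_per_bit]
            out.append(int(sum(chunk) / samples_per_bit > threshold))
        return out

    bits = [1, 0, 1, 1, 0, 0, 1]
    assert demodulate(modulate(bits)) == bits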
Security analysis of the Classic McEliece, HQC and BIKE schemes in low memory
With the advancement of NIST PQC standardization, three of the four candidates in Round 4 are code-based schemes, namely Classic McEliece, HQC and BIKE. Currently, one of the most important tasks is to further analyze their security levels for the suggested parameter sets. At PKC 2022, Esser and Bellini restated the major information set decoding (ISD) algorithms by using nearest neighbor search and then applied these ISD algorithms to estimate the bit security of Classic McEliece, HQC and BIKE under the suggested parameter sets. However, all major ISD algorithms consume a large amount of memory, which in turn affects their time complexities. In this paper, we reestimate the bit-security levels of the parameter sets suggested by these three schemes in low memory by applying $K$-list sum algorithms to ISD algorithms. Compared with Esser-Bellini's results, our results achieve the best gains for Classic McEliece, HQC, and BIKE, with reductions in bit-security levels of $11.09$, $12.64$, and $12.19$ bits, respectively.
SPRINT: High-Throughput Robust Distributed Schnorr Signatures
We describe high-throughput threshold protocols with guaranteed output delivery for generating Schnorr-type signatures. The protocols run a single message-independent interactive ephemeral randomness generation procedure (e.g., DKG) followed by a \emph{non-interactive} multi-message signature generation procedure. The protocols offer a significant increase in throughput already for as few as ten parties while remaining highly efficient for many hundreds of parties, with thousands of signatures generated per minute (and over 10,000 in the optimistic normal case).
These protocols extend seamlessly to the dynamic/proactive setting, where each run of the protocol uses a new committee, and they support sub-sampling the committees from among an effectively unbounded number of nodes. The protocols work over a broadcast channel in both synchronous and asynchronous networks.
The combination of these features makes our protocols a good match for implementing a signature service over an (asynchronous) public blockchain with many validators, where guaranteed output delivery is an absolute must. In that setting, there is a system-wide public key, where the corresponding secret signature key is distributed among the validators. Clients can submit messages (under suitable controls, e.g. smart contracts), and authorized messages are signed relative to the global public key.
Asymptotically, when running with committees of $n$ parties, our protocols can generate $\Omega(n^2)$ signatures per run, while providing resilience against $\Omega(n)$ corrupted nodes, and using broadcast bandwidth of only $O(n^2)$ group elements and scalars. For example, we can sign about $n^2/16$ messages using just under $2n^2$ total bandwidth while supporting resilience against $n/4$ corrupted parties, or sign $n^2/8$ messages using just over $2n^2$ total bandwidth with resilience against $n/5$ corrupted parties.
We prove security of our protocols by reduction to the hardness of the discrete logarithm problem in the random-oracle model.
A Tightly Secure Identity-based Signature Scheme from Isogenies
We present a tightly secure identity-based signature (IBS) scheme based on supersingular isogeny problems. Although Shaw and Dutta proposed an isogeny-based IBS scheme with provable security, their security reduction is non-tight. For an IBS scheme with concrete security, the tightness of its security reduction affects the key size and signature size. Hence, it is reasonable to focus on a tight security proof for an isogeny-based IBS scheme. In this paper, we propose an isogeny-based IBS scheme based on the lossy CSI-FiSh signature scheme and give a tight security reduction for it. While the security proof of the existing isogeny-based IBS scheme suffers from a square-root advantage loss, the proof for our scheme avoids such a loss, thanks to the properties of lossy CSI-FiSh.
Generic Construction of Dual-Server Public Key Authenticated Encryption with Keyword Search
Chen et al. (IEEE Transactions on Cloud Computing 2022) introduced dual-server public key authenticated encryption with keyword search (DS-PAEKS) and proposed a DS-PAEKS scheme under the decisional Diffie-Hellman assumption. In this paper, we propose a generic construction of DS-PAEKS from PAEKS, public key encryption, and signatures. By providing a concrete attack, we show that the DS-PAEKS scheme of Chen et al. is vulnerable; consequently, the proposed generic construction yields the first secure DS-PAEKS schemes. A slight modification of our attack also works against the Chen et al. dual-server public key encryption with keyword search (DS-PEKS) scheme (IEEE Transactions on Information Forensics and Security 2016). Moreover, we demonstrate that the Tso et al. generic construction of DS-PEKS from public key encryption (IEEE Access 2020) is also vulnerable. We also analyze other pairing-free PAEKS schemes (Du et al., Wireless Communications and Mobile Computing 2022, and Lu and Li, IEEE Transactions on Mobile Computing 2022). Though we did not find an attack against these schemes, we show that at least their security proofs are flawed.
A Duality Between One-Way Functions and Average-Case Symmetry of Information
Symmetry of Information (SoI) is a fundamental property of Kolmogorov complexity that relates the complexity of a pair of strings and their conditional complexities. Understanding if this property holds in the time-bounded setting is a longstanding open problem. In the nineties, Longpré and Mocas (1993) and Longpré and Watanabe (1995) established that if SoI holds for time-bounded Kolmogorov complexity then cryptographic one-way functions do not exist, and asked if a converse holds.
We show that one-way functions exist if and only if (probabilistic) time-bounded SoI fails on average, i.e., if there is a samplable distribution of pairs (x,y) of strings such that SoI for pK$^t$ complexity fails for many of these pairs. Our techniques rely on recent perspectives offered by probabilistic Kolmogorov complexity and meta-complexity, and reveal further equivalences between inverting one-way functions and the validity of key properties of Kolmogorov complexity in the time-bounded setting: (average-case) language compression and (average-case) conditional coding.
Motivated by these results, we investigate correspondences of this form for the worst-case hardness of NP (i.e., NP ⊄ BPP) and for the average-case hardness of NP (i.e., DistNP ⊄ HeurBPP), respectively. Our results establish the existence of similar dualities between these computational assumptions and the failure of results from Kolmogorov complexity in the time-bounded setting. In particular, these characterizations offer a novel way to investigate the main hardness conjectures of complexity theory (and the relationships among them) through the lens of Kolmogorov complexity and its properties.
A Note on Hybrid Signature Schemes
This draft presents work-in-progress concerning hybrid/composite signature schemes. More concretely, we give several tailored combinations of Fiat-Shamir based signature schemes (such as Dilithium) or Falcon with RSA or DSA. We observe that there are a number of signature hybridization goals, not all of which are achieved through parallel signing or concatenation approaches. These include proof composability (that the security of the post-quantum hybrid signature can easily be linked to the component algorithms), weak separability, strong separability, backwards compatibility, hybrid generality (i.e., hybrid compositions that can be instantiated with different algorithms once proven secure), and simultaneous verification. We do not consider backwards compatibility in this work, but our constructions aim to show the feasibility of achieving all other properties. As work-in-progress, the constructions are presented without the accompanying formal security analysis, to be included in an update.
A Differential Fault Attack against Deterministic Falcon Signatures
We describe a fault attack against the deterministic variant of the Falcon signature scheme. It is the first fault attack to exploit specific properties of deterministic Falcon. The attack works under a very liberal and realistic single-fault random model. The main idea is to inject a fault into the pseudo-random generator of the pre-image trapdoor sampler, generate different signatures for the same input, find reasonably short lattice vectors this way, and finally use lattice reduction techniques to obtain the private key. We investigate the relationship between the fault location, the number of faults, the computational effort for a possibly remaining exhaustive-search step, and the success probability.
Interactive Oracle Arguments in the QROM and Applications to Succinct Verification of Quantum Computation
This work is motivated by the following question: can an untrusted quantum server convince a classical verifier of the answer to an efficient quantum computation using only polylogarithmic communication? We show how to achieve this in the quantum random oracle model (QROM), after a non-succinct instance-independent setup phase.
We introduce and formalize the notion of post-quantum interactive oracle arguments for languages in QMA, a generalization of interactive oracle proofs (Ben-Sasson-Chiesa-Spooner). We then show how to compile any non-adaptive public-coin interactive oracle argument (with private setup) into a succinct argument (with setup) in the QROM.
To conditionally answer our motivating question via this framework under the post-quantum hardness assumption of LWE, we show that the ZX local Hamiltonian problem with at least inverse-polylogarithmic relative promise gap has an interactive oracle argument with instance-independent setup, which we can then compile.
Assuming a variant of the quantum PCP conjecture that we introduce called the weak ZX quantum PCP conjecture, we obtain a succinct argument for QMA (and consequently the verification of quantum computation) in the QROM (with non-succinct instance-independent setup) which makes only black-box use of the underlying cryptographic primitives.
Making Classical (Threshold) Signatures Post-Quantum for Single Use on a Public Ledger
The Bitcoin architecture heavily relies on the ECDSA signature scheme, which is broken by quantum adversaries, as the secret key can be computed from the public key in quantum polynomial time. To mitigate this attack, bitcoins can be paid to the hash of a public key (P2PKH). However, the first payment reveals the public key, so all bitcoins attached to it must be spent at the same time (i.e., the remaining amount must be transferred to a new wallet). Some problems remain with this approach: the owners are vulnerable to rushing adversaries between the time the signature is made public and the time it is committed to the blockchain. Additionally, there is no equivalent mechanism for threshold signatures. Finally, no formal analysis of P2PKH has been done.
In this paper, we formalize the security notion of a digital signature with a hidden public key, and we propose and prove the security of a generic transformation that converts a classical signature into a post-quantum one that can be used only once. We compare it with P2PKH. Namely, our proposal relies on pre-image resistance instead of collision resistance, as P2PKH does, and thus allows for shorter hashes. Additionally, we propose the notion of a delay signature to address the problem of the rushing adversary when used with a public ledger, and we discuss the advantages and disadvantages of our approach. We further extend our results to threshold signatures.
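A minimal sketch of the underlying commit-then-reveal idea (our illustration only, with SHA-256 standing in for the scheme's hash and a placeholder key):

    import hashlib, secrets

    # Publish only H(pk); reveal pk at signing time. Until the reveal,
    # security rests on pre-image resistance of H. pk is a placeholder.
    pk = secrets.token_bytes(33)
    address = hashlib.sha256(pk).digest()   # published before any signature
    # ... later, when spending, pk is revealed alongside a signature, and
    # verifiers check the commitment:
    assert hashlib.sha256(pk).digest() == address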
Asynchronous Remote Key Generation for Post-Quantum Cryptosystems from Lattices
Asynchronous Remote Key Generation (ARKG), introduced by Frymann et al. at CCS 2020, allows for the generation of unlinkable public keys by third parties, for which corresponding private keys may be later learned only by the key pair's legitimate owner. These key pairs can then be used in common public-key cryptosystems, including signatures, PKE, KEMs, and schemes supporting delegation, such as proxy signatures. The only known instance of ARKG generates discrete-log-based keys.
In this paper, we introduce new ARKG constructions for lattice-based cryptosystems. The key pairs generated using our ARKG scheme can be applied to lattice-based signatures and KEMs, which have recently been selected for standardisation in the NIST PQ process, or as alternative candidates.
In particular, we address challenges associated with the noisiness of lattice hardness assumptions, which requires a new generalised definition of ARKG correctness, whilst preserving the security and privacy properties of the former instantiation. Our ARKG construction uses the key encapsulation techniques of Brendel et al. (SAC 2020), coined Split KEMs. As an additional contribution, we show that Kyber (Bos et al., EuroS&P 2018) can be used to construct a Split KEM. The security of our protocol is based on standard LWE assumptions. We also discuss its use with selected candidates from the NIST process and provide an implementation and benchmarks.
The Round Complexity of Statistical MPC with Optimal Resiliency
In STOC 1989, Rabin and Ben-Or (RB) established an important milestone in the fields of cryptography and distributed computing by showing that every functionality can be computed with statistical (information-theoretic) security in the presence of an active (aka Byzantine) rushing adversary that controls up to half of the parties. We study the round complexity of general secure multiparty computation and several related tasks in the RB model.
Our main result shows that every functionality can be realized in only four rounds of interaction, which is known to be optimal. This completely settles the round complexity of statistical actively-secure optimally-resilient MPC, resolving a long line of research.
Along the way, we construct the first round-optimal statistically-secure verifiable secret sharing protocol (Chor, Goldwasser, Micali, and Awerbuch; STOC 1985), show that every single-input functionality (e.g., multi-verifier zero-knowledge) can be realized in 3 rounds, and prove that the latter bound is optimal. The complexity of all our protocols is exponential in the number of parties, and the question of deriving polynomially-efficient protocols is left for future research.
Our main technical contribution is a construction of a new type of statistically-secure signature scheme whose existence was open even for smaller resiliency thresholds. We also describe a new statistical compiler that lifts passively-secure protocols to actively-secure protocols in a round-efficient way via the aid of protocols for single-input functionalities. This compiler can be viewed as a statistical variant of the GMW compiler (Goldreich, Micali, Wigderson; STOC 1987) that originally employed zero-knowledge proofs and public-key encryption.
Multivariate Correlation Attacks and the Cryptanalysis of LFSR-based Stream Ciphers
Cryptanalysis of modern symmetric ciphers may be done by using systems of linear equations with multiple right-hand sides, which describe the encryption process. The tool was introduced by Raddum and Semaev, who developed several solving methods. In this work, probabilities are ascribed to the right-hand sides and a statistical attack is then applied. The new approach is a multivariate generalisation of the correlation attack by Siegenthaler. A fast version of the attack is provided as well. It may be viewed as an extension of the fast correlation attack by Meier and Staffelbach, which exploits so-called parity-checks for linear recurrences. Parity-checks are a particular case of the relations that we introduce in the present work. The notion of a relation does not depend on linear recurrences. We show how to apply the method to some LFSR-based stream ciphers, including those from the Grain family. The new method generally requires fewer keystream bits to recover the initial states than other techniques reported in the literature.
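As a toy illustration of the correlation principle (not the multivariate attack itself; the LFSR representation, taps, and lengths below are arbitrary choices of ours):

    # A guess for an LFSR initial state is scored by how often its output
    # agrees with the observed keystream; agreement noticeably above 1/2
    # signals a correct (or correlated) guess.
    def lfsr_output(state, taps, n):
        s, out = list(state), []
        for _ in range(n):
            out.append(s[-1])
            fb = 0
            for t in taps:
                fb ^= s[t]
            s = [fb] + s[:-1]
        return out

    def agreement(guess_bits, keystream):
        hits = sum(a == b for a, b in zip(guess_bits, keystream))
        return hits / len(keystream)

    # Example usage: score a candidate state against observed keystream bits.
    # candidate = lfsr_output([1,0,1,1,0,0,1,0], taps=[0, 3], n=200)
    # bias = agreement(candidate, observed_keystream)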
Single Instance Self-Masking via Permutations
Self-masking allows the masking of the success criterion, the part of a problem instance (such as the target sum in a subset-sum instance) that restricts the number of solutions. Self-masking is used to prevent the leakage of helpful information to attackers, while keeping the original solution valid and, at the same time, not increasing the number of unplanned solutions.
Self-masking can be achieved by XORing the sums of two (or more) independent subset-sum instances \cite{DD20, CDM22}, thereby eliminating all known attacks that use the value of the sum to find the subset fast, namely in polynomial time, much faster than the naive exponential exhaustive search.
We demonstrate that the concept of self-masking can be applied to a single instance of the subset sum and a single instance of the permuted secret-sharing polynomials.
We further introduce the benefit of permuting the bits of the success criteria, avoiding leakage of information on the value of the $i$'th bit of the success criteria, in the case of a single instance, or the parity of the $i$'th bit of the success criteria in the case of several instances.
In the case of several instances, we permute the success-criterion bits of each instance prior to XORing them with each other. One basic permutation and its nested versions (e.g., $\pi^i$) are used, keeping the solution space small and, at the same time, attempting to create an ``all or nothing'' effect, where trying a wrong $\pi$ reveals little.
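A small sketch of the bit-permutation step described above (our reading of the scheme; the permutation representation and parameter names are ours):

    # Mask several instances by permuting each instance's success-criterion
    # bits with a nested power of one base permutation pi, then XOR them.
    def permute_bits(value, perm):
        # Output bit j takes input bit perm[j].
        return sum(((value >> perm[j]) & 1) << j for j in range(len(perm)))

    def masked_criterion(criteria, pi):
        acc = 0
        for i, c in enumerate(criteria):
            v = c
            for _ in range(i + 1):      # instance i is permuted by pi^(i+1)
                v = permute_bits(v, pi)
            acc ^= v
        return acc

    # Example with 8-bit criteria and a rotation as the base permutation pi:
    pi = [(j + 3) % 8 for j in range(8)]
    mask = masked_criterion([0b10110010, 0b01011100], pi)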
Maximally-Fluid MPC with Guaranteed Output Delivery
To overcome the limitations of traditional secure multi-party computation (MPC) protocols that consider a static set of participants, in a recent work, Choudhuri et al. [CRYPTO 2021] introduced a new model called Fluid MPC, which supports {\em dynamic} participants. Protocols in this model allow parties to join and leave the computation as they wish. Unfortunately, known fluid MPC protocols (even with a strong honest majority) either achieve only security with abort, or require strong computational and trusted-setup assumptions.
In this work, we consider the "hardest" setting --- called the maximally-fluid model --- where each party can leave the computation after participating in a single round. We study the problem of designing maximally-fluid MPC protocols that achieve security with guaranteed output delivery, and obtain the following main results:
1. We design a perfectly secure maximally-fluid MPC protocol, that achieves guaranteed output delivery against unbounded adversaries who are allowed to corrupt less than a third of the parties in every round/committee.
2. For the case where the adversary is allowed to corrupt up to half of the parties in each committee, we present a new computationally secure maximally-fluid MPC protocol with guaranteed output delivery. Unlike prior works that require correlated setup and NIZKs, our construction only uses a common random string setup and is based on linearly-homomorphic equivocal commitments.
Post-Quantum Privacy Pass via Post-Quantum Anonymous Credentials
It is known that one can generically construct a post-quantum anonymous credential scheme, supporting the showing of arbitrary predicates on its attributes using general-purpose zero-knowledge proofs secure against quantum adversaries [Fischlin, CRYPTO 2006].
Traditionally, such a generic instantiation is thought to come with impractical sizes and performance. We show that with careful choices and optimizations, such a scheme can perform surprisingly well.
In fact, it performs competitively against state-of-the-art post-quantum blind signatures, for the simpler problem of post-quantum unlinkable tokens, required for a post-quantum version of Privacy Pass.
To wit, a post-quantum Privacy Pass constructed in this way using zkDilithium, our proposal for a STARK-friendly variation on Dilithium2, allows for a trade-off between token size (85–175KB) and generation time (0.3–5s) with a proof security level of 115 bits. Verification of these tokens can be done in 20–30ms. We argue that these tokens are reasonably practical, adding less than a second upload time over traditional tokens, supported by a measurement study.
Finally, we point out a clear advantage of our approach: the flexibility afforded by general-purpose zero-knowledge proofs. We demonstrate this by showing how to construct a rate-limited variant of Privacy Pass that does not rely on non-collusion for privacy.
Accelerating HE Operations from Key Decomposition Technique
Lattice-based homomorphic encryption (HE) schemes are based on the noisy encryption technique, where plaintexts are masked with some random noise for security. Recent advanced HE schemes rely on a decomposition technique to manage the growth of noise: a ciphertext entry is converted into a short vector, which is then multiplied with an evaluation key. Prior to this work, the decomposition procedure was the most time-consuming part, as it requires discrete Fourier transforms (DFTs) over the base ring for efficient polynomial arithmetic. In this paper, an expensive decomposition operation over a large modulus is replaced with relatively cheap operations over a ring of integers with a small bound. Notably, the cost of DFTs is reduced from quadratic to linear in the level of a ciphertext without any extra noise growth. We demonstrate the implications of our approach by applying it to the key-switching procedure. Our experiments show that the new key-switching method achieves a speedup of 1.2–2.3 or 2.1–3.3 times over the previous method when the dimension of the base ring is $2^{15}$ or $2^{16}$, respectively.
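To illustrate the kind of decomposition being discussed, here is a hedged sketch of the textbook gadget (digit) decomposition used in lattice-based HE to keep noise growth small; the parameters are illustrative, and this is not the paper's new method (which replaces precisely this step with cheaper small-modulus operations):

    def gadget_decompose(x, q, base):
        """Write x (mod q) as balanced digits d_i with |d_i| <= base // 2,
        so that sum(d_i * base**i) == x (mod q); small digits keep noise
        growth small when multiplied with the evaluation key."""
        x %= q
        digits = []
        num_digits, t = 0, q
        while t > 0:
            num_digits += 1
            t //= base
        for _ in range(num_digits):
            d = x % base
            if d > base // 2:        # recenter the digit around zero
                d -= base
            x = (x - d) // base
            digits.append(d)
        return digits

    q, base = 2**16 + 1, 4
    x = 12345
    ds = gadget_decompose(x, q, base)
    assert sum(d * base**i for i, d in enumerate(ds)) % q == x % q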
Generic Construction of Forward Secure Public Key Authenticated Encryption with Keyword Search
In this paper, we propose a generic construction of forward secure public key authenticated encryption with keyword search (FS-PAEKS) from PAEKS. In addition to PAEKS, we employ the 0/1 encodings proposed by Lin et al. (ACNS 2005). Here, forward security means that a newly generated ciphertext cannot be searched by previously generated trapdoors. We also show that the Jiang et al. FS-PAEKS scheme (The Computer Journal 2023) does not provide forward security. Our generic construction is quite simple, and it can also be applied to construct forward secure public key encryption with keyword search (FS-PEKS). It yields an FS-PEKS scheme comparable in efficiency to the previous scheme. Moreover, it eliminates the hierarchical structure (Abdalla et al., JoC 2016) or attribute-based feature (Zeng et al., IEEE Transactions on Cloud Computing 2022) of the previous generic constructions, which is meaningful from a feasibility perspective.
An Overview of Hash Based Signatures
Digital signatures are one of the most basic cryptographic building blocks, providing attractive security features such as authenticity, unforgeability, and undeniability. The security of existing state-of-the-art digital signatures rests on number-theoretic hardness assumptions such as the discrete logarithm and integer factorization. However, these problems do not remain hard in the quantum world: quantum algorithms such as Shor's algorithm solve them in polynomial time. As an alternative, a new direction of research called post-quantum cryptography (PQC) is expected to provide a new generation of quantum-resistant digital signatures. Hash-based signatures are one such candidate. Hash-based signature schemes are digital signature schemes that use hash functions as their central building block. They are efficient, flexible, and can be used in a variety of applications. In this document, we provide an overview of hash-based signatures. Our presentation covers a wide range of aspects and is not only comprehensible for readers without expertise in the subject matter, but also serves as a valuable reference for experts.
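For a concrete taste of the family, here is a minimal Lamport one-time signature in Python (a toy sketch with SHA-256; each key pair must sign at most one message):

    import hashlib, secrets

    H = lambda data: hashlib.sha256(data).digest()

    def keygen():
        # Two secret 32-byte values per message bit; the public key is
        # simply their hashes.
        sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(256)]
        pk = [(H(s0), H(s1)) for s0, s1 in sk]
        return sk, pk

    def msg_bits(msg):
        d = hashlib.sha256(msg).digest()
        return [(d[i // 8] >> (i % 8)) & 1 for i in range(256)]

    def sign(sk, msg):
        # Reveal one secret per bit of the message digest.
        return [sk[i][b] for i, b in enumerate(msg_bits(msg))]

    def verify(pk, msg, sig):
        return all(H(sig[i]) == pk[i][b] for i, b in enumerate(msg_bits(msg)))

    sk, pk = keygen()
    assert verify(pk, b"hello", sign(sk, b"hello"))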
Unbounded Leakage-Resilience and Intrusion-Detection in a Quantum World
Can an adversary hack into our computer and steal sensitive data such as cryptographic keys? This question is almost as old as the Internet and significant effort has been spent on designing mechanisms to prevent and detect hacking attacks. Once quantum computers arrive, will the situation remain the same or can we hope to live in a better world?
We first consider ubiquitous side-channel attacks, which aim to leak side information on secret system components, studied in the leakage-resilient cryptography literature. Classical leakage-resilient cryptography must necessarily impose restrictions on the type of leakage one aims to protect against. As a notable example, the most well-studied leakage model is that of bounded leakage, where it is assumed that an adversary learns at most $\ell$ bits of leakage on secret components, for some leakage bound $\ell$. Although this leakage bound is necessary, many real-world side-channel attacks cannot be captured by bounded leakage. In this work, we design cryptographic schemes that provide guarantees against arbitrary side-channel attacks:
- Using techniques from unclonable quantum cryptography, we design several basic leakage-resilient primitives, such as public- and private-key encryption, (weak) pseudorandom functions, and digital signatures which remain secure under (polynomially) unbounded classical leakage. In particular, this leakage can be much longer than the (quantum) secret being leaked upon. In our view, leakage is the result of observations of quantities such as power consumption and hence is most naturally viewed as classical information. Notably, the leakage-resilience of our schemes holds even in the stronger adaptive "LOCC leakage'' model where the main adversary and the leakage adversary can cooperate via arbitrary local quantum operations and two-way classical communication in multiple rounds.
- What if the adversary simply breaks in and obtains unbounded quantum leakage (thus making leakage-resilience impossible)? Going beyond leakage, what if the adversary can even tamper with the data arbitrarily? We initiate the study of intrusion-detection in the quantum setting, where one would like to detect if security has been compromised even in the face of an arbitrary intruder attack which can leak and tamper with classical as well as quantum data. We design cryptographic schemes supporting intrusion detection for a host of primitives such as public- and private-key encryption, digital signature, functional encryption, program obfuscation and software protection. Our schemes are based on techniques from cryptography with secure key leasing and certified deletion.
Multi-Instance Randomness Extraction and Security against Bounded-Storage Mass Surveillance
Consider a state-level adversary who observes and stores large amounts of encrypted data from all users on the Internet, but does not have the capacity to store it all. Later, it may target certain "persons of interest" in order to obtain their decryption keys. We would like to guarantee that, if the adversary's storage capacity is only (say) $1\%$ of the total encrypted data size, then even if it can later obtain the decryption keys of arbitrary users, it can only learn something about the contents of (roughly) $1\%$ of the ciphertexts, while the rest will maintain full security. This can be seen as an extension of incompressible cryptography (Dziembowski CRYPTO '06, Guan, Wichs and Zhandry EUROCRYPT '22) to the multi-user setting. We provide solutions in both the symmetric key and public key setting with various trade-offs in terms of computational assumptions and efficiency.
As the core technical tool, we study an information-theoretic problem which we refer to as "multi-instance randomness extraction". Suppose $X_1, \ldots, X_t$ are correlated random variables whose total joint min-entropy rate is $\alpha$, but we know nothing else about their individual entropies. We choose $t$ random and independent seeds $S_1, \ldots, S_t$ and attempt to individually extract some small amount of randomness $Y_i = \mathsf{Ext}(X_i;S_i)$ from each $X_i$. We'd like to say that roughly an $\alpha$-fraction of the extracted outputs $Y_i$ should be indistinguishable from uniform even given all the remaining extracted outputs and all the seeds. We show that this indeed holds for specific extractors based on Hadamard and Reed-Muller codes.
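For intuition, the Hadamard-code-based extractor mentioned above is just the inner product over GF(2); a one-bit-output sketch (our simplification, with uniform stand-ins for the correlated sources):

    import secrets

    def hadamard_ext(x, seed):
        # Ext(x; s) = <x, s> mod 2: parity of the bitwise AND of x and s.
        return bin(x & seed).count("1") % 2

    t, nbits = 8, 128
    xs = [secrets.randbits(nbits) for _ in range(t)]     # stand-ins for the X_i
    seeds = [secrets.randbits(nbits) for _ in range(t)]  # fresh independent S_i
    ys = [hadamard_ext(x, s) for x, s in zip(xs, seeds)]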
Machine-Checked Security for $\mathrm{XMSS}$ as in RFC 8391 and $\mathrm{SPHINCS}^{+}$
This work presents a novel machine-checked tight security proof for $\mathrm{XMSS}$, a stateful hash-based signature scheme that is (1) standardized in RFC 8391 and NIST SP 800-208, and (2) employed as a primary building block of $\mathrm{SPHINCS}^{+}$, one of the signature schemes recently selected for standardization as a result of NIST's post-quantum competition.
In 2020, Kudinov, Kiktenko, and Fedorov pointed out a flaw affecting the tight security proofs of $\mathrm{SPHINCS}^{+}$ and $\mathrm{XMSS}$. For the case of $\mathrm{SPHINCS}^{+}$, this flaw was fixed in a subsequent tight security proof by Hülsing and Kudinov. Unfortunately, employing the fix from this proof to construct an analogous tight security proof for XMSS would merely demonstrate security with respect to an insufficient notion.
At the cost of modeling the message-hashing function as a random oracle, we complete the tight security proof for $\mathrm{XMSS}$ and formally verify it using the EasyCrypt proof assistant. As part of this endeavor, we formally verify the crucial step common to (the security proofs of) $\mathrm{SPHINCS}^{+}$ and $\mathrm{XMSS}$ that was found to be flawed before, thereby confirming that the core of the aforementioned security proof by Hülsing and Kudinov is correct.
As this is the first work to formally verify proofs for hash-based signature schemes in EasyCrypt, we develop several novel libraries for the fundamental cryptographic concepts underlying such schemes — e.g., hash functions and digital signature schemes — establishing a common starting point for future formal verification efforts. These libraries will be particularly helpful in formally verifying proofs of other hash-based signature schemes such as $\mathrm{LMS}$ or $\mathrm{SPHINCS}^{+}$.
Game Theoretical Analysis of DAG-Ledgers Backbone
We study the rational behaviors of agents in DAG-based distributed ledgers. We analyze generic algorithms that encapsulate the main actions of agents in a DAG-based distributed ledger: voting for a block and checking its validity. Knowing that these actions have costs and that validating a block gives rewards to the agents who participated in the validation procedure, we use game theory to study how strategic agents behave while trying to maximize their gains. We consider scenarios with different types of agents and investigate whether there exist equilibria where the properties of the protocols are guaranteed. The analysis focuses on the study of equilibria when invalid blocks may be issued. We find that in such a case, there exist equilibria where protocol properties may be violated. However, we also show that in all studied cases, there exist equilibria satisfying the protocol properties.
Quasi-linear masking to protect against both SCA and FIA
The implementation of cryptographic algorithms must be protected against physical attacks. Side-channel and fault injection analyses are two prominent such implementation-level attacks. Protections against either do exist; they are characterized by security orders: the higher the order, the more difficult the attack.
In this paper, we leverage fast discrete Fourier transform to reduce the complexity of high-order masking, and extend it to allow for fault detection and/or correction. The security paradigm is that of code-based masking. Coding theory is amenable both to mix the information and masking material at a prescribed order, and to detect and/or correct errors purposely injected by an attacker.
For the first time, we show that quasi-linear masking (pioneered by Goudarzi, Joux and Rivain at ASIACRYPT 2018) can be achieved alongside cost amortisation. This technique consists in masking several symbols/bytes with the same masking material, thereby improving the efficiency of the masking. Similarly, it allows optimizing the detection capability of the codes, since linear codes are all the more efficient as the information to protect is longer. Namely, we prove mathematically that our scheme features a side-channel security order of $d+1-t$, detects $d$ faults, and corrects $\lfloor(d-1)/2\rfloor$ faults, where $2d+1$ is the encoding length and $t$ is the information size ($t\geq1$). Applied to AES, one can get side-channel protection of order $d=7$ when masking one column/line ($t=4$ bytes) at once.
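As an illustrative instantiation of these formulas (our numbers, not a parameter set from the paper): $$2d+1 = 15 \;\Rightarrow\; d = 7,\ t = 4: \quad \text{side-channel order } d+1-t = 4, \quad \text{detects } d = 7 \text{ faults}, \quad \text{corrects } \lfloor (d-1)/2 \rfloor = 3 \text{ faults}.$$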
In addition to the theory, that makes use of the Frobenius Additive Fast Fourier Transform, we show performance results, both in software and hardware.
CaSCaDE: (Time-Based) Cryptography from Space Communications DElay
Time-based cryptographic primitives such as Time-Lock Puzzles (TLPs) and Verifiable Delay Functions (VDFs) have proven to be pivotal in several areas of cryptography. All existing candidate constructions, however, guarantee time-delays based on the average hardness of sequential computational problems. This means that any algorithmic or hardware improvement affects parameter choices and may turn deployed systems insecure.
To address this issue, we investigate how to build time-based cryptographic primitives whose delays depend on sources other than sequential computation: namely, transmission delays caused by sequential communication. We explore the sequential communication delays that arise when sending a message through a constellation of satellites in space. This setting has the advantage that distances between protocol participants are guaranteed, as the positions of satellites are observable from Earth; moreover, delay lower bounds are unconditional and can easily be computed using the laws of physics (no transmission travels faster than the speed of light).
We introduce proofs of sequential communication delay (SCD) in the Universal Composability framework, which can be used to convince a verifier that a message has accrued delay by traversing a path among a set of scattered satellites. With our SCD proofs, we realize the first proposals of Publicly Verifiable TLPs and VDFs whose delay guarantees are rooted in physical limits, rather than in ever-decreasing computational hardness. Finally, our notion of SCD paves the way to the first Delay Encryption construction not based on supersingular isogenies.
Efficient Laconic Cryptography from Learning With Errors
Laconic cryptography is an emerging paradigm that enables cryptographic primitives with sublinear communication complexity in just two messages. In particular, a two-message protocol between Alice and Bob is called laconic if its communication and computation complexity are essentially independent of the size of Alice's input. This can be thought of as a dual notion of fully-homomorphic encryption, as it enables "Bob-optimized" protocols. This paradigm has led to tremendous progress in recent years. However, all existing constructions of laconic primitives are considered only of theoretical interest: They all rely on non-black-box cryptographic techniques, which are highly impractical.
This work shows that non-black-box techniques are not necessary for basic laconic cryptography primitives. We propose a completely algebraic construction of laconic encryption, a notion that we introduce in this work, which serves as the cornerstone of our framework. We prove that the scheme is secure under the standard Learning With Errors assumption (with polynomial modulus-to-noise ratio). We provide the first proof-of-concept implementations of laconic primitives, demonstrating that the construction is indeed practical: for a database size of $2^{50}$, encryption and decryption run in the order of single-digit milliseconds.
Laconic encryption can be used as a black box to construct other laconic primitives. Specifically, we show how to construct:
- Laconic oblivious transfer
- Registration-based encryption scheme
- Laconic private-set intersection protocol
All of the above have essentially optimal parameters and similar practical efficiency. Furthermore, our laconic encryption can be preprocessed such that the online encryption step is entirely combinatorial and therefore much more efficient. Using similar techniques, we also obtain identity-based encryption with an unbounded identity space and tight security proof (in the standard model).
Real-World Deniability in Messaging
This work explores real-world deniability in messaging. We propose a formal model that considers the entire messaging system to analyze deniability in practice. Applying this model to the Signal application and DKIM-protected email, we demonstrate that these systems do not offer practical deniability guarantees. Additionally, we analyze 140 court cases in Switzerland that use conversations on messaging applications as evidence and find that none consider deniability, providing evidence that this property does not have an impact in the legal setting. Based on these technical and legal findings, we assess whether deniability is a desirable property and the challenges and shortcomings of designing a system that is deniable in practice. We posit that systems should either offer real-world deniability or refrain from claiming to achieve it. We discuss how to choose an appropriate threat model for deniability in a given context and how to design communication systems that are deniable in practice. For Signal, we propose and discuss a simple yet effective solution: the application should enable direct modification of locally stored messages in the user interface. This position paper raises several unanswered questions, aiming to further stimulate discussion and research on real-world deniability in messaging.
Discretization Error Reduction for Torus Fully Homomorphic Encryption
In the recent history of fully homomorphic encryption, bootstrapping has been actively studied across many HE schemes. As bootstrapping is the essential process that transforms somewhat homomorphic encryption schemes into fully homomorphic ones, enhancing its performance is one of the key factors in improving the utility of homomorphic encryption.
In this paper, we propose an extended bootstrapping for TFHE, which we name EBS. One of the main drawbacks of TFHE bootstrapping is that its precision is mainly determined by the polynomial dimension $N$. Thus, to bootstrap with high precision, one must enlarge $N$ or resort to an alternative method. Our EBS makes it possible to select a small $N$ for the parameters while bootstrapping in a higher dimension to maintain high precision. Moreover, it can easily be parallelized for faster computation. The EBS can also be adapted to the other known variants of TFHE bootstrapping that build on the original bootstrapping algorithm.
We implement our EBS along with the known full-domain bootstrapping methods ($\mathsf{FDFB}$, $\mathsf{TOTA}$, $\mathsf{Comp}$) and show how much our EBS improves the precision of those bootstrapping methods. We provide experimental results and a thorough analysis of our EBS, and show that EBS is capable of bootstrapping with high precision even with a small $N$, and thus with a small key size and lower complexity than choosing a large $N$ from the outset.
Generic Construction of Broadcast Authenticated Encryption with Keyword Search
As a multi-receiver variant of public key authenticated encryption with keyword search (PAEKS), broadcast authenticated encryption with keyword search (BAEKS) was proposed by Liu et al. (ACISP 2021). BAEKS focuses on receiver anonymity, where no information about the receiver is leaked from ciphertexts, which is reminiscent of anonymous broadcast encryption. However, there is room for improving their security definitions; e.g., the two challenge sets of receivers are selected before the setup phase, and an adversary is not allowed to corrupt any receiver. In this paper, we propose a generic construction of BAEKS derived from PAEKS that provides ciphertext anonymity and consistency in a multi-receiver setting. The proposed construction is an extension of the generic construction proposed by Libert et al. (PKC 2012) for anonymous broadcast encryption and supports adaptive corruptions. We also demonstrate that the Qin et al. PAEKS scheme (ProvSec 2021) provides ciphertext anonymity and consistency in a multi-receiver setting and can be employed as a building block of the proposed generic construction. Moreover, we demonstrate that the Mukherjee BAEKS scheme (ACISP 2023) can be employed as a building block of the proposed generic construction.
Prime Match: A Privacy-Preserving Inventory Matching System
Inventory matching is a standard mechanism for trading financial stocks by which buyers and sellers can be paired. In the financial world, banks often undertake the task of finding such matches between their clients. The related stocks can be traded without adversely impacting the market price for either client. If matches between clients are found, the bank can offer the trade at advantageous rates. If no match is found, the parties have to buy or sell the stock in the public market, which introduces additional costs.
A problem with the process as presently conducted is that the involved parties must share their orders to buy or sell a particular stock, along with the intended quantity (number of shares), with the bank. Clients worry that if this information were to “leak” somehow, other market participants would become aware of their intentions and thus cause the price to move adversely against them before their transaction finalizes.
We provide a solution, Prime Match, that enables clients to match their orders efficiently with reduced market impact while maintaining privacy. In the case where there are no matches, no information is revealed. Our main cryptographic innovation is a two-round secure linear comparison protocol for computing the minimum between two quantities without preprocessing and with malicious security, which can be of independent interest. We report benchmarks of our Prime Match system, which runs in production and is adopted by a large bank in the US -- J.P. Morgan. The system is designed utilizing a star topology network, which provides clients with a centralized node (the bank) as an alternative to the idealized assumption of point-to-point connections, which would be impractical and undesired for the clients to implement in reality.
Prime Match is the first secure multiparty computation solution running live in the traditional financial world.
High Throughput Lattice-based Signatures on GPUs: Comparing Falcon and Mitaka
The US National Institute of Standards and Technology initiated a standardization process for post-quantum cryptography in 2017, with the aim of selecting key encapsulation mechanisms and signature schemes that can withstand the threat from emerging quantum computers. In 2022, Falcon was selected as one of the standard signature schemes, eventually attracting effort to optimize the implementation of Falcon on various hardware architectures for practical applications. Recently, Mitaka was proposed as an alternative to Falcon, allowing parallel execution of most of its operations. These recent advancements motivate us to develop high throughput implementations of the Falcon and Mitaka signature schemes on Graphics Processing Units (GPUs), a massively parallel architecture widely available on cloud service platforms. In this paper, we propose the first parallel implementation of Falcon on various GPUs. An iterative version of the sampling process in Falcon, which is also the most time-consuming Falcon operation, was developed. This allows us to implement Falcon signature generation without relying on expensive recursive function calls on GPUs. In addition, we propose a parallel random samples generation approach to accelerate the performance of Mitaka on GPUs. We evaluate our implementation techniques on state-of-the-art GPU architectures (RTX 3080, A100, T4 and V100). Experimental results show that our Falcon-512 implementation achieves 58,595 signatures/second and 2,721,562 verifications/second on an A100 GPU, which is 20.03× and 29.51× faster than the highly optimized AVX2 implementation on CPU. Our Mitaka implementation achieves 161,985 signatures/second and 1,421,046 verifications/second on the same GPU. Due to the adoption of a parallelizable sampling process, Mitaka signature generation enjoys ≈ 2–20× higher throughput than Falcon on various GPUs. The high throughput signature generation and verification achieved by this work can be very useful in various emerging applications, including the Internet of Things.
A New Linear Distinguisher for Four-Round AES
In SAC’14, Biham and Carmeli presented a novel attack on DES, involving a variation of Partitioning Cryptanalysis. This was further extended in ToSC’18 by Biham and Perle into Conditional Linear Cryptanalysis in the context of Feistel ciphers. In this work, we formalize this cryptanalytic technique for block ciphers in general and derive several properties. This conditional approximation is then used to approximate the function $\mathrm{inv}: \mathrm{GF}(2^8) \to \mathrm{GF}(2^8): x \mapsto x^{254}$, which forms the only source of non-linearity in the AES. By extending the approximation to encompass the full AES round function, a linear distinguisher for four-round AES in the known-plaintext model is constructed, the existence of which is often understood to be impossible. We furthermore demonstrate a key-recovery attack capable of extracting 32 bits of information in 4-round AES using $2^{125.62}$ data and time. In addition to suggesting a new approach to advancing the cryptanalysis of the AES, this result moreover demonstrates a caveat in the standard interpretation of the Wide Trail Strategy, the design framework underlying many SPN-based ciphers published in recent years.
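For reference, the $\mathrm{inv}$ map above is field inversion, since the multiplicative group of $\mathrm{GF}(2^8)$ has order 255; a self-contained sketch over the AES polynomial (our illustration, not part of the attack):

    def gf_mul(a, b):
        # Multiply in GF(2^8) = GF(2)[x] / (x^8 + x^4 + x^3 + x + 1).
        p = 0
        for _ in range(8):
            if b & 1:
                p ^= a
            carry = a & 0x80
            a = (a << 1) & 0xFF
            if carry:
                a ^= 0x1B            # reduce modulo the AES polynomial
            b >>= 1
        return p

    def gf_inv(a):
        # x^254 = x^(-1) for x != 0, and 0 maps to 0.
        r = 1
        for _ in range(254):
            r = gf_mul(r, a)
        return r

    assert all(a == 0 or gf_mul(a, gf_inv(a)) == 1 for a in range(256))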
Extended Abstract: HotStuff-2: Optimal Two-Phase Responsive BFT
In this paper, we observe that it is possible to solve partially-synchronous BFT and simultaneously achieve $O(n^2)$ worst-case communication, optimistically linear communication, a two-phase commit regime within a view, and optimistic responsiveness. Prior work falls short of achieving one or more of these properties; e.g., the most closely related work, HotStuff, requires a three-phase view while achieving all other properties. We demonstrate that these properties are achievable through a two-phase HotStuff variant named HotStuff-2.
The quest for two-phase HotStuff variants that achieve all the above desirable properties has been long, producing a series of results that are yet sub-optimal and, at the same time, are based on somewhat heavy hammers. HotStuff-2 demonstrates that none of these are necessary: HotStuff-2 is remarkably simple, adding no substantive complexity to the original HotStuff protocol.
The main takeaway is that two phases are enough for BFT after all.
Monomial Isomorphism for Tensors and Applications to Code Equivalence Problems
Starting from the problem of $d$-Tensor Isomorphism ($d$-TI), we study the relation between various Code Equivalence problems in different metrics. In particular, we show a reduction from the sum-rank metric (CE${}_{sr}$) to the rank metric (CE${}_{rk}$). To obtain this result, we investigate reductions between tensor problems. We define the Monomial Isomorphism problem for $d$-tensors ($d$-TI${}^*$), where, given two $d$-tensors, we ask if there are $d-1$ invertible matrices and a monomial matrix sending one tensor into the other. We link this problem to the well-studied $d$-TI, and the TI-completeness of $d$-TI${}^*$ is shown. Due to this result, we obtain a reduction from CE${}_{sr}$ to CE${}_{rk}$. In the literature, a similar result was known, but it required an additional assumption on the automorphisms of matrix codes. Since many constructions based on the hardness of Code Equivalence problems are emerging in cryptography, we analyze how such reductions can be taken into account in the design of cryptosystems based on CE${}_{sr}$.
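For readers unfamiliar with the term (a standard definition, our gloss): a monomial matrix is the product of a permutation matrix and an invertible diagonal matrix, $$Q = P \cdot D, \qquad P \text{ a permutation matrix}, \quad D = \mathrm{diag}(d_1, \ldots, d_n), \ d_i \in \mathbb{F}^{*},$$ i.e., it has exactly one non-zero entry in each row and each column.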
Registered (Inner-Product) Functional Encryption
Registered encryption (Garg et al., TCC'18) is an emerging paradigm that tackles the key-escrow problem associated with identity-based encryption by replacing the private-key generator with a much weaker entity known as the key curator. The key curator holds no secret information and is responsible for the following tasks:
(i) update the master public key whenever a new user registers its own public key to the system;
(ii) provide helper decryption keys to the users already registered in the system, in order to still enable them to decrypt after new users join the system.
For practical purposes, tasks (i) and (ii) need to be efficient, in the sense that the size of the public parameters, of the master public key, and of the helper decryption keys, as well as the running times for key generation and user registration, and the number of updates, must be small.
In this paper, we generalize the notion of registered encryption to the setting of functional encryption (FE). As our main contribution, we show an efficient construction of registered FE for the special case of ($attribute\text{-}hiding$) inner-product predicates, built over asymmetric bilinear groups of prime order. Our scheme supports a $large$ attribute universe and is proven secure in the bilinear generic group model. We also implement our scheme and experimentally demonstrate the efficiency requirements of the registered settings. Our second contribution is a feasibility result where we build registered FE for $\mathsf{P}/\mathsf{poly}$ based on indistinguishability obfuscation and somewhere statistically binding hash functions.
Fork-Resilient Continuous Group Key Agreement
Continuous Group Key Agreement (CGKA) lets an evolving group of clients agree on a sequence of group keys. An important application of CGKA is scalable asynchronous end-to-end (E2E) encrypted group messaging.
A major problem preventing the use of CGKA over unreliable infrastructure is so-called forks. A fork occurs when group members have diverging views of the group's history (and thus its current state), e.g., due to network or server failures. Once communication channels are restored, members resolve a fork by agreeing on the state of the group again. Today's CGKA protocols make fork resolution challenging, as natural resolution strategies seem to conflict with the way the protocols enforce group-state agreement and forward secrecy. Meanwhile, secure group messaging protocols that do support fork resolution do not scale nearly as well as CGKA does.
In this work, we pave the way to practical scalable E2E messaging over unreliable infrastructure. To that end, we generalize CGKA to Fork-Resilient CGKA (FR-CGKA), which allows clients to process significantly more types of out-of-order network traffic. This is important for many natural fork-resolution procedures, as they are based, in part, on replaying missed traffic. Next, we give two FR-CGKA constructions: a practical one based on the CGKA underlying the MLS messaging standard, and an optimally secure one (albeit with only theoretical efficiency). To further assist with fork resolution, we introduce a simple new abstraction to describe a client's local protocol state. The abstraction captures all and only the information relevant to natural fork resolution, making it easier for higher-level fork-resolution procedures to work with and reason about. We define a black-box extension of an FR-CGKA which maintains such a description of a client's internal state. Finally, as a proof of concept, we give a basic fork-resolution protocol.
cqlin: Efficient linear operations on KZG commitments with cached quotients
Given two KZG-committed polynomials $f(X),g(X)\in \mathbb{F}_{<n}[X]$, a matrix $M\in \mathbb{F}^{n\times n}$, and a subgroup $H\subset \mathbb{F}^*$ of order $n$, we present a protocol for checking that $f|_{H}\cdot M = g|_{H}$.
After preprocessing, the prover makes $O(n)$ field and group operations.
This presents a significant improvement over the lincheck protocols in [CHMMVW, COS], where the prover's run-time (also after preprocessing) was quasilinear in the number of non-zeroes of $M$, which could be $n^2$.
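Unpacking the notation (our reading of the claim): writing $H = \{h_1, \ldots, h_n\}$ and $f|_{H} := (f(h_1), \ldots, f(h_n))$ for the row vector of evaluations of $f$ over $H$, the relation being checked is $$f|_{H} \cdot M = g|_{H} \;\iff\; \sum_{i=1}^{n} f(h_i)\, M_{ij} = g(h_j) \quad \text{for all } j \in \{1, \ldots, n\}.$$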
Locally Covert Learning
The goal of a covert learning algorithm is to learn a function $f$ by querying it, while ensuring that an adversary, who sees all queries and their responses, is unable to (efficiently) learn any more about $f$ than they could learn from random input-output pairs. We focus on a relaxation that we call local covertness, in which queries are distributed across $k$ servers and we only limit what is learnable by $k - 1$ colluding servers.
For any constant $k$, we give a locally covert algorithm for efficiently learning any Fourier-sparse function (technically, our notion of learning is improper, agnostic, and with respect to the uniform distribution). Our result holds unconditionally and against computationally unbounded adversaries. Prior to our work, such an algorithm was known only for the special case of $O(\log n)$-juntas, and only with $k = 2$ servers (Ishai et al., Crypto 2019).
Our main technical observation is that the original Goldreich-Levin algorithm only utilizes i.i.d. pairs of correlated queries, where each half of every pair is uniformly random. We give a simple generalization of this algorithm in which pairs are replaced by $k$-tuples in which any $k - 1$ components are jointly uniform. The cost of this generalization is that the number of queries needed grows exponentially with $k$.
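A sketch of the tuple structure described above (our construction for illustration, not the paper's exact sampler): any $k-1$ components are jointly uniform, and only all $k$ together reveal the hidden offset.

    import secrets

    def correlated_tuple(k, nbits, offset):
        # First k-1 components are uniform; the last is their XOR with the
        # offset, so any k-1 of the k components are jointly uniform.
        parts = [secrets.randbits(nbits) for _ in range(k - 1)]
        last = offset
        for p in parts:
            last ^= p
        return parts + [last]

    t = correlated_tuple(k=3, nbits=16, offset=0xBEEF)
    assert t[0] ^ t[1] ^ t[2] == 0xBEEF   # XOR of all k reveals only the offset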
Additional Modes for ASCON
NIST selected the ASCON family of cryptographic primitives for standardization in February 2023 as the final step in the Lightweight Cryptography Competition. The ASCON submission to the competition provided Authenticated Encryption with Associated Data (AEAD), hashing, and Extensible Output Function (XOF) modes. Real-world cryptographic systems often need more than packet encryption and simple hashing. Keyed message authentication, key derivation, cryptographically secure pseudo-random number generation (CSPRNG), password hashing, and encryption of sensitive values in memory are also important. This paper defines additional modes that can be deployed on top of ASCON, based on proven designs from the literature.
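As an illustration of the kind of derived modes meant here (our stand-in sketch: SHAKE128 replaces ASCON-XOF, which is not available in Python's hashlib; these are not the paper's exact constructions):

    import hashlib

    def xof_mac(key, msg, taglen=16):
        # Prefix-keyed XOF as a MAC, the textbook pattern for sponge/XOF modes.
        return hashlib.shake_128(key + msg).digest(taglen)

    def xof_kdf(key, context, outlen):
        # Domain-separated key derivation from the same XOF.
        return hashlib.shake_128(b"KDF\x00" + key + b"\x00" + context).digest(outlen)

    tag = xof_mac(b"k" * 16, b"packet")
    subkey = xof_kdf(b"k" * 16, b"session-1", 32)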
Hashing to elliptic curves through Cipolla–Lehmer–Müller’s square root algorithm
The present article provides a novel hash function $\mathcal{H}$ to any elliptic curve of $j$-invariant $\neq 0, 1728$ over a finite field $\mathbb{F}_{\!q}$ of large characteristic. As with most hash functions, the sole bottleneck of $\mathcal{H}$ consists in extracting a square root in $\mathbb{F}_{\!q}$. However, $\mathcal{H}$ is designed in such a way that the root can be found by (Cipolla--Lehmer--)Müller's algorithm in constant time. Violation of this constant-time condition is known to be the only obstacle to applying the given algorithm in the cryptographic context. When the field $\mathbb{F}_{\!q}$ is highly $2$-adic and $q \equiv 1 \ (\mathrm{mod} \ 3)$, the new batching technique is the state-of-the-art hashing solution except for some sporadic curves. Indeed, Müller's algorithm costs $\approx 2\log_2(q)$ multiplications in $\mathbb{F}_{\!q}$. In turn, the original Tonelli--Shanks square root algorithm and all of its subsequent modifications have asymptotic complexity $\Theta(\log(q) + g(\nu))$, where $\nu$ is the $2$-adicity of $\mathbb{F}_{\!q}$ and $g(\nu) \neq O(\nu)$. As an example, it is shown that Müller's algorithm actually needs several times fewer multiplications in the field $\mathbb{F}_{\!q}$ (for which $\nu = 96$) of the standardized curve NIST P-224.
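For intuition, here is a compact Python version of the classical Cipolla square root, the ancestor of the (Cipolla--Lehmer--)Müller variant discussed above. Müller's refinement achieves the quoted $\approx 2\log_2(q)$ multiplications; this baseline sketch only illustrates the shared structure, whose cost is independent of the $2$-adicity $\nu$.

    def cipolla_sqrt(n, p):
        # Square root of n modulo an odd prime p via Cipolla's algorithm.
        # Cost is O(log p) field multiplications regardless of the
        # 2-adicity of p - 1, unlike Tonelli-Shanks.
        n %= p
        if n == 0:
            return 0
        if pow(n, (p - 1) // 2, p) != 1:
            return None                 # n is not a square mod p
        a = 0                           # find a with a^2 - n a non-residue
        while pow((a * a - n) % p, (p - 1) // 2, p) != p - 1:
            a += 1
        w2 = (a * a - n) % p            # omega^2 = w2 defines the extension

        def mul(u, v):                  # multiply in F_p[omega]/(omega^2 - w2)
            return ((u[0] * v[0] + u[1] * v[1] * w2) % p,
                    (u[0] * v[1] + u[1] * v[0]) % p)

        r, base, e = (1, 0), (a, 1), (p + 1) // 2
        while e:                        # compute (a + omega)^((p+1)/2)
            if e & 1:
                r = mul(r, base)
            base = mul(base, base)
            e >>= 1
        return r[0]                     # the omega-component vanishes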
TIDAL: Practical Collisions on State-Reduced Keccak Variants
An important tool that has contributed to collision search on Keccak/SHA3 is the Target Difference Algorithm (TDA) and its internal differential counterpart, the Target Internal Difference Algorithm (TIDA), introduced by Dinur et al. in separate works at FSE 2012 and 2013, respectively. These algorithms provide an ingenious way of extending differential trails by one round, exploiting the affine subspaces generated due to the low algebraic degree of the Keccak S-box. The current work introduces TIDAL, which extends TIDA by one more round, capitalizing on the linearization techniques introduced by Guo et al. in JoC. This approach requires increment consistency checks, which are also improved in this work. The TIDAL strategy, in conjunction with a deterministic internal differential trail, has been applied to Keccak variants of up to 400-bit state size and leads to practical collision attacks for most of them up to 5 rounds. In particular, collisions have been confirmed for 4-round Keccak[136, 64] with complexity $2^{20}$ and for 6-round Keccak[84, 16] with complexity $2^{5}$. Further, this work completely characterizes all collision attacks on state-reduced variants, showcasing that TIDAL covers most of this space up to 5 rounds. As state- and round-reduced Keccak variants are used to realize the internal states of many crypto primitives, the results presented here have significant impact. Finally, this work shows new directions for the long-standing problem that state-reduced variants have proven difficult to attack.
Non-Interactive Blind Signatures for Random Messages
Blind signatures allow a signer to issue signatures on messages chosen by the signature recipient. The main property is that the recipient's message is hidden from the signer. There are many applications, including Chaum's e-cash system and Privacy Pass, where no special distribution of the signed message is required, and the message can be random. Interestingly, existing notions do not consider this practical use case separately.
In this paper, we show that constraining the recipient's choice over the message distribution spawns a surprising new primitive that improves the well-established state-of-the-art. We formalize this concept by introducing the notion of non-interactive blind signatures (${\sf NIBS}$). Informally, the signer can create a presignature with a specific recipient in mind, identifiable via a public key. The recipient can use her secret key to finalize it and receive a blind signature on a random message determined by the finalization process. The key idea is that online interaction between the signer and recipient is unnecessary. We show an efficient instantiation of ${\sf NIBS}$ in the random oracle model from signatures on equivalence classes.
Remarkably, the recipient's public key can in this case be a preexisting key for Schnorr or ECDSA signatures, the ElGamal encryption scheme, or even the Diffie-Hellman key exchange. Reusing preexisting public keys allows us to distribute anonymous tokens similarly to cryptocurrency airdropping. Additional contributions include tagged non-interactive blind signatures (${\sf TNIBS}$) and their efficient instantiation, as well as a generic construction in the random oracle or common reference string model based on verifiable random functions, standard signatures, and non-interactive proof systems.
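The resulting syntax can be summarized as a small interface. The Python sketch below only mirrors the flow described above; all names are illustrative rather than taken from the paper.

    class NIBS:
        # Non-interactive blind signatures: the signer issues a presignature
        # offline for a known recipient public key; the recipient finalizes
        # it alone, obtaining a signature on a random message.
        def issue(self, sk_signer, pk_recipient):
            """Create a presignature bound to the recipient's public key."""
            raise NotImplementedError

        def finalize(self, sk_recipient, presignature):
            """Derive (m, signature) for a random message m; the signed
            message is determined by finalization, not chosen online."""
            raise NotImplementedError

        def verify(self, pk_signer, m, signature):
            """Publicly verify the signature under the signer's key."""
            raise NotImplementedError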
Constrained Pseudorandom Functions from Homomorphic Secret Sharing
We propose and analyze a simple strategy for constructing 1-key constrained pseudorandom functions (CPRFs) from homomorphic secret sharing (HSS). In the process, we obtain the following contributions. First, we identify desirable properties of the underlying HSS scheme for our strategy to work. Second, we show that (most) recent existing HSS schemes satisfy these properties, leading to instantiations of CPRFs for various constraints and from various assumptions. Notably, we obtain the first (1-key selectively secure, private) CPRFs for inner-product constraints and (1-key selectively secure) CPRFs for NC$^1$ from the DCR assumption, and more. Lastly, we revisit two applications of HSS, equipped with these additional properties, to secure computation: we obtain secure computation in the silent preprocessing model with one party able to precompute its whole preprocessing material before even knowing the other party, and we construct one-sided statistically secure computation with sublinear communication for restricted forms of computation.
Interoperability in End-to-End Encrypted Messaging
The Digital Markets Act (DMA) is a nascent European Union regulation adopted in May 2022. One of its most controversial provisions is a requirement that so-called “gatekeepers” offering end-to-end encrypted messaging apps, such as WhatsApp, implement “interoperability” with other messaging apps: in essence, encrypted messaging across service providers. This requirement represents a fundamental shift in the design assumptions of existing encrypted messaging systems, most of which are designed to be centralized. Technologists have not really begun thinking about the myriad security, privacy, and functionality questions raised by the interoperability requirement; given that the DMA’s interoperability mandate may take effect as soon as mid-2024, it is critical for researchers to begin understanding the challenges and offering solutions.
In this paper, we take an initial step in this direction. We break down the DMA’s effects on the design of encrypted messaging systems into three main areas: identity, or how to resolve identities across service providers; protocols, or how to establish a secure connection between clients on different platforms; and abuse prevention, or how service providers can detect and take action against users engaging in abuse or spam. For each area, we identify key security and privacy requirements, summarize existing proposals, and examine whether proposals meet our security and privacy requirements. Finally, we propose our own design for an interoperable encrypted messaging system, and point out open problems.
Zero Knowledge Protocols and Signatures from the Restricted Syndrome Decoding Problem
The Restricted Syndrome Decoding Problem (R-SDP) corresponds to the Syndrome Decoding Problem (SDP) with the additional constraint that all entries of the solution error vector must live in a fixed subset of the finite field. In this paper, we study how this problem can be applied to the construction of signatures derived from Zero-Knowledge (ZK) protocols. First, we show that R-SDP appears to be well-suited for this type of application: ZK protocols relying on SDP can easily be modified to use R-SDP, resulting in significant reductions in the communication cost.
We then introduce and analyze a variant of R-SDP, which we call R-SDP$(G)$, with the property that solution vectors can be represented with a number of bits only slightly larger than the security parameter (which is an obvious lower bound). This enables the design of competitive ZK protocols. We show that existing ZK protocols can greatly benefit from the use of R-SDP, achieving signature sizes on the order of 7 kB, which are smaller than those of several other schemes submitted to NIST's additional call for post-quantum digital signatures.
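As a point of reference, verifying a candidate R-SDP solution is straightforward. The toy Python check below, with illustrative and deliberately insecure parameters, spells out the restricted-entry condition from the definition above.

    # Toy R-SDP verification: the error vector must satisfy the syndrome
    # equation AND have all entries in a restricted subset E of F_p.
    # Parameters are illustrative and far from cryptographic size.
    def rsdp_check(H, e, s, E, p):
        if any(x % p not in E for x in e):
            return False                    # restriction violated
        syndrome = [sum(row[j] * e[j] for j in range(len(e))) % p
                    for row in H]
        return syndrome == s

    p = 11
    E = {1, 3, 9, 5, 4}                     # powers of 3 mod 11
    H = [[1, 2, 3, 4], [5, 6, 7, 8]]        # toy parity-check matrix
    e = [3, 1, 9, 1]                        # all entries lie in E
    s = [sum(row[j] * e[j] for j in range(4)) % p for row in H]
    assert rsdp_check(H, e, s, E, p)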
Origami: Fold a Plonk for Ethereum’s VDF
We present the Origami verifiable delay function, built from the MinRoot hash and our dedicated Plonk proof system that utilizes a tailored custom gate and a folding scheme. The MinRoot VDF is the leading candidate for Ethereum adoption. For $N$ iterations of the MinRoot hash function, the overall cost of Origami is $N + o(N)$ group operations, improving on the previous best known result of $6N$ from a Nova-based solution. The proof size is $128k + 224$ bytes if we fold the proofs $k$ times, and may be further reduced to around 960 bytes, regardless of $k$, via a standard recursive prover.
The Prospect of a New Cryptography: Extensive use of non-algorithmic randomness competes with mathematical complexity
Randomness cannot be compressed; hence, expanded randomness is ‘contaminated randomness’ in which a hidden pattern is used. Current cryptography uses little randomness (the key) to generate large randomness (the ciphertext). The pattern used for this expansion is subject to cryptanalysis. By contrast, Vernam and the new breed of Trans-Vernam ciphers project security with a sufficient supply of genuine randomness. Having no hidden pattern in their process, they expose no vulnerability to cryptanalysis other than brute force, the efficacy of which is well gauged by using enough randomness to withstand a brute-force attack. Unlike the original genuine-randomness cipher (the Vernam cipher; US patent 1,310,719), the new breed of Trans-Vernam ciphers (US patents 10,541,802, 10,911,215, and 11,159,317, to name a few) projects security with shared randomness (between transmitter and recipient) as well as with unilateral randomness determined ad hoc by the transmitter, thereby controlling the vulnerability of the transmitted message, including eliminating it altogether, rising to Vernam grade. The new Trans-Vernam ciphers exploit new technologies for generating high-grade randomness, storing it and communicating it in large quantities. Their security is mathematically established, and, barring faulty implementation, these ciphers are unbreakable. We are looking at a flat cyberspace, with no more hierarchy based on math skills: Vernam-grade security delivered through modern Trans-Vernam ciphers. Robust privacy of communication will be claimed by all, for good and for ill; law enforcement and national security will have to adjust. It's a new cryptography, and a new society.
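For reference, the Vernam baseline that the abstract keeps returning to fits in a few lines of Python. This is only the textbook one-time pad, not any of the patented Trans-Vernam ciphers.

    import secrets

    def otp_encrypt(plaintext: bytes):
        # Vernam / one-time pad: the pad is genuine randomness, as long as
        # the message and never reused; every pad is equally likely, so a
        # ciphertext alone reveals nothing about the plaintext.
        pad = secrets.token_bytes(len(plaintext))
        ciphertext = bytes(p ^ k for p, k in zip(plaintext, pad))
        return pad, ciphertext

    def otp_decrypt(pad: bytes, ciphertext: bytes):
        return bytes(c ^ k for c, k in zip(ciphertext, pad))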
On Homomorphic Secret Sharing from Polynomial-Modulus LWE
Homomorphic secret sharing (HSS) is a form of secret sharing that supports the local evaluation of functions on the shares, with applications to multi-server private information retrieval, secure computation, and more.
Insisting on additive reconstruction, all known instantiations of HSS from "Learning with Errors (LWE)"-type assumptions either have to rely on LWE with a superpolynomial modulus, suffer from non-negligible error probability, and/or have to perform expensive ciphertext multiplications, resulting in poor concrete efficiency.
In this work, we present a new 2-party local share conversion procedure, which allows the parties to locally convert noise-encoded shares to noise-free plaintext shares such that they can detect whenever a (potential) error occurs and, in that case, resort to an alternative conversion procedure.
Building on this technique, we present the first HSS for branching programs from (Ring-)LWE with polynomial input share size which can make use of the efficient multiplication procedure of Boyle et al.~(Eurocrypt 2019) and has no correctness error. Our construction comes at the cost of a slightly increased (in expectation) output share size, which is insignificant compared to the input share size, and a more involved reconstruction procedure.
More concretely, we show that in the setting of 2-server private counting queries we can choose ciphertext sizes of only a quarter of the size of the scheme of Boyle et al. at essentially no extra cost.
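The flavor of a local conversion with error detection, as described above, can be conveyed by a toy rounding example in Python. This simplified sketch (integers instead of ring elements, made-up parameters) is not the paper's procedure but illustrates the detect-and-fall-back idea.

    import random

    DELTA = 2**15        # encode a bit m as m*DELTA plus small noise
    E_MAX = 2**4         # assumed bound on the noise magnitude

    def convert(z):
        # Local conversion by rounding. The party flags a potential error
        # whenever its share lies within E_MAX of a rounding boundary; in
        # that case the parties would resort to an alternative procedure.
        risky = (z % DELTA) > DELTA - E_MAX
        return z // DELTA, risky

    # Demo with subtractive shares z1 - z0 = m*DELTA + e (over the
    # integers, to keep the toy free of modular wraparound).
    m, e = 1, 7
    z0 = random.randrange(DELTA - E_MAX)
    z1 = z0 + m * DELTA + e
    y0, risky0 = convert(z0)
    y1, risky1 = convert(z1)
    assert risky0 or risky1 or (y1 - y0) % 2 == m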
Nakamoto Consensus under Bounded Processing Capacity
For Nakamoto's longest-chain consensus protocol, whose proof-of-work (PoW) and proof-of-stake (PoS) variants power major blockchains such as Bitcoin and Cardano, we revisit the classic problem of the security--performance tradeoff: given a network of nodes with finite communication and computation resources, against what fraction of adversary power is Nakamoto consensus (NC) secure for a given block production rate? State-of-the-art analyses of NC fail to answer this question because their bounded-delay model does not capture the rate limits on nodes' processing of blocks, which cause congestion when blocks are released in quick succession. We develop a new analysis technique to prove a refined security--performance tradeoff for PoW NC in a bounded-capacity model. In this model, we show that, in contrast to the classic bounded-delay model, Nakamoto's private attack is no longer the worst attack, and a new attack we call the teasing strategy, which exploits congestion, is strictly worse. In PoS, equivocating blocks can exacerbate congestion, making traditional PoS NC insecure except at very low block production rates. To counter such equivocation spamming, we present a variant of PoS NC we call Blanking NC (BlaNC), which achieves the same resilience as PoW NC.
Security Analysis of Signature Schemes with Key Blinding
Digital signatures are fundamental components of public key cryptography. They allow a signer to generate verifiable and unforgeable proofs---signatures---over arbitrary messages with a private key, and allow recipients to verify the proofs against the corresponding and expected public key. These properties are used in practice for a variety of use cases, ranging from identity or data authenticity to non-repudiation. Unsurprisingly, signature schemes are widely used in security protocols deployed on the Internet today.
In recent years, some protocols have extended the basic syntax of signature schemes to support key blinding, a.k.a. key randomization. Roughly speaking, key blinding is the process by which a private signing key or public verification key is blinded (randomized) to hide information about the key pair. This is generally done for privacy reasons and has found applications in Tor and Privacy Pass.
Recently, Denis, Eaton, Lepoint, and Wood proposed a technical specification for signature schemes with key blinding in an IETF draft. In this work, we analyze the constructions in this emerging specification. We demonstrate that the constructions provided satisfy the desired security properties for signature schemes with key blinding. We experimentally evaluate the constructions and find that they introduce a very reasonable 2-3x performance overhead compared to the base signature scheme. Our results complement the ongoing standardization efforts for this primitive.
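To fix intuition, additive key blinding for a discrete-log key pair can be sketched in a few lines of Python. The toy group below is illustrative only, and the IETF draft's constructions (for Ed25519- and ECDSA-style schemes) differ in important details.

    import random

    # Toy Schnorr-style group: subgroup of order q = 83 in Z_167*.
    p, q, g = 167, 83, 4

    sk = random.randrange(1, q)          # private signing key
    pk = pow(g, sk, p)                   # public verification key g^sk

    r = random.randrange(1, q)           # blinding factor
    sk_blind = (sk + r) % q              # blinded signing key
    pk_blind = (pk * pow(g, r, p)) % p   # blinded verification key

    # The blinded pair is again a valid key pair, yet unlinkable to
    # (sk, pk) without knowledge of r.
    assert pk_blind == pow(g, sk_blind, p)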
Asymmetric Quantum Secure Multi-Party Computation With Weak Clients Against Dishonest Majority
Secure multi-party computation (SMPC) protocols allow several parties that distrust each other to collectively compute a function on their inputs. In this paper, we introduce a protocol that lifts classical SMPC to quantum SMPC in a composably and statistically secure way, even for a single honest party. Unlike previous quantum SMPC protocols, our proposal only requires very limited quantum resources from all but one party; it suffices that the weak parties, i.e. the clients, are able to prepare single-qubit states in the X-Y plane.
The novel quantum SMPC protocol is constructed in a naturally modular way, and relies on a new technique for quantum verification that is of independent interest. This verification technique requires the remote preparation of states only in a single plane of the Bloch sphere. In the course of proving the security of the new verification protocol, we also uncover a fundamental invariance that is inherent to measurement-based quantum computing.
SGXonerated: Finding (and Partially Fixing) Privacy Flaws in TEE-based Smart Contract Platforms Without Breaking the TEE
TEE-based smart contracts are an emerging blockchain architecture, offering fully programmable privacy with better performance than alternatives like secure multiparty computation. They can also support compatibility with existing smart contract languages, such that existing (plaintext) applications can be readily ported, picking up privacy enhancements automatically. While previous analyses of TEE-based smart contracts have focused on failures of the TEE itself, we asked whether other aspects might be understudied. We focused on state consistency, a concern area highlighted by Li et al., as well as new concerns including access pattern leakage and software upgrade mechanisms. We carried out a code review of a cohort of four TEE-based smart contract platforms. These include Secret Network, the first to market with in-use applications, as well as Oasis, Phala, and Obscuro, which have at least released public test networks.
The first and most broadly applicable result is that access pattern leakage occurs when handling persistent contract storage. On Secret Network, the fine-grained access pattern leakage is catastrophic for the transaction privacy of SNIP-20 tokens. If ERC-20 tokens were naively ported to Oasis, they would be similarly vulnerable; the others in the cohort leak coarse-grained information at approximately the page level (4 kilobytes). Improving and characterizing this will require adopting techniques from ORAMs or encrypted databases.
Second, the importance of state consistency has been under-appreciated, in part because exploiting such vulnerabilities is thought to be impractical. We show they are fully practical by building a proof-of-concept tool that breaks all advertised privacy properties of SNIP-20 tokens, able to query the balance of individual accounts and the token amount of each transfer. We additionally demonstrate MEV attacks against the Sienna Swap application. As a final consequence of lacking state consistency, the developers have inadvertently introduced a decryption backdoor through their software upgrade process. We have helped the Secret developers mitigate this through a coordinated vulnerability disclosure, after which their state consistency should be roughly on par with the rest.
FuLeeca: A Lee-based Signature Scheme
In this work we introduce a new code-based signature scheme, called \textsf{FuLeeca}, based on the NP-hard problem of finding codewords of given Lee weight. The scheme follows the Hash-and-Sign approach applied to quasi-cyclic codes. Similar approaches in the Hamming metric have suffered statistical attacks, which revealed the small support of the secret basis. Using the Lee metric, we are able to thwart such attacks. We use existing hardness results on the underlying problem and study adapted statistical attacks. We propose parameters for \textsf{FuLeeca}~and compare them to an extensive list of proposed post-quantum secure signature schemes, including the ones already standardized by NIST. This comparison reveals that \textsf{FuLeeca}~is competitive. For example, for NIST category I, i.e., 160 bits of classical security, we obtain an average signature size of 1100 bytes and a public key size of 1318 bytes. Comparing the total communication cost, i.e., the sum of the signature and public key sizes, we see that \textsf{FuLeeca} is only outperformed by Falcon, while the other standardized schemes Dilithium and SPHINCS+ show larger communication costs than \textsf{FuLeeca}.
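For readers less familiar with the Lee metric, the weight underlying \textsf{FuLeeca}'s hard problem is easy to state; a short Python helper (illustrative only):

    def lee_weight(v, p):
        # Lee weight over Z_p: each entry contributes its distance to 0 on
        # the cycle Z_p, i.e. min(x, p - x); the Lee metric thus refines
        # the Hamming metric, which only counts nonzero entries.
        return sum(min(x % p, p - (x % p)) for x in v)

    # Example: over Z_7, the vector (1, 6, 3) has Hamming weight 3 but
    # Lee weight 1 + 1 + 3 = 5.
    assert lee_weight([1, 6, 3], 7) == 5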
Efficient computation of $(3^n,3^n)$-isogenies
The parametrization of $(3,3)$-isogenies by Bruin, Flynn and Testa requires over 37,500 multiplications if one wants to evaluate a single isogeny at a point. We simplify their formulae and reduce the number of required multiplications by 94%. Further, we deduce explicit formulae for evaluating $(3,3)$-splitting and gluing maps in the framework of the parametrization by Bröker, Howe, Lauter and Stevenhagen. We provide implementations to compute $(3^n,3^n)$-isogenies between principally polarized abelian surfaces with a focus on cryptographic application. Our implementation can retrieve Alice's secret isogeny in 11 seconds for the SIKEp751 parameters, which were aimed at NIST level 5 security.
Accelerating exp-log based finite field multiplication
Finite field multiplication is widely used for masking countermeasures against side-channel attacks. The execution time of a finite field multiplication implementation is critical, as it greatly impacts the overhead of the countermeasure. In this context, the use of exp-log tables is popular for implementing finite field multiplication. Yet, its performance is affected by the need for special-case code to handle the case where one of the operands equals zero, as $\log$ is undefined for zero. As noticed by two recent papers, the zero case can be managed without any extra code by extending the exp table and setting $\log[0]$ to a large-enough value. The multiplication of $a$ and $b$ then becomes as simple as $\exp[\log[a]+\log[b]]$. In this paper, we compare this approach with other implementations of finite field multiplication and show that it provides a good trade-off between memory use and execution time.
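A minimal Python sketch of the extended-table trick follows, assuming GF($2^8$) with the AES polynomial and generator 3. The sentinel value 512 and the table sizes are illustrative choices: any $\log[0]$ large enough that every zero-operand lookup lands in the zeroed tail works.

    # Exp/log tables for GF(2^8) with the extended-table zero trick.
    POLY = 0x11B                  # AES reduction polynomial
    EXP = [0] * 1025              # indices 509..1024 stay 0 on purpose
    LOG = [0] * 256

    x = 1
    for i in range(255):          # exp[i] = 3^i, log[3^i] = i
        EXP[i] = x
        LOG[x] = i
        x2 = x << 1               # multiply x by the generator 3 = 2 XOR 1
        if x2 & 0x100:
            x2 ^= POLY
        x = x2 ^ x
    for i in range(255, 509):     # extension: removes the "mod 255" step
        EXP[i] = EXP[i - 255]
    LOG[0] = 512                  # sentinel: any zero operand overshoots
                                  # into the zeroed tail of EXP

    def gf_mul(a, b):
        # branch-free multiplication, correct even when a or b is 0
        return EXP[LOG[a] + LOG[b]]

    assert gf_mul(0, 0x53) == 0 and gf_mul(0x53, 0xCA) == 0x01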
Practical-Time Related-Key Attack on GOST with Secret S-boxes
The block cipher GOST 28147-89 was the Russian Federation's encryption standard for over 20 years, and is still one of its two standard block ciphers. GOST is a 32-round Feistel construction, whose security benefits from the fact that the S-boxes used in the design are kept secret. In the last 10 years, several attacks on the full 32-round GOST were presented. However, they all assume that the S-boxes are known. When the S-boxes are secret, all published attacks either target a small number of rounds or apply only to small sets of weak keys.
In this paper we present the first practical-time attack on GOST with secret S-boxes. The attack works in the related-key model and is faster than all previous attacks in this model which assume that the S-boxes are known. The complexity of the attack is less than $2^{27}$ encryptions. It was fully verified, and runs in a few seconds on a PC. The attack is based on a novel type of related-key differentials of GOST, inspired by local collisions.
Our new technique may be applicable to certain GOST-based hash functions as well. To demonstrate this, we show how to find a collision on a Davies-Meyer construction based on GOST with an arbitrary initial value, in less than $2^{10}$ hash function evaluations.
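For context, the Davies-Meyer mode referenced here turns a block cipher into a compression function in one line. A generic Python sketch, with the block cipher left abstract (the paper instantiates it with GOST):

    def davies_meyer(E, h, m):
        # Davies-Meyer compression: H' = E_m(H) XOR H, where the message
        # block m is used as the cipher key and the chaining value h as
        # the plaintext. A collision on this function with an arbitrary
        # initial value is what the 2^10 attack above finds for E = GOST.
        c = E(key=m, block=h)
        return bytes(a ^ b for a, b in zip(c, h))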
Consensus Algorithm Using Transaction History for Cryptocurrency
The main blockchain consensus algorithms for cryptocurrencies are proof of work and proof of stake. However, current algorithms have problems, such as huge power consumption and equality issues. We propose a new consensus algorithm that uses transaction history. This algorithm ensures equality by randomly assigning approval votes based on past transaction records. We also incorporate a mechanism for adjusting issuance volume to maintain the stability of the currency's value.
Practically Solving LPN in High Noise Regimes Faster Using Neural Networks
We conduct a systematic study of solving the learning parity with noise problem (LPN) using neural networks. Our main contribution is designing families of two-layer neural networks that practically outperform classical algorithms in high-noise, low-dimension regimes. We consider three settings where the numbers of LPN samples are abundant, very limited, and in between. In each setting we provide neural network models that solve LPN as fast as possible. For some settings we are also able to provide theories that explain the rationale of the design of our models.
Compared with the previous experiments of Esser, Kübler, and May (CRYPTO 2017), for dimension $n=26$ and noise rate $\tau = 0.498$, the "Guess-then-Gaussian-elimination" algorithm takes 3.12 days on 64 CPU cores, whereas our neural network algorithm takes 66 minutes on 8 GPUs. Our algorithm can also be plugged into hybrid algorithms for solving middle- or large-dimension LPN instances.
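To make the problem setting concrete, LPN samples at noise rate $\tau$ can be generated as follows. This is a plain Python sketch of the problem definition, not of the paper's neural solvers.

    import random

    def lpn_samples(s, tau, num):
        # LPN: each sample is (a, <a, s> + e mod 2) with a uniform in
        # {0,1}^n and Bernoulli(tau) noise e. At tau = 0.498, labels are
        # almost uncorrelated with <a, s>, which is the high-noise regime
        # targeted above.
        n = len(s)
        samples = []
        for _ in range(num):
            a = [random.randrange(2) for _ in range(n)]
            e = int(random.random() < tau)
            b = (sum(ai * si for ai, si in zip(a, s)) + e) % 2
            samples.append((a, b))
        return samples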