All papers in 2023 (Page 13 of 1971 results)
Revisiting Key Decomposition Techniques for FHE: Simpler, Faster and More Generic
Ring-LWE based homomorphic encryption computations in large depth use a combination of two techniques: 1) decomposition of big numbers into small limbs/digits, and 2) efficient cyclotomic multiplications modulo $X^N + 1$. It was long believed that the two mechanisms had to be strongly related, as in the full-RNS setting, which uses a CRT decomposition of big numbers over an NTT-friendly family of prime numbers and NTTs over the same primes for multiplications. However, in this setting, the NTT was the bottleneck of all large-depth FHE computations. A breakthrough result from Kim et al. (Crypto'2023) managed to overcome this limitation by introducing a second gadget decomposition and showing that it indeed shifts the bottleneck and renders the cost of NTT computations negligible compared to the rest of the computation. In this paper, we extend this result (far) beyond the full-RNS setting and show that we can completely decouple the big-number decomposition from the cyclotomic arithmetic. As a result, we get modulus switching/rescaling for free. We verify both in theory and in practice that the performance of key switching, external and internal products, and automorphisms using our representation is better than that achieved by Kim et al., and we discuss the high impact of these results for low-level and hardware optimizations, as well as the benefits of the new parametrizations for FHE compilers. We even manage to lower the running time of the gate bootstrapping of TFHE by eliminating one eighth of the FFTs and one sixth of the linear operations, which brings the running time below 5.5ms on recent CPUs.
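The full-RNS setting referenced above stores each big coefficient as its residues modulo a family of NTT-friendly primes. A minimal sketch of this CRT limb decomposition, with illustrative primes rather than the paper's parameters:

```python
# Toy sketch of the full-RNS (CRT) representation: a big coefficient is
# stored as its residues modulo a family of NTT-friendly primes and
# reconstructed via the Chinese Remainder Theorem. Prime choices here
# are illustrative only.
from math import prod

primes = [12289, 40961, 65537]  # NTT-friendly: p ≡ 1 (mod 2N) for N = 2048
Q = prod(primes)

def to_rns(x):
    """Decompose x into its residue limbs modulo each prime."""
    return [x % p for p in primes]

def from_rns(limbs):
    """Reconstruct x from its limbs via the CRT."""
    x = 0
    for p, r in zip(primes, limbs):
        q = Q // p
        x += r * q * pow(q, -1, p)
    return x % Q

x, y = 123456789, 987654321
limbs = to_rns(x)
assert from_rns(limbs) == x
# Limb-wise multiplication agrees with multiplication mod Q:
xy_limbs = [(a * b) % p for a, b, p in zip(limbs, to_rns(y), primes)]
assert from_rns(xy_limbs) == (x * y) % Q
```

Arithmetic on the limbs stays word-sized, which is the point of the representation: each limb can be processed independently (and NTT-transformed independently) without big-number arithmetic.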
Towards compressed permutation oracles
Compressed oracles (Zhandry, Crypto 2019) are a powerful technique to reason about quantum random oracles, enabling a sort of lazy sampling in the presence of superposition queries. A long-standing open question is whether a similar technique can also be used to reason about random (efficiently invertible) permutations.
In this work, we take a step towards answering this question. We first define the compressed permutation oracle and illustrate its use. While the soundness of this technique (i.e., the indistinguishability from a random permutation) remains a conjecture, we show a curious 2-for-1 theorem: if we use the compressed permutation oracle methodology to show that some construction (e.g., Luby-Rackoff) implements a random permutation (or strong qPRP), then we get, for free, that this methodology is actually sound.
Brakedown's expander code
This write-up summarizes the sampling analysis of the expander code from Brakedown [GLSTW21]. We elaborate on their convexity argument for general linear expansion bounds, and we combine their approach with that of Spielman [Sp96] to achieve asymptotically linear encoding time with constant field size. Choosing tighter expansion bounds, we obtain more efficient parameters than [GLSTW21] for their 128-bit large field, reducing the encoding costs by 25% and more, and we provide a similar parameter set for the Mersenne prime field with modulus $p = 2^{31} - 1$, optimized by the combined Spielman-Brakedown approach.
Owl: An Augmented Password-Authenticated Key Exchange Scheme
We present Owl, an augmented password-authenticated key exchange (PAKE) protocol that is both efficient and supported by security proofs. Owl is motivated by recognized limitations in SRP-6a and OPAQUE. SRP-6a is the only augmented PAKE that has enjoyed wide use in practice to date, but it lacks the support of formal security proofs, and does not support elliptic curve settings. OPAQUE was proposed in 2018 as a provably secure and efficient alternative to SRP-6a, and was chosen by the IETF in 2020 for standardization, but open issues leave it unclear whether OPAQUE will replace SRP-6a in practice. Owl is obtained by efficiently adapting J-PAKE to an asymmetric setting, providing additional security against server compromise yet with lower computation than J-PAKE. Owl is provably secure, efficient and agile in supporting implementations in diverse multiplicative groups and elliptic curve settings. To the best of our knowledge, Owl is the first augmented PAKE solution that provides systematic advantages over SRP-6a in terms of security, computation, message sizes, and agility.
LFHE: Fully Homomorphic Encryption with Bootstrapping Key Size Less than a Megabyte
Fully Homomorphic Encryption (FHE) enables computations to be performed on encrypted data, so one can outsource computations on confidential information to an untrusted party. Ironically, FHE requires the client to generate massive evaluation keys and transfer them to the server side, where all computations are supposed to be performed. In this paper, we propose LFHE, a Light-key variant of the FHEW scheme introduced by Ducas and Micciancio at Eurocrypt 2015 and of its improvement, the TFHE scheme proposed by Chillotti et al. at Asiacrypt 2016. In the proposed scheme, the client generates small packed evaluation keys, which can be transferred to the server side with much smaller communication overhead compared to the original non-packed variant. The server then employs a key reconstruction technique to obtain the evaluation keys needed for computations.
This approach allows us to achieve an FHE scheme whose packed evaluation key transfer size is less than a megabyte, an order-of-magnitude improvement over the best-known methods.
Lattice-based Commit-Transferrable Signatures and Applications to Anonymous Credentials
Anonymous credentials are an important tool for protecting users' privacy when proving possession of certain credentials.
Although various efficient constructions have been proposed based on pre-quantum assumptions, accomplishments in the post-quantum and, especially, practically efficient settings have been limited. This research aims to derive new methods that enhance the current state of the art.
To achieve this, we make the following contributions.
By distilling prior design insights, we propose a new primitive to instantiate \emph{signature with protocols}, called commit-transferrable signature (\CTS). When combined with a multi-theorem straight-line extractable non-interactive zero-knowledge proof of knowledge (\NIZKPoK), $\CTS$ gives a modular approach to construct anonymous credentials.
We then show efficient instantiations of $\CTS$ and the required \NIZKPoK from lattices, which are believed to be post-quantum hard. Finally, we propose concrete parameters for the $\CTS$, the \NIZKPoK, and the overall anonymous credentials, based on Module-\SIS~and Ring-\LWE. This should serve as important guidance for future deployment in practice.
Threshold ECDSA in Three Rounds
We present a three-round protocol for threshold ECDSA signing with malicious security against a dishonest majority, which information-theoretically UC-realizes a standard threshold signing functionality, assuming only ideal commitment and two-party multiplication primitives. Our protocol combines an intermediate representation of ECDSA signatures that was recently introduced by Abram et al. (Eurocrypt'22) with an efficient statistical consistency check reminiscent of the ones used by the protocols of Doerner et al. (S&P'18, S&P'19).
We show that shared keys for our signing protocol can be generated using a simple commit-release-and-complain procedure, without any proofs of knowledge, and to compute the intermediate representation of each signature, we propose a two-round vectorized multiplication protocol based on oblivious transfer that outperforms all similar constructions.
Subversion-Resilient Authenticated Encryption without Random Oracles
In 2013, the Snowden revelations showed subversion of cryptographic implementations to be a relevant threat.
Since then, the academic community has been pushing the development of models and constructions
to defend against adversaries able to arbitrarily subvert cryptographic implementations.
To capture these strong adversarial capabilities, Russell, Tang, Yung, and Zhou (CCS'17) proposed CPA-secure encryption in a model that uses a trusted party, called a watchdog, which tests an implementation before use to detect potential subversion.
This model was used to construct subversion-resilient implementations of primitives such as random oracles by Russell, Tang, Yung, and Zhou (CRYPTO'18) and signature schemes by Chow et al. (PKC'19), but primitives aiming for CCA-like security have remained elusive in any watchdog model.
In this work, we present the first subversion-resilient authenticated encryption scheme with associated data (AEAD) without making use of random oracles.
At the core of our construction are subversion-resilient PRFs, which we obtain from weak PRFs in combination with the classical Naor-Reingold transformation.
We revisit classical constructions based on PRFs to obtain subversion-resilient MACs, where both tagging and verification are subject to subversion, as well as subversion-resilient symmetric encryption in the form of stream ciphers.
Finally, we observe that leveraging the classical Encrypt-then-MAC approach yields subversion-resilient AEAD.
Our results are based on the trusted amalgamation model by Russell, Tang, Yung, and Zhou (ASIACRYPT'16) and the assumption of honest key generation.
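The Naor-Reingold transformation invoked above builds a full PRF from $n+1$ secret exponents in a prime-order group. A minimal sketch with toy-sized (insecure, illustrative) parameters:

```python
# Minimal sketch of the classical Naor-Reingold PRF: the key is n+1
# secret exponents a_0, ..., a_n in Z_q, and F_a(x) = g^(a_0 * prod of
# a_i over the set bits of x). Group sizes here are toy values.
import secrets

q = 1019          # prime order of the subgroup
p = 2 * q + 1     # safe prime, so squares mod p form a subgroup of order q
g = 4             # generator of the order-q subgroup (4 = 2^2 mod p)
n = 8             # input length in bits

key = [secrets.randbelow(q - 1) + 1 for _ in range(n + 1)]  # a_0, ..., a_n

def nr_prf(key, x):
    """F_a(x) = g^(a_0 * prod_{i: x_i = 1} a_i) mod p, exponent mod q."""
    e = key[0]
    for i in range(n):
        if (x >> i) & 1:
            e = (e * key[i + 1]) % q
    return pow(g, e, p)

# Deterministic under the same key, and outputs live in the subgroup:
assert nr_prf(key, 0b10110101) == nr_prf(key, 0b10110101)
assert pow(nr_prf(key, 3), q, p) == 1
```

The abstract starts from a *weak* PRF rather than DDH-style exponents; the sketch above only shows the shape of the Naor-Reingold evaluation, not the paper's subversion-resilient instantiation.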
Undetectable Watermarks for Language Models
Recent advances in the capabilities of large language models such as GPT-4 have spurred increasing concern about our ability to detect AI-generated text. Prior works have suggested methods of embedding watermarks in model outputs, by $\textit{noticeably}$ altering the output distribution. We ask: Is it possible to introduce a watermark without incurring $\textit{any detectable}$ change to the output distribution?
To this end we introduce a cryptographically-inspired notion of undetectable watermarks for language models. That is, watermarks can be detected only with the knowledge of a secret key; without the secret key, it is computationally intractable to distinguish watermarked outputs from those of the original model. In particular, it is impossible for a user to observe any degradation in the quality of the text. Crucially, watermarks should remain undetectable even when the user is allowed to adaptively query the model with arbitrarily chosen prompts. We construct undetectable watermarks based on the existence of one-way functions, a standard assumption in cryptography.
How to Design Fair Protocols in the Multi-Blockchain Setting
Recently, there have been several proposals for secure computation with fair output delivery that require the use of a bulletin board abstraction (in addition to a trusted execution environment (TEE)). These proposals require all protocol participants to have read/write access to the bulletin board. These works envision the use of (public or permissioned) blockchains to implement the bulletin board abstractions. With the advent of consortium blockchains which place restrictions on who can read/write contents on the blockchain, it is not clear how to extend prior proposals to a setting where (1) not all parties have read/write access on a single consortium blockchain, and (2) not all parties prefer to post on a public blockchain.
In this paper, we address the above by presenting the first protocols for fair secure computation in the multi-blockchain setting. More concretely, in an $n$-party setting where at most $t < n$ parties are corrupt, our protocol for fair secure computation works as long as (1) $t$ parties have access to a TEE (e.g., Intel SGX), and (2) each of these $t$ parties is on some blockchain with each of the other parties. Furthermore, only these $t$ parties need write access on the blockchains.
In an optimistic setting where parties behave honestly, our protocol runs completely off-chain.
Nimble: Rollback Protection for Confidential Cloud Services (extended version)
This paper introduces Nimble, a cloud service that helps applications running in trusted execution environments (TEEs) to detect rollback attacks (i.e., detect whether a data item retrieved from persistent storage is the latest version). To achieve this, Nimble realizes an append-only ledger service by employing a simple state machine running in a TEE in conjunction with a crash fault-tolerant storage service. Nimble then replicates this trusted state machine to ensure the system is available even if a minority of state machines crash. A salient aspect of Nimble is a new reconfiguration protocol that allows a cloud provider to replace the set of nodes running the trusted state machine whenever it wishes—without affecting safety. We have formally verified Nimble’s core protocol in Dafny, and have implemented Nimble such that its trusted state machine runs on multiple TEE platforms (Intel SGX and AMD SEV-SNP). Our results show that a deployment of Nimble on machines running in different availability zones can achieve from tens of thousands of requests/sec with an end-to-end latency of under 3.2 ms (based on an in-memory key-value store) to several thousands of requests/sec with a latency of 30 ms (based on Azure Table).
Time to Bribe: Measuring Block Construction Market
With the emergence of Miner Extractable Value (MEV), block construction markets on blockchains have evolved into a competitive arena. Following Ethereum's transition from Proof of Work (PoW) to Proof of Stake (PoS), the Proposer Builder Separation (PBS) mechanism has emerged as the dominant force in the Ethereum block construction market.
This paper presents an in-depth longitudinal study of the Ethereum block construction market, spanning from the introduction of PoS and PBS in September 2022 to May 2023. We analyze the market shares of builders and relays, their temporal changes, and the financial dynamics within the PBS system, including payments among builders and block proposers---commonly referred to as bribes. We introduce an MEV-time law quantifying the expected MEV revenue with respect to the time elapsed since the last proposed block. We provide empirical evidence that moments of crisis (e.g., the FTX collapse, the USDC stablecoin de-peg) coincide with significant spikes in MEV payments compared to the baseline.
Despite the intention of the PBS architecture to enhance decentralization by separating actor roles, it remains unclear whether its design is optimal. Implicit trust assumptions and conflicts of interest may benefit particular parties and foster the need for vertical integration. MEV-Boost was explicitly designed to foster decentralization, but has the side effect of enabling risk-free sandwich extraction from unsuspecting users, potentially raising concerns for regulators.
Efficient TFHE Bootstrapping in the Multiparty Setting
In this paper, we introduce a new approach to efficiently compute TFHE bootstrapping keys for (predefined) multiple users. Hence, a fixed number of users can enjoy the same level of efficiency as in the single-key setting while keeping their individual inputs private. Our construction relies on a novel algorithm called the homomorphic indicator, which may be of independent interest. We provide a detailed analysis of the noise growth and a set of secure parameters suitable for use in practice. Moreover, we compare the complexity of our technique with other state-of-the-art constructions and show, based on our noise analysis, which method performs better for which parameter sets. We also provide a prototype implementation of our technique. To the best of our knowledge, this is the first implementation of TFHE in the multiparty setting.
Scaling Mobile Private Contact Discovery to Billions of Users
Mobile contact discovery is a convenience feature of messengers such as WhatsApp or Telegram that helps users to identify which of their existing contacts are registered with the service. Unfortunately, the contact discovery implementation of many popular messengers massively violates the users' privacy, as demonstrated by Hagen et al. (NDSS '21, ACM TOPS '23). Unbalanced private set intersection (PSI) protocols are a promising cryptographic solution to realize mobile private contact discovery; however, state-of-the-art protocols do not scale to real-world database sizes with billions of registered users in terms of communication and/or computation overhead.
In our work, we make significant steps towards truly practical large-scale mobile private contact discovery. For this, we combine and substantially optimize the unbalanced PSI protocol of Kales et al. (USENIX Security '19) and the private information retrieval (PIR) protocol of Kogan and Corrigan-Gibbs (USENIX Security '21). Our resulting protocol has total communication overhead that is sublinear in the size of the server's user database, as well as sublinear online runtimes. We optimize our protocol by introducing database partitioning and efficient scheduling of user queries. To handle realistic change rates of databases and contact lists, we propose and evaluate different possibilities for efficient updates. We implement our protocol on smartphones and measure online runtimes of less than 2 s to query up to 1024 contacts from a database with more than two billion entries. Furthermore, we achieve a reduction in setup communication of up to 32x compared to state-of-the-art mobile private contact discovery protocols.
A Note on ``On the Design of Mutual Authentication and Key Agreement Protocol in Internet of Vehicles-Enabled Intelligent Transportation System''
We remark that the key agreement scheme [IEEE Trans. Veh. Technol. 2021, 70(2): 1736--1751] fails to provide anonymity and untraceability, because the user $U_k$ needs to invoke the public key $PK_{U_j}$ to verify the signature generated by the user $U_j$. Since the public key is inextricably linked to the true identity $ID_{U_j}$ for authentication, any adversary can reveal the true identity by checking the signature.
SDitH in the QROM
The MPC in the Head (MPCitH) paradigm has recently led to significant improvements for signatures in the code-based setting. In this paper we consider some modifications to a recent twist of MPCitH, called Hypercube-MPCitH, that in the code-based setting provides the currently best known signature sizes. By compressing the Hypercube-MPCitH five-round code-based identification scheme into three rounds, we obtain two main benefits. On the one hand, it allows us to further develop recent techniques to provide a tight security proof in the quantum-accessible random oracle model (QROM), avoiding the catastrophic reduction losses incurred by generic QROM results for Fiat-Shamir. On the other hand, we can reduce the already low-cost online part of the signature even further. In addition, we propose the use of proof-of-work techniques that allow us to reduce the signature size. On the technical side, we develop generalizations of several QROM proof techniques and introduce a variant of the recently proposed extractable QROM.
The security of Kyber's FO-transform
In this short note we give another direct proof for the variant of the FO transform used by Kyber in the QROM. At PKC'23, Maram & Xagawa gave the first direct proof that does not require the indirection via FO with explicit rejection, thereby avoiding either a non-tight bound or the need to analyze the failure probability in a new setting. On the downside, however, their proof yields a bound that incurs an additive collision term. We explore a different approach for a direct proof, which results in a simpler argument closer to prior proofs, but a slightly worse bound.
Batch Proofs are Statistically Hiding
Batch proofs are proof systems that convince a verifier that $x_1,\dots,x_t \in \mathcal{L}$, for some $\mathsf{NP}$ language $\mathcal{L}$, with communication that is much shorter than sending the $t$ witnesses. In the case of *statistical soundness* (where the cheating prover is unbounded but the honest prover is efficient given the witnesses), interactive batch proofs are known for $\mathsf{UP}$, the class of *unique-witness* $\mathsf{NP}$ languages. In the case of computational soundness (where both honest and dishonest provers are efficient), *non-interactive* solutions are now known for all of $\mathsf{NP}$, assuming standard lattice or group assumptions.
We exhibit the first negative results regarding the existence of batch proofs and arguments:
- Statistically sound batch proofs for $\mathcal{L}$ imply that $\mathcal{L}$ has a statistically witness indistinguishable ($\mathsf{SWI}$) proof, with inverse polynomial $\mathsf{SWI}$ error, and a non-uniform honest prover. The implication is unconditional for obtaining honest-verifier $\mathsf{SWI}$ or for obtaining full-fledged $\mathsf{SWI}$ from public-coin protocols, whereas for private-coin protocols full-fledged $\mathsf{SWI}$ is obtained assuming one-way functions.
This poses a barrier for achieving batch proofs beyond $\mathsf{UP}$ (where witness indistinguishability is trivial). In particular, assuming that $\mathsf{NP}$ does not have $\mathsf{SWI}$ proofs, batch proofs for all of $\mathsf{NP}$ do not exist.
- Computationally sound batch proofs (a.k.a. batch arguments or $\mathsf{BARG}$s) for $\mathsf{NP}$, together with one-way functions, imply statistical zero-knowledge ($\mathsf{SZK}$) arguments for $\mathsf{NP}$ with roughly the same number of rounds, an inverse polynomial zero-knowledge error, and a non-uniform honest prover.
Thus, constant-round interactive $\mathsf{BARG}$s from one-way functions would yield constant-round $\mathsf{SZK}$ arguments from one-way functions. This would be surprising as $\mathsf{SZK}$ arguments are currently only known assuming constant-round statistically-hiding commitments.
We further prove new positive implications of non-interactive batch arguments for non-interactive zero-knowledge arguments (with explicit uniform prover and verifier):
- Non-interactive $\mathsf{BARG}$s for $\mathsf{NP}$, together with one-way functions, imply non-interactive computational zero-knowledge arguments for $\mathsf{NP}$. Assuming also dual-mode commitments, the zero knowledge can be made statistical.
Both our negative and positive results stem from a new framework showing how to transform a batch protocol for a language $\mathcal{L}$ into an $\mathsf{SWI}$ protocol for $\mathcal{L}$.
A Faster Software Implementation of SQISign
Isogeny-based cryptography is famous for its short key sizes. As one of the most compact digital signatures, SQIsign (Short Quaternion and Isogeny Signature) is attractive among post-quantum schemes, but it is inefficient compared to other post-quantum competitors because of the complicated procedures in the ideal-to-isogeny translation, the efficiency bottleneck of the signing phase.
In this paper, we recall the current implementation of SQIsign and focus on improving the execution of the ideal-to-isogeny translation. Specifically, we demonstrate how to utilize the reduced Tate pairing to save one of the two elliptic curve discrete logarithms, and we explore an efficient implementation of the remaining discrete logarithm computation. We speed up other procedures in the ideal-to-isogeny translation with various techniques as well. Notably, our improvements also benefit the performance of key generation and verification in SQIsign. In the instantiation with $p_{1973}$, the improvements lead to speedups of 5.47%, 8.80% and 25.34% for key generation, signing and verification, respectively.
Schnorr protocol in Jasmin
We implement the Schnorr protocol in assembler via the Jasmin toolchain, and prove the security (proof-of-knowledge and zero-knowledge properties) and the absence of leakage through timing side-channels of that implementation in EasyCrypt.
In order to do so, we provide a semantic characterization of leakage-freeness for probabilistic Jasmin programs (that are not constant-time). We design a library for multiple-precision integer arithmetic in Jasmin -- the "libjbn" library. Among other things, we implement and verify algorithms for fast constant-time modular multiplication and exponentiation (using Barrett reduction and the Montgomery ladder). We also implement and verify the correctness and leakage-freeness of the rejection sampling algorithm. Finally, we put it all together and show the security of the overall implementation (end-to-end verification) of the Schnorr protocol, by connecting our implementation to prior security analyses in EasyCrypt (Firsov, Unruh, CSF~2023).
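The Montgomery ladder referenced above performs one squaring and one multiplication per exponent bit, regardless of the bit's value, which is what makes a constant-time exponentiation possible. A Python sketch of the ladder structure only (Python gives no timing guarantees; the paper's verified implementation is in Jasmin):

```python
# Sketch of the Montgomery-ladder structure for modular exponentiation.
# Invariant: r1 == r0 * base at the top of every iteration, and each
# iteration does exactly one squaring and one multiplication.
def ladder_pow(base, exp, mod, bits=16):
    """Compute base**exp % mod, scanning a fixed number of exponent bits."""
    r0, r1 = 1, base % mod
    for i in reversed(range(bits)):
        if (exp >> i) & 1:
            r0, r1 = (r0 * r1) % mod, (r1 * r1) % mod
        else:
            r0, r1 = (r0 * r0) % mod, (r0 * r1) % mod
    return r0

assert ladder_pow(7, 1234, 10007) == pow(7, 1234, 10007)
```

A real constant-time version would additionally replace the data-dependent branch with a constant-time conditional swap of `r0` and `r1`.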
Scalable Agreement Protocols with Optimal Optimistic Efficiency
Designing efficient distributed protocols for various agreement tasks such as Byzantine Agreement, Broadcast, and Committee Election is a fundamental problem. We are interested in $scalable$ protocols for these tasks, where each (honest) party communicates a number of bits which is sublinear in $n$, the number of parties. The first major step towards this goal is due to King et al. (SODA 2006), who showed a protocol where each party sends only $\tilde O(1)$ bits throughout $\tilde O(1)$ rounds, but which only guarantees that a $1-o(1)$ fraction of honest parties end up agreeing on a consistent output, assuming a constant $<1/3$ fraction of static corruptions. A few years later, King et al. (ICDCN 2011) managed to get a full agreement protocol in the same model, but where each party sends $\tilde O(\sqrt{n})$ bits throughout $\tilde O(1)$ rounds. Getting a full agreement protocol with $o(\sqrt{n})$ communication per party has been a major challenge ever since.
In light of this barrier, we propose a new framework for designing efficient agreement protocols. Specifically, we design $\tilde O(1)$-round protocols for all of the above tasks (assuming a constant $<1/3$ fraction of static corruptions) with optimistic and pessimistic guarantees:
$\bullet$ $Optimistic$ $complexity$: In an honest execution, (honest) parties send only $\tilde O(1)$ bits.
$\bullet$ $Pessimistic$ $complexity$: In any other case, (honest) parties send $\tilde O(\sqrt{n})$ bits.
Thus, all an adversary can gain from deviating from the honest execution is that honest parties will need to work harder (i.e., transmit more bits) to reach agreement and terminate. Besides the above agreement tasks, we also use our new framework to get a scalable secure multiparty computation (MPC) protocol with optimistic and pessimistic complexities.
Technically, we identify a relaxation of Byzantine Agreement (of independent interest) that allows all parties to fall back to a pessimistic execution in a coordinated way. We implement this relaxation with $\tilde O(1)$ communication bits per party and within $\tilde O(1)$ rounds.
BAKSHEESH: Similar Yet Different From GIFT
We propose a lightweight block cipher named BAKSHEESH, which follows up on the popular cipher GIFT-128 (CHES'17). BAKSHEESH runs for 35 rounds, 12.5 percent fewer than GIFT-128 (which runs for 40 rounds), while maintaining the same security claims against classical attacks.
The crux of BAKSHEESH is the use of a 4-bit SBox with a non-trivial Linear Structure (LS). An SBox with one or more non-trivial LS had not been used in a cipher construction until DEFAULT (Asiacrypt'21). DEFAULT is pitched as having inherent protection against the Differential Fault Attack (DFA), thanks to its SBox having 3 non-trivial LS. BAKSHEESH, however, uses an SBox with only 1 non-trivial LS, and is a traditional cipher just like GIFT-128, with no claims against DFA.
The SBox requires a low number of AND gates, making BAKSHEESH suitable for side-channel countermeasures (compared to GIFT-128) and other niche applications. Indeed, our study of the cost of threshold implementation shows that BAKSHEESH offers a few-fold advantage over other lightweight ciphers. The design does not deviate much from its predecessor (GIFT-128), thereby allowing for easy implementation (such as fix-slicing in software). However, BAKSHEESH opts for a full-round key XOR, compared to the half-round key XOR in GIFT.
Thus, taking everything into account, we show how a cipher construction can benefit from the unique vantage point of using a 1-LS SBox, by combining the state-of-the-art progress in classical cryptanalysis with protection against device-dependent attacks. We therefore create a new paradigm of lightweight ciphers through adequate deliberation on the design choices, and solidify it with appropriate security analysis and ample implementation/benchmark results.
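The linear-structure property discussed above can be checked exhaustively for any 4-bit SBox: a nonzero input difference $a$ is an LS of the component $b \cdot S(x)$ if $b \cdot (S(x \oplus a) \oplus S(x))$ is constant over all $x$. A small checker, demonstrated on the identity SBox as a toy stand-in (not the BAKSHEESH or DEFAULT SBox):

```python
# Exhaustive linear-structure (LS) checker for a 4-bit SBox: (a, b) with
# nonzero a and b is a nontrivial LS if b·(S(x^a) ^ S(x)) is constant.
# For the identity SBox, b·(x^a ^ x) = b·a is constant, so every such
# pair qualifies (15 * 15 = 225 pairs).
def parity(v):
    return bin(v).count("1") & 1

def linear_structures(sbox):
    ls = []
    for b in range(1, 16):          # nonzero output mask
        for a in range(1, 16):      # nonzero input difference
            vals = {parity(b & (sbox[x ^ a] ^ sbox[x])) for x in range(16)}
            if len(vals) == 1:      # constant over all x
                ls.append((a, b))
    return ls

identity = list(range(16))
assert len(linear_structures(identity)) == 15 * 15
```

A cryptographically strong SBox would return an empty list here; the point of DEFAULT and BAKSHEESH is the deliberate, controlled presence of a few such pairs.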
Note on Subversion-Resilient Key Exchange
In this work, we set out to create a subversion-resilient authenticated key exchange (AKE) protocol. The first step was to design a meaningful security model for this primitive, and our goal was to avoid using building blocks like reverse firewalls and public watchdogs. We wanted to exclude these kinds of tools because we desired our protocols to be self-contained, in the sense that we could prove security without relying on some outside, tamper-proof party. To define the model, we began by extending models for regular authenticated key exchange, as we wanted our model to retain all the properties of regular AKE.
While trying to design protocols that would be secure in this model, we discovered that security depends on more than just the protocol, but also on engineering questions like how keys are stored and accessed in memory. Moreover, even if we assume that we can find solutions to these engineering challenges, other problems arise when trying to develop a secure protocol, partly because it is hard to define what "secure" means in this setting. It is in particular not clear how a subverted algorithm should affect the freshness predicate inherited from trivial attacks in regular AKE. The attack variety is large, and it is not intuitive how one should treat or classify the different attacks.
In the end, we were unable to find a satisfying solution for our model, and hence we could not prove any meaningful security of the protocols we studied. This work is a summary of our attempt, and the challenges we faced before concluding it.
Towards the Links of Cryptanalytic Methods on MPC/FHE/ZK-Friendly Symmetric-Key Primitives
Symmetric-key primitives designed over the prime field $\mathbb{F}_p$ with odd characteristic, rather than the traditional $\mathbb{F}_2^{n}$, are becoming the most popular choice for MPC/FHE/ZK protocols thanks to their better efficiency. However, security over $\mathbb{F}_p$ is less well understood, as there are highly nontrivial gaps when extending the cryptanalysis tools and experience built on $\mathbb{F}_2^{n}$ over the past few decades to $\mathbb{F}_p$.
At CRYPTO 2015, Sun et al. established the links among impossible differential, zero-correlation linear, and integral cryptanalysis over $\mathbb{F}_2^{n}$ from the perspective of distinguishers. In this paper, following the definition of linear correlations over $\mathbb{F}_p$ by Baignères, Stern and Vaudenay at SAC 2007, we successfully establish comprehensive links over $\mathbb{F}_p$, by reproducing the proofs and offering alternatives where necessary. Interesting and important differences between $\mathbb{F}_p$ and $\mathbb{F}_2^n$ are observed.
- Zero-correlation linear hulls cannot lead to integral distinguishers in some cases over $\mathbb{F}_p$, while this is always possible over $\mathbb{F}_2^n$, as proven by Sun et al.
- When the newly established links are applied to GMiMC, its impossible differential, zero-correlation linear hull and integral distinguishers can be extended by up to 3 rounds in most cases, and even to an arbitrary number of rounds in some special and limited cases that only appear over $\mathbb{F}_p$. It should be noted that none of these distinguishers invalidate GMiMC's security claims.
The development of the theory over $\mathbb{F}_p$ behind these links, and the properties identified (be they similar or different), will bring a clearer and easier understanding of the security of primitives in this emerging $\mathbb{F}_p$ setting, which we believe will provide useful guidance for future cryptanalysis and design.
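For intuition, the $\mathbb{F}_p$ linear correlation in the sense of Baignères, Stern and Vaudenay can be evaluated numerically for a toy map. This sketch (our own illustration, not from the paper) uses the character sum $\frac{1}{p}\sum_x \omega^{b f(x) - a x}$ with $\omega = e^{2\pi i/p}$, under which a linear map $f(x) = cx$ correlates perfectly exactly when $a \equiv bc \pmod p$:

```python
# Toy F_p linear correlation via complex p-th roots of unity: for
# f: Z_p -> Z_p and masks (a, b), corr = (1/p) * sum_x ω^(b*f(x) - a*x).
# For f(x) = c*x the magnitude is 1 iff a ≡ b*c (mod p), else 0.
import cmath

p = 7
omega = cmath.exp(2j * cmath.pi / p)

def correlation(f, a, b):
    return sum(omega ** ((b * f(x) - a * x) % p) for x in range(p)) / p

c = 3
f = lambda x: (c * x) % p
assert abs(abs(correlation(f, (2 * c) % p, 2)) - 1) < 1e-6  # a = b*c
assert abs(correlation(f, 1, 2)) < 1e-6                     # a != b*c
```

Over $\mathbb{F}_2$ this collapses to the familiar $\pm 1$ Walsh correlation; over $\mathbb{F}_p$ the values are complex, which is one source of the gaps between the two settings discussed above.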
Key-Range Attribute-Based Signatures for Range of Inner Product and Its Applications
In attribute-based signatures (ABS) for range of inner product (ARIP), recently proposed by Ishizaka and Fukushima at ICISC 2022, a secret-key labeled with an $n$-dimensional vector $\mathbf{x}\in\mathbb{Z}_p^n$ for a prime $p$ can be used to sign a message under an $n$-dimensional vector $\mathbf{y}\in\mathbb{Z}_p^n$ and a range $[L,R]=\{L, L+1, \cdots, R-1, R\}$ with $L,R\in\mathbb{Z}_p$ iff their inner product is within the range, i.e., $\langle \mathbf{x}, \mathbf{y} \rangle \in [L,R]\pmod p$. We consider its key-range version, named key-range ARIP (KARIP), where the range $[L,R]$ is associated with a secret-key but not with a signature. We propose three generic KARIP constructions based on linearly homomorphic signatures and non-interactive witness-indistinguishable proofs, which lead to concrete KARIP instantiations secure under standard assumptions with different features in terms of efficiency. We also show that KARIP has various applications, e.g., key-range ABS for range evaluation of polynomials/weighted averages/Hamming distance/Euclidean distance, key-range time-specific signatures, and key-range ABS for hyperellipsoid predicates.
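The signing condition reduces to a tiny predicate. The sketch below is purely illustrative (the helper names are ours, and the real scheme enforces this relation cryptographically rather than evaluating it in the clear):

```python
p = 97  # small prime for illustration only

def inner_product_mod_p(x, y, p):
    """<x, y> mod p for two vectors over Z_p."""
    return sum(xi * yi for xi, yi in zip(x, y)) % p

def karip_predicate(x, y, L, R, p):
    """True iff <x, y> mod p falls in the range {L, ..., R} with L <= R."""
    v = inner_product_mod_p(x, y, p)
    return L <= v <= R

x = [3, 5, 7]   # vector bound to the secret key
y = [2, 4, 6]   # vector bound to the signature: <x, y> = 6 + 20 + 42 = 68
print(karip_predicate(x, y, 60, 70, p))  # True
print(karip_predicate(x, y, 0, 10, p))   # False
```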
Homomorphic Signatures for Subset and Superset Mixed Predicates and Its Applications
In homomorphic signatures for subset predicates (HSSB), each message (to be signed) is a set. Any signature on a set $M$ allows us to derive a signature on any subset $M'\subseteq M$. Its superset version, which should be called homomorphic signatures for superset predicates (HSSP), allows us to derive a signature on any superset $M'\supseteq M$. In this paper, we propose homomorphic signatures for subset and superset mixed predicates (HSSM) as a simple combination of HSSB and HSSP. In HSSM, any signature on a message of a set-pair $(M, W)$ allows us to derive a signature on any $(M', W')$ such that $M'\subseteq M$ and $W'\supseteq W$. We propose an original HSSM scheme which is unforgeable under the decisional linear assumption and completely context-hiding. We show that HSSM has various applications, which include disclosure-controllable HSSB, disclosure-controllable redactable signatures, (key-delegatable) superset/subset predicate signatures, and wildcarded identity-based signatures.
PSI from ring-OLE
Private set intersection (PSI) is one of the most extensively studied instances of secure computation. PSI allows two parties to compute the intersection of their input sets without revealing anything else. Other useful variants include PSI-Payload, where the output includes payloads associated with members of the intersection, and PSI-Sum, where the output includes the sum of the payloads instead of individual ones.
In this work, we make two related contributions. First, we construct simple and efficient protocols for PSI and PSI-Payload from a ring version of oblivious linear function evaluation (ring-OLE) that can be efficiently realized using recent ring-LPN based protocols. A standard OLE over a field F allows a sender with $a,b \in \mathbb{F}$ to deliver $ax+b$ to a receiver who holds $x \in \mathbb{F}$. Ring-OLE generalizes this to a ring $\mathcal{R}$, in particular, a polynomial ring over $\mathbb{F}$. Our second contribution is an efficient general reduction of a variant of PSI-Sum to PSI-Payload and secure inner product.
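As a rough illustration of why a single ring-OLE call suffices for PSI, the following toy sketch uses a classical polynomial encoding of sets: the receiver inputs the roots-polynomial of its set, the sender inputs a random multiplier together with its own roots-polynomial, and the receiver learns $u = r \cdot p_A + p_B$, whose zeros on $A$ reveal exactly the intersection. All parameters and helper names are ours; the actual protocol additionally handles payloads and malicious behavior:

```python
import random

q = 1009  # prime modulus; set elements live in Z_q

def poly_mul(f, g):
    """Schoolbook product of polynomials (low-degree-first) mod q."""
    out = [0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] = (out[i + j] + fi * gj) % q
    return out

def poly_add(f, g):
    n = max(len(f), len(g))
    f, g = f + [0] * (n - len(f)), g + [0] * (n - len(g))
    return [(a + b) % q for a, b in zip(f, g)]

def roots_poly(S):
    """prod_{s in S} (x - s): zero exactly on the set S."""
    f = [1]
    for s in S:
        f = poly_mul(f, [(-s) % q, 1])
    return f

def poly_eval(f, x):
    acc = 0
    for c in reversed(f):
        acc = (acc * x + c) % q
    return acc

A = {3, 17, 42, 99}    # receiver's set
B = {5, 42, 99, 250}   # sender's set
pA, pB = roots_poly(A), roots_poly(B)
r = [random.randrange(1, q) for _ in range(len(pB))]  # sender's random mask

# One ring-OLE call: receiver inputs pA, sender inputs (r, pB),
# and the receiver obtains u = r * pA + pB and nothing else.
u = poly_add(poly_mul(r, pA), pB)

# For a in A: u(a) = pB(a), which vanishes exactly when a is also in B.
intersection = {a for a in A if poly_eval(u, a) == 0}
print(intersection)  # the intersection {42, 99}
```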
Our protocols have a better communication cost than state-of-the-art PSI protocols, especially when requiring security against malicious parties and when allowing input-independent preprocessing. Compared to previous maliciously secure PSI protocols with a similar computational cost, our online communication is 2x better for small sets ($2^8$-$2^{12}$ elements) and 20% better for large sets ($2^{20}$-$2^{24}$). Our protocol is also simpler to describe and implement. We obtain even bigger improvements over the state of the art (4-5x better running time) for our variant of PSI-Sum.
On Extremal Algebraic Graphs and implementations of new cubic Multivariate Public Keys
Algebraic constructions of extremal graph theory have been used efficiently for the construction of Low Density Parity Check codes for satellite communication, for constructions of stream ciphers and post-quantum protocols of Noncommutative Cryptography, and for corresponding El Gamal-type cryptosystems. We briefly survey some results on these applications and present the idea of using algebraic graphs for the development of Multivariate Public Keys (MPK). Some MPK schemes are presented at a theoretical level, and the implementation of one of them is discussed.
On Sustainable Ring-based Anonymous Systems
Anonymous systems (e.g. anonymous cryptocurrencies and updatable anonymous credentials) often follow a construction template where an account can only perform a single anonymous action, which in turn potentially spawns new (and still single-use) accounts (e.g. UTXO with a balance to spend or session with a score to claim). Due to the anonymous nature of the action, no party can be sure which account has taken part in an action and, therefore, must maintain an ever-growing list of potentially unused accounts to ensure that the system keeps running correctly. Consequently, anonymous systems constructed based on this common template are seemingly not sustainable.
In this work, we study the sustainability of ring-based anonymous systems, where a user performing an anonymous action is hidden within a set of decoy users, traditionally called a ``ring''.
On the positive side, we propose a general technique for ring-based anonymous systems to achieve sustainability. Along the way, we define a general model of decentralised anonymous systems (DAS) for arbitrary anonymous actions, and provide a generic construction which provably achieves sustainability. As a special case, we obtain the first construction of anonymous cryptocurrencies achieving sustainability without compromising availability. We also demonstrate the generality of our model by constructing sustainable decentralised anonymous social networks.
On the negative side, we show empirically that Monero, one of the most popular anonymous cryptocurrencies, is unlikely to be sustainable without altering its current ring sampling strategy. The main technical subroutine is a sub-quadratic-time algorithm for detecting used accounts in a ring-based anonymous system.
Finding Desirable Substitution Box with SASQUATCH
This paper presents ``SASQUATCH'', an open-source tool that aids in finding an unknown substitution box (SBox) given its properties. The inspiration for our work can be directly attributed to the DCC 2022 paper by Lu, Mesnager, Cui, Fan and Wang. Taking their work as the foundation (i.e., converting the problem of SBox search to a satisfiability modulo theory instance and then invoking a solver), we extend it in multiple directions (including, but not limited to, coverage of more options, imposing a time limit, parallel execution for multiple SBoxes, and non-bijective SBoxes), and package everything within an easy-to-use interface. We also present ASIC benchmarks for some of the SBoxes.
The Referendum Problem in Anonymous Voting for Decentralized Autonomous Organizations
A natural approach to anonymous voting over Ethereum assumes that there is an off-chain aggregator that performs the following task. The aggregator receives valid signatures of YES/NO preferences from eligible voters and uses them to compute a zk-SNARK proof of the fact that the majority of voters have cast a preference for YES or NO. Then, the aggregator sends the zk-SNARK proof to the smart contract, which verifies the proof and can trigger an action (e.g., a transfer of funds). As the zk-SNARK proof guarantees anonymity, the privacy of the voters is preserved against attackers not colluding with the aggregator. Moreover, if the SNARK proof verification is efficient, the gas cost will be independent of the number of participating voters and of the signatures submitted by voters to the aggregator.
In this paper we show that this naive approach to run referenda over Ethereum can incur severe security problems. We propose both mitigations and hardness results for achieving voting procedures in which the proofs submitted on-chain are either ZK or succinct.
Practical Robust DKG Protocols for CSIDH
A Distributed Key Generation (DKG) protocol is an essential component of threshold cryptography. DKGs enable a group of parties to generate a secret and public key pair in a distributed manner so that the secret key is protected from being exposed, even if a certain number of parties are compromised. Robustness further guarantees that the construction of the key pair is always successful, even if malicious parties try to sabotage the computation. In this paper, we construct two efficient robust DKG protocols in the CSIDH setting that work with Shamir secret sharing. Both proposed protocols are proven to be actively secure in the quantum random oracle model and use an Information Theoretically (IT) secure Verifiable Secret Sharing (VSS) scheme that is built using bivariate polynomials. As a tool, we construct a new piecewise verifiable proof system for structured public keys, which could be of independent interest. In terms of isogeny computations, our protocols outperform the previously proposed DKG protocols CSI-RAShi and Structured CSI-RAShi. For instance, using our DKG protocols, 4 parties can sample a public key of size 4 kB for CSI-FiSh and CSI-SharK, respectively 3.4 and 1.7 times faster than the current alternatives. On the other hand, since we use an IT-secure VSS, the fraction of corrupted parties is limited to less than a third and the communication cost of our schemes scales slightly worse with an increasing number of parties. For a low number of parties, our scheme still outperforms the alternatives in terms of communication.
SMAUG: Pushing Lattice-based Key Encapsulation Mechanisms to the Limits
Recently, NIST has announced Kyber, a lattice-based key encapsulation mechanism (KEM), as a post-quantum standard. However, it is not the most efficient scheme among NIST's KEM finalists: Saber enjoys more compact sizes and faster performance, and Mera et al. (TCHES '21) further pushed its efficiency, proposing an even shorter KEM, Sable. As KEMs are frequently used on the Internet, for instance in the TLS protocol, it is essential to achieve high efficiency while maintaining sufficient security.
In this paper, we further push the efficiency limit of lattice-based KEMs by proposing SMAUG, a new post-quantum KEM scheme whose IND-CCA2 security is based on the combination of MLWE and MLWR problems. We adopt several recent developments in lattice-based cryptography, targeting the \textit{smallest} and the \textit{fastest} KEM while maintaining high enough security against various attacks, with a full-fledged use of sparse secrets. Our design choices allow SMAUG to balance the decryption failure probability and ciphertext sizes without utilizing error correction codes, whose side-channel resistance remains open.
With a constant-time C reference implementation, SMAUG achieves ciphertext sizes up to 12% and 9% smaller than Kyber and Saber, with much faster running time, up to 103% and 58%, respectively. Compared to Sable, SMAUG has the same ciphertext sizes but a larger public key, which gives a trade-off between the public key size versus performance; SMAUG has 39%-55% faster encapsulation and decapsulation speed in the parameter sets having comparable security.
Extremal algebraic graphs, quadratic multivariate public keys and temporal rules
We introduce large groups of quadratic transformations of a vector space over finite fields, defined via symbolic computations using algebraic constructions of extremal graph theory. They can serve as platforms for protocols of Noncommutative Cryptography with security based on the complexity of the word decomposition problem in a noncommutative polynomial transformation group. Modifications of these symbolic computations in the case of large fields of characteristic two allow us to define quadratic bijective multivariate public keys such that the inverses of the public maps have a large polynomial degree. Another family of public keys is defined over an arbitrary commutative ring with unity. We suggest using the constructed protocols for the private delivery of quadratic encryption maps, instead of the public usage of these transformations, i.e., the idea of temporal multivariate rules with their periodical change.
Last updated: 2023-06-06
Differential properties of integer multiplication
In this paper, we study the differential properties of integer multiplication between two $w$-bit integers, resulting in a $2w$-bit integer. Our objective is to gain insights into its resistance against differential cryptanalysis and assess its suitability as a source of non-linearity in symmetric-key primitives.
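For tiny word sizes, such differential properties can be measured exhaustively. The following sketch (our own illustration, not the paper's code) tabulates the XOR-difference distribution of multiplication for 4-bit words:

```python
from collections import Counter
from itertools import product

W = 4                        # word size in bits (tiny, for exhaustive search)
MASK_OUT = (1 << (2 * W)) - 1

def xor_ddt_entry(da, db):
    """Distribution of output XOR-differences of x*y (a 2w-bit value) over
    all w-bit input pairs differing by (da, db). Feasible only for small w."""
    counts = Counter()
    for x, y in product(range(1 << W), repeat=2):
        c  = (x * y) & MASK_OUT
        c2 = ((x ^ da) * (y ^ db)) & MASK_OUT
        counts[c ^ c2] += 1
    return counts

# Zero input difference always yields zero output difference...
assert xor_ddt_entry(0, 0) == Counter({0: (1 << W) ** 2})

# ...while a nonzero difference spreads over many output differences.
dist = xor_ddt_entry(1, 0)
best_dc, best_count = dist.most_common(1)[0]
print(f"max prob for (da,db)=(1,0): {best_count / (1 << W)**2:.3f} at dc={best_dc:#x}")
```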
Private Eyes: Zero-Leakage Iris Searchable Encryption
This work introduces Private Eyes, the first zero-leakage biometric database. The only leakage of the system is unavoidable: 1) the log of the dataset size and 2) the fact that a query occurred. Private Eyes is built from oblivious symmetric searchable encryption. Approximate proximity queries are used: given a noisy reading of a biometric, the goal is to retrieve all stored records that are close enough according to a distance metric.
Private Eyes combines locality-sensitive hashing (LSH) (Indyk and Motwani, STOC 1998) with oblivious maps, which map keywords to values. One computes many LSHs of each record in the database and uses these hashes as keywords in the oblivious map, with the matching biometric readings concatenated as the value. At search time, given a noisy reading, one computes the LSHs and retrieves the disjunction of the resulting values from the map. The underlying oblivious map needs to answer disjunction queries efficiently.
We focus on the iris biometric which requires a large number of LSHs, approximately $1000$. Boldyreva and Tang's (PoPETS 2021) design yields a suitable map for a small number of LSHs (their application was in zero-leakage $k$-nearest-neighbor search).
Our solution is a zero-leakage disjunctive map designed for the setting when most clauses do not match any records. For the iris, on average at most $6\%$ of LSHs match any stored value.
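A minimal (non-oblivious) sketch of this indexing strategy, with a plain dictionary standing in for the oblivious map and toy parameters of our choosing:

```python
import random

random.seed(7)
NBITS, NUM_LSH, PROJ = 64, 20, 8   # toy sizes; the paper uses ~1000 LSHs

# Each LSH projects onto PROJ random bit positions: readings at small
# Hamming distance agree on many projections with high probability.
lshes = [tuple(sorted(random.sample(range(NBITS), PROJ)))
         for _ in range(NUM_LSH)]

def lsh_keys(reading):
    """Keywords are (lsh index, projected bits) pairs."""
    return [(i, tuple((reading >> b) & 1 for b in pos))
            for i, pos in enumerate(lshes)]

index = {}  # stands in for the oblivious map, which hides access patterns

def insert(record_id, reading):
    for key in lsh_keys(reading):
        index.setdefault(key, []).append(record_id)

def search(noisy_reading):
    """Disjunction query: union of the buckets of all matching LSH keys."""
    hits = set()
    for key in lsh_keys(noisy_reading):
        hits.update(index.get(key, []))
    return hits

iris = random.getrandbits(NBITS)
insert("alice", iris)

noisy = iris ^ (1 << 3) ^ (1 << 40) ^ (1 << 57)   # 3 flipped bits
print(search(noisy))   # {'alice'} with overwhelming probability
```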
We evaluate using the ND-0405 dataset; this dataset has $356$ irises suitable for testing. To scale our evaluation, we use a generative adversarial network to produce synthetic irises. Accurate statistics on sizes beyond the available datasets are crucial to optimizing the cryptographic primitives. This tool may be of independent interest.
For the largest tested parameters of a $5000$ synthetic iris database, a search requires $18$ rounds of communication and $25$ms of parallel computation.
Our scheme is implemented and open-sourced.
Towards a Privacy-preserving Attestation for Virtualized Networks
TPM remote attestation makes it possible to verify the integrity of the boot sequence of a remote device. Deep attestation extends that concept to virtualized platforms by allowing attestation of virtual components, the hypervisor, and the link between them. In multi-tenant environments, existing deep attestation solutions offer security and/or efficiency, but no privacy.
In this paper, we propose a privacy-preserving TPM-based deep attestation solution for multi-tenant environments, which provably guarantees: (i) inter-tenant privacy: a tenant cannot tell whether VMs other than its own are hosted on the same machine; (ii) configuration hiding: the hypervisor's configuration, used during attestation, remains hidden from the tenants; and (iii) layer linking: tenants can link hypervisors with the VMs, thus obtaining a guarantee that the VMs are running on specific hardware. We also implement our scheme and show that it is efficient despite the use of complex cryptographic tools.
TLS → Post-Quantum TLS: Inspecting the TLS landscape for PQC adoption on Android
The ubiquitous use of smartphones has contributed to more and more users conducting their online browsing activities through apps, rather than web browsers. In order to provide a seamless browsing experience to the users, apps rely on a variety of HTTP-based APIs and third-party libraries, and make use of the TLS protocol to secure the underlying communication. With NIST's recent announcement of the first standards for post-quantum algorithms, there is a need to better understand the constraints and requirements of TLS usage by Android apps in order to make an informed decision for migration to the post-quantum world.
In this paper, we performed an analysis of TLS usage by highest-ranked apps from Google Play Store to assess the resulting overhead for adoption of post-quantum algorithms. Our results show that apps set up large numbers of TLS connections with a median of 94, often to the same hosts. At the same time, many apps make little use of resumption to reduce the overhead of the TLS handshake. This will greatly magnify the impact of the transition to post-quantum cryptography, and we make recommendations for developers, server operators and the mobile operating systems to invest in making more use of these mitigating features or improving their accessibility. Finally, we briefly discuss how alternative proposals for post-quantum TLS handshakes might reduce the overhead.
On implemented graph based generator of cryptographically strong pseudorandom sequences of multivariate nature
Classical Multivariate Cryptography (MP) searches for special families of functions of the kind ${}^nF = T_1 F T_2$ on the vector space $V = (\mathbb{F}_q)^n$, where $F$ is a quadratic or cubic polynomial map of the space to itself, $T_1$ and $T_2$ are affine transformations, and $T$ is the piece of information such that knowledge of the triple $T_1$, $T_2$, $T$ allows the computation of the preimage $x$ of a given ${}^nF(x)$ in polynomial time $O(n^\alpha)$. Traditionally, $F$ is given by the list of coefficients $C({}^nF)$ of its monomial terms, ordered lexicographically. We consider the inverse problem of MP: finding $T_1$, $T_2$, $T$ for ${}^nF$ given in its standard form. Solving the inverse problem is harder than finding a procedure to compute the preimage of ${}^nF$ in time $O(n^\alpha)$. For general quadratic or cubic maps ${}^nF$ this is an NP-hard problem. In the case of a special family, some arguments for its inclusion in the class NP have to be given.
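Ignoring the extra trapdoor datum $T$, the shape of such a public key can be illustrated with a toy example: an easily invertible triangular quadratic map $F$ hidden between two secret affine maps. Everything below (the field size, the choice of $F$, the matrices) is our own illustration, not a scheme from the paper:

```python
p, n = 101, 3   # toy field and dimension

def mat_vec(M, v):
    return [sum(M[i][j] * v[j] for j in range(n)) % p for i in range(n)]

def mat_inv(M):
    """Gaussian elimination over Z_p for a small invertible matrix."""
    A = [row[:] + [int(i == j) for j in range(n)] for i, row in enumerate(M)]
    for col in range(n):
        piv = next(r for r in range(col, n) if A[r][col] % p)
        A[col], A[piv] = A[piv], A[col]
        inv = pow(A[col][col], p - 2, p)
        A[col] = [a * inv % p for a in A[col]]
        for r in range(n):
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [(a - f * b) % p for a, b in zip(A[r], A[col])]
    return [row[n:] for row in A]

# Central quadratic map F: triangular, hence trivially invertible.
def F(x):
    return [x[0], (x[1] + x[0] * x[0]) % p, (x[2] + x[1] * x[1]) % p]

def F_inv(y):
    x0 = y[0]
    x1 = (y[1] - x0 * x0) % p
    x2 = (y[2] - x1 * x1) % p
    return [x0, x1, x2]

# Secret affine transformations T1, T2 (matrix, shift); both invertible mod p.
M1, c1 = [[2, 3, 1], [1, 1, 0], [0, 5, 1]], [7, 8, 9]
M2, c2 = [[1, 2, 0], [0, 1, 3], [4, 0, 1]], [5, 6, 7]

def public_map(x):                       # ^nF = T1 o F o T2
    u = [(a + b) % p for a, b in zip(mat_vec(M2, x), c2)]
    return [(a + b) % p for a, b in zip(mat_vec(M1, F(u)), c1)]

def invert(y):                           # possible only with the secret triple
    v = mat_vec(mat_inv(M1), [(a - b) % p for a, b in zip(y, c1)])
    u = F_inv(v)
    return mat_vec(mat_inv(M2), [(a - b) % p for a, b in zip(u, c2)])

assert invert(public_map([1, 2, 3])) == [1, 2, 3]
```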
VerifMSI: Practical Verification of Hardware and Software Masking Schemes Implementations
Side-Channel Attacks are powerful attacks which can recover secret information in a cryptographic device by analysing physical quantities such as power consumption. Masking is a common countermeasure to these attacks which can be applied in software and hardware, and consists in splitting the secrets in several parts. Masking schemes and their implementations are often not trivial, and require the use of automated tools to check for their correctness.
In this work, we propose a new practical tool named VerifMSI, which extends an existing verification tool called LeakageVerif targeting software schemes. Compared to LeakageVerif, VerifMSI handles hardware constructs, namely gates and registers, which makes it possible to take glitch propagation into account. Moreover, it includes a new representation of the inputs, making it possible to verify three existing security properties (Non-Interference, Strong Non-Interference, Probe Isolating Non-Interference) as well as a newly defined one called Relaxed Non-Interference, compared to the single Threshold Probing Security property verified by LeakageVerif. Finally, optimisations have been integrated into VerifMSI in order to speed up the verification.
We evaluate VerifMSI on a set of 9 benchmarks from the literature, focusing on the hardware descriptions, and show that it performs well both in terms of accuracy and scalability.
Fast Exhaustive Search for Polynomial Systems over F3
Solving multivariate polynomial systems over finite fields is an important problem in cryptography. For random $\mathbb{F}_2$ low-degree systems with equally many variables and equations, enumeration is more efficient than advanced solvers for all practical problem sizes. Whether the same holds over other fields remained an open problem.
Here we study and propose an exhaustive-search algorithm for low-degree systems over $\mathbb{F}_3$ which is suitable for parallelization. We implemented it on Graphics Processing Units (GPUs) and commodity CPUs. We also analyze its optimizations and its differences from the $\mathbb{F}_2$ case.
We can solve 30+ quadratic equations in 30 variables on an NVIDIA GeForce GTX 980 Ti in 14 minutes; a cubic system takes 36 minutes. This significantly outperforms existing solvers. Using these results, we compare Gröbner bases vs. enumeration for polynomial systems over small fields as the sizes go up.
The Problem of Half Round Key XOR
In the design of GIFT, a half round key XOR is used. This has the undesired consequence that the security against differential/linear attacks is overestimated. This follows from the observation that, in the usual DDT/LAT-based analysis of differential/linear attacks, the inherent assumption is that a full round key is XORed at each round.
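For reference, the difference distribution table (DDT) underlying such an analysis is computed as follows; we use the PRESENT S-box purely as a familiar 4-bit example (the same computation applies to GIFT's S-box or any other):

```python
# PRESENT's 4-bit S-box, used only as an illustrative example.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]

def ddt(sbox):
    """DDT[da][db] counts inputs x with S(x) ^ S(x ^ da) == db."""
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for da in range(n):
        for x in range(n):
            db = sbox[x] ^ sbox[x ^ da]
            table[da][db] += 1
    return table

T = ddt(SBOX)
assert all(sum(row) == 16 for row in T)   # each row is a distribution over db
assert T[0][0] == 16                      # zero difference maps to zero
# Best nontrivial differential probability: max entry / 16.
print(max(T[da][db] for da in range(1, 16) for db in range(16)))  # 4
```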
Compact Lattice Gadget and Its Applications to Hash-and-Sign Signatures
Lattice gadgets and the associated algorithms are the essential building blocks of lattice-based cryptography. In the past decade, they have been applied to build versatile and powerful cryptosystems. However, the practical optimizations and designs of gadget-based schemes generally lag behind their theoretical constructions. For example, gadget-based signatures have an elegant design and the capability of extending to more advanced primitives, but they are far less efficient than other lattice-based signatures.
This work aims to improve the practicality of gadget-based cryptosystems, with a focus on hash-and-sign signatures. To this end, we develop a compact gadget framework in which the used gadget is a square matrix instead of the short and fat one used in previous constructions. To work with this compact gadget, we devise a specialized gadget sampler, called the semi-random sampler, to compute the approximate preimage. It first deterministically computes the error and then randomly samples the preimage. We show that for uniformly random targets, the preimage and error distributions are simulatable without knowing the trapdoor. This ensures the security of the signature applications. Compared to the Gaussian-distributed errors in previous algorithms, the deterministic errors have a smaller size, which leads to a substantial gain in security and enables a practically working instantiation.
As applications, we present two practically efficient gadget-based signature schemes, based on NTRU and Ring-LWE respectively. The NTRU-based scheme offers efficiency comparable to Falcon and Mitaka and a simple implementation, without the need to generate an NTRU trapdoor. The LWE-based scheme also achieves a desirable overall performance. It not only greatly outperforms the state-of-the-art LWE-based hash-and-sign signatures, but also has an even smaller size than the LWE-based Fiat-Shamir signature scheme Dilithium. These results fill the long-standing gap in practical gadget-based signatures.
SoK: Distributed Randomness Beacons
Motivated and inspired by the emergence of blockchains, many new protocols have recently been proposed for generating publicly verifiable randomness in a distributed yet secure fashion. These protocols work under different setups and assumptions, use various cryptographic tools, and entail unique trade-offs and characteristics. In this paper, we systematize the design of distributed randomness beacons (DRBs) as well as the cryptographic building blocks they rely on. We evaluate protocols on two key security properties, unbiasability and unpredictability, and discuss common attack vectors for predicting or biasing the beacon output and the countermeasures employed by protocols. We also compare protocols by communication and computational efficiency. Finally, we provide insights on the applicability of different protocols in various deployment scenarios and highlight possible directions for further research.
Safeguarding Physical Sneaker Sale Through a Decentralized Medium
Sneakers were designated as the most counterfeited fashion item online, with three times more risk in a trade than any other fashion purchase. As the market expands, the current sneaker scene displays several vulnerabilities and trust flaws, mostly related to the legitimacy of assets or actors. In this paper, we investigate various blockchain-based mechanisms to address these large-scale trust issues. We argue that (i) pre-certified and tracked assets through the use of non-fungible tokens can ensure the genuine nature of an asset and authenticate its owner more effectively during peer-to-peer trading across a marketplace; (ii) a game-theoretic-based system with economic incentives for participating users can greatly reduce the rate of online fraud and address missed delivery deadlines; (iii) a decentralized dispute resolution system biased in favour of an honest party can solve potential conflicts more reliably.
A Note on ``A Secure Anonymous D2D Mutual Authentication and Key Agreement Protocol for IoT''
We show that the key agreement scheme [Internet of Things, 2022(18): 100493] is flawed. (1) It neglects the structure of an elliptic curve and presents some false computations. (2) The scheme is insecure against key compromise impersonation attack.
On Perfect Linear Approximations and Differentials over Two-Round SPNs
Recent constructions of (tweakable) block ciphers with an embedded cryptographic backdoor relied on the existence of probability-one differentials or perfect (non-)linear approximations over a reduced-round version of the primitive. In this work, we study how the existence of probability-one differentials or perfect linear approximations over two rounds of a substitution-permutation network can be avoided by design. More precisely, we develop criteria on the S-box and the linear layer that guarantee the absence of probability-one differentials for all keys. We further present an algorithm that allows one to efficiently exclude the existence of keys for which there exists a perfect linear approximation.
Not so Difficult in the End: Breaking the Lookup Table-based Affine Masking Scheme
The lookup table-based masking countermeasure is prevalent in real-world applications due to its potent resistance against side-channel attacks and low computational cost. The ASCADv2 dataset, for instance, ranks among the most secure publicly available datasets today due to two layers of countermeasures: lookup table-based affine masking and shuffling. Current attack approaches rely on strong assumptions. In addition to requiring access to the source code, an adversary would also need prior knowledge of random shares.
This paper forgoes reliance on such knowledge and proposes two attack approaches based on vulnerabilities of the lookup table-based affine masking implementation. As a result, the first attack can recover all secret keys in less than a minute without knowing the mask shares. Although the second attack is not entirely successful in recovering all keys, we believe more traces would help make such an attack fully functional.
Non-Interactive Commitment from Non-Transitive Group Actions
Group actions are becoming a viable option for post-quantum cryptography assumptions. Indeed, in recent years some works have shown how to construct primitives from assumptions based on isogenies of elliptic curves, such as CSIDH, on tensors or on code equivalence problems. This paper presents a bit commitment scheme, built on non-transitive group actions, which is shown to be secure in the standard model, under the decisional Group Action Inversion Problem. In particular, the commitment is computationally hiding and perfectly binding, and is obtained from a novel and general framework that exploits the properties of some orbit-invariant functions, together with group actions. Previous constructions depend on an interaction between the sender and the receiver in the commitment phase, which results in an interactive bit commitment. We instead propose the first non-interactive bit commitment based on group actions. Then we show that, when the sender is honest, the constructed commitment enjoys an additional feature, i.e., it is possible to tell whether two commitments were obtained from the same input, without revealing the input. We define the security properties that such a construction must satisfy, and we call this primitive linkable commitment. Finally, as an example, an instantiation of the scheme using tensors with coefficients in a finite field is provided. In this case, the invariant function is the computation of the rank of a tensor, and the cryptographic assumption is related to the Tensor Isomorphism problem.
Composing Bridges
The present work builds on previous investigations of the authors (and their collaborators) regarding bridges, a certain type of morphisms between encryption schemes, making a step forward in developing a (category theory) language for studying relations between encryption schemes. Here we analyse the conditions under which bridges can be performed sequentially, formalizing the notion of composability. One of our results gives a sufficient condition for a pair of bridges to be composable. We illustrate that composing two bridges, each independently satisfying a previously established IND-CPA security definition, can actually lead to an insecure bridge. Our main result gives a sufficient condition that a pair of secure composable bridges should satisfy in order for their composition to be a secure bridge. We also introduce the concept of a complete bridge and show that it is connected to the notion of Fully composable Homomorphic Encryption (FcHE), recently considered by Micciancio. Moreover, we show that a result of Micciancio which gives a construction of FcHE schemes can be phrased in the language of complete bridges, where his insights can be formalised in a greater generality.
A Fast RLWE-Based IPFE Library and its Application to Privacy-Preserving Biometric Authentication
With the increased use of data and communication through the internet and the abundant misuse of personal data by many organizations, people are more sensitive about their privacy. Privacy-preserving computation is becoming increasingly important in this era. Functional encryption allows a user to evaluate a function on encrypted data without revealing sensitive information. Most implementations of functional encryption schemes are too time-consuming for practical use. Mera et al. first proposed an inner product functional encryption scheme based on ring learning with errors to improve efficiency. In this work, we optimize the implementation of their work and propose a fast inner product functional encryption library. Specifically, we identify the main performance bottleneck, which is the number-theoretic transform (NTT) based polynomial multiplication used in the scheme. We also identify the micro- and macro-level parallel components of the scheme and propose novel techniques to improve efficiency using $\textit{open multi-processing}$ and $\textit{advanced vector extensions 2}$ vector processors. Compared to the original implementation, our optimization methods translate to $89.72\%$, $83.06\%$, $59.30\%$, and $53.80\%$ improvements in the $\textbf{Setup}$, $\textbf{Encrypt}$, $\textbf{KeyGen}$, and $\textbf{Decrypt}$ operations respectively, in the scheme for the standard security level. Designing privacy-preserving applications using functional encryption is ongoing research. Therefore, as an additional contribution to this work, we design a privacy-preserving biometric authentication scheme using inner product functional encryption primitives.
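For intuition, NTT-based polynomial multiplication evaluates both polynomials at the powers of a root of unity, multiplies pointwise, and interpolates back. The sketch below uses our own toy parameters, a naive $O(n^2)$ transform instead of the $O(n \log n)$ butterflies that real libraries vectorize, and cyclic rather than the negacyclic convolution that RLWE schemes actually use:

```python
import random

q, n = 257, 16          # prime with n | q - 1, so an n-th root of unity exists
g = 3                   # primitive root modulo 257
omega = pow(g, (q - 1) // n, q)   # primitive n-th root of unity

def ntt(a, w):
    """Naive O(n^2) evaluation of polynomial a at the powers of w, mod q."""
    return [sum(a[j] * pow(w, i * j, q) for j in range(n)) % q
            for i in range(n)]

def cyclic_mul(a, b):
    """a * b mod (x^n - 1, q): transform, multiply pointwise, invert, rescale."""
    A, B = ntt(a, omega), ntt(b, omega)
    C = [x * y % q for x, y in zip(A, B)]
    n_inv = pow(n, q - 2, q)
    return [c * n_inv % q for c in ntt(C, pow(omega, q - 2, q))]

def schoolbook(a, b):
    """Reference cyclic convolution, for checking the NTT route."""
    out = [0] * n
    for i in range(n):
        for j in range(n):
            out[(i + j) % n] = (out[(i + j) % n] + a[i] * b[j]) % q
    return out

random.seed(0)
a = [random.randrange(q) for _ in range(n)]
b = [random.randrange(q) for _ in range(n)]
assert cyclic_mul(a, b) == schoolbook(a, b)
```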
MUSES: Efficient Multi-User Searchable Encrypted Database
Searchable encrypted systems enable privacy-preserving keyword search on encrypted data. Symmetric systems achieve high efficiency (e.g., sublinear search), but they mostly support only single-user search. Although systems based on public-key or hybrid models support multi-user search, they incur inherent security weaknesses (e.g., keyword-guessing vulnerabilities) and scalability limitations due to costly public-key operations (e.g., pairing). More importantly, most encrypted search designs leak statistical information (e.g., search, result, and volume patterns) and are thus vulnerable to devastating leakage-abuse attacks. Some pattern-hiding schemes have been proposed; however, they incur significant user bandwidth/computation costs and are therefore undesirable for large-scale outsourced databases with resource-constrained users.
In this paper, we propose MUSES, a new multi-writer encrypted search platform that addresses the functionality, security, and performance limitations of existing encrypted search designs. Specifically, MUSES permits single-reader, multi-writer functionalities with permission revocation and hides all statistical information (including search, result, and volume patterns) while featuring minimal user overhead. In MUSES, we demonstrate a unique incorporation of various emerging distributed cryptographic protocols, including Distributed Point Functions, Distributed PRFs, and Oblivious Linear Group Actions. We also introduce novel distributed protocols for oblivious counting and shuffling on arithmetic shares for the general multi-party setting with a dishonest majority, which may prove useful in other applications. Our experimental results show that keyword search with MUSES is two orders of magnitude faster, with up to 97× lower user bandwidth cost, than the state of the art.
Lower Bounds for Lattice-based Compact Functional Encryption
Functional encryption (FE) is a primitive where the holder of a master secret key can control which functions a user can evaluate on encrypted data. It is a powerful primitive that even implies indistinguishability obfuscation (iO), given sufficiently compact ciphertexts (Ananth-Jain, CRYPTO'15 and Bitansky-Vaikuntanathan, FOCS'15). However, despite being extensively studied, there are FE schemes, such as function-hiding inner-product FE (Bishop-Jain-Kowalczyk, AC'15; Abdalla-Catalano-Fiore-Gay-Ursu, CRYPTO'18) and compact quadratic FE (Baltico-Catalano-Fiore-Gay and Lin, CRYPTO'17), that can only be realized using pairings. This raises the question of whether there are mathematical barriers that hinder us from realizing these FE schemes from other assumptions.
In this paper, we study the difficulty of constructing lattice-based compact FE. We generalize the impossibility results of Ünal (EC'20) for lattice-based function-hiding FE and extend them to the case of compact FE. Concretely, we prove lower bounds for lattice-based compact FE schemes which meet some (natural) algebraic restrictions at encryption and decryption, and have ciphertexts of linear size and secret keys of minimal degree. We see our results as important indications of why it is hard to construct lattice-based FE schemes for new functionalities, and of which mathematical barriers have to be overcome.
A Guide to the Design of Digital Signatures based on Cryptographic Group Actions
Cryptography based on group actions has been studied since 1990.
In recent years, however, the area has seen a revival, partially due to its role in post-quantum cryptography. For instance, several works have proposed signature schemes based on group actions, as well as a variety of techniques aimed at improving their performance and efficiency. Most of these techniques can be explained as transforming one Sigma protocol into another, while essentially preserving security. In this work, we present a unified taxonomy of such techniques. In particular, we describe all techniques in a single fashion, show how they impact the performance of the resulting protocols and analyse in detail how different techniques can be combined for optimal performance. Furthermore, to provide a tangible perspective, we apply the results of our analysis to the (group action-based) candidates in the current NIST call for digital signatures. This gives a full overview of the state of the art of signatures based on group actions, as well as a flexible tool which is easy to adapt and employ in the design of future schemes.
Generic Error SDP and Generic Error CVE
This paper introduces a new family of CVE schemes built from generic errors (GE-CVE) and identifies a vulnerability therein. To introduce the problem, we generalize the concept of error sets beyond those defined by a metric, and use the set-theoretic difference operator to characterize when these error sets are detectable or correctable by codes. We prove the existence of a general, metric-less form of the Gilbert-Varshamov bound, and show that - as in the Hamming setting - a random code corrects a generic error set with overwhelming probability. We define the generic error SDP (GE-SDP), which is contained in the complexity class of NP-hard problems, and use its hardness to demonstrate the security of GE-CVE. We prove that these schemes are complete, sound, and zero-knowledge. Finally, we identify a vulnerability of the GE-SDP for codes defined over large extension fields whose rate is not very high. We show that certain GE-CVE parameters suffer from this vulnerability, notably the restricted CVE scheme.
Towards High-speed ASIC Implementations of Post-Quantum Cryptography
In this brief, we present architectural techniques for improving the performance of post-quantum cryptography (PQC) algorithms when implemented as hardware accelerators on an application-specific integrated circuit (ASIC) platform. Taking SABER as a case study, we designed a 256-bit wide architecture geared for high-speed cryptographic applications that incorporates smaller and distributed SRAM memory blocks. Moreover, we have adapted the building blocks of SABER to process 256-bit words. We have also used a buffer technique for efficient polynomial coefficient multiplications to reduce the clock cycle count. Finally, double sponge functions are combined serially (one after another) in a high-speed KECCAK core to improve the hash operations of SHA/SHAKE. For the key-generation, encapsulation, and decapsulation operations of SABER, our 256-bit wide accelerator with a single sponge function is 1.71x, 1.45x, and 1.78x faster, respectively, in raw clock cycle count than a serialized SABER design. Similarly, our 256-bit implementation with double sponge functions takes 1.08x, 1.07x, and 1.06x fewer clock cycles than its single-sponge counterpart. The studied optimization techniques are not specific to SABER - they can be utilized to improve the performance of other lattice-based PQC accelerators.
SOK: Research Motivations of Public-Key Cryptography
The design, proposal, and analysis of cryptographic primitives and protocols (schemes) constitute one of the primary research fields in cryptology. To advance this field, it is crucial to fully understand its research motivations. In this paper, we systematically introduce the research motivations for designing and proposing new schemes in public-key cryptography. We find that all research motivations aim to produce benefits for humanity, including efficiency, security, and functionality, although some of them may not be obvious or may only hold conditionally. We categorize the benefits in research motivations into 3 ways, 6 types, and 17 areas. As examples, we introduce 40 research strategies within these areas for exploring benefits, each presented as ``From less-adj (in the first scheme) To more-adj (in the second scheme)'', where ``adj'' refers to an adjective representing a positive outcome. This SOK paper aims to provide valuable insights into the driving forces behind advancements in public-key cryptography, facilitating future research efforts in this field.
A Two-Party Hierarchical Deterministic Wallets in Practice
The applications of Hierarchical Deterministic Wallets are rapidly growing in various areas such as cryptocurrency exchanges and hardware wallets, and improving their privacy and security is more important than ever. In this study, we propose a protocol that fully supports two-party computation of BIP32. Our protocol, similar to distributed key generation, can generate each party's secret share, the common chain-code, and the public key without revealing the seed or any descendant private keys. We also provide a simulation-based proof of our protocol assuming a rushing, static, and malicious adversary in the hybrid model. Our master key generation protocol leaks at most a total of two bits from an honest party, given that the seeds are re-selected after each execution. The proposed hardened child key derivation protocol leaks at most one bit from an honest party in the worst case of the simulation, and this leakage accumulates with each execution. Fortunately, in practice, this issue can be largely mitigated by adding validation criteria to the Boolean circuits and masking the input shares before each execution. We then implemented the proposed protocol and ran it in a single thread on a laptop, obtaining practically acceptable execution times. Lastly, the outputs of our protocol can be easily integrated with many threshold signing protocols.
KAIME : Central Bank Digital Currency with Realistic and Modular Privacy
Recently, with the increasing interest in Central Bank Digital Currency (CBDC), many countries have been researching and developing digital currencies. The most important reasons for this interest are that CBDC eliminates the disadvantages of traditional currencies and provides a safer, faster, and more efficient payment system. These benefits also come with challenges, such as safeguarding individuals' privacy and ensuring regulatory mechanisms. While most research addresses the privacy conflict between users and regulatory agencies, it misses an important detail: banks and financial institutions are central parts of any financial system. Although some studies ignore the need for privacy and include these institutions in the CBDC system, no system currently offers a solution to the privacy conflict between banks, financial institutions, and users. In this study, we offer a solution not only to the privacy conflict between the user and the regulatory agencies, but also to the privacy conflict between the user and the banks. Our solution, KAIME, also has a modular structure: the privacy of the sender and receiver can be hidden if desired. Compared to previous related research, the security analysis and implementation of KAIME are substantially simpler because only simple and well-known cryptographic methods are used.
Optimizing Attribute-based Encryption for Circuits using Compartmented Access Structures
Attribute-based encryption (ABE) is an asymmetric encryption method that allows expressive access-granting mechanisms, with high applicability in modern IT infrastructure such as Cloud or IoT systems (Ezhilarasi et al., 2021; Touati and Challal, 2016). One open problem regarding ABE is using Boolean circuits as access structures. While Boolean formulae have been supported since the first proposed ABE scheme, there is still no efficient construction that supports Boolean circuits. We propose a new ABE scheme for a new access structure type, situated between Boolean formulae and Boolean circuits in terms of expressiveness. The key point in our construction is the usage of CAS-nodes, a structure modeling compartmented group access structures. We also show that our CAS-nodes can be used to improve the efficiency of existing ABE schemes for Boolean circuits. Our construction is secure in the Selective Set Model under the bilinear Decisional Diffie-Hellman Assumption.
On the Quantum Security of HAWK
In this paper, we prove the quantum security of the signature scheme HAWK, proposed by Ducas, Postlethwaite, Pulles and van Woerden (ASIACRYPT 2022). More precisely, we reduce its strong unforgeability in the quantum random oracle model (QROM) to the hardness of the one-more SVP problem, which is the computational problem on which also the classical security analysis of HAWK relies. Our security proof deals with the quantum aspects in a rather black-box way, making it accessible also to non-quantum-experts.
PriFHEte: Achieving Full-Privacy in Account-based Cryptocurrencies is Possible
In cryptocurrencies, all transactions are public. For their adoption, it is important that these transactions, while publicly verifiable, do not leak information about the identity and the balances of the transactors.
For UTXO-based cryptocurrencies, there are well-established approaches (e.g., ZCash) that guarantee full privacy to the transactors. Full privacy in UTXO means that each transaction is anonymous within the set of all private transactions ever posted on the blockchain.
In contrast, for account-based cryptocurrencies (e.g., Ethereum) full privacy, that is, privacy within the set of all accounts, seems to be impossible to achieve within the constraints of blockchain transactions (e.g., they have to fit in a block).
Indeed, every approach proposed in the literature achieves only a much weaker privacy guarantee called $k$-anonymity, where a transactor is private within a set of $k$ account holders.
$k$-anonymity is achieved by adding $k$ accounts to the transaction, which concretely limits the anonymity guarantee to a very small constant (e.g., ${\sim}64$ for QuisQuis and ${\sim}256$ for anonymous Zether), compared to the set of all possible accounts.
In this paper, we propose a completely new approach that does not achieve anonymity by including more accounts in the transaction, but instead makes the transaction itself ``smarter''.
Our key contribution is to provide a mechanism whereby a compact transaction can be used to correctly update all accounts. Intuitively, this guarantees that all accounts are equally likely to be the recipients/sender of such a transaction.
We, therefore, provide the first protocol that guarantees full privacy in account-based cryptocurrencies: PriFHEte.
The contribution of this paper is theoretical.
Our main objective is to demonstrate that achieving full privacy in account-based cryptocurrencies is actually possible.
We see our work as opening the door to new possibilities for anonymous account-based cryptocurrencies.
Nonetheless, in this paper, we also discuss PriFHEte's potential to be developed in practice by leveraging the power of off-chain scalability solutions such as zk rollups.
Migrating Applications to Post-Quantum Cryptography: Beyond Algorithm Replacement
Post-Quantum Cryptography (PQC) comprises cryptographic algorithms designed to resist the advent of the quantum computer. Most public-key cryptosystems today are vulnerable to quantum attackers, so a global-scale transition to PQC is expected. As a result, several entities foster efforts in PQC standardization, research, development, the creation of Working Groups (WGs), and the issuing of adoption recommendations. However, there is a long road to broad PQC adoption in practice. This position paper describes why migrating to PQC is necessary and gathers evidence that the ``hybrid mode'' can help the migration process. Finally, it stresses that there are risks yet to be considered by the literature: quantum-safe protocols are being evaluated, but more attention (and awareness) is needed for the software and protocols at the application layer. Lastly, this position paper gives further recommendations for a smoother PQC migration.
Kyber terminates
The key generation of the lattice-based key-encapsulation mechanism CRYSTALS-Kyber (or short, just Kyber) involves a rejection-sampling routine to produce coefficients modulo $q=3329$ that look uniformly random. The input to this rejection sampling is output of the SHAKE-128 extendable output function (XOF). If this XOF is modelled as a random oracle with infinite output length, it is easy to see that Kyber terminates with probability 1; however, in this model, for any fixed upper bound on the running time, the probability of terminating within that bound is strictly smaller than 1.
In this short note we show that an (unconditional) upper bound for the running time for Kyber exists. Computing a tight upper bound, however, is (likely to be) infeasible. We remark that the result has no real practical value, except that it may be useful for computer-assisted reasoning about Kyber using tools that require a simple proof of termination.
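The rejection-sampling routine in question can be sketched as follows. This is a simplified rendering of Kyber's coefficient parsing (each 3-byte chunk of XOF output yields two 12-bit candidates), not the reference code:

```python
import hashlib

Q = 3329  # the Kyber modulus

def sample_ntt(seed: bytes, n: int = 256):
    # Request a finite (but generously sized) chunk of SHAKE-128 output.
    # With negligible probability it runs short of accepted candidates --
    # exactly the termination subtlety this note is about.
    stream = hashlib.shake_128(seed).digest(3 * n)
    coeffs = []
    for i in range(0, len(stream) - 2, 3):
        b0, b1, b2 = stream[i:i + 3]
        d1 = b0 + 256 * (b1 % 16)   # low 12 bits of the 3-byte chunk
        d2 = (b1 // 16) + 16 * b2   # high 12 bits
        for d in (d1, d2):
            if d < Q and len(coeffs) < n:
                coeffs.append(d)    # accept: uniform mod Q
    return coeffs                   # rejected candidates are simply skipped
```

Each 12-bit candidate is accepted with probability $3329/4096 \approx 0.81$, so a fixed-length stream fails to fill all $n$ slots only with astronomically small probability.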
Concurrent Security of Anonymous Credentials Light, Revisited
We revisit the concurrent security guarantees of the well-known Anonymous Credentials Light (ACL) scheme (Baldimtsi and Lysyanskaya, CCS'13). This scheme was originally proven secure when executed sequentially, and its concurrent security was left as an open problem.
A later work of Benhamouda et al. (EUROCRYPT'21) gave an efficient attack on ACL when executed concurrently, seemingly resolving this question once and for all.
In this work, we point out a subtle flaw in the attack of Benhamouda et al. on ACL and show, in spite of popular opinion, that it can be proven concurrently secure.
Our modular proof in the algebraic group model uses an ID scheme as an intermediate step and leads to a major simplification of the complex security argument for Abe's Blind Signature scheme by Kastner et al. (PKC'22).
Two-Message Authenticated Key Exchange from Public-Key Encryption
In two-message authenticated key exchange (AKE), the initiator must keep a round state after sending the first round message, because they have to derive their session key after receiving the second round message. Up to now, almost all two-message AKEs constructed from public-key encryption (PKE) achieve only weak security, which does not allow the adversary to obtain the round state. How to support state reveal and thereby obtain a stronger security notion, called IND-AA security, has been an open problem posed by Hövelmann et al. (PKC 2020).
In this paper, we solve the open problem with a generic construction of two-message AKE from any CCA-secure Tagged Key Encapsulation Mechanism (TKEM). Our AKE supports state reveal and achieves IND-AA security. Given that CCA-secure public-key encryption (PKE) implies CCA-secure TKEM, our AKE can be constructed from any CCA-secure PKE with a suitable message space. The abundant choices of CCA-secure PKE schemes lead to many IND-AA secure AKE schemes in the standard model. Moreover, following the online-extractability technique in recent work by Don et al. (Eurocrypt 2022), we can extend the Fujisaki-Okamoto transformation to transform any CPA-secure PKE into a CCA-secure Tagged KEM in the QROM. Therefore, we obtain the first generic construction of IND-AA secure two-message AKE from CPA-secure PKE in the QROM. This construction does not need any signature scheme, which is especially helpful in the post-quantum world, since current quantum-secure PKE schemes are much more efficient than their signature counterparts.
Deniable Cryptosystems: Simpler Constructions and Achieving Leakage Resilience
Deniable encryption (Canetti et al., CRYPTO '97) is an intriguing primitive, which provides a security guarantee against coercion by allowing a sender to convincingly open the ciphertext into a fake message. Despite the notable result by Sahai and Waters (STOC '14) and other efforts in functionality extension, all the deniable public key encryption (DPKE) schemes suffer from intolerable overhead due to heavy building blocks, e.g., translucent sets or indistinguishability obfuscation. Besides, none of them considers the possible damage from leakage in the real world, obstructing these protocols from practical use.
To fill the gap, in this work we first present a simple and generic approach of sender-DPKE from ciphertext-simulatable encryption, which can be instantiated with nearly all the common PKE schemes. The core of this design is a newly-designed framework for flipping a bit-string that offers inverse polynomial distinguishability. Then we theoretically expound and experimentally show how classic side-channel attacks (timing or simple power attacks), can help the coercer to break deniability, along with feasible countermeasures.
Asymmetric Multi-Party Computation
Current protocols for Multi-Party Computation (MPC) consider the setting where all parties have access to similar resources. For example, all parties have access to channels bounded by the same worst-case delay upper bound $\Delta$, and all channels have the same cost of communication. As a consequence, the overall protocol performance (resp. the communication cost) may be heavily affected by the slowest (resp. the most expensive) channel, even when most channels are fast (resp. cheap).
Given this state of affairs, we initiate a systematic study of 'asymmetric' MPC. In asymmetric MPC, the parties are divided into two categories: fast and slow parties, depending on whether they have access to high-end or low-end resources.
We investigate two different models. In the first, we consider asymmetric communication delays: Fast parties are connected via channels with small delay $\delta$ among themselves, while channels connected to (at least) one slow party have a large delay $\Delta \gg \delta$. In the second model, we consider asymmetric communication costs: Fast parties benefit from channels with cheap communication, while channels connected to a slow party have an expensive communication.
We provide a wide range of positive and negative results exploring the trade-offs between the achievable number of tolerated corruptions $t$ and slow parties $s$, versus the round complexity and communication cost in each of the models. Among others, we achieve the following results.
In the model with asymmetric communication delays, focusing on the information-theoretic (i-t) setting:
- An i-t asymmetric MPC protocol with security with abort as long as $t+s < n$ and $t<n/2$, in a constant number of slow rounds.
- We show that achieving an i-t asymmetric MPC protocol for $t+s = n$ with a number of slow rounds independent of the circuit size implies an i-t synchronous MPC protocol with round complexity independent of the circuit size, which is a major open problem in the study of the round complexity of MPC.
- We identify a new primitive, \emph{asymmetric broadcast}, that allows a value to be distributed consistently among the fast parties and, at a later time, the same value to the slow parties. We completely characterize the feasibility of asymmetric broadcast by showing that it is possible if and only if $2t + s < n$.
- An i-t asymmetric MPC protocol with guaranteed output delivery as long as $t+s < n$ and $t<n/2$, in a number of slow rounds independent of the circuit size.
In the model with asymmetric communication cost, we achieve an asymmetric MPC protocol for security with abort for $t+s<n$ and $t<n/2$, based on one-way functions (OWF). The protocol communicates a number of bits over expensive channels that is independent of the circuit size. We conjecture that assuming OWF is needed and further provide a partial result in this direction.
BQP $\neq$ QMA
The relationship between complexity classes BQP and QMA is analogous to the relationship between P and NP. In this paper, we design a quantum bit commitment problem that is in QMA, but not in BQP. Therefore, it is proved that BQP $\neq$ QMA. That is, problems that are verifiable in quantum polynomial time are not necessarily solvable in quantum polynomial time, the quantum analog of P $\neq$ NP.
Building Unclonable Cryptography: A Tale of Two No-cloning Paradigms
Unclonable cryptography builds primitives that enjoy some form of unclonability, such as quantum money, software copy protection, and bounded execution programs. These are impossible in the classical model, as classical data is inherently clonable. Quantum computing, with its no-cloning principle, offers a solution. However, it alone is not enough to realize bounded execution programs; these require one-time memory devices that self-destruct after a single data-retrieval query. Very recently, a new no-cloning technology has been introduced [Eurocrypt'22], showing that unclonable polymers---proteins---can be used to build bounded-query memory devices and unclonable cryptographic applications.
In this paper, we investigate the relation between these two technologies; whether one can replace the other, or complement each other such that combining them brings the best of both worlds. Towards this goal, we review the quantum and unclonable polymer models, and existing unclonable cryptographic primitives. Then, we discuss whether these primitives can be built using the other technology, and show alternative constructions and notions when possible. We also offer insights and remarks for the road ahead. We believe that this study will contribute in advancing the field of unclonable cryptography on two fronts: developing new primitives, and realizing existing ones using new constructions.
Differential Privacy for Free? Harnessing the Noise in Approximate Homomorphic Encryption
Homomorphic Encryption (HE) is a type of cryptography that allows computing on encrypted data, enabling computation on sensitive data to be outsourced securely. Many popular HE schemes rely on noise for their security. On the other hand, Differential Privacy seeks to guarantee the privacy of data subjects by obscuring any one individual's contribution to an output. Many mechanisms for achieving Differential Privacy involve adding appropriate noise. In this work, we investigate the extent to which the noise native to Homomorphic Encryption can provide Differential Privacy "for free".
We identify the dependence of HE noise on the underlying data as a critical barrier to privacy, and derive new results on the Differential Privacy under this constraint. We apply these ideas to a proof of concept HE application, ridge regression training using gradient descent, and are able to achieve privacy budgets of $\varepsilon \approx 2$ after 50 iterations.
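For contrast with the paper's approach, a minimal sketch of a classical mechanism that adds noise deliberately (the textbook Laplace mechanism, not anything from the paper itself):

```python
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    # Textbook epsilon-DP release: perturb the output with Laplace noise
    # of scale sensitivity/epsilon.
    scale = sensitivity / epsilon
    # A Laplace variate is the difference of two i.i.d. exponential variates.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return true_value + noise
```

The paper's question is when the noise already present in approximate HE ciphertexts can substitute for noise added this way, complicated by the fact that HE noise depends on the underlying data.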
PIE: $p$-adic Encoding for High-Precision Arithmetic in Homomorphic Encryption
A large part of current research in homomorphic encryption (HE) aims at making HE practical for real-world applications. In any practical HE system, an important issue is to convert the application data (type) to a data type suitable for the HE scheme.
The main purpose of this work is to investigate an efficient HE-compatible encoding method that is generic, and can be easily adapted to apply to the HE schemes over integers or polynomials.
$p$-adic number theory provides a way to transform rationals to integers, which makes it a natural candidate for encoding rationals. Although one may use naive number-theoretic techniques to perform rational-to-integer transformations without reference to $p$-adic numbers, we contend that the theory of $p$-adic numbers is the proper lens to view such transformations.
In this work we identify mathematical techniques (supported by $p$-adic number theory) as appropriate tools to construct a generic rational encoder which is compatible with HE. Based on these techniques, we propose a new encoding scheme PIE, that can be easily combined with both AGCD-based and RLWE-based HE to perform high precision arithmetic. After presenting an abstract version of PIE, we show how it can be attached to two well-known HE schemes: the AGCD-based IDGHV scheme and the RLWE-based (modified) Fan-Vercauteren scheme. We also discuss the advantages of our encoding scheme in comparison with previous works.
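A minimal sketch of the classical rational-to-integer transformation that underlies such $p$-adic (Hensel-style) encodings; PIE itself is more elaborate, and the parameters here are illustrative only:

```python
def encode(num, den, p=257, r=3):
    # Map the rational num/den (den coprime to p) to an integer mod p^r
    # by multiplying num with the modular inverse of den.
    m = p ** r
    return (num * pow(den, -1, m)) % m

# Ring homomorphism: arithmetic on encodings tracks arithmetic on rationals.
m = 257 ** 3
assert (encode(1, 3) + encode(1, 6)) % m == encode(1, 2)   # 1/3 + 1/6 = 1/2
assert (encode(2, 1) * encode(1, 2)) % m == encode(1, 1)   # 2 * 1/2 = 1
```

Decoding requires mapping such residues back to small rationals (rational reconstruction), which is where the $p$-adic viewpoint advocated above becomes the natural lens.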
Lattice-based, more general anti-leakage model and its application in decentralization
In the case of standard LWE samples $(\mathbf{A},\mathbf{b} = \mathbf{sA + e})$, $\mathbf{A}$ is typically uniformly distributed over $\mathbb{Z}_q^{n \times m}$. Under the decisional LWE (DLWE) assumption, the conditional distribution of $\mathbf{s}$ given $(\mathbf{A}, \mathbf{b})$ is expected to be consistent with the distribution of $\mathbf{s}$. However, when an adversary chooses $\mathbf{A}$ adaptively, the disparity between the two distributions may be larger. In this work, our primary focus is the quantification of the average conditional min-entropy $\tilde{H}_\infty(\mathbf{s}|\mathbf{sA + e})$ of $\mathbf{s}$, where $\mathbf{A}$ is chosen by the adversary. Brakerski and Döttling answered the question in one case: they proved that when $\mathbf{s}$ is chosen uniformly from $\mathbb{Z}_q^n$, it holds that $\tilde{H}_\infty(\mathbf{s}|\mathbf{sA + e}) \varpropto \rho_\sigma(\Lambda_q(\mathbf{A}))$. We prove that for any $d \leq q$, similar results hold when $\mathbf{s}$ is chosen uniformly from $\mathbb{Z}_d^n$ or is sampled from a discrete Gaussian distribution.
As an independent result, we also prove the regularity of the hash function mapped to a prime-order group and its Cartesian product. As an application of the above results, we improve the multi-key fully homomorphic encryption of Brakerski, Halevi, and Polychroniadou (TCC 2017) and affirmatively answer the question raised at the end of their work: we obtain GSW-type ciphertexts rather than Dual-GSW, and the improved scheme has shorter keys and ciphertexts.
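To fix the notation used above, a toy generation of LWE samples with the secret drawn from $\mathbb{Z}_d^n$; the parameters are far too small for security and serve only to illustrate the objects involved:

```python
import random

def lwe_sample(n=8, m=16, q=3329, d=4, bound=2):
    # b = s*A + e (mod q): A uniform over Z_q^{n x m}, s uniform over Z_d^n,
    # and e a small error vector.
    A = [[random.randrange(q) for _ in range(m)] for _ in range(n)]
    s = [random.randrange(d) for _ in range(n)]
    e = [random.randrange(-bound, bound + 1) for _ in range(m)]
    b = [(sum(s[i] * A[i][j] for i in range(n)) + e[j]) % q
         for j in range(m)]
    return A, s, b
```

The paper's question concerns how much min-entropy of `s` survives when `A` is not sampled honestly like this, but chosen adaptively by an adversary.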
Last updated: 2023-05-19
A public-key based secure quantum-communication protocol using entangled qubits
We propose a quantum algorithm that crucially involves the receiver's public key to establish secure communication of an intended message string, using shared entangled qubits. The public key in question is a random bit string that proclaims the sequence of measurement bases used by the receiver. As opposed to known quantum key distribution protocols, wherein a random key string is generated at the end of the communication cycle, here the sender's intended bit string itself is communicated securely. The quantum outlay for the proposed protocol is limited to the sender and receiver sharing pairs of entangled qubits, prepared in a priori known states, besides the unitary manipulations and measurements that the sender and receiver individually perform on their respective qubits, within their confines.
NFT Trades in Bitcoin with Off-chain Receipts
Non-fungible tokens (NFTs) are digital representations of assets stored on a blockchain. They allow content creators to certify the authenticity of their digital assets and to transfer ownership in a transparent and decentralized way. Popular choices of NFT marketplace infrastructure include blockchains with smart contract functionality or layer-2 solutions. Surprisingly, researchers have largely avoided building NFT schemes over Bitcoin-like blockchains, most likely due to high transaction fees in the BTC network and the belief that Bitcoin lacks enough programmability to implement fair exchanges. In this work we fill this gap. We propose an NFT scheme where trades are settled in a single Bitcoin transaction as opposed to executing complex smart contracts. We use zero-knowledge proofs (concretely, recursive SNARKs) to prove that two Bitcoin transactions, the issuance transaction $tx_0$ and the current trade transaction $tx_n$, are linked through a unique chain of transactions. Indeed, these proofs function as “off-chain receipts” of ownership that can be transferred from the current owner to the new owner using an insecure channel. The size of the proof receipt is short, independent of the total current number of trades $n$, and can be updated incrementally by anyone at any time. Marketplaces typically require some degree of token ownership delegation, e.g., escrow accounts, to execute the trade between sellers and buyers that are not online concurrently, and to alleviate transaction fees they resort to off-chain trades. This raises concerns about the transparency and purportedly honest behaviour of marketplaces. We achieve fair and non-custodial trades by leveraging our off-chain receipts and letting the involved parties carefully sign the trade transaction with appropriate combinations of sighash flags.
Universal Hashing Based on Field Multiplication and (Near-)MDS Matrices
In this paper we propose a new construction for building universal hash functions, a specific instance called multi-265, and provide proofs for their universality.
Our construction follows the key-then-hash parallel paradigm.
In a first step, it adds a variable-length input message to a secret key and splits the result into blocks.
Then it applies a fixed-length public function to each block and adds their results to form the output.
The innovation presented in this work lies in the public function: we introduce the multiply-transform-multiply-construction that makes use of field multiplication and linear transformations.
We prove upper bounds on the universality of key-then-hash parallel hash functions built from a public function with our construction, provided the linear transformations are maximum-distance-separable (MDS).
We additionally propose a concrete instantiation of our construction, multi-265, where the underlying public function uses a near-MDS linear transformation, and prove it to be $2^{-154}$-universal.
We also make the reference code for multi-265 available.
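The key-then-hash parallel paradigm described above can be sketched in a few lines. This is a toy illustration over the small field GF(257), with an arbitrary stand-in public function in the multiply-transform-multiply shape; none of the concrete values are the multi-265 specification:

```python
# Toy sketch of the key-then-hash parallel paradigm.
# All names and parameters here are illustrative, not the multi-265 spec:
# we work over the small prime field GF(257) with 2-element blocks.
P = 257          # toy field modulus (multi-265 uses a much larger field)
BLOCK = 2        # field elements per block

def public_function(block):
    # Stand-in for the multiply-transform-multiply public function:
    # multiply, apply a fixed linear transform, multiply again.
    a, b = block
    m = (a * b) % P                      # first field multiplication
    t = ((m + a) % P, (m + b) % P)       # toy "linear transformation"
    return (t[0] * t[1]) % P             # second field multiplication

def hash_parallel(key, msg):
    # 1) add the (long enough) secret key to the message, elementwise
    masked = [(m + k) % P for m, k in zip(msg, key)]
    # 2) split the result into fixed-size blocks
    blocks = [tuple(masked[i:i + BLOCK]) for i in range(0, len(masked), BLOCK)]
    # 3) apply the public function to each block and add the results
    return sum(public_function(b) for b in blocks) % P

digest = hash_parallel(key=[3, 1, 4, 1, 5, 9], msg=[2, 7, 1, 8, 2, 8])
print(digest)
```

The universality argument in the paper bounds how likely two distinct messages collide under a random key; the sketch only shows the data flow of the construction.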
Non-Interactive Zero-Knowledge from Non-Interactive Batch Arguments
Zero-knowledge and succinctness are two important properties that arise in the study of non-interactive arguments. Previously, Kitagawa et al. (TCC 2020) showed how to obtain a non-interactive zero-knowledge (NIZK) argument for NP from a succinct non-interactive argument (SNARG) for NP. In particular, their work demonstrates how to leverage the succinctness property from an argument system and transform it into a zero-knowledge property.
In this work, we study a similar question of leveraging succinctness for zero-knowledge. Our starting point is a batch argument for NP, a primitive that allows a prover to convince a verifier of $T$ NP statements $x_1, \ldots, x_T$ with a proof whose size scales sublinearly with $T$. Unlike SNARGs for NP, batch arguments for NP can be built from group-based assumptions in both pairing and pairing-free groups and from lattice-based assumptions. The challenge with batch arguments is that the proof size is only amortized over the number of instances, but a proof can still encode full information about the witnesses of a small number of instances.
We show how to combine a batch argument for NP with a local pseudorandom generator (i.e., a pseudorandom generator where each output bit only depends on a small number of input bits) and a dual-mode commitment scheme to obtain a NIZK for NP. Our work provides a new generic approach of realizing zero-knowledge from succinctness and highlights a new connection between succinctness and zero-knowledge.
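The parenthetical definition of a local PRG can be illustrated minimally as follows; the predicate and the (publicly fixed) index choices are arbitrary toy values, not a construction from the paper:

```python
import random

def local_prg(seed_bits, out_len, d=5, index_seed=0):
    # "Locality": each output bit reads only d positions of the seed.
    # The positions are fixed public randomness, derived once from index_seed.
    rng = random.Random(index_seed)
    out = []
    for _ in range(out_len):
        idx = rng.sample(range(len(seed_bits)), d)   # d input positions
        x = [seed_bits[j] for j in idx]
        # toy 5-ary predicate: an XOR part plus one AND for non-linearity
        out.append(x[0] ^ x[1] ^ x[2] ^ (x[3] & x[4]))
    return out

seed = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
stream = local_prg(seed, out_len=16)
print(stream)
```

Note the stretch: a short seed yields a longer output, while each output bit touches only `d` seed positions, which is the property the NIZK construction exploits.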
VeriVoting: A decentralized, verifiable and privacy-preserving scheme for weighted voting
Decentralization, verifiability, and privacy preservation are three fundamental properties of modern e-voting. In this paper, we conduct extensive investigations into them and present a novel e-voting scheme, VeriVoting, which is the first to satisfy all three. More specifically, decentralization is realized through blockchain technology and the distribution of decryption power among competing entities, such as candidates. Verifiability is satisfied when the public can verify the ballots and decryption keys. Finally, bidirectional unlinkability is achieved to help preserve privacy by decoupling voter identity from ballot content. Following the ideas above, we first leverage linear homomorphic encryption schemes and non-interactive zero-knowledge argument systems to construct a voting primitive, SemiVoting, which meets decentralization, decryption-key verifiability, and ballot privacy. To further achieve ballot-ciphertext verifiability and anonymity, we extend this primitive with blockchain and verifiable computation to finally arrive at VeriVoting. Through security analysis and performance evaluations, we show that VeriVoting offers a new trade-off between security and efficiency that differs from all previous e-voting schemes and provides a radically novel practical approach to large-scale elections.
LeakyOhm: Secret Bits Extraction using Impedance Analysis
The threat of physical side-channel attacks and their countermeasures is a widely researched field.
Most physical side-channel attacks rely on the unavoidable influence of computation or storage on voltage or current fluctuations.
Such data-dependent influence can be exploited by, for instance, power or electromagnetic analysis.
In this work, we introduce a novel non-invasive physical side-channel attack, which exploits the data-dependent changes in the impedance of the chip.
Our attack relies on the fact that the temporarily stored contents in registers alter the physical characteristics of the circuit, which results in changes in the die's impedance.
To sense such impedance variations, we deploy a well-known RF/microwave method called scattering parameter analysis, in which we inject sine wave signals with high frequencies into the system's power distribution network (PDN) and measure the echo of the signals.
We demonstrate that, depending on the content bits and the physical location of a register, the reflected signal is modulated differently at various frequency points, enabling the simultaneous and independent probing of individual registers.
Such side-channel leakage violates the $t$-probing security model assumption used in masking, which is a prominent side-channel countermeasure.
To validate our claims, we mount non-profiled and profiled impedance analysis attacks on hardware implementations of unprotected and high-order masked AES.
We show that in the case of a profiled attack, only a single trace is required to recover the secret key.
Finally, we discuss how a specific class of hiding countermeasures might be effective against impedance leakage.
On the Invalidity of LV16/Lin17 Obfuscation Schemes
Indistinguishability obfuscation (IO) has been at the frontier of cryptography research for several years. The LV16/Lin17 obfuscation schemes are notable progress toward simplifying the obfuscation mechanism. In fact, these two schemes only constructed two compact functional encryption (CFE) algorithms, while the remaining components were taken from the AJ15 or BV15 IO frameworks. That is, the CFE algorithms are inserted into the AJ15 or BV15 IO framework to form a complete IO scheme. The basic structure of the two CFE algorithms can be described as follows. The polynomial-time-computable Boolean function is transformed into a group of low-degree, low-locality component functions by using randomized encoding, such that some public combination of the values of the component functions yields the value of the original Boolean function. The encryptor uses constant-degree multilinear maps (rather than polynomial-degree multilinear maps) to encrypt the independent variables of the component functions. The decryptor uses the zero-testing tool of the multilinear maps to obtain the values of the component functions (rather than the values of the independent variables), and then uses the public combination to obtain the value of the original Boolean function.
In this paper we restrict IO to be a real white box (RWB). Under this restriction we point out that the LV16/Lin17 CFE algorithms, when inserted into the AJ15 IO framework, are invalid. In more detail, such insertion lets the adversary gradually learn the shape of the function, so the scheme is not secure. In other words, such a scheme is not a real IO scheme, but rather a garbling scheme. We argue that the RWB restriction is reasonable, as it captures the essential contribution of IO to cryptography research.
Weak Fiat-Shamir Attacks on Modern Proof Systems
A flurry of excitement amongst researchers and practitioners has produced modern proof systems built using novel technical ideas and seeing rapid deployment, especially in cryptocurrencies. Most of these modern proof systems use the Fiat-Shamir (F-S) transformation, a seminal method of removing interaction from a protocol with a public-coin verifier. Some prior work has shown that incorrectly applying F-S (i.e., using the so-called "weak" F-S transformation) can lead to breaks of classic protocols like Schnorr's discrete log proof; however, little is known about the risks of applying F-S incorrectly for modern proof systems seeing deployment today.
In this paper, we fill this knowledge gap via a broad theoretical and practical study of F-S in implementations of modern proof systems. We perform a survey of open-source implementations and find 36 weak F-S implementations affecting 12 different proof systems. For four of these---Bulletproofs, Plonk, Spartan, and Wesolowski's VDF---we develop novel knowledge soundness attacks accompanied by rigorous proofs of their efficacy. We perform case studies of applications that use vulnerable implementations, and demonstrate that a weak F-S vulnerability could have led to the creation of unlimited currency in a private blockchain protocol. Finally, we discuss possible mitigations and takeaways for academics and practitioners.
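The classic weak-F-S failure on Schnorr's protocol, referenced above, can be reproduced in a few lines. The group, hash, and all parameters below are toy values chosen only for illustration (the hash is forced nonzero so the challenge is always invertible); the point is that hashing only the commitment lets a forger derive a verifying statement after the fact:

```python
import hashlib

p, q, g = 2039, 1019, 4            # toy safe-prime group; g has order q

def H(*items):                     # toy hash into [1, q-1]
    h = hashlib.sha256(b"|".join(str(i).encode() for i in items)).digest()
    return 1 + int.from_bytes(h, "big") % (q - 1)

def verify(y, a, s, strong):
    c = H(g, y, a) if strong else H(a)      # strong vs weak challenge
    return pow(g, s, p) == a * pow(y, c, p) % p

# Honest proof of knowledge of x with y = g^x: accepted under both variants.
x, r = 123, 45
y, a = pow(g, x, p), pow(g, r, p)
for strong in (False, True):
    c = H(g, y, a) if strong else H(a)
    assert verify(y, a, (r + c * x) % q, strong)

# Weak-F-S forgery: fix a and s first, compute c = H(a), then *derive*
# a statement y_f that verifies -- without knowing its discrete log.
a_f, s_f = pow(g, 7, p), 11
c = H(a_f)
base = pow(g, s_f, p) * pow(a_f, p - 2, p) % p     # g^s * a^{-1}
y_f = pow(base, pow(c, -1, q), p)                  # so y_f^c == g^s * a^{-1}
print(verify(y_f, a_f, s_f, strong=False))         # forgery accepted
```

Under strong F-S the challenge also binds $(g, y)$, so the statement cannot be chosen after the challenge, which closes exactly this adaptive attack.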
Invertible Quadratic Non-Linear Functions over $\mathbb F_p^n$ via Multiple Local Maps
The construction of invertible non-linear layers over $\mathbb F_p^n$ that minimize the multiplicative cost is crucial for the design of symmetric primitives targeting Multi Party Computation (MPC), Zero-Knowledge proofs (ZK), and Fully Homomorphic Encryption (FHE). At the current state of the art, only a few non-linear functions are known to be invertible over $\mathbb F_p$, such as the power maps $x\mapsto x^d$ for $\gcd(d,p-1)=1$. When working over $\mathbb F_p^n$ for $n\ge2$, a possible way to construct invertible non-linear layers $\mathcal S$ over $\mathbb F_p^n$ is by making use of a local map $F:\mathbb F_p^m\rightarrow \mathbb F_p$ for $m\le n$, that is, $\mathcal S_F(x_0, x_1, \ldots, x_{n-1}) = y_0\|y_1\|\ldots \|y_{n-1}$ where $y_i = F(x_i, x_{i+1}, \ldots, x_{i+m-1})$. This possibility has been recently studied by Grassi, Onofri, Pedicini and Sozzi at FSE/ToSC 2022. Given a quadratic local map $F:\mathbb F_p^m \rightarrow \mathbb F_p$ for $m\in\{1,2,3\}$, they proved that the shift-invariant non-linear function $\mathcal S_F$ over $\mathbb F_p^n$ defined as before is never invertible for any $n\ge 2\cdot m-1$.
In this paper, we address the problem by generalizing this construction. Instead of a single local map, we admit multiple local maps, and we study the creation of nonlinear layers that can be efficiently verified and implemented by a similar shift-invariant lifting. After formally defining the construction, we focus our analysis on the case $\mathcal S_{F_0, F_1}(x_0, x_1, \ldots, x_{n-1}) = y_0\|y_1\|\ldots \|y_{n-1}$ for $F_0, F_1 :\mathbb F_p^2\rightarrow \mathbb F_p$ of degree at most 2. This generalizes the previous construction by using two alternating functions $F_0, F_1$ instead of a single $F$. As our main result, we prove that (i) if $n\ge3$, then $\mathcal S_{F_0, F_1}$ is never invertible if both $F_0$ and $F_1$ are quadratic, and (ii) if $n\ge 4$, then $\mathcal S_{F_0, F_1}$ is invertible if and only if it is a Type-II Feistel scheme.
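A brute-force check at toy parameters illustrates result (i). The two quadratic local maps below are arbitrary examples, not taken from the paper:

```python
# Toy check of the shift-invariant lifting with two alternating local maps,
# for tiny p and n. F0, F1 are arbitrary degree-2 maps chosen only to
# illustrate the construction S_{F0,F1}.
from itertools import product

p, n = 3, 4

def F0(a, b): return (a * b + a) % p        # quadratic local map
def F1(a, b): return (a * a + b) % p        # quadratic local map

def S(x):
    # y_i = F_{i mod 2}(x_i, x_{(i+1) mod n}): alternating local maps,
    # applied in shift-invariant fashion around the state
    return tuple((F0 if i % 2 == 0 else F1)(x[i], x[(i + 1) % n])
                 for i in range(n))

images = {S(x) for x in product(range(p), repeat=n)}
# With both local maps quadratic and n >= 3, the lifted map is not a
# bijection; e.g. these two inputs collide:
print(S((0, 1, 0, 0)), S((0, 2, 0, 0)))
print(len(images), p ** n)
```

Since both outputs coincide and `len(images)` falls short of $p^n$, this particular $\mathcal S_{F_0,F_1}$ is visibly non-invertible, consistent with result (i).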
Abraxas: Throughput-Efficient Hybrid Asynchronous Consensus
Protocols for state-machine replication (SMR) often trade off performance for resilience to network delay. In particular, protocols for asynchronous SMR tolerate arbitrary network delay but sacrifice throughput/latency when the network is fast, while partially synchronous protocols have good performance in a fast network but fail to make progress if the network experiences high delay.
Existing hybrid protocols are resilient to arbitrary network delay and have good performance when the network is fast, but suffer from high overhead (``thrashing'') if the network repeatedly switches between being fast and slow (e.g., in a network that is typically fast but has intermittent message delays).
We propose Abraxas, a generic approach for constructing a hybrid protocol based on any protocol $\Pi_\mathsf{fast}$ and any asynchronous protocol $\Pi_\mathsf{slow}$ to achieve (1)~security and performance equivalent to $\Pi_\mathsf{slow}$ under arbitrary network behavior; (2)~performance equivalent to $\Pi_\mathsf{fast}$ when conditions are favorable. We instantiate Abraxas with the best existing protocols for $\Pi_\mathsf{fast}$ (Jolteon) and $\Pi_\mathsf{slow}$ (2-chain VABA), and show experimentally that the resulting protocol significantly outperforms Ditto, the previous state-of-the-art hybrid protocol.
Applications of Timed-release Encryption with Implicit Authentication
A whistleblower is a person who leaks sensitive information on a prominent individual or organisation engaging in an unlawful or immoral activity.
Whistleblowing has the potential to mitigate corruption and fraud by identifying the misuse of capital.
In extreme cases, whistleblowing can also raise individuals' awareness of unethical practices by highlighting dangerous working conditions.
Obtaining and sharing the sensitive information associated with whistleblowing can carry great risk to the individual or party revealing the data.
In this paper we extend the notion of timed-release encryption to include a new security property which we term implicit authentication, with the goal of making the practice of whistleblowing safer.
We formally define the new primitive of timed-release encryption with implicit authentication (TRE-IA), providing rigorous game-based definitions.
We then build a practical TRE-IA construction that satisfies the security requirements of this primitive, using repeated squaring in an RSA group, and the RSA-OAEP encryption scheme.
We formally prove our construction secure and provide a performance analysis of our implementation in Python, along with recommendations for practical deployment and integration with an existing whistleblowing tool, SecureDrop.
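The delay mechanism underlying the construction, repeated squaring in an RSA group, can be sketched as follows; the primes here are tiny toy values with no security, purely to show why opening is slow without the factorization and fast with it:

```python
def slow_open(x, t, n):
    # Without the factorization of n: t inherently sequential squarings.
    for _ in range(t):
        x = x * x % n
    return x

def fast_open(x, t, p, q):
    # With the trapdoor phi(n) = (p-1)(q-1), the exponent 2^t is first
    # reduced mod phi (Euler's theorem, for x coprime to n).
    phi = (p - 1) * (q - 1)
    return pow(x, pow(2, t, phi), p * q)

p, q = 10007, 10009        # toy primes; real deployments use ~1024-bit ones
t, x = 10**5, 12345
assert slow_open(x, t, p * q) == fast_open(x, t, p, q)
print("sequential and trapdoor openings agree")
```

The gap between the two opening paths is what lets the encryptor set a tunable decryption delay for anyone who does not hold the factorization.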
SoK: Delay-based Cryptography
In this work, we provide a systematisation of knowledge of delay-based cryptography, in which we discuss and compare the existing primitives within cryptography that utilise a time-delay.
We start by considering the role of time within cryptography, explaining broadly what a delay aimed to achieve at its inception and what it achieves now, in the modern age.
We then move on to describing the underlying assumptions used to achieve these goals, and analyse topics including trust, decentralisation and concrete methods to implement a delay.
We then survey the existing primitives, discussing their security properties, instantiations and applications.
We make explicit the relationships between these primitives, identifying a hierarchy and the theoretical gaps that exist.
We end this systematisation of knowledge by highlighting relevant future research directions within the field of delay-based cryptography, from which this area would greatly benefit.
Efficient Accelerator for NTT-based Polynomial Multiplication
The Number Theoretic Transform (NTT) is used to efficiently execute polynomial multiplication. It has become an important part of lattice-based post-quantum methods and the subsequent generation of standard cryptographic systems. However, implementing post-quantum schemes is challenging since they rely on intricate structures. This paper demonstrates how to develop a high-speed NTT multiplier, highly optimized for FPGAs, with few logic resources. We describe a novel architecture for the NTT that leverages unique precomputation. Our method efficiently maps these specific precomputed values into the built-in Block RAMs (BRAMs), which greatly reduces the area and time required for implementation compared to previous works. We have chosen Kyber parameters to implement the proposed architectures. Compared to the most well-known approach for implementing Kyber’s polynomial multiplication using the NTT, the time is reduced by 31%, and AT (area × time) is improved by 25% as a result of the precomputation we suggest in this study. It is worth mentioning that we obtained these improvements while our method does not require any DSPs.
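For readers unfamiliar with NTT-based multiplication, the core transform-multiply-inverse-transform flow looks like this. The parameters are toy values (cyclic convolution mod $X^n - 1$ over GF(257)), not the Kyber setting, which works negacyclically mod $X^n + 1$ with $q = 3329$:

```python
def ntt(a, w, q):
    # Recursive radix-2 decimation-in-time transform: returns A(w^k)
    # for k = 0..n-1, where w is a primitive n-th root of unity mod q.
    n = len(a)
    if n == 1:
        return list(a)
    even = ntt(a[0::2], w * w % q, q)
    odd  = ntt(a[1::2], w * w % q, q)
    out, t = [0] * n, 1
    for i in range(n // 2):
        out[i]          = (even[i] + t * odd[i]) % q
        out[i + n // 2] = (even[i] - t * odd[i]) % q
        t = t * w % q
    return out

def polymul_cyclic(f, g, q=257, n=8):
    # Multiply mod X^n - 1: forward NTT, pointwise product, inverse NTT.
    w = pow(3, (q - 1) // n, q)          # 3 generates GF(257)^*
    H = [x * y % q for x, y in zip(ntt(f, w, q), ntt(g, w, q))]
    inv = ntt(H, pow(w, q - 2, q), q)    # inverse transform (unscaled)
    n_inv = pow(n, q - 2, q)             # scale by n^{-1} mod q
    return [x * n_inv % q for x in inv]

# (1 + 2x)(3 + 4x) = 3 + 10x + 8x^2, padded to length n = 8
print(polymul_cyclic([1, 2] + [0] * 6, [3, 4] + [0] * 6))
```

Hardware accelerators like the one in the paper pipeline exactly these butterfly operations and keep the twiddle factors (the powers of `w`) in precomputed tables, which is where the BRAM mapping above pays off.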
Third-Party Private Set Intersection
Private set intersection (PSI) enables two parties, each holding a private set, to compute their intersection without revealing other information in the process. We introduce a variant of conventional PSI, termed third-party PSI, whereby the intersection output of the two parties is only known to an inputless third party. In this setting, the two parties who participate in the protocol gain no knowledge of the intersection result or of the set contents of the other party. In general, third-party PSI settings arise where there is a need for an external party to obtain the intersection outcome without leakage of additional information to any other party, a setting motivated by its increasing importance in several real-world applications. We describe protocols which achieve this functionality with minimal communication overhead. To the best of our knowledge, our work is the first to explore this variant of PSI.
A note on ``a lightweight mutual authentication and key agreement protocol for remote surgery application in Tactile Internet environment''
We show that the key agreement scheme [Comput. Commun., 2021(170): 1--18] is insecure against impersonation attacks, because there is a trivial equality which results in the loss of data confidentiality.
MPC with Low Bottleneck-Complexity: Information-Theoretic Security and More
The bottleneck-complexity (BC) of secure multiparty computation (MPC) protocols is a measure of the maximum number of bits which are sent and received by any party in a protocol. As the name suggests, the goal of studying BC-efficient protocols is to increase overall efficiency by making sure that the workload in the protocol is somehow ``amortized'' by the protocol participants.
Orlandi et al. (PKC 2022) initiated the study of BC-efficient protocols from simple assumptions in the correlated randomness model and for semi-honest adversaries. In this work, we extend the study of Orlandi et al. in two primary directions: (a) to a larger and more general class of functions and (b) to the information-theoretic setting.
In particular, we offer semi-honest secure protocols for the useful function classes of abelian programs, 'read-$k$' non-abelian programs, and 'read-$k$' generalized formulas.
Our constructions use a novel abstraction, called 'incremental function secret-sharing' (IFSS), that can be instantiated with unconditional security or from one-way functions (with different efficiency trade-offs).
Divide and Rule: DiFA - Division Property Based Fault Attacks on PRESENT and GIFT
The division property, introduced by Todo at Crypto 2015, is one of the most versatile tools in the arsenal of a cryptanalyst, and has given new insights into many ciphers, primarily from an algebraic perspective. On the other end of the spectrum we have fault attacks, which have evolved into the deadliest of all physical attacks on cryptosystems. The current work aims to combine these seemingly distant tools to come up with a new type of fault attack. We show how fault invariants are formed under special input division multi-sets and are independent of the fault injection location. It is further shown that the same division trail can be exploited as a multi-round zero-sum distinguisher to reduce the key-space to practical limits. As a proof of concept, division trails of PRESENT and GIFT are exploited to mount practical key-recovery attacks based on the random nibble fault model. For GIFT-64, we are able to recover the unique master key with 30 nibble faults injected at rounds 21 and 19. For PRESENT-80, DiFA reduces the key-space from $2^{80}$ to $2^{16}$ with 15 faults in round 25, while for PRESENT-128, the unique key is recovered with 30 faults in rounds 25 and 24. This constitutes the best fault attacks on these ciphers in terms of fault injection rounds. We also report an interesting property of fault-induced division trails which shows their inapplicability for attacking GIFT-128. Overall, the usage of division trails in fault-based cryptanalysis showcases new possibilities and reiterates the applicability of classical cryptanalytic tools in physical attacks.
Benchmarking ZK-Circuits in Circom
Zero-knowledge proofs and arithmetic circuits are essential building blocks in modern cryptography, but comparing their efficiency across different implementations can be challenging. In this paper, we address this issue by presenting comprehensive benchmarking results for a range of signature schemes and hash functions implemented in Circom, a popular circuit language that has not been extensively benchmarked before. Our benchmarking statistics include prover time, verifier time, and proof size, and cover a diverse set of schemes including Poseidon, Pedersen, MiMC, SHA-256, ECDSA, EdDSA, Sparse Merkle Tree, and Keccak-256. We also introduce a new Circom circuit and a full JavaScript test suite for the Schnorr signature scheme. Our results offer valuable insights into the relative strengths and weaknesses of different schemes and frameworks, and confirm the theoretical predictions with precise real-world data. Our findings can guide researchers and practitioners in selecting the most appropriate scheme for their specific applications, and can serve as a benchmark for future research in this area.
Private Polynomial Commitments and Applications to MPC
Polynomial commitment schemes allow a prover to commit to a polynomial and later reveal the evaluation of the polynomial on an arbitrary point along with proof of validity. This object is central in the design of many cryptographic schemes such as zero-knowledge proofs and verifiable secret sharing. In the standard definition, the polynomial is known to the prover whereas the evaluation points are not private. In this paper, we put forward the notion of private polynomial commitments that capture additional privacy guarantees, where the evaluation points are hidden from the verifier while the polynomial is hidden from both.
We provide concretely efficient constructions that allow simultaneously batching the verification of many evaluations with a small additive overhead. As an application, we design a new concretely efficient multi-party private set-intersection protocol with malicious security and improved asymptotic communication and space complexities.
We demonstrate the concrete efficiency of our construction via an implementation. Our scheme can prove $2^{10}$ evaluations of a private polynomial of degree $2^{10}$ in 157s. The proof size is only 169KB and the verification time is 11.8s. Moreover, we also implemented the multi-party private set intersection protocol and scaled it to 1000 parties (which has not been shown before). The total running time for $2^{14}$ elements per party is 2,410 seconds. While existing protocols offer better computational complexity, our scheme offers significantly smaller communication and better scalability (in the number of parties) owing to better memory usage.
ParBFT: Faster Asynchronous BFT Consensus with a Parallel Optimistic Path
To reduce latency and communication overhead of asynchronous Byzantine Fault Tolerance (BFT) consensus, an optimistic path is often added, with Ditto and BDT as state-of-the-art representatives. These protocols first attempt to run an optimistic path that is typically adapted from partially-synchronous BFT and promises good performance in good situations. If the optimistic path fails to make progress, these protocols switch to a pessimistic path after a timeout, to guarantee liveness in an asynchronous network. This design crucially relies on an accurate estimation of the network delay Δ to set the timeout parameter correctly. A wrong estimation of Δ can lead to either premature or delayed switching to the pessimistic path, hurting the protocol's efficiency in both cases.
To address the above issue, we propose ParBFT, which employs a parallel optimistic path. As long as the leader of the optimistic path is non-faulty, ParBFT ensures low latency without requiring an accurate estimation of the network delay. We propose two variants of ParBFT, namely ParBFT1 and ParBFT2, with a trade-off between latency and communication. ParBFT1 simultaneously launches the two paths, achieves lower latency under a faulty leader, but has a quadratic message complexity even in good situations. ParBFT2 reduces the message complexity in good situations by delaying the pessimistic path, at the cost of a higher latency under a faulty leader. Experimental results demonstrate that ParBFT outperforms both Ditto and BDT. In particular, when the network condition is bad, ParBFT can reach consensus through the optimistic path, while Ditto and BDT suffer from path switching and have to make progress using the pessimistic path.
A 334µW 0.158mm2 ASIC for Post-Quantum Key-Encapsulation Mechanism Saber with Low-latency Striding Toom-Cook Multiplication Extended Version
The hard mathematical problems that assure the security of our current public-key cryptography (RSA, ECC) will be broken if and when a quantum computer appears, rendering them ineffective for use in the quantum era. Lattice-based cryptography is a novel approach to public-key cryptography whose underlying mathematical problems have (so far) resisted attacks from quantum computers. By choosing a module learning with errors (MLWE) algorithm as the next standard, the National Institute of Standards and Technology (NIST) follows this approach. The multiplication of polynomials is the central bottleneck in the computation of lattice-based cryptography. Because public-key cryptography is mostly used to establish common secret keys, the focus is on compact area and a low power and energy budget, and to a lesser extent on throughput or latency. While most other work focuses on optimizing number theoretic transform (NTT) based multiplications, in this paper we highly optimize a Toom-Cook based multiplier. We demonstrate that a memory-efficient striding Toom-Cook with lazy interpolation results in a highly compact, low-power implementation, which on top enables a very regular memory access scheme. To demonstrate the efficiency, we integrate this multiplier into a Saber post-quantum accelerator, one of the four NIST finalists. Algorithmic innovations to reduce active memory, timely clock gating and a shift-add multiplier have helped to achieve 38\% less power than the state-of-the-art PQC core, 4$\times$ less memory, a 36.8\% reduction in multiplier energy and a 118$\times$ reduction in active power with respect to the state-of-the-art Saber accelerator (not silicon verified). This accelerator consumes $0.158mm^2$ of active area, which is the lowest reported to date despite the process disadvantages relative to the state-of-the-art designs.
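Toom-Cook multiplication follows an evaluate-multiply-interpolate pattern: split each operand into limbs, evaluate at a handful of points, multiply pointwise, and interpolate the product. The sketch below shows that skeleton with generic Lagrange interpolation over exact rationals; a real Toom-Cook (including the striding, lazy-interpolation variant in the paper) hardwires small evaluation points and a fixed inverse matrix instead:

```python
from fractions import Fraction

def polymul_toom_style(f, g):
    # Toom-Cook skeleton: evaluate at 2k-1 points, multiply pointwise,
    # interpolate. Generic Lagrange interpolation keeps this short;
    # it is not the optimized fixed-point/fixed-matrix form.
    m = len(f) + len(g) - 1                    # coefficients in the product
    pts = [Fraction(i) for i in range(m)]      # toy evaluation points
    ev = lambda poly, x: sum(c * x ** i for i, c in enumerate(poly))
    vals = [ev(f, x) * ev(g, x) for x in pts]  # cheap pointwise products

    coeffs = [Fraction(0)] * m
    for j, (xj, yj) in enumerate(zip(pts, vals)):
        # Build the j-th Lagrange basis polynomial prod_{i != j} (x - xi)
        basis, denom = [Fraction(1)], Fraction(1)
        for i, xi in enumerate(pts):
            if i == j:
                continue
            denom *= xj - xi
            basis = [(basis[t - 1] if t > 0 else 0)
                     - xi * (basis[t] if t < len(basis) else 0)
                     for t in range(len(basis) + 1)]
        for t, b in enumerate(basis):
            coeffs[t] += yj * b / denom
    return [int(c) for c in coeffs]

print(polymul_toom_style([1, 2, 3], [4, 5]))   # (1+2x+3x^2)(4+5x)
```

In hardware, the interpolation step dominates the control logic; deferring it ("lazy interpolation") is what enables the regular memory access pattern the paper exploits.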
Secure Context Switching of Masked Software Implementations
Cryptographic software running on embedded devices requires protection against physical side-channel attacks such as power analysis. Masking is a widely deployed countermeasure against these attacks and is implemented directly at the algorithmic level. Many works study the security of masked cryptographic software on CPUs, pointing out potential problems at the algorithmic/microarchitecture level, as well as corresponding solutions, and even show that masked software can be implemented efficiently and with strong (formal) security guarantees. However, these works also make the implicit assumption that software is executed directly on the CPU without any abstraction layers in between, i.e., they focus exclusively on the bare-metal case. Many practical applications, including IoT and automotive/industrial environments, require multitasking embedded OSs on which masked software runs as one of many concurrent tasks. For such applications, the potential impact of events like context switches on the secure execution of masked software has not been studied at all so far.
In this paper, we provide the first security analysis of masked cryptographic software spanning all three layers (SW, OS, CPU). First, we apply a formal verification approach to identify leaks within the execution of masked software that are caused by the embedded OS itself, rather than by the algorithmic or microarchitecture level. After showing that these leaks are primarily caused by context switching, we propose several different strategies to harden a context switching routine against such leakage, ultimately allowing masked software from previous works to remain secure when being executed on embedded OSs. Finally, we present a case study focusing on FreeRTOS, a popular OS for embedded devices, running on a RISC-V core, allowing us to evaluate the practicality and ease of integration of each strategy.
From Unbalanced to Perfect: Implementation of Low Energy Stream Ciphers
Low energy consumption is an important aspect of hardware implementation. For energy-limited battery-powered devices, low-energy stream ciphers can play an important role. In \texttt{IACR ToSC 2021}, Caforio et al. proposed the Perfect Tree energy model for stream ciphers, which links the structure of the combinational logic circuits implementing state update functions to energy consumption. In addition, a metric given by the model correlates negatively with energy consumption: the more balanced the perfect tree, the lower the energy consumption. However, Caforio et al. did not give a method to eliminate imbalances of the unrolled strand tree for existing stream ciphers.
In this paper, based on the Perfect Tree energy model, we propose a new redundant design model that improves the balance of the unrolled strand tree in order to reduce energy consumption. To obtain the redundant design, we propose a search algorithm that returns the corresponding implementation scheme. For existing stream ciphers, the proposed model and search method can provide a low-power redundant design scheme. To verify the effectiveness, we apply our redundant model and search method to stream ciphers (e.g., \texttt{Trivium} and \texttt{Kreyvium}) and conduct a synthesis test. The results of the energy measurements demonstrate that the proposed model and search method achieve lower energy consumption.
Efficient and Secure Quantile Aggregation of Private Data Streams
Computing the quantile of a massive data stream has been a crucial task in networking and data management. However, existing solutions assume a centralized model where one data owner has access to all data. In this paper, we put forward a study of secure quantile aggregation between private data streams, where data streams owned by different parties would like to obtain a quantile of the union of their data without revealing anything else about their inputs. To this end, we designed efficient cryptographic protocols that are secure in the semi-honest setting as well as the malicious setting. By incorporating differential privacy, we further improve the efficiency by 1.1× to 73.1×. We implemented our protocol, which aggregates real-world data streams with practical efficiency.
An Efficient Strategy to Construct a Better Differential on Multiple-Branch-Based Designs: Application to Orthros
As low-latency designs tend to have a small number of rounds to decrease latency, differential-type cryptanalysis can become a significant threat to them.
In particular, since a multiple-branch-based design such as Orthros can exhibit a strong clustering effect on differential attacks due to its large internal state, it is crucial to investigate the impact of the clustering effect in such a design.
In this paper, we present a new SAT-based automatic search method for evaluating the clustering effect in the multiple-branch-based design.
By exploiting an inherent trait of multiple-branch-based designs, our method enables highly efficient evaluation of clustering effects in designs of this type.
We apply our method to the low-latency PRF Orthros and show the best known differential distinguisher, reaching up to 7 rounds of Orthros with $2^{116.806}$ time/data complexity, as well as a 9-round distinguisher for each underlying permutation, which is two rounds more than the longest previously known distinguishers.
Besides, we update the designer's security bound for differential attacks based on lower bounds on the number of active S-boxes, and obtain the optimal differential characteristics of Orthros, Branch 1, and Branch 2 for the first time.
Consequently, we improve the designer's security bound from 9/12/12 to 7/10/10 rounds for Orthros/Branch 1/Branch 2 based on a single differential characteristic.
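The clustering effect quantified by such a search can be illustrated numerically: a differential's probability is the sum of the probabilities of all characteristics sharing its input/output difference, so several characteristics only slightly worse than the best one can noticeably lower the effective weight. The weights below are hypothetical toy values, not the ones found for Orthros.

```python
import math

# Hypothetical characteristic weights (weight = -log2 of probability)
# belonging to one differential: one best characteristic plus eight
# near-optimal ones two bits worse.
weights = [120.0] + [122.0] * 8

# Clustering: the differential's probability is the sum over all
# characteristics with the same input/output difference.
prob = sum(2.0 ** -w for w in weights)
effective_weight = -math.log2(prob)

# 2^-120 + 8 * 2^-122 = 3 * 2^-120, so the effective weight drops
# from 120 to 120 - log2(3) ~= 118.415.
print(round(effective_weight, 3))  # 118.415
```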
Tracing Quantum State Distinguishers via Backtracking
We show the following results:
- The post-quantum equivalence of indistinguishability obfuscation and differing inputs obfuscation in the restricted setting where the outputs differ on at most a polynomial number of points. Our result handles the case where the auxiliary input may contain a quantum state; previous results could only handle classical auxiliary input.
- Bounded collusion traitor tracing from general public key encryption, where the decoder is allowed to contain a quantum state. The parameters of the scheme grow polynomially in the collusion bound.
- Collusion-resistant traitor tracing with constant-size ciphertexts from general public key encryption, again for quantum state decoders. The public key and secret keys grow polynomially in the number of users.
- Traitor tracing with embedded identities in the keys, again for quantum state decoders, under a variety of different assumptions with different parameter size trade-offs.
Traitor tracing and differing inputs obfuscation with quantum decoders / auxiliary input arises naturally when considering the post-quantum security of these primitives. We obtain our results by abstracting out a core algorithmic model, which we call the Back One Step (BOS) model. We prove a general theorem, reducing many quantum results including ours to designing classical algorithms in the BOS model. We then provide simple algorithms for the particular instances studied in this work.
SigRec: Automatic Recovery of Function Signatures in Smart Contracts
Millions of smart contracts have been deployed onto Ethereum to provide various services, whose functions can be invoked by callers. To invoke a function, the caller needs to know the function signature of the callee, which includes its function id and parameter types. Such signatures are critical to many applications focusing on smart contracts, e.g., reverse engineering, fuzzing, attack detection, and profiling. Unfortunately, it is challenging to recover function signatures from contract bytecode, since neither debug information nor type information is present in the bytecode. To address this issue, prior approaches rely on source code, or on collections of known signatures from incomplete databases or incomplete heuristic rules, which are far from adequate and cannot cope with the rapid growth of new contracts. In this paper, we propose a novel solution that leverages how functions are handled by the Ethereum virtual machine (EVM) to automatically recover function signatures. In particular, we exploit how smart contracts determine the functions to be invoked in order to locate and extract function ids, and propose a new approach named type-aware symbolic execution (TASE) that utilizes the semantics of EVM operations on parameters to identify the number and types of the parameters. Moreover, we develop SigRec, a new tool for recovering function signatures from contract bytecode without the need for source code or a function signature database. Extensive experimental results show that SigRec outperforms all existing tools, achieving an unprecedented 98.7 percent accuracy within 0.074 seconds. We further demonstrate that the recovered function signatures are useful in attack detection, fuzzing, and reverse engineering of EVM bytecode.
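The dispatch behavior exploited for locating function ids can be shown with a minimal plaintext sketch (a simulation, not SigRec's actual analysis): a contract's dispatcher compares the first four bytes of the calldata, i.e., the function id, against the ids of its public functions and jumps to the matching entry, falling back otherwise. The two ids below are the well-known ERC-20 selectors for `transfer(address,uint256)` and `balanceOf(address)`; the table and function names are illustrative.

```python
# Minimal simulation of an EVM function dispatcher: the function id is
# the first 4 bytes of calldata, compared against the contract's known ids.
KNOWN_IDS = {
    bytes.fromhex("a9059cbb"): "transfer(address,uint256)",  # ERC-20 selector
    bytes.fromhex("70a08231"): "balanceOf(address)",          # ERC-20 selector
}

def dispatch(calldata: bytes) -> str:
    """Return the signature matched by the calldata's function id."""
    if len(calldata) < 4:
        # Too short to contain a function id: the fallback runs.
        return "fallback()"
    selector = calldata[:4]
    return KNOWN_IDS.get(selector, "fallback()")

# Function id followed by two 32-byte ABI-encoded argument words.
calldata = bytes.fromhex("a9059cbb") + b"\x00" * 64
print(dispatch(calldata))  # transfer(address,uint256)
```

SigRec works in the opposite direction: it locates these comparisons in the bytecode to extract the ids, then recovers the parameter types with type-aware symbolic execution.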