All papers in 2023 (Page 14 of 1971 results)
Proving knowledge of isogenies – A survey
Isogeny-based cryptography is an active area of research in post-quantum public key cryptography. The problem of proving knowledge of an isogeny is a natural problem that has several applications in isogeny-based cryptography, such as allowing users to demonstrate that they are behaving honestly in a protocol. It is also related to isogeny-based digital signatures. Over the last few years, there have been a number of advances in this area, but there are still many open problems. This paper aims to give an overview of the topic and highlight some open problems and directions for future research.
Behemoth: transparent polynomial commitment scheme with constant opening proof size and verifier time
Polynomial commitment schemes are fundamental building blocks in numerous cryptographic protocols such as verifiable secret sharing, zero-knowledge succinct non-interactive arguments, and many more. The most efficient polynomial commitment schemes rely on a trusted setup which is undesirable in trust-minimized applications, e.g., cryptocurrencies. However, transparent polynomial commitment schemes are inefficient (polylogarithmic opening proofs and/or verification time) compared to their trusted counterparts. It has been an open problem to devise a transparent, succinct polynomial commitment scheme or prove an impossibility result in the transparent setting. In this work, for the first time, we create a transparent, constant-size polynomial commitment scheme called Behemoth with constant-size opening proofs and a constant-time verifier. The downside of Behemoth is that it employs a cubic prover in the degree of the committed polynomial. We prove the security of our scheme in the generic group model and discuss parameter settings in which it remains practical even for the prover.
Classical substitution ciphers and group theory
We explore some connections between classical substitution ciphers, both monoalphabetic and polyalphabetic, and mathematical group theory. We try to do this in a way that is accessible to cryptographers who are not familiar with group theory, and to mathematicians who are not familiar with classical ciphers.
Statement-Oblivious Threshold Witness Encryption
The notion of witness encryption introduced by Garg et al. (STOC'13) allows one to encrypt a message under a statement $x$ from some NP-language $\mathcal{L}$ with associated relation $(x,w) \in \mathcal{R}$, where decryption can be carried out with the corresponding witness $w$. Unfortunately, known constructions for general-purpose witness encryption rely on strong assumptions, and are mostly of theoretical interest. To address these shortcomings, Goyal et al. (PKC'22) recently introduced a blockchain-based alternative, where a committee decrypts ciphertexts when provided with a valid witness $w$. Blockchain-based committee solutions have recently gained broad interest as a way to offer security against more powerful adversaries and to construct new cryptographic primitives.
We follow this line of work, and propose a new notion of statement-oblivious threshold witness encryption. Our new notion offers the functionality of committee-based witness encryption while additionally hiding the statement used for encryption. We present two ways to build statement-oblivious threshold witness encryption, one generic transformation based on anonymous threshold identity-based encryption (A-TIBE) and one direct construction based on bilinear maps. Due to the lack of efficient A-TIBE schemes, the former mainly constitutes a feasibility result, while the latter yields a concretely efficient scheme.
New Bounds on the Accuracy of Majority Voting for Multi-Class Classification
Majority voting is a simple mathematical function that returns the value that appears most often in a set. As a popular decision fusion technique, the majority voting function (MVF) finds applications in resolving conflicts, where a number of independent voters report their opinions on a classification problem. Despite its importance and its various applications in ensemble learning, data crowd-sourcing, remote sensing, and data oracles for blockchains, the accuracy of the MVF for the general multi-class classification problem has remained unknown. In this paper, we derive a new upper bound on the accuracy of the MVF for the multi-class classification problem. More specifically, we show that under certain conditions, the error rate of the MVF exponentially decays toward zero as the number of voters increases. Conversely, the error rate of the MVF exponentially grows towards one if these conditions are not met.
We first explore the problem for independent and identically distributed voters where we assume that every voter follows the same conditional probability distribution for voting for different classes, given the true classification of the data point. Next, we extend our results for the case where the voters are independent but non-identically distributed. Using the derived results, we then provide a discussion on the accuracy of the truth discovery algorithms. We show that in the best-case scenarios, truth discovery algorithms operate as an amplified MVF and thereby achieve a small error rate only when the MVF achieves a small error rate, and vice versa, achieve a large error rate when the MVF also achieves a large error rate. In the worst-case scenario, the truth discovery algorithms may achieve a higher error rate than the MVF. Finally, we confirm our theoretical results using simulations.
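To make the i.i.d. setting concrete, here is a small Monte-Carlo sketch (not from the paper) of the decay phenomenon: each voter draws its label from a fixed conditional distribution given that class 0 is the truth, and the majority vote's empirical error rate is measured as the number of voters grows. The 3-class distribution below is an illustrative assumption, not a parameter from the paper.

```python
# Toy Monte-Carlo check of majority-voting error decay for i.i.d. voters
# on a 3-class problem. The per-voter distribution (correct class with
# probability 0.5) is illustrative, not taken from the paper.
import random
from collections import Counter

def mvf_error_rate(n_voters, p_vote=(0.5, 0.3, 0.2), trials=20000):
    """Empirical error rate of the MVF; class 0 is the ground truth."""
    errors = 0
    for _ in range(trials):
        votes = random.choices(range(len(p_vote)), weights=p_vote, k=n_voters)
        winner = Counter(votes).most_common(1)[0][0]
        errors += (winner != 0)
    return errors / trials

for n in (1, 5, 25, 125):
    print(n, mvf_error_rate(n))  # error shrinks roughly exponentially in n
```

Since the correct class is the unique mode of the voter distribution (0.5 versus 0.3 and 0.2), this toy instance satisfies the intuitive version of the paper's condition, and the simulated error indeed decays rapidly with the number of voters.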
Arithmetization of predicates into Halo 2 using application specific trace types
This note provides an update on the Open Specification Language (OSL) circuit compiler. OSL is a language based on predicate logic which is amenable to compilation to arithmetic constraint systems for use in constructing (zk-)SNARKs. This system provides an alternative to universal zk-VMs and to low-level ad hoc constructions of arithmetic constraint systems: it is potentially more efficient than universal zk-VMs and more cost-effective as a development approach than low-level ad hoc constructions.
On the Feasibility of Identity-based Encryption with Equality Test against Insider Attacks
Public key encryption with equality test, proposed by Yang et al. (CT-RSA 2010), allows anyone to check whether two ciphertexts under distinct public keys are encryptions of the same plaintext using trapdoors, and identity-based encryption with equality test (IBEET) is its identity-based variant. As a variant of IBEET, IBEET against insider attacks (IBEETIA) was proposed by Wu et al. (ACISP 2017), where a token is defined for each identity and is used for encryption. Lee et al. (ACISP 2018) and Duong et al. (ProvSec 2019) proposed IBEETIA schemes constructed from identity-based encryption (IBE)-related complexity assumptions. Later, Emura and Takayasu (IEICE Transactions 2023) demonstrated that symmetric key encryption and pseudo-random permutations are sufficient to construct an IBEETIA scheme secure in the previous security definition.
In this paper, we demonstrate a sufficient condition under which IBEETIA implies IBE. We define one-wayness against chosen-plaintext/chosen-ciphertext attacks for the token generator (OW-TG-CPA/CCA) and for token holders (OW-TH-CPA/CCA), notions which were not considered in the previous security definition. We show that OW-TG-CPA secure IBEETIA with additional conditions implies OW-CPA secure IBE. On the other hand, we propose a generic construction of OW-TH-CCA secure IBEETIA from public key encryption.
Our results suggest a design principle to efficiently construct IBEETIA without employing IBE-related complexity assumptions.
MPC in the head for isomorphisms and group actions
In this paper, we take inspiration from an invited talk presented at CBCrypto'23 to design identification protocols and signature schemes from group actions using the MPC-in-the-head paradigm. We prove the security of the given identification schemes and rely on the Fiat-Shamir transformation to turn them into signatures.
We also establish a parallel between the technique used in the MPC-in-the-head approach and the seed tree method that has recently been used in some signature and ring signature algorithms based on group action problems.
NTWE: A Natural Combination of NTRU and LWE
Lattice-based cryptosystems are some of the primary post-quantum secure alternatives to the asymmetric cryptography that is used today. These lattice-based cryptosystems typically rely on the hardness of some version of either the NTRU or the LWE problem. In this paper, we present the NTWE problem, a natural combination of the NTRU and LWE problems, and construct a new lattice-based cryptosystem based on the hardness of the NTWE problem.
As with the NTRU and LWE problems, the NTWE problem naturally corresponds to a problem in a $q$-ary lattice. This allows the hardness of the NTWE problem to be estimated in the same way as it is estimated for the LWE and NTRU problems. We parametrize our cryptosystem from such a hardness estimate and the resulting scheme has performance that is competitive with that of typical lattice-based schemes.
In some sense, our NTWE-based cryptosystem can be seen as a less structured and more compact version of a cryptosystem based on the module-NTRU problem. Thus, parameters for our cryptosystem can be selected with the flexibility of a module-LWE-based scheme, while other properties of our system are more similar to those in an NTRU-based system.
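For readers less familiar with the two ingredients, the following toy sketch (illustrative parameters, nowhere near secure sizes) generates a plain LWE-style sample and NTRU-style key material over the ring $\mathbb{Z}_q[x]/(x^n+1)$; the precise way NTWE blends the two is the paper's contribution and is deliberately not reproduced here.

```python
# Toy LWE and NTRU ingredients over R_q = Z_q[x]/(x^n + 1); parameters
# are illustrative only, and the exact NTWE combination is left to the
# paper rather than guessed at here.
import random

Q, N = 257, 8  # toy modulus and ring degree

def poly_mul(a, b):
    """Multiply in Z_q[x]/(x^n + 1) (negacyclic convolution)."""
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = (i + j) % N
            sign = -1 if i + j >= N else 1
            c[k] = (c[k] + sign * ai * bj) % Q
    return c

small = lambda: [random.choice([-1, 0, 1]) % Q for _ in range(N)]
uniform = lambda: [random.randrange(Q) for _ in range(N)]

# LWE-style sample: (a, b = a*s + e) with small secret s and error e.
s, e, a = small(), small(), uniform()
b = [(x + y) % Q for x, y in zip(poly_mul(a, s), e)]

# NTRU-style key material: small f, g with public key h = g * f^{-1};
# computing f^{-1} in R_q is omitted here (retry if f is not invertible).
f, g = small(), small()
print("LWE sample:", (a, b))
```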
Unconditionally Secure Multiparty Computation for Symmetric Functions with Low Bottleneck Complexity
Bottleneck complexity is an efficiency measure of secure multiparty computation (MPC) introduced by Boyle et al. (ICALP 2018) to achieve load-balancing. Roughly speaking, it is defined as the maximum communication complexity required by any player within the protocol execution. Since it was shown to be impossible to achieve sublinear bottleneck complexity in the number of players $n$ for all functions, prior work constructed MPC protocols with low bottleneck complexity for specific functions. However, the previous protocol for symmetric functions needs to assume garbled circuits as a computational primitive, and its unconditionally secure variant has bottleneck complexity exponential in the depth of an arithmetic formula computing the function, which limits the class of symmetric functions the protocol can compute with sublinear bottleneck complexity in $n$. In this work, we make the following contributions to unconditionally secure MPC protocols for symmetric functions with sublinear bottleneck complexity in $n$.
\begin{itemize}
\item We propose for the first time unconditionally secure MPC protocols computing \textit{any} symmetric function with sublinear bottleneck complexity in $n$. Technically, our first protocol is inspired by the one-time truth-table protocol by Ishai et al. (TCC 2013) but our second and third protocols use a novel technique to express the one-time truth-table as an array of two or higher dimensions and achieve better trade-offs.
\item We propose an unconditionally secure protocol tailored to the AND function with lower bottleneck complexity. It avoids pseudorandom functions used by the previous protocol for the AND function, preserving bottleneck complexity up to a logarithmic factor in $n$.
\item By combining our protocol for the AND function with Bloom filters, we construct an unconditionally secure protocol for private set intersection (PSI), which computes the intersection of players' private sets (a plain sketch of the Bloom-filter encoding follows this list). This is the first PSI protocol with sublinear bottleneck complexity in $n$, and to the best of our knowledge, there has been no such protocol even under cryptographic assumptions.
\end{itemize}
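The Bloom-filter step of the PSI construction can be pictured in the clear as follows; in the actual protocol the position-wise AND of the two filters is computed by the unconditionally secure AND protocol rather than openly, and the filter size and hash count below are illustrative choices.

```python
# Plain (non-private) sketch of the Bloom-filter encoding behind the PSI
# protocol; the per-position AND is done under MPC in the real protocol.
import hashlib

M, K = 256, 3  # filter length and number of hash functions (toy values)

def positions(item):
    return [int.from_bytes(hashlib.sha256(f"{i}:{item}".encode()).digest(),
                           "big") % M for i in range(K)]

def bloom(items):
    bits = [0] * M
    for it in items:
        for p in positions(it):
            bits[p] = 1
    return bits

alice_set = ["alice@x", "carol@z", "dave@w"]
joint = [a & b for a, b in zip(bloom(alice_set), bloom(["carol@z", "erin@v"]))]
print([it for it in alice_set if all(joint[p] for p in positions(it))])
# prints the intersection candidates (false positives are possible)
```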
Study of Arithmetization Methods for STARKs
This technical paper explores two solutions for the arithmetization of computational integrity statements in STARKs, namely the algebraic intermediate representation (AIR) and its preprocessed variant (PAIR). The work then focuses on their soundness implications for Reed-Solomon proximity testing. It proceeds by presenting a comparative study of these methods, providing their theoretical foundations and deriving the degree bounds for low-degree proximity testing. The study shows that using PAIR increases the degree bound for Reed-Solomon proximity testing, which affects its soundness and complexity. However, the possibility of reducing the degree bound with multiple selector columns is also explored, namely by following an approach based on the decomposition of the selector values. Focusing on performance optimization, the work proceeds by qualitatively comparing the computational demands of the components of both arithmetization methods, particularly their impact on the low-degree extensions. The paper concludes that, while PAIR might simplify constraint enforcement, it can be easily translated to AIR, and system testing with benchmarks is necessary to determine the application-specific superiority of either method. This work should provide insight into the strengths and limitations of each method, helping researchers and practitioners in the field of STARKs make informed design choices.
FESTA: Fast Encryption from Supersingular Torsion Attacks
We introduce FESTA, an efficient isogeny-based public-key encryption (PKE) protocol based on a constructive application of the SIDH attacks.
At its core, FESTA is based on a novel trapdoor function, which uses an improved version of the techniques proposed in the SIDH attacks to develop a trapdoor mechanism. Using standard transformations, we construct an efficient PKE that is IND-CCA secure in the QROM. Additionally, using a different transformation, we obtain the first isogeny-based PKE that is IND-CCA secure in the standard model.
Lastly, we propose a method to efficiently find parameters for FESTA, and we develop a proof-of-concept implementation of the protocol. We expect FESTA to offer practical performance that is competitive with existing isogeny-based constructions.
Exploring Decryption Failures of BIKE: New Class of Weak Keys and Key Recovery Attacks
Code-based cryptography has received a lot of attention recently because it is believed to remain secure against quantum computers. Among code-based schemes, QC-MDPC based schemes are among the most promising due to their excellent performance. QC-MDPC based schemes are usually subject to a small decryption failure rate, which can leak information about the secret key. This raises two crucial problems: how to accurately estimate the decryption failure rate, and how to use the failure information to recover the secret key. Both problems are challenging due to the difficulty of geometrically characterizing the bit-flipping decoder employed in QC-MDPC schemes, e.g., in terms of a decoding radius.
In this work, we introduce the gathering property and show that it is strongly connected with the decryption failure rate of QC-MDPC. Based on the gathering property, we present two results for QC-MDPC based schemes. The first is a new construction of weak keys obtained by extending keys that have the gathering property via ring isomorphism. For the set of weak keys, we present a rigorous analysis of its probability, as well as experimental simulations of the decryption failure rates. Considering BIKE's parameter set targeting $128$-bit security, our result indicates that the average decryption failure rate is lower bounded by $DFR_{avg} \ge 2^{-122.57}$. The second is a key recovery attack against CCA secure QC-MDPC schemes using decryption failures in a multi-target setting. By decrypting ciphertexts with errors satisfying the gathering property, we show that a single decryption failure can be used to identify whether a target's secret key satisfies the gathering property. Then, using the gathering property as extra information, we present a modified information set decoding algorithm that efficiently retrieves the target's secret key. For BIKE's parameter set targeting $128$-bit security, a key recovery attack with complexity $2^{119.88}$ can be expected by using extrapolated decryption failure rates.
A note on ``faster and efficient cloud-server-aided data de-duplication scheme with an authenticated key agreement for Industrial Internet-of-Things''
We show that the data de-duplication scheme [Internet of Things, 2021(14): 100376] is flawed.
(1) There are some inconsistent notations and false equations, which should be corrected.
(2) The scheme fails to provide user anonymity, contrary to what is claimed.
(3) The scheme could fail to preserve data confidentiality.
Ou: Automating the Parallelization of Zero-Knowledge Protocols
A zero-knowledge proof (ZKP) is a powerful cryptographic primitive used in many decentralized or privacy-focused applications. However, the high overhead of ZKPs can restrict their practical applicability. We design a programming language, Ou, aimed at easing the programmer's burden when writing efficient ZKPs, and a compiler framework, Lian, that automates the analysis and distribution of statements to a computing cluster. Lian uses programming language semantics, formal methods, and combinatorial optimization to automatically partition an Ou program into efficiently sized chunks for parallel ZK-proving and/or verification.
We contribute:
• A front-end language where users can write proof statements as imperative programs in a familiar syntax;
• A compiler architecture and implementation that automatically analyzes the program and compiles it into an optimized IR that can be lifted to a variety of ZKP constructions; and
• A cutting algorithm, based on Pseudo-Boolean optimization and Integer Linear Programming, that reorders instructions and then partitions the program into efficiently sized chunks for parallel evaluation and efficient state reconciliation.
Formalizing Soundness Proofs of SNARKs
Succinct Non-interactive Arguments of Knowledge (SNARKs) have seen interest and development from the cryptographic community over recent years, and there are now constructions with very small proof size designed to work well in practice. A SNARK protocol can only be widely accepted as secure, however, if a rigorous proof of its security properties has been vetted by the community. Even then, it is sometimes the case that these security proofs are flawed, and it is then necessary for further research to identify these flaws and correct the record.
To increase the rigor of these proofs, we turn to formal methods. Focusing on the soundness aspect of a widespread class of SNARKs, we formalize proofs for six different constructions, including the well-known Groth '16. Our codebase is written in the Lean 3 theorem proving language, and uses a variety of techniques to simplify and automate these proofs as much as possible.
TandaPay Whistleblowing Communities: Shifting Workplace Culture Towards Zero-Tolerance Sexual Harassment Policies
Corporate sexual harassment policies often prioritize liability mitigation over the creation of a corporate culture free of harassment. Victims of sexual harassment are often required to report claims individually to HR. This can create an environment of self-censorship when employees feel that they cannot trust HR to act as an unbiased mediator. This problem is compounded when corporations have a culture that is tolerant of certain types of harassment. Forcing employees to report incidents to HR does nothing to address employees' fear of bias and uncertainty. This paper presents TandaPay, a decentralized grievance reporting protocol designed to address sexual harassment. TandaPay empowers whistleblowing communities to collectively approve their own harassment claims. TandaPay reduces self-censorship by allowing employees to take ownership of the reporting process, as employees no longer need to rely on HR to act as an intermediary. The protocol employs a novel method of using financial incentives to guard against collusion. This provides corporations with a guarantee that employees can only approve valid claims. Using TandaPay, corporations can give employees greater autonomy with the goal of minimizing self-censorship. This increases the reporting of incidents, enabling workers to change the corporate culture to one of respect and accountability.
Griffin: Towards Mixed Multi-Key Homomorphic Encryption
This paper presents Griffin, an extension of the mixed-scheme single-key homomorphic encryption framework Pegasus (Lu et al., IEEE S&P'21) to a Multi-Key Homomorphic Encryption (MKHE) scheme with applications to secure computation. MKHE is a generalized notion of Homomorphic Encryption (HE) that allows for operations on ciphertexts encrypted under different keys. However, an efficient approach to evaluate both polynomial and non-polynomial functions on encrypted data in MKHE has not yet been developed, hindering the deployment of HE in real-life applications. Griffin addresses this challenge by introducing a method for transforming between MKHE ciphertexts of different schemes. The practicality of Griffin is demonstrated through benchmarks with multiple applications, including the sorting of sixty-four 45-bit fixed-point numbers with a precision of 7 bits in 21 minutes, and evaluating arbitrary functions with a one-time setup communication of 1.4 GB per party and 2.34 MB per ciphertext. Moreover, Griffin can compute the maximum of two numbers in 3.2 seconds, a 2× improvement over existing MKHE approaches that rely on a single scheme.
Muckle+: End-to-End Hybrid Authenticated Key Exchanges
End-to-end authenticity in public networks plays a significant role. Namely, without authenticity, the adversary might be able to retrieve even confidential information straight away by impersonating others. Proposed solutions to establish an authenticated channel cover pre-shared key-based, password-based, and certificate-based techniques. To add confidentiality to an authenticated channel, authenticated key exchange (AKE) protocols usually have one of the three solutions built in. As an amplification, hybrid AKE (HAKE) approaches are getting more popular nowadays and were presented in several flavors to incorporate classical, post-quantum, or quantum-key-distribution components. The main benefit is redundancy, i.e., if some of the components fail, the primitive still yields a confidential and authenticated channel. However, current HAKE instantiations either rely on pre-shared keys (which yields inefficient end-to-end authenticity) or only support one or two of the three above components (resulting in reduced redundancy and flexibility).
In this work, we present an extension of a modular HAKE framework due to Dowling, Brandt Hansen, and Paterson (PQCrypto'20) that does not suffer from the above constraints. While their instantiation, dubbed Muckle, requires pre-shared keys (and hence yields inefficient end-to-end authenticity), our extended instantiation called Muckle+ utilizes post-quantum digital signatures. While replacing pre-shared keys with digital signatures is rather straightforward in general, this turned out to be surprisingly non-trivial when applied to HAKE frameworks (resulting in a significant model change with adapted proof techniques).
ScionFL: Efficient and Robust Secure Quantized Aggregation
Secure aggregation is commonly used in federated learning (FL) to alleviate privacy concerns related to the central aggregator seeing all parameter updates in the clear. Unfortunately, most existing secure aggregation schemes ignore two critical orthogonal research directions that aim to (i) significantly reduce client-server communication and (ii) mitigate the impact of malicious clients. However, both of these additional properties are essential to facilitate cross-device FL with thousands or even millions of (mobile) participants.
In this paper, we unite both research directions by introducing ScionFL, the first secure aggregation framework for FL that operates efficiently on quantized inputs and simultaneously provides robustness against malicious clients. Our framework leverages (novel) multi-party computation (MPC) techniques and supports multiple linear (1-bit) quantization schemes, including ones that utilize the randomized Hadamard transform and Kashin's representation.
Our theoretical results are supported by extensive evaluations.
We show that with no overhead for clients and moderate overhead for the server compared to transferring and processing quantized updates in plaintext, we obtain comparable accuracy for standard FL benchmarks. Moreover, we demonstrate the robustness of our framework against state-of-the-art poisoning attacks.
Stealth Key Exchange and Confined Access to the Record Protocol Data in TLS 1.3
We show how to embed a covert key exchange sub-protocol within a regular TLS 1.3 execution, generating a stealth key in addition to the regular session keys. The idea, which has appeared in the literature before, is to use the exchanged nonces to transport another key value. Our contribution is to give a rigorous model and analysis of the security of such embedded key exchanges, requiring that the stealth key remains secure even if the regular key is under adversarial control. Specifically, for our stealth version of the TLS 1.3 protocol, we show that this extra key is secure in this setting under the common assumptions about the TLS protocol.
As an application of stealth key exchange we discuss sanitizable channel protocols, where a designated party can partly access and modify payload data in a channel protocol. This may be, for instance, an intrusion detection system monitoring the incoming traffic for malicious content and putting suspicious parts in quarantine. The noteworthy feature, inherited from the stealth key exchange part, is that the sender and receiver can use the extra key to still communicate securely and covertly within the sanitizable channel, e.g., by pre-encrypting confidential parts and making only dedicated parts available to the sanitizer. We discuss how such sanitizable channels can be implemented with authenticated encryption schemes like GCM or ChaChaPoly. In combination with our stealth key exchange protocol, we thus derive a full-fledged sanitizable connection protocol, including key establishment, which perfectly complies with regular TLS 1.3 traffic on the network level. We also assess the potential effectiveness of the approach for the intrusion detection system Snort.
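The "pre-encrypting confidential parts" idea can be sketched with off-the-shelf AEAD. The snippet below uses AES-GCM from the Python `cryptography` package purely as a stand-in: the keys are generated locally rather than coming from the TLS 1.3 key schedule and the stealth key exchange, and the record framing is a toy placeholder.

```python
# Toy model of a sanitizable channel: the confidential part is first
# sealed under the stealth key, then embedded in a record sealed under
# the regular session key. A sanitizer holding only the session key can
# read/modify the record but not the covert payload. Keys and framing
# here are placeholders, not TLS 1.3 key-schedule output.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

session_key = AESGCM.generate_key(bit_length=128)
stealth_key = AESGCM.generate_key(bit_length=128)

def seal(key, plaintext):
    nonce = os.urandom(12)
    return nonce + AESGCM(key).encrypt(nonce, plaintext, None)

def open_(key, blob):
    return AESGCM(key).decrypt(blob[:12], blob[12:], None)

inner = seal(stealth_key, b"covert payload")          # sender pre-encrypts
record = seal(session_key, b"public part|" + inner)   # regular channel

visible = open_(session_key, record)                  # sanitizer's view
assert visible.startswith(b"public part|")            # payload stays sealed
print(open_(stealth_key, visible.split(b"|", 1)[1]))  # receiver recovers it
```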
Pseudorandom Correlation Functions from Variable-Density LPN, Revisited
Pseudorandom correlation functions (PCF), introduced in
the work of (Boyle et al., FOCS 2020), allow two parties to locally gen-
erate, from short correlated keys, a near-unbounded amount of pseu-
dorandom samples from a target correlation. PCF is an extremely ap-
pealing primitive in secure computation, where they allow to confine
all preprocessing phases of all future computations two parties could
want to execute to a single short interaction with low communication
and computation, followed solely by offline computations. Beyond in-
troducing the notion, Boyle et al. gave a candidate construction, using
a new variable-density variant of the learning parity with noise (LPN)
assumption. Then, to provide support for this new assumption, the au-
thors showed that it provably resists a large class of linear attacks, which
captures in particular all known attacks on LPN.
In this work, we revisit the analysis of the VDLPN assumption. We make
two key contributions:
– First, we observe that the analysis of Boyle et al is purely asymp-
totic: they do not lead to any concrete and efficient PCF instanti-
ation within the bounds that offer security guarantees. To improve
this state of affairs, we combine a new variant of a VDLPN assump-
tion with an entirely new, much tighter security analysis, which we
further tighten using extensive computer simulations to optimize pa-
rameters. This way, we manage to obtain for the first time a set of
provable usable parameters (under a simple combinatorial conjec-
ture which is easy to verify experimentally), leading to a concretely
efficient PCF resisting all linear tests.
– Second, we identify a flaw in the security analysis of Boyle et al.,
which invalidates their proof that VDLPN resists linear attacks. Us-
ing several new non-trivial arguments, we repair the proof and fully
demonstrate that VDLPN resists linear attack; our new analysis is
more involved than the original (flawed) analysis.
Our parameters set leads to PCFs with keys around 3MB allowing ∼
500 evaluations per second on one core of a standard laptop for 110
bits of security; these numbers can be improved to 350kB keys and ∼
3950 evaluations/s using a more aggressive all-prefix variant. All numbers
are quite tight: only within a factor 3 of the best bounds one could
heuristically hope for.
FinTracer: A privacy-preserving mechanism for tracing electronic money
Information sharing between financial institutions can uncover complex financial crimes such as money laundering and fraud. However, such information sharing is often not possible due to privacy and commercial considerations, and criminals can exploit this intelligence gap in order to hide their activities by distributing them between institutions, a form of the practice known as ``layering''.
We describe an algorithm that allows financial intelligence analysts to trace the flow of funds in suspicious transactions across financial institutions, without impinging on the privacy of uninvolved individuals and without breaching the tipping-off offence provisions that apply between financial institutions. The algorithm is lightweight, allowing it to work even at nation scale, as well as to be used as a building block in the construction of more sophisticated algorithms for the detection of complex crime typologies within financial data. We demonstrate the algorithm's scalability with timing measurements taken on a full-sized deployment.
Collatz Computation Sequence for Sufficient Large Integers is Random
The main results in the paper are as follows. (1) We randomly select extremely large integers and verify whether they return to 1. The largest integer verified has a length of 6000000 bits, overwhelmingly larger than the integers verified so far (e.g., 128 bits), and its Collatz computation sequence, consisting of 28911397 symbols `I' and `O', was obtained on an ordinary laptop. (2) We propose a dedicated algorithm that can compute 3x+1 for extremely large integers at the million-bit scale, by replacing multiplication with bit addition, and further by using only logical condition judgements. (3) We discover that the ratio of the count of `O' to the count of `I' in a computation sequence goes to 1 asymptotically as the starting integer grows. (4) We further discover that once the length of the starting integer is sufficiently large, e.g., 500000 bits, the corresponding computation sequence (in which `I' is replaced with 1 and `O' is replaced with 0) presents sufficient randomness as a bit sequence. We first obtain the computation sequences of randomly selected integers of L bits, where L is 500000, 1000000, 2000000, 3000000, 4000000, 5000000, or 6000000, using our proposed algorithm for extremely large integers. We evaluate the randomness of all computation sequences by both NIST SP 800-22 and GM/T 0005-2021. All sequences pass the tests, and the larger the integer, the better the randomness. (5) We thus propose an algorithm for a random bit sequence generator that uses only logical judgements (e.g., logic gates) in fewer than 100 lines of ANSI C. The throughput of the generator is about 625.693 bits/s on an ordinary laptop with an Intel Core i7 CPU (1.8GHz).
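As a point of reference, the computation sequence itself is easy to reproduce with Python's built-in bignums; the paper's contribution is a bit-level algorithm that avoids multiplication entirely, which is not reproduced here. We also assume `I' records an odd (3x+1) step and `O' an even (halving) step, a convention the abstract does not spell out.

```python
# Toy reproduction of a Collatz "computation sequence" with bignums.
# Assumption: 'I' marks an odd (3x+1) step, 'O' an even (halving) step.
import random

def collatz_sequence(x):
    seq = []
    while x != 1:
        if x & 1:
            x = 3 * x + 1
            seq.append("I")
        else:
            x >>= 1
            seq.append("O")
    return "".join(seq)

x = random.getrandbits(4096) | (1 << 4095) | 1   # random 4096-bit odd start
s = collatz_sequence(x)
print(len(s), s.count("O") / s.count("I"))       # 'O'/'I' ratio of the run
```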
Efficient FHE-based Privacy-Enhanced Neural Network for AI-as-a-Service
AI-as-a-Service has emerged as an important trend for supporting the growth of the digital economy. Digital service providers make use of their vast amounts of user data to train AI models (such as image recognition, financial modelling, and pandemic modelling) and offer them as a service on the cloud. While there are convincing advantages to using such third-party models, the fact that users need to upload their data to the cloud is bound to raise serious privacy concerns, especially in the face of increasingly stringent privacy regulations and legislation.

To promote the adoption of AI-as-a-Service while addressing the privacy issues, we propose a practical approach for constructing privacy-enhanced neural networks by designing an efficient implementation of fully homomorphic encryption (FHE). With this approach, an existing neural network can be converted to process FHE-encrypted data and produce encrypted output that is accessible only to the model users, and, more importantly, within an operationally acceptable time (e.g., within 1 second for facial recognition in typical border control systems). Experimental results show that in many practical tasks such as facial recognition, text classification, and so on, we obtain state-of-the-art inference accuracy in less than one second on a 16-core CPU.
A Note on ``Secure Multifactor Authenticated Key Agreement Scheme for Industrial IoT''
We remark that the key agreement scheme [IEEE Internet Things J., 8(5), 2021, 3801--3811] is flawed. (1) It is insecure against internal attack, because any unauthorized sensing device (not revoked) can retrieve the final session key. (2) It could be insecure against external attack.
Fast and Accurate: Efficient Full-Domain Functional Bootstrap and Digit Decomposition for Homomorphic Computation
The functional bootstrap in FHEW/TFHE allows for fast table lookups on ciphertexts and is a powerful tool for privacy-preserving computations. However, the functional bootstrap suffers from two limitations: the negacyclic constraint of the lookup table (LUT) and the limited ability to evaluate large-precision LUTs. To overcome the first limitation, several full-domain functional bootstraps (FDFB) have been developed, enabling the evaluation of arbitrary LUTs. Meanwhile, algorithms based on homomorphic digit decomposition have been proposed to address the second limitation. Although these algorithms provide effective solutions, they are yet to be optimized. This paper presents four new FDFB algorithms and two new homomorphic decomposition algorithms that improve the state-of-the-art. Our FDFB algorithms reduce the output noise, thus allowing for more efficient and compact parameter selection. Across all parameter settings, our algorithms reduce the runtime by up to $39.2\%$. Our homomorphic decomposition algorithms also run at 2.0x and 1.5x the speed of prior algorithms. We have implemented and benchmarked all previous FDFB and homomorphic decomposition algorithms and our methods in OpenFHE.
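The negacyclic constraint mentioned above comes from the ring $\mathbb{Z}[X]/(X^N+1)$ in which the test polynomial lives: since $X^N = -1$, a single plain functional bootstrap can only realize tables with $f(x+N) = -f(x)$ on a domain of size $2N$. The integer-only model below (toy $N$ and an illustrative, non-negacyclic table) shows what goes wrong and why FDFB algorithms are needed.

```python
# Integer-only model (no encryption) of the negacyclic LUT constraint in
# TFHE-style functional bootstrapping: X^N = -1 in Z[X]/(X^N + 1) forces
# the realized table to satisfy f(x + N) = -f(x) on a domain of size 2N.
N = 8
table = [x * x % 17 for x in range(N)]   # desired LUT f(x) = x^2 mod 17

def negacyclic_lookup(i):
    """Value a single plain functional bootstrap returns at index i."""
    return table[i] if i < N else -table[i - N]

for i in range(2 * N):
    want = i * i % 17
    got = negacyclic_lookup(i)
    print(i, want, got)   # the upper half returns -f(i - N), not f(i),
                          # because x^2 mod 17 is not negacyclic: FDFB
                          # techniques exist to evaluate such LUTs anyway
```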
Improved Distributed RSA Key Generation Using the Miller-Rabin Test
Secure distributed generation of RSA moduli (e.g., generating $N=pq$ where none of the parties learns anything about $p$ or $q$) is an important cryptographic task that is needed both in threshold implementations of RSA-based cryptosystems and in other, advanced cryptographic protocols that assume that all the parties have access to a trusted RSA modulus. In this paper, we provide a novel protocol for secure distributed RSA key generation based on the Miller-Rabin test. Compared with the more commonly used Boneh-Franklin test (which requires many iterations), the Miller-Rabin test has the advantage of providing negligible error even after a single iteration of the test for large enough moduli (e.g., $4096$ bits).
From a technical point of view, our main contribution is a novel divisibility test which allows performing the primality test efficiently while keeping $p$ and $q$ secret.
Our semi-honest RSA generation protocol uses any underlying secure multiplication protocol in a black-box way, and can therefore be instantiated in either the honest or dishonest majority setting based on the chosen multiplication protocol. Our semi-honest protocol can be upgraded to protect against active adversaries at low cost using existing compilers.
Finally, we provide an experimental evaluation showing that for the honest majority case, our protocol is much faster than Boneh-Franklin.
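For reference, the underlying (local, non-distributed) Miller-Rabin test looks as follows; the paper's protocol evaluates this test on a secret-shared candidate, which is the part not shown here.

```python
# Standard Miller-Rabin primality test, shown locally for reference; the
# paper's contribution is running it securely on secret-shared p and q.
import random

def miller_rabin(n, rounds=1):
    if n < 4 or n % 2 == 0:
        return n in (2, 3)
    d, r = n - 1, 0
    while d % 2 == 0:
        d, r = d // 2, r + 1            # write n - 1 = d * 2^r, d odd
    for _ in range(rounds):
        x = pow(random.randrange(2, n - 1), d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False                 # witness of compositeness found
    return True                          # probably prime

print(miller_rabin((1 << 89) - 1))       # a Mersenne prime: True
```

For moduli of RSA size, even `rounds=1` already gives a very small error probability, which is the property the protocol exploits.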
Privacy-Preserving Regular Expression Matching using Nondeterministic Finite Automata
Motivated by the privacy requirements in network intrusion detection and DNS policy checking, we have developed a suite of protocols and algorithms for regular expression matching with enhanced privacy:
- A new regular expression matching algorithm that is oblivious to the input strings, whose complexity is only $O(mn)$, where $m$ and $n$ are the lengths of the string and the regular expression, respectively. This is achieved by exploiting the structure of the Thompson nondeterministic finite automaton (sketched in the example below).
- A zero-knowledge proof of regular expression pattern matching in which a prover generates a proof to demonstrate that a public regular expression matches her input string without revealing the string itself.
- Two secure-regex protocols that ensure the privacy of both the string and the regular expression. The first protocol is based on an oblivious stack and reduces the complexity of the state of the art from $O(mn^2)$ to $O(mn\log n)$. The second protocol relies on oblivious transfer and performs better empirically when the size of the regular expression is smaller than $2^{12}$.
We also evaluated our protocols in the context of encrypted DNS policy checking and intrusion detection and achieved 4.5X improvements over the state-of-the-art. These results also indicate the practicality of our approach in real-world applications.
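To see where the $O(mn)$ shape comes from, here is a plain (non-oblivious, hard-coded) simulation of a Thompson-style NFA: the matcher keeps a set of at most $n$ active states and updates it once per input character. The paper's protocols evaluate this kind of simulation obliviously; that layer is omitted here, and the example automaton (for $(a|b)^*abb$, with epsilon transitions pre-eliminated) is illustrative.

```python
# Plain NFA simulation: O(m * n) work for string length m and automaton
# size n. The oblivious/zero-knowledge layers of the paper are omitted.
NFA = {  # state -> {symbol: successor states}, recognizing (a|b)*abb
    0: {"a": {0, 1}, "b": {0}},
    1: {"b": {2}},
    2: {"b": {3}},
    3: {},
}
ACCEPT = {3}

def matches(s):
    active = {0}
    for ch in s:                          # m steps ...
        nxt = set()
        for q in active:                  # ... each touching <= n states
            nxt |= NFA[q].get(ch, set())
        active = nxt
    return bool(active & ACCEPT)

print(matches("abababb"), matches("abba"))   # True False
```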
PELTA -- Shielding Multiparty-FHE against Malicious Adversaries
Multiparty fully homomorphic encryption (MFHE) schemes enable multiple parties to efficiently compute functions on their sensitive data while retaining confidentiality. However, existing MFHE schemes guarantee data confidentiality and the correctness of the computation result only against honest-but-curious adversaries. In this work, we provide the first practical construction that enables the verification of MFHE operations in zero-knowledge, protecting MFHE from malicious adversaries. Our solution relies on a combination of lattice-based commitment schemes and proof systems which we adapt to support both modern FHE schemes and their implementation optimizations. We implement our construction in PELTA. Our experimental evaluation shows that PELTA is one to two orders of magnitude faster than existing techniques in the literature.
Hardware-Accelerated Encrypted Execution of General-Purpose Applications
Fully Homomorphic Encryption (FHE) is a cryptographic method that guarantees the privacy and security of user data during computation. FHE algorithms can perform unlimited arithmetic computations directly on encrypted data without decrypting it. Thus, even when processed by untrusted systems, confidential data is never exposed. In this work, we develop new techniques for accelerated encrypted execution and demonstrate the significant performance advantages of our approach. Our current focus is the Fully Homomorphic Encryption over the Torus (CGGI) scheme, which is a current state-of-the-art method for evaluating arbitrary functions in the encrypted domain. CGGI represents a computation as a graph of homomorphic logic gates and each individual bit of the plaintext is transformed into a polynomial in the encrypted domain. Arithmetic on such data becomes very expensive: operations on bits become operations on entire polynomials. Therefore, evaluating even relatively simple nonlinear functions with the CGGI cryptosystem, such as a sigmoid, can take thousands of seconds on a single CPU thread. Using our novel framework for end-to-end accelerated encrypted execution called ArctyrEX, developers with no knowledge of complex FHE libraries can simply describe their computation as a C program that is evaluated 18x faster on average relative to the GPU-accelerated Concrete library for multiplication-intensive benchmarks.
A Direct Key Recovery Attack on SIDH
We present an attack on SIDH utilising isogenies between polarized products of two supersingular elliptic curves. In the case of arbitrary starting curve, our attack (discovered independently from [CD22]) has subexponential complexity, thus significantly reducing the security of SIDH and SIKE. When the endomorphism ring of the starting curve is known, our attack (here derived from [CD22]) has polynomial-time complexity assuming the generalised Riemann hypothesis. Our attack applies to any isogeny-based cryptosystem that publishes the images of points under the secret isogeny, for example SÉTA and B-SIDH. It does not apply to CSIDH, CSI-FiSh, or SQISign.
OPRFs from Isogenies: Designs and Analysis
Oblivious Pseudorandom Functions (OPRFs) are an elementary building block in cryptographic and privacy-preserving applications. However, while there are numerous pre-quantum secure OPRF constructions, few options exist in a post-quantum secure setting, and of those even fewer are practical for modern-day applications. In this work, we focus on isogeny group actions, as the associated low bandwidth leads to efficient constructions. Our results focus on the Naor-Reingold OPRF. We introduce OPUS, a novel OPRF from isogenies without oblivious transfer, and show efficient evaluations of the Naor-Reingold PRF using CSIDH and CSI-FiSh. Additionally, we analyze a previous proposal of a CSIDH-based OPRF and show that the straightforward instantiation of the protocol leaks the server's private key. As a result, we propose mitigations to address those shortcomings, which require additional hardness assumptions. Our results yield a very competitive protocol when combined with lattice-based oblivious transfer.
Our comparison against the state of the art shows that OPUS and the repaired generic construction are competitive with other proposals in terms of speed and communication size. More concretely, OPUS achieves almost two orders of magnitude less communication overhead than the next-best lattice-based OPRF, at the cost of higher latency and higher computational cost, as does the repaired construction. Finally, we demonstrate the efficiency of OPUS and the generic NR-OT in two use cases: first, we instantiate OPAQUE, a protocol for asymmetric authenticated key exchange. Compared to classical elliptic curve cryptography, this results in less than 100× longer computation on average and around 1000× more communication overhead. Second, we perform an unbalanced private set intersection and show that the communication overhead can be roughly the same when using isogenies or elliptic curves, at the cost of much higher runtime. Conversely, for sets of size $2^{10}$, we report a runtime around 200× slower than the elliptic-curve PSI. This concretizes the overhead of performing PSI and using OPAQUE with isogenies for the first time.
Classification of All $t$-Resilient Boolean Functions with $t+4$ Variables
We apply Siegenthaler's construction, along with several techniques, to classify all $(n-4)$-resilient Boolean functions with $n$ variables, for all values of $n \geq 4$, up to extended variable-permutation equivalence. We show that, up to this equivalence, there are only 761 functions for any $n$ larger than or equal to 10, and for smaller values of $n$, i.e., for $n$ increasing from 4 to 9, there are 58, 256, 578, 720, 754, and 760 functions, respectively. Furthermore, we classify all 1-resilient 6-variable Boolean functions and show that there are 1035596784 such functions up to extended variable-permutation equivalence.
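For small $n$, $t$-resilience is easy to test directly via the Xiao-Massey criterion: $f$ is $t$-resilient exactly when its Walsh transform vanishes on every mask of Hamming weight at most $t$. A brute-force sketch, with an illustrative linear example function rather than one of the classified ones:

```python
# Brute-force resiliency check via the Walsh transform (Xiao--Massey):
# f is t-resilient iff W_f(a) = sum_x (-1)^{f(x) + <a,x>} = 0 for all
# masks a with Hamming weight <= t (weight 0 is balancedness).
def walsh(f, n, a):
    return sum((-1) ** (f(x) ^ (bin(a & x).count("1") % 2))
               for x in range(2 ** n))

def resiliency(f, n):
    t = -1
    while t + 1 <= n and all(walsh(f, n, a) == 0
                             for a in range(2 ** n)
                             if bin(a).count("1") <= t + 1):
        t += 1
    return t

# Example: the parity function on 4 variables is 3-resilient.
f = lambda x: bin(x).count("1") % 2
print(resiliency(f, 4))   # 3
```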
Padding-based forgeries in the mode XOCB
In this note, we identify a minor flaw in the design of the XOCB mode, presented at Eurocrypt '23. This vulnerability enables trivial tag forgeries and arises from the padding applied to messages. We examine the security proof and pinpoint the presence of the flaw within it. Furthermore, we propose a simple fix for this issue, drawing upon the features of OCB3, and discuss the implications of this modification on the proof of security.
Multi-Armed SPHINCS+
Hash-based signatures are a type of digital signature algorithm positioned as one of the most solid quantum-resistant constructions. As an example, SPHINCS+ has been selected as a standard during the NIST Post-Quantum Cryptography competition. However, hash-based signatures suffer from two main drawbacks: signature size and a slow signing process. In this work, we give a solution to the latter when the scheme is used on a mobile device. We take advantage of the fact that hash-based signatures are highly parallelizable. More precisely, we provide an implementation of SPHINCS+ on the Snapdragon 865 Mobile Platform taking advantage of its eight CPUs and their vector extensions. Our implementation shows that it is possible to achieve a speed-up of 15 times compared to a purely sequential and non-vectorized implementation. Furthermore, we evaluate the performance impact of side-channel protection using vector extensions in the SPHINCS+ version based on SHAKE.
Cassiopeia: Practical On-Chain Witness Encryption
Witness Encryption is a holy grail of cryptography that remains elusive. It asks that a secret is only revealed when a particular computational problem is solved. Modern smart contracts and blockchains make assumptions of “honest majority”, which allow for a social implementation of Witness Encryption. The core idea is to make use of a partially trusted committee to carry out the responsibilities mandated by these functionalities – such as keeping the secret private, and then releasing it publicly after a solution to the computational puzzle is presented. We implement Witness Encryption (with public witness security) in the form of an open source smart contract that can be utilized as an oracle by others within the broader DeFi ecosystem. We devise a cryptoeconomic scheme to incentivize honest participation, and analyze its security under the honest majority and rational majority settings. We conclude by measuring and optimizing gas costs and illustrating the practicality of our scheme.
Polynomial Hashing over Prime Order Fields
This paper makes a comprehensive study of two important strategies for polynomial hashing over a prime order field $\mathbb{F}_p$, namely usual polynomial based hashing and hashing based on Bernstein-Rabin-Winograd (BRW) polynomials, and the various ways to combine them. Several hash functions are proposed and upper bounds on their differential probabilities are derived. Concrete instantiations are provided for the primes $p=2^{127}-1$ and $p=2^{130}-5$. A major contribution of the paper is an extensive 64-bit implementation of all the proposed hash functions in assembly targeted at modern Intel processors. The timing results suggest that using the prime $2^{127}-1$ is significantly faster than using the prime $2^{130}-5$. Further, a judicious mix of the usual polynomial based hashing and BRW-polynomial based hashing can provide a significantly faster alternative to only usual polynomial based hashing. In particular, the timing results of our implementations show that our final hash function proposal for the prime $2^{127}-1$ is much faster than the well known Poly1305 hash function defined over the prime $2^{130}-5$, achieving speed improvements of up to 40%.
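For orientation, the ``usual polynomial based hashing'' in question is the classical polynomial-evaluation hash computed by Horner's rule; a schoolbook sketch over $p = 2^{127}-1$ follows. The paper's actual contributions, the BRW combinations and the optimized 64-bit assembly, are well beyond this reference code.

```python
# Schoolbook polynomial hash over F_p with the Mersenne prime
# p = 2^127 - 1: for blocks m_1..m_n and key r, the digest is
# m_1*r^n + m_2*r^(n-1) + ... + m_n*r mod p, computed by Horner's rule.
import os

P = (1 << 127) - 1

def poly_hash(blocks, r):
    acc = 0
    for m in blocks:
        acc = (acc + m) * r % P     # Horner step: acc <- (acc + m_i) * r
    return acc

key = int.from_bytes(os.urandom(16), "big") % P
blocks = [int.from_bytes(os.urandom(15), "big") for _ in range(4)]  # < p
print(hex(poly_hash(blocks, key)))
```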
From Substitution Box To Threshold
With the escalating demand for lightweight ciphers, as well as for side-channel protected implementations of those ciphers, this work focuses on two related aspects. First, we present a tool for automating the task of finding a Threshold Implementation (TI) of a given Substitution Box (SBox). Our tool returns both `with decomposition' and `without decomposition' based TIs. The `with decomposition' based implementation returns a combinational SBox, whereas we get a sequential SBox from the `without decomposition' based implementation. Despite being in high demand, it appears that this kind of tool has been missing so far. In the process, we report a new decomposition of the PRESENT SBox (improving on Poschmann et al.'s JoC'11 paper) and of the GIFT SBox (improving on Jati et al.'s TIFS'20 paper). Second, we show an algorithmic approach where a given cipher implementation can be tweaked (without altering the cipher specification) so that its TI cost can be significantly reduced. We take the PRESENT cipher as our case study (our methodology can be applied to other ciphers as well). Indeed, we show over 31 percent reduction in area and over 52 percent reduction in depth compared to the basic threshold implementation.
Batch Inference on Deep Convolutional Neural Networks With Fully Homomorphic Encryption Using Channel-By-Channel Convolutions
Secure Machine Learning as a Service (MLaaS) is a viable solution for clients who seek secure delegation of ML computation while protecting sensitive data. We propose an efficient method to securely evaluate deep standard convolutional neural networks based on the residue number system variant of the Cheon-Kim-Kim-Song (RNS-CKKS) scheme in the manner of batch inference. In particular, we introduce a packing method called Channel-By-Channel Packing that maximizes slot compactness and Single-Instruction-Multiple-Data (SIMD) capabilities in ciphertexts. We also propose a new method for homomorphic convolution evaluation called Channel-By-Channel Convolution, which minimizes the additional heavy operations during convolution layers.
Simulation results show that our work improves the amortized inference runtime by factors of $5.04$ and $5.20$ for ResNet-20 and ResNet-110, respectively, compared to previous results. We note that our results closely match the original backbone models, with classification accuracy differing from the backbone by at most $0.02$%p. Furthermore, we show that the size of the rotation keys generated and transmitted by the client can be reduced from $105.6$GB to $6.91$GB for ResNet models in an MLaaS scenario. Finally, we show that our method can be combined with previous methods, providing flexibility for selecting batch sizes for inference.
Optimization of Functional Bootstrap with Large LUT and Packing Key Switching
Homomorphic encryption can perform calculations on encrypted data, which protects the privacy of the data while it is in use. The functional bootstrap algorithm proposed by I. Chillotti et al. can compute arbitrary functions, represented as lookup tables, during bootstrapping, but the computational efficiency of functional bootstraps with large lookup tables or highly precise functions is not high enough. To tackle this issue, we propose a new Tree-BML algorithm. Our Tree-BML algorithm accelerates the computation of functional bootstraps with large LUTs by compressing more LWE ciphertexts into a TRLWE ciphertext and utilizing the PBSmanyLUT algorithm proposed by I. Chillotti et al. in 2021. The Tree-BML algorithm reduces the running time of LUT computation by 72.09% compared to the Tree-based method (Antonio Guimarães et al., 2021). Additionally, we introduce a new TLWE-to-TRLWE Packing Key Switching algorithm which reduces the storage space and communication overhead of the homomorphic encryption algorithm by generating only one key-switching key ciphertext for polynomials that share the same non-zero coefficient values with those values merely located in different slots. Our algorithm reduces the key-switching key size by 75% compared to the Base-aware TLWE-to-TRLWE Key Switching algorithm. Finally, we show, through both theoretical analysis and experimental results, that our algorithms do not introduce new output error.
Proximity Testing with Logarithmic Randomness
A fundamental result dating to Ligero (Des. Codes Cryptogr. '23) establishes that each fixed linear block code exhibits proximity gaps with respect to the collection of affine subspaces, in the sense that each given subspace either resides entirely close to the code, or else contains only a small portion which resides close to the code. In particular, any given subspace's failure to reside entirely close to the code is necessarily witnessed, with high probability, by a uniformly randomly sampled element of that subspace. We investigate a variant of this phenomenon in which the witness is not sampled uniformly from the subspace, but rather from a much smaller subset of it. We show that a logarithmic number of random field elements (in the dimension of the subspace) suffice to effect an analogous proximity test, with moreover only a logarithmic (multiplicative) loss in the possible prevalence of false witnesses. We discuss applications to recent noninteractive proofs based on linear codes, including Brakedown (CRYPTO '23).
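The following toy sketch reflects our reading of the batching step (the paper itself should be consulted for the exact statement): instead of combining $2^k$ rows with $2^k$ independent random coefficients, one uses the $2^k$-entry tensor expansion of just $k$ random field elements, and the combined word is then subjected to the usual proximity test.

```python
# Toy illustration (our reading, not the paper's code): combine 2^k rows
# with the tensor expansion (1, r_k) x ... x (1, r_1) of k random field
# elements. Rows here are Reed-Solomon codewords over a toy field F_97.
import random

P, DOMAIN, DEG = 97, list(range(12)), 3

def rs_codeword():
    coeffs = [random.randrange(P) for _ in range(DEG + 1)]
    return [sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            for x in DOMAIN]

def tensor(rs):
    """Expand k field elements into 2^k combination coefficients."""
    out = [1]
    for r in rs:
        out = out + [c * r % P for c in out]
    return out

k = 2
rows = [rs_codeword() for _ in range(2 ** k)]     # all rows in the code
coeffs = tensor([random.randrange(P) for _ in range(k)])
combo = [sum(c * row[j] for c, row in zip(coeffs, rows)) % P
         for j in range(len(DOMAIN))]
print(combo)  # a codeword, by linearity; a far-from-code row would make
              # the combination far from the code with high probability
```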
Publicly Auditable Functional Encryption
We introduce the notion of publicly auditable functional encryption (PAFE).
Compared to standard functional encryption, PAFE operates in an extended setting that includes an entity called the auditor, besides the key-generating authority, the encryptor, and the decryptor.
The auditor requests function outputs from the decryptor and wishes to check their correctness with respect to the ciphertexts produced by the encryptor, without having access to the functional secret key that is used for decryption.
This is in contrast with previous approaches for result verifiability and consistency in functional encryption, which aim to assure decryptors of the legitimacy of the results they decrypt. We propose four different flavors of public auditability with respect to different sets of adversarially controlled parties (only decryptor, encryptor-decryptor, authority-decryptor, and authority-encryptor-decryptor) and provide constructions for building corresponding secure PAFE schemes from standard functional encryption, commitment schemes, and non-interactive witness-indistinguishable proof systems.
At the core of our constructions lies the notion of a functional public key, that works as the public analog of the functional secret key of functional encryption and is used for verification purposes by the auditor.
Crucially, in order to ensure that these new keys cannot be used to infer additional information about plaintext values (besides the requested decryptions by the auditor), we propose a new indistinguishability-based security definition for PAFE to accommodate not only functional secret key queries (as in standard functional encryption) but also functional public key and decryption queries.
Finally, we propose a publicly auditable multi-input functional encryption (MIFE) scheme that supports inner-product functionalities and is secure against adversarial decryptors. Instantiated with an existing MIFE scheme using ``El Gamal''-like ciphertexts and Σ-protocols, this gives a lightweight publicly auditable scheme.
SEC: Symmetric Encrypted Computation via Fast Look-ups
Encrypted computation allows a client to securely outsource the storage and processing of sensitive private data to an untrusted third party cloud server. Fully Homomorphic Encryption (FHE) and Garbled Circuit (GC) are state-of-the-art general-purpose primitives that support encrypted computation. FHE enables arbitrary encrypted computation ensuring data-privacy but suffers from huge computation overhead and poor scalability. GC additionally provides function privacy, but is often not suitable for practical deployment because its translation tables need to be refreshed for every evaluation. We extend Searchable Symmetric Encryption (SSE), beyond encrypted searches, to perform arbitrary Boolean circuit evaluations over symmetrically encrypted data via look-ups, while ensuring data privacy.
In this work, we propose Symmetric Encrypted Computation (SEC), the first practically efficient and provably secure lookup-based construction that supports evaluation of arbitrary Boolean circuits over symmetrically encrypted data, while ensuring data-privacy guarantees. With a single setup of encrypted look-up tables, SEC supports $O(n!)$ re-evaluations of a circuit of size $n$. This is an improvement on GC, which supports only a single evaluation per setup. While SEC supports bounded re-evaluations with a single setup (unlike FHE), the bound is asymptotically large enough to scale to practical deployments of realistic circuits. Moreover, SEC completely bypasses bootstrapping, thereby significantly improving performance efficiency. SEC achieves this by relying purely on symmetric-key crypto primitives, extending and generalizing the functional capabilities of SSE while inheriting its desirable performance benefits. The leakages incurred by the underlying SSE scheme are rendered inconsequential with respect to SEC's security guarantees due to its meticulous design choices.
We provide a concrete construction of SEC and analyze its security with respect to a rigorous leakage profile. We also experimentally validate its practical efficiency. SEC outperforms state-of-the-art FHE schemes (such as Torus FHE) substantially, with around a $1000\times$ speed-up in basic Boolean gate evaluations. We further showcase the scalability of SEC for functions with multi-bit inputs via experiments performing encrypted evaluation of the entire AES-128 circuit, as well as three max-pooling layers of the AlexNet architecture. In both sets of experiments, SEC outperforms state-of-the-art, accelerated FHE implementations by $1000\times$ in terms of processing time, while requiring $250\times$ less storage.
Conflict Checkable and Decodable Codes and Their Applications
Let $C$ be an error-correcting code of block length $n$ over an alphabet of size $q$, and assume that a possibly corrupted codeword $c$ is stored distributively among $n$ servers, with the $i$th entry held by the $i$th server. Suppose that every pair of servers publicly announces whether the corresponding coordinates are ``consistent'' with some legal codeword or ``conflicted''. What type of information about $c$ can be inferred from this consistency graph? Can we check whether errors occurred, and if so, can we find the error locations and effectively decode? We initiate the study of conflict-checkable and conflict-decodable codes and prove the following main results:
(1) (Almost-MDS conflict-checkable codes:) For every distance $d\leq n$, there exists a code that supports conflict-based error-detection whose dimension $k$ almost achieves the Singleton bound, i.e., $k\geq n-d+0.99$. Interestingly, the code is non-linear, and we give some evidence suggesting that this is inherent. Combinatorially, this yields an $n$-partite graph over $[q]^n$ that contains $q^k$ cliques of size $n$ whose pairwise intersection is at most $n-d\leq k-0.99$ vertices, generalizing a construction of Alon (Random Struct. Algorithms, '02) that achieves a similar result for the special case of triangles ($n=3$).
(2) (Conflict Decodable Codes below half-distance:) For every distance $d\leq n$ there exists a linear code that supports conflict-based error-decoding up to half of the distance. The code's dimension $k$ ``half-meets'' the Singleton bound, i.e., $k=(n-d+2)/2$, and we prove that this bound is tight for a natural class of such codes. The construction is based on symmetric bivariate polynomials and is rooted in the literature on verifiable secret sharing (Ben-Or, Goldwasser and Wigderson, STOC '88; Cramer, Damgård, and Maurer, EUROCRYPT '00); see the sketch below.
(3) (Robust Conflict Decodable Codes:) We show that the above construction also satisfies a non-trivial notion of robust decoding/detection even when the number of errors is unbounded and up to $d/2$ of the servers are Byzantine and may lie about their conflicts. The resulting conflict-decoder runs in exponential time in this case, and we present an alternative construction that achieves quasipolynomial complexity at the expense of degrading the dimension to $k=(n-d+3)/3$. Our construction is based on trilinear polynomials, and the algorithmic result follows by showing that the induced conflict graph is structured enough to allow efficient recovery of a maximal vertex cover.
As an application of the last result, we present the first polynomial-time statistical two-round Verifiable Secret Sharing (resp., three-round general MPC protocol) that remains secure in the presence of an active adversary that corrupts up to $t<n/3.001$ of the parties. We can upgrade the resiliency threshold to $n/3$, which is known to be optimal in this setting, at the expense of increasing the computational complexity to be quasipolynomial. Previous solutions (Applebaum, Kachlon, and Patra, TCC'20) suffered from an exponential-time complexity even when the adversary corrupts only $n/4$ of the parties.
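To make the conflict-graph idea concrete, here is a small Python sketch in the spirit of the symmetric-bivariate-polynomial construction from result (2): servers pairwise compare overlapping evaluations, and a server whose share was tampered with ends up in conflict with more than $t$ others. The parameters, field size, and flagging rule are illustrative simplifications, not the paper's actual codes or decoder.

```python
# Toy conflict detection with a symmetric bivariate polynomial over GF(p),
# in the spirit of BGW-style verifiable secret sharing. Hypothetical parameters.
import random

p = 101                      # small prime field GF(p)
n, t = 7, 2                  # n servers, degree <= t in each variable

# Symmetric coefficient matrix a[u][v] = a[v][u] defines F(x, y) = F(y, x).
a = [[0] * (t + 1) for _ in range(t + 1)]
for u in range(t + 1):
    for v in range(u, t + 1):
        a[u][v] = a[v][u] = random.randrange(p)

def F(x, y):
    return sum(a[u][v] * pow(x, u, p) * pow(y, v, p)
               for u in range(t + 1) for v in range(t + 1)) % p

# Server i (evaluation point i+1) stores the row polynomial f_i(x) = F(x, i+1),
# represented here by its values at all server points.
shares = [[F(j + 1, i + 1) for j in range(n)] for i in range(n)]

bad = 3                      # corrupt one server's share
shares[bad] = [random.randrange(p) for _ in range(n)]

# Pairwise consistency: servers i and j compare f_i(j+1) with f_j(i+1);
# by symmetry of F both equal F(i+1, j+1) when honest.
conflicts = [[int(shares[i][j] != shares[j][i]) for j in range(n)]
             for i in range(n)]

# A server conflicting with more than t others cannot hold a valid share.
suspects = [i for i in range(n) if sum(conflicts[i]) > t]
print("suspect servers:", suspects)   # expect [3] with high probability
```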
Sprints: Intermittent Blockchain PoW Mining
Cryptocurrencies and decentralized platforms have been rapidly gaining traction since Nakamoto's introduction of Bitcoin's blockchain protocol. Prominent systems use Proof of Work (PoW) to achieve unprecedented security for digital assets. However, the significant carbon footprint of manufacturing and operating PoW mining hardware is leading policymakers to consider stark measures against it, and various systems to explore alternatives. But these alternatives imply stepping away from key security aspects of PoW.
We present Sprints, a blockchain protocol that achieves almost the same security guarantees as PoW blockchains, but with an order-of-magnitude lower carbon footprint, at the cost of increasing the number of mining rigs by a factor of 1.27. Our conservative estimate of the environmental footprint uses common metrics, taking into account both power and hardware. To achieve this reduction, Sprints forces miners to mine intermittently: it interleaves Proof of Delay (PoD, e.g., using a Verifiable Delay Function) and PoW, where only the latter bears a significant resource expenditure. We prove that in Sprints the attacker's success probability is the same as in legacy PoW. To evaluate practical performance, we analyze the effect of the shortened PoW duration, showing a minor reduction in resilience (49% instead of 50%). We confirm these results with a full implementation using 100 patched Bitcoin clients in an emulated network.
Efficient Information-Theoretic Distributed Point Function with General Output Groups
An $n$-server information-theoretic Distributed Point Function (DPF) allows a client to secret-share a point function $f_{\alpha,\beta}(x)$ with domain $[N]$ and output group $\mathbb{G}$ among $n$ servers, such that each server learns no information about the function from its share (called a key) but can compute an additive share of $f_{\alpha,\beta}(x)$ for any $x$. DPFs with small key sizes and general output groups are preferred. In this paper, we propose a new transformation from share conversions to information-theoretic DPFs. By applying it to the share conversions from Efremenko's PIR and Dvir-Gopi PIR, we obtain both an 8-server DPF with key size $O(2^{10\sqrt{\log N\log\log N}}+\log p)$ and output group $\mathbb{Z}_p$, and a 4-server DPF with key size $O(\tau \cdot 2^{6\sqrt{\log N\log\log N}})$ and output group $\mathbb{Z}_{2^\tau}$. The former allows us to partially answer an open question by Boyle, Gilboa, Ishai, and Kolobov (ITC 2022), and the latter allows us to build the first DPFs that may take any finite Abelian group as the output group. We also discuss how to further reduce the key sizes by using different PIRs, how to reduce the number of servers by resorting to statistical security or using nice integers, and how to obtain DPFs with $t$-security. We show applications of the new DPFs by constructing new efficient PIR protocols with result verification.
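For intuition, the sketch below implements the trivial information-theoretic $n$-server DPF, in which the keys are additive secret shares of the length-$N$ vector $\beta \cdot e_\alpha$; its key size is $O(N \log p)$ per server, exactly the naive baseline that constructions like those above improve upon. All parameters are illustrative.

```python
# Trivial n-server information-theoretic DPF for f_{alpha,beta} over Z_p:
# keys are additive shares of beta * e_alpha, so any n-1 keys are uniform.
import random

def gen(n, N, p, alpha, beta):
    """Return n keys; each key is a length-N vector over Z_p."""
    keys = [[random.randrange(p) for _ in range(N)] for _ in range(n - 1)]
    last = [(-sum(k[x] for k in keys)) % p for x in range(N)]
    last[alpha] = (last[alpha] + beta) % p
    keys.append(last)
    return keys

def eval_share(key, x):
    """Server-side evaluation: an additive share of f_{alpha,beta}(x)."""
    return key[x]

n, N, p = 4, 16, 257
alpha, beta = 5, 42
keys = gen(n, N, p, alpha, beta)
for x in (alpha, 7):
    total = sum(eval_share(k, x) for k in keys) % p
    print(x, total)   # 42 at x = alpha, 0 elsewhere
```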
HAETAE: Shorter Lattice-Based Fiat-Shamir Signatures
We present HAETAE (Hyperball bimodAl modulE rejecTion signAture schemE), a new lattice-based signature scheme. Like the NIST-selected Dilithium signature scheme, HAETAE is based on the Fiat-Shamir with Aborts paradigm, but our design choices target an improved complexity/compactness compromise that is highly relevant for many space-limited application scenarios. We primarily focus on reducing signature and verification key sizes so that signatures fit into one TCP or UDP datagram while preserving a high level of security against a variety of attacks. As a result, our scheme has signature and verification key sizes up to 39% and 25% smaller, respectively, compared to Dilithium. We provide a portable, constant-time reference implementation together with an optimized implementation using AVX2 instructions and an implementation with reduced stack size for the Cortex-M4. Moreover, we describe how to efficiently protect HAETAE against implementation attacks such as side-channel analysis, making it an attractive candidate for use in IoT and other embedded systems.
Toward Practical Lattice-based Proof of Knowledge from Hint-MLWE
In the last decade, zero-knowledge proof of knowledge protocols have been extensively studied as a means to achieve active security in various cryptographic protocols. However, existing solutions simply seek zero-knowledge for both the message and the randomness, which is overkill in many applications, since a protocol may remain secure even if some information about the randomness is leaked to the adversary.
We develop this idea to improve the state-of-the-art proof of knowledge protocols for RLWE-based public-key encryption and the BDLOP commitment scheme. In a nutshell, we present new proof of knowledge protocols, without noise flooding or rejection sampling, that are provably secure under a computational hardness assumption called Hint-MLWE. We also show an efficient reduction from Hint-MLWE to the standard MLWE assumption.
Our approach enjoys the best of both worlds: it has no computational overhead from repetition (aborts) and achieves a polynomial overhead between the honest and proven languages. We support this claim by demonstrating concrete parameters and comparing them with previous results. Finally, we explain how our idea can be further applied to other proofs of knowledge providing advanced functionality.
CLAASP: a Cryptographic Library for the Automated Analysis of Symmetric Primitives
This paper introduces CLAASP, a Cryptographic Library for the Automated Analysis of Symmetric Primitives. The library is designed to be modular, extendable, easy to use, generic, efficient and fully automated. It is an extensive toolbox gathering state-of-the-art techniques aimed at simplifying the manual tasks of symmetric primitive designers and analysts. CLAASP is built on top of Sagemath and is open-source under the GPLv3 license. The central input of CLAASP is the description of a cryptographic primitive as a list of connected components in the form of a directed acyclic graph. From this representation, the library can automatically: (1) generate the Python or C code of the primitive evaluation function, (2) execute a wide range of statistical and avalanche tests on the primitive, (3) generate SAT, SMT, CP and MILP models to search, for example, differential and linear trails, (4) measure algebraic properties of the primitive, (5) test neural-based distinguishers. In this work, we also present a comprehensive survey and comparison of other software libraries aiming at similar goals as CLAASP.
On APN functions whose graphs are maximal Sidon sets
The graphs ${\cal G}_F=\{(x,F(x));\ x\in \mathbb{F}_2^n\}$ of those $(n,n)$-functions $F:\mathbb{F}_2^n\mapsto \mathbb{F}_2^n$ that are almost perfect nonlinear (in brief, APN; an important notion in symmetric cryptography) are, equivalently to their original definition by K. Nyberg, those Sidon sets (an important notion in combinatorics) $S$ in $(\mathbb{F}_2^n\times \mathbb{F}_2^n,+)$ such that, for every $x\in \mathbb{F}_2^n$, there exists a unique $y\in \mathbb{F}_2^n$ such that $(x,y)\in S$. Since any subset of a Sidon set is itself a Sidon set, an important question is to determine which Sidon sets are maximal with respect to inclusion. In this paper, we study whether the graphs of APN functions are maximal (that is, optimal) Sidon sets. We show that this question is related to the problem of the existence or non-existence of pairs of APN functions lying at distance 1 from each other, and to the related problem of the existence of APN functions of algebraic degree $n$. We revisit the conjectures that have been made on these latter problems.
ProtoStar: Generic Efficient Accumulation/Folding for Special Sound Protocols
Accumulation is a simple yet powerful primitive that enables incrementally verifiable computation (IVC) without the need for recursive SNARKs. We provide a generic, efficient accumulation (or folding) scheme for any $(2k-1)$-move special-sound protocol with a verifier that checks $\ell$ degree-$d$ equations. The accumulation verifier only performs $k+2$ elliptic curve multiplications and $k+d+O(1)$ field/hash operations. Using the compiler from BCLMS21 (Crypto 21), this enables building efficient IVC schemes where the recursive circuit only depends on the number of rounds and the verifier degree of the underlying special-sound protocol but not the proof size or the verifier time. We use our generic accumulation compiler to build ProtoStar. ProtoStar is a non-uniform IVC scheme for Plonk that supports high-degree gates and (vector) lookups. The recursive circuit is dominated by $3$ group scalar multiplications and a hash of $d^*$ field elements, where $d^*$ is the degree of the highest gate. The scheme does not require a trusted setup or pairings, and the prover does not need to compute any FFTs. The prover in each accumulation/IVC step is also only logarithmic in the number of supported circuits and independent of the table size in the lookup.
Fast Enumeration Algorithm for Multivariate Polynomials over General Finite Fields
The enumeration of all outputs of a given multivariate polynomial is a fundamental mathematical problem and is a component of some algebraic attacks on multivariate public key cryptosystems. For a degree-$d$ polynomial in $n$ variables over the finite field with $q$ elements, solving the enumeration problem classically requires $O\left(\tbinom{n+d}{d} \cdot q^n\right)$ operations. At CHES 2010, Bouillaguet et al. proposed a fast enumeration algorithm over the binary field $\mathbb{F}_2$. Their algorithm covers all inputs of a given polynomial following the order of Gray codes and completes in $O(d\cdot 2^n)$ bit operations. However, to the best of our knowledge, no result achieving equivalent efficiency over general finite fields has been proposed.
In this study, we propose a novel algorithm that enumerates all the outputs of a degree-$d$ polynomial in $n$ variables over $\mathbb{F}_q$, for a prime $q$, in $O(d\cdot q^n)$ operations. The proposed algorithm uses a lexicographic order instead of Gray codes to cover all the inputs. This result can be seen as an extension of the result of Bouillaguet et al. to general finite fields and is almost optimal in terms of time complexity. The algorithm extends naturally to the case where $q$ is a prime power. Notably, our enumeration algorithm differs from the algorithm by Bouillaguet et al. even in the case $q=2$.
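As a rough illustration of why difference-based enumeration beats repeated evaluation, the univariate sketch below enumerates all outputs of a degree-$d$ polynomial over $\mathbb{F}_q$ with only $d$ modular additions per point, using a finite-difference table. The paper's contribution is the multivariate generalization along a lexicographic order, which this toy does not reproduce.

```python
# Univariate finite-difference enumeration over F_q: after an O(d^2) setup,
# each further output of f costs only d additions mod q.
def enumerate_poly(coeffs, q):
    """Yield f(0), f(1), ..., f(q-1) for f given by coeffs (low degree first)."""
    d = len(coeffs) - 1

    def f(x):
        return sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q

    # diffs[k] = (Delta^k f)(0), where (Delta f)(x) = f(x+1) - f(x).
    table = [f(x) for x in range(d + 1)]
    diffs = []
    for _ in range(d + 1):
        diffs.append(table[0])
        table = [(table[i + 1] - table[i]) % q for i in range(len(table) - 1)]

    # Delta^d f is constant, so stepping x -> x+1 takes d additions mod q.
    for _ in range(q):
        yield diffs[0]
        for k in range(d):
            diffs[k] = (diffs[k] + diffs[k + 1]) % q

q = 7
print(list(enumerate_poly([3, 0, 5, 1], q)))              # f(x) = 3 + 5x^2 + x^3
print([(3 + 5 * x * x + x ** 3) % q for x in range(q)])   # same values, naively
```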
Hardware Acceleration of FHEW
The magic of Fully Homomorphic Encryption (FHE) is that it allows operations on encrypted data without decryption. Unfortunately, slow computation times limit its adoption. They result from the vast memory requirements (64 Kbits per ciphertext), a bootstrapping key of 1.3 GB, and sizeable computational overhead (10240 NTTs, each NTT requiring 5120 32-bit multiplications). We accelerate FHEW bootstrapping in hardware on a high-end U280 FPGA.
To reduce the computational complexity, we propose a fast hardware NTT architecture, modified from prior work, with support for negatively wrapped convolution. The IP module includes large I/O ports to the NTT accelerator and an index bit-reversal block. The total architecture requires less than 225000 LUTs and 1280 DSPs.
Assuming that a fast interface to the FHEW bootstrapping key is available, the execution speed of FHEW bootstrapping can increase by at least 7.5 times.
Last updated: 2023-05-11
Quantum Implementation of ASCON Linear Layer
In this paper, we show an in-place implementation of the ASCON linear layer. In-place implementations are important in the context of quantum computing, and we expect our work to be useful for quantum implementations of ASCON. To obtain the implementation, we first write the ASCON linear layer as a binary matrix; we then apply two legacy algorithms (Gauss-Jordan elimination and PLU factorization) as well as our modified version of Xiang et al.'s algorithm/source-code (published in ToSC/FSE'20). Our in-place implementation takes 1595 CNOT gates and has quantum depth 119; to the best of our knowledge, this is the first in-place implementation of the ASCON linear layer.
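The first step described above, writing the linear layer as a binary matrix, is easy to reproduce. The sketch below builds the $64\times 64$ GF(2) matrix of one word-wise map $x \mapsto x \oplus (x \ggg r_1) \oplus (x \ggg r_2)$ using ASCON's rotation constants and confirms invertibility by Gaussian elimination; the actual CNOT-circuit synthesis (via Gauss-Jordan, PLU, or the modified Xiang et al. method) is not reproduced here.

```python
# One word of the ASCON linear layer, x ^= (x >>> r1) ^ (x >>> r2), written
# as a 64x64 binary matrix; full rank over GF(2) means the map is invertible.
W = 64
ROTS = {0: (19, 28), 1: (61, 39), 2: (1, 6), 3: (10, 17), 4: (7, 41)}  # Sigma_i

def sigma_matrix(r1, r2):
    """Row i records the input bits XORed into output bit i (rows as ints)."""
    return [(1 << i) | (1 << ((i + r1) % W)) | (1 << ((i + r2) % W))
            for i in range(W)]

def rank_gf2(rows):
    rows, r = rows[:], 0
    for col in range(W):
        piv = next((i for i in range(r, W) if rows[i] >> col & 1), None)
        if piv is None:
            continue
        rows[r], rows[piv] = rows[piv], rows[r]
        for i in range(W):
            if i != r and rows[i] >> col & 1:
                rows[i] ^= rows[r]       # eliminate this column everywhere else
        r += 1
    return r

for word, (r1, r2) in ROTS.items():
    print(f"Sigma_{word}: rank {rank_gf2(sigma_matrix(r1, r2))} / {W}")
```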
vetKeys: How a Blockchain Can Keep Many Secrets
We propose a new cryptographic primitive called "verifiably encrypted threshold key derivation" (vetKD) that extends identity-based encryption with a decentralized way of deriving decryption keys. We show how vetKD can be leveraged on modern blockchains to build scalable decentralized applications (or "dapps") for a variety of purposes, including preventing front-running attacks on decentralized finance (DeFi) platforms, end-to-end encryption for decentralized messaging and social networks (SocialFi), cross-chain bridges, as well as advanced cryptographic primitives such as witness encryption and one-time programs that previously could only be built from secure hardware or using a trusted third party. And all of that by secret-sharing just a single secret key...
Multi-Client Inner Product Encryption: Function-Hiding Instantiations Without Random Oracles
In a Multi-Client Functional Encryption (MCFE) scheme, $n$ clients each obtain a secret encryption key from a trusted authority. During each time step $t$, each client $i$ can encrypt its data using its secret key. The authority can use its master secret key to compute a functional key given a function $f$, and the functional key can be applied to a collection of $n$ clients’ ciphertexts encrypted to the same time step, resulting in the outcome of $f$ on the clients’ data. In this paper, we focus on MCFE for inner-product computations.
If an MCFE scheme hides not only the clients’ data, but also the function $f$, we say it is function hiding. Although MCFE for inner-product computation has been extensively studied, how to achieve function privacy is still poorly understood. The very recent work of Agrawal et al. showed how to construct a function-hiding MCFE scheme for inner-product assuming standard bilinear group assumptions; however, they assume the existence of a random oracle and prove only a relaxed, selective security notion. An intriguing open question is whether we can achieve function-hiding MCFE for inner-product without random oracles.
In this work, we are the first to show a function-hiding MCFE scheme for inner products, relying on standard bilinear group assumptions. Further, we prove adaptive security without the use of a random oracle. Our scheme also achieves succinct ciphertexts, that is, each coordinate in the plaintext vector encrypts to only $O(1)$ group elements.
Our main technical contribution is a new upgrade from single-input functional encryption for inner-products to a multi-client one. Our upgrade preserves function privacy, that is, if the original single-input scheme is function-hiding, so is the resulting multi-client construction. Further, this new upgrade allows us to obtain a conceptually simple construction.
Comprehensive Preimage Security Evaluations on Rijndael-based Hashing
The Meet-in-the-Middle (MITM) attack is one of the most powerful cryptanalysis techniques, as evidenced by its use in preimage attacks on the MD4, MD5, Tiger, HAVAL, and Haraka-512 v2 hash functions and in key recovery for full-round KTANTAN. An efficient approach to constructing MITM attacks is automation, which refers to modeling MITM characteristics and objectives as constraints and using optimizers to search for the best attack configuration. This work focuses on the simplification and renovation of the most advanced superposition framework based on Mixed-Integer Linear Programming (MILP), proposed at CRYPTO 2022. With the refined automation model, this work provides the first comprehensive analysis of the preimage security of hash functions based on all versions of the Rijndael block cipher, the origin of the Advanced Encryption Standard (AES), and improves the best-known results. Specifically, this work extends the attack rounds for Rijndael 256-192 and 256-256, reduces the attack complexity for Rijndael 256-128 and 128-192 (AES192), and fills the gap in the preimage security evaluation of Rijndael versions with a block size of 192 bits.
Computational Quantum Secret Sharing
Quantum secret sharing (QSS) allows a dealer to distribute a secret quantum state among a set of parties in such a way that certain authorized subsets can reconstruct the secret, while unauthorized subsets obtain no information about it. Previous works on QSS for general access structures focused solely on the existence of perfectly secure schemes, and the share size of the known schemes is necessarily exponential even in cases where the access structure is computed by polynomial size monotone circuits. This stands in stark contrast to the classical setting, where polynomial-time computationally-secure secret sharing schemes have been long known for all access structures computed by polynomial-size monotone circuits under standard hardness assumptions, and one can even obtain shares which are much shorter than the secret (which is impossible with perfect security).
While QSS was introduced over twenty years ago, previous works considered only information-theoretic privacy. In this work, we initiate the study of computationally-secure QSS and show that computational assumptions help significantly in building QSS schemes, just as in the classical case. We present a simple compiler and use it to obtain a large variety of results: We construct polynomial-time computationally-secure QSS schemes under standard hardness assumptions for a rich class of access structures. This includes many access structures for which previous results in QSS necessarily required exponential share size. In fact, we can go even further: We construct QSS schemes for which the size of the quantum shares is significantly smaller than the size of the secret. As in the classical setting, this is impossible with perfect security.
We also apply our compiler to obtain results beyond computational QSS.
In the information-theoretic setting, we improve the share size of perfect QSS schemes for a large class of $n$-party access structures to $1.5^{n+o(n)}$, improving upon the best known schemes and matching the best known result for general access structures in the classical setting. Finally, among other things, we study the class of access structures that can be efficiently implemented when the quantum secret sharing scheme has access to a given number of copies of the secret, including all such functions in $\mathsf{P}$ and $\mathsf{NP}$.
Cryptanalysis of SPEEDY
SPEEDY is a family of ultra-lightweight block ciphers designed by Leander et al. at CHES 2021. There are three recommended variants, denoted SPEEDY-$r$-192 with $r \in \{5,6,7\}$. All of them have a 192-bit block and a 192-bit key. The main focus of the design is hardware-aware low latency, so whether SPEEDY provides enough security is worth studying. Recently, the full-round security of SPEEDY-7-192 was broken by Boura et al. at EUROCRYPT 2023 in the chosen-ciphertext setting, where a round-reduced attack on SPEEDY-6-192 was also proposed. However, no valid attack on SPEEDY-5-192 was given, due to its more restricted security parameters. Up to now, the best key recovery attack on this variant covers only 3 rounds, proposed by Rohit et al. at AFRICACRYPT 2022. In this paper, we give three full-round attacks on SPEEDY-7-192. Using a divide-and-conquer strategy and other newly proposed techniques, we find a 5.5-round differential distinguisher which can be used to mount the first chosen-plaintext full-round key recovery attack. With a similar strategy, we also find a 5-round linear distinguisher which leads to the first full-round attack in the known-plaintext setting. Meanwhile, the 5.5-round differential distinguisher also helps us slightly improve the full-round attack in the chosen-ciphertext setting compared with the previous result. Besides, we present a 4-round differential attack on SPEEDY-5-192, which is the best attack on this variant in terms of the number of rounds so far. A faster key recovery attack covering the same rounds is also given using a differential-linear distinguisher. Neither attack threatens the full-round security of SPEEDY-5-192.
A Comparison of Multi-task learning and Single-task learning Approaches
In this paper, we provide experimental evidence for the benefits of multi-task learning in the context of masked AES implementations (via the ASCADv1-r and ASCADv2 databases). We develop an approach for comparing single-task and multi-task approaches rather than comparing specific resulting models: we do this by training many models with random hyperparameters (instead of comparing a few highly tuned models). We find that multi-task learning has significant practical advantages that make it an attractive option in the context of device evaluations: the multi-task approach leads to performant networks quickly, in particular in situations where knowledge of internal randomness is not available during training.
Last updated: 2023-04-28
A Needle in the Haystack: Inspecting Circuit Layout to Identify Hardware Trojans
The distributed integrated circuit (IC) supply chain has resulted in a myriad of security vulnerabilities, including hardware Trojans (HTs). An HT can perform malicious modifications on an IC design with potentially disastrous consequences, such as leaking secret information in cryptographic applications or altering operation instructions in processors. Due to the emergence of outsourced fabrication, an untrusted foundry is considered the most potent adversary for introducing an HT. This can be attributed to the asymmetric business model between the design house and the foundry: the design house is completely oblivious to the fabrication process, whereas the design IP is transparent to the foundry, which therefore has full control over the layout. To address this issue, in this paper we introduce, for the first time, a layout-level HT detection algorithm utilizing low-confidence classification and providing Trojan localization. We convert the IC layout to a graph and utilize Graph Neural Network (GNN)-based learning frameworks to flag any unrecognized suspicious region in the layout. The proposed framework is evaluated on AES and RS232 designs from the Trusthub benchmark suite, where it detects all nine HT-inserted designs. Finally, we open-source the full code-base for the research community at large.
Enabling Two-Party Secure Computation on Set Intersection
In this paper, we propose the first linear two-party secure-computation private set intersection (PSI) protocol, in the semi-honest adversary model, computing the following functionality. One of the parties ($P_X$) inputs a set of items $X = \{x_j \mid 1 \le j \le n_X\}$, whereas the other party ($P_Y$) inputs a set of items $Y = \{y_i \mid 1\le i \le n_Y \}$ and a set of corresponding data pairs $D_Y = \{ (d_i^0,d_i^1) \mid 1 \le i \le n_Y\}$ of the same cardinality as $Y$. While $P_Y$ outputs nothing, $P_X$ outputs a set of data $D_X = \{ d_i^{b_i} \mid b_i = 1 \text{ if } y_i \in X, b_i = 0 \text{ otherwise}\}$. This functionality is generally required when the PSI protocol is used as part of a larger secure two-party computation, such as threshold PSI or any function of the intersection in general. In the literature, there are linear circuit-PSI and secure-computation PSI proposals for similar functionalities, such as the PSI protocol of Pinkas et al. (EUROCRYPT 2019), our PSI protocol (CANS 2020), and the PSI protocol of Chandran et al. (PETS 2022); however, these include a cuckoo table mapping in the functionality, which complicates the application of further secure computation techniques on top of the PSI output. We also show that the idea underlying our secure-computation PSI protocol can be used to convert the existing circuit-PSI and secure-computation PSI protocols into protocols realizing the functionality without the cuckoo table mapping. We provide this conversion method as a separate protocol, which is one of the main contributions of this work. As a side contribution, we provide a one-time batch oblivious programmable pseudo-random function based on garbled Bloom filters.
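For reference, the functionality itself (with no privacy) is a one-liner; the sketch below only pins down the input/output behaviour that the protocol is meant to realize securely, with hypothetical example data.

```python
# Plaintext reference of the PSI functionality above: P_X learns d_i^1 when
# y_i is in X and d_i^0 otherwise; P_Y learns nothing. No security whatsoever.
def psi_functionality(X, Y, D_Y):
    Xset = set(X)
    return [d1 if y in Xset else d0 for y, (d0, d1) in zip(Y, D_Y)]

X = [3, 7, 19]
Y = [2, 7, 19, 30]
D_Y = [("miss2", "hit2"), ("miss7", "hit7"),
       ("miss19", "hit19"), ("miss30", "hit30")]
print(psi_functionality(X, Y, D_Y))   # ['miss2', 'hit7', 'hit19', 'miss30']
```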
Publicly Verifiable Auctions with Privacy
Online auctions have a steadily growing market size, creating billions of US dollars in sales value every year. Ensuring fairness and auditability while preserving bidders' privacy is the main challenge of an auction scheme. At the same time, utility-driven blockchain technology is picking up the pace, offering transparency and data integrity to many applications. In this paper, we present a blockchain-based first-price sealed-bid auction scheme. Our scheme offers privacy and public verifiability. It can be built on any public blockchain, which is leveraged to provide transparency, data integrity, and hence auditability. The inability to double-spend on a blockchain is used to prevent bid replay attacks. Moreover, our scheme achieves non-repudiation for both bidders and the auctioneer without revealing the bids, and we encapsulate this concept inside the public verification of the auction. We propose to use ElGamal encryption and Bulletproofs to construct an efficient instantiation of our scheme. We also propose to use recursive zkSNARKs to reduce the number of comparison proofs from $N-1$ to $1$, where $N$ is the number of bidders.
Security analysis of the Milenage-construction based on a PRF
This paper analyses the security of the so-called Milenage construction, developed by ETSI SAGE, when it is based on a non-one-to-one pseudo-random function (PRF) rather than a one-to-one pseudo-random permutation (PRP). It is shown that Milenage, based on an $n$-bit random function and producing $t$ $n$-bit outputs, is indistinguishable from a random $tn$-bit function up to $q = O(2^{n/2}/t)$ queries. We also extend the existing security proof for PRP-based Milenage, due to Gilbert, by generalising the model and incorporating the Milenage message authentication function into the proof.
Novel Approach to Cryptography Implementation using ChatGPT
ChatGPT, which emerged at the end of 2022, has gained significant attention as a highly advanced conversational artificial intelligence service. Developed by OpenAI, ChatGPT is a natural language processing model. Individuals may want to attempt programming with it. In this paper, we use ChatGPT to implement cryptographic algorithms. Despite requiring numerous trial-and-error efforts, it was possible to implement cryptography through ChatGPT. This implies that even without extensive coding skill or programming knowledge, one can implement cryptography through ChatGPT, given an understanding of the cryptographic structure. However, the ability to analyze the source code remains essential, since incorrect parts within the implemented code must be identified.
The Principal–Agent Problem in Liquid Staking
Proof-of-stake systems require stakers to lock up their funds in order to participate in consensus validation. This leads to capital inefficiency, as locked capital cannot be invested in Decentralized Finance (DeFi). Liquid staking rewards stakers with fungible tokens in return for staking their assets. These fungible tokens can in turn be reused in the DeFi economy. However, liquid staking introduces unexpected risks, as all delegated stake is now fungible. This exacerbates the already existing Principal–Agent problem faced during any delegation, in which the interests of the delegator (the Principal) are not aligned with the interests of the validator (the Agent). In this paper, we study the Principal–Agent problem in the context of liquid staking. We highlight the dilemma between the choice of proportional representation (having one's stake delegated to one's validator of choice) and fair punishment (being economically affected only when one's choice is misinformed). We put forth an attack illustrating that these two notions are fundamentally incompatible in an adversarial setting. We then describe the mechanism of exempt delegations, used by some staking systems today, and devise a precise formula for quantifying the correct choice of exempt delegation which allows balancing the two conflicting virtues in the rational model.
Pushing the Limit of Vectorized Polynomial Multiplication for NTRU Prime
We conduct a systematic examination of vector arithmetic for polynomial multiplications in software. Vector instruction sets and extensions typically specify a fixed number of registers, each holding a power-of-two number of bits, and support a wide variety of vector arithmetic on registers. Programmers then try to align mathematical computations with the vector arithmetic supported by the designated instruction set or extension. We delve into the intricacies of this process for polynomial multiplications. In particular, we introduce “vectorization-friendliness” and “permutation-friendliness”, and review “Toeplitz matrix-vector product” to systematically identify suitable mappings from homomorphisms to vectorized implementations.
To illustrate how the formalization works, we detail the vectorization of polynomial multiplication in $\mathbb{Z}_{4591}[x]/\left\langle x^{761} - x - 1 \right\rangle$ used in the parameter set sntrup761 of the NTRU Prime key encapsulation mechanism. For practical evaluation, we implement vectorized polynomial multipliers for the ring $\mathbb{Z}_{4591}[x]/\left\langle \Phi_{17} (x^{96}) \right\rangle$ with AVX2 and Neon. We benchmark our AVX2 implementation on Haswell and Skylake and our Neon implementation on Cortex-A72 and the “Firestorm” core of Apple M1 Pro.
Our AVX2-optimized implementation is 1.99–2.16 times faster than the state-of-the-art AVX2-optimized implementation by [Bernstein, Brumley, Chen, and Tuveri, USENIX Security 2022] on Haswell and Skylake, and our Neon-optimized implementation is 1.29–1.36 times faster than the state-of-the-art Neon-optimized implementation by [Hwang, Liu, and Yang, ACNS 2024] on Cortex-A72 and Apple M1 Pro.
For the overall scheme with AVX2, we reduce the batch key generation cycles (amortized with batch size 32) by 7.9%–12.0%, encapsulation cycles by 7.1%–10.3%, and decapsulation cycles by 10.7%–13.3% on Haswell and Skylake. For the overall performance with Neon, we reduce the encapsulation cycles by 3.0%–6.6% and decapsulation cycles by 12.8%–15.1% on Cortex-A72 and Apple M1 Pro.
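As a correctness baseline for the ring in question, the following schoolbook multiplier in $\mathbb{Z}_{4591}[x]/\langle x^{761}-x-1\rangle$ uses the reduction $x^{761} = x + 1$; it exhibits none of the Toeplitz or vectorization structure that the paper exploits and is far slower than the implementations benchmarked above.

```python
# Schoolbook multiplication in Z_4591[x]/(x^761 - x - 1), the sntrup761 ring.
Q, N = 4591, 761

def ring_mul(f, g):
    """f, g: length-761 coefficient lists (degree 0 first)."""
    c = [0] * (2 * N - 1)
    for i, fi in enumerate(f):
        if fi:
            for j, gj in enumerate(g):
                c[i + j] += fi * gj
    # Reduce with x^761 = x + 1, i.e. x^k = x^(k-760) + x^(k-761) for k >= 761.
    for k in range(2 * N - 2, N - 1, -1):
        c[k - 760] += c[k]
        c[k - 761] += c[k]
        c[k] = 0
    return [v % Q for v in c[:N]]

# Tiny sanity check: x^760 * x = x^761 = x + 1 in the ring.
f = [0] * N; f[760] = 1
g = [0] * N; g[1] = 1
out = ring_mul(f, g)
print(out[0], out[1], sum(out))   # 1 1 2
```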
TFHE Public-Key Encryption Revisited
This note introduces a public-key variant of TFHE. The output ciphertexts are of LWE type. Interestingly, the public key is shorter and the resulting ciphertexts are less noisy. The security of the scheme holds under the standard RLWE assumption. Several variations and extensions are also described.
Threshold BBS+ Signatures for Distributed Anonymous Credential Issuance
We propose a secure multiparty signing protocol for the BBS+ signature scheme; in other words, an anonymous credential scheme with threshold issuance. We prove that due to the structure of the BBS+ signature, simply verifying the signature produced by an otherwise semi-honest protocol is sufficient to achieve composable security against a malicious adversary. Consequently, our protocol is extremely simple and efficient: it involves a single request from the client (who requires a signature) to the signing parties, two exchanges of messages among the signing parties, and finally a response to the client; in some deployment scenarios the concrete cost bottleneck may be the client's local verification of the signature that it receives. Furthermore, our protocol can be extended to support the strongest form of blind signing and to serve as a distributed evaluation protocol for the Dodis-Yampolskiy Oblivious VRF. We validate our efficiency claims by implementing and benchmarking our protocol.
Threshold Cryptosystems Based on $2^k$-th Power Residue Symbols
In this paper we introduce a novel version of the Joye-Libert cryptosystem that allows users to decrypt without knowing the factorisation of the composite modulus. Then we use our construction as a building block for a threshold decryption protocol of the homomorphic Joye-Libert encryption scheme. Finally, we present several extensions of the threshold cryptosystem.
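For background, the sketch below implements textbook Joye-Libert encryption with toy, insecure parameters, together with its factorisation-dependent decryption; the paper's contribution, decrypting without knowledge of the factorisation and thresholdizing the scheme, is precisely what this sketch does not show.

```python
# Textbook Joye-Libert: N = p*q with p = 1 (mod 2^k), Enc(m) = y^m * x^(2^k) mod N.
import random
from math import gcd

k = 4                                    # message space Z_{2^k}
p, q = 97, 19                            # toy primes only; 97 = 1 (mod 16)
N = p * q
e = (p - 1) // 2**k

# Pick y such that y^e has multiplicative order exactly 2^k mod p,
# so every message in Z_{2^k} decrypts unambiguously.
y = next(y for y in range(2, p) if pow(pow(y, e, p), 2**(k - 1), p) != 1)

def enc(m):
    x = random.randrange(2, N)
    while gcd(x, N) != 1:                # x must be invertible mod N
        x = random.randrange(2, N)
    return (pow(y, m, N) * pow(x, 2**k, N)) % N

def dec(c):                              # needs p; the paper removes this need
    z = pow(c, e, p)                     # equals y^(m*e) mod p since x^(p-1) = 1
    base = pow(y, e, p)
    return next(m for m in range(2**k) if pow(base, m, p) == z)

m1, m2 = 5, 9
c1, c2 = enc(m1), enc(m2)
print(dec(c1), dec(c2), dec(c1 * c2 % N))   # 5 9 14: additively homomorphic mod 2^k
```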
Improving and Automating BFV Parameters Selection: An Average-Case Approach
The Brakerski/Fan-Vercauteren (BFV) scheme is a state-of-the-art scheme in Fully Homomorphic Encryption based on the Ring Learning with Errors (RLWE) problem.
Thus, ciphertexts contain an error that increases with each homomorphic operation and has to stay below a certain threshold for correctness. This can be achieved by setting the ciphertext modulus big enough. On the other hand, a larger ciphertext modulus decreases the level of security and computational efficiency, making parameter selection challenging. Our work aims to improve, i.e., minimize, the bound on the ciphertext modulus.
Our main contributions are the following. Primarily, we perform the first average-case analysis of the error growth for the BFV scheme, significantly improving its estimation. For a circuit with a multiplicative depth of only 5, our bounds are up to 25.2 bits tighter than previous analyses and within 1.2 bits of the experimentally observed values.
Secondly, we give a general way to bound the ciphertext modulus for correct decryption that admits closed formulas.
Finally, we use our theoretical advances to build the first parameter generation tool for the BFV scheme. It supports arbitrary but use-case-specific circuits and can generate easy-to-use code snippets, making our theoretical work accessible to both researchers and practitioners.
A Note on a CBC-Type Mode of Operation
In this paper we formally introduce a novel mode of operation based on the cipher block chaining (CBC) mode. The main idea of this mode is to use a stateful block cipher instead of a stateless one. We show how to implement our proposal and present a performance analysis of our mode. Next, we provide a concrete security analysis by computing a tight bound on the success of adversaries based on their resources. Our performance and security analyses show that this novel mode is more secure than the cipher block chaining mode for large files, but the encryption/decryption time doubles/triples. Therefore, our novel mode is suitable for encrypting large files when higher security is required but speed is not paramount. Note that the changes required to transform software implementations of the cipher block chaining mode into this new mode are minimal, so transitioning to the new mode is straightforward.
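A minimal sketch of the stateful-CBC idea follows, with a toy 64-bit Feistel cipher whose round function absorbs a block counter as its state. Both the toy cipher and the state-update rule are hypothetical stand-ins chosen only to make the example self-contained and invertible; this is not the mode actually proposed in the paper.

```python
# CBC where the block cipher keeps state: here, a block counter fed into the
# round keys of a toy 4-round, 64-bit Feistel cipher (illustration only).
import hashlib, os

def _rf(key, ctr, rnd, half):            # SHA-256-based Feistel round function
    d = hashlib.sha256(key + ctr.to_bytes(8, "big") + bytes([rnd]) + half).digest()
    return d[:4]

def _xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def feistel_enc(key, ctr, block):         # 8-byte block
    L, R = block[:4], block[4:]
    for rnd in range(4):
        L, R = R, _xor(L, _rf(key, ctr, rnd, R))
    return L + R

def feistel_dec(key, ctr, block):
    L, R = block[:4], block[4:]
    for rnd in reversed(range(4)):
        L, R = _xor(R, _rf(key, ctr, rnd, L)), L
    return L + R

def cbc_enc(key, iv, blocks):
    out, prev = [], iv
    for ctr, b in enumerate(blocks):       # the counter is the cipher's state
        prev = feistel_enc(key, ctr, _xor(b, prev))
        out.append(prev)
    return out

def cbc_dec(key, iv, blocks):
    out, prev = [], iv
    for ctr, c in enumerate(blocks):
        out.append(_xor(feistel_dec(key, ctr, c), prev))
        prev = c
    return out

key, iv = os.urandom(16), os.urandom(8)
msg = [b"block001", b"block002", b"block003"]
assert cbc_dec(key, iv, cbc_enc(key, iv, msg)) == msg
print("round-trip ok")
```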
Threshold Signatures from Inner Product Argument: Succinct, Weighted, and Multi-threshold
Threshold signatures protect the signing key by sharing it among a group of signers, so that an adversary must corrupt a threshold number of signers to be able to forge signatures. Existing threshold signatures with succinct signatures and constant verification times do not work if signers have different weights. Such weighted settings are seeing increasing importance in decentralized systems, especially in Proof-of-Stake blockchains. This paper presents a new paradigm for threshold signatures for pairing- and discrete-logarithm-based cryptosystems. Our scheme has a compact verification key consisting of only 7 group elements, and a signature consisting of 8 group elements. Verifying the signature requires 1 exponentiation and 13 bilinear pairings. Our scheme supports arbitrary weight distributions among signers and arbitrary thresholds. It requires non-interactive preprocessing after a universal powers-of-tau setup. We prove the security of our scheme in the Algebraic Group Model and implement it in Go. Our evaluation shows that our scheme achieves a comparable signature size and verification time to a standard (unweighted) threshold signature. Compared to existing multisignature schemes, our scheme has a much smaller public verification key.
FedVS: Straggler-Resilient and Privacy-Preserving Vertical Federated Learning for Split Models
In a vertical federated learning (VFL) system consisting of a central server and many distributed clients, the training data are vertically partitioned such that different features are privately stored on different clients. The problem of split VFL is to train a model split between the server and the clients. This paper aims to address two major challenges in split VFL: 1) performance degradation due to straggling clients during training; and 2) data and model privacy leakage from clients’ uploaded data embeddings. We propose FedVS to simultaneously address these two challenges. The key idea of FedVS is to design secret sharing schemes for the local data and models, such that information-theoretical privacy against colluding clients and a curious server is guaranteed, and the aggregation of all clients’ embeddings is reconstructed losslessly, via decrypting computation shares from the non-straggling clients. Extensive experiments on various types of VFL datasets (including tabular, CV, and multi-view) demonstrate the universal advantages of FedVS in straggler mitigation and privacy protection over baseline protocols.
Time Complexities of Multiple-precision Modular Operations and Related Ratios
Modular arithmetic used in cryptography includes modular addition, subtraction, multiplication, inversion, exponentiation, etc. In this paper, the authors analyze the bit complexity of a bitwise modular operation and the time complexity of a non-bitwise modular operation. They also discuss the clock cycles required for one bytewise modular operation, using instructions from the Atmel 8-bit AVR instruction set. Finally, they show that the ratio of the derived numbers of clock cycles for two modular operations under different modulus lengths is almost a constant.
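A quick empirical companion to the last claim: the script below times modular addition, multiplication, and inversion on big integers at two modulus lengths and prints the resulting ratios. Python big-integer timings are only a rough stand-in for the paper's bit- and byte-level operation counts on AVR.

```python
# Time modular operations at two modulus lengths and compare their ratios.
import random, time
from math import gcd

def bench(fn, reps=2000):
    t0 = time.perf_counter()
    for _ in range(reps):
        fn()
    return (time.perf_counter() - t0) / reps

for bits in (1024, 2048):
    m = random.getrandbits(bits) | (1 << (bits - 1)) | 1   # random odd modulus
    a = random.randrange(2, m)
    while gcd(a, m) != 1:                                  # ensure a is invertible
        a = random.randrange(2, m)
    b = random.randrange(2, m)
    t_add = bench(lambda: (a + b) % m)
    t_mul = bench(lambda: (a * b) % m)
    t_inv = bench(lambda: pow(a, -1, m), reps=200)
    print(f"{bits}-bit: mul/add = {t_mul/t_add:.1f}, inv/mul = {t_inv/t_mul:.1f}")
```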
SPDH-Sign: towards Efficient, Post-quantum Group-based Signatures
In this paper, we present a new diverse class of post-quantum group-based Digital Signature Schemes (DSS). The approach is significantly different from previous examples of group-based digital signatures and adopts the framework of group action-based cryptography: we show that each finite group defines a group action relative to the semidirect product of the group by its automorphism group, and give security bounds on the resulting signature scheme in terms of the group-theoretic computational problem known as the Semidirect Discrete Logarithm Problem (SDLP). Crucially, we make progress towards being able to efficiently compute the novel group action, and give an example of a parameterised family of groups for which the group action can be computed for any parameters, thereby negating the need for expensive offline computation or inclusion of redundancy required in other schemes of this type.
Semidirect Product Key Exchange: the State of Play
Of the many families of cryptographic schemes proposed to be post-quantum, a relatively unexplored set of examples comes from group-based cryptography. One of the more central schemes from this area is the so-called Semidirect Product Key Exchange (SDPKE), a generalisation of Diffie-Hellman Key Exchange that is plausibly post-quantum. In this report we survey the state of the literature relating to SDPKE, providing a high-level discussion of security, as well as a comprehensive overview of the proposed platforms and the main cryptanalytic ideas relevant to each.
Implementing and Optimizing Matrix Triples with Homomorphic Encryption
In today’s interconnected world, data has become a valuable asset, leading to a growing interest in protecting it through techniques such as privacy-preserving computation. Two well-known approaches are multi-party computation and homomorphic encryption, with use cases such as privacy-preserving machine learning for evaluating or training neural networks. For multi-party computation, one of the fundamental arithmetic operations is secure multiplication in the malicious security model and, by extension, the multiplication of matrices, which is expensive to compute in the malicious model. Transferring the problem of secure matrix multiplication to the homomorphic domain enables savings in communication complexity, reducing the main bottleneck.
In this work, we implement and optimize the homomorphic generation of matrix triples. We provide an open-source implementation for the leveled BGV (Brakerski Gentry Vaikuntanathan) scheme supporting plaintext moduli of arbitrary size using state-of-the-art implementation techniques. We also provide a new, use-case specific approach to parameter generation for leveled BGV-like schemes heuristically optimizing for computation time and taking into account architecture-specific constraints. Finally, we provide an in-depth analysis of the homomorphic circuit enabling the re-use of key switching keys and eliminating constant multiplications, combining our results in an implementation to generate homomorphic matrix triples for arbitrary plaintext moduli.
Our implementation is publicly available and up to $2.1\times$ faster compared to previous work while also providing new time-memory trade-offs for different computing environments. Furthermore, we implement and evaluate additional, use-case specific optimization opportunities such as matrix slicing for the matrix triple generation.
Blockchain Large Language Models
This paper presents a dynamic, real-time approach to detecting anomalous blockchain transactions. The proposed tool, BlockGPT, generates tracing representations of blockchain activity and trains a large language model from scratch to act as a real-time Intrusion Detection System. Unlike traditional methods, BlockGPT is designed to offer an unrestricted search space and does not rely on predefined rules or patterns, enabling it to detect a broader range of anomalies. We demonstrate the effectiveness of BlockGPT through its use as an anomaly detection tool for Ethereum transactions. In our experiments, it effectively identifies abnormal transactions among a dataset of 68M transactions and has a batched throughput of 2284 transactions per second on average. Our results show that BlockGPT identifies abnormal transactions by ranking 49 out of 124 attacks among the top-3 most abnormal transactions interacting with their victim contracts. This work contributes to the field of blockchain transaction analysis by introducing a custom data encoding compatible with the transformer architecture, a domain-specific tokenization technique, and a tree encoding method specifically crafted for the Ethereum Virtual Machine (EVM) trace representation.
Post-Quantum Public-key Authenticated Searchable Encryption with Forward Security: General Construction, and Applications
Public-key encryption with keyword search (PEKS), first proposed by Boneh et al. (EUROCRYPT 2004), enables searching over ciphertext files. Nevertheless, it is vulnerable to inside keyword guessing attacks (IKGA). Public-key authenticated encryption with keyword search (PAEKS), introduced by Huang et al. (Inf. Sci. 2017), is secure against IKGA but susceptible to quantum computing attacks. Liu et al. and Cheng et al. addressed this problem by reducing to lattice hardness assumptions (AsiaCCS 2022, ESORICS 2022). Furthermore, several scholars have pointed out that the threat of secret key exposure poses a severe and realistic concern, potentially leading to privacy disclosure (EUROCRYPT 2003, Comput. J. 2022). As a result, research on mitigating key exposure while resisting quantum attacks for the PAEKS primitive is far-reaching.
In this work, we present the first generic construction and instantiation of a forward-secure PAEKS primitive based on lattice hardness without trusted authorities, mitigating secret key exposure while ensuring quantum-safe properties. We extend the scheme of Liu et al. (AsiaCCS 2022) and formalize a novel post-quantum PAEKS construction, namely FS-PAEKS. To begin with, we introduce a binary tree structure to represent the time periods, along with a lattice basis extension algorithm and the SamplePre algorithm, to obtain post-quantum one-way secret key evolution, allowing users to update their secret keys periodically. Furthermore, our scheme is proven IND-CKA and IND-IKGA secure in a quantum setting. In addition, we compare our primitive with other top-tier schemes in terms of computational complexity and communication overhead. Ultimately, we demonstrate two potential applications of FS-PAEKS.
Reconsidering Generic Composition: the modes A10, A11 and A12 are insecure
Authenticated Encryption (AE) achieves privacy and authenticity with a single scheme. It is possible to obtain an AE scheme by gluing together an encryption scheme (providing privacy) and a Message Authentication Code (providing authenticity). This approach is called generic composition, and its security has been studied by Namprempre et al. [NRS14]. They looked into all possible gluings of an encryption scheme with a secure MAC to obtain a nonce-based AE scheme. The encryption scheme is either IV-based (that is, with an additional random input, the initialization vector [IV]) or nonce-based (with an input to be used only once, the nonce). Namprempre et al. assessed the security or insecurity of all possible composition combinations except four (N4, A10, A11 and A12). Berti et al. [BPP18a] showed that N4 is insecure and that the remaining modes (A10, A11, and A12) are either all secure or all insecure.
Here, we prove that these modes are all insecure with a counterexample.
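To make "generic composition" concrete, here is the classic (and secure) nonce-based Encrypt-then-MAC gluing of an encryption scheme with a MAC, built from HMAC-SHA256 and a hash-based stream cipher. This illustrates the kind of object being classified; it is not one of the insecure modes A10, A11 and A12 analyzed in the paper.

```python
# Encrypt-then-MAC generic composition: encrypt, then MAC the nonce||ciphertext.
import hmac, hashlib, os

def _keystream(k_enc, nonce, length):
    out, ctr = b"", 0
    while len(out) < length:
        out += hashlib.sha256(k_enc + nonce + ctr.to_bytes(4, "big")).digest()
        ctr += 1
    return out[:length]

def etm_encrypt(k_enc, k_mac, nonce, msg):
    ct = bytes(m ^ s for m, s in zip(msg, _keystream(k_enc, nonce, len(msg))))
    tag = hmac.new(k_mac, nonce + ct, hashlib.sha256).digest()
    return ct, tag

def etm_decrypt(k_enc, k_mac, nonce, ct, tag):
    expected = hmac.new(k_mac, nonce + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):     # verify before decrypting
        raise ValueError("authentication failure")
    return bytes(c ^ s for c, s in zip(ct, _keystream(k_enc, nonce, len(ct))))

k_enc, k_mac, nonce = os.urandom(16), os.urandom(16), os.urandom(12)
ct, tag = etm_encrypt(k_enc, k_mac, nonce, b"attack at dawn")
print(etm_decrypt(k_enc, k_mac, nonce, ct, tag))
```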
$\texttt{CryptographicEstimators}$: a Software Library for Cryptographic Hardness Estimation
The estimation of the computational complexity of hard problems is essential for determining secure parameters for cryptographic systems. To date, these estimations are often performed in an ad-hoc manner. This has led to a scattered landscape of available estimation scripts, with multiple scripts for the same problem producing varying outputs. Overall, this complicates the task of reaching consensus on the hardness of cryptographic problems, and it makes it difficult for designers to gather precise information on the concrete difficulty of the underlying problems. Especially in light of the still ongoing NIST PQC standardization effort and the upcoming call for post-quantum secure digital signature schemes, there is a pressing need for a reliable point of access for concrete security estimates.
In this work we present the first open-source software library entirely dedicated to cryptographic hardness estimation, the $\texttt{CryptographicEstimators}$ library. In contrast to most previous estimators, this library follows a modern object-oriented software architecture, which provides a wide variety of features. Overall the design is optimized to ease extending existing estimators by new algorithms and makes it simple to integrate completely new estimators.
In this work we further specify the algorithmic cost model underlying the estimators. To provide a starting point for the project, we gathered and integrated estimators for six different hardness assumptions, including the syndrome decoding problem, the multivariate quadratic problem, the code equivalence problem, the permuted kernel problem, and different flavors thereof. In the course of gathering those estimation scripts, we also normalized the estimates to fit the cost model and to measure the same unit operations.
Wave Parameter Selection
Wave is a provably EUF-CMA (existentially unforgeable under adaptive chosen message attacks) digital signature scheme based on codes \cite{DST19a}. It is a hash-and-sign primitive and its security is built, following a GPV-like framework \cite{GPV08}, on two assumptions related to coding theory: (i) the hardness of finding a word of prescribed Hamming weight and prescribed syndrome, and (ii) the pseudo-randomness of ternary generalized $(U|U+V)$ codes. Forgery attacks (i)---or message attacks---consist in solving the ternary decoding problem for large weight \cite{BCDL19}, while, to the best of our knowledge, key attacks (ii) will try to exhibit words that are characteristic of $(U|U+V)$ codes, which are called type-U or type-V codewords in the present paper. In the current state of the art, the best known attacks of both kinds reduce to various flavours of Information Set Decoding (ISD) algorithms for different regimes of parameters. In this paper we give estimates for the complexities of the best known ISD variants for those regimes. Maximizing the computational effort, and thus the security, for both attacks leads to conflicting constraints on the parameters. We provide a methodology to derive optimal trade-offs for selecting parameters for the Wave signature scheme achieving a given security level. We apply this methodology to the current state of the art and propose some effective parameters for Wave. For $\lambda=128$ bits of classical security, the signature is $737$ bytes long, scaling linearly with the security level, and the public key size is $3.6$ Mbytes, scaling quadratically with the security level.
Proof-Carrying Data From Arithmetized Random Oracles
Proof-carrying data (PCD) is a powerful cryptographic primitive that allows mutually distrustful parties to perform distributed computation in an efficiently verifiable manner. Known constructions of PCD are obtained by recursively composing SNARKs or related primitives. SNARKs with desirable properties such as transparent setup are constructed in the random oracle model. However, using such SNARKs to construct PCD requires heuristically instantiating the oracle and using it in a non-black-box way. Chen, Chiesa and Spooner (EC'22) constructed SNARKs in the low-degree random oracle model, circumventing this issue, but instantiating their model in the real world appears difficult.
In this paper, we introduce a new model: the arithmetized random oracle model (AROM). We provide a plausible standard-model (software-only) instantiation of the AROM, and we construct PCD in the AROM, given only a standard-model collision-resistant hash function. Furthermore, our PCD construction is for arbitrary-depth compliance predicates. We obtain our PCD construction by showing how to construct SNARKs in the AROM for computations that query the oracle, given an accumulation scheme for oracle queries in the AROM. We then construct such an accumulation scheme for the AROM.
We give an efficient "lazy sampling" algorithm (an emulator) for the ARO up to some error. Our emulator enables us to prove the security of cryptographic constructs in the AROM and to show that zkSNARKs in the ROM also satisfy zero-knowledge in the AROM. The algorithm is non-trivial, and relies on results in algebraic query complexity and the Combinatorial Nullstellensatz.
A Novel Preprocessing-Free Proofless Verifiable Computation Scheme from Integer Factoring
Verifiable Computation (VC) schemes provide a mechanism for verifying the output of a remotely executed program. They are used to support computing paradigms wherein a computationally restricted client, the Verifier, wishes to delegate work to a more powerful but untrusted server, the Prover. The Verifier wishes to detect any incorrect results, be they accidental or malicious. The current state of the art is only close-to-practical, usually because of a computationally demanding setup which must be amortised across repeat executions. We present a VC scheme for verifying the output of arithmetic circuits with a small one-time setup, KGen, independent of the size of the circuit being verified, and an insignificantly small constant program-specific setup, ProbGen. To our knowledge, our VC scheme is the first built from the hardness of integer factoring, a standard cryptographic assumption. Our scheme has the added novelty that the proofs are simply the raw output of the target computation, and the Prover is in effect blind to the fact that they are taking part in a VC scheme at all. Although our scheme has worse asymptotic performance than the state of the art, it is particularly well suited for verifying one-shot programs and the output of large integer polynomial evaluation.
Two Party Fair Exchange
Fair Exchange (FE) protocols are a class of cryptographic protocol in which two parties, X and Y, exchange some secret data, where the ability of each party to receive secret data is contingent on having sent secret data of their own. When exchanging secret data without the support of FE protocols, whoever sends their secret first makes themselves vulnerable to the possibility that the other participant will cheat and not send their secret in return. It is widely believed that FE protocols satisfying the soundness notion of Strong Fairness require a third-party arbitrator trusted by both X and Y; however, in this paper we construct an FE scheme satisfying a notion of Strong Fairness with no third-party involvement. This is achieved by embracing nondeterminism and slightly relaxing the definitions of security from the perfect to the computational domain. The protocol presented in this paper allows two parties running interactive processes to exchange secrets, safe in the knowledge that either both secrets will change hands or neither will, without the involvement of a trusted third party. The security of the protocol reduces to the strength of an underlying symmetric authenticated encryption scheme.
General-Purpose Secure Conflict-free Replicated Data Types
Conflict-free Replicated Data Types (CRDTs) are a very popular class of distributed data structures that strike a compromise between strong and eventual consistency. Ensuring the protection of data stored within a CRDT, however, cannot be done trivially using standard encryption techniques, as secure CRDT protocols would require replica-side computation.
This paper proposes an approach to lift general-purpose implementations of CRDTs to secure variants using secure multiparty computation (MPC). Each replica within the system is realized by a group of MPC parties that compute its functionality.
Our results include: i) an extension of current formal models used for reasoning about the security of CRDT solutions to the MPC setting; ii) an MPC language and type system to enable the construction of secure versions of CRDTs; and iii) a proof of security that relates the security of CRDT constructions designed under said semantics to the underlying MPC library. We provide an open-source system implementation with an extensive evaluation, which compares different designs against their baseline throughput and latency.
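As background for readers unfamiliar with CRDTs, below is the textbook state-based G-Counter: each replica owns one slot of a vector, and merge is a pointwise maximum, so replicas converge regardless of message ordering. This cleartext sketch only illustrates the kind of data type being lifted; in the paper's setting, each replica would instead be emulated by a group of MPC parties computing these operations over secret-shared state.

```python
class GCounter:
    """Grow-only counter CRDT: increment locally, merge by max."""

    def __init__(self, replica_id: int, n_replicas: int):
        self.id = replica_id
        self.counts = [0] * n_replicas  # one slot per replica

    def increment(self):
        self.counts[self.id] += 1

    def value(self) -> int:
        return sum(self.counts)

    def merge(self, other: "GCounter"):
        # Pointwise max: commutative, associative, idempotent,
        # hence convergent under any gossip order.
        self.counts = [max(a, b) for a, b in zip(self.counts, other.counts)]

a, b = GCounter(0, 2), GCounter(1, 2)
a.increment(); b.increment(); b.increment()
a.merge(b); b.merge(a)
assert a.value() == b.value() == 3  # both replicas converge
```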
Reusable, Instant and Private Payment Guarantees for Cryptocurrencies
Despite offering numerous advantages, public decentralized cryptocurrencies such as Bitcoin suffer from scalability issues such as high transaction latency and low throughput. The vast array of so-called Layer-2 solutions tackling the scalability problem focuses on throughput and treats latency as a secondary objective. However, in the context of retail payments, instant finality of transactions is arguably a more pressing concern, besides the overarching concern for privacy.
In this paper, we provide an overlay network that allows privacy-friendly, low-latency payments in a retail market. Our approach follows that of a recent work called Snappy, which achieved low latency but exposed the identities of customers and their transaction histories. Our construction keeps this data private, while providing merchants with protection against double-spending attacks. Although our system still requires customers to register collateral, crucially this collateral is reusable over time.
The technical novelty of our work comes from randomness-reusable threshold encryption (RRTE), a cryptographic primitive we designed specifically for the following features: our construction provably guarantees payments to merchants, keeps the identities of honest customers secret, and prevents their transactions from being linked. We also present an implementation of our construction, demonstrating its capacity for fast global payments in a retail setting, with a delay of less than 1 second.
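RRTE itself is new to this paper, but the underlying idea of reusing one encryption randomness has a classical analogue in multi-recipient ElGamal (in the style of Bellare, Boldyreva, and Staddon): a single randomness r yields one shared component g^r plus one small component per recipient. A minimal sketch with toy, insecure parameters:

```python
import secrets

p = 2**127 - 1   # Mersenne prime; toy-sized, NOT cryptographically secure
g = 3

def keygen():
    sk = secrets.randbelow(p - 2) + 1
    return sk, pow(g, sk, p)

def enc_multi(pks, msgs):
    r = secrets.randbelow(p - 2) + 1        # ONE randomness for everyone
    c0 = pow(g, r, p)                       # shared ciphertext component
    return c0, [m * pow(pk, r, p) % p for pk, m in zip(pks, msgs)]

def dec(sk, c0, ci):
    return ci * pow(c0, p - 1 - sk, p) % p  # ci / c0^sk via Fermat inverse

keys = [keygen() for _ in range(3)]
msgs = [1234, 5678, 9999]
c0, cts = enc_multi([pk for _, pk in keys], msgs)
assert [dec(sk, c0, ct) for (sk, _), ct in zip(keys, cts)] == msgs
```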
New NTRU Records with Improved Lattice Bases
The original NTRU cryptosystem from 1998 can be considered the starting point of the great success story of lattice-based cryptography. Modern NTRU versions like NTRU-HPS and NTRU-HRSS are round-3 finalists in NIST's selection process, and CRYSTALS-Kyber and especially Falcon are also heavily influenced by NTRU.
Coppersmith and Shamir proposed to attack NTRU via lattice basis reduction, and variations of the Coppersmith-Shamir lattice have been successfully applied to solve official NTRU challenges by Security Innovation, Inc. up to dimension $n=173$.
In our work, we provide the tools to attack modern NTRU versions, both by designing a proper lattice basis and by tuning BKZ with lattice sieving, as implemented in the G6K library, to the needs of NTRU.
Let $n$ be prime, $\Phi_n := (X^n-1)/(X-1)$, and let $\mathbb{Z}_q[X]/(\Phi_n)$ be the cyclotomic ring. Contrary to common belief, we show that switching from the Coppersmith-Shamir lattice to a basis for the cyclotomic ring provides benefits. To this end, we slightly enhance the LWE with Hints framework of Dachman-Soled, Ducas, Gong, and Rossi with the concept of projections against almost-parallel hints.
Using our new lattice bases, we set the first cryptanalysis landmarks for NTRU-HPS with $n \in [101,171]$ and for NTRU-HRSS with $n \in [101,211]$. As a numerical example, we break our largest HPS-171 instance using the cyclotomic ring basis within $83$ core days, whereas the Coppersmith-Shamir basis requires $172$ core days.
We also break one more official NTRU challenge by Security Innovation, Inc., originally worth \$1000, in dimension $n=181$, using $20$ core years.
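For readers who want the baseline object made concrete, the sketch below constructs the classical $2n \times 2n$ Coppersmith-Shamir basis in its textbook layout (this is not the paper's code; the paper's contribution is precisely to replace this basis with one for the cyclotomic ring):

```python
def coppersmith_shamir_basis(h, q):
    """Rows of the classical 2n x 2n Coppersmith-Shamir basis for an
    NTRU public key h = (h_0, ..., h_{n-1}) over Z_q[X]/(X^n - 1).
    Since f * h = g (mod q) for the secret key (f, g), the lattice
    spanned by these rows contains the short vector (f, g)."""
    n = len(h)
    basis = []
    for i in range(n):                        # rows (X^i, X^i * h)
        row = [int(i == j) for j in range(n)]
        row += [h[(j - i) % n] for j in range(n)]  # circulant of h
        basis.append(row)
    for i in range(n):                        # rows (0, q * X^i)
        row = [0] * n + [q * int(i == j) for j in range(n)]
        basis.append(row)
    return basis
```

In practice one would hand such a basis to a reduction library (e.g., fpylll or G6K); the paper's observation is that the corresponding basis over $\mathbb{Z}_q[X]/(\Phi_n)$ reduces measurably faster, as in the HPS-171 comparison above.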
A security analysis on MQ-Sign
MQ-Sign is a variant of the UOV signature scheme proposed by Shim et al. It has been suggested as a candidate for the standardization of post-quantum cryptography in the Republic of Korea (known as KpqC). However, Aulbach et al. recently proposed a practical key recovery attack against MQ-Sign-RS and MQ-Sign-SS with a simple secret key $\mathcal{S}$. In this paper, we propose another attack that is valid for the case of a general secret key $\mathcal{S}$.
Neural-Linear Attack Based on Distribution Data and Its Application on DES
The neural-differential distinguisher proposed by Gohr boosted the development of neural-aided differential attacks. Linear attacks, another significant cryptanalysis technique, have not developed as rapidly in combination with deep learning as differential attacks. In 2020, Hou et al. proposed the first neural-linear attack, with one-bit key recovery on 3-, 4- and 5-round DES and restricted multiple-bit recovery on 4 rounds, where the effective bits in one plaintext-ciphertext pair are spliced into one data sample. In this paper, we compare neural-linear cryptanalysis with neural-differential cryptanalysis and propose a new data preprocessing algorithm based on their similarities and differences; we call the new data structure distribution data. Building on it, we mount key recovery attacks on round-reduced DES. First, we raise the accuracy of the neural-linear distinguisher by about 50%. Second, our distinguisher improves the effectiveness of one-bit key recovery against 3-, 4- and 5-round DES over the former one, and attacks 6-round DES with a success rate of 60.6% using 2048 plaintext-ciphertext pairs. Third, we propose a practical multiple-bit key recovery algorithm, taking the neural-linear attack from theory to practice.
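To make "one-bit key recovery" concrete, the sketch below shows the classical counting step it generalizes, Matsui's Algorithm 1: given a linear approximation $\langle P,a\rangle \oplus \langle C,b\rangle = \langle K,c\rangle$ holding with some bias, a majority vote over many pairs recovers the key parity. The masks and pairs here are placeholders; the paper replaces this statistical step with a neural distinguisher fed distribution data.

```python
def parity(x: int, mask: int) -> int:
    """Parity of the masked bits of x."""
    return bin(x & mask).count("1") & 1

def recover_key_parity(pairs, pmask, cmask, bias_positive=True):
    """Matsui's Algorithm 1: majority vote over plaintext-ciphertext
    pairs decides the key parity <K, c> of a linear approximation."""
    ones = sum(parity(p, pmask) ^ parity(c, cmask) for p, c in pairs)
    zero_majority = ones < len(pairs) / 2
    # With positive bias, <K, c> = 0 exactly when zeros dominate;
    # with negative bias the decision flips.
    return 0 if zero_majority == bias_positive else 1
```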
Revealing the Secrets of Radio-Enabled Embedded Systems: on extraction of raw information from any on-board signal through RF
In this work we are interested in evaluating the possibility of extracting information from radio-enabled embedded systems from a long distance. That is, our focus is on capturing information from sources on the micrometer to tens-of-centimeters scale, such as intra- or inter-device buses, board-level routing traces, etc. Moreover, we focus on distances in the range of millimeters to tens of centimeters from the (on-chip or on-board) embedded system's Tx antenna to the signal source.
Side channels denote the presence of information in illegitimate channels. Side-channel analysis (SCA) attacks typically require statistical analysis and many leakage traces, focusing on micrometer-level signals (sources) which emanate direct near-field information up to centimeter-level distances. In the same context (near-field, micrometer-level), simple power analysis (SPA)-like attacks typically extract direct raw information from one or a few leakages, or apply statistical analysis to various samples from the same trace, similarly to horizontal attacks. Recently, radio-enabled systems were shown to emanate information from micrometer-level sources, such as CPU processing, to a large distance (far field) through the RF Tx antenna; so far, only SCA-like statistical analyses have been shown. On the other hand, various reports exist on direct information eavesdropping/sniffing or data exfiltration emanating from centimeter- to tens-of-centimeters-scale sources, e.g., SATA, USB, power lines, serial interfaces, air-gapped systems, screens, and even optical fibers. All these elements typically serve as both the source and a direct Tx antenna (huge: several to tens of centimeters) of the sensitive information. These antennas typically transmit information over short distances, and the decay is very steep (proportional to $r^{-2}$-$r^{-3}$, depending on various factors and models).
To the best of our knowledge, we report here for the first time an alarming security challenge: any signal in the embedded system, from serial ports, DMA-controlled memory access, JTAG and SPI interfaces, to on-board signals with galvanic connection to the Tx antenna chip and \emph{on-board signals without galvanic connection to the Tx antenna chip itself, all leak direct information up to tens of centimeters from the source to the Tx antenna}. This alarming situation induces signal-integrity implications within the embedded system, as well as significant implications relating to device isolation and user isolation. It may also affect standards and specifications for, e.g., electromagnetic compatibility (EMC), on-board signal shielding, electromagnetic and RF interference (EMI, RFI), cross-talk, and generally design-for-manufacturing (DFM) guidelines, for both intra-IC and PCB design. We demonstrate such direct readout of signals with commercial and low-cost equipment, indicating how problematic the situation is. The existence of such leakage is demonstrated both on an ultra-low-cost platform such as the nRF52832 (nRF) embedded system and on a more advanced ESP32-C3-DevKitC-02 board, which is far more widespread in ISM radio applications and meets certifications such as FCC and CE (as compared to the nRF device). We have constructed experiments to demonstrate leakage scenarios from (1) on-chip and (2) off-chip on-board signals, as well as (3) signals without galvanic connection to the RF front-end chip, showing the severity, repeatability, and systematic nature of the phenomenon across various devices.
We further demonstrate how sophisticated adversaries can build a code-injection gadget that carries sensitive data and modulates it so as to be best extracted over the RF channel.
The main observation we push forward is that unless concrete interference and isolation standards appear with security metrics in mind, which are significantly different from those needed for communication, it will be hard to prevent such leakage.
DORAM revisited: Maliciously secure RAM-MPC with logarithmic overhead
Distributed Oblivious Random Access Memory (DORAM) is a secure multiparty protocol that allows a group of participants holding a secret-shared array to read and write to secret-shared locations within the array. The efficiency of a DORAM protocol is measured by the amount of communication and computation required per read/write query into the array. DORAM protocols are a necessary ingredient for executing Secure Multiparty Computation (MPC) in the RAM model.
Although DORAM has been widely studied, all existing DORAM protocols have focused on the setting where the DORAM servers are semi-honest. Generic techniques for upgrading a semi-honest DORAM protocol to the malicious model typically increase the asymptotic communication complexity of the DORAM scheme.
In this work, we present a 3-party DORAM protocol which requires $O((\kappa + D)\log N)$ communication and computation per query, for a database of size $N$ with $D$-bit values, where $\kappa$ is the security parameter. The hidden constants in the big-O notation are small. We show that our protocol is UC-secure in the presence of a malicious, static adversary. This matches the communication and computation complexity of the best semi-honest DORAM protocol, and is the first malicious DORAM protocol with this complexity.
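For context on what a DORAM query must hide, here is the trivial oblivious baseline that such protocols improve on: touch every slot so the access pattern is independent of the queried index. This cleartext sketch costs $O(N)$ work per query (a real DORAM runs this kind of logic over secret shares across servers, and the paper brings the per-query cost down to $O((\kappa + D)\log N)$):

```python
def oblivious_read(array, index):
    """Linear-scan read: the memory trace never depends on `index`."""
    result = 0
    for j, value in enumerate(array):
        is_target = int(j == index)   # in MPC this is a secure comparison
        result += is_target * value   # every slot is touched regardless
    return result

def oblivious_write(array, index, new_value):
    """Linear-scan write: rewrite every slot, changing only the target."""
    for j in range(len(array)):
        is_target = int(j == index)
        array[j] = is_target * new_value + (1 - is_target) * array[j]

data = [10, 20, 30, 40]
assert oblivious_read(data, 2) == 30
oblivious_write(data, 2, 99)
assert data == [10, 20, 99, 40]
```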
Exploring Formal Methods for Cryptographic Hash Function Implementations
Cryptographic hash functions are used inside many applications that critically rely on their resistance against cryptanalysis attacks and the correctness of their implementations. Nevertheless, vulnerabilities in cryptographic hash function implementations can remain unnoticed for more than a decade, as shown by the recent discovery of a buffer overflow in the implementation of SHA-3 in the eXtended Keccak Code Package (XKCP), impacting Python, PHP, and several other software projects. This paper explains how this buffer overflow vulnerability in XKCP was found. More generally, we explore the application of formal methods to the five finalist submission packages to the NIST SHA-3 competition, allowing us to (re-)discover vulnerabilities in the implementations of Keccak and BLAKE, and also discover a previously undisclosed vulnerability in the implementation of Grøstl. We also show how the same approach rediscovers a vulnerability affecting 11 out of the 12 implemented cryptographic hash functions in Apple's CoreCrypto library. Our approach consists of removing certain lines of code and then using KLEE as a tool to prove functional equivalence. We discuss the advantages and limitations of our approach and hope that our attempt to consolidate some earlier approaches can lead to new insights.
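The paper's harness runs KLEE symbolically on C implementations; its concrete-input analogue is plain differential testing: run a reference and a target implementation on identical inputs and assert equal outputs. A minimal Python sketch of that harness shape, with hashlib's sha3_256 standing in for both the reference and the code under test:

```python
import hashlib
import secrets

def reference_impl(msg: bytes) -> bytes:
    return hashlib.sha3_256(msg).digest()

def target_impl(msg: bytes) -> bytes:
    # Placeholder for the implementation under test (e.g., XKCP bindings).
    return hashlib.sha3_256(msg).digest()

# Sweep input lengths around block boundaries plus one large input;
# boundary and length-handling cases are where implementation bugs
# (such as the XKCP buffer overflow) tend to hide.
for length in list(range(0, 300)) + [2**20]:
    msg = secrets.token_bytes(length)
    assert reference_impl(msg) == target_impl(msg), f"mismatch at len {length}"
```

KLEE goes further than this sketch by making the input symbolic, so a single run covers all inputs of a given shape rather than randomly sampled ones.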
IGD-ScoreChain: A Lightweight and Scalable Blockchain Based on Node Sharding for the Internet of Things
Due to the significant development of the intelligence industry worldwide, various initiatives have increasingly recognized the value of the Internet of Things (IoT). IoT systems, however, are often hindered by fundamental challenges, such as the need for a central server to manage them. Decentralizing these systems can be achieved through the use of blockchains. Recently, blockchain has gained popularity in various fields, such as banking, IoT, and the intelligence industry, and has attracted broad attention. One of the main problems is the scalability of such systems as the network size grows.
This paper examines how to overcome this challenge in blockchain-based IoT systems. We introduce a sharding-based blockchain that is lightweight and scalable. In the proposed method, nodes are assigned to a number of shards based on their history of activity. As part of this study, the Improved Byzantine Fault Tolerance with Graceful performance Degradation (IGDBFT) consensus algorithm is introduced within the proposed scheme for intra-shard consensus. We also develop a solution for storing blocks and handling cross-shard transactions, using a global chain of parent blocks in the cloud layer. Finally, we analyze the security and efficiency of our scheme and compare our sharding-based protocol with previous protocols.
On Central Bank Digital Currency: A composable treatment
Central Bank Digital Currency (CBDC) is under discussion in most countries. In this paper, we consider the security issues of centralized retail CBDC. Our focus is on the design and analysis of the underlying cryptographic protocol. The main security requirements for the protocol are transaction anonymity and protection against tax evasion. The protocol provides security guarantees in the strongest model of execution environment, the general concurrent environment. We apply the Universal Composition (UC) methodology of Canetti [3], [4]. At the time of this writing, we are not aware of any published CBDC protocol that aims to provide compositional security guarantees.
A Randomized Bit Generator using Algebraic Number Theory
There are many random key generators. In this paper, we give a new construction of a randomized bit generator based on algebraic number theory which is easy to compute, and we keep the security of the generator in mind. We discuss its applications as a secret-key generator in encryption schemes and hash functions. We also try to make it quantum-secure by randomizing it and extending its parameters, so that it can be viewed as a quantum random key generator.
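As a classical point of comparison (not the paper's construction), the best-known number-theoretic randomized bit generator is Blum-Blum-Shub, whose security rests on the hardness of factoring $N = pq$; a minimal sketch with toy parameters:

```python
def bbs_bits(p, q, seed, nbits):
    """Blum-Blum-Shub generator: square repeatedly mod N = p*q and
    emit the least significant bit. p and q must be primes congruent
    to 3 mod 4, and seed must be coprime to N."""
    N = p * q
    x = seed * seed % N          # start from a quadratic residue
    out = []
    for _ in range(nbits):
        x = x * x % N
        out.append(x & 1)
    return out

# Toy parameters (far too small for security): 10007 and 10039 are
# primes congruent to 3 mod 4.
print(bbs_bits(p=10007, q=10039, seed=2024, nbits=16))
```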
HyperNova: Recursive arguments for customizable constraint systems
We introduce HyperNova, a new recursive argument for proving incremental computations whose steps are expressed with CCS (Setty et al. ePrint 2023/552), a customizable constraint system that simultaneously generalizes Plonkish, R1CS, and AIR without overheads. HyperNova makes four contributions, each resolving a major problem in the area of recursive arguments.
First, it provides a folding scheme for CCS where the prover’s cryptographic cost is a single multi-scalar multiplication (MSM) of size equal to the number of variables in the constraint system, which is optimal when using an MSM-based commitment scheme. The folding scheme can fold multiple instances at once, making it easier to build generalizations of IVC such as PCD. Second, when proving program executions on stateful machines (e.g., EVM, RISC-V), the cost of proving a step of a program is proportional only to the size of the circuit representing the instruction invoked by the program step ("a la carte" cost profile). Third, we show how to achieve zero-knowledge for "free" and without the need to employ zero-knowledge SNARKs: we use a folding scheme to "randomize" IVC proofs. This highlights a new application of folding schemes. Fourth, we show how to efficiently instantiate HyperNova over a cycle of elliptic curves. For this, we provide a general technique, which we refer to as CycleFold, that applies to all modern folding-scheme-based recursive arguments.
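The core folding move is easiest to see in the purely linear setting, sketched below: two claims $A z_1 = b_1$ and $A z_2 = b_2$ fold, via a random challenge $r$, into the single claim $A(z_1 + r z_2) = b_1 + r b_2$. This toy only conveys the intuition; folding for CCS, as in HyperNova, must also handle the high-degree (nonlinear) terms, which is where the sum-check-based machinery comes in.

```python
import random

Q = 2**61 - 1  # toy prime field

def matvec(A, z):
    return [sum(a * x for a, x in zip(row, z)) % Q for row in A]

def fold(z1, b1, z2, b2, r):
    """Fold two linear claims A@z=b into one via the challenge r."""
    z = [(x + r * y) % Q for x, y in zip(z1, z2)]
    b = [(u + r * v) % Q for u, v in zip(b1, b2)]
    return z, b

A = [[random.randrange(Q) for _ in range(4)] for _ in range(3)]
z1 = [random.randrange(Q) for _ in range(4)]
z2 = [random.randrange(Q) for _ in range(4)]
b1, b2 = matvec(A, z1), matvec(A, z2)

r = random.randrange(Q)          # verifier's random challenge
z, b = fold(z1, b1, z2, b2, r)
assert matvec(A, z) == b         # one claim now attests to both
```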
Scalable Private Signaling
Private messaging systems that use a bulletin board, like privacy-preserving blockchains, have been a popular topic during the last couple of years. In these systems, a private message is typically posted on the board for a recipient, and the privacy requirement is that no one can determine the sender or the recipient of the message. Until recently, the efficiency of the recipients was not considered, and each recipient had to perform a naive scan of the board to retrieve their messages.
More recently, works like Fuzzy Message Detection (FMD), Private Signaling (PS), and Oblivious Message Retrieval (OMR) have studied the problem of protecting recipient privacy by outsourcing the message retrieval process to an untrusted server. However, FMD only provides limited privacy guarantees, and PS and OMR greatly lack scalability.
In this work, we present a new construction for private signaling which is both asymptotically superior and concretely orders of magnitude faster than all prior works while providing full privacy. Our constructions make use of a trusted execution environment (TEE) and an Oblivious RAM to improve the computation complexity of the server. We also improve the privacy guarantees by keeping the recipient hidden even during the retrieval of signals from the server. Our proof-of-concept open-source implementation shows that for a server serving a million recipients and ten million messages, it only takes $< 60$ milliseconds to process a sent message, and $< 6$ seconds to process a retrieval (of 100 signals) request from a recipient.