All papers in 2023 (Page 9 of 1971 results)
Communication and Round Efficient Parallel Broadcast Protocols
This work focuses on the parallel broadcast primitive, where each of the $n$ parties wishes to broadcast its $\ell$-bit input in parallel. We consider the authenticated model with a PKI and digital signatures, secure against $t < n/2$ Byzantine faults under a synchronous network.
We show a generic reduction from parallel broadcast to a new primitive called graded parallel broadcast and a single instance of validated Byzantine agreement. Using our reduction, we obtain parallel broadcast protocols with $O(n^2 \ell + \kappa n^3)$ communication ($\kappa$ denotes a security parameter) and expected constant rounds. Thus, for inputs of size $\ell = \Omega(n)$ bits, our protocols are asymptotically free.
Our graded parallel broadcast uses a novel multi-grade gradecast protocol with asymptotically optimal communication complexity of $O(n \ell + \kappa n^2)$ for inputs of size $\ell$ bits. We also present a multi-valued validated Byzantine agreement protocol with asymptotically optimal expected communication complexity of $O(n \ell + \kappa n^2)$ for inputs of size $\ell$ bits and expected constant rounds. Both of these primitives are of independent interest.
Arena: Multi-leader Synchronous Byzantine Fault Tolerance
Byzantine fault-tolerant state machine replication (BFT-SMR) replicates a state machine across a set of replicas, and processes requests as a single machine even in the presence of Byzantine faults. Recently, synchronous BFT-SMRs have received tremendous attention due to their simple design and high fault-tolerance threshold.
In this paper, we propose Arena, the first multi-leader synchronous BFT-SMR. Thanks to the synchrony assumption, Arena gains the performance benefits of a multi-leader design while remaining much simpler than partially synchronous multi-leader designs. Furthermore, it is more robust: ``no progress'' by a leader does not trigger a view-change. Our experimental results show that Arena achieves a peak throughput up to 7.7$\times$ higher than the state-of-the-art.
Two-Round Adaptively Secure MPC from Isogenies, LPN, or CDH
We present a new framework for building round-optimal (two-round) $adaptively$ secure MPC. We show that a relatively weak notion of OT that we call $indistinguishability \ OT \ with \ receiver \ oblivious \ sampleability$ (r-iOT) is enough to build two-round, adaptively secure MPC against $malicious$ adversaries in the CRS model. We then show how to construct r-iOT from CDH, LPN, or isogeny-based assumptions that can be viewed as group actions (such as CSIDH and CSI-FiSh). This yields the first constructions of two-round adaptively secure MPC against malicious adversaries from CDH, LPN, or isogeny-based assumptions. We further extend our non-isogeny results to the plain model, achieving (to our knowledge) the first construction of two-round adaptively secure MPC against semi-honest adversaries in the plain model from LPN.
Our results allow us to build two-round adaptively secure MPC against malicious adversaries from essentially all of the well-studied assumptions in cryptography. In addition, our constructions from isogenies or LPN provide the first post-quantum alternatives to LWE-based constructions for round-optimal adaptively secure MPC. Along the way, we show that r-iOT also implies non-committing encryption (NCE), thereby yielding the first constructions of NCE from isogenies or LPN.
Efficient Oblivious Evaluation Protocol and Conditional Disclosure of Secrets for DFA
In oblivious finite automata evaluation, one party holds a private automaton, and the other party holds a private string of characters. The objective is to let the parties learn whether the string is accepted by the automaton, while keeping their inputs secret. Applications include DNA searching, pattern matching, and more. Most previous works are based on asymmetric cryptographic primitives, such as homomorphic encryption and oblivious transfer. These primitives are significantly slower than symmetric ones. Moreover, some protocols also require several rounds of interaction. As our main contribution, we propose an oblivious finite automata evaluation protocol via conditional disclosure of secrets (CDS), using one (potentially malicious) outsourcing server. This results in a constant-round protocol in which no heavy asymmetric-key primitives are needed. Our protocol is based on a building block called ``an oblivious CDS scheme for deterministic finite automata'', which we also propose in this paper. In addition, we propose a standard CDS scheme for deterministic finite automata, which is of independent interest.
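As background for the functionality being computed obliviously, a deterministic finite automaton can be evaluated in the clear as follows. The automaton below (a toy pattern matcher over a DNA alphabet) is an illustrative example, not taken from the paper:

```python
# Plain (non-oblivious) DFA evaluation, shown only to illustrate the
# functionality that the protocol computes without revealing the inputs.

def dfa_accepts(transitions, start, accepting, string):
    """Run a DFA over `string`; return True iff it ends in an accepting state."""
    state = start
    for ch in string:
        state = transitions[(state, ch)]
    return state in accepting

# Example: DFA over {A, C, G, T} accepting strings that contain "AC".
trans = {}
for ch in "ACGT":
    trans[(0, ch)] = 1 if ch == "A" else 0
    trans[(1, ch)] = 2 if ch == "C" else (1 if ch == "A" else 0)
    trans[(2, ch)] = 2  # absorbing accept state

print(dfa_accepts(trans, 0, {2}, "GGACT"))  # True
print(dfa_accepts(trans, 0, {2}, "GGAT"))   # False
```

In the protocol, the transition table stays private to one party and the string to the other; only the final accept/reject bit is revealed.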
Evolving Homomorphic Secret Sharing for Hierarchical Access Structures
Secret sharing is a cryptographic primitive that divides a secret into several shares, and allows only some combinations of shares to recover the secret. As it can also be used in secure multi-party computation protocols with outsourcing servers, several variations of secret sharing have been devised for this purpose. Most of the existing protocols require the number of computing servers to be determined in advance. However, in some situations we may want the system to be "evolving": we may want to increase the number of servers and strengthen the security guarantee later, in order to improve the availability and security of the system. Although evolving secret sharing schemes exist, they do not support computing on shares. On the other hand, "homomorphic" secret sharing allows computing on shares with small communication, but it is not evolving. As the contribution of our work, we give a definition of "evolving homomorphic" secret sharing that supports both properties. We propose two schemes: one with a hierarchical access structure supporting multiplication, and the other with a partially hierarchical access structure supporting computation of low-degree polynomials. Compared to the work of Choudhuri et al. (IACR ePrint 2020), which has similar functionality, our schemes have smaller communication costs.
Constructive $t$-secure Homomorphic Secret Sharing for Low Degree Polynomials
This paper proposes $t$-secure homomorphic secret sharing schemes for low degree polynomials. Homomorphic secret sharing is a cryptographic technique to outsource the computation to a set of servers while restricting some subsets of servers from learning the secret inputs. Prior to our work, at Asiacrypt 2018, Lai, Malavolta, and Schröder proposed a $1$-secure scheme for computing polynomial functions. They also alluded to $t$-secure schemes without giving explicit constructions; constructing such schemes would require solving set cover problems, which are generally NP-hard. Moreover, the resulting implicit schemes would require a large number of servers. In this paper, we provide a constructive solution for threshold-$t$ structures by combining homomorphic encryption with the classic secret sharing scheme for general access structures by Ito, Saito, and Nishizeki. Our scheme also quantitatively improves the number of required servers from $O(t^2)$ to $O(t)$, compared to the implicit scheme of Lai et al. We also suggest several ideas for future research directions.
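The Ito–Saito–Nishizeki building block can be sketched for threshold structures. This is a generic illustration of replicated additive sharing, not the paper's full scheme (which additionally combines it with homomorphic encryption):

```python
import itertools
import random

def isn_threshold_share(secret, n, t, modulus):
    """Ito-Saito-Nishizeki replicated sharing for the (t+1)-out-of-n threshold
    structure: one additive share per size-t (maximal unqualified) subset,
    handed to every party OUTSIDE that subset."""
    subsets = list(itertools.combinations(range(n), t))
    shares = [random.randrange(modulus) for _ in subsets[:-1]]
    shares.append((secret - sum(shares)) % modulus)  # shares sum to the secret
    holdings = {p: {} for p in range(n)}
    for subset, sh in zip(subsets, shares):
        for p in range(n):
            if p not in subset:
                holdings[p][subset] = sh
    return holdings

def reconstruct(holdings, parties, modulus):
    """Any t+1 parties jointly hold every additive share, so their union
    of shares sums to the secret."""
    seen = {}
    for p in parties:
        seen.update(holdings[p])
    return sum(seen.values()) % modulus

q = 2**31 - 1
h = isn_threshold_share(1234, n=5, t=2, modulus=q)
print(reconstruct(h, [0, 1, 2], q))  # 1234
```

Any set of at most $t$ parties misses the share indexed by a subset containing them all, so it learns nothing about the secret; additive shares are also what makes the encryption-based homomorphic evaluation possible.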
Malicious Secure, Structure-Aware Private Set Intersection
Structure-Aware private set intersection (sa-PSI) is a variant of PSI where Alice's input set $A$ has some publicly known structure, Bob's input $B$ is an unstructured set of points, and Alice learns the intersection $A \cap B$. sa-PSI was recently introduced by Garimella et al. (Crypto 2022), who described a semi-honest protocol with communication that scales with the description size of Alice's set, instead of its cardinality. In this paper, we present the first sa-PSI protocol secure against malicious adversaries.
sa-PSI protocols are built from function secret sharing (FSS) schemes, and the main challenge in our work is ensuring that multiple FSS sharings encode the same structured set. We do so using a cut-and-choose approach. In order to make FSS compatible with cut-and-choose, we introduce a new variant of function secret sharing, called derandomizable FSS (dFSS).
We show how to construct dFSS for union of geometric balls, leading to a malicious-secure sa-PSI protocol where Alice's input is a union of balls. We also improve prior FSS constructions, giving asymptotic improvements to semi-honest sa-PSI.
On the Security of Universal Re-Encryption
A universal re-encryption (URE) scheme is a public-key encryption scheme enhanced with an algorithm that, on input a ciphertext, outputs another ciphertext which is still a valid encryption of the underlying plaintext. Crucially, such a re-encryption algorithm does not need any key as input, but the ciphertext is guaranteed to be valid under the original key-pair. Therefore, URE schemes lend themselves naturally as building blocks of mixnets: A sender transmits the encryption of a message under the receiver's public key to a mixer, which re-encrypts it, and the receiver later retrieves the re-encrypted ciphertext, which still decrypts to the original message.
Young and Yung (SCN 2018) argued that the original definition of URE by Golle et al. (CT-RSA 2004) was flawed, because it did not consider anonymity of encryption. This motivated them to claim that they finally put URE on solid grounds by presenting four formal security notions which they argued a URE should satisfy.
As our first contribution, we introduce a framework that allows one to compactly define and relate security notions as substitutions of systems. Using this framework, as our second contribution, we show that Young and Yung's four notions are not minimal, and therefore do not properly capture the essence of a secure URE scheme. We provide three definitions that imply (and are implied by) theirs. Using the constructive cryptography framework, our third contribution is to capture the essence of URE from an application point of view by providing a composable security notion that expresses the ideal use of URE in a mixnet. Finally, we show that the composable notion is implied by our three minimal notions.
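For concreteness, the original ElGamal-based URE construction of Golle et al. (CT-RSA 2004) can be sketched as follows: a ciphertext carries an encryption of the message together with an encryption of 1, and the latter lets anyone re-randomize without any key. The tiny subgroup below is for illustration only and is of course insecure:

```python
import random

# Toy Golle-et-al. universal re-encryption over the order-11 subgroup of
# Z_23^* generated by g = 4. Insecure parameters, illustration only.
p, q, g = 23, 11, 4

def keygen():
    x = random.randrange(1, q)
    return x, pow(g, x, p)               # secret key x, public key y = g^x

def encrypt(m, y):
    k0, k1 = random.randrange(1, q), random.randrange(1, q)
    # (ElGamal encryption of m, ElGamal encryption of 1)
    return ((m * pow(y, k0, p)) % p, pow(g, k0, p),
            pow(y, k1, p), pow(g, k1, p))

def re_encrypt(ct):
    """Re-randomize WITHOUT any key, using the embedded encryption of 1."""
    a0, b0, a1, b1 = ct
    r0, r1 = random.randrange(1, q), random.randrange(1, q)
    return ((a0 * pow(a1, r0, p)) % p, (b0 * pow(b1, r0, p)) % p,
            pow(a1, r1, p), pow(b1, r1, p))

def decrypt(ct, x):
    a0, b0, a1, b1 = ct
    assert a1 == pow(b1, x, p)           # validity: second pair encrypts 1
    return (a0 * pow(pow(b0, x, p), -1, p)) % p

x, y = keygen()
m = pow(g, 5, p)                         # a message in the subgroup
ct = re_encrypt(encrypt(m, y))
print(decrypt(ct, x) == m)  # True
```

The mixnet use case follows directly: the mixer runs `re_encrypt` on ciphertexts it cannot read, and only the intended receiver's key decrypts the result.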
Swiper: a new paradigm for efficient weighted distributed protocols
The majority of fault-tolerant distributed algorithms are designed assuming a nominal corruption model, in which at most a fraction $f_n$ of parties can be corrupted by the adversary. However, due to the infamous Sybil attack, nominal models are not sufficient to express the trust assumptions in open (i.e., permissionless) settings. Instead, permissionless systems typically operate in a weighted model, where each participant is associated with a weight and the adversary can corrupt a set of parties holding at most a fraction $f_w$ of the total weight.
In this paper, we suggest a simple way to transform a large class of protocols designed for the nominal model into the weighted model. To this end, we formalize and solve three novel optimization problems, which we collectively call the weight reduction problems, that allow us to map large real weights into small integer weights while preserving the properties necessary for the correctness of the protocols. In all cases, we manage to keep the sum of the integer weights to be at most linear in the number of parties, resulting in extremely efficient protocols for the weighted model. Moreover, we demonstrate that, on weight distributions that emerge in practice, the sum of the integer weights tends to be far from the theoretical worst case and, sometimes, even smaller than the number of participants.
While, for some protocols, our transformation requires an arbitrarily small reduction in resilience (i.e., $f_w = f_n - \epsilon$), surprisingly, for many important problems, we manage to obtain weighted solutions with the same resilience ($f_w = f_n$) as nominal ones. Notable examples include erasure-coded distributed storage and broadcast protocols, verifiable secret sharing, and asynchronous consensus. Although there are ad-hoc weighted solutions to some of these problems, the protocols yielded by our transformations enjoy all the benefits of nominal solutions, including simplicity, efficiency, and a wider range of possible cryptographic assumptions.
Evaluating KpqC Algorithm Submissions: Balanced and Clean Benchmarking Approach
In 2022, the Korean domestic post-quantum cryptography contest KpqC was held, and a standard for post-quantum cryptography is set to be selected in 2024. In Round 1 of this competition, 16 algorithms have advanced and are competing. Each KpqC submission reports its own performance figures, but direct comparison is difficult because the algorithms were measured in different environments. In this paper, we present benchmark results for all KpqC algorithms in a single environment. To benchmark the algorithms, we removed the external library dependencies of each algorithm. Removing dependencies eliminates performance deviations due to external libraries, and yields source code that conveniently runs the KpqC algorithms for users who have difficulty setting up the environment.
Reduction of Search-LWE Problem to Integer Programming Problem
Let $(A,t)$ be an instance of the search-LWE problem, where $A$ is a matrix and $t$ is a vector. This paper constructs an integer programming problem from $A$ and $t$, and shows that a solution of the instance $(A,t)$ can be derived (perhaps with high probability) from its optimal solution, or from a tentative solution of small norm output by an integer programming solver. In other words, the search-LWE problem can be reduced to an integer programming problem. The reduction requires only basic linear algebra and finite-field arithmetic. The computational complexity of the resulting integer programming problem is still unknown.
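A minimal picture of the object being reduced: a search-LWE instance $(A,t)$ with $t = As + e \bmod q$, and the small-residual check that certifies a candidate secret. The parameters are toy values for illustration; this is not the paper's reduction itself:

```python
import random

# A toy search-LWE instance (A, t) with t = A*s + e (mod q), the object the
# paper reduces to integer programming. Illustrative parameters only.
q, n, m = 97, 4, 8           # modulus, secret length, number of samples
random.seed(1)

s = [random.randrange(q) for _ in range(n)]                # secret
A = [[random.randrange(q) for _ in range(n)] for _ in range(m)]
e = [random.choice([-1, 0, 1]) for _ in range(m)]          # small error
t = [(sum(A[i][j] * s[j] for j in range(n)) + e[i]) % q for i in range(m)]

def residual(A, t, cand):
    """Centered absolute residuals of t - A*cand (mod q); a candidate secret
    is correct when every residual is as small as the error distribution."""
    r = [(t[i] - sum(A[i][j] * cand[j] for j in range(n))) % q for i in range(m)]
    return [min(x, q - x) for x in r]

print(max(residual(A, t, s)) <= 1)  # True: the true secret leaves tiny residuals
```

An integer programming formulation searches for exactly such a candidate: integer variables for the secret (and error) constrained so that the residual vector has small norm.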
Benchmarking the Setup of Updatable zk-SNARKs
Subversion-resistant zk-SNARKs allow the provers to verify the Structured Reference String (SRS), via an SRS Verification (SV) algorithm and bypass the need for a Trusted Third Party (TTP). Pairing-based zk-SNARKs with \(updatable\) and \(universal\) SRS are an extension of subversion-resistant ones which additionally allow the verifiers to update the SRS, via an SRS Updating (SU) algorithm, and similarly bypass the need for a TTP. In this paper, we examine the setup of these zk-SNARKs by benchmarking the efficiency of the SV and SU algorithms within the \(\textsf{Arkworks}\) library. The benchmarking covers a range of updatable zk-SNARKs, including Sonic, Plonk, Marlin, Lunar, and Basilisk. Our analysis reveals that relying solely on the standard Algebraic Group Model (AGM) may not be sufficient in practice, and we may need a model with weaker assumptions. Specifically, we find that while Marlin is secure in the AGM, additional elements need to be added to its SRS to formally prove certain security properties in the updatable CRS model. We demonstrate that the SV algorithms become inefficient for mid-sized circuits with over 20,000 multiplication gates and 100 updates. To address this, we introduce Batched SV algorithms (BSV) that leverage standard batching techniques and offer significantly improved performance. As a tool, we propose an efficient verification approach that allows the parties to identify a malicious SRS updater with logarithmic verification in the number of updates. In the case of Basilisk, for a circuit with \(2^{20}\) multiplication gates, a \(1000\)-time updated SRS can be verified in less than 30 sec, a malicious updater can be identified in less than 4 min (improvable by pre-computation), and each update takes less than 6 min.
Not optimal but efficient: a distinguisher based on the Kruskal-Wallis test
Research about the theoretical properties of side channel distinguishers revealed the rules by which to maximise the probability of first order success (``optimal distinguishers'') under different assumptions about the leakage model and noise distribution. Simultaneously, research into bounding first order success (as a function of the number of observations) has revealed universal bounds, which suggest that (even optimal) distinguishers are not able to reach theoretically possible success rates. Is this gap a proof artefact (aka the bounds are not tight) or does a distinguisher exist that is more trace efficient than the ``optimal'' one? We show that in the context of an unknown (and not linear) leakage model there is indeed a distinguisher that outperforms the ``optimal'' distinguisher in terms of trace efficiency: it is based on the Kruskal-Wallis test.
Semi-Honest 2-Party Faithful Truncation from Two-Bit Extraction
As a fundamental operation in fixed-point arithmetic, truncation brings the product of two fixed-point integers back to the fixed-point representation. In large-scale applications like privacy-preserving machine learning, it is essential to have faithful truncation that accurately eliminates both big and small errors. In this work, we improve and extend the oblivious-transfer-based faithful truncation protocols initiated by CrypTFlow2 (Rathee et al., CCS 2020). Specifically, we propose a new notion of two-bit extraction that is tailored for faithful truncation and demonstrate how it can be used to construct an efficient faithful truncation protocol. Benefiting from our efficient construction for two-bit extraction, our faithful truncation protocol reduces the communication complexity of CrypTFlow2 from growing linearly with the fixed-point precision to logarithmic.
This efficiency improvement is due to the fact that we reuse the intermediate results of eliminating the big error to further eliminate the small error. Our reuse strategy is effective, as it shows that while eliminating the big error, it is possible to further eliminate the small error at a minimal cost, e.g., as low as communicating only an additional 160 bits in one round.
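The two error types can be seen in a toy two-party example: naively truncating additive shares of a fixed-point value leaves a one-bit "small error" and, when the shares wrap around the ring, a "big error" of magnitude 2^(L-f). These are exactly the errors a faithful truncation protocol must eliminate. The parameters and sharing below are illustrative, not the paper's protocol:

```python
import random

# Toy 2-party demonstration of why naive truncation of additive shares is
# unfaithful: it leaves a one-bit "small error" and, on ring wrap-around,
# a "big error" of magnitude 2^(L-f). Illustrative parameters only.
random.seed(0)
L, f = 16, 4                    # ring Z_{2^L}, f fixed-point fraction bits
M = 1 << L

def share(x):                   # 2-out-of-2 additive sharing in Z_{2^L}
    r = random.randrange(M)
    return r, (x - r) % M

def naive_trunc(x0, x1):        # each party shifts its own share locally
    return ((x0 >> f) + (x1 >> f)) % M

x = 37 << f                     # the value 37.0 in fixed point
errors = set()
for _ in range(1000):
    x0, x1 = share(x)
    errors.add((naive_trunc(x0, x1) - 37) % M)

# Only four outcomes are possible: exact, off by one (small error), or
# off by about 2^(L-f) (big error caused by the shares wrapping mod 2^L).
print(errors <= {0, M - 1, 1 << (L - f), (1 << (L - f)) - 1})  # True
```

Faithful truncation removes both outcomes; the paper's reuse strategy gets the small-error correction almost for free while fixing the big error.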
Improved Polynomial Secret-Sharing Schemes
Despite active research on secret-sharing schemes for arbitrary access structures for more than 35 years, we do not understand their share size: the best known upper bound for an arbitrary n-party access structure is $2^{O(n)}$ while the best known lower bound is $\Omega(n/\log(n))$. As far as we know, the share size can be anywhere between these bounds. To better understand this question, one can study specific families of secret-sharing schemes. For example, linear secret-sharing schemes, in which the sharing and reconstruction are computed by linear mappings, have been studied in many papers, e.g., it is known that they require shares of size at least $2^{0.5n}$. Secret-sharing schemes in which the sharing and/or reconstruction are computed by low-degree polynomials have been recently studied by Paskin-Cherniavsky and Radune [ITC 2020] and by Beimel, Othman, and Peter [CRYPTO 2021]. It was shown that secret-sharing schemes with sharing and reconstruction computed by polynomials of degree 2 are more efficient than linear schemes (i.e., schemes in which the sharing and reconstruction are computed by polynomials of degree one).
Prior to our work, it was not known if using polynomials of higher degree can reduce the share size. We show that this is indeed the case, i.e., we construct secret-sharing schemes with reconstruction by degree-$d$ polynomials, where as the reconstruction degree $d$ increases, the share size for arbitrary access structures decreases. As a step in our construction, we construct conditional disclosure of secrets (CDS) protocols. For example, we construct 2-server CDS protocols for functions $f : [N] \times [N] \to \{0, 1\}$ with reconstruction computed by degree-$d$ polynomials with message size $N^{O(\log \log d/ \log d)}$. Combining our results with a lower bound of Beimel et al. [CRYPTO 2021], we show that increasing the degree of the reconstruction function in CDS protocols provably reduces the message size. To construct our schemes, we define sparse matching vectors, show constructions of such vectors, and design CDS protocols and secret-sharing schemes with degree-$d$ reconstruction from sparse matching vectors.
Quantum Cryptanalysis of OTR and OPP: Attacks on Confidentiality, and Key-Recovery
In this paper, we analyze the security of authenticated encryption modes OTR (Minematsu, Eurocrypt 2014) and OPP (Granger, Jovanovic, Mennink, and Neves, Eurocrypt 2016) in a setting where an adversary is allowed to make encryption queries in quantum superposition. Starting with OTR -- or more technically, AES-OTR, a third-round CAESAR candidate -- we extend prior quantum attacks on the mode's unforgeability in the literature to provide the first attacks breaking confidentiality, i.e., IND-qCPA security, of AES-OTR in different settings depending on how the associated data is processed. On a technical level, one of our IND-qCPA attacks involves querying the quantum encryption oracle on a superposition of data with unequal length; to the best of our knowledge, such an attack has never been modelled before in the (post-)quantum cryptographic literature, and we hence believe our technique is of independent interest. Coming to OPP, we present the first key-recovery attack against the scheme which uses only a single quantum encryption query.
Instant Zero Knowledge Proof of Reserve
We present a non-interactive, publicly verifiable scheme that allows one to assert the assets of a financial organization instantly and incrementally in zero knowledge with high throughput. It is enabled by the recent breakthrough in lookup arguments, where the prover cost can be independent of the lookup table size after a pre-processing step. We extend the cq protocol and develop an aggregated non-membership proof for zero-knowledge sets. Based on it, we design a non-intrusive protocol that works for pseudo-anonymous cryptocurrencies such as BTC. It has O(n log(n)) prover complexity and O(1) proof size, where n is the platform throughput (instead of the anonymity set size). We implement and evaluate the protocol: running on a 56-core server, it supports 1024 transactions per second.
Secure Function Extensions to Additively Homomorphic Cryptosystems
The number-theoretic literature has long studied the question of distributions of sequences of quadratic residue symbols modulo a prime number. In this paper, we present an efficient algorithm for generating primes containing chosen sequences of quadratic residue symbols and use it as the basis of a method extending the functionality of additively homomorphic cryptosystems.
We present an algorithm for encoding a chosen Boolean function into the public key and an efficient two-party protocol for evaluating this function on an encrypted sum. We demonstrate concrete parameters for secure function evaluation on encrypted sums up to eight bits at standard key sizes in the integer factorization setting. Although the approach is limited to applications involving small sums, it is a practical way to extend the functionality of existing secure protocols built on partially homomorphic encryption schemes.
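The number-theoretic building block, a sequence of quadratic residue (Legendre) symbols modulo a prime, is computable via Euler's criterion. The prime below is an arbitrary small example:

```python
# Quadratic residue (Legendre) symbols via Euler's criterion:
# a^((p-1)/2) mod p is 1 for a nonzero residue, p-1 for a non-residue.
def legendre(a, p):
    """Return 1 if a is a nonzero QR mod p, -1 if a non-residue, 0 if p | a."""
    s = pow(a, (p - 1) // 2, p)
    return -1 if s == p - 1 else s

p = 103                                   # an arbitrary small prime
seq = [legendre(a, p) for a in range(1, 11)]
print(seq)

# Sanity checks: squares are residues, and the symbol is multiplicative.
assert all(legendre(a * a % p, p) == 1 for a in range(1, p))
assert legendre(2 * 3 % p, p) == legendre(2, p) * legendre(3, p)
```

The paper's prime-generation algorithm goes the other way: given a chosen symbol sequence, it finds a prime whose symbol sequence matches, which is what lets a Boolean function be encoded into the public key.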
Quantum Secure Threshold Private Set Intersection Protocol for IoT-Enabled Privacy Preserving Ride-Sharing Application
The Internet of Things (IoT)-enabled ride sharing is one of the most transformative and innovative technologies in the transportation industry. It has myriad advantages, but with increasing demand there are security concerns as well. Traditionally, cryptographic methods are used to address the security and privacy concerns in a ride sharing system. Unfortunately, due to the emergence of quantum algorithms, these cryptographic protocols may not remain secure, so there is a need for privacy-preserving ride sharing protocols that can resist attacks by quantum computers. In the domain of privacy-preserving ride sharing, threshold private set intersection (TPSI) can be adopted as a viable solution, because it enables the users to determine the intersection of private data sets if the set intersection cardinality is greater than or equal to a threshold value. Although TPSI can help alleviate privacy concerns, none of the existing TPSI protocols is quantum secure, and they also face the issue of long-term security. In contrast to classical and post-quantum cryptography, quantum cryptography (QC) provides a more robust solution: QC is based on the postulates of quantum physics (e.g., the Heisenberg uncertainty principle and the no-cloning theorem) and can handle the prevailing issues of quantum threats and long-term security. Herein, we propose the first QC-based TPSI protocol, which has a direct application in privacy-preserving ride sharing. Due to the use of QC, our IoT-enabled ride sharing scheme remains quantum secure and achieves long-term security as well.
A Multivariate Based Provably Secure Certificateless Signature Scheme with Applications to the Internet of Medical Things
Over the last few years, the Internet of Medical Things (IoMT) has completely transformed the healthcare industry. It is having a notable, unprecedented impact on human health, and has totally changed the way we look at the healthcare industry. Healthcare sectors around the globe are leapfrogging to adopt the technology, transforming drastically in a very short span of time. However, as more and more medical devices are connected to IoMT, security issues like ensuring the authenticity and integrity of the transmitted data are also on the rise. In this context, there is a need for an efficient cryptographic primitive that can address these issues in a viable manner. A signature scheme seems to be the natural choice to mitigate the security concerns. But traditional signature schemes, both PKI-based and identity-based, have their own disadvantages which make them unsuitable for IoMT networks. Thus, to address the security issues and problems like certificate management and key escrow, herein we put forward the {\em first} multivariate based certificateless signature scheme, namely {\sf Mul-CLS}, which is built on top of the intractability of the multivariate-quadratic (MQ) problem. The fact that multivariate public key cryptosystems (MPKC) provide fast, post-quantum safe, and efficient primitives makes them a front-runner candidate among the other post-quantum cryptography candidates. Our scheme {\sf Mul-CLS} provides existential unforgeability against chosen message and chosen identity Super Type I and Super Type II adversaries if solving the MQ problem is NP-hard. In addition, our proposed {\sf Mul-CLS} presents itself as a robust and cost-friendly cryptographic building block for building IoMT networks.
Haze and Daze: Compliant Privacy Mixers
Blockchains enable mutually distrustful parties to perform financial operations in a trustless, decentralized, publicly-verifiable environment. However, blockchains typically offer little privacy, which has motivated the construction of privacy mixers, a solution for making funds untraceable. Privacy mixers concern regulators due to their increasing use by bad actors to illegally conceal the origin of funds. Consequently, Tornado Cash, the largest privacy mixer to date, has been sanctioned by large portions of the Ethereum network.
In this work, we propose Haze and Daze, two privacy mixers that mitigate the undesired abuse of privacy mixers for illicit activities. Haze guarantees users’ privacy together with compliance, i.e., funds can be withdrawn as long as they were deposited from a non-banned address, without revealing any information on the matching deposit. This means that once a user is flagged as non-compliant, their funds can no longer exit the mixer. However, this leads to a race condition for bad actors to withdraw funds before becoming flagged as unlawful in the banned-addresses list. Thus, we introduce Daze, a second mixer protocol that, in addition to compliance, enables retroactive de-anonymization of transactions of non-compliant users, making the mixer fruitless for them. To maintain privacy of compliant users, the de-anonymization procedure is performed by a committee, with privacy guaranteed as long as at least one of the committee members is honest. Moreover, Haze and Daze have an optional feature for non-compliant funds to be released from the mixer to some predetermined entity.
We empirically evaluate our solution in a proof-of-concept system, demonstrating gas consumption for each deposit and withdrawal that is comparable to Tornado Cash for compliant users, both for Haze and Daze. To the best of our knowledge, our solution is the first to guarantee compliance and privacy on the blockchain (on-chain) that is implemented via a smart contract.
High-speed Implementation of AIM symmetric primitives within AIMer digital signature
Recently, as quantum computing technology develops, the importance of quantum-resistant cryptography is increasing. AIMer is a quantum-resistant signature algorithm that was selected as a first-round candidate in the digital signature track of the KpqC Contest, and it uses the symmetric primitive AIM. In this paper, we propose high-speed implementation techniques for the symmetric primitive AIM and evaluate their performance. We propose two techniques, an optimization of the Mer operation and a simplification of the linear layer operation, and our performance measurements show an improvement of up to 97.9% compared to the existing reference code. This paper is the first study to optimize the implementation of AIM.
Optimized Quantum Circuit for Quantum Security Strength Analysis of Argon2
This paper explores the optimization of quantum circuits for Argon2, a memory-hard function used for password hashing and other applications. With the rise of quantum computers, the security of classical cryptographic systems is at risk. This risk emphasizes the need to accurately measure the quantum security strength of cryptographic schemes using optimized quantum circuits. The proposed method focuses on two perspectives: qubit reduction (qubit optimization) and depth reduction (depth optimization). The qubit-optimized quantum circuit is designed to find a point where an appropriate inverse is possible and to reuse qubits through the inverse, minimizing the number of qubits; the start and end points of the inverse are set by finding a point where qubits can be reused with minimal computation. The depth-optimized quantum circuit reduces the depth by using the minimum number of qubits necessary without performing an inverse operation. The trade-off between qubits and depth is confirmed by modifying the internal structure of the circuits and the quantum adders. Qubit optimization achieved up to a 12,229 qubit reduction, while depth optimization resulted in an approximately 196,741 (approximately 69.02%) depth reduction. In conclusion, this research demonstrates the importance of implementing and analyzing quantum circuits from various optimization perspectives. The results contribute to the post-quantum strength analysis of Argon2 and provide valuable insights for future research on quantum circuit design, considering the appropriate trade-offs of quantum resources in response to advancements in quantum computing technology.
Analysis of Parallel Implementation of Pilsung Block Cipher On Graphics Processing Unit
This paper focuses on the GPU implementation of the Pilsung block cipher used in the Red Star 3.0 operating system developed in North Korea. The Pilsung block cipher is designed based on AES. One notable feature of the Pilsung block cipher is that the table calculations required for encryption take longer than the encryption process itself. This paper emphasizes the parallel implementation of the Pilsung block cipher by leveraging the parallel processing capabilities of GPUs, and evaluates its performance. We propose optimization techniques, including the use of pinned memory to reduce data transfer time and work distribution between the CPU and GPU; pinned memory optimizes data transfer, while work distribution between the CPU and GPU must be considered for efficient parallel processing. Performance was measured on an Nvidia GTX 3060 laptop, comparing the results with and without pinned memory and work distribution optimization. As a result, optimizing memory transfer costs was found to have the greater impact on performance. When both techniques were applied together, approximately a 1.44 times performance improvement was observed.
Post Quantum Fuzzy Stealth Signatures and Applications
Private payments in blockchain-based cryptocurrencies have been a topic of research, both academic and industrial, ever since the advent of Bitcoin. Stealth address payments were proposed as a solution to improve payment privacy for users and are, in fact, deployed in several major cryptocurrencies today. The mechanism lets users receive payments so that none of these payments are linkable to each other or the recipient. Currently known stealth address mechanisms either (1) are insecure in certain reasonable adversarial models, (2) are inefficient in practice or (3) are incompatible with many existing currencies.
In this work, we formalize the underlying cryptographic abstraction of this mechanism, namely, stealth signatures, with formal game-based definitions. We show a surprising application of our notions to passwordless authentication as defined in the Fast IDentity Online (FIDO) standard. We then present SPIRIT, the first efficient post-quantum secure stealth signature construction, based on the NIST-standardized signature and key-encapsulation schemes Dilithium and Kyber. The basic form of SPIRIT is secure only in a weak security model, but we provide an efficiency-preserving and generic transform, which boosts the security of SPIRIT to guarantee the strongest security notion defined in this work. Compared to the state of the art, this is an approximately 800x improvement in signature size, while keeping signing and verification times around 0.2 ms.
We extend SPIRIT with a fuzzy tracking functionality where recipients can outsource the tracking of incoming transactions to a tracking server, satisfying an anonymity notion similar to that of fuzzy message detection (FMD) recently introduced in [CCS 2021]. We also extend SPIRIT with a new fuzzy tracking framework called scalable fuzzy tracking that we introduce in this work. This new framework can be considered as a dual of FMD, in that it reduces the tracking server's computational workload to sublinear in the number of users, as opposed to linear in FMD. Experimental results show that, for millions of users, the server only needs 3.4 ms to filter each incoming message which is a significant improvement upon the state-of-the-art.
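As background for the stealth address mechanism this line of work builds on, the classical discrete-log-based construction (not the paper's post-quantum SPIRIT scheme) can be sketched in a few lines. The group, hash, and parameter choices below are illustrative toys, not secure values:

```python
import hashlib

# Toy parameters for illustration only -- not secure values.
p = 2**127 - 1   # a Mersenne prime, used as the group modulus
g = 3            # toy generator

def H(x: int) -> int:
    """Hash-to-exponent stand-in."""
    return int.from_bytes(hashlib.sha256(str(x).encode()).digest(), "big") % (p - 1)

# Recipient's long-term key pair (b, B); only B is published.
b = 123456789
B = pow(g, b, p)

# Sender picks an ephemeral key r and derives a one-time address P.
r = 987654321
R = pow(g, r, p)                               # published alongside the payment
P = (pow(g, H(pow(B, r, p)), p) * B) % p       # one-time address, unlinkable to B

# Recipient scans with b and recovers the same one-time address from R.
P_check = (pow(g, H(pow(R, b, p)), p) * B) % p
assert P == P_check
```

Because each payment uses a fresh `r`, distinct payments to the same recipient yield unrelated one-time addresses, which is the unlinkability property the abstract refers to.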
CipherGPT: Secure Two-Party GPT Inference
ChatGPT is recognized as a significant revolution in the field of artificial intelligence, but it raises serious concerns regarding user privacy, as the data submitted by users may contain sensitive information. Existing solutions for secure inference face significant challenges in supporting GPT-like models due to the enormous number of model parameters and complex activation functions.
In this paper, we develop CipherGPT, the first framework for secure two-party GPT inference, building upon a series of innovative protocols. First, we propose a secure matrix multiplication customized for GPT inference, achieving up to a 6.2× speedup and a 4.1× bandwidth reduction over the state of the art (SOTA). We also propose a novel protocol for securely computing GELU, surpassing SOTA by 1.8× in runtime, 2.5× in communication, and 7.4× in precision. Furthermore, we present the first protocol for secure top-k sampling.
We provide a full-fledged implementation and comprehensive benchmark for CipherGPT. In particular, we measure the runtime and communication for each individual operation, along with their corresponding proportions. We believe this can serve as a reference for future research in this area.
Structured Encryption for Indirect Addressing
The Structured Encryption (StE) framework can be used to capture the encryption and querying of complex data structures on an honest-but-curious server. In this work, we introduce a new type of StE called indirectly addressed multimap encryption (IA-MME). We propose two IA-MME schemes: the "layered multimaps" approach, which extends and generalizes the existing "multimap chaining" approach, and a novel technique called the "single multimap" approach, which has comparable efficiency and strictly better security. We demonstrate that our formalisms simplify and modularize StE solutions for real-world use cases in searchable encryption and SQL databases, and provide simulations demonstrating that our IA-MME constructions lead to tangible efficiency and security gains on realistic data. As a part of our techniques, we identify and correct a technical error in prior constructions — providing greater insight into issues that can arise when composing StE schemes.
Instantiating the Hash-Then-Evaluate Paradigm: Strengthening PRFs, PCFs, and OPRFs
We instantiate the hash-then-evaluate paradigm for pseudorandom functions (PRFs), $\mathsf{PRF}(k, x) := \mathsf{wPRF}(k, \mathsf{RO}(x))$, which builds a PRF $\mathsf{PRF}$ from a weak PRF $\mathsf{wPRF}$ via a public preprocessing random oracle $\mathsf{RO}$. In applications to secure multiparty computation (MPC), only the low-complexity wPRF performs secret-dependent operations. Our construction replaces $\mathsf{RO}$ by $f(k_H , \mathsf{elf}(x))$, where $f$ is a non-adaptive PRF and the key $k_H$ is public and thus known to the distinguishing adversary.
We show that, perhaps surprisingly, several existing weak PRF candidates are plausibly also secure when their inputs are generated by $f(k_H , \mathsf{elf}(x))$. First, analogous cryptanalysis applies, because pseudorandomness of $f$ implies good statistical properties. Second, an attack against the weak PRF with pseudorandom inputs generated by $f$ would imply surprising results, such as key agreement from the hardness of the high-noise version of Learning Parity with Noise (LPN), when both the wPRF and $f$ are implemented from this assumption.
Our simple transformation of replacing the public preprocessing $\mathsf{RO}(\cdot)$ by $f(k_H , \mathsf{elf}(x))$ applies to the entire family of PRF-style functions. Specifically, we obtain results for oblivious PRFs, which are a core building block for password-based authenticated key exchange (PAKE) and private set intersection (PSI) protocols, and we also obtain results for pseudorandom correlation functions (PCF), which are a key tool for silent oblivious transfer (OT) extension.
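The paradigm itself is simple to state in code. In the sketch below, SHA-256 stands in for the preprocessing oracle and HMAC stands in for the weak PRF; these are illustrative placeholders, not the low-complexity wPRF candidates or the $f(k_H, \mathsf{elf}(x))$ preprocessing discussed in the abstract.

```python
import hashlib
import hmac

def ro(x: bytes) -> bytes:
    """Public preprocessing step; SHA-256 stands in for the random oracle RO."""
    return hashlib.sha256(x).digest()

def wprf(k: bytes, u: bytes) -> bytes:
    """Stand-in weak PRF; real candidates are far lower-complexity, MPC-friendly functions."""
    return hmac.new(k, u, hashlib.sha256).digest()

def prf(k: bytes, x: bytes) -> bytes:
    """PRF(k, x) := wPRF(k, RO(x)) -- only the wPRF ever touches the secret key."""
    return wprf(k, ro(x))

assert prf(b"key", b"input") == prf(b"key", b"input")   # deterministic
assert prf(b"key", b"input") != prf(b"key", b"other")   # distinct inputs diverge
```

The point relevant to MPC is visible in the structure: `ro` depends only on the public input, so it can be computed in the clear, while only the cheap `wprf` call must be evaluated under secure computation.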
Abuse Reporting for Metadata-Hiding Communication Based on Secret Sharing
As interest in metadata-hiding communication grows in both research and practice, a need exists for stronger abuse reporting features on metadata-hiding platforms. While message franking has been deployed on major end-to-end encrypted platforms as a lightweight and effective abuse reporting feature, there is no comparable technique for metadata-hiding platforms. Existing efforts to support abuse reporting in this setting, such as asymmetric message franking or the Hecate scheme, require order-of-magnitude increases in client and server computation or fundamental changes to the architecture of messaging systems. As a result, while metadata-hiding communication inches closer to practice, critical content moderation concerns remain unaddressed.
This paper demonstrates that, for broad classes of metadata-hiding schemes, lightweight abuse reporting can be deployed with minimal changes to the overall architecture of the system. Our insight is that much of the structure needed to support abuse reporting already exists in these schemes. By taking a non-generic approach, we can reuse this structure to achieve abuse reporting with minimal overhead. In particular, we show how to modify schemes based on secret sharing user inputs to support a message franking-style protocol. Compared to prior work, our shared franking technique more than halves the time to prepare a franked message and gives order-of-magnitude reductions in server-side message processing times, as well as in the time to decrypt a message and verify a report.
Combined Fault and Leakage Resilience: Composability, Constructions and Compiler
Real-world cryptographic implementations nowadays are not only attacked via classical cryptanalysis but also via implementation attacks, including passive attacks (observing side-channel information about the inner computation) and active attacks (inserting faults into the computation). While countermeasures exist for each type of attack, countermeasures against combined attacks have only been considered recently.
Masking is a standard technique for protecting against passive side-channel attacks, but protecting against active attacks with additive masking is challenging. Previous approaches include running multiple copies of a masked computation, requiring a large amount of randomness or being vulnerable to horizontal attacks. An alternative approach is polynomial masking, which is inherently fault-resistant.
This work presents a compiler based on polynomial masking that achieves linear computational complexity for affine functions and cubic complexity for non-linear functions. The resulting compiler is secure against attackers using region probes and adaptive faults. In addition, the notion of fault-invariance is introduced to improve security against combined attacks without the need to consider all possible fault combinations. Our approach has the best-known asymptotic efficiency among all known approaches.
On the Efficiency of Generic, Quantum Cryptographic Constructions
One of the central questions in cryptology is how efficient generic constructions of cryptographic primitives can be. Gennaro, Gertner, Katz, and Trevisan [SIAM J. Comput. 2005] studied lower bounds on the number of invocations of a (trapdoor) one-way permutation needed to construct cryptographic schemes, e.g., pseudorandom number generators, digital signatures, and public-key and symmetric-key encryption.
Recently, quantum machines have been explored to _construct_ cryptographic primitives other than quantum key distribution. This paper studies the efficiency of _quantum_ black-box constructions of cryptographic primitives when the communications are _classical_. Following Gennaro et al., we give lower bounds on the number of invocations of an underlying quantumly-computable quantum one-way permutation (QC-qOWP) when the _quantum_ constructions of pseudorandom number generators (PRG) and symmetric-key encryption (SKE) are weakly black-box. Our results show that quantum black-box constructions of PRG and SKE do not improve the number of invocations of an underlying QC-qOWP.
Composable Gadgets with Reused Fresh Masks $-$ First-Order Probing-Secure Hardware Circuits with only 6 Fresh Masks
Despite its many benefits, masking cryptographic hardware designs has proven to be a non-trivial and error-prone task, even for experienced engineers. Masked variants of atomic logic gates, like AND or XOR - commonly referred to as gadgets - aim to facilitate the process of masking large circuits by offering free composition while sustaining the overall design's security in the $d$-probing adversary model. A wide variety of research has already been conducted to (i) find formal properties a gadget must fulfill to guarantee composability and (ii) construct gadgets that fulfill these properties while minimizing overhead requirements. In all existing composition frameworks, like NI/SNI/PINI, and all corresponding gadget realizations, the security argument relies on the fact that each gadget requires individual fresh randomness. Naturally, this approach leads to very high randomness requirements for the resulting composed circuit.
In this work, we present composable gadgets with reused fresh masks (COMAR), allowing the composition of any first-order secure hardware circuit utilizing only $6$ fresh masks in total. By construction, our newly presented gadgets render individual fresh randomness unnecessary, while retaining free composition and first-order security in the robust probing model.
More precisely, we give an instantiation of gadgets realizing arbitrary XOR and AND gates with an arbitrary number of inputs which can be trivially extended to all basic logic gates.
With these, we break the linear dependency between the number of (non-linear) gates in a circuit and its randomness requirements, offering designers the possibility of highly optimizing a masked circuit's randomness requirements while keeping error susceptibility to a minimum.
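For intuition on why per-gadget fresh randomness dominates masking cost, here is a minimal sketch of a generic DOM-style first-order masked AND gadget (a standard construction used for illustration, not the COMAR gadgets of this paper). Each such gadget consumes one fresh mask bit to refresh its cross-domain terms, which is exactly the per-gadget cost that reusing fresh masks aims to eliminate:

```python
import secrets

def mask(x: int) -> tuple[int, int]:
    """First-order Boolean masking: split bit x into shares (x0, x1) with x0 ^ x1 == x."""
    r = secrets.randbits(1)
    return r, x ^ r

def masked_and(a: tuple[int, int], b: tuple[int, int], fresh: int) -> tuple[int, int]:
    """Compute shares of (a AND b); the cross terms are blinded by one fresh mask bit."""
    a0, a1 = a
    b0, b1 = b
    c0 = (a0 & b0) ^ (a0 & b1) ^ fresh
    c1 = (a1 & b1) ^ (a1 & b0) ^ fresh
    return c0, c1

# Correctness check: unmasking the output shares yields the plain AND for all inputs.
for x in (0, 1):
    for y in (0, 1):
        c0, c1 = masked_and(mask(x), mask(y), secrets.randbits(1))
        assert c0 ^ c1 == x & y
```

In a circuit with $m$ AND gates this style of composition needs $m$ fresh bits per evaluation; the abstract's claim is that a fixed budget of 6 fresh masks suffices instead.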
Quantum Circuit Designs of Point Doubling Operation for Binary Elliptic Curves
In the past years, research on Shor's algorithm for solving the elliptic curve discrete logarithm problem (Shor's ECDLP), the basis for breaking elliptic curve cryptosystems (ECC), has started to garner more significant interest. To achieve this, most works focus on quantum point addition subroutines to realize the double scalar multiplication circuit, an essential part of Shor's ECDLP, whereas the point doubling subroutines are often overlooked. In this paper, we investigate the quantum point doubling circuit under the stricter assumption of Shor's algorithm in which doubling a point must also be taken into consideration. In particular, we analyze the challenges of implementing the circuit and provide solutions. Subsequently, we design and optimize the corresponding quantum circuit and analyze its high-level quantum resource cost. Additionally, we discuss the implications of our findings, including concerns for its integration with point addition into a complete double scalar multiplication circuit and the potential opportunities resulting from its implementation. Our work lays the foundation for further evaluation of Shor's ECDLP.
Optimal Load-Balanced Scalable Distributed Agreement
We consider the fundamental problem of designing classical consensus-related distributed abstractions for large-scale networks, where the number of parties can be huge. Specifically, we consider tasks such as Byzantine Agreement, Broadcast, and Committee Election, and our goal is to design scalable protocols in the sense that each honest party processes and sends a number of bits which is sub-linear in $n$, the total number of parties.
In this work, we construct the first such scalable protocols for all of the above tasks. In our protocols, each party processes and sends $\tilde O (\sqrt n)$ bits throughout $\tilde O (1)$ rounds of communication, and correctness is guaranteed for at most a $1/3-\epsilon$ fraction of static Byzantine corruptions for every constant $\epsilon>0$ (in the full-information model). All previous protocols for the considered agreement tasks were non-scalable, either because the communication complexity was linear or because the computational complexity was super-polynomial.
We complement our result with a matching lower bound showing that any Byzantine Agreement protocol must have $\Omega(\sqrt n)$ complexity in our model. Previously, the state of the art was the well-known $\tilde\Omega(\sqrt[3]{n})$ lower bound of Holtby, Kapron, and King (Distributed Computing, 2008).
Invisible Warning Line: Efficient and Generic Regulation for Anonymous Cryptocurrencies
Decentralized finance based on blockchain has experienced rapid development. To safeguard the privacy of participants, decentralized anonymous payment (DAP) systems such as ZCash and Zether have emerged. These systems employ cryptographic techniques to conceal trader addresses and payment amounts. However, this anonymity presents challenges for regulation. To address this issue, we propose the Walsh-DAP (WDAP) scheme, an efficient and generic regulation scheme for decentralized anonymous payments that strikes a balance between regulation and privacy preservation. Our scheme introduces two regulation policies: first, users who have exceeded their spending limits within a certain period will be identified during the regulation process; second, the supervisor possesses the capability to trace any anonymous transaction. To implement regulation effectively, we design an innovative commitment scheme, the Walsh commitment, which leverages the orthogonality of Walsh codes to achieve aggregation and extraction. The supervisor in WDAP only needs to process the aggregation of the Walsh commitments instead of the huge amount of raw transaction information, which greatly increases efficiency. In a DAP system with 256 users, 10 transactions per second, and a regulation period of 30 days, we reduce the communication cost for regulation from 14 GB to 94.20 KB, and the computing cost from $\text{1.6}\times \text{10}^{\text{5}}$ s to 2.17 s; both improvements are of over five orders of magnitude. We formally discuss the security of the whole system and verify its feasibility and practicality in the ZCash system.
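The aggregation-and-extraction property attributed to Walsh codes rests on the row orthogonality of the Hadamard matrix. The toy sketch below illustrates only that orthogonality over plain integers, not the actual Walsh commitment scheme:

```python
import numpy as np

def walsh_matrix(n: int) -> np.ndarray:
    """n x n Hadamard matrix (n a power of two); its rows are mutually orthogonal Walsh codes."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

W = walsh_matrix(4)
values = np.array([7, 3, 5, 2])   # per-user values to be aggregated

# Aggregation: a single combined vector, the sum of value_i * code_i.
aggregate = values @ W

# Extraction: orthogonality (W[i] . W[j] == 4 iff i == j, else 0) recovers each value
# from the aggregate alone, without touching the individual contributions.
recovered = [int(aggregate @ W[i]) // 4 for i in range(4)]
assert recovered == [7, 3, 5, 2]
```

This is the mechanism that lets a supervisor work with one aggregate object in place of every underlying contribution, which is the source of the efficiency gain the abstract reports.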
A New Sieving Approach for Solving the HNP with One Bit of Nonce by Using Built-in Modulo Arithmetic
The Hidden Number Problem (HNP) has been extensively used in side-channel attacks against (EC)DSA and Diffie-Hellman. The lattice approach is a primary method of solving the HNP. In EUROCRYPT 2021, Albrecht and Heninger constructed a new lattice to solve the HNP, which converts the HNP to the SVP. Their approach has since become the state-of-the-art lattice method for solving the HNP. However, it has a high failure rate for solving the HNP with one bit of nonce (the $1$-bit HNP), because there are enormously many vectors related to the modulus $q$ that are shorter than the target vector in Albrecht and Heninger's lattice.
To decrease the number of vectors that are shorter than the target vector and avoid the duplicated reduction, we introduce the modulo-$q$ lattice, a residue class ring of the general lattice modulo $q$, where $q$ is the modulus of the HNP. We present a new sieving algorithm to search for the shortest vectors in the modulo-$q$ lattice. Our algorithm uses built-in modulo $q$ arithmetic and many optimization techniques. As a result, we can solve a general 1-bit HNP ($q=2^{120}$) within 5 days and solve a general 1-bit HNP ($q=2^{128}$) within 17 days.
Secure Multiparty Computation with Identifiable Abort from Vindicating Release
In the dishonest-majority setting, secure multiparty computation (MPC) with identifiable abort (IA) guarantees that honest parties can identify and agree upon at least one cheating party if the protocol does not produce an output. Known MPC constructions with IA rely on generic zero-knowledge proofs, adaptively secure oblivious transfer (OT) protocols, or homomorphic primitives, and thus incur a substantial penalty with respect to protocols that abort without identifiability.
We introduce a new, weaker notion of IA called input-revealing IA (IRIA), which can be constructed through selective revealing of committed input values - a technique we call vindicating release. We show that this weaker form of IA can be achieved with small concrete overheads for many interesting protocols in the literature, including the pre-processing protocols needed for several state-of-the-art MPC protocols.
We next show how to assemble these IRIA components into an MPC protocol for any functionality with standard IA. Such a realization differs minimally in terms of cost, techniques, and analysis from the equivalent realization that lacks identifiability; e.g., the total bandwidth overhead incurred is less than 2x, an asymptotic improvement over prior work on IA.
On a practical level, we apply our techniques to the problem of threshold ECDSA, and show that the resulting protocol with standard IA is concretely efficient. On a theoretical level, we present a compiler that transforms any secure protocol into one with standard IA assuming only a variant of statically-corruptable ideal OT.
HaMAYO: A Fault-Tolerant Reconfigurable Hardware Implementation of the MAYO Signature Scheme
MAYO is a recent modification of the established multivariate signature scheme UOV. The signer and verifier locally enlarge the public key map, so that the dimension of the oil space, and therefore the parameter sizes in general, can be reduced. This significantly reduces the public key size while maintaining the appealing properties of UOV, like short signatures and fast verification. MAYO is therefore considered an attractive candidate in the NIST call for additional digital signatures and might be an adequate solution for real-world deployment on resource-constrained devices.
When moving to hardware implementations of multivariate schemes, and specifically MAYO, several challenges arise, notably resource utilization, which scales up with higher parameter sets. To accommodate this, we introduce a configurable hardware implementation designed for integration across various FPGA architectures. Our approach features adaptable configurations aligned with NIST-defined security levels and incorporates resource-optimization modules. Our implementation is tested on the Zynq ZedBoard with the Zynq-7020 SoC, with performance evaluations and comparisons made against previous hardware implementations of multivariate schemes.
Furthermore, we conducted a security analysis of the MAYO implementation, highlighting potential physical attacks, and implemented lightweight countermeasures.
Randomness Generation for Secure Hardware Masking - Unrolled Trivium to the Rescue
Masking is a prominent strategy to protect cryptographic implementations against side-channel analysis. Its popularity arises from the exponential security gains that can be achieved for (approximately) quadratic resource utilization. Many variants of the countermeasure tailored for different optimization goals have been proposed. The common denominator among all of them is the implicit demand for robust and high-entropy randomness. Simply assuming that uniformly distributed random bits are available, without taking the cost of their generation into account, leads to a poor understanding of the efficiency vs. security tradeoff of masked implementations. This is especially relevant in the case of hardware masking schemes, which are known to consume large amounts of random bits per cycle due to parallelism. Currently, there seems to be no consensus on how to most efficiently derive many pseudo-random bits per clock cycle from an initial seed and with properties suitable for masked hardware implementations.
In this work, we evaluate a number of building blocks for this purpose and find that hardware-oriented stream ciphers like Trivium and its reduced-security variant Bivium B outperform most competitors when implemented in an \emph{unrolled} fashion. Unrolled implementations of these primitives enable the flexible generation of many bits per cycle, which is crucial for satisfying the large randomness demands of state-of-the-art masking schemes.
According to our analysis, only Linear Feedback Shift Registers (LFSRs), when also unrolled, are capable of producing long non-repetitive sequences of random-looking bits at a higher rate per cycle for the same or lower cost as Trivium and Bivium B. Yet, these instances do not provide black-box security as they generate only linear outputs. We experimentally demonstrate that using multiple output bits from an LFSR in the same masked implementation can violate probing security and even lead to harmful randomness cancellations. Circumventing these problems, and enabling an independent analysis of randomness generation and masking, requires the use of cryptographically stronger primitives like stream ciphers.
As a result of our studies, we provide an evidence-based estimate for the cost of securely generating $n$ fresh random bits per cycle. Depending on the desired level of black-box security and operating frequency, this cost can be as low as $20n$ to $30n$ ASIC gate equivalent (GE) or $3n$ to $4n$ FPGA look-up tables (LUTs), where $n$ is the number of random bits required. Our results demonstrate that the cost per bit is (sometimes significantly) lower than estimated in previous works, incentivizing parallelism whenever exploitable. This provides further motivation to potentially move low randomness usage from a primary to a secondary design goal in hardware masking research.
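The probing-security problem with raw LFSR outputs stems from their linearity: every output bit is a linear (XOR) function of the seed, so outputs can cancel when combined. A minimal sketch (a toy 16-bit Fibonacci LFSR with illustrative taps) makes this concrete: the stream of the XOR of two seeds equals the XOR of the two streams.

```python
def lfsr_bits(seed: int, length: int = 64) -> list[int]:
    """Toy 16-bit Fibonacci LFSR (taps chosen for illustration); returns `length` output bits."""
    state = seed
    out = []
    for _ in range(length):
        out.append(state & 1)
        # Feedback is an XOR of tap bits -- a linear function over GF(2).
        fb = ((state >> 0) ^ (state >> 2) ^ (state >> 3) ^ (state >> 5)) & 1
        state = (state >> 1) | (fb << 15)
    return out

# Linearity over GF(2): stream(a XOR b) == stream(a) XOR stream(b).
a, b = 0xACE1, 0x1234
assert lfsr_bits(a ^ b) == [x ^ y for x, y in zip(lfsr_bits(a), lfsr_bits(b))]
```

A nonlinear keystream generator such as Trivium deliberately breaks this additive relation, which is why the paper argues for cryptographically stronger primitives when multiple output bits feed the same masked implementation.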
Algebraic Attacks on RAIN and AIM Using Equivalent Representations
Designing novel symmetric-key primitives for advanced protocols like secure multiparty computation (MPC), fully homomorphic encryption (FHE) and zero-knowledge proof systems (ZK) has been an important research topic in recent years. Many such existing primitives adopt quite different design strategies from conventional block ciphers. Notable features include that many of these ciphers are defined over a large finite field, and that a power map is commonly used to construct the nonlinear component due to its efficiency in these applications as well as its strong resistance against differential and linear cryptanalysis. In this paper, we target the MPC-friendly ciphers AIM and RAIN used for the post-quantum signature schemes AIMer (CCS 2023 and NIST PQC Round 1 Additional Signatures) and Rainier (CCS 2022), respectively. Specifically, we find equivalent representations of 2-round RAIN and full-round AIM, respectively, which make them vulnerable to either the polynomial method, the crossbred algorithm, or the fast exhaustive search attack. Consequently, we can break 2-round RAIN with the 128/192/256-bit key in only $2^{111}/2^{170}/2^{224}$ bit operations. For full-round AIM with the 128/192/256-bit key, we can break it in $2^{136.2}/2^{200.7}/2^{265}$ bit operations, which is equivalent to about $2^{115}/2^{178}/2^{241}$ calls of the underlying primitives. In particular, our analysis indicates that AIM does not reach the security levels required by the NIST competition.
Cryptanalysis and Improvement of a Flexible and Lightweight Group Authentication Scheme
Shamir's secret sharing scheme is one of the fundamental threshold primitives, on which many security protocols, such as group authentication schemes, are constructed. Notwithstanding the unconditional security of Shamir's secret sharing scheme, protocols designed based on it do not necessarily inherit this property. In this work, we evaluate the security of a lightweight group authentication scheme, introduced for IoT networks in the IEEE IoT Journal in 2020, and prove its weakness against the linear subspace attack, a recently proposed cryptanalytic method for secret sharing-based schemes. We then propose an efficient and attack-resistant group authentication protocol for IoT networks.
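For reference, the $t$-out-of-$n$ Shamir scheme underlying such group authentication protocols can be sketched in a few lines; the field size and parameters below are illustrative only.

```python
import random

P = 2**61 - 1  # prime field modulus (illustrative size)

def share(secret: int, t: int, n: int) -> list[tuple[int, int]]:
    """Evaluate a random degree-(t-1) polynomial with constant term `secret` at x = 1..n."""
    coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def reconstruct(shares: list[tuple[int, int]]) -> int:
    """Lagrange interpolation at x = 0 over GF(P); needs at least t shares."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares = share(42, t=3, n=5)
assert reconstruct(shares[:3]) == 42   # any 3 of the 5 shares suffice
assert reconstruct(shares[2:]) == 42
```

The scheme itself is information-theoretically secure below the threshold; the point of the abstract is that a protocol layered on top of it, such as the attacked group authentication scheme, does not automatically inherit that guarantee.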
One vector to rule them all: Key recovery from one vector in UOV schemes
Unbalanced Oil and Vinegar is a multivariate signature scheme that was introduced in 1999.
Most multivariate candidates for signature schemes at NIST's PQC standardization process are either based on UOV or closely related to it.
The UOV trapdoor is a secret subspace, the "oil subspace".
We show how to recover an equivalent secret key from the knowledge of a single vector in the oil subspace in any characteristic.
Prior work sped up the reconciliation attack by adding bilinear equations to the subsequent computations, and could conclude once two vectors in the oil subspace were found.
We show here that these bilinear equations contain enough information to dismiss the quadratic equations and retrieve the secret subspace with linear algebra for practical parametrizations, in at most 15 seconds for modern instantiations of UOV.
This proves that the security of the UOV scheme lies in the complexity of finding exactly one vector in the oil space.
In addition, we deduce a key recovery attack from any forgery attack by applying a corollary of our main result.
We show how to extend this result to schemes related to UOV, such as MAYO and VOX.
Asynchronous Agreement on a Core Set in Constant Expected Time and More Efficient Asynchronous VSS and MPC
A major challenge of any asynchronous MPC protocol is the need to reach an agreement on the set of private inputs to be used as input for the MPC functionality. Ben-Or, Canetti and Goldreich [STOC 93] call this problem Agreement on a Core Set (ACS) and solve it by running $n$ parallel instances of asynchronous binary Byzantine agreements. To the best of our knowledge, all results in the perfect and statistical security setting used this same paradigm for solving ACS. Using all known asynchronous binary Byzantine agreement protocols, this type of ACS has $\Omega(\log n)$ expected round complexity, which results in such a bound on the round complexity of asynchronous MPC protocols as well (even for constant depth circuits).
We provide a new solution for Agreement on a Core Set that runs in expected $O(1)$ rounds. Our perfectly secure variant is optimally resilient ($t<n/4$) and requires just $O(n^4\log n)$ expected communication complexity. We show a similar result with statistical security for $t<n/3$. Our ACS is based on a new notion of Asynchronously Validated Asynchronous Byzantine Agreement (AVABA) and new information-theoretic analogs to techniques used in the authenticated model. Along the way, we also construct a new perfectly secure packed asynchronous verifiable secret sharing (AVSS) protocol with just $O(n^3\log n)$ communication complexity, improving the state of the art by a factor of $O(n)$. This leads to a more efficient asynchronous MPC that matches the state-of-the-art synchronous MPC.
All You Need Is Fault: Zero-Value Attacks on AES and a New $\lambda$-Detection M&M
Deploying cryptography on embedded systems requires security against physical attacks. At CHES 2019, M&M was proposed as a combined countermeasure, applying masking against side-channel attacks (SCA) and information-theoretic MAC tags against fault attacks (FA).
In this paper, we show that one of the protected AES implementations in the M&M paper is vulnerable to a zero-value SIFA2-like attack. A practical attack is demonstrated on an ASIC board.
We propose two versions of the attack: the first follows the SIFA approach to inject faults in the last round, while the second one is an extension of SIFA and FTA but applied to the first round with chosen plaintext. The two versions work at the byte level, but the latter version considerably improves the efficiency of the attack.
Moreover, we show that this zero-value SIFA2 attack is specific to the AES tower-field decomposed S-box design. Hence, such attacks are applicable to any implementation featuring this AES S-box architecture.
Then, we propose a countermeasure that prevents these attacks. We extend M&M with a fine-grained detection-based feature capable of detecting zero-value glitch attacks. In this effort, we also solve the problem of a combined attack on the ciphertext output check of the M&M scheme by using Kronecker's delta function. We deploy the countermeasure on FPGA and verify its security against both fault and side-channel analysis with practical experiments.
Leaking Secrets in Homomorphic Encryption with Side-Channel Attacks
Homomorphic encryption (HE) allows computing on encrypted data in the ciphertext domain without knowing the encryption key. It is possible, however, to break fully homomorphic encryption (FHE) algorithms by using side channels. This article demonstrates side-channel leakages of the Microsoft SEAL HE library. The proposed attack can steal encryption keys during the key generation phase by abusing the leakage of ternary value assignments that occurs during the number theoretic transform (NTT) algorithm. We propose two attacks: one for the non-optimized implementation (compiled with the -O0 flag), targeting addition and subtraction operations, and one for the compiler-optimized implementation (-O3 flag), targeting the guard and mul root operations. In particular, the attacks can steal the secret key coefficients from a single power/electromagnetic measurement trace of SEAL's NTT implementation. To achieve high accuracy with a single trace, we develop novel machine-learning side-channel profilers. On an ARM Cortex-M4F processor, our attacks extract secret key coefficients with an accuracy of 98.3% when compiler optimization is disabled, and 98.6% when compiler optimization is enabled. We finally demonstrate that our attack can evade an application of the random delay insertion defense.
TariScript: Bringing dynamic scripting to Mimblewimble
Mimblewimble is a cryptocurrency protocol with good privacy and scalability properties. A trade-off of Mimblewimble is the requirement that transactions are interactive between sender and receiver. TariScript is presented as an extension to Mimblewimble that adds scripting capabilities to the protocol. We describe the theoretical basis for TariScript and present the modifications required to make it secure. The trade-offs and use cases for TariScript are briefly covered.
Non-Observable Quantum Random Oracle Model
The random oracle model (ROM), introduced by Bellare and Rogaway (CCS 1993), enables a formal security proof for many (efficient) cryptographic primitives and protocols, and has been quite impactful in practice. However, the security model also relies on some very strong and non-standard assumptions on how an adversary interacts with a cryptographic hash function, which might be unrealistic in a real world setting and thus could lead one to question the validity of the security analysis. For example, the ROM allows adaptively programming the hash function or observing the hash evaluations that an adversary makes.
We introduce a substantially weaker variant of the random oracle model in the post-quantum setting, which we call "non-observable quantum random oracle model" (NO QROM). Our model uses weaker heuristics than the quantum random oracle model by Boneh, Dagdelen, Fischlin, Lehmann, Schaffner, and Zhandry (ASIACRYPT 2011), or the non-observable random oracle model proposed by Ananth and Bhaskar (ProvSec 2013). At the same time, we show that our model is a viable option for establishing the post-quantum security of many cryptographic schemes by proving the security of important primitives such as extractable non-malleable commitments, digital signatures, and chosen-ciphertext secure public-key encryption in the NO QROM.
Finding short integer solutions when the modulus is small
We present a cryptanalysis of the inhomogeneous short integer solution (ISIS) problem for anomalously small moduli $q$ by exploiting the geometry of BKZ-reduced bases of $q$-ary lattices.
We apply this cryptanalysis to examples from the literature where taking such small moduli has been suggested. A recent work [Espitau–Tibouchi–Wallet–Yu, CRYPTO 2022] suggests small-$q$ versions of the lattice signature scheme FALCON and its variant MITAKA.
For one small-$q$ parametrisation of FALCON we reduce the estimated security against signature forgery by approximately 26 bits. For one small-$q$ parametrisation of MITAKA we successfully forge a signature in $15$ seconds.
An Algebraic Approach to Circulant Column Parity Mixers
Column Parity Mixers, or CPMs for short, are a particular type of linear map used as the mixing layer in permutation-based cryptographic primitives like Keccak-f (SHA-3) and Xoodoo. Although successfully applied, not much is known regarding their algebraic properties: known results are limited to the invertibility of circulant CPMs (CCPMs) and the fact that the set of invertible CCPMs forms a group. A possible explanation is the complexity of describing CPMs in terms of linear algebra. In this paper, we introduce a new approach to studying CPMs using module theory from commutative algebra. We show that many interesting algebraic properties can be deduced using this approach, and that known results regarding CPMs turn out to be trivial consequences of module-theoretic concepts. We also show how this approach can be used to study the linear layer of Xoodoo, and other linear maps with a similar structure, which we call DCD-compositions. Using this approach, we prove that every DCD-composition whose underlying vector space has the same dimension as that of Xoodoo has a low order. This provides a solid mathematical explanation for the low order of the linear layer of Xoodoo, which equals 32. Using a different dimension, we also design a DCD-composition of higher order via this module-theoretic approach.
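To make the object concrete: a column parity mixer XORs into each state column a linear combination of the column parities. The toy Python sketch below illustrates the shape of such a map; it is a simplified, single-lane analogue of the theta step of Keccak-f, not the exact Keccak-f or Xoodoo linear layer (those also involve lane rotations).

```python
# Toy column parity mixer on an m-by-n binary matrix: XOR into every
# column a combination of the two neighbouring column parities.
# Illustrative only -- the real Keccak-f theta step and the Xoodoo
# linear layer additionally rotate lanes.

def column_parity_mix(state):
    m, n = len(state), len(state[0])
    parity = [0] * n
    for x in range(n):
        for y in range(m):
            parity[x] ^= state[y][x]
    # combine the parities of the left and right neighbour columns
    effect = [parity[(x - 1) % n] ^ parity[(x + 1) % n] for x in range(n)]
    return [[state[y][x] ^ effect[x] for x in range(n)] for y in range(m)]

# a single set bit in column 0 spreads into columns 1 and 3
state = [[1, 0, 0, 0],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
mixed = column_parity_mix(state)
# mixed == [[1, 1, 0, 1], [0, 1, 0, 1], [0, 1, 0, 1]]
```

Note how a single-bit difference leaves the column parities of two whole columns flipped, which is exactly the diffusion effect CPMs are used for.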
On the Cost of Post-Compromise Security in Concurrent Continuous Group-Key Agreement
Continuous Group-Key Agreement (CGKA) allows a group of users to maintain a shared key.
It is the fundamental cryptographic primitive underlying group messaging schemes and related protocols, most notably TreeKEM, the underlying key agreement protocol of the Messaging Layer Security (MLS) protocol, a standard for group messaging by the IETF.
CGKA works in an asynchronous setting where parties only occasionally need to come online, and their messages are relayed by an untrusted server.
The most expensive operation provided by CGKA is the one that allows a user to refresh their key material in order to achieve forward secrecy (old messages remain secure when a user is compromised) and post-compromise security (users can heal from compromise).
One caveat of early CGKA protocols is that these update operations had to be performed sequentially: any user wanting to update their key material first had to receive and process all previous updates.
Later versions of TreeKEM do allow for concurrent updates, at the cost of a communication overhead per update message that is linear in the number of updating parties.
This overhead was indeed shown to be necessary for achieving PCS in just two rounds of communication [Bienstock et al., TCC'20].
The recently proposed protocol CoCoA [Alwen et al. Eurocrypt'22], however, shows that this overhead can be reduced if PCS requirements are relaxed, and only a logarithmic number of rounds is required.
The natural question, thus, is whether CoCoA is optimal in this setting.
In this work we answer this question, providing a lower bound on the cost (concretely, the amount of data to be uploaded to the server) for CGKA protocols that heal in an arbitrary number $k$ of rounds, which shows that CoCoA is very close to optimal.
Additionally, we extend CoCoA to heal in an arbitrary number of rounds, and propose a modification of it, with a reduced communication cost for certain $k$.
We prove our bound in a combinatorial setting where the state of the protocol progresses in rounds, and the state of the protocol in each round is captured by a set system, each set specifying a set of users who share a secret key.
We show this combinatorial model is equivalent to a symbolic model capturing building blocks including PRFs and public-key encryption, related to the one used by Bienstock et al.
Our lower bound is of order $k\cdot n^{1+1/(k-1)}/\log(k)$, where $2\le k\le \log(n)$ is the number of updates per user the protocol requires to heal.
This generalizes the $n^2$ bound for $k=2$ from Bienstock et al.
This bound almost matches the $k\cdot n^{1+2/(k-1)}$ or $k^2\cdot n^{1+1/(k-1)}$ efficiency we get for the variants of the CoCoA protocol also introduced in this paper.
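To get a feel for the gap between the lower bound and the CoCoA variants, the following sketch evaluates both expressions numerically. It assumes base-2 logarithms and ignores any constants hidden by the asymptotics, so the numbers are indicative only.

```python
# Numeric comparison of the paper's lower bound with the efficiency of
# the two CoCoA variants.  Constants and the base of the logarithm are
# assumptions (the abstract leaves both unspecified).
import math

def lower_bound(n, k):
    # lower bound: k * n^(1 + 1/(k-1)) / log(k)
    return k * n ** (1 + 1 / (k - 1)) / math.log2(k)

def cocoa_a(n, k):
    # one CoCoA variant: k * n^(1 + 2/(k-1))
    return k * n ** (1 + 2 / (k - 1))

def cocoa_b(n, k):
    # the other variant: k^2 * n^(1 + 1/(k-1))
    return k ** 2 * n ** (1 + 1 / (k - 1))

n = 2 ** 16
for k in (2, 4, 8, 16):  # k ranges over 2 <= k <= log(n)
    lb, ub = lower_bound(n, k), min(cocoa_a(n, k), cocoa_b(n, k))
    print(f"k={k:2d}: lower ~2^{math.log2(lb):.1f}, upper ~2^{math.log2(ub):.1f}")
```

For $k=2$ the bound specialises to $2n^2$, matching the $n^2$ result of Bienstock et al. up to a constant.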
Frequency-revealing attacks against Frequency-hiding Order-preserving Encryption
Order-preserving encryption (OPE) allows efficient comparison operations over encrypted data and is thus popular in encrypted databases. However, most existing OPE schemes are vulnerable to inference attacks as they leak plaintext frequency. To this end, several frequency-hiding order-preserving encryption (FH-OPE) schemes have been proposed that claim to prevent the leakage of frequency. FH-OPE schemes are considered an important step towards mitigating inference attacks.
Unfortunately, there are still vulnerabilities in all existing FH-OPE schemes. In this work, we revisit the security of all existing FH-OPE schemes. We are the first to demonstrate that the plaintext frequency they hide is recoverable. We present three ciphertext-only attacks, named frequency-revealing attacks, to recover plaintext frequency. We evaluate our attacks on three real-world datasets. They recover over 90% of the plaintext frequency hidden by any existing FH-OPE scheme. With frequency revealed, we also show the potential of applying inference attacks to existing FH-OPE schemes.
Our findings highlight the limitations of current FH-OPE schemes. Our attacks demonstrate that achieving frequency hiding requires addressing the leakage of both the non-uniform ciphertext distribution and the insertion order of ciphertexts, even though the leakage of insertion orders has so far been ignored in OPE.
SoK: Public Randomness
Public randomness is a fundamental component in many cryptographic protocols and distributed systems and often plays a crucial role in ensuring their security, fairness, and transparency properties. Driven by the surge of interest in blockchain and cryptocurrency platforms and the usefulness of such a building block in those areas, designing secure protocols to generate public randomness in a distributed manner has received considerable attention in recent years. This paper presents a systematization of knowledge on the topic of public randomness with a focus on cryptographic tools providing public verifiability and key themes underlying these systems. We provide concrete insights on how state-of-the-art protocols achieve this task efficiently in an adversarial setting and present various research gaps that may be of interest for future research.
TVA: A multi-party computation system for secure and expressive time series analytics
We present TVA, a multi-party computation (MPC) system for secure analytics on secret-shared time series data. TVA achieves strong security guarantees in the semi-honest and malicious settings, and high expressivity by enabling complex analytics on inputs with unordered and irregular timestamps. TVA is the first system to support arbitrary composition of oblivious window operators, keyed aggregations, and multiple filter predicates, while keeping all data attributes private, including record timestamps and user-defined values in query predicates. At the core of the TVA system lie novel protocols for secure window assignment: (i) a tumbling window protocol that groups records into fixed-length time buckets and (ii) two session window protocols that identify periods of activity followed by periods of inactivity. We also contribute a new protocol for secure division with a public divisor, which may be of independent interest. We evaluate TVA on real LAN and WAN environments and show that it can efficiently compute complex window-based analytics on inputs of $2^{22}$ records with modest use of resources. When compared to the state-of-the-art, TVA achieves up to $5.8\times$ lower latency in queries with multiple filters and two orders of magnitude better performance in window aggregation.
Outsider-Anonymous Broadcast Encryption with Keyword Search: Generic Construction, CCA Security, and with Sublinear Ciphertexts
As a multi-receiver variant of public key encryption with keyword search (PEKS), broadcast encryption with keyword search (BEKS) has been proposed (Attrapadung et al., ASIACRYPT 2006 / Chatterjee-Mukherjee, INDOCRYPT 2018). Unlike broadcast encryption, no receiver anonymity is considered, because the test algorithm takes a set of receivers as input and thus a set of receivers needs to be contained in a ciphertext. In this paper, we propose a generic construction of BEKS from anonymous and weakly robust 3-level hierarchical identity-based encryption (HIBE). The proposed generic construction provides outsider anonymity, where an adversary is allowed to obtain secret keys of outsiders who do not belong to the challenge sets, and provides sublinear-size ciphertexts in terms of the number of receivers. Moreover, the proposed construction considers security against chosen-ciphertext attacks (CCA), where an adversary is allowed to access a test oracle in the searchable encryption context. The proposed generic construction can be seen as an extension of the Fazio-Perera generic construction of anonymous broadcast encryption (PKC 2012) from anonymous and weakly robust identity-based encryption (IBE) and the Boneh et al. generic construction of PEKS (EUROCRYPT 2004) from anonymous IBE. We employ the Fazio-Perera construction on the first-level identity and the Boneh et al. construction on the second-level identity, i.e., a keyword is regarded as a second-level identity. The third-level identity is used for providing CCA security by employing one-time signatures. We also introduce weak robustness in the HIBE setting, and demonstrate that the Abdalla et al. generic transformation (TCC 2010/JoC 2018) for providing weak robustness to IBE works for HIBE with an appropriate parameter setting. We also explicitly present attractive concrete instantiations of the proposed generic construction from pairings and lattices, respectively.
Practically-exploitable Vulnerabilities in the Jitsi Video Conferencing System
Jitsi Meet is an open-source video conferencing system, and a popular alternative to proprietary services such as Zoom and Google Meet. The Jitsi project makes strong privacy and security claims in its advertising, but there is no published research into the merits of these claims. Moreover, Jitsi announced end-to-end encryption (E2EE) support in April 2020, and prominently features this in its marketing.
We present an in-depth analysis of the design of Jitsi and its use of cryptography. Based on our analysis, we demonstrate two practical attacks that compromised server components can mount against the E2EE layer: we show how the bridge can break integrity by injecting inauthentic media into E2EE conferences, whilst the signaling server can defeat the encryption entirely. On top of its susceptibility to these attacks, the E2EE feature does not apply to text-based communications; this is not made apparent to users, although it would be a reasonable expectation given how Jitsi is marketed. Further, we identify critical issues with Jitsi's poll feature, which allow any meeting participant to arbitrarily manipulate voting results. Our findings are backed by proof-of-concept implementations and were verified to be exploitable in practice.
We communicated our findings to Jitsi via a coordinated disclosure process. Jitsi has addressed the vulnerabilities via a mix of technical improvements and documentation changes.
Mask Compression: High-Order Masking on Memory-Constrained Devices
Masking is a well-studied method for achieving provable security against side-channel attacks. In masking, each sensitive variable is split into $d$ randomized shares, and computations are performed with those shares. In addition to the computational overhead of masked arithmetic, masking also has a storage cost, increasing the requirements for working memory and secret key storage proportionally with $d$.
In this work, we introduce mask compression. This conceptually simple technique is based on standard, non-masked symmetric cryptography. Mask compression allows an implementation to dynamically replace individual shares of large arithmetic objects (such as polynomial rings) with $\kappa$-bit cryptographic seeds (or temporary keys) when they are not in computational use. Since $\kappa$ does not need to be larger than the security parameter (e.g., $\kappa=256$ bits) and each polynomial share may be several kilobytes in size, this radically reduces the memory requirement of high-order masking. Overall provable security properties can be maintained by using appropriate gadgets to manage the compressed shares. We describe gadgets with Non-Interference (NI) and composable Strong Non-Interference (SNI) security arguments.
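The core idea can be sketched in a few lines: a share not in computational use is represented only by its seed and is re-expanded with an extendable-output function when needed. The parameters below (ring size, modulus, SHAKE-256 as the PRG) are illustrative choices, not those of the paper, and a real gadget must also handle re-compressing shares that have been updated, which this toy does not.

```python
# Sketch of mask compression: store a kappa-bit seed instead of a full
# polynomial share, and re-derive the share on demand.  Parameters are
# illustrative; the modular reduction below is also slightly biased,
# which a careful implementation would avoid.
import hashlib
import os

N, Q, KAPPA = 256, 8380417, 32   # coefficients, modulus, seed bytes

def expand_share(seed):
    """Re-derive the full polynomial share from its compressed seed."""
    stream = hashlib.shake_256(seed).digest(4 * N)
    return [int.from_bytes(stream[4 * i:4 * i + 4], "little") % Q
            for i in range(N)]

seed = os.urandom(KAPPA)            # 32 bytes kept instead of ~1 KiB
share = expand_share(seed)          # expanded only while computing
assert expand_share(seed) == share  # the seed always re-derives it
```

The memory saving is the ratio between the expanded share (here $4N \approx 1$ KiB) and the seed ($\kappa/8 = 32$ bytes), paid for with one XOF call per expansion.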
Mask compression can be applied in various settings, including symmetric cryptography, code-based cryptography, and lattice-based cryptography. It is especially useful for cryptographic primitives that allow quasilinear-complexity masking and hence are practically capable of very high masking orders. We illustrate this with a $d=32$ (Order-31) implementation of the recently introduced lattice-based signature scheme Raccoon on an FPGA platform with limited memory resources.
Applying system of equations to factor semiprime numbers
This paper explores the use of a system of equations to factor semiprime numbers. Semiprime numbers are a special type of composite number that are the product of two prime numbers. Factoring semiprime numbers is important in cryptography and number theory. In this study, we present a method that applies a system of polynomial equations to factor an arbitrary semiprime number $M$. In fact, we build a family of systems, where each system is composed of three polynomial equations in three variables. The results of this study show that a solution to one system yields a complete factorization of the semiprime number. It may be possible to apply well-known algorithms, such as Gröbner basis methods, to solve one of these systems for a particular semiprime number $M$.
Two Shuffles Make a RAM: Improved Constant Overhead Zero Knowledge RAM
We optimize Zero Knowledge (ZK) proofs of statements expressed as RAM programs over arithmetic values. Our arithmetic-circuit-based read/write memory uses only 4 input gates and 6 multiplication gates per memory access. This is an almost 3× total gate improvement over prior state of the art (Delpech de Saint Guilhem et al., SCN’22).
We implemented our memory in the context of ZK proofs based on vector oblivious linear evaluation (VOLE), and we further optimize based on techniques available in the VOLE setting. Our experiments show that (1) our total runtime improves over that of the prior best VOLE-ZK RAM (Franzese et al., CCS’21) by 2-20× and (2) on a typical hardware setup, we can achieve ≈ 600K RAM accesses per second.
We also develop improved read-only memory and set ZK data structures. These are used internally in our read/write memory and improve over prior work.
On iterated punctured Grover
Grover's algorithm is a very versatile cryptanalytical tool. Even though it does not provide an exponential speed-up, it has still changed cryptographic requirements all over the world. Usually, Grover's algorithm is executed with a fixed, well-defined function indicating good states. In this paper, we investigate what happens if the function is changed over time to mark fewer and fewer good states. We compute the amplitudes after $2^{s/2}$ steps of an adjusted Grover's algorithm proposed by Zheng et al. in Nested Quantum Search Model on Symmetric Ciphers and Its Applications (2023). We use the amplitudes to argue that such an approach always leads to a worse run-time compared to the naïve version. We also indicate at which point in Zheng et al.'s analysis the counterintuitive nature of quantum computation leads to false assumptions.
Breaking the Hutton 2 challenge
In 2018, Eric Bond Hutton posed a challenge online involving a classical cipher of his creation. It was broken nearly two years later by brute-forcing the keywords, and a new challenge that involves a modified cipher was posted. This is an explanation of how we broke the second challenge. We did so by scanning all books on Project Gutenberg for an acceptable match, then resolving any discrepancies and finding the keys.
Tornado Vote: Anonymous Blockchain-Based Voting
Decentralized apps (DApps) often hold significant cryptocurrency assets. In order to manage these assets and coordinate joint investments, shareholders leverage the underlying smart contract functionality to realize a transparent, verifiable, and secure decision-making process. That is, DApps implement proposal-based voting. Permissionless blockchains, however, lead to a conflict between transparency and anonymity; potentially preventing free decision-making if individual votes and intermediate results become public. In this paper, we therefore present Tornado Vote, a voting DApp for anonymous, fair, and practical voting on the Ethereum blockchain. We propose to use a cryptocurrency mixer such as Tornado Cash to reconcile transparency and anonymity. To this end, we adapt Tornado Cash and develop a voting protocol that implements a fair voting process. While Tornado Vote can technically process 10k votes on Ethereum in approximately two hours, this is not feasible under realistic conditions: Third-party transactions on the Ethereum Mainnet reduce the possible throughput, and transaction fees make it infeasible to use all available block capacities. We therefore present various Gas cost models that yield lower bounds and economic estimations with respect to the required number of blocks and voting costs to assess and adjust Tornado Vote's feasibility trade-off.
Optimized stream-cipher-based transciphering by means of functional-bootstrapping
Fully homomorphic encryption (FHE) suffers from a large expansion in the size of encrypted data, which makes FHE impractical for low-bandwidth networks. Fortunately, transciphering circumvents this issue by involving a symmetric cryptosystem, which does not carry the disadvantage of a large expansion factor, while maintaining the ability to recover an FHE ciphertext at the cost of extra homomorphic computations on the receiver side. Recent works have started to investigate the efficiency of TFHE as the FHE layer in transciphering, combined with various symmetric schemes, including a NIST finalist for lightweight cryptography, namely Grain128-AEAD. Yet, this has so far been done without taking advantage of TFHE's functional bootstrapping ability, that is, evaluating any discrete function ``for free'' within the bootstrapping operation. In this work, we thus investigate the use of TFHE functional bootstrapping for implementing Grain128-AEAD in a more efficient base ($B > 2$) representation, rather than a binary one. This significantly reduces the overall number of bootstrappings necessary in a homomorphic run of the stream cipher, for example reducing the number of bootstrappings required in the warm-up phase by a factor of $\approx 3$ when $B=16$.
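The saving comes from packing several state bits into one base-$B$ digit, so a single functional bootstrapping processes $\log_2 B$ bits at once. The snippet below only illustrates this packing arithmetic on cleartext bits; it involves no TFHE, and the register width is an arbitrary example.

```python
# View a 128-bit register as base-B digits: with B = 16 each digit
# packs log2(16) = 4 bits, so 128 bits become 32 digits and a single
# digit-level operation covers 4 bit-level ones.  Cleartext toy only.

def bits_to_digits(bits, log_b):
    return [int("".join(map(str, bits[i:i + log_b])), 2)
            for i in range(0, len(bits), log_b)]

bits = [1, 0, 1, 1] * 32           # a 128-bit state
digits = bits_to_digits(bits, 4)   # its base-16 view
assert len(digits) == 32 and all(d == 0b1011 for d in digits)
```

The factor-4 reduction in element count is an upper bound on the saving; the abstract's measured $\approx 3$ warm-up factor reflects the extra cost of digit-level operations in TFHE.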
Breaking Free: Leakage Model-free Deep Learning-based Side-channel Analysis
Profiling side-channel analysis has gained widespread acceptance in both academic and industrial realms due to its robust capacity to unveil protected secrets, even in the presence of countermeasures. To harness this capability, an adversary must access a clone of the target device to acquire profiling measurements, labeling them with leakage models. The challenge of finding an effective leakage model, especially for a protected dataset with a low signal-to-noise ratio or weak correlation between actual leakages and labels, often necessitates an intuitive engineering approach, as otherwise, the attack will not perform well.
In this paper, we introduce a deep learning approach that does not assume any specific leakage model, referred to as the multibit model.
Instead of trying to learn a representation of the target intermediate data (label), we utilize the concept of the stochastic model to decompose the label into bits. Then, the deep learning model is used to classify each bit independently. This versatile multibit model can align with existing leakage models like the Hamming weight and Most Significant Bit leakage models while also possessing the flexibility to adapt to complex leakage scenarios. To further improve the attack efficiency, we extend the multibit model to simultaneously attack all 16 subkey bytes, which requires negligible computational effort. Based on our preliminary analysis, two of the four considered datasets could only be broken using a Hamming Weight leakage model. Using the same model, the proposed methods can efficiently crack all key bytes across four considered datasets. Our work, thus, signifies a significant step forward in deep learning-based side-channel attacks, showcasing a high degree of flexibility and efficiency without any presumption of the leakage model.
An End-to-end Plaintext-based Side-channel Collision Attack without Trace Segmentation
Side-channel Collision Attacks (SCCA) constitute a subset of non-profiling attacks that exploit information dependency leaked during cryptographic operations. Unlike traditional collision attacks, which seek instances where two different inputs to a cryptographic algorithm yield identical outputs, SCCAs specifically target the internal state, where identical outputs are more likely. In CHES 2023, Staib et al. presented a Deep Learning-based SCCA (DL-SCCA), which enhanced the attack performance while decreasing the required effort for leakage preprocessing. Nevertheless, this method inherits the conventional SCCA's limitations, as it operates on trace segments reflecting the target operation explicitly, leading to issues such as portability and low tolerance to errors.
This paper introduces an end-to-end plaintext-based SCCA to address these challenges. We leverage the bijective relationship between plaintext and secret data to label the leakage measurement with known information, then learn plaintext-based profiling models to depict leakages from varying operations. By comparing the leakage representations produced by the profiling model, an adversary can reveal the key difference. As an end-to-end approach, we propose an error correction scheme to rectify false predictions. Experimental results indicate our approach significantly surpasses DL-SCCA in terms of attack performance (e.g., success rate increased from 53\% to 100\%) and computational complexity (training time reduced from approximately 2 hours to 10 minutes). These findings underscore our method's effectiveness and practicality in real-world attack scenarios.
It's a Kind of Magic: A Novel Conditional GAN Framework for Efficient Profiling Side-channel Analysis (Extended Version)
Profiling side-channel analysis (SCA) is widely used to evaluate the security of cryptographic implementations under worst-case attack scenarios. This method assumes a strong adversary with a fully controlled device clone, known as a profiling device, with full access to the internal state of the target algorithm, including the mask shares. However, acquiring such a profiling device in the real world is challenging, as secure products enforce strong life cycle protection, particularly on devices that allow the user partial (e.g., debug mode) or full (e.g., test mode) control. This enforcement restricts access to profiling devices, significantly reducing the effectiveness of profiling SCA.
To address this limitation, this paper introduces a novel framework that allows an attacker to create and learn from their own white-box reference design without needing privileged access on the profiling device.
Specifically, the attacker first implements the target algorithm on a different type of device with full control. Since this device is a white box to the attacker, they can access all internal states and mask shares. A novel conditional generative adversarial network (CGAN) framework is then introduced to mimic the feature extraction procedure from the reference device and transfer this experience to extract high-order leakages from the target device. These extracted features then serve as inputs for profiled SCA. Experiments show that our approach significantly enhances the efficacy of black-box profiling SCA, matching or potentially exceeding the results of worst-case security evaluations. Compared with conventional profiling SCA, which has strict requirements on the profiling device, our framework relaxes this threat model and, thus, can be better adapted to real-world attacks.
Verifiable Timed Proxy Signatures and Multi-signatures
Verifiable timed commitments serve as cryptographic tools that enable the binding of information to specific time intervals. By integrating these commitments into signature schemes, secure and tamper-evident digital signatures can be generated, ensuring the integrity of time-sensitive mechanisms. This article delves into the concept of verifiable timed commitments and explores their efficient applications in digital signature constructions. Specifically, it focuses on two important signature schemes: proxy signatures and multi-signatures. The idea of the timed proxy signature is to enable the delegation of signing rights for a specified period, allowing designated entities to sign messages on behalf of the original signer. On the other hand, multi-signatures allow multiple parties to collectively generate a single signature, ensuring enhanced security and accountability. The article presents an in-depth analysis of the underlying mechanisms, discussing their properties, strengths, and computational complexity. Through this exploration, the article aims to shed light on the potential of verifiable timed commitments and inspire further research in this evolving field of cryptography.
ProtoGalaxy: Efficient ProtoStar-style folding of multiple instances
We continue the recent line of work on folding schemes. Building on ideas from ProtoStar [BC23] we construct a folding scheme where the recursive verifier's ``marginal work'', beyond linearly combining witness commitments, consists only of a logarithmic number of field operations and a constant number of hashes. Moreover, our folding scheme performs well when \emph{folding multiple instances at one step}, in which case the marginal number of verifier field operations per instance becomes constant, assuming constant degree gates.
MAPLE: A Metadata-Hiding Policy-Controllable Encrypted Search Platform with Minimal Trust
Commodity encrypted storage platforms (e.g., IceDrive, pCloud) permit data storage and sharing across multiple users while preserving data confidentiality. However, end-to-end encryption may not be sufficient, since it only offers confidentiality when the data is at rest or in transit. Meanwhile, sensitive information can be leaked from metadata representing activities during data operations (e.g., query, processing). Recent encrypted search platforms such as DORY (OSDI'20) or DURASIFT (WPES'19) permit multi-user data query functionalities while protecting metadata privacy. However, they either incur a high processing overhead or offer limited security/functionality, and require strong trust assumptions.
We propose MAPLE, a new metadata-hiding encrypted search platform that offers query functionalities (search, update) on shared data across multiple users with complex policy controls. MAPLE protects metadata privacy at all times during query processing, while achieving significantly (asymptotically) lower processing overhead than state-of-the-art platforms. The core technique of MAPLE is the design of oblivious data structures for the search index and access control, coupled with secure computation techniques, to enable efficient query processing with minimal trust. We fully implemented MAPLE and evaluated its performance on commodity cloud (Amazon EC2) under realistic settings. Experimental results showed that MAPLE achieved concrete performance comparable with its counterparts, while offering provably stronger security guarantees and more diverse functionalities.
An Efficient Unicode encoded in UTF-16 text cryptography based on the AES algorithm
Data security and secrecy from unwanted applications are the subject of the science known as cryptography. The Advanced Encryption Standard (AES) algorithm is the most widely used and secure algorithm to encrypt data. The AES algorithm is a symmetric algorithm and uses a single key to encrypt and decrypt data. It operates on 128-bit blocks of plaintext with a 128-bit, 192-bit, or 256-bit key. Latin script uses ASCII codes, and a single byte represents each letter. Therefore, in 128-bit AES encryption, 16 characters can be encrypted at a time. Other language scripts use the Unicode standard to represent their alphabets. In Unicode, at least 2 bytes are required to represent a character. Therefore, only eight characters can be encrypted at a time. Also, there is no available S-box for Unicode characters. Therefore, a modified algorithm is proposed to encrypt Unicode data. To use the AES algorithm on Unicode data, we need to convert the Unicode into a character encoding such as UTF-16. Nevertheless, in UTF-16, some Unicode characters have similar recurrent values. This paper demonstrates a modified AES algorithm to encrypt Unicode script with reduced time complexity.
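The per-block capacity figures in the abstract are easy to check directly; the sample strings below are arbitrary choices made only so that the byte counts work out.

```python
# Characters per 16-byte AES block under different encodings.
BLOCK = 16  # AES block size in bytes

ascii_text = "cryptographysafe"            # 16 Latin characters
assert len(ascii_text.encode("ascii")) == BLOCK       # 1 byte each

utf16_text = "шифрключ"                    # 8 Cyrillic characters
assert len(utf16_text.encode("utf-16-be")) == BLOCK   # 2 bytes each
```

The big-endian codec is used because Python's plain "utf-16" codec prepends a 2-byte byte-order mark, which would add to the count.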
Practical Large-Scale Proof-of-Stake Asynchronous Total-Order Broadcast
We present simple and practical protocols for generating the randomness used by asynchronous total-order broadcast. The protocols are secure in a proof-of-stake setting with dynamically changing stake. They can be plugged into existing protocols for asynchronous total-order broadcast and turn these into asynchronous total-order broadcast with dynamic stake. Our contribution relies on two important techniques. The paper ``Random Oracles in Constantinople: Practical Asynchronous Byzantine Agreement using Cryptography'' [Cachin, Kursawe, and Shoup, PODC 2000] has influenced the design of practical total-order broadcast through its use of threshold cryptography. However, it needs a setup protocol to be efficient. In a proof-of-stake setting with dynamic stake, this setup would have to be continually recomputed, making the protocol impractical. The work ``Asynchronous Byzantine Agreement with Subquadratic Communication'' [Blum, Katz, Liu-Zhang, and Loss, TCC 2020] showed how to use an initial setup for broadcast to generate subsequent setups asymptotically efficiently. That protocol, however, resorted to fully homomorphic encryption and was therefore not practically efficient. We adapt their approach to the proof-of-stake setting with dynamic stake, apply it to the Constantinople paper, and remove the need for fully homomorphic encryption. This results in simple and practical proof-of-stake protocols. We discuss how to use the new coin-flip protocols together with DAG-Rider [Keidar et al., PODC 2021] and create a variant which works for dynamic proof of stake. Our method can be employed together with many further asynchronous total-order broadcast protocols.
Coercion Mitigation for Voting Systems with Trackers: A Selene Case Study
An interesting approach to achieving verifiability in voting systems is to make use of tracking numbers. These give voters a simple way of verifying that their ballot was counted: they can simply look up their ballot/tracker pair on a public bulletin board. It is crucial to understand how trackers affect other security properties, in particular privacy. However, existing privacy definitions are not designed to accommodate tracker-based voting systems. Furthermore, the addition of trackers increases the threat of coercion. There do, however, exist techniques to mitigate the coercion threat. While the term coercion mitigation has been used in the literature to describe voting systems such as Selene, no formal definition of coercion mitigation seems to exist. In this paper we formally define what coercion mitigation means for tracker-based voting systems. We model Selene in our framework and prove that Selene provides coercion mitigation, in addition to privacy and verifiability.
$\mathcal{S}_0$-equivalent classes, a new direction to find better weightwise perfectly balanced functions, and more
We investigate the concept of $\mathcal{S}_0$-equivalent classes, that is, classes of $n$-variable Boolean functions equal up to the addition of a symmetric function that vanishes at $0_n$ and $1_n$, as a tool to study weightwise perfectly balanced functions.
On the one hand, we show that weightwise properties, such as weightwise perfect balancedness, weightwise nonlinearity, and weightwise algebraic immunity, are invariants of these classes.
On the other hand, we analyze the variation of global parameters inside the same class, showing for example that every $\mathcal{S}_0$-equivalent class contains a function with high degree, algebraic immunity, or nonlinearity.
Finally, we discuss how these results extend to other equivalence relations and their applications in cryptography.
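The invariance of weightwise balancedness under this equivalence can be illustrated concretely: a symmetric function is constant on each Hamming-weight slice, so adding one that vanishes at $0_n$ and $1_n$ either preserves or complements the ones-count on every slice, leaving balancedness intact. A minimal sketch; the choices of $f$, $g$, and $n$ are arbitrary illustrations:

```python
from itertools import product
import random

n = 4
points = list(product([0, 1], repeat=n))
random.seed(1)

# an arbitrary Boolean function f on n variables
f = {x: random.randint(0, 1) for x in points}

# a symmetric function null at 0_n and 1_n: its value depends only on
# the Hamming weight, and is 0 at weights 0 and n
g_by_weight = [0, 1, 0, 1, 0]
g = {x: g_by_weight[sum(x)] for x in points}

for k in range(n + 1):
    slice_k = [x for x in points if sum(x) == k]
    ones_f  = sum(f[x] for x in slice_k)
    ones_fg = sum(f[x] ^ g[x] for x in slice_k)
    # g is constant on each slice, so the ones-count is either unchanged
    # or complemented -- balancedness on the slice is preserved
    assert ones_fg in (ones_f, len(slice_k) - ones_f)
print("per-slice balancedness is invariant")
```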
Shift-invariance Robustness of Convolutional Neural Networks in Side-channel Analysis
Convolutional neural networks (CNNs) offer unrivaled performance in profiling side-channel analysis. This claim is corroborated by numerous results where CNNs break targets protected with masking and hiding countermeasures. One hiding countermeasure is commonly investigated in related works: desynchronization (misalignment). The conclusions usually state that CNNs can break desynchronization because they are shift-invariant. This paper investigates that claim in more detail and reveals that the situation is more complex. While CNNs possess a degree of shift-invariance, it is insufficient for commonly encountered scenarios in deep learning-based side-channel analysis.
We propose using data augmentation to improve shift-invariance and, in a more powerful version, ensembles of data augmentation. Our results show that the proposed techniques work very well and improve the attack significantly, even by an order of magnitude.
A Digital Identity in the Hands of Swiss Citizens
The Swiss law on electronic identity (LSIE) was rejected on March 7, 2021. Its opponents accused it of involving private companies which could thus collect citizens' data and store them centrally. Six motions with identical wording were tabled on March 10, 2021: they all ask the Swiss Federal Council to set up a state-run system allowing citizens to prove their identity online in complete confidence. They stipulate that only necessary information is collected and stored in a decentralized manner. The Swiss Federal Council recommended on May 26, 2021 that Parliament approve these motions, and wishes to propose a new e-ID solution responding to citizens' concerns as soon as possible. The Federal Department of Justice and Police has been asked to draw up a first draft presenting several technical solutions and specifying their respective costs. Following the publication of a working document on September 2, 2021, a public consultation was opened. It ended on October 14, 2021, with a public debate organized at the Government House in Bern and broadcast live on a virtual platform. Self-Sovereign Identity (SSI) is one of the solutions identified during this process. It gives citizens control of their electronic identity: they hold credentials issued by public administrations and choose the data they wish to disclose when they authenticate with a service (they can, for example, prove that they are over 18 without revealing their exact date of birth).
We propose here a decentralized and user-centric e-ID system based on SSI principles. Our solution embraces an open-source philosophy, fostering transparency and community involvement. We employ blockchain technology as a design pattern to establish trust and ensure the immutability of identity-related data. By design, our solution ensures the right to be forgotten by exclusively storing the digests of verifiable credentials on the blockchain. To demonstrate the feasibility and effectiveness of our SSI solution, we have developed a proof of concept leveraging the Partisia blockchain.
$\textsf{Asterisk}$: Super-fast MPC with a Friend
Secure multiparty computation (MPC) enables privacy-preserving collaborative computation over sensitive data held by multiple mutually distrusting parties. Unfortunately, in the most natural setting where a majority of the parties are maliciously corrupt (also called the $\textit{dishonest majority}$ setting), traditional MPC protocols incur high overheads and offer weaker security guarantees than are desirable for practical applications. In this paper, we explore the possibility of circumventing these drawbacks and achieving practically efficient dishonest majority MPC protocols with strong security guarantees by assuming an additional semi-honest, non-colluding helper party $\mathrm{HP}$. We believe that this is a more realistic alternative to assuming an honest majority, since many real-world applications of MPC involving potentially large numbers of parties (such as dark pools) are typically enabled by a central governing entity that can be modeled as the $\mathrm{HP}$.
In the above model, we are the first to design, implement and benchmark a practically-efficient and general multi-party framework, $\textsf{Asterisk}$. Our framework requires invoking $\mathrm{HP}$ only a constant number of times, achieves the strong security guarantee of $\textit{fairness}$ (either all parties learn the output or none do), scales to hundreds of parties, outperforms all existing dishonest majority MPC protocols, and is, in fact, competitive with state-of-the-art honest majority MPC protocols. Our experiments show that $\textsf{Asterisk}$ achieves $228-288\times$ speedup in preprocessing as compared to the best dishonest majority MPC protocol. With respect to online time, $\textsf{Asterisk}$ supports $100$-party evaluation of a circuit with $10^6$ multiplication gates in approximately $20$ seconds. We also implement and benchmark practically efficient and highly scalable dark pool instances using $\textsf{Asterisk}$. The corresponding run times showcase the effectiveness of $\textsf{Asterisk}$ in enabling efficient realizations of real-world privacy-preserving applications with strong security guarantees.
Quantum Money from Abelian Group Actions
We give a construction of public key quantum money, and even a strengthened version called quantum lightning, from abelian group actions, which can in turn be constructed from suitable isogenies over elliptic curves. We prove security in the generic group model for group actions under a plausible computational assumption, and develop a general toolkit for proving quantum security in this model. Along the way, we explore knowledge assumptions and algebraic group actions in the quantum setting, finding significant limitations of these assumptions/models compared to generic group actions.
White-Box Block Cipher Implementation Based on LS-Design
Protecting secret keys from malicious observers in untrusted environments is a critical security issue. White-box cryptography offers software protection by hiding the key in the white-box setting. One method of hiding the key in the cipher code is through encoding methods. Unfortunately, encoding methods may be vulnerable to algebraic attacks and side-channel analysis. Another technique to hide the key is the (M,Z)-space-hardness approach, which conceals the key in a large lookup table generated with a reliable small block cipher. In (M,Z)-space-hard algorithms, the key extraction problem in the white-box setting turns into a key recovery problem in the black-box setting. One open problem for (M,Z)-space-hard algorithms is improving run-time performance. In this study, we aim to improve the run-time performance of existing white-box implementations. We propose an LS-design-based white-box algorithm with better run-time performance than the space-hard SPNbox algorithm. Moreover, we design an LS-design-based table creation method. Comparing the run-time performance of our method with the SPNbox algorithm, we obtain a 28% improvement for the white-box implementation and 27% for the black-box implementation at 128-bit block size. The LS-design-based method is also applied to a 256-bit block size in the white-box setting.
Chosen-Key Distinguishing Attacks on Full AES-192, AES-256, Kiasu-BC, and More
At CRYPTO 2020, Liu et al. found that many differentials on Gimli are actually incompatible. Incompatibilities also exist in the related-key differentials of AES and have been handled in different ad-hoc ways by adding respective constraints to the search models. However, such ad-hoc methods are insufficient to rule out all incompatibilities and may still output false-positive related-key differentials. At CRYPTO 2022, a new approach combining a Constraint Programming (CP) tool with a triangulation algorithm was proposed to search for rebound attacks against AES-like hashing. In this paper, we combine and extend these techniques to create a uniform related-key differential search model, which can not only generate related-key differentials on AES and similar ciphers but also immediately verify the existence of at least one key pair fulfilling the differentials. With this automatic tool, we find new related-key differentials on full-round AES-192, AES-256, Kiasu-BC, and round-reduced Deoxys-BC. Based on these findings, we present full-round limited-birthday chosen-key distinguishing attacks on AES-192, AES-256, and Kiasu-BC, as well as the first chosen-key distinguisher on reduced Deoxys-BC. Furthermore, a limited-birthday distinguisher on 9-round Kiasu-BC with practical complexities is found for the first time.
Round Optimal Fully Secure Distributed Key Generation
Protocols for distributed (threshold) key generation (DKG) in the discrete-logarithm setting have received a tremendous amount of attention in the past few years. Several synchronous DKG protocols have been proposed, but most such protocols are not fully secure: they either allow corrupted parties to bias the key, or are not robust and allow malicious parties to prevent successful generation of a key.
We explore the round complexity of fully secure DKG in the honest-majority setting where it is feasible. We show the impossibility of one-round, unbiased DKG protocols (even satisfying weaker notions of security), regardless of any prior setup. On the positive side, we show various round-optimal protocols for fully secure DKG offering tradeoffs in terms of their efficiency, necessary setup, and required assumptions.
Properties of Lattice Isomorphism as a Cryptographic Group Action
In recent years, the Lattice Isomorphism Problem (LIP) has served as an underlying assumption to construct quantum-resistant cryptographic primitives, e.g. the zero-knowledge proof and digital signature scheme by Ducas and van Woerden (Eurocrypt 2022), and the HAWK digital signature scheme (Asiacrypt 2022).
While prior lines of work in group action cryptography, e.g. the works of Brassard and Yung (Crypto 1990), and more recently Alamati, De Feo, Montgomery and Patranabis (Asiacrypt 2020), focused on studying the discrete logarithm problem and isogeny-based problems in the group action framework, in recent years this framing has been used for studying the cryptographic properties of computational problems based on the difficulty of determining equivalence between algebraic objects. Examples include Permutation and Linear Code Equivalence Problems used in LESS (Africacrypt 2020), and the Tensor Isomorphism Problem (TCC 2019). This study delves into the quadratic form version of LIP, examining it through the lens of group actions.
In this work we (1) give formal definitions and study the cryptographic properties of this group action (LIGA), (2) demonstrate that LIGA lacks both weak unpredictability and weak pseudorandomness, and (3) under certain assumptions, establish a theoretical trade-off between time complexity and the required number of samples for breaking weak unpredictability, for large dimensions. We also conduct experiments supporting our analysis. Additionally, we employ our findings to formulate new hard problems on quadratic forms.
The wrong use of FESTA trapdoor functions leads to an adaptive attack
Isogeny-based cryptography is one of the candidates for post-quantum cryptography. In 2023, attacks based on Kani's theorem broke the isogeny-based scheme SIDH, which had been considered a promising post-quantum scheme. Although Kani's theorem damaged isogeny-based cryptography, researchers have been exploring constructive applications of the theorem. The FESTA trapdoor function is an isogeny-based trapdoor function and one such attempt to apply Kani's theorem to cryptography.
This paper shows that there is an adaptive attack on a FESTA-based scheme if the scheme does not check the correctness of the input matrix. Our attack does not apply to the IND-CCA PKE schemes named FESTA proposed in the original FESTA paper. We provide an adaptive attack on a FESTA trapdoor function using a specific oracle, which reveals the secret key of the function. This oracle may be constructible if the FESTA trapdoor function is used in the wrong way (\textit{i.e.}, without checking the input matrix). As an example, we explain how our attack applies to a possible PKE scheme based on a FESTA trapdoor function used in this wrong way.
On Derandomizing Yao's Weak-to-Strong OWF Construction
The celebrated result of Yao (FOCS'82) shows that concatenating $n\cdot p(n)$ copies of a weak one-way function (OWF) $f$, which can be inverted with probability $1-\tfrac{1}{p(n)}$, yields a strong OWF $g$, showing that weak and strong OWFs are black-box equivalent. Yao's transformation is not security-preserving, i.e., the input to $g$ needs to be much larger than the input to $f$. Understanding whether a larger input is inherent is a long-standing open question.
In this work, we explore necessary features of constructions which achieve short input length by proving the following: for any direct product construction of a strong OWF $g$ from a weak OWF $f$, which can be inverted with probability $1-\tfrac{1}{p(n)}$, the input size of $g$ must grow as $\Omega(p(n))$. Here, direct product refers to the following structure: the construction $g$ executes some arbitrary pre-processing function (independent of $f$) on its input $s$, obtaining a vector $(x_1, \cdots, x_l)$, and outputs $f(x_1), \cdots, f(x_l)$. Setting the pre-processing to be the identity thus recovers Yao's construction.
Our result generalizes to functions $g$ with post-processing, as long as the post-processing function is not too lossy. Thus, in essence, any weak-to-strong OWF hardness amplification must either (1) be very far from security-preserving, (2) use adaptivity, or (3) must be very far from a direct-product structure (in the sense that post-processing of the outputs of $f$ is very lossy).
On a technical level, we use ideas from lower bounds for secret-sharing to prove the impossibility of derandomizing Yao in a black-box way. Our results are in line with Goldreich, Impagliazzo, Levin, Venkatesan, and Zuckerman (FOCS 1990) who derandomize Yao's construction for regular weak OWFs by evaluating the OWF along a random walk on an expander graph – the construction is adaptive, since it alternates steps on the expander graph with evaluations of the weak OWF.
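The direct-product structure described above can be sketched as follows, with a toy stand-in for the weak OWF $f$ and identity pre-processing (a truncated hash is an illustration only, not an actual weak OWF):

```python
import hashlib

# Toy stand-in for a weak OWF f (illustrative only -- a real weak OWF
# would be hard to invert except with probability 1 - 1/p(n)).
def f(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()[:8]

# Direct-product amplification with identity pre-processing (Yao):
# split the input s into l blocks x_1..x_l and output f(x_1),...,f(x_l).
# Inverting g requires inverting f on *every* block simultaneously,
# which is why the success probability of an inverter drops sharply.
def g(s: bytes, block: int = 4) -> bytes:
    assert len(s) % block == 0
    chunks = [s[i:i + block] for i in range(0, len(s), block)]
    return b"".join(f(x) for x in chunks)

out = g(b"sixteen byte in.")  # 16-byte input -> l = 4 blocks
print(len(out))  # 32: 4 blocks times 8 bytes per f-output
```

Note how the input of $g$ is $l$ times longer than that of $f$; the result above says any such direct product needs $l = \Omega(p(n))$.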
Bulletproofs With Stochastic Equation Sets
Bulletproofs are general-purpose Zero Knowledge Proof protocols that allow a Prover to demonstrate to a Verifier knowledge of a solution to a set of equations, represented as a Rank 1 Constraint System.
We present a protocol extending the standard Bulletproof protocol, which introduces a second layer of interactivity by allowing the Verifier to choose the set of equations after the Prover has already committed to portions of the solution.
We show that such Verifier-chosen (or stochastically chosen) equation sets can be used to design smaller equation sets with fewer variables that have the same proving power as their larger, deterministic counterparts but are, in practice, orders of magnitude faster in both proof generation and proof verification, and even reduce the size of the resulting proofs. We demonstrate this with an example from a real-world application.
Our method maintains all the classical benefits of the Bulletproof approach: efficient proof generation, efficient proof checking, extremely short proofs, and the ability to use Fiat-Shamir challenges in order to turn an interactive proof into a non-interactive proof.
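For readers unfamiliar with the representation: a Rank-1 Constraint System requires, for each constraint $i$, that $\langle A_i, s\rangle \cdot \langle B_i, s\rangle = \langle C_i, s\rangle$ over a field, where $s$ is the witness vector. A minimal sketch with an illustrative one-constraint system for the statement $y = x^2$ (the tiny prime is for readability only):

```python
p = 97  # small prime field for illustration

def dot(row, s):
    return sum(r * v for r, v in zip(row, s)) % p

# each constraint i requires (A_i . s) * (B_i . s) == (C_i . s)  (mod p)
def r1cs_satisfied(A, B, C, s):
    return all(dot(a, s) * dot(b, s) % p == dot(c, s)
               for a, b, c in zip(A, B, C))

# witness s = (1, x, y) for the statement "y = x^2", here x = 5, y = 25
s = [1, 5, 25]
A = [[0, 1, 0]]   # selects x
B = [[0, 1, 0]]   # selects x
C = [[0, 0, 1]]   # selects y
print(r1cs_satisfied(A, B, C, s))  # True
```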
Security-Performance Tradeoff in DAG-based Proof-of-Work Blockchain Protocols
Proof-of-work (PoW) blockchain protocols based on directed acyclic graphs (DAGs) have demonstrated superior transaction confirmation performance compared to their chain-based predecessors. However, it is uncertain whether their security deteriorates in high-throughput settings similar to their predecessors, because their acceptance of simultaneous blocks and complex block dependencies presents challenges for rigorous security analysis.
We address these challenges by analyzing DAG-based protocols via a congestible blockchain model (CBM), a general model that allows case-by-case upper bounds on the block propagation delay, rather than a uniform upper bound as in most previous analyses. CBM allows us to capture two key phenomena of high-throughput settings: (1) simultaneous blocks increase each other's propagation delay, and (2) a block can be processed only after receiving all the blocks it refers to. We further devise a reasonable adversarial block propagation strategy in CBM, called the late-predecessor attack, which exploits block dependencies to delay the processing of honest blocks. We then evaluate the security and performance of Prism and OHIE, two DAG-based protocols that aim to break the security-performance tradeoff, in the presence of an attacker capable of launching the late-predecessor attack. Our results show that these protocols suffer from reduced security and extended latency in high-throughput settings similar to their chain-based predecessors.
Building Hard Problems by Combining Easy Ones
In this work, we initiate a new conceptual line of attack on the fundamental question of how to generate hard problems. Motivated by the need for one-way functions in cryptography, we propose an information-theoretic framework to study the question of generating new provably hard one-way functions by composing functions that are easy to invert and evaluate, where each such easy function is modeled as a random oracle paired with another oracle that implements an inverse function.
Moving a Step of ChaCha in Syncopated Rhythm
The stream cipher ChaCha is one of the most widely used ciphers in the real world, appearing in protocols such as TLS and SSH. In this paper, we study the security of ChaCha via differential cryptanalysis based on probabilistic neutrality bits (PNBs). We introduce the \textit{syncopation} technique for the PNB-based approximation in the backward direction, which significantly amplifies its correlation by exploiting the ARX structure. By virtue of this technique, we present a new and efficient method for finding a good set of PNBs. A refined framework of key-recovery attack is then formalized for round-reduced ChaCha. The new techniques allow us to break 7.5 rounds of ChaCha without the last XOR and rotation, as well as to mount faster attacks on 6 and 7 rounds of ChaCha.
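The ARX structure referred to above consists only of modular Addition, Rotation, and XOR. For reference, the ChaCha quarter-round chains these three operations (rotation constants per RFC 8439):

```python
MASK = 0xFFFFFFFF  # all arithmetic is on 32-bit words

def rotl(v, n):
    """32-bit left rotation -- the 'R' in ARX."""
    return ((v << n) | (v >> (32 - n))) & MASK

def quarter_round(a, b, c, d):
    """One ChaCha quarter-round: only additions, XORs, and rotations."""
    a = (a + b) & MASK; d = rotl(d ^ a, 16)
    c = (c + d) & MASK; b = rotl(b ^ c, 12)
    a = (a + b) & MASK; d = rotl(d ^ a, 8)
    c = (c + d) & MASK; b = rotl(b ^ c, 7)
    return a, b, c, d

# sample words, just to exercise the round function
state = quarter_round(0x11111111, 0x01020304, 0x9B8D6F43, 0x01234567)
print(all(0 <= w <= MASK for w in state))  # True: words stay 32-bit
```

The absence of S-boxes is what makes the per-bit probabilistic-neutrality analysis of PNB-based attacks tractable.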
On One-way Functions and the Worst-case Hardness of Time-Bounded Kolmogorov Complexity
Whether one-way functions (OWFs) exist is arguably the most important problem in Cryptography, and beyond. While lots of candidate constructions of one-way functions are known, and recently also problems whose average-case hardness characterizes the existence of OWFs have been demonstrated, the question of whether there exists some \emph{worst-case hard problem} that characterizes the existence of one-way functions has remained open since their introduction in 1976.
In this work, we present the first ``OWF-complete'' promise problem---a promise problem whose worst-case hardness w.r.t. $\mathsf{BPP}$ (resp. $\mathsf{P}/\mathsf{poly}$) is \emph{equivalent} to the existence of OWFs secure against PPT (resp. non-uniform PPT) algorithms. The problem is a variant of the Minimum Time-bounded Kolmogorov Complexity problem ($\mathsf{MK}^{t}\mathsf{P}[s]$ with a threshold $s$), where we condition on instances having small ``computational depth''.
We furthermore show that, depending on the choice of the threshold $s$, this problem characterizes either ``standard'' (polynomially-hard) OWFs, or quasi-polynomially- or subexponentially-hard OWFs. Additionally, when the threshold is sufficiently small (e.g., $2^{O(\sqrt{n})}$ or $\mathrm{poly}\log n$), \emph{sublinear} hardness of this problem suffices to characterize quasi-polynomial/sub-exponential OWFs.
While our constructions are black-box, our analysis is \emph{non-black-box}; we additionally demonstrate that fully black-box constructions of OWFs from the worst-case hardness of this problem are impossible.
We finally show that, under Rudich's conjecture and standard derandomization assumptions, our problem is not inside $\mathsf{coAM}$; as such, it yields the first candidate problem believed to be outside of $\mathsf{AM} \cap \mathsf{coAM}$, or even ${\bf SZK}$, whose worst-case hardness implies the existence of OWFs.
Fuzzy Deduplication Scheme Supporting Pre-verification of Label Consistency
Efficiently and securely removing redundant encrypted data across users in the cloud is challenging. Convergent encryption (CE) has difficulty resisting dictionary attacks because of its deterministic tag. Server-aided mechanisms resist such attacks but are subject to possible collusion. Focusing on multimedia data, this paper proposes an efficient and secure fuzzy deduplication system without any additional servers. We also propose the notion of pre-verification of label consistency to compensate for irreparable post-verification loss. Compared with other fuzzy deduplication schemes, our work has apparent advantages in deduplication efficiency and security on a natural data set.
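The dictionary attack enabled by CE's deterministic tags can be sketched with a toy construction (the key and tag derivations below are illustrative stand-ins, not the paper's scheme):

```python
import hashlib

# Convergent encryption (toy): key and tag are derived from the data
# itself, so identical plaintexts yield identical tags -- which is
# exactly what enables cross-user deduplication.
def ce_tag(data: bytes) -> bytes:
    key = hashlib.sha256(data).digest()          # key = H(data)
    return hashlib.sha256(key + data).digest()   # deterministic tag

stored_tag = ce_tag(b"salary: 50000")

# Dictionary attack: an observer of tags tries candidate plaintexts
# from a small message space until the tags match.
candidates = [b"salary: %d" % v for v in range(40000, 60000, 1000)]
recovered = next(m for m in candidates if ce_tag(m) == stored_tag)
print(recovered)  # b'salary: 50000'
```

Determinism is unavoidable for deduplication, which is why server-aided (or, as here, pre-verified fuzzy) designs are needed for low-entropy data.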
A Side-Channel Attack on a Masked Hardware Implementation of CRYSTALS-Kyber
NIST has recently selected CRYSTALS-Kyber as a new public key encryption and key establishment algorithm to be standardized. This makes it important to evaluate the resistance of CRYSTALS-Kyber implementations to side-channel attacks. Software implementations of CRYSTALS-Kyber have already been thoroughly analysed. The discovered vulnerabilities helped improve the subsequently released versions and promoted stronger countermeasures against side-channel attacks. In this paper, we present the first attack on a protected hardware implementation of CRYSTALS-Kyber. We demonstrate a practical message (shared key) recovery attack on the first-order masked FPGA implementation of Kyber-512 by Kamucheka et al. (2022) using power analysis based on the Hamming distance leakage model. The presented attack exploits a vulnerability located in the masked message decoding procedure which is called during the decryption step of the decapsulation. The message recovery is performed using a profiled deep learning-based method which extracts the message directly, without extracting each share explicitly. By repeating the same decapsulation process multiple times, it is possible to increase the success rate of full shared key recovery to 99%.
Keyed Sum of Permutations: a simpler RP-based PRF
Idealized constructions in cryptography prove the security of a primitive based on the security of another primitive.
The challenge of building a pseudorandom function (PRF) from a random permutation (RP) has only been recently tackled by Chen, Lambooij and Mennink [CRYPTO 2019] who proposed Sum of Even-Mansour (SoEM) with a provable beyond-birthday-bound security.
In this work, we revisit the challenge of building a PRF from an RP.
On the one hand, we describe Keyed Sum of Permutations (KSoP) that achieves the same provable security as SoEM while being strictly simpler since it avoids a key addition but still requires two independent keys and permutations.
On the other hand, we show that it is impossible to further simplify the scheme by deriving the two keys with a simple linear key schedule as it allows a non-trivial birthday-bound key recovery attack.
The birthday-bound attack is mostly information-theoretic, but it can be optimized to run faster than a brute-force attack.
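A minimal sketch of the construction, assuming KSoP has the shape $\pi_1(x \oplus k_1) \oplus \pi_2(x \oplus k_2)$, i.e., SoEM without the outer key addition; the toy 8-bit domain and the fixed keys are for illustration only:

```python
import random

N = 256  # toy domain: 8-bit blocks (illustration only)

def random_permutation(seed):
    """Model a public random permutation as a seeded shuffle table."""
    rng = random.Random(seed)
    table = list(range(N))
    rng.shuffle(table)
    return table

P1, P2 = random_permutation(1), random_permutation(2)
k1, k2 = 0x3A, 0xC5  # two independent keys

# Assumed shape of KSoP (sketch): XOR-sum of two independently keyed
# public permutations, with no output key addition as in SoEM.
def ksop(x):
    return P1[x ^ k1] ^ P2[x ^ k2]

# The XOR of two permutations is a function, not a permutation:
# collisions occur, which is the PRF (rather than PRP) behaviour.
outputs = [ksop(x) for x in range(N)]
print(len(set(outputs)) < N)
```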
Intmax2: A ZK-rollup with Minimal Onchain Data and Computation Costs Featuring Decentralized Aggregators
We present a blockchain scaling solution called Intmax2, which is a Zero-Knowledge rollup (ZK-rollup) protocol with stateless and permissionless block production, while minimizing the usage of data and computation on the underlying blockchain. Our architecture distinctly diverges from existing ZK-rollups since essentially all of the data and computational costs are shifted to the client-side as opposed to imposing heavy requirements on the block producers or the underlying Layer 1 blockchain. The only job for block producers is to periodically generate a commitment to a set of transactions, distribute inclusion proofs to each sender, and collect and aggregate signatures by the senders. This design allows permissionless and stateless block production, and is highly scalable with the number of users. We give a proof of the main security property of the protocol, which has been formally verified by the Nethermind Formal Verification Team in the Lean theorem prover.
Arithmetization-Oriented APN Functions
Recently, many cryptographic primitives such as homomorphic encryption (HE), multi-party computation (MPC) and zero-knowledge (ZK) protocols have been proposed in the literature which operate on the prime field $\mathbb{F}_p$ for some large prime $p$. Primitives designed using such operations are called arithmetization-oriented primitives. As the concept of arithmetization-oriented primitives is new, a rigorous cryptanalysis of such primitives is yet to be done. In this paper, we investigate arithmetization-oriented APN functions. More precisely, we investigate APN permutations in the CCZ-classes of known families of APN power functions over the prime field $\mathbb{F}_p$. Moreover, we present a new class of APN binomials over $\mathbb{F}_q$ obtained by modifying the planar function $x^2$ over $\mathbb{F}_q$. We also present a class of binomials having differential uniformity at most $5$ defined via the quadratic character over finite fields of odd characteristic, and we give sufficient conditions for this family of binomials to be a permutation. Computationally, it is confirmed that the latter family contains new APN functions for some small parameters. We conjecture that it contains an infinite subfamily of APN functions.
ACORN-QRE: Specification and Analysis of a Method of Generating Secure One-time Pads for Use in Encryption
REAMC Report-007(2023)
Roy S Wikramaratna (email: rwikramaratna@gmail.com)
The Additive Congruential Random Number (ACORN) generator is straightforward to implement; it has been demonstrated in previous papers to give rise to sequences with long period which can be proven from theoretical considerations to approximate to being uniform in up to k dimensions (for any given k).
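The classical ACORN recurrence is indeed straightforward: each of $k$ accumulators adds its left neighbour modulo $M$, and the $k$-th accumulator, scaled by $M$, is the output. The sketch below shows only the unmodified generator (the QRE modification is described in the report; the order, modulus, and seed here are arbitrary illustrative choices):

```python
# Classical order-k ACORN over modulus M:
#   Y[0] stays fixed (the seed, odd and < M), and at each step
#   Y[m] <- (Y[m-1] + Y[m]) mod M   for m = 1..k;
# the next variate is Y[k] / M.
M = 2 ** 60
k = 10
Y = [1234567891] + [0] * k  # seed must be odd and less than M

def acorn_next():
    for m in range(1, k + 1):
        Y[m] = (Y[m - 1] + Y[m]) % M
    return Y[k] / M  # approximately uniform variate in [0, 1)

sample = [acorn_next() for _ in range(5)]
print(all(0.0 <= x < 1.0 for x in sample))  # True
```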
The ACORN-QRE algorithm is a straightforward modification of ACORN which effectively avoids the linearity of the original algorithm, while preserving the uniformity of the modified sequence. It provides a new method for generating one-time pads that are resistant to attack either by current computers or by future computing developments, including quantum computers. The pads can use any alphabet (including both binary and alphanumeric) and can be used with a Vernam-type cipher to securely encrypt both files and communications.
This report explains how the ACORN-QRE algorithm works and provides evidence for the claim that the resulting one-time pads are inherently not susceptible to cryptanalysis and that they will remain secure against foreseeable developments in computing, including the potential development of quantum computers.
The ACORN-QRE algorithm is patented in the UK under Patent No. GB2591467; patent applied for in the US under Application No. 17/795632. The patents are owned by REAMC Limited, 4 Nuthatch Close, Poole, Dorset BH17 7XR, United Kingdom
Foundations of Data Availability Sampling
Towards building more scalable blockchains, an approach known as data availability sampling (DAS) has emerged over the past few years.
Even large blockchains like Ethereum are planning to eventually deploy DAS to improve their scalability.
In a nutshell, DAS allows the participants of a network to ensure the full availability of some data without any one participant downloading it entirely.
Despite the significant practical interest that DAS has received, there are currently no formal definitions for this primitive, no security notions, and no security proofs for any candidate constructions.
For a cryptographic primitive that may end up being widely deployed in large real-world systems, this is a rather unsatisfactory state of affairs.
In this work, we initiate a cryptographic study of data availability sampling.
To this end, we define data availability sampling precisely as a clean cryptographic primitive.
Then, we show how data availability sampling relates to erasure codes.
We do so by defining a new type of commitment scheme that naturally generalizes vector commitments and polynomial commitments.
Using our framework, we analyze existing constructions and prove them secure.
In addition, we give new constructions which are based on weaker assumptions, are computationally more efficient, and do not rely on a trusted setup, at the cost of slightly larger communication complexity.
Finally, we evaluate the trade-offs of the different constructions.
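The core sampling idea can be illustrated numerically: once the data is erasure-coded (say at rate 1/2), an adversary must withhold more than half of the coded symbols to make reconstruction fail, so each uniformly random query detects unavailability with probability above 1/2, and a handful of queries detect it almost surely. The sketch below is an illustrative simulation with made-up parameters, not a construction from this work.

```python
import random

def detection_prob(n_symbols, n_withheld, n_samples, trials=2000, seed=7):
    # Empirical probability that at least one of n_samples uniformly
    # random queries lands on a withheld symbol (symbols 0..n_withheld-1).
    rng = random.Random(seed)
    hits = sum(
        any(rng.randrange(n_symbols) < n_withheld for _ in range(n_samples))
        for _ in range(trials)
    )
    return hits / trials

# With 512 coded symbols at rate 1/2, blocking reconstruction requires
# withholding at least 257 symbols; 30 samples then all miss with
# probability (255/512)**30, i.e. below 1e-9.
```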
Bypassing Android isolation with fuel gauges: new risks with advanced power ICs
Efficient power management is critical for embedded devices, both for extending their lifetime and ensuring safety. However, this can be a challenging task due to the unpredictability of the batteries commonly used in such devices. To address this issue, dedicated Integrated Circuits known as "fuel gauges" are often employed outside of the System-On-Chip. These devices provide various metrics about the available energy source and are highly accurate. However, their precision can also be exploited by malicious actors to compromise platform confidentiality if the Operating System fails to intervene. Depending on the fuel gauge and OS configuration, several attack scenarios are possible. In this article, we focus on Android and demonstrate how it is possible to bypass application isolation to recover PINs entered in other processes.
Taming Adaptivity in YOSO Protocols: The Modular Way
YOSO-style MPC protocols (Gentry et al., Crypto'21) are a promising framework where the overall computation is partitioned into small, short-lived pieces, delegated to subsets of one-time stateless parties. Such protocols let one reap the security benefits of a large community of participants, where "mass corruption" of a large fraction of participants is considered unlikely, while keeping the computational and communication costs manageable. However, fully realizing and analyzing YOSO-style protocols has proven to be challenging: while different components have been defined and realized in various works, there is a dearth of protocols that have reasonable efficiency and enjoy full end-to-end security against adaptive adversaries.
The YOSO model separates the protocol design, specifying the short-lived responsibilities, from the mechanisms assigning these responsibilities to machines participating in the computation. These protocol designs must then be translated to run directly on the machines, while preserving security guarantees. We provide a versatile and modular framework for analyzing the security of YOSO-style protocols, and show how to use it to compile any protocol design that is secure against static corruption of $t$ out of $c$ parties into protocols that withstand adaptive corruption of $T$ out of $N$ machines (where $T/N$ is closely related to $t/c$; specifically, when $t/c<0.5$, we tolerate $T/N \leq 0.29$) at overall communication cost comparable to that of the traditional protocol even when $c \ll N$.
Furthermore, we demonstrate how to minimize the use of costly non-committing encryption, thereby keeping the computational and communication overhead manageable even in practical terms, while still providing an end-to-end security analysis. Combined with existing approaches for transforming stateful protocols into stateless ones while preserving static security (e.g., Gentry et al.'21, Kolby et al.'22), we obtain end-to-end security.
Non-Interactive Threshold BBS+ From Pseudorandom Correlations
The BBS+ signature scheme is one of the most prominent solutions for realizing anonymous credentials. Its prominence is due to properties like selective disclosure and efficient protocols for creating and showing possession of credentials. Traditionally, a single credential issuer produces BBS+ signatures, which poses significant risk as a single point of failure.
In this work, we address this threat via a novel $t$-out-of-$n$ threshold BBS+ protocol. Our protocol supports an arbitrary security threshold $t \leq n$ and works in the so-called preprocessing setting. In this setting, we achieve non-interactive signing in the online phase and communication sublinear in the number of signatures in the offline phase, which, as we show in this work, are important features from a practical point of view. As it stands today, none of the widely studied threshold signature schemes, such as threshold ECDSA and threshold Schnorr, achieves both properties simultaneously. In this work, we make the observation that presignatures can be computed directly from pseudorandom correlations, which allows servers to create signature shares without additional cross-server communication. Both our offline and online protocols are actively secure in the Universal Composability model. Finally, we evaluate the concrete efficiency of our protocol, including an implementation of the online phase and of the expansion algorithm of the pseudorandom correlation generator (PCG) used during the offline phase. The online protocol without network latency takes less than $14$ ms for $t \leq 30$ and credential sizes up to $10$. Further, our results indicate that the influence of $t$ on online signing is insignificant, $\leq 6\%$ for $t \leq 30$, and that the overhead of thresholdization occurs almost exclusively in the offline phase. Our implementation of the PCG expansion shows that even for a committee size of $10$ servers, each server can expand a correlation of up to $2^{17}$ presignatures in less than $100$ ms per presignature.
Streebog as a Random Oracle
The random oracle model is an instrument for proving that a protocol has no structural flaws when making do with standard hash properties is impossible or fairly difficult. In practice, however, random oracles have to be instantiated with specific hash functions, which are not random oracles. Hence, in the real world, an adversary has broader capabilities than those considered in the random oracle proof: it can exploit the peculiarities of a specific hash function to achieve its goal. When a hash function is based on some building block, one can go further and show that even if the adversary has access to that building block, the hash function still behaves like a random oracle under some assumptions about the building block. Thereby, the protocol can be proved secure against more powerful adversaries under less complex assumptions. The indifferentiability notion formalizes this approach.
In this paper we study whether Streebog, a Russian standardized hash function, can instantiate a random oracle from that point of view. We prove that Streebog is indifferentiable from a random oracle under an ideal cipher assumption for the underlying block cipher.
From MLWE to RLWE: A Differential Fault Attack on Randomized & Deterministic Dilithium
The post-quantum digital signature scheme CRYSTALS-Dilithium has recently been selected by NIST for standardization. Implementing CRYSTALS-Dilithium, and other post-quantum cryptography schemes, on embedded devices raises a new set of challenges, including performance in terms of speed and memory requirements, but also security against side-channel and fault injection attacks. In this work, we investigate the latter and describe a differential fault attack on the randomized and deterministic versions of CRYSTALS-Dilithium. Notably, the attack requires only a few instruction skips and reduces the MLWE problem that Dilithium is based on to a smaller RLWE problem, which can be practically solved with lattice reduction techniques. Accordingly, we demonstrate key recoveries using hints about the secret keys extracted from the same faulted signatures, via the LWE-with-side-information framework introduced by Dachman-Soled et al. at CRYPTO'20. As a final contribution, we propose algorithmic countermeasures against this attack and, in particular, show that the second one can be parameterized to induce only a negligible overhead on signature generation.
The Reality of Backdoored S-Boxes - An Eye Opener
The analysis of real-life incidents has revealed that state-level efforts are made to camouflage intentional flaws in the mathematical layer of an S-Box and exploit its information-theoretic properties, as exemplified by Kuznyechik. To extract and investigate the common features of backdoored S-Box(es), this research thoroughly examines them from the perspective of 24 cryptanalytic attack vectors available in the open literature. We debunk the earlier claims by the backdoor engineers that their designs are stealthy against statistical distinguishers. A backdoored architecture fulfils the notions of randomness but lacks the strength to resist sophisticated cryptanalytic attacks. Our analysis reveals that during the backdoor insertion phase, a malicious designer compromises vital cryptographic properties, prominently the algebraic degree, differential trails, and avalanche characteristics, leaving open ground for hybrid attacks. We observe that these mappings attain the upper bound of the BCT, FBCT and DLCT, thus paving the way for hybrid attacks with high probability.