All papers in 2021 (Page 11 of 1705 results)
Bridging Machine Learning and Cryptanalysis via EDLCT
Machine learning aided cryptanalysis is an interesting but
challenging research topic. At CRYPTO'19, Gohr proposed a Neural
Distinguisher (ND) based on a plaintext difference.
The ND takes a ciphertext pair as input and outputs its class (a real or random ciphertext pair).
At EUROCRYPT'21, Benamira et al. proposed a deeper analysis
of how two specific NDs against Speck32/64 work. However, there are
still three research gaps that researchers are eager to fill:
(1) what features related to a ciphertext pair are learned by the ND?
(2) how to explain various phenomena related to NDs?
(3) what else can machine learning do in conventional cryptanalysis?
In this paper, we fill these three research gaps: (1) we first propose
the Extended Differential-Linear Connectivity Table (EDLCT) which is
a generic tool describing a cipher. Features corresponding to the EDLCT
are designed to describe a ciphertext pair.
Based on these features, various machine learning-based distinguishers including the ND are built.
To explore various NDs from the EDLCT view, we propose a Feature Set
Sensitivity Test (FSST) to identify which features may have a significant influence on NDs.
Features identified by FSST share the same characteristic related to the cipher's round function.
Surrogate models of NDs are also built based on identified features.
Experiments on Speck32/64 and DES confirm that features corresponding to the EDLCT are learned
by NDs. (2) We explain phenomena related to NDs via EDLCT.
(3) We show how to use machine learning to search differential-linear
propagations ∆ → λ with a high correlation, which is a tough task in the
differential-linear attack. Applications in Chaskey and DES demonstrate
the advantages of machine learning.
Furthermore, we provide some optional inputs to improve NDs.
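For readers unfamiliar with the setup, the following is a rough sketch of how labelled ciphertext pairs are generated for a round-reduced Speck32/64 neural distinguisher (our illustration, not the paper's code; the input difference 0x0040/0x0000 is the one used by Gohr, while the round keys below are random stand-ins for the real key schedule):

```python
# Hypothetical sketch of Gohr-style training data for an ND on Speck32/64.
import random

MASK = 0xFFFF  # Speck32/64 operates on 16-bit words

def ror(x, r): return ((x >> r) | (x << (16 - r))) & MASK
def rol(x, r): return ((x << r) | (x >> (16 - r))) & MASK

def encrypt(x, y, round_keys):
    for k in round_keys:                  # one Speck round: rotate, add, xor
        x = ((ror(x, 7) + y) & MASK) ^ k
        y = rol(y, 2) ^ x
    return x, y

def labelled_pair(rounds=5, diff=(0x0040, 0x0000)):
    """Return ((c0, c1), label): 1 = real pair with input difference, 0 = random."""
    ks = [random.getrandbits(16) for _ in range(rounds)]
    x, y = random.getrandbits(16), random.getrandbits(16)
    c0 = encrypt(x, y, ks)
    if random.getrandbits(1):
        return (c0, encrypt(x ^ diff[0], y ^ diff[1], ks)), 1
    return (c0, (random.getrandbits(16), random.getrandbits(16))), 0
```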
IBM Digital Health Pass Whitepaper: A Privacy-Respectful Platform for Proving Health Status
IBM Digital Health Pass (IDHP) is a technology developed by IBM offering the technical infrastructure to allow individuals to prove their COVID-19-related health status (e.g., whether that individual has tested negative for COVID-19, has been partially or fully vaccinated, or has recovered from COVID-19) to third parties in a secure and privacy-respectful way.
In a nutshell, IBM Digital Health Pass technology enables issuers, i.e., authorised healthcare providers onboarded to the system by health authorities of a given country or jurisdiction, to produce digital attestations about individuals’ health status. These attestations, called Health Certificates, are issued to individuals, called subjects or holders, and are stored on a piece of paper or within subjects’ mobile phone wallets. Subjects can then demonstrate the authenticity of one or more of their Health Certificates to third parties of their choice, called verifiers, when the need to demonstrate COVID-19-related health status arises. Subjects can also demonstrate their association with each of their Health Certificates.
IBM Digital Health Pass is built around preserving individuals’ privacy as a first-class requirement, based on established public key cryptography concepts in a way that can easily scale to millions of Health Certificates.
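As a loose sketch of the issue/verify flow described above (Ed25519 via the Python cryptography package and the certificate fields are our illustrative assumptions; IDHP's actual formats and algorithms are not specified here):

```python
# Illustrative issue/verify flow; field names and Ed25519 are assumptions.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()      # held by an onboarded healthcare provider
issuer_pub = issuer_key.public_key()           # known to verifiers

cert = json.dumps({"subject": "holder-1234", "status": "fully vaccinated"}).encode()
signature = issuer_key.sign(cert)              # Health Certificate = cert + signature

issuer_pub.verify(signature, cert)             # raises InvalidSignature if tampered with
```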
Automatic Quantum Multi-collision Distinguishers and Rebound Attacks with Triangulation Algorithm
In EUROCRYPT 2020, Hosoyamada and Sasaki found that differential paths with probability $2^{-2n/3}$ can be useful in quantum collision attacks, vs. $2^{-n/2}$ for classical collision attacks. This observation led to attacks on more rounds of some AES-like hash functions. In this paper, we quantize the multi-collision distinguisher proposed by Biryukov, Khovratovich, and Nikolić at CRYPTO 2009, and propose quantum multi-collision distinguishers. We use a CP tool to automatically search for configurations for multi-collision distinguishers and rebound attacks, taking into account related-key/single-key differentials of the underlying block cipher. We apply our method to AES-like primitives, including the block ciphers AES, Rijndael, and Saturnin, and the AES hashing modes AES-DM and AES-HCF.
Symmetric Key Exchange with Full Forward Security and Robust Synchronization
We construct lightweight authenticated key exchange protocols based on pre-shared keys, which achieve full forward security and rely only on simple and efficient symmetric-key primitives. All of our protocols have rigorous security proofs in a strong security model, all have low communication complexity, and are particularly suitable for resource-constrained devices. We describe three protocols that apply linear key evolution to provide different performance and security properties. Correctness in parallel and concurrent protocol sessions is difficult to achieve for linearly key-evolving protocols, emphasizing the need for assurance of availability alongside the usual confidentiality and authentication security goals. We introduce synchronization robustness as a new formal security goal, which essentially guarantees that parties can re-synchronize efficiently. All of our new protocols achieve this property. Since protocols based on linear key evolution cannot guarantee that all concurrently initiated sessions successfully derive a key, we also propose two constructions with non-linear key evolution based on puncturable PRFs. These are instantiable from standard hash functions and require O( C log(|CTR|)) memory, where C is the number of concurrent sessions and |CTR| is an upper bound on the total number of sessions per party. These are the first protocols to simultaneously achieve full forward security, synchronization robustness, and concurrent correctness.
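To illustrate the linear key evolution idea behind forward security (a minimal sketch assuming SHA-256 as the derivation primitive; the paper's protocols are generic over symmetric primitives):

```python
# Minimal linear key evolution: each epoch key is derived one-way from the
# previous one, so erasing the old key makes past session keys unrecoverable.
import hashlib

def evolve(key: bytes) -> bytes:
    return hashlib.sha256(b"evolve" + key).digest()

def session_key(key: bytes, ctr: int) -> bytes:
    return hashlib.sha256(b"session" + key + ctr.to_bytes(8, "big")).digest()

k = bytes(32)        # pre-shared key (toy value)
sk0 = session_key(k, 0)
k = evolve(k)        # advance the epoch and securely erase the old k:
                     # a later compromise of k reveals nothing about sk0
```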
Multidimensional ModDiv public key exchange protocol
This paper presents the Multidimensional ModDiv public key exchange protocol, whose security is based on the hardness of an LWR problem instance consisting in finding a secret vector $\textbf{X}$ in $\mathbb{Z}_{q}^{n}$ knowing vectors $\textbf{A}$ and $\textbf{B}$ respectively in $\mathbb{Z}_{p}^{m}$ and $\mathbb{Z}_{p-q}^{m-n}$, where the elements of vector $\textbf{B}$ are defined as follows: $B(i) = \big(\sum_{j=1}^{n} A(i+n-j) \times X(j)\big) \,\mathrm{Mod}(P)\,\mathrm{Div}(Q)$.
Mod is integer modulo and Div is integer division; $P$ and $Q$ are known positive integers whose sizes in bits are respectively $p$ and $q$, which satisfy $p > 2q$, and $m$ and $n$ satisfy $m > 2n$.
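A toy evaluation of this map under the definitions above (parameters are illustrative only; a real instance uses large $P$ and $Q$):

```python
# Toy evaluation of B(i) = (sum_{j=1}^{n} A(i+n-j) * X(j)) Mod(P) Div(Q).
def moddiv_b(A, X, P, Q):
    n, m = len(X), len(A)
    return [(sum(A[i + n - j] * X[j - 1] for j in range(1, n + 1)) % P) // Q
            for i in range(m - n)]          # 0-indexed version of i = 1 .. m-n

print(moddiv_b(A=[3, 1, 4, 1, 5], X=[2, 7], P=251, Q=5))  # m=5 > 2n=4, p=8 > 2q=6
```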
DeCSIDH: Delegating isogeny computations in the CSIDH setting
Delegating heavy computations to auxiliary servers, while keeping the inputs secret, presents a practical solution for computationally limited devices to use resource-intense cryptographic protocols, such as those based on isogenies, and thus allows the deployment of post-quantum security on mobile devices and in the internet of things. We propose two algorithms for the secure and verifiable delegation of isogeny computations in the CSIDH setting. We then apply these algorithms to different instances of CSIDH and to the signing algorithms SeaSign and CSI-FiSh. Our algorithms present a communication-cost trade-off. Asymptotically (for high communication), the cost for the delegator is reduced by a factor 9 for the original CSIDH-512 parameter set and a factor 30 for SQALE'd CSIDH-4096, while the relative cost of SeaSign vanishes. Even for much lower communication cost, we come close to these asymptotic results. Using the knowledge of the class group, the delegation of CSI-FiSh is basically free (up to element generation) already at a very low communication cost.
Radical Isogenies on Montgomery Curves
We work on some open problems in radical isogenies, formulas proposed by Castryck, Decru, and Vercauteren at Asiacrypt 2020 for computing chains of $N$-isogenies for small $N$. These formulas avoid generating a point of order $N$ that generates the kernel, and they accelerate isogeny-based cryptosystems such as CSIDH. On the other hand, since the formulas are stated on Tate normal forms, one needs to transform Tate normal forms into curves with efficient arithmetic, e.g., Montgomery curves. In this paper, we propose radical-isogeny formulas of degrees 3 and 4 on Montgomery curves. Our formulas compute values that determine Montgomery curves, from which one can efficiently recover the Montgomery coefficients, and they are more efficient for some cryptosystems than the original radical isogenies. In addition, we prove a conjecture left open by Castryck et al. concerning radical isogenies of degree 4.
Multi-Dimensional Sub/Super-Range Signatures
In time-specific signatures (TSS) [Paterson \& Quaglia, SCN'10] [Ishizaka \& Kiyomoto, ISC'20] with $T$ numerical values, each signer is given a secret-key associated with a numerical value $t\in[0,T-1]$ and each signature on a message is generated under a numerical range $[L,R]$ s.t. $0\leq L\leq R\leq T-1$. A signer with $t$ can correctly generate a signature under $[L,R]$ if $t$ is truly included in $[L,R]$, i.e., $t\in[L,R]$.
As a generalized primitive of TSS, we propose multi-dimensional \textit{sub}-range signatures (MDSBRS). As a related primitive, we also propose multi-dimensional \textit{super}-range signatures (MDSPRS). In MDSBRS (resp. MDSPRS) with $D\in\mathbb{N}$ dimensions, each secret-key is associated with a set of $D$ ranges $\{[l_i,r_i]\mid i\in[1,D]\}$ s.t. $0 \leq l_i\leq r_i\leq T_i-1$ and a threshold value $d\in[1,D]$, and it correctly produces a signature on any message under a set of $D$ ranges $\{[L_i,R_i]\mid i\in[1,D]\}$ s.t. $0 \leq L_i\leq R_i\leq T_i-1$ if and only if the number of key-ranges $[l_i,r_i]$ that are \textit{sub}-ranges (resp. \textit{super}-ranges) of the corresponding signature-range $[L_i,R_i]$, i.e., $L_i\leq l_i\leq r_i\leq R_i$ (resp. $l_i\leq L_i\leq R_i\leq r_i$), is at least $d$. We show that, by extending (or generalizing) an existing TSS scheme, we obtain MDSBRS and MDSPRS schemes, each of which is secure, i.e., existentially unforgeable and perfectly (signer-)private, under a standard assumption, and asymptotically efficient.
GoAT: File Geolocation via Anchor Timestamping
Decentralized storage systems are a crucial component of the rapidly growing blockchain ecosystem. They aim to achieve robustness by proving that they store multiple replicas of every file. They have a serious limitation, though: They cannot prove that file replicas are spread across distinct systems, e.g., different hard drives. Consequently, files are vulnerable to loss in a single, locally catastrophic event.
We introduce a new primitive, Proof of Geo-Retrievability or PoGeoRet, that proves that a file is located within a strict geographic boundary. Using PoGeoRet, one can, for example, prove that a file is spread across several distinct geographic regions---and by extension across multiple systems, e.g., hard drives. We define what it means for a PoGeoRet scheme to be complete and sound, extending prior formalism in key ways.
We also propose GoAT, a practical PoGeoRet scheme to prove file geolocation. Unlike previous geolocation systems that only offer nominal geolocation guarantees and require dedicated anchors, GoAT geolocates provers using any timestamping server on the internet with a fixed, known location as a geolocation anchor.
GoAT's geolocation guarantees directly depend on the physical constraints of the internet, making them very reliable.
GoAT internally uses a communication-efficient Proof-of-Retrievability (PoRet) scheme in a novel way to achieve a constant-size PoRet component in its proofs.
We validate GoAT's practicality by conducting an initial measurement study to find usable anchors and by performing a real-world experiment. The results show that a significant fraction of the internet can be used as anchors and that GoAT achieves geolocation radii as low as 500 km.
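For intuition about how round-trip times bound distance (a back-of-the-envelope sketch; the two-thirds-of-light-speed figure for fiber is our assumption, not GoAT's exact model):

```python
# Distance bound implied by a round-trip time to a geolocation anchor.
C_KM_PER_MS = 299_792.458 / 1000          # speed of light, km per millisecond
FIBER_FACTOR = 2 / 3                      # typical propagation speed in fiber

def max_distance_km(rtt_ms: float) -> float:
    return (rtt_ms / 2) * C_KM_PER_MS * FIBER_FACTOR

print(round(max_distance_km(5.0)))        # ~500 km for a 5 ms round trip
```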
The "quantum annoying" property of password-authenticated key exchange protocols
During the Crypto Forum Research Group (CFRG)'s standardization of password-authenticated key exchange (PAKE) protocols, a novel property emerged: a PAKE scheme is said to be ``quantum-annoying'' if a quantum computer can compromise the security of the scheme, but only by solving one discrete logarithm for each guess of a password. Considering that early quantum computers will likely take quite long to solve even a single discrete logarithm, a quantum-annoying PAKE, combined with a large password space, could delay the need for a post-quantum replacement by years, or even decades.
In this paper, we make the first steps towards formalizing the quantum-annoying property. We consider a classical adversary in an extension of the generic group model in which the adversary has access to an oracle that solves discrete logarithms. While this idealized model does not fully capture the range of operations available to an adversary with a general-purpose quantum computer, this model does allow us to quantify security in terms of the number of discrete logarithms solved. We apply this approach to the CPace protocol, a balanced PAKE advancing through the CFRG standardization process, and show that the CPaceBase variant is secure in the generic group model with a discrete logarithm oracle.
Adaptively Secure Lattice-based Revocable IBE in the QROM: Compact Parameters, Tight Security, and Anonymity
Revocable identity-based encryption (RIBE) is an extension of IBE that supports a key revocation mechanism to manage users dynamically and efficiently. To resist quantum attacks, two adaptively secure lattice-based RIBE schemes are known in the (quantum) random oracle model ((Q)ROM). Wang et al.'s scheme, which is secure in the ROM, has large secret keys depending on the depth of a binary tree, and its security reduction is not tight. Ma and Lin's scheme, which is secure in the QROM, has large ciphertexts depending on the length of identities and is not anonymous. In this paper, we propose an adaptively secure lattice-based RIBE scheme that is secure in the QROM. Our scheme has compact parameters, where the ciphertext size is smaller than in Wang et al.'s scheme and the secret key size is the same as in Ma and Lin's scheme. Moreover, our scheme is anonymous and its security reduction is completely tight. We design the proposed scheme by modifying Ma and Lin's scheme instantiated with the Gentry-Peikert-Vaikuntanathan (GPV) IBE. We obtain the advantages of our scheme by making use of Katsumata et al.'s proof technique for the GPV IBE in the QROM.
On Interactive Oracle Proofs for Boolean R1CS Statements
The framework of interactive oracle proofs (IOP) has been used with great success to construct a number of efficient transparent zk-SNARKs in recent years. However, these constructions are based on Reed-Solomon codes and can only be applied directly to statements given in the form of arithmetic circuits or R1CS over large fields $\mathbb{F}$ since their soundness error is at least $1/|\mathbb{F}|$.
This motivates the question of what is the best way to apply these IOPs to statements that are naturally written as R1CS over small fields, and more concretely, the binary field $\mathbb{F}_2$. While one can just view the system as one over an extension field $\mathbb{F}_{2^e}$ containing $\mathbb{F}_2$, this seems wasteful, as it uses $e$ bits to encode just one ``information'' bit. In fact, the recent BooLigero devised a way to apply the well-known Ligero while encoding $\sqrt{e}$ bits into one element of $\mathbb{F}_{2^e}$.
In this paper, we introduce a new protocol for $\mathbb{F}_2$-R1CS which, among other things, relies on a more efficient embedding which (for practical parameters) allows encoding $\geq e/4$ bits into an element of $\mathbb{F}_{2^e}$. Our protocol then makes black-box use of lincheck and rowcheck protocols for the larger field. Using the lincheck and rowcheck introduced in Aurora and Ligero respectively, we obtain proofs that are $1.31$--$1.65\times$ smaller than Aurora's and $3.71\times$ smaller than Ligero's. We also estimate a reduction of prover time by a factor of $24.7\times$ for Aurora and between $6.9$--$32.5\times$ for Ligero, without interactive repetitions.
Our methodology uses the notion of reverse multiplication friendly embeddings introduced in the area of secure multiparty computation, combined with a new IOPP to test linear statements modulo a subspace $V \leq \mathbb{F}_{2^e}$ which may be of independent interest.
Hardware Penetration Testing Knocks Your SoCs Off
Today’s society depends on interconnected electronic devices, which handle a variety of sensitive information. Due to the knowledge needed to develop these devices and the economic advantage of reusable solutions, most of these systems contain Third-Party Intellectual Property (3PIP) cores that might not be trustworthy. If one of these 3PIP cores is vulnerable, the security of the entire device is potentially affected. As a result, sensitive data that is processed by the device can be leaked to an attacker. Competitions like Hack@DAC serve as a playground to develop and examine novel approaches and computer-aided tools that identify security vulnerabilities in System-on-Chip (SoC) Register-Transfer-Level (RTL) designs. In this paper, we present a successful divide-and-conquer approach to testing SoC security, which we illustrate with exemplary RTL vulnerabilities from the competition’s SoC design. Additionally, we craft real-world software attacks that exploit these vulnerabilities.
Shorter Signatures Based on Tailor-Made Minimalist Symmetric-Key Crypto
Signature schemes based on the MPC-in-the-head approach (MPCitH) have either been designed by taking a proof system and selecting a suitable symmetric-key primitive (Picnic, CCS16), or starting with an existing primitive such as AES and trying to find the most suitable proof system (BBQ, SAC19 or Banquet, PKC21).
In this work we do both: we improve certain symmetric-key primitives to better fit existing signature schemes, and we also propose a new signature scheme that combines a new, minimalist one-way function with changes to a proof system to make their combination even more efficient. Our concrete results are as follows.
First, we show how to provably remove the need to include the key schedule of block ciphers. This simplifies schemes like Picnic and it also leads to the fastest and smallest AES-based signatures, where we achieve signature sizes of around 10.8 to 14.2 KB using AES-128, on average 10% shorter than Banquet and 15% faster.
Second, we investigate a variant of AES with larger S-boxes, which we call LSAES, and argue that it is likely to be at least as strong as AES, further reducing the size of AES-based signatures to 9.9 KB.
Finally, we present a new signature scheme, Rainier, combining a new one-way function called Rain with a Banquet-like proof system. To the best of our knowledge, it is the first MPCitH-based signature scheme which can produce signatures that are less than 5 KB in size; it also outperforms previous Picnic and Banquet instances in all performance metrics.
General Bootstrapping Approach for RLWE-based Homomorphic Encryption
We propose a new bootstrapping approach that works for all three Brakerski-Gentry-Vaikuntanathan (BGV), Brakerski/Fan-Vercauteren (BFV), and Cheon-Kim-Kim-Song (CKKS) schemes. This approach adopts a blind rotation technique from FHEW-type schemes. For BGV and BFV, our bootstrapping does not have any restrictions on plaintext modulus unlike typical cases of the previous methods. For CKKS, our approach introduces an error comparable to a rescaling error which enables more than 70 bits of precision after bootstrapping while consuming only 1-2 levels. Due to the high precision of the proposed bootstrapping algorithm, it is the first bootstrapping resistant to the security vulnerability of CKKS found by Li and Micciancio (Eurocrypt 2021). In addition, we introduce methods to reduce the size of public keys required for blind rotations generated by a secret key holder.
On Communication Models and Best-Achievable Security in Two-Round MPC
Recently, a sequence of works have made strong advances in two-round (i.e., round-optimal) secure multi-party computation (MPC). In the honest-majority setting -- the focus of this work -- Ananth et al. [CRYPTO'18, EC'19], Applebaum et al. [TCC'18, EC'19] and Garg et al. [TCC'18] have established the feasibility of general two-round MPC in standard communication models involving broadcast (BC) and private point-to-point (P2P) channels.
In this work, we set out to understand what features of the communication model are necessary for these results, and more broadly the design of two-round MPC. Focusing our study on the plain model -- the most natural model for honest-majority MPC -- we obtain the following results:
1. Dishonest majority from Honest majority: In the two round setting, honest-majority MPC and dishonest-majority MPC are surprisingly close, and often equivalent. This follows from our results that the former implies 2-message oblivious transfer, in many settings. (i) We show that without private point-to-point (P2P) channels, i.e., when we use only broadcast (BC) channels, honest-majority MPC implies 2-message oblivious transfer. (ii) Furthermore, this implication holds even when we use both P2P and BC, provided that the MPC protocol is robust against ``fail-stop'' adversaries.
2. Best-Achievable Security: While security with guaranteed output delivery (and even fairness) against malicious adversaries is impossible in two rounds, nothing is known with regards to the ``next best'' security notion, namely, security with identifiable abort (IA). We show that IA is also impossible to achieve with honest-majority even if we use both P2P and BC channels. However, if we replace P2P channels with a ``bare'' (i.e., untrusted) public-key infrastructure (PKI), then even security with guaranteed output delivery (and hence IA) is possible to achieve.
These results ``explain'' that the reliance on P2P channels (together with BC) in the recent two-round protocols in the plain model was in fact necessary, and that these protocols couldn't have achieved a stronger security guarantee, namely, IA. Overall, our results (put together with prior works) fully determine the best-achievable security for honest-majority MPC in different communication models in two rounds. As a consequence, they yield the following hierarchy of communication models:
BC < P2P < BC+P2P < BC+PKI
This shows that the BC channel is the weakest communication model, and that the BC+PKI model is strictly stronger than the BC+P2P model.
OSHA: A General-purpose and Next Generation One-way Secure Hash Algorithm
Secure hash functions are widely used in cryptographic algorithms to protect against diverse attacks. A one-way secure hash function is used in various research fields to secure, for instance, blockchains. Notably, most hash functions provide security based on static parameters and publicly known operations. Consequently, they become easier to attack because all parameters and operations are predefined. The publicly known parameters and predefined operations may allow an oracle to regenerate the key even though the hash function is one-way. Moreover, the sensitive data is mixed with predefined constants, where an oracle may find a way to discover the key. To address these issues, we propose a novel one-way secure hash algorithm, OSHA for short, to protect sensitive data against attackers. OSHA depends on a pseudo-random number generator to generate a hash value. In particular, OSHA mixes multiple pseudo-random numbers to produce a secure hash value. Furthermore, OSHA uses dynamic parameters, which are difficult for adversaries to guess. Unlike conventional secure hash algorithms, OSHA does not depend on fixed constants; it replaces them with pseudo-random numbers. Also, the input message is not mixed with the pseudo-random numbers; hence, there is no way for adversaries to recover and reverse the process.
Statistical ZAPs from Group-Based Assumptions
We put forth a template for constructing statistical ZAPs for NP. Our template compiles NIZKs for NP in the hidden-bit model (which exist unconditionally) into statistical ZAPs using a new notion of interactive hidden-bit generator (IHBG), which adapts the notion of hidden-bit generator to the plain model by building upon the recent notion of statistically-hiding extractable commitments. We provide a construction of IHBG from the explicit hardness of the decision Diffie-Hellman assumption (where explicit refers to requiring an explicit upper bound on the advantage of any polynomial-time adversary against the assumption) and the existence of statistical ZAPs for a specific simple language, building upon the recent construction of dual-mode hidden-bit generator of (Libert et al., EUROCRYPT 2020). We provide two instantiations of the underlying simple ZAP:
1. Using the recent statistical ZAP for the Diffie-Hellman language of (Couteau and Hartmann, CRYPTO 2020), we obtain statistical ZAPs for NP assuming (the explicit hardness of) DDH in $G_1$ and kernel-DH in $G_2$ (a search assumption which is weaker than DDH), where $(G_1,G_2)$ are groups equipped with an asymmetric pairing. This improves over the recent work of (Lombardi et al., EUROCRYPT 2020), which achieved a relaxed variant of statistical ZAP for NP under a stronger assumption.
2. Using the recent work of (Couteau et al., EUROCRYPT 2020), we obtain statistical ZAPs for NP assuming the explicit hardness of DDH, together with the assumption that no efficient adversary can break the key-dependent-message one-wayness of ElGamal with respect to efficient functions over groups of size $2^\lambda$ with probability better than $\mathrm{poly}(\lambda)/2^{(c + o(1)) \cdot \lambda}$, denoted $2^{-c\lambda}$-OWKDM, for a constant $c = 1/2$, in pairing-free groups.
Note that the latter is a search, discrete-log-style, falsifiable assumption, incomparable to DDH (in particular, it is not known to imply public-key encryption).
Towards Understanding Practical Randomness Beyond Noise: Differential Privacy and Mixup
Information-theoretical privacy relies on randomness. Notably, Differential Privacy (DP) has emerged as the gold standard for quantifying the individual privacy preservation provided by given randomness. However, almost all randomness in existing differentially private optimization and learning algorithms is restricted to noise perturbation. In this paper, we set out to provide a privacy analysis framework to understand the privacy guarantees produced by other forms of randomness commonly used in optimization and learning algorithms (e.g., parameter randomness).
We take mixup, a random linear aggregation of inputs, as a concrete example. Our contributions are twofold. First, we develop a rigorous analysis of the privacy amplification provided by mixup, applied either to samples or to updates, where we find that the hybrid structure of mixup and the Laplace mechanism produces a new type of DP guarantee lying between pure DP and approximate DP. Such an average-case privacy amplification can produce tighter composition bounds. Second, both empirically and theoretically, we show that proper mixup comes almost free of utility compromise.
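A minimal numpy sketch of the mixup-plus-Laplace hybrid analyzed in the paper (the sensitivity and epsilon values are illustrative placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup(x_i, x_j, alpha=0.2):
    lam = rng.beta(alpha, alpha)            # random mixing coefficient
    return lam * x_i + (1 - lam) * x_j      # random linear aggregation of inputs

def laplace_mechanism(v, sensitivity=1.0, epsilon=1.0):
    return v + rng.laplace(scale=sensitivity / epsilon, size=v.shape)

noisy_update = laplace_mechanism(mixup(rng.normal(size=8), rng.normal(size=8)))
```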
Meteor: Cryptographically Secure Steganography for Realistic Distributions
Despite a long history of research and widespread applications to censorship-resistant systems, practical steganographic systems capable of embedding messages into realistic communication distributions, like text, do not exist. We identify two primary impediments to deploying universal steganography: (1) prior work leaves the difficult problem of finding samplers for non-trivial distributions unaddressed, and (2) prior constructions have impractical minimum entropy requirements. We investigate using generative models as steganographic samplers, as they represent the best known technique for approximating human communication. Additionally, we study methods to overcome the entropy requirement, including evaluating existing techniques and designing a new steganographic protocol, called Meteor. The resulting protocols are provably indistinguishable from honest model output and represent an important step towards practical steganographic communication for mundane communication channels. We implement Meteor and evaluate it on multiple computation environments with multiple generative models.
Blind Side-Channel SIFA
Statistical Ineffective Fault Attacks (SIFA) have been recently proposed as very powerful key-recovery strategies on implementations of symmetric cryptographic primitives. Specifically, they have been shown to bypass many common countermeasures against faults such as redundancy or infection, and to remain applicable even when side-channel countermeasures are deployed. In this work, we investigate combined side-channel and fault attacks and show that a profiled, SIFA-like attack can be applied despite not having any direct ciphertext knowledge. The proposed attack exploits the ciphertext's side-channel and fault characteristics to mount successful key recoveries, even in the presence of masking and duplication countermeasures, at the cost of both side-channel and fault profiling. We analyze the attack using simulations, discuss its requirements, strengths and limitations, and compare different approaches to distinguish the correct key. Finally, we demonstrate its applicability on an ARM Cortex-M4 device, utilizing a combination of laser-based fault injection and microprobe-based EM side-channel analysis.
Tight Setup Bounds for Identifiable Abort
We present fundamental (in-)feasibility results for the strongest security notion for Secure Multi-Party Computation (MPC) that is achievable when a majority of parties is malicious, i.e. security with Identifiable Abort.
As general Universally Composable (UC) MPC requires a setup, typically in the form of a Common Reference String or Common-Randomness, we investigate whether the setup must provide randomness to all parties.
Given broadcast, we give tight bounds for the necessary and sufficient setup cardinality, i.e. number of participating parties, for UC-MPC protocols with Identifiable Abort.
Concretely, we improve previous upper bounds by constructing Secure Function Evaluation for \(n\) parties (\(h\) of which are honest) from setups of cardinality \(\beta := \min(n,\lfloor n/h\rfloor+\lfloor(n-1)/h\rfloor-1)\) and broadcast.
Conversely, we present the first general lower bound on the minimal setup cardinality for Identifiable Abort by proving that setups of cardinality \(\beta-1\) plus broadcast are insufficient even for a commitment among \(n\) parties.
Somewhat surprisingly, we show that Oblivious Transfer plus broadcast is sufficient for \(n = 2h \geq 2\) which is consistent with the fact that in two-party MPC Identifiable Abort comes for free.
We present the results in the Universal Composability (UC) framework and assume the setup functionalities to be secure with Identifiable Abort.
Our constructions yield an efficient (poly-time) protocol for any \(n \in \mathrm{poly}(\lambda)\) where \(\lambda\) is the security parameter if at least a constant fraction \(h \in \Theta(n)\) of parties is honest.
However for \(h \in \mathrm{o}(n)\) our results suggest that for efficient protocols the overall number of parties \(n\) is limited quite severely by \(\binom{n}{\beta} \in \mathrm{poly}(\lambda)\).
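For concreteness, the cardinality bound evaluates as follows (values illustrative):

```python
# beta := min(n, floor(n/h) + floor((n-1)/h) - 1)
def beta(n: int, h: int) -> int:
    return min(n, n // h + (n - 1) // h - 1)

print(beta(10, 3))   # min(10, 3 + 3 - 1) = 5-party setups suffice for n=10, h=3
print(beta(4, 2))    # min(4, 2 + 1 - 1) = 2: pairwise setups (OT) suffice when n = 2h
```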
S2Dedup: SGX-enabled Secure Deduplication
Secure deduplication allows removing duplicate content at third-party storage services while preserving the privacy of users’ data. However, current solutions are built with strict designs that cannot be adapted to storage services and applications with different security and performance requirements.
We present S2Dedup, a trusted hardware-based privacy-preserving deduplication system designed to support multiple security schemes that enable different levels of performance, security guarantees and space savings. An in-depth evaluation shows these trade-offs for the distinct Intel SGX-based secure schemes supported by our prototype.
Moreover, we propose a novel Epoch and Exact Frequency scheme that prevents frequency analysis leakage attacks present in current deterministic approaches for secure deduplication while maintaining similar performance and space savings to state-of-the-art approaches.
Batching Base Oblivious Transfers
Protocols that make use of oblivious transfer (OT) rarely require just one instance. Usually a batch of OTs is required --- notably, when generating base OTs for OT extension. There is a natural way to optimize 2-round OT protocols when generating a batch, by reusing certain protocol messages across all instances. In this work we show that this batch optimization is error-prone. We catalog many implementations and papers that have an incorrect treatment of this batch optimization, some of them leading to catastrophic leakage in OT extension protocols.
We provide a full treatment of how to properly optimize recent 2-round OT protocols for the batch setting. Along the way we show several performance improvements to the OT protocol of McQuoid, Rosulek, and Roy (ACM CCS 2020). In particular, we show an extremely simple OT construction that may be of pedagogical interest.
Learnability of Multiplexer PUF and $S_N$-PUF: A Fourier-based Approach
In this work, we prove that the Multiplexer PUF (MPUF) and the $S_N$-PUF are learnable in the PAC model. First, we show that both designs can be represented as a function of linear threshold functions. We show that the noise sensitivity of the $(n,k)$-MPUF and the $S_N$-PUF can be bounded by $O(2^{k} \sqrt{\epsilon})$ and $O(N\sqrt{\epsilon})$, respectively. Finally, we show that, as a result of the bounded noise sensitivity, both designs can be accurately approximated using the low-degree algorithm. Moreover, the number of labelled examples (challenge-response pairs) required by the algorithm is polynomial in the input length and the PAC model parameters.
Efficient Attribute Based Encryption for Boolean Circuits
We provide a new technique for secret sharing and reconstruction for Boolean circuits, applicable in ABE systems. We show that our construction holds for key-policy ABE and can also be adapted to ciphertext-policy ABE. This is the most efficient solution for attribute-based encryption for circuit access structures using bilinear maps. Our KP-ABE system has decryption keys of size linear in the number of attributes, and public parameters linear in the circuit size (two public values for each FO-gate). We prove that our scheme is secure under the decisional bilinear Diffie-Hellman assumption in the selective-set model.
Permutation Based EDM: An Inverse Free BBB Secure PRF
In CRYPTO 2019, Chen et al. initiated an interesting research direction in designing PRFs based on public permutations. They proposed two beyond-birthday-bound secure $n$-bit to $n$-bit PRF constructions, \textsf{SoEM22} and \textsf{SoKAC21}, built on public permutations, where $n$ is the size of the permutation. However, both constructions require two independent instances of public permutations. In FSE 2020, Chakraborti et al. proposed a single-public-permutation-based $n$-bit to $n$-bit beyond-birthday-bound secure PRF, which they refer to as \textsf{PDMMAC}. Although the construction is minimal in the number of permutations, its design requires an inverse call to the underlying permutation. Coming up with a beyond-birthday-bound secure public-permutation-based $n$-bit to $n$-bit PRF with a single permutation and two forward calls was left as an open problem in their paper. In this work, we propose \textsf{pEDM}, a single-permutation-based $n$-bit to $n$-bit PRF with two forward calls that does not require invertibility of the permutation. We show that our construction is secure against all adaptive information-theoretic distinguishers that make roughly up to $2^{2n/3}$ construction and primitive queries. Moreover, we show a matching attack with similar query complexity, establishing the tightness of our security bound.
Faster indifferentiable hashing to elliptic $\mathbb{F}_{\!q^2}$-curves
Let $\mathbb{F}_{\!q}$ be a finite field and $E\!: y^2 = x^3 + ax + b$ be an elliptic $\mathbb{F}_{\!q^2}$-curve with $j(E) \not\in \mathbb{F}_{\!q}$. This article provides a new constant-time hash function $\mathcal{H}\!: \{0,1\}^* \to E(\mathbb{F}_{\!q^2})$ indifferentiable from a random oracle. Furthermore, $\mathcal{H}$ can be computed at the cost of $3$ exponentiations in $\mathbb{F}_{\!q}$. In comparison, the actively used (indifferentiable constant-time) simplified SWU hash function to $E(\mathbb{F}_{\!q^2})$ computes $2$ exponentiations in $\mathbb{F}_{\!q^2}$, i.e., it costs $4$ exponentiations in $\mathbb{F}_{\!q}$. In pairing-based cryptography one often uses hashing to elliptic $\mathbb{F}_{\!q^2}$-curves $E_b\!: y^2 = x^3 + b$ (of $j$-invariant $0$) having an $\mathbb{F}_{\!q^2}$-isogeny $\tau\!: E \to E_b$ of small degree. Therefore the composition $\tau \circ \mathcal{H}\!: \{0,1\}^* \to \tau\big( E(\mathbb{F}_{\!q^2}) \big)$ is also an indifferentiable constant-time hash function.
Generalized Galbraith's Test: Characterization and Applications to Anonymous IBE Schemes
The main approaches currently used to construct identity-based encryption (IBE) schemes are based on bilinear mappings, quadratic residues, and lattices. Among them, the most attractive approach is the one based on quadratic residues, due to the fact that the underlying security assumption is a well-understood hard problem. The first such IBE scheme was constructed by Cocks, and some of its deficiencies were addressed in subsequent works. In this paper, we focus on two constructions that address the anonymity problem inherent in Cocks' scheme, and we tackle some of their incomplete theoretical claims. More precisely, we rigorously study Clear et al.'s and Zhao et al.'s schemes and give accurate probabilities of successful decryption and identity detection in the non-anonymized versions of the schemes. Also, in the case of Zhao et al.'s scheme, we give a proper description of the underlying security assumptions.
Extending the GLS endomorphism to speed up GHS Weil descent using Magma
Let $q = 2^n$, and let $\mathcal{E}/\mathbb{F}_{q^{\ell}}$ be a generalized Galbraith--Lin--Scott (GLS) binary curve, with $\ell \ge 2$ and $(\ell, n) = 1$. We show that the GLS endomorphism on $\mathcal{E}/\mathbb{F}_{q^{\ell}}$ induces an efficient endomorphism on the Jacobian $\mathrm{Jac}_\mathcal{H}(\mathbb{F}_q)$ of the genus-$g$ hyperelliptic curve $\mathcal{H}$ corresponding to the image of the GHS Weil-descent attack applied to $\mathcal{E}/\mathbb{F}_{q^\ell}$, and that this endomorphism yields a factor-$n$ speedup when using standard index-calculus procedures for solving the Discrete Logarithm Problem (DLP) on $\mathrm{Jac}_\mathcal{H}(\mathbb{F}_q)$. Our analysis is backed up by the explicit computation of a discrete logarithm defined on a prime-order subgroup of a GLS elliptic curve over the field $\mathbb{F}_{2^{5\cdot 31}}$. A Magma implementation of our algorithm finds the aforementioned discrete logarithm in about $1{,}035$ CPU-days.
3-round Feistel is Not Superpseudorandom Over Any Group
Luby and Rackoff used a Feistel cipher over bit strings to construct a pseudorandom permutation from pseudorandom functions in 1988; in 2002, Patel, Ramzan, and Sundaram generalized the construction to arbitrary abelian groups. They showed that the 3-round Feistel cipher is not superpseudorandom over abelian groups, but left a proof for non-abelian groups as an open problem. We give this proof.
Keywords: Feistel, non-abelian group, pseudorandom.
On the Effect of the Key-expansion Algorithm in Simon-like Ciphers
In this work we investigate how the choice of the key-expansion algorithm and its interaction with the round function affect the resistance of Simon-like ciphers against rotational-XOR (RX) cryptanalysis. We observe that among the key-expansion algorithms we consider, Simon's is the most resistant, while Simeck's is much less so. Implications for lightweight cipher design are discussed, and open questions are proposed.
zkCNN: Zero Knowledge Proofs for Convolutional Neural Network Predictions and Accuracy
Deep learning techniques with neural networks have developed rapidly in recent years and have been deployed in numerous applications. Despite their great success, in many scenarios it is important for users to validate that inferences are truly computed by legitimate neural networks with high accuracy, which is referred to as the integrity of machine learning predictions. To address this issue, in this paper we propose zkCNN, a zero-knowledge proof scheme for convolutional neural networks (CNNs). The scheme allows the owner of a CNN model to prove to others that the prediction on a data sample is indeed calculated by the model, without leaking any information about the model itself. Our scheme can also be generalized to prove the accuracy of a secret CNN model on a public dataset.
Underlying zkCNN is a new sumcheck protocol for proving fast Fourier transforms and convolutions with a linear prover time, which is asymptotically even faster than computing the result. We also introduce several improvements and generalizations of the interactive proofs for CNN predictions, including verifying the convolutional layer, the ReLU activation function, and max pooling. Our scheme is highly efficient in practice. It can support a CNN as large as VGG16, with 15 million parameters and 16 layers. It takes only 88.3 seconds to generate a proof, which is 1264 times faster than existing schemes. The proof size is 341 kilobytes, and the verifier time is only 59.3 milliseconds. Our scheme can further scale to prove the accuracy of the same CNN on 20 images.
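The relation at the heart of the new sumcheck is the FFT/convolution identity, checked numerically below (numpy sketch over floats; the protocol itself works over finite fields):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0, 0.0])
b = np.array([4.0, 5.0, 0.0, 0.0])

# circular convolution computed through the frequency domain
c = np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real
assert np.allclose(c, [4.0, 13.0, 22.0, 15.0])
```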
PQC: R-Propping a Chaotic Cellular Automata
Post-quantum cryptography (PQC) is the subject of a well-deserved NIST standardization effort. Our approach (R-Propping) replaces all numeric field arithmetic with GF(2^8) field operations. This method yields both classically and quantum secure protocols. The present work is dedicated to strengthening a chaotic Wolfram Class III cellular automaton and discussing its usability as a cryptographically secure PRBG (pseudorandom bit generator), a building block for stream ciphers, hashing, and other protocols requiring random numbers.
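For context, a minimal Class III PRBG sketch using Rule 30, the canonical chaotic rule (our illustrative choice; the paper's strengthened, R-propped automaton is not reproduced here):

```python
# Rule 30 on a circular register; emit the center cell each step.
def rule30_bits(state: int, width: int, nbits: int):
    mask = (1 << width) - 1
    out = []
    for _ in range(nbits):
        left = ((state << 1) | (state >> (width - 1))) & mask
        right = ((state >> 1) | (state << (width - 1))) & mask
        state = left ^ (state | right)      # Rule 30: new = p XOR (q OR r)
        out.append((state >> (width // 2)) & 1)
    return out

print(rule30_bits(1 << 16, width=33, nbits=16))
```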
Multi-Threshold Byzantine Fault Tolerance
Classic Byzantine fault tolerant (BFT) protocols are designed for a specific timing model, most often one of the following: synchronous, asynchronous or partially synchronous.
It is well known that the timing model and fault tolerance threshold present inherent trade-offs. Synchronous protocols tolerate up to $n/2$ Byzantine faults, while asynchronous or partially synchronous protocols tolerate only up to $n/3$ Byzantine faults.
In this work, we generalize the fault thresholds of BFT and introduce a new problem called multi-threshold BFT.
Multi-threshold BFT has four separate fault thresholds for safety and liveness under synchrony and asynchrony (or partial-synchrony), respectively. Decomposing the fault thresholds in this way allows us to design protocols that provide meaningful fault tolerance under both synchrony and asynchrony (or partial synchrony).
We establish tight fault threshold bounds for multi-threshold BFT and present protocols achieving them.
As an example, we show a BFT state machine replication (SMR) protocol that tolerates up to $2n/3$ faults for safety under synchrony while tolerating up to $n/3$ faults for other scenarios (liveness under synchrony as well as safety and liveness under partial synchrony).
This is strictly stronger than classic partially synchronous SMR protocols.
We also present a general framework to transform known partially synchronous or asynchronous BFT SMR protocols to additionally enjoy the optimal $2n/3$ fault tolerance for safety under synchrony.
AOT: Anonymization by Oblivious Transfer
We introduce AOT, an anonymous communication system based on a mix-network architecture that uses oblivious transfer (OT) to deliver messages. Using OT to deliver messages helps AOT resist blending (n−1) attacks and helps AOT preserve receiver anonymity, even if a covert adversary controls all nodes in AOT. AOT comprises three levels of nodes, where nodes at each level perform a different function and can scale horizontally. The sender encrypts their payload and a tag, derived from a secret shared between the sender and receiver, with the public key of a Level-2 node and sends them to a Level-1 node. On a public bulletin board, Level-3 nodes publish tags associated with messages ready to be retrieved. Each receiver checks the bulletin board, identifies tags, and receives the associated messages using OT. A receiver can receive their messages even if the receiver is offline when messages are ready. Through what we call a "handshake" process, communicants can use the AOT protocol to establish shared secrets anonymously. Users play an active role in contributing to the unlinkability of messages: periodically, users initiate requests to AOT to receive dummy messages, such that an adversary cannot distinguish real and dummy requests.
Pravuil: Global Consensus for a United World
Pravuil is a robust, secure, and scalable consensus protocol for a permissionless blockchain suitable for deployment in an adversarial environment such as the Internet. Pravuil circumvents previous shortcomings of other blockchains:
- Bitcoin’s limited adoption problem: as transaction demand grows, payment confirmation times remain much lower than in other PoW blockchains
- higher transaction security at a lower cost
- more decentralisation than other permissionless blockchains
- impossibility of full decentralisation and the blockchain scalability trilemma: decentralisation, scalability, and security can be achieved simultaneously
- Sybil-resistance for free implementing the social optimum
- Pravuil goes beyond the economic limits of Bitcoin or other PoW/PoS blockchains, leading to a more valuable and stable crypto-currency
Grover on SM3
The Grover search algorithm accelerates key search on symmetric-key ciphers and pre-image attacks on hash functions. In order to run the Grover search algorithm, the target algorithm must be implemented as a quantum circuit. For this reason, we propose an optimized quantum circuit for the SM3 hash function (a Chinese standard). We focus on minimizing the use of qubits together with reducing the use of quantum gates.
To do this, the on-the-fly approach is utilized for message expansion and compression functions. In particular, the previous value is restored and used without allocating new qubits in the permutation operation. Finally, we estimate quantum resources required for the quantum pre-image attack based on the proposed SM3 hash function implementation in the quantum circuit.
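For a sense of scale, the textbook Grover iteration count for a 256-bit pre-image search (a rough estimate only; the paper estimates exact gate and qubit costs):

```python
import math

n = 256                                     # SM3 digest length in bits
iters = (math.pi / 4) * 2 ** (n / 2)        # optimal Grover iterations, ~2.7e38
print(f"{iters:.2e} iterations, each evaluating the SM3 circuit and its inverse")
```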
Optimized Implementation of SM4 on AVR Microcontrollers, RISC-V Processors, and ARM Processors
The SM4 block cipher is a Chinese domestic cryptographic algorithm that was introduced in 2003. Since the algorithm was developed for use in wireless sensor networks, it is mandated in the Chinese National Standard for Wireless LAN WAPI (Wired Authentication and Privacy Infrastructure). The SM4 block cipher uses a 128-bit block size and 32-bit round keys, and consists of 32 rounds and one reverse translation \texttt{R}. In this paper, we present optimized implementations of the SM4 block cipher on 8-bit AVR microcontrollers, which are widely used in wireless sensor devices; on 32-bit RISC-V processors, an open-source computer architecture; and on 64-bit ARM processors with parallel computation, which are widely used in smartphones and tablets. On the AVR microcontroller, we implement three versions: speed-optimized, memory-optimized, and code-size-optimized. As a result, the speed-optimized, memory-optimized, and code-size-optimized versions achieve 205.2, 213.3, and 207.4 cycles per byte, respectively. This is faster than the reference implementation written in C (1670.7 cycles per byte). The implementation on 32-bit RISC-V processors achieves 128.8 cycles per byte, faster than the reference C implementation (345.7 cycles per byte). The implementation on 64-bit ARM processors achieves 8.62 cycles per byte, faster than the reference C implementation (120.07 cycles per byte).
Secure cloud-of-clouds storage with space-efficient secret sharing
Cloud storage services are very popular, but there are often concerns about the security of the files stored there. Clouds-of-clouds, or multi-clouds, are being explored in order to improve that security. The idea is to store the files in several clouds, ensuring integrity and availability. Confidentiality, however, is obtained by encrypting the files with block ciphers that do not provide provable security. Secret sharing allows distributing files among the clouds while providing information-theoretic secrecy. However, existing secret sharing schemes are space-inefficient (the size of the shares is much larger than the size of the secret) or purely theoretical. In this paper, we propose the first practical space-efficient secret sharing scheme that provides information-theoretic security, which we call PRactical Efficient Secret Sharing (PRESS). Moreover, we present the Secure CloUD storage (SCUD) service, a new cloud-of-clouds storage service that leverages PRESS to provide file confidentiality. Additionally, SCUD provides data integrity and availability, leveraging replication.
On the algebraic immunity of direct sum constructions
In this paper, we study sufficient conditions for improving the lower bound on the algebraic immunity of a direct sum of Boolean functions. We exhibit three properties of the component functions such that satisfying any one of them is sufficient to ensure that the algebraic immunity of their direct sum exceeds the maximum of their algebraic immunities. These properties can be checked while computing the algebraic immunity, and they make it possible to better assess the security provided by functions central to various cryptographic constructions, such as stream ciphers, pseudorandom generators, and weak pseudorandom functions. We provide examples for each property and determine the exact algebraic immunity of candidate constructions.
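For concreteness, algebraic immunity can be computed by brute force as below (our sketch, feasible only for small $n$; the paper's properties can be tested during this kind of computation):

```python
# AI(f) = least degree of a nonzero annihilator of f or of f+1, over GF(2).
from itertools import combinations, product

def algebraic_immunity(f, n):
    points = list(product([0, 1], repeat=n))
    def has_annihilator(support, deg):
        cols = [m for d in range(deg + 1) for m in combinations(range(n), d)]
        rows = [sum(1 << j for j, m in enumerate(cols) if all(p[i] for i in m))
                for p in support]           # g must vanish on every support point
        basis = []                          # GF(2) rank via an xor basis
        for r in rows:
            for b in basis:
                r = min(r, r ^ b)
            if r:
                basis.append(r)
        return len(basis) < len(cols)       # nontrivial kernel => annihilator exists
    for d in range(n + 1):
        if has_annihilator([p for p in points if f(p)], d) or \
           has_annihilator([p for p in points if not f(p)], d):
            return d

maj = lambda p: int(sum(p) >= 2)
print(algebraic_immunity(maj, 3))           # 2 = ceil(3/2), optimal for n = 3
```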
A Trustless GQ Multi-Signature Scheme with Identifiable Abort
The Guillou-Quisquater (GQ) signature is an efficient RSA-based digital signature scheme, amongst the most famous Fiat-Shamir follow-ons owing to its simplicity. However, two bottlenecks hinder GQ's application in industry and academia: the RSA trapdoor $n=pq$ in the key generation phase, and its high bandwidth caused by the storage-consuming representation of RSA group elements (3072 bits per element at 128-bit security).
In this paper, we first formalize the definition and security proof of the class group based GQ signature (CL-GQ), which eliminates the trapdoor in the key generation phase and improves bandwidth efficiency over the RSA-based GQ signature. Then, we construct a trustless GQ multi-signature scheme by applying non-malleable equivocable commitments and our well-designed compact non-interactive zero-knowledge proofs (NIZK). Our scheme has well-rounded performance compared to existing multiparty GQ, Schnorr, and ECDSA schemes, in terms of bandwidth (no range proof or multiplication-to-addition protocol required), rather few interactions (only 4 rounds in signing), provable security in the \textit{dishonest majority model}, and the identifiable abort property. Another interesting finding is that our NIZK is highly efficient (only one round required) thanks to the Bezout formula, and this trick can also optimize the ZK proof of Paillier ciphertexts, which greatly improves the speed of Yi's Blind ECDSA (AsiaCCS 2019).
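For background, a single-signer GQ signature over a toy RSA group (our illustration of the baseline scheme; the paper's construction replaces the RSA group with a class group and distributes signing):

```python
import hashlib, secrets
from math import gcd

p, q, v = 101, 103, 7                  # toy parameters; real use needs ~3072-bit n
n = p * q

B = 0                                  # secret key
while gcd(B, n) != 1:
    B = secrets.randbelow(n)
J = pow(pow(B, v, n), -1, n)           # public key: B^v * J = 1 (mod n)

def chal(T, msg):                      # Fiat-Shamir challenge in [0, v)
    return int.from_bytes(hashlib.sha256(T.to_bytes(4, "big") + msg).digest(), "big") % v

def sign(msg):
    r = secrets.randbelow(n - 1) + 1
    T = pow(r, v, n)
    return T, (r * pow(B, chal(T, msg), n)) % n

def verify(msg, T, D):                 # D^v * J^d = r^v * (B^v J)^d = T (mod n)
    return (pow(D, v, n) * pow(J, chal(T, msg), n)) % n == T

T, D = sign(b"hello")
assert verify(b"hello", T, D)
```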
On the Design and Misuse of Microcoded (Embedded) Processors — A Cautionary Note
Today's microprocessors often rely on microcode updates to deploy security fixes and functional patches. Unfortunately, the flexibility of microcode updates opens up new attack vectors through malicious microcode alterations. Such attacks share many features with hardware Trojans and have similar devastating consequences for system security. However, due to microcode's opaque nature, little is known in the open literature about the capabilities and limitations of microcode Trojans.
We introduce the design of a microcoded RISC-V processor architecture together with a microcode development and evaluation environment. Even though microcode typically has almost complete control of the processor hardware, the design of meaningful microcode Trojans is not straightforward. This somewhat counter-intuitive insight is due to the lack of information at the hardware level about the semantics of executed software. In three security case studies we demonstrate how to overcome these issues and give insights on how to design meaningful microcode Trojans that undermine system security. To foster future research and applications, we publicly release our implementation and evaluation platform.
Verifying Post-Quantum Signatures in 8 kB of RAM
In this paper, we study implementations of post-quantum signature schemes on resource-constrained devices. We focus on verification of signatures and cover NIST PQC round-3 candidates Dilithium, Falcon, Rainbow, GeMSS, and SPHINCS+. We assume an ARM Cortex-M3 with 8 kB of memory and 8 kB of flash for code; a practical and widely deployed setup in, for example, the automotive sector. This amount of memory is insufficient for most schemes. Rainbow and GeMSS public keys are too big; SPHINCS+ signatures do not fit in this memory. To make signature verification work for these schemes, we stream in public keys and signatures. Due to the memory requirements for efficient Dilithium implementations, we stream in the public key to cache more intermediate results. We discuss the suitability of the signature schemes for streaming, adapt existing implementations, and compare performance.
Linear Cryptanalyses of Three AEADs with GIFT-128 as Underlying Primitives
This paper considers the linear cryptanalyses of Authenticated Encryptions with Associated Data (AEADs) GIFT-COFB, SUNDAE-GIFT, and HyENA. All of these proposals take GIFT-128 as underlying primitives. The automatic search with the Boolean satisfiability problem (SAT) method is implemented to search for linear approximations that match the attack settings concerning these primitives. With the newly identified approximations, we launch key-recovery attacks on GIFT-COFB, SUNDAE-GIFT, and HyENA when the underlying primitives are replaced with 16-round, 17-round, and 16-round versions of GIFT-128. The resistance of GIFT-128 against linear cryptanalysis is also evaluated. We present a 24-round key-recovery attack on GIFT-128 with a newly obtained 19-round linear approximation. We note that the attack results in this paper are far from threatening the security of GIFT-COFB, SUNDAE-GIFT, HyENA, and GIFT-128.
Best-Possible Unpredictable Proof-of-Stake: An Impossibility and a Practical Design
Proof-of-stake (PoS) protocols have been proposed to eliminate the unnecessary waste of computing power in Bitcoin. Multiple practical and provably secure designs have been developed, such as Ouroboros Praos (Eurocrypt 2018), Snow White (FC 2019), and more. However, an important security property called unpredictability has not been carefully studied in these provably secure PoS protocols. Unpredictability is critical for PoS, since attackers could use predictability to launch strengthened versions of multiple attacks (e.g., selfish mining and bribing). Unpredictability was previously investigated by Brown-Cohen et al. (EC 2019) in incentive-driven settings. In this paper, we investigate the property in the cryptographic setting, to achieve the ``best possible'' unpredictability for PoS.
First, we present an impossibility result for all proof-of-stake protocols under the single-extension design framework. In this framework, each honest player is allowed to extend exactly one chain in each round; the state-of-the-art permissionless PoS protocols (e.g., Praos, Snow White, and more) are all under this single-extension framework. Our impossibility result states that, if a single-extension PoS protocol achieves the best possible unpredictability, then this protocol cannot be proven secure unless more than $73\%$ of the stake is honest. Then, to overcome the impossibility result, we introduce a new design framework, called multi-extension PoS, which allows each honest player to extend multiple chains in a round. We develop a novel strategy called the ''$D$-distance-greedy'' strategy (where $D$ is a positive integer), in which honest players are allowed to extend a set of best chains that are ''close'' to the longest chain. (Of course, malicious players are allowed to behave arbitrarily in the protocol execution.) This ''$D$-distance-greedy'' strategy enables us to construct a class of PoS protocols that achieve the best possible unpredictability. In addition, we design a new tiebreak rule for the multi-extension protocol to choose the best chain that can be extended faster. This ensures that the adversary cannot slow down the chain growth of honest players. Note that these protocols can be proven secure assuming a much smaller fraction (e.g., $57\%$) of the stake to be honest.
To enable a thorough security analysis in the cryptographic setting, we develop several new techniques. As the players are allowed to extend multiple chains, the analysis of chain growth is highly non-trivial. We introduce a new analysis framework to analyze the chain growth of a multi-extension protocol. To prove the common prefix property, we introduce a new concept called ''virtual chains'', and then present a reduction from the regular version of the common prefix to ''common prefix w.r.t. virtual chains''.
Technical report: CoPHEE: Co-Processor for Partially Homomorphic Encrypted Execution
This technical report provides extensive information for designing, implementing, fabricating, and validating CoPHEE: A Co-Processor for Partially Homomorphic Encrypted Execution, complementing the publication appearing in the 2019 IEEE Hardware-Oriented Security and Trust symposium.
A Practical Adaptive Key Recovery Attack on the LGM (GSW-like) Cryptosystem
We present an adaptive key recovery attack on the leveled homomorphic encryption scheme suggested by Li, Galbraith and Ma (Provsec 2016), which itself is a modification of the GSW cryptosystem designed to resist key recovery attacks by using a different linear combination of secret keys for each decryption.
We were able to efficiently recover the secret key for a realistic choice of parameters using a statistical attack.
In particular, this means that the Li, Galbraith and Ma strategy does not prevent adaptive key recovery attacks.
Locally Reconstructable Non-malleable Secret Sharing
Non-malleable secret sharing (NMSS) schemes, introduced by Goyal and Kumar (STOC 2018), ensure that a secret $m$ can be distributed into shares $m_1,...,m_n$ (for some $n$) such that any $t$ shares (for a threshold parameter $t \leq n$) can be used to reconstruct the secret $m$, any $t-1$ shares leak no information about $m$, and even if the shares used for reconstruction are tampered with, the reconstruction is guaranteed to result in either the original $m$ or something independent of $m$. Since their introduction, non-malleable secret sharing schemes have sparked a very impressive line of research.
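For context, the following is a minimal Python sketch of plain $t$-out-of-$n$ Shamir secret sharing, the baseline that NMSS strengthens; it provides threshold reconstruction and privacy but none of the non-malleability guarantees discussed here, and the prime field is an illustrative choice.

```python
import random

P = 2**127 - 1  # a Mersenne prime; illustrative field for the toy example

def share(m, t, n):
    """Split secret m into n Shamir shares; any t of them reconstruct m."""
    coeffs = [m] + [random.randrange(P) for _ in range(t - 1)]
    poly = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

assert reconstruct(share(42, t=3, n=5)[:3]) == 42
```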
In this work, we introduce a feature of local reconstructability in NMSS, which allows reconstruction of any portion of a secret by reading just a few locations of the shares. This is a useful feature, especially when the secret is long or when the shares are stored in a distributed manner on a communication network. We give a compiler that takes in any non-malleable secret sharing scheme and compiles it into a locally reconstructable non-malleable secret sharing scheme. To secret share a message consisting of $k$ blocks of length $l$ each, our scheme would only require reading $l + \log k$ bits (in addition to a few more bits, whose quantity is independent of $l$ and $k$) from each party's share (of a reconstruction set) to locally reconstruct a single block of the message.
We show an application of our locally reconstructable non-malleable secret sharing scheme to a computational non-malleable secure message transmission scheme in the pre-processing model, with an improved communication complexity, when transmitting multiple messages.
Automated Search Oriented to Key Recovery on Ciphers with Linear Key Schedule: Applications to Boomerangs in SKINNY and ForkSkinny
Automatic modelling to search for distinguishers with high probability covering as many rounds as possible, e.g., with MILP, SAT/SMT, or CP models, has become a very popular topic in cryptanalysis. In those models, the optimization objective is usually the probability or the number of rounds of the distinguishers. If we want to recover the secret key for a round-reduced block cipher, there are usually two phases: finding an efficient distinguisher, and performing a key-recovery attack by extending several rounds before and after the distinguisher. The total number of attacked rounds depends not only on the chosen distinguisher, but also on the rounds extended before and after it. In this paper, we combine the two phases in a unified automatic model.
Concretely, we apply this idea to automate related-key rectangle attacks on SKINNY and ForkSkinny. We propose new distinguishers that are more advantageous for key-recovery attacks. Our key-recovery attacks on several versions of round-reduced SKINNY and ForkSkinny cover 1 to 2 more rounds than the best previous attacks.
On the Effect of Projection on Rank Attacks in Multivariate Cryptography
The multivariate scheme HFEv- used to be considered a promising candidate for a post-quantum signature system. First suggested in the early 2000s, a version of the scheme made it to the third round of the ongoing NIST post-quantum standardization process. In late 2020, the system suffered from an efficient rank attack due to Tao, Petzoldt, and Ding. In this paper, we inspect how this recent rank attack is affected by the projection modification. This modification was introduced to secure the signature scheme PFLASH against attacks on its predecessor. We prove upper bounds for the rank of projected HFEv- (pHFEv-) and PFLASH under the new attack, which are tight for the experiments we have performed. We conclude that projection could be a useful tool in protecting against this recent cryptanalysis.
Non-Interactive, Secure Verifiable Aggregation for Decentralized, Privacy-Preserving Learning
We propose a novel primitive called NIVA that allows the distributed aggregation of multiple users' secret inputs by multiple untrusted servers.
The returned aggregation result can be publicly verified in a non-interactive way, i.e., users are not required to participate in the aggregation beyond providing their secret inputs.
NIVA allows the secure computation of the sum of data from a large number of users and can be employed, for example, in the federated learning setting in order to aggregate the model updates for a deep neural network.
We implement NIVA and evaluate its communication and execution performance in comparison with the current state of the art, i.e., the protocol of Segal et al. (CCS 2017) and the VerifyNet protocol of Xu et al. (IEEE TIFS 2020), resulting in lower per-user communication and
Smooth Zero-Knowledge Hash Functions
We define smooth zero-knowledge hash functions (SZKHFs) as smooth projective hash functions (SPHFs) for which completeness holds even when the language parameter lpar and the projection key HP were maliciously generated.
We prove that black-box SZKHF in the plain model is impossible even if lpar was honestly generated. We then define SZKHF in the registered public key (RPK) model, where both lpar and HP are possibly maliciously generated but accepted by an RPK server, and show that the CRS-model trapdoor SPHFs of Benhamouda et al. are also secure in the weaker RPK model. Finally, we define and instantiate subversion-zero-knowledge SZKHF in the plain model; in this case, both lpar and HP are completely untrusted, but one uses non-black-box techniques in the security proof.
Detector+: An Approach for Detecting, Isolating, and Preventing Timing Attacks
In this work, we present a novel approach, called Detector+, to detect, isolate, and prevent timing-based side-channel attacks (i.e., timing attacks) at runtime. The proposed approach is based on a simple observation: the time measurements required by timing attacks differ from those required by benign applications, as these attacks need to measure the execution times of typically quite short-running operations. Detector+ therefore monitors the time readings made by processes and marks consecutive pairs of readings that are close to each other in time as suspicious. In the presence of suspicious time measurements, Detector+ introduces noise into the measurements to prevent the attacker from extracting information from them. The sequence of suspicious time measurements is then analyzed using a sliding-window-based approach to pinpoint the malicious processes at runtime. We have empirically evaluated the proposed approach using five well-known timing attacks, including Meltdown, together with their variations, representing some of the mechanisms that an attacker can employ to become stealthier. In one evaluation setup, each type of attack was carried out concurrently by multiple processes; in the other, multiple types of attacks were carried out concurrently. In all the experiments, Detector+ detected all the malicious time measurements with almost perfect accuracy, prevented all the attacks, and correctly pinpointed all the malicious processes involved in the attacks, without any false positives, after they had made only a few time measurements, with an average runtime overhead of 1.56%.
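As an editorial illustration, here is a minimal Python sketch of the pair-flagging idea described above: consecutive clock reads that fall within a closeness threshold are marked suspicious and perturbed with noise. The threshold and noise magnitude are hypothetical values, not parameters from the paper.

```python
import random
import time

THRESHOLD_NS = 5_000  # hypothetical closeness threshold (not from the paper)

def guarded_clock(raw_clock, state):
    """Wrap a raw clock a la Detector+: flag consecutive reads that are
    suspiciously close in time and add noise to the flagged readings."""
    t = raw_clock()
    if state.get("last") is not None and t - state["last"] < THRESHOLD_NS:
        state["flags"] = state.get("flags", 0) + 1  # fed to a sliding-window analysis
        t += random.randrange(1, THRESHOLD_NS)      # noise spoils fine-grained timing
    state["last"] = t
    return t

state = {}
for _ in range(5):
    guarded_clock(time.monotonic_ns, state)
print(state.get("flags", 0), "suspicious reads flagged")
```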
Leo: A Programming Language for Formally Verified, Zero-Knowledge Applications
Decentralized ledgers that support rich applications suffer from three limitations. First, applications are provisioned tiny execution environments with limited running time, minimal stack size, and restrictive instruction sets. Second, applications must reveal their state transition, enabling miner frontrunning attacks and consensus instability. Third, applications offer weak guarantees of correctness and safety.
We design, implement, and evaluate Leo, a new programming language designed for formally verified, zero-knowledge applications. Leo provisions a powerful execution environment that is not restricted in running time, stack size, or instruction set. Besides offering application privacy and mitigating miner-extractable value (MEV), Leo achieves two fundamental properties. First, applications are formally verified with respect to their high-level specification. Second, applications can be succinctly verified by anyone, regardless of the size of the application.
Leo is the first known programming language to introduce a testing framework, package registry, import resolver, remote compiler, formally defined language, and theorem prover for general-purpose, zero-knowledge applications.
Structured Leakage and Applications to Cryptographic Constant-Time and Cost
Many security properties of interest are captured by instrumented semantics that model the functional behavior and the leakage of programs. For several important properties, including cryptographic constant-time (CCT), leakage models are sufficiently abstract that one can define instrumented semantics for both high-level and low-level programs. One important goal is then to relate the leakage of source programs to the leakage of their compilation; this can be used, e.g., to prove preservation of CCT.
To simplify this task, we put forward the idea of structured leakage. In contrast to the usual modeling of leakage as a sequence of observations, structured leakage is tightly coupled with the operational semantics of programs. This coupling greatly simplifies the definition of leakage transformers that map the leakage of source programs to the leakage of their compilation, and yields more precise statements about the preservation of security properties. We illustrate our methods on the Jasmin compiler and prove preservation results for two policies of interest: CCT and cost.
On the Algebraic Immunity - Resiliency trade-off, implications for Goldreich's Pseudorandom Generator
Uncategorized
Uncategorized
Goldreich's pseudorandom generator is a well-known building block for many theoretical cryptographic constructions from multi-party computation to indistinguishability obfuscation. Its unique efficiency comes from the use of random local functions: each bit of the output is computed by applying some fixed public $n$-variable Boolean function $f$ to a random public size-$n$ tuple of distinct input bits.
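As an editorial illustration, here is a minimal Python sketch of such a local PRG, instantiated with the XOR-MAJ predicate discussed later in this abstract; the seed length, locality, and XOR/MAJ split are illustrative choices of ours, not parameters from the paper, and this is not a secure instantiation.

```python
import random

def xor_maj(bits, k):
    """XOR-MAJ predicate: XOR of the first k bits, XORed with the
    majority of the remaining bits (odd count here, so no ties)."""
    xor_part = sum(bits[:k]) % 2
    rest = bits[k:]
    maj_part = 1 if 2 * sum(rest) > len(rest) else 0
    return xor_part ^ maj_part

def goldreich_prg(seed, m, n, k, rng):
    """Each output bit applies the fixed public predicate to a random
    public size-n tuple of distinct seed positions."""
    out = []
    for _ in range(m):
        positions = rng.sample(range(len(seed)), n)  # public, fixed once chosen
        out.append(xor_maj([seed[p] for p in positions], k))
    return out

rng = random.Random(0)                    # fixes the public position tuples
seed = [random.randrange(2) for _ in range(256)]
print(goldreich_prg(seed, m=512, n=9, k=4, rng=rng)[:32])
```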
Which characteristics a Boolean function $f$ must have to ensure pseudorandomness is a puzzling issue. It has been studied in several works, particularly by Applebaum and Lovett (STOC 2016), who showed that resiliency and algebraic immunity are key parameters for this purpose.
In this paper, we propose the first study of Boolean functions that simultaneously reach maximal algebraic immunity and high resiliency.
1) We assess the possible consequences of the asymptotic existence of such optimal functions. We show how they allow us to build functions reaching all possible algebraic immunity-resiliency trade-offs (respecting the algebraic immunity and Siegenthaler bounds).
We provide a new bound on the minimal number of variables $n$, and thus on the minimal locality, necessary to ensure a secure instantiation of Goldreich's pseudorandom generator. Our results come with a granularity level depending on the strength of our assumptions, from none to the conjectured asymptotic existence of optimal functions.
2) We extensively analyze the possible existence and the properties of such optimal functions. Our results show two different trends. On the one hand, we show impossibility results concerning existing families of Boolean functions that are known to be optimal with respect to their algebraic immunity, starting with the promising XOR-MAJ functions: they do not reach optimality and could be beaten by optimal functions if our conjecture is verified.
On the other hand, we prove the existence of optimal functions in a small number of variables by experimentally exhibiting some of them up to $12$ variables. This directly provides better candidates for Goldreich's pseudorandom generator than the existing XOR-MAJ candidates for polynomial stretches from $2$ to $6$.
Security of COFB against Chosen Ciphertext Attacks
COFB is a lightweight Authenticated Encryption with Associated Data (AEAD) mode based on block ciphers.
It was proposed in CHES 2017 and is the basis for GIFT-COFB, a finalist in the NIST lightweight standardization project.
It comes with provable security results that guarantee its security up to the birthday bound in the nonce-respecting model. However, the designers offer multiple versions of the analysis with different details, and the implications of attacks against the scheme are not discussed in depth. In this article, we look at a group of possible forgery and privacy attacks against COFB. We show that the security for both forgery and privacy is bounded by the number of forgery attempts: there exist forgery and privacy attacks with success probability $q_d/2^{n/2}$, given $q_d$ forgery attempts. In particular, we show an attack with $2^{n/2}$ attempts using only a single known-plaintext encryption query against COFB. While these attacks do not contradict the claims made by the designers of GIFT-COFB, they show its limitations in terms of the number of forgery attempts. They also show that, while COFB generates a 128-bit tag, it behaves in a very similar manner to an AEAD scheme with a 64-bit tag. As a result of independent interest, our analysis provides a contradiction to the main theorem of {\it Journal of Cryptology volume 33, pages 703–741 (2020)}, which includes an improved security proof of COFB compared to the CHES 2017 version. Finally, we discuss the term $nq_d/2^{n/2}$ that appears in the security proofs of GIFT-COFB and the CHES 2017 version, showing why there is a security gap between the provable results and the attacks. We emphasize that the results in this article do not threaten the security of GIFT-COFB within the scope of the NIST lightweight cryptography requirements or the claims made by the designers in the specification document of the design.
privateDH: An Enhanced Diffie-Hellman Key-Exchange Protocol using RSA and AES Algorithm
RSA is an asymmetric cryptographic protocol, and it faces diverse issues; recent research suggests that RSA security has already been broken. AES, the most widely used symmetric-key protocol, faces issues as well: the literature points to cryptanalysis attacks, and AES requires a shared secret key. The most famous key exchange protocol is Diffie-Hellman; however, it is subject to number field sieve discrete logarithm attacks, recent research suggests that it is less secure than widely perceived, and the Logjam attack enables man-in-the-middle attacks against it. Thus, we combine the RSA, AES, and Diffie-Hellman algorithms to secure the key exchange protocol, called privateDH. Our key objective is to provide security for the Diffie-Hellman algorithm. Therefore, privateDH does not share data publicly with the intended party; instead, it encrypts all shareable data during the key exchange using the AES algorithm. privateDH uses the RSA algorithm to retrieve the public key and thereby avoid man-in-the-middle attacks. Thus, we demonstrate how to secure the Diffie-Hellman algorithm against various kinds of attacks.
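Since the construction layers AES and RSA on top of plain Diffie-Hellman, the following textbook Diffie-Hellman sketch in Python may help fix notation; it is not privateDH itself, which, per the abstract, additionally AES-encrypts the exchanged values and uses RSA to retrieve the peer's public key. The small prime is illustrative only and offers no security.

```python
import hashlib
import secrets

# Textbook finite-field Diffie-Hellman (reference only, NOT privateDH).
p = 0xFFFFFFFFFFFFFFC5   # 2**64 - 59, a prime; far too small for real use
g = 5

a = secrets.randbelow(p - 2) + 1     # Alice's secret exponent
b = secrets.randbelow(p - 2) + 1     # Bob's secret exponent
A, B = pow(g, a, p), pow(g, b, p)    # values exchanged on the wire
                                     # (privateDH would AES-encrypt these)

k_alice = hashlib.sha256(pow(B, a, p).to_bytes(8, "big")).digest()
k_bob   = hashlib.sha256(pow(A, b, p).to_bytes(8, "big")).digest()
assert k_alice == k_bob              # both sides derive the same session key
```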
Optimization of Advanced Encryption Standard on Graphics Processing Units
Graphics processing units (GPUs) are specially designed for parallel applications and perform parallel operations much faster than central processing units (CPUs). In this work, we focus on the performance of the Advanced Encryption Standard (AES) on GPUs. We present optimizations which remove bank conflicts in shared-memory accesses and provide 878.6 Gbps throughput for AES-128 encryption on an RTX 2070 Super, which is equivalent to 4.1 Gbps per watt. Our optimizations provide more than a 2.56x speed-up over the best GPU results in the literature. Our optimized AES implementations on GPUs even outperform any CPU using the hardware-level AES New Instructions (AES-NI), as well as legacy FPGA-based cluster architectures like COPACOBANA and RIVYERA. Even on a low-end GPU like the MX 250, we obtained 60.0 Gbps throughput for AES-256, which is generally faster than the read/write speeds of solid-state disks. Thus, a transition from AES-128 to AES-256 when using GPUs would provide military-grade security with no visible performance loss. With these breakthrough performances, GPUs can be used as cryptographic co-processors for file or full-disk encryption to remove the performance loss arising from CPU encryption. With a single GPU as a co-processor, busy SSL servers can be freed from the burden of encryption and use their whole CPU power for other operations. Moreover, these optimizations can help GPUs practically verify theoretically obtained cryptanalysis results, or reduced versions thereof, in reasonable time.
Legendre PRF (Multiple) Key Attacks and the Power of Preprocessing
Due to its amazing speed and multiplicative properties, the Legendre PRF has recently found widespread application, e.g., in Ethereum 2.0, multiparty computation, and the quantum-secure signature proposal LegRoast. However, its security has not yet been extensively studied.
The Legendre PRF computes, for a key $k$ on input $x$, the Legendre symbol $L_k(x) = \left( \frac {x+k} {p} \right)$ in some finite field $\mathbb{F}_p$. As is standard, PRF security is analysed by giving an attacker oracle access to $L_k(\cdot)$. Khovratovich's collision-based algorithm recovers $k$ using $L_k(\cdot)$ in time $\sqrt{p}$ with constant memory. It is a major open problem whether this birthday-bound complexity can be beaten.
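As an editorial illustration, the keyed bit above can be computed directly via Euler's criterion; the following Python sketch uses an illustrative Mersenne prime and maps the symbol to a single output bit.

```python
# Minimal Legendre PRF sketch: the output bit is the Legendre symbol of
# x + k modulo p, computed with Euler's criterion.
p = 2**61 - 1  # an illustrative Mersenne prime

def legendre_symbol(a, p):
    """a^((p-1)/2) mod p equals 1 (residue), p-1 (non-residue), or 0."""
    s = pow(a % p, (p - 1) // 2, p)
    return 0 if s == 0 else (1 if s == 1 else -1)

def legendre_prf_bit(k, x, p):
    """One keystream bit L_k(x), mapping symbol +1 to 1 and -1 to 0."""
    return 1 if legendre_symbol(x + k, p) == 1 else 0

k = 1234567
print([legendre_prf_bit(k, x, p) for x in range(16)])
```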
We show a somewhat surprising wide-ranging analogy between the discrete logarithm problem and Legendre symbol computations. This analogy allows us to adapt various algorithmic ideas from the discrete logarithm setting.
More precisely, we present a small memory multiple-key attack on $m$ Legendre keys $k_1, \ldots, k_m$ in time $\sqrt{mp}$, i.e. with amortized cost $\sqrt{p/m}$ per key. This multiple-key attack might be of interest in the Ethereum context, since recovering many keys simultaneously maximizes an attacker's profit.
Moreover, we show that the Legendre PRF admits precomputation attacks, where the precomputation depends on the public $p$ only -- and not on a key $k$. Namely, an attacker may compute, e.g., in precomputation time $p^{\frac 2 3}$ a hint of size $p^{\frac 1 3}$. On receiving access to $L_k(\cdot)$ in an online phase, the attacker then uses the hint to recover the desired key $k$ in time only $p^{\frac 1 3}$. Thus, the attacker's online complexity again beats the birthday bound.
In addition, our precomputation attack can also be combined with our multiple-key attack. We explicitly give various tradeoffs between precomputation and online phase. E.g. for attacking $m$ keys one may spend time $mp^{\frac 2 3}$ in the precomputation phase for constructing a hint of size $m^2 p^{\frac 1 3}$. In an online phase, one then finds {\em all $m$ keys in total time} only $p^{\frac 1 3}$.
Precomputation attacks might again be interesting in the Ethereum 2.0 context, where keys are frequently changed such that a heavy key-independent precomputation pays off.
Cryptanalysis of Semidirect Product Key Exchange Using Matrices Over Non-Commutative Rings
It was recently demonstrated that the Matrix Action Key Exchange (MAKE) algorithm, a new type of key exchange protocol using the semidirect product of matrix groups, is vulnerable to a linear algebraic attack if the matrices are over a commutative ring. In this note, we establish conditions under which protocols using matrices over a non-commutative ring are also vulnerable to this attack. We then demonstrate that group rings $R[G]$, where $R$ is a commutative ring and $G$ is a non-abelian group, are examples of non-commutative rings that satisfy these conditions.
On MILP-based Automatic Search for Bit-Based Division Property for Ciphers with (large) Linear Layers
With the introduction of the division trail, the bit-based division property (BDP) has become the most efficient method to search for integral distinguishers. The notion of the division trail allows us to automate the search process by modelling the propagation of the BDP as a set of constraints that can be solved using generic mixed-integer linear programming (MILP) and SMT/SAT solvers. The current models for the basic operations and S-boxes are efficient and accurate. In contrast, the two approaches to model the propagation of the BDP through non-bit-permutation linear layers are either inaccurate or inefficient. The first approach relies on decomposing the matrix multiplication of the linear layer into COPY and XOR operations. The model obtained by this approach is efficient in terms of the number of constraints, but it is not accurate and might add invalid division trails to the search space, which might lead to missing the balanced property of some bits. The second approach employs a one-to-one map between the valid division trails through the primitive matrix representing the linear layer and its invertible sub-matrices. While the current model obtained by this approach is accurate, it is inefficient, i.e., it produces a large number of constraints for large linear layers like the one of Kuznyechik. In this paper, we address this problem by utilizing the one-to-one map to propose a new MILP model and a search procedure for large non-bit-permutation linear layers. As a proof of the effectiveness of our approach, we improve the previous 3- and 4-round integral distinguishers of Kuznyechik and the 4-round one of PHOTON's internal permutation ($P_{288}$). We also report, for the first time, a 4-round integral distinguisher for the Kalyna block cipher and a 5-round integral distinguisher for PHOTON's internal permutation ($P_{288}$).
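For reference, the COPY/XOR decomposition mentioned above is usually modelled with the following standard division-trail constraints (our summary of the common modelling introduced by Xiang et al. at ASIACRYPT 2016, not a formula from this paper); roughly speaking, constraining each small gadget independently is what lets invalid trails through when a whole matrix is decomposed this way.

```latex
% Standard division-trail MILP models (all trail variables are binary):
\begin{align*}
  \text{COPY},\ x \mapsto (x, x) &: \quad a - b_1 - b_2 = 0,\\
  \text{XOR},\ (x, y) \mapsto x \oplus y &: \quad a_1 + a_2 - b = 0.
\end{align*}
```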
On the Cryptographic Deniability of the Signal Protocol
Offline deniability is the ability to a-posteriori deny having participated in a particular communication session. This property has been widely assumed for the Signal messaging application, yet no formal proof has appeared in the literature. In this paper, we present what we believe is the first formal study of the offline deniability of the Signal protocol. Our analysis shows that building a deniability proof for Signal is non-trivial and requires strong assumptions on the underlying mathematical groups where the protocol is run.
To do so, we study various *implicitly authenticated* key exchange protocols including MQV, HMQV and 3DH/X3DH, the latter being the core key agreement protocol in Signal. We first present examples of mathematical groups where running MQV results in a provably non-deniable interaction. While the concrete attack applies only to MQV, it also exemplifies the problems in attempting to prove the deniability of other implicitly authenticated protocols, such as 3DH. In particular, it shows that the intuition that the minimal transcript produced by these protocols suffices for ensuring deniability does not hold. We then provide a characterization of the groups where deniability holds, defined in terms of a knowledge assumption that extends the Knowledge of Exponent Assumption (KEA).
We conclude the paper by showing two additional positive results. The first is a general theorem that links the deniability of a communication session to the deniability of the key agreement protocol starting the session. This allows us to extend our results on the deniability of 3DH/X3DH to the entire Signal communication session.
Hydra: Succinct Fully Pipelineable Interactive Arguments of Knowledge
We present advancements for interactive arguments with Hydra, a novel verifiable computation system. Hydra introduces two new disjoint interactive argument protocols geared towards the efficient pipelining of circuit verification. The first is specific to subcircuits, where a deep circuit is broken up into smaller parts and proven concurrently. The second is a more general scheme where all layers of the circuit can be proven in parallel, removing the dependency on the layer-wise synchronous execution of the protocol. Compared to non-interactive SNARKs, which rely on knowledge-type assumptions (or the random oracle model), and to theoretical non-interactive arguments based on standard assumptions that are not useful in practice, Hydra achieves a sweet spot with a practical approach. From standard assumptions, Hydra collapses the round complexity to polylogarithmic in the width of the circuit, while incurring only a polylogarithmic blowup in bandwidth and verifier time complexity. We implement the full verification flow, including both protocols and a logic parser used to convert traditional logic-circuit compilation outputs into provable layered arithmetic representations. We perform experimental evaluations of our proposals and demonstrate protocol time-efficiency improvements of up to 34.8x and 4.3x, respectively, compared to traditional approaches on parallel hardware.
Security and Trust in Open Source Security Tokens
Using passwords for authentication has been proven vulnerable in countless security incidents. Hardware authentication tokens effectively prevent most password-related security issues and indisputably improve security. However, we would like to highlight new threats from attackers with physical access which need to be discussed. Supply-chain adversaries may manipulate devices on a large scale and install backdoors before they even reach end users. In evil-maid scenarios, specific devices may even be attacked while already in use. Hence, we thoroughly investigate the security and trustworthiness of eight commercially available open source authentication tokens, including devices from the two market leaders: SoloKeys and Nitrokey. Unfortunately, we identify and practically verify significant vulnerabilities in all eight examined tokens. Some of them are based on severe, previously undiscovered vulnerabilities of three major microcontroller products which are used at large scale in various products. Our findings clearly emphasize the significant threat from supply-chain and evil-maid scenarios, since the attacks are practical and only require moderate attacker effort. Fortunately, we are able to describe software-based countermeasures as effective improvements to retrofit the examined devices. To improve the security and trustworthiness of future authentication tokens, we also derive important general design recommendations.
Indifferentiable Signatures: High Performance and Fallback Security
Digital signatures have been widely used as building blocks for constructing complex cryptosystems.
To facilitate the security analysis of a complex system, we expect the underlying building blocks to achieve desirable composability.
Notably, Canetti (FOCS 2001) and then Maurer et al. (TCC 2004) proposed analysis frameworks: the Universal Composability framework for cryptographic protocols, and the indifferentiability framework for cryptographic objects.
In this paper, we develop a “lifting strategy” that allows us to compile multiple existing practical signature schemes over cyclic groups (e.g., Schnorr, Boneh-Boyen) so that they achieve a very stringent security guarantee in an idealized model of the generic (bilinear) group, without introducing much extra efficiency loss. More interestingly, in our design, even if the involved idealized model does not exist, our compiled constructions still achieve the classical notion of unforgeability.
To achieve both indifferentiability and good efficiency, we develop new techniques in the generic (bilinear) group model.
Efficient Constructions of Pairing Based Accumulators
Cryptographic accumulators are a crucial building block for a variety of applications where one needs to represent a set of elements in a compact format while still being able to provide proofs of (non)membership. In this work, we give a number of accumulator constructions for the bilinear pairing setting in the trapdoor-based scenario, where a trusted manager maintains the accumulator.
Using modular accumulator techniques, we present the first dynamic, positive accumulators in the pairing setting that are optimally efficient in terms of communication cost.
Additionally, we present a novel modular approach to constructing universal accumulators that avoid costly non-membership proofs. We instantiate our generic construction and present the first universal accumulator in the bilinear pairing setting that achieves constant parameter size, constant cost for element additions/deletions and witness generation by the manager, constant witness updates by the users, and constant-cost (non)membership verification.
We finally show how our proposed universal accumulator construction can give rise to efficient ZK accumulators with constant non-membership witness updates.
Doubly-Affine Extractors, and their Applications
In this work we challenge the common misconception that information-theoretic (IT) privacy is too impractical to be used in the real world: we propose to build simple and $\textit{reusable}$ IT-encryption solutions whose only efficiency penalty (compared to computationally secure schemes) comes from a large secret key size, which is often a rather minor inconvenience, as storage is cheap. In particular, our solutions are $\textit{stateless}$ and $\textit{locally computable at the optimal rate}$, meaning that honest parties do not maintain state and read only (optimally) small portions of their large keys with every use.
Moreover, we also propose a novel architecture for outsourcing the storage of these long keys to a network of semi-trusted servers, trading the need to store large secrets for the assumption that it is hard to simultaneously compromise too many publicly accessible ad-hoc servers. Our architecture supports $\textit{everlasting privacy}$ and $\textit{post-application security}$ of the derived one-time keys, resolving two major limitations of a related model for outsourcing key storage, called the bounded storage model.
Both of these results stem from nearly optimal constructions of so-called $\textit{doubly-affine extractors}$: locally computable, seeded extractors $\textbf{Ext}$(X,S) which are linear functions of X (for any fixed seed S) and protect against bounded affine leakage on X. This holds unconditionally, even if (a) the affine leakage $\textit{adaptively depends}$ on the extracted key R = $\textbf{Ext}$(X, S); and (b) the seed S is only $\textit{computationally}$ secure. Neither of these properties is possible with general-leakage extractors.
Communication Complexity of Private Simultaneous Quantum Messages Protocols
The private simultaneous messages (PSM) model is a non-interactive version of the multiparty secure computation (MPC), which has been intensively studied to examine the communication cost of the secure computation. We consider its quantum counterpart, the private simultaneous quantum messages (PSQM) model, and examine the advantages of quantum communication and prior entanglement of this model.
In the PSQM model, $k$ parties $P_1,\ldots,P_k$ initially share a common random string (or entangled states in a stronger setting), and they have private classical inputs $x_1,\ldots, x_k$. Every $P_i$ generates a quantum message from the private input $x_i$ and the shared random string (or entangled states), and then sends it to the referee $R$. Receiving the messages from the $k$ parties, $R$ computes $F(x_1,\ldots,x_k)$ from them. As the privacy condition, $R$ learns nothing beyond $F(x_1,\ldots,x_k)$.
We obtain the following results for this PSQM model. ($i$) We demonstrate that the privacy condition inevitably increases the communication cost in the two-party PSQM model as well as in the classical case presented by Applebaum, Holenstein, Mishra, and Shayevitz [Journal of Cryptology 33(3), 916--953 (2020)]. In particular, we prove a lower bound $(3-o(1))n$ of the communication complexity in PSQM protocols with a shared random string for random Boolean functions of $2n$-bit input, which is larger than the trivial upper bound $2n$ of the communication complexity without the privacy condition. ($ii$) We demonstrate a factor two gap between the communication complexity of PSQM protocols with shared entangled states and with shared random strings by designing a multiparty PSQM protocol with shared entangled states for a total function that extends the two-party equality function. ($iii$) We demonstrate an exponential gap between the communication complexity of PSQM protocols with shared entangled states and with shared random strings for a two-party partial function.
symKrypt: A General-purpose and Lightweight Symmetric-Key Cryptography
Symmetric-key cryptography is used widely due to its ability to provide a strong defense against diverse attacks; however, it is prone to cryptanalysis attacks. Therefore, we propose a novel and highly secure symmetric-key scheme, symKrypt for short, to defend against diverse attacks and provide absolute security. Our proposed algorithm changes private keys in each block of communication, i.e., symKrypt uses multiple private keys to encrypt a single block of a message. Moreover, symKrypt keeps secret both the bit mixing of the original message with the private keys and the number of private keys. In addition, the private keys are generated dynamically from the initial inputs using a pseudo-random number generator that is highly unpredictable and secure. In this article, we theoretically analyze the capabilities of symKrypt and provide an experimental demonstration using millions of private keys to validate its correctness. Furthermore, we evaluate the proposed pseudo-random number generator experimentally with the NIST SP 800-22 statistical test suite; it passes all 15 tests of the suite. symKrypt is the first model to use multiple private keys in encryption while remaining lightweight and powerful.
Setting Up Efficient TFHE Parameters for Multivalue Plaintexts and Multiple Additions
Unlike traditional and/or standardized ciphers, TFHE offers much freedom in the setup of its parameters. Not only does the parameter choice affect the plaintext space size and security, it also greatly impacts the performance of TFHE, in particular its bootstrapping.
In this paper, we provide an exhaustive description of TFHE, including its foundations, (functional) bootstrapping and error propagation during all operations. In addition, we outline a bootstrapping scenario without the key switching step. Based on our thorough summary, we suggest an approach for the setup of TFHE parameters with particular respect to bootstrapping efficiency. Finally, we propose twelve setups of real-world TFHE parameters for six different scenarios with and without key switching, respectively, and we compare their performance.
N.b.: This is a technical paper, mainly intended for researchers interested in TFHE. However, thanks to its self-containment, it should also be accessible to readers with only a basic knowledge of TFHE.
CTIDH: faster constant-time CSIDH
This paper introduces a new key space for CSIDH and a new algorithm for constant-time evaluation of the CSIDH group action. The key space is not useful with previous algorithms, and the algorithm is not useful with previous key spaces, but combining the new key space with the new algorithm produces speed records for constant-time CSIDH. For example, for CSIDH-512 with a 256-bit key space, the best previous constant-time results used 789000 multiplications and more than 200 million Skylake cycles; this paper uses 438006 multiplications and 125.53 million cycles.
Internet Computer Consensus
We present the Internet Computer Consensus (ICC) family of protocols for atomic broadcast (a.k.a. consensus), which underpin the Byzantine fault-tolerant replicated state machines of the Internet Computer. The ICC protocols are leader-based protocols that assume partial synchrony and are fully integrated with a blockchain. The leader changes probabilistically in every round. These protocols are extremely simple and robust: in any round where the leader is corrupt (which itself happens with probability less than $1/3$), each ICC protocol effectively allows another party to take over as leader for that round, with very little fuss, to move the protocol forward to the next round in a timely fashion. Unlike in many other protocols, there are no complicated subprotocols (such as "view change" in PBFT) or unspecified subprotocols (such as "pacemaker" in HotStuff). Moreover, unlike in many other protocols (such as PBFT and HotStuff), the task of reliably disseminating the blocks to all parties is an integral part of the protocol, and not left to some other unspecified subprotocol. An additional property enjoyed by the ICC protocols (just like PBFT and HotStuff, and unlike others, such as Tendermint) is optimistic responsiveness, which means that when the leader is honest, the protocol will proceed at the pace of the actual network delay, rather than some upper bound on the network delay.
We present three different protocols (along with various minor variations on each). One of these protocols (ICC1) is designed to be integrated with a peer-to-peer gossip sub-layer, which reduces the bottleneck created at the leader for disseminating large blocks, a problem that all leader-based protocols, like PBFT and HotStuff, must address, but typically do not. Our protocol ICC2 addresses the same problem by substituting a low-communication reliable broadcast subprotocol (which may be of independent interest) for the gossip sub-layer.
SwapCT: Swap Confidential Transactions for Privacy-Preserving Multi-Token Exchanges
Decentralized token exchanges allow for the secure trading of tokens without a trusted third party. However, decentralization is mostly achieved at the expense of transaction privacy. For a fair exchange, transactions must remain private, hiding the participants and volumes while maintaining the possibility of non-interactive execution of trades. In this paper we present a swap confidential transaction system (SwapCT) which is related to ring confidential transactions (e.g., as used in Monero) but supports multiple token types to trade among and enables secure, partial transactions for non-interactive swaps. We prove that SwapCT is secure in a strict, formal model and demonstrate its efficient performance in a prototype implementation with logarithmic signature sizes for large anonymity sets. For our construction we design an aggregatable signature scheme which might be of independent interest. Our SwapCT system thereby enables a secure and private exchange of tokens without a trusted third party.
Non-Interactive CCA2-Secure Threshold Cryptosystems: Achieving Adaptive Security in the Standard Model Without Pairings
We consider threshold public-key encryption, where the decryption servers distributively hold the private key shares, and we need a threshold of these servers to decrypt a message (while the system remains secure when fewer than the threshold are corrupt). We investigate the notion of chosen-ciphertext secure threshold systems, which has historically been hard to achieve. We further require the systems to be both adaptively secure (i.e., secure against a strong adversary making corruption decisions dynamically during the protocol) and non-interactive (i.e., where decryption servers do not interact amongst themselves but rather each efficiently contributes a single message). To date, only pairing-based implementations were known to achieve security in the standard security model without relaxation (i.e., without assuming the random oracle idealization) under the above stringent requirements. Here, we investigate how to achieve the above using other assumptions, in order to understand what other algebraic building blocks and mathematical assumptions are needed to extend the domain of encryption methods achieving the above. Specifically, we show realizations under the Decision Composite Residuosity (DCR) and Learning-With-Errors (LWE) assumptions.
SoK: How private is Bitcoin? Classification and Evaluation of Bitcoin Mixing Techniques
Blockchain is a disruptive technology that promises a multitude of benefits such as transparency, traceability, and immutability. However, this unique bundle of key characteristics rapidly proved to be a double-edged sword that can put user privacy at risk.
Unlike traditional systems, Bitcoin transactions are publicly and permanently recorded, and anyone can access the full history of the records. Despite using pseudonymous identities, an adversary can undermine the financial privacy of users and reveal their actual identities using advanced heuristics and techniques to identify eventual links between transactions, senders, receivers, and consumed services (e.g., online purchases). In this regard, a multitude of approaches has been proposed in an attempt to overcome financial transparency and enhance user anonymity. These techniques range from using mixing services to off-chain transactions and address different privacy issues. In this survey, we particularly focus on comparing and evaluating mixing techniques in the Bitcoin blockchain, present their limitations, and highlight the new challenges.
The Availability-Accountability Dilemma and its Resolution via Accountability Gadgets
For applications of Byzantine fault tolerant (BFT) consensus protocols where the participants are economic agents, recent works highlighted the importance of accountability: the ability to identify participants who provably violate the protocol. At the same time, being able to reach consensus under dynamic levels of participation is desirable for censorship resistance. We identify an availability-accountability dilemma: in an environment with dynamic participation, no protocol can simultaneously be accountably-safe and live. We provide a resolution to this dilemma by constructing a provably secure optimally-resilient accountability gadget to checkpoint a longest chain protocol, such that the full ledger is live under dynamic participation and the checkpointed prefix ledger is accountable. Our accountability gadget construction is black-box and can use any BFT protocol which is accountable under static participation. Using HotStuff as the black box, we implemented our construction as a protocol for the Ethereum 2.0 beacon chain, and our Internet-scale experiments with more than 4000 nodes show that the protocol achieves the required scalability and has better latency than the current solution Gasper, which was shown insecure by recent attacks.
VeRSA: Verifiable Registries with Efficient Client Audits from RSA Authenticated Dictionaries
Verifiable registries allow clients to securely access a key-value mapping maintained by an untrusted server. Registries must be audited to ensure global invariants are preserved, which, in turn, allows for efficient monitoring of individual registry entries by their owners. To this end, existing proposals either assume trusted third-party auditors or rely on incrementally verifiable computation (IVC) via expensive recursive SNARKs to make registries client-auditable.
In this work, we give new client-auditable verifiable registries with throughputs up to $100\times$ greater than baseline IVC solutions. Our approach relies on an authenticated dictionary based on RSA accumulators for which we develop a new constant-size invariant proof. We use this as a replacement for Merkle trees to optimize the baseline IVC approach, but also provide a novel construction which dispenses with SNARKs entirely. This latter solution adopts a new checkpointing method to ensure client view consistency.
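As an editorial illustration, here is a toy Python sketch of the textbook RSA accumulator that underlies such authenticated dictionaries: elements are mapped to primes, the accumulator exponentiates a base by their product, and a membership witness omits one prime. This is background only, not VeRSA's construction, and its constant-size invariant proofs are not shown; the modulus below is a toy value.

```python
# Toy RSA accumulator (textbook construction; background for the above).
N = 3233 * 3499   # toy modulus; real deployments use a 2048-bit RSA modulus
g = 2

def accumulate(primes):
    """acc = g^(product of primes) mod N, computed by iterated powering."""
    acc = g
    for q in primes:
        acc = pow(acc, q, N)
    return acc

def membership_witness(primes, q):
    """Witness for q: the accumulator computed without q's contribution."""
    return accumulate([r for r in primes if r != q])

primes = [101, 103, 107]            # prime representatives of the elements
acc = accumulate(primes)
w = membership_witness(primes, 103)
assert pow(w, 103, N) == acc        # verifier's check: w^q == acc (mod N)
```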
Help, my Signal has bad Device! Breaking the Signal Messenger’s Post-Compromise Security through a Malicious Device
In response to ongoing discussions about data usage by companies and governments, and its implications for privacy, there is a growing demand for secure communication techniques. While messenger apps at their advent focused on features rather than security, this has changed in recent years: many have since adopted end-to-end encryption as a standard feature. One of the most popular solutions is the Signal messenger, which aims to guarantee forward secrecy (i.e., security of previous communications in case of leakage of long-term secrets) and future secrecy (i.e., security of future communications in case of leakage of short-term secrets).
If every user uses exactly one device, it is known that Signal achieves forward secrecy and even post-compromise security (i.e. security of future communications in case of leakage of long-term secrets).
But the Signal protocol also allows for the use of multiple devices via the Sesame protocol.
This multi-device setting is typically ignored in the security analysis of Signal.
In this work, we discuss the security of the Signal messenger in this multi-device setting.
We show that the current implementation of the device registration allows an attacker to register their own malicious device, which gives them unrestricted access to all future communication of their victim and even allows full impersonation.
This directly shows that the current Signal implementation does not guarantee post-compromise security.
We discuss several countermeasures, both simple ones aiming to increase detectability of our attack, as well as a broader approach that seeks to solve the root issue, namely the weak device registration flow.
Plactic key agreement (insecure?)
Plactic key agreement is a new key agreement scheme that uses Knuth's multiplication of semistandard tableaux from combinatorial algebra. The security of plactic key agreement relies on the difficulty of some computational problems, such as the division of semistandard tableaux.
Division by erosion uses backtracking to divide tableaux. It is estimated to be infeasible against public keys of 768 or more bytes. If division by erosion is the best attack against plactic key agreement, then secure plactic key agreement could be practical.
Chris Monico found a new attack on plactic key agreement, which is fast, potentially polynomial-time, and could very well make plactic key agreement insecure.
Group Structure in Correlations and its Applications in Cryptography
Correlated random variables are a key tool in cryptographic applications like secure multi-party computation. We investigate the power of a class of correlations that we term group correlations: a group correlation is a uniform distribution over pairs $(x,y) \in G^2$ such that $x+y\in S$, where $G$ is a (possibly non-abelian) group and $S$ is a subset of $G$. We also introduce bi-affine correlations and show how they relate to group correlations. We present several structural results, new protocols, and applications of these correlations. The new applications include a completeness result for black-box group computation, perfectly secure protocols for evaluating a broad class of black-box ``mixed-group'' circuits with bi-affine homomorphisms, and new information-theoretic results. Finally, we uncover a striking structure underlying oblivious linear evaluation (OLE): in particular, we show that OLE over $\mathrm{GF}(2^n)$ is isomorphic to a group correlation over $\mathbb{Z}_4^n$.
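As an editorial illustration, the definition above can be sampled directly for tiny groups; the following Python sketch uses rejection sampling over $\mathbb{Z}_8$, purely to make the definition concrete, not as the way protocols would generate such correlations.

```python
import random

def sample_group_correlation(G, S, add):
    """Sample (x, y) uniformly from all pairs in G x G with add(x, y) in S."""
    while True:
        x, y = random.choice(G), random.choice(G)
        if add(x, y) in S:
            return x, y

# Toy instance: G = Z_8 with modular addition, S = {0, 4}.
G = list(range(8))
S = {0, 4}
x, y = sample_group_correlation(G, S, lambda a, b: (a + b) % 8)
print(x, y, (x + y) % 8)   # the sum always lands in S
```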
Mining in Logarithmic Space
Blockchains maintain two types of data: application data and consensus data. Towards long-term blockchain scalability, both of these must be pruned. While a large body of literature has explored the pruning of application data (UTXOs, account balances, and contract state), little has been said about the permanent pruning of consensus data (block headers). We present a protocol which allows pruning the blockchain by garbage-collecting old blocks as they become unnecessary. These blocks can simply be discarded and are no longer stored by any miner. We show that all miners can be light miners with no harm to security. Our protocol is based on the notion of superblocks, blocks that have achieved an unusually high difficulty. We leverage them to represent the underlying proof-of-work without ever illustrating it, storing it, or transmitting it. After our pruning is applied, the storage and communication requirements for consensus data are reduced exponentially.
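As an editorial illustration, the superblock notion (going back to the NIPoPoW line of work this builds on) can be made concrete in a few lines: a valid block is a level-$\mu$ superblock when its hash clears the target even after $\mu$ extra halvings. The hash function, header, and target below are illustrative.

```python
import hashlib

def superblock_level(header: bytes, target: int) -> int:
    """Largest mu such that H(header) <= target / 2^mu, i.e. how much
    'extra' difficulty the block achieved beyond bare validity."""
    h = int.from_bytes(hashlib.sha256(header).digest(), "big")
    assert h <= target, "not a valid proof-of-work"
    level = 0
    while level < 255 and h <= target >> (level + 1):
        level += 1
    return level

target = 1 << 256                 # toy target so the demo header is 'valid'
print(superblock_level(b"example block header", target))
```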
We develop new probabilistic mathematical methods to analyze our protocol in the random oracle model. We prove our protocol is both secure and succinct under an uninterrupted honest majority assumption for $1/3$ adversaries. Our protocol is the first to achieve always secure, always succinct, and online Non-Interactive Proofs of Proof-of-Work, all necessary components for a logarithmic space mining scheme. Our work has applications beyond mining and also constitutes an improvement in state-of-the-art superlight clients and cross-chain bridges.
Stealth: A Highly Secured End-to-End Symmetric Communication Protocol
Symmetric-key cryptography is applied in almost all secure communications to protect sensitive information from attackers, for instance in banking, and thus requires extra attention due to its diverse applications. Moreover, it is vulnerable to various attacks, for example cryptanalysis attacks, which are possible in single-keyed encryption systems. State-of-the-art symmetric communication protocols use a single secret key to encrypt/decrypt the entire exchange of data/messages, which poses a security threat. Therefore, in this paper we present a new secure communication protocol based on the Diffie-Hellman cryptographic algorithm, called Stealth. It is a symmetric-key cryptographic protocol designed to enhance the security of modern communication using truly random numbers, and it additionally applies a pseudo-random number generator. Initially, Stealth uses the Diffie-Hellman algorithm to compute four shared secret keys. These shared secret keys are used to generate four different private keys to encrypt the first block of the message for symmetric communication. Stealth changes its private keys in each communication round, making it very hard to break the protocol. Moreover, the four shared secret keys create additional complexity for the adversary to overcome, and hence Stealth can provide very tight security for communications. Stealth replaces neither the existing protocol nor the authentication mechanism; rather, it adds another security layer to the existing protocol to tighten the security measures.
R-SWAP: Relay based atomic cross-chain swap protocol
In this paper, we consider the problem of cross-chain transactions, where parties that do not trust each other safely exchange digital assets across blockchains. Open blockchain models are decentralized ledgers that keep records of transactions, comparable to distributed account books. While they have proven their potential as a store of value, exchanging assets across several blockchains remains a challenge. Our paper proposes a new protocol, R-SWAP, for cross-chain swaps that outperforms existing solutions. Our protocol is built on top of two abstractions, relays and adapters, which we formalize for the first time in this paper. Furthermore, we prove the correctness of R-SWAP and analytically evaluate its performance in terms of cost and latency. Moreover, we evaluate the performance of R-SWAP in two case studies showing the generality of our approach: atomic swaps between Ethereum and Bitcoin (two popular permissionless blockchains) and atomic swaps between Ethereum and Tendermint (one permissionless and one permissioned blockchain).
Algebraic attacks on block ciphers using quantum annealing
This paper presents a method for transforming the algebraic equations of a symmetric cipher into the QUBO problem. After transforming the given equations $f_1, f_2, \dots, f_n$ into equations over the integers $f'_1, f'_2, \dots, f'_n$, one linearizes each of them, obtaining $f'_{lin_i}=lin(f'_i)$, where $lin$ denotes the linearization operation. Finally, one obtains the problem in QUBO form as $\left( f'_{lin_1} \right)^2+\dots+\left( f'_{lin_n} \right)^2+Pen$, where $Pen$ denotes the penalties incurred during the linearization of the equations and $n$ is the number of equations.
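As an editorial illustration, here is a worked toy instance of this pipeline in Python: the Boolean equation $x_1 x_2 + x_1 + x_3 = 1$ is linearized by replacing the product $x_1 x_2$ with an auxiliary bit $y$ (enforced by the standard Rosenberg penalty), and the squared linearized equation plus the penalty is the QUBO energy; assignments of energy zero are exactly the solutions. The equation and penalty weight are our toy choices, not values from the paper.

```python
from itertools import product

P = 10  # penalty weight; chosen large enough to dominate the squared term

def qubo_energy(x1, x2, x3, y):
    eq = (y + x1 + x3 - 1) ** 2                            # (f'_lin)^2
    pen = P * (x1 * x2 - 2 * x1 * y - 2 * x2 * y + 3 * y)  # forces y = x1 * x2
    return eq + pen

# Energy-0 assignments correspond exactly to solutions of x1*x2 + x1 + x3 = 1.
for bits in product((0, 1), repeat=4):
    if qubo_energy(*bits) == 0:
        print(dict(zip(("x1", "x2", "x3", "y"), bits)))
```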
In this paper, we show examples of the transformation of some block ciphers into the QUBO problem. Moreover, we present the result of transforming the full AES-128 cipher into the QUBO problem, where the number of variables of the equivalent QUBO problem is equal to $237,915$. This means, at least theoretically, that the problem may be solved using the D-Wave Advantage quantum annealing computer. Unfortunately, it is hard to estimate the time this process would require.
Polar Coding for Ring-LWE-Based Public Key Encryption
Cryptographic constructions based on $\textit{ring learning with errors}$ (RLWE) have emerged as one of the front runners for the standardization of post-quantum public key cryptography. As the standardization process continues, optimizing specific parts of proposed schemes becomes a worthwhile endeavor. In this work we focus on using error-correcting codes to alleviate a natural trade-off present in most schemes: we would like a wider error distribution to increase security, but a wider error distribution comes at the cost of an increased probability of decryption error. The motivation of this work is to improve the security level of RLWE-based public key encryption (PKE) while keeping the target decryption failure rate (DFR) achievable using error-correcting codes. Specifically, we explore how to implement a family of error-correcting codes known as polar codes in RLWE-based PKE schemes in order to effectively lower the DFR. The dependency existing in the additive noise term is handled by mapping every error term (e.g., $e,t,s,e_1,e_2$) under the canonical embedding to the space $H$, where a product in the number field $K$ gives rise to a coordinate-wise product in $H$. An attempt has been made to make the modulation constellation (message basis) fit in with the canonical basis. Furthermore, we exploit the fact that some error terms are known to the decoder to further lower the DFR. Using our method, the DFR is expected to be as low as $2^{-298}$ for code rate 0.25, $n=1024,q=12289$, and binomial parameter $k=8$, which is exactly the setting of the post-quantum scheme NewHope; the DFR is $2^{-156}$ for code rate 0.25, $n=1024,q=12289,k=16$. This new DFR margin enables us to improve the security level by $9.4\%$ compared with NewHope. Moreover, polar encoding and decoding have quasi-linear complexity $O(N\log N)$ and can be implemented in constant time.
Quantum Secure Privacy Preserving Technique to Obtain the Intersection of Two Datasets for Contact Tracing
Contact tracing has emerged as a powerful and effective measure to curb the spread of contagious diseases. It is a robust tool, but on the downside it carries a risk of privacy violations, as contact tracing requires gathering a lot of personal information. So there is a need for a cryptographic primitive that obfuscates the personal data of the user. Taking everything into account, private set intersection (PSI) seems to be the natural choice to address the problem. Nearly all of the existing PSI protocols rely on hard problems based on number-theoretic assumptions. However, these problems are not secure in the quantum domain. As a consequence, it becomes essential to design PSI that can resist quantum attacks and provide long-term security. One may apply quantum cryptography to develop such a PSI protocol. This paper deals with the design of PSI using quantum cryptography (QC), where the security depends on the principles of basic quantum mechanics. Our scheme achieves long-term security and remains secure against quantum attacks due to the use of QC. As opposed to the existing quantum PSI protocols, the communication and computation costs of our scheme are independent of the size of the universal set. In particular, the proposed protocol achieves optimal communication and computation costs in the domain of quantum PSI. Moreover, we require only single-photon quantum resources and simple single-particle projective measurements, unlike most of the existing quantum PSI protocols.
Quantum Encryption with Certified Deletion, Revisited: Public Key, Attribute-Based, and Classical Communication
Broadbent and Islam (TCC '20) proposed a quantum cryptographic primitive called quantum encryption with certified deletion.
In this primitive, a receiver in possession of a quantum ciphertext can generate a classical certificate that the encrypted message is deleted.
Although their construction is information-theoretically secure, it is limited to the setting of one-time symmetric key encryption (SKE), where a sender and receiver have to share a common key in advance and the key can be used only once. Moreover, the sender has to generate a quantum state and send it to the receiver over a quantum channel in their construction.
In the definition by Broadbent and Islam, deletion certificates are privately verifiable, meaning that the verification key for a certificate has to be kept secret; one can, however, also consider public verifiability.
In this work, we present various constructions of encryption with certified deletion.
- Quantum communication case: We achieve (reusable-key) public key encryption (PKE) and attribute-based encryption (ABE) with certified deletion.
Our PKE scheme with certified deletion is constructed assuming the existence of IND-CPA secure PKE, and our ABE scheme with certified deletion is constructed assuming the existence of indistinguishability obfuscation and one-way functions. These two schemes are privately verifiable.
- Classical communication case: We also achieve PKE with certified deletion that uses only classical communication.
We give two schemes, a privately verifiable one and a publicly verifiable one. The former is constructed assuming the LWE assumption in the quantum random oracle model. The latter is constructed assuming the existence of one-shot signatures and extractable witness encryption.
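As intuition for how such certificates can work at all, here is a toy classical simulation (ours) of the conjugate-coding idea underlying Broadbent and Islam's construction: bits encoded in the Hadamard basis survive a Hadamard-basis measurement, while bits encoded in the computational basis (the ones that hide the plaintext) are irreversibly randomized by it; checking the surviving positions requires the secret bases, which is exactly why such certificates are only privately verifiable.

import secrets

# Toy classical simulation of conjugate coding (illustration only;
# the actual scheme uses genuine quantum states and privacy
# amplification on the computational-basis positions).
N = 16
bits  = [secrets.randbelow(2) for _ in range(N)]   # encoded bits
theta = [secrets.randbelow(2) for _ in range(N)]   # 0: computational, 1: Hadamard

def measure_all_hadamard(bits, theta):
    # A Hadamard-basis qubit measured in the Hadamard basis returns
    # its bit; a computational-basis qubit measured in the Hadamard
    # basis returns a uniformly random outcome (its data is destroyed).
    return [b if t == 1 else secrets.randbelow(2) for b, t in zip(bits, theta)]

certificate = measure_all_hadamard(bits, theta)

# Verification needs the secret theta, hence *private* verifiability.
ok = all(certificate[i] == bits[i] for i in range(N) if theta[i] == 1)
print("certificate accepted:", ok)
# After deletion, the receiver has lost the computational-basis
# positions, which are the ones that mask the plaintext.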
An Efficient and Generic Construction for Signal's Handshake (X3DH): Post-Quantum, State Leakage Secure, and Deniable
The Signal protocol is a secure instant messaging protocol that underlies the security of numerous applications such as WhatsApp, Skype, and Facebook Messenger, among many others. The Signal protocol consists of two sub-protocols known as the X3DH protocol and the double ratchet protocol, of which the latter has recently gained much attention. For instance, Alwen, Coretti, and Dodis (Eurocrypt'19) provided a concrete security model along with a generic construction based on simple building blocks that are instantiable from versatile assumptions, including post-quantum ones. In contrast, as far as we are aware, works focusing on the X3DH protocol seem limited.
In this work, we cast the X3DH protocol as a specific type of authenticated key exchange (AKE) protocol, which we call a Signal-conforming AKE protocol, and formally define its security model based on the vast prior works on AKE protocols. We then provide the first efficient generic construction of a Signal-conforming AKE protocol based on standard cryptographic primitives such as key encapsulation mechanisms (KEM) and signature schemes. Specifically, this results in the first post-quantum secure replacement of the X3DH protocol on well-established assumptions. Similar to the X3DH protocol, our Signal-conforming AKE protocol offers a strong (or stronger) flavor of security, where the exchanged key remains secure even when all the non-trivial combinations of the long-term secrets and session-specific secrets are compromised. Moreover, our protocol has a weak flavor of deniability and we further show how to progressively strengthen it using ring signatures and/or non-interactive zero-knowledge proof systems. Finally, we provide a full-fledged, generic C implementation of our (weakly deniable) protocol. We instantiate it with several Round 3 candidates (finalists and alternates) to the NIST post-quantum standardization process and compare the resulting bandwidth and computation performances. Our implementation is publicly available.
A Tutorial on Concurrent Zero Knowledge
In this tutorial, we provide a brief overview of Concurrent Zero Knowledge and then present a simple proof of the existence of Concurrent Zero-Knowledge arguments for NP based on one-way permutations.
Unprovability of Leakage-Resilient Cryptography Beyond the Information-Theoretic Limit
In recent years, leakage-resilient cryptography---the design of cryptographic protocols resilient to bounded leakage of honest players' secrets---has received significant attention. A major limitation of known provably-secure constructions (based on polynomial hardness assumptions) is that they require the secrets to have sufficient actual (i.e., information-theoretic), as opposed to computational, min-entropy even after the leakage.
In this work, we present barriers to provably-secure constructions beyond the ``information-theoretic barrier'': Assume the existence of collision-resistant hash functions. Then, no NP search problem with a $2^{n^{\epsilon}}$-bounded number of witnesses can be proven (even worst-case) hard in the presence of $O(n^{\epsilon})$ bits of computationally-efficient leakage of the witness, using a black-box reduction to any $O(1)$-round assumption. In particular, this implies that $O(n^{\epsilon})$-leakage-resilient injective one-way functions, and more generally one-way functions with at most $2^{n^{\epsilon}}$ pre-images, cannot be based on any ``standard'' complexity assumption using a black-box reduction.
Attribute-Based Conditional Proxy Re-Encryption in the Standard Model under LWE
Attribute-based conditional proxy re-encryption (AB-CPRE) allows delegators to exercise attribute-based control over the delegation of decryption by setting policies and attribute vectors. The fine-grained control of AB-CPRE makes it suitable for a variety of applications, such as cloud storage and distributed file systems. However, all existing AB-CPRE schemes are constructed under classical number-theoretic assumptions, which are vulnerable to quantum cryptanalysis. Therefore, we propose the first AB-CPRE scheme based on the learning with errors (LWE) assumption. Constructed from fully key-homomorphic encryption (FKHE) and key-switching techniques, our scheme is unidirectional and single-hop, and it supports policies expressed as Boolean circuits of polynomial depth. Furthermore, we split the ciphertext into two independent parts to avoid two-level or multi-level encryption/decryption mechanisms. Taking advantage of this, we extend our single-hop AB-CPRE into an efficient and concise multi-hop scheme: no matter how many transformations are performed, the re-encrypted ciphertext remains constant in size, and only one encryption/decryption algorithm is needed. Both of our schemes are proved selectively secure against chosen-plaintext attacks (CPA) in the standard model.
Privacy-preserving Density-based Clustering
Clustering is an unsupervised machine learning technique that outputs clusters containing similar data items. In this work, we investigate privacy-preserving density-based clustering, which is used, for example, in financial analytics and medical diagnosis. When (multiple) data owners collaborate or outsource the computation, privacy concerns arise.
To address this problem, we design, implement, and evaluate the first practical and fully private density-based clustering scheme based on secure two-party computation. Our protocol privately executes the DBSCAN algorithm without disclosing any information (including the number and size of clusters). It can be used for private clustering between two parties as well as for private outsourcing by an arbitrary number of data owners to two non-colluding servers. Our implementation of the DBSCAN algorithm privately clusters data sets with 400 elements in 7 minutes on commodity hardware. It flexibly determines the number of required clusters and is insensitive to outliers, while being only a factor of 19 slower than today's fastest private K-means protocol (Mohassel et al., PETS'20), which can only be used for specific data sets. We then show how to transfer our newly designed protocol to related clustering algorithms by introducing a private approximation of the TRACLUS algorithm for trajectory clustering, which has interesting real-world applications like financial time series forecasts and the investigation of the spread of a disease like COVID-19.
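For context, this is the plaintext logic that such a protocol must evaluate obliviously (a textbook DBSCAN sketch in Python, ours; eps and min_pts are the usual density parameters, and nothing here is private):

# Plaintext DBSCAN sketch (textbook version; the paper's contribution
# is evaluating this logic under secure two-party computation, hiding
# even the number and size of clusters).
def dbscan(points, eps, min_pts):
    NOISE, UNSEEN = -1, None
    labels = [UNSEEN] * len(points)
    dist = lambda a, b: sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
    neighbors = lambda i: [j for j in range(len(points))
                           if dist(points[i], points[j]) <= eps]
    cluster = 0
    for i in range(len(points)):
        if labels[i] is not UNSEEN:
            continue
        nbrs = neighbors(i)
        if len(nbrs) < min_pts:              # not a core point
            labels[i] = NOISE
            continue
        labels[i] = cluster
        queue = [j for j in nbrs if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == NOISE:
                labels[j] = cluster          # border point
            if labels[j] is not UNSEEN:
                continue
            labels[j] = cluster
            nbrs_j = neighbors(j)
            if len(nbrs_j) >= min_pts:       # j is a core point: expand
                queue.extend(nbrs_j)
        cluster += 1
    return labels

print(dbscan([(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (50, 50)],
             eps=2.0, min_pts=2))            # -> [0, 0, 0, 1, 1, -1]

Every data-dependent branch above (neighborhood sizes, queue growth, cluster counts) leaks information if executed in the clear, which is what makes a fully private realization challenging.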
Some Applications of Hamming Weight Correlations
It is a well-known fact that the power consumption during certain stages of a cryptographic algorithm exhibits a strong correlation with the Hamming Weight of its underlying variables. This phenomenon has been widely exploited in the cryptographic literature in various attacks targeting a broad range of schemes, such as block ciphers or public-key cryptosystems. A common way of breaking this correlation is to include countermeasures that introduce additional randomness into the computation, in the form of hidden (undisclosed) component functions or masking strategies that complicate the inference of any sensitive information from the gathered power traces.
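As a concrete illustration of the unprotected setting, the following sketch (ours) runs a generic first-order correlation power analysis on simulated traces with a known S-box; the attacks presented below go further and recover secret components, but the core statistic is the same Hamming-Weight correlation.

import numpy as np

# Generic CPA sketch: the Hamming Weight of the first-round S-box
# output leaks into the power trace, so the correct key byte
# maximizes the correlation between predicted HW and the traces.
SBOX = np.arange(256, dtype=np.uint8)        # stand-in bijective S-box
np.random.shuffle(SBOX)
HW = np.array([bin(v).count("1") for v in range(256)])

rng = np.random.default_rng(1)
true_key = 0x3C
plaintexts = rng.integers(0, 256, size=2000)
# Simulated single-sample traces: HW leakage plus Gaussian noise.
traces = HW[SBOX[plaintexts ^ true_key]] + rng.normal(0, 2.0, size=2000)

def corr(guess):
    model = HW[SBOX[plaintexts ^ guess]]     # predicted leakage
    return abs(np.corrcoef(model, traces)[0, 1])

best = max(range(256), key=corr)
print(f"recovered key byte: {best:#04x} (true: {true_key:#04x})")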
In this work, we revisit the tight correlation between the Hamming Weight and the observed power consumption of an algorithm and demonstrate, in the first part, a practical reverse-engineering attack on proprietary AES-like constructions with secret internal components such as the SubBytes, MixColumns and ShiftRows functions. Such constructions are used in some commercial products, for example the Dynamic Encryption package from the communication services provider Dencrypt, as an extra layer of security. We recover the encryption key alongside the hidden substitution and permutation layers as well as the MixColumns matrix on both 8-bit and 32-bit architectures. In a second effort, we shift our attention to a masked implementation of AES, specifically the secAES proposal put forward by the French National Cybersecurity Agency (ANSSI), which concisely combines several side-channel countermeasure techniques. We show its insecurity via a novel side-channel-assisted statistical key-recovery attack that requires only a few hundred collected power traces.
A Weighted Bit Flipping Decoder for QC-MDPC-based Cryptosystems
A new ``Weighted Bit-Flipping'' (WBF) iterative decoder is presented and analyzed with respect to its Decoding Failure Rate (DFR). We show that its DFR is lower than that of the BGF decoder suggested in the BIKE third-round submission to the NIST PQC standardization process. The WBF decoder requires more iterations to complete than BGF, but by creating a hybrid decoder we show that a lower DFR than that of the BGF decoder can still be achieved while keeping the computational overhead to a minimum.
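For background, the plain bit-flipping loop that WBF and BGF refine looks roughly as follows (a generic sketch, ours, on a dense toy parity-check matrix; real QC-MDPC decoders such as BIKE's operate on sparse quasi-cyclic matrices with carefully chosen, and in WBF's case weighted, flipping criteria).

import numpy as np

# Generic bit-flipping decoding sketch: repeatedly flip the bits that
# participate in the most unsatisfied parity checks. WBF replaces the
# plain counter below with a weighted reliability metric.
def bit_flip_decode(H, y, max_iters=20):
    y = y.copy()
    for _ in range(max_iters):
        syndrome = (H @ y) % 2                # unsatisfied checks
        if not syndrome.any():
            return y                          # decoding success
        # Count, per bit, the unsatisfied checks it participates in.
        upc = H.T @ syndrome
        y[upc == upc.max()] ^= 1              # flip the worst offenders
    return None                               # failure (counts toward the DFR)

# Toy example: (7,4) Hamming-style parity-check matrix, one bit error.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
received = np.zeros(7, dtype=int)             # all-zero codeword sent
received[2] ^= 1                              # inject a single error
print(bit_flip_decode(H, received))           # -> the all-zero codeword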
FairMM: A Fast and Frontrunning-Resistant Crypto Market-Maker
Frontrunning is a major problem in DeFi applications such as blockchain-based exchanges. However, existing solutions are not practical and/or make external trust assumptions. In this work we propose a market-maker-based crypto-token exchange which is both more efficient than existing solutions and offers provable resistance to frontrunning attacks. Our approach cleverly combines a game-theoretic analysis of market-makers with new cryptography and blockchain tools to defend against all three ways in which an exchange might front-run, i.e., (1) reorder trade requests, (2) adaptively drop trade requests, and (3) adaptively insert (its own) trade requests. Concretely, we propose novel lightweight cryptographic tools and smart-contract-enforced incentives to eliminate reordering attacks and to ensure that any dropping of requests must be oblivious to (uninformed about) the actual trade. We then prove that with these attacks eliminated, a so-called monopolistic market-maker no longer has incentives to add or drop trades. We have implemented and benchmarked our exchange and provide concrete evidence of its advantages over existing solutions.
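As background intuition for the obliviousness requirement (a plain hash-based commit-reveal sketch, ours, and not FairMM's actual mechanism): if traders first submit hiding commitments, the market-maker must sequence or drop each request before learning its content, so neither reordering nor dropping can depend on the trade itself.

import hashlib, secrets

# Commit-reveal intuition: the market-maker orders (or drops) an
# opaque commitment, so its decision cannot depend on the trade.
def commit(trade: bytes):
    r = secrets.token_bytes(32)                      # blinding randomness
    return hashlib.sha256(r + trade).digest(), r     # (commitment, opening)

def verify(commitment: bytes, r: bytes, trade: bytes) -> bool:
    return hashlib.sha256(r + trade).digest() == commitment

# Phase 1: trader commits; market-maker sequences the opaque value.
trade = b"BUY 10 TOKEN @ limit 42"
com, opening = commit(trade)

# Phase 2: once the sequence is fixed, the trader reveals.
assert verify(com, opening, trade)
print("trade revealed and matched at its committed position")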
Layering diverse cryptography to lower risks of future and secret attacks: post-quantum estimates
Layering diverse cryptography is a general method to lower the risk of a future, or secret, cryptanalytic attack on a system. This report describes methods to quantifiably estimate this risk reduction.
Diversity is especially helpful for forward security, because future attackers have more time to discover new attacks; the independence of attacks against diverse cryptographic layers then becomes the major contribution to risk reduction. Post-quantum security is a part of forward security.
For highly sensitive data, the estimates indicate that the security advantage of diverse layering is worth the extra usage cost, thus supporting a decision to layer diverse cryptography.
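To illustrate the independence effect with a toy calculation (numbers ours, not the report's): if two independently designed layers are each broken within the protection period with probability $2^{-10}$, and the attack events are independent, the layered combination fails only with probability $2^{-10} \cdot 2^{-10} = 2^{-20}$, whereas deploying two copies of the same scheme would leave the risk at $2^{-10}$.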
Signed (Group) Diffie-Hellman Key Exchange with Tight Security
We propose the first tight security proof for the ordinary two-message signed Diffie-Hellman key exchange protocol in the random oracle model. Our proof is based on the strong computational Diffie-Hellman assumption and the multi-user security of a digital signature scheme. With our security proof, the signed DH protocol can be deployed with optimal parameters, independent of the number of users or sessions, without the need to compensate any security loss.
We abstract our approach with a new notion called verifiable key exchange.
In contrast to a known tight three-message variant of the signed Diffie-Hellman protocol (Gjøsteen and Jager, CRYPTO 2018), we do not require any modification to the original protocol, and our tightness result is proven in the ``Single-Bit-Guess'' model which we know can be tightly composed with symmetric cryptographic primitives to establish a secure channel.
Finally, we extend our approach to the group setting and construct the first tightly secure group authenticated key exchange protocol.
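For reference, the ordinary two-message signed Diffie-Hellman flow analyzed here looks roughly as follows (a minimal sketch, ours, instantiated with X25519 and Ed25519 from the pyca/cryptography package; the protocol's actual key derivation and session identifiers are simplified to a single hash over the transcript).

import hashlib
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey, X25519PublicKey)

RAW = dict(encoding=serialization.Encoding.Raw,
           format=serialization.PublicFormat.Raw)

# Long-term signing keys, assumed to be distributed authentically.
sk_A, sk_B = Ed25519PrivateKey.generate(), Ed25519PrivateKey.generate()

# Message 1 (Alice -> Bob): a signed ephemeral share X = g^x.
x = X25519PrivateKey.generate()
X = x.public_key().public_bytes(**RAW)
sig_A = sk_A.sign(b"A->B" + X)

# Bob verifies (raises InvalidSignature on failure), then replies with
# Message 2 (Bob -> Alice): his signed ephemeral share Y = g^y.
sk_A.public_key().verify(sig_A, b"A->B" + X)
y = X25519PrivateKey.generate()
Y = y.public_key().public_bytes(**RAW)
sig_B = sk_B.sign(b"B->A" + X + Y)
sk_B.public_key().verify(sig_B, b"B->A" + X + Y)   # Alice's check

# Both sides derive the same session key from transcript and g^{xy}.
k_B = hashlib.sha256(X + Y + y.exchange(X25519PublicKey.from_public_bytes(X))).digest()
k_A = hashlib.sha256(X + Y + x.exchange(X25519PublicKey.from_public_bytes(Y))).digest()
assert k_A == k_B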
ZK-PCPs from Leakage-Resilient Secret Sharing
Zero-Knowledge PCPs (ZK-PCPs; Kilian, Petrank, and Tardos, STOC '97) are PCPs with the additional zero-knowledge guarantee that the view of any (possibly malicious) verifier making a bounded number of queries to the proof can be efficiently simulated up to a small statistical distance. Similarly, ZK-PCPs of Proximity (ZK-PCPPs; Ishai and Weiss, TCC '14) are PCPPs in which the view of an adversarial verifier can be efficiently simulated with few queries to the input.
Previous ZK-PCP constructions obtained an exponential gap between the query complexity $q$ of the honest verifier and the bound $q^*$ on the queries of a malicious verifier (i.e., $q = \mathrm{polylog}(q^*)$), but required either exponential-time simulation or adaptive honest verification. This should be contrasted with standard PCPs, which can be verified non-adaptively (i.e., with a single round of queries to the proof). The problem of constructing such ZK-PCPs, even when $q^* = q$, has remained open since they were first introduced more than two decades ago. This question is also open for ZK-PCPPs, for which no construction with non-adaptive honest verification is known (not even with exponential-time simulation).
We resolve this question by constructing the first ZK-PCPs and ZK-PCPPs which simultaneously achieve efficient zero-knowledge simulation and non-adaptive honest verification. Our schemes have a square-root query gap, namely $q^*/q = O(\sqrt{n})$, where $n$ is the input length.
Our constructions combine the "MPC-in-the-head" technique (Ishai et al., STOC '07) with leakage-resilient secret sharing. Specifically, we use the MPC-in-the-head technique to construct a ZK-PCP variant over a large alphabet, then employ leakage-resilient secret sharing to design a new alphabet reduction for ZK-PCPs which preserves zero-knowledge.