All papers in 2018 (Page 7 of 1249 results)
No-signaling Linear PCPs
In this paper, we give a no-signaling linear probabilistically checkable proof (PCP) system for polynomial-time deterministic computation, i.e., a PCP system for P such that (1) the honest PCP oracle is a linear function and (2) the soundness holds against any (computational) no-signaling cheating prover, who is allowed to answer each query according to a distribution that depends on the entire query set in a certain way. To the best of our knowledge, our construction is the first PCP system that satisfies these two properties simultaneously.
As an application of our PCP system, we obtain a 2-message delegating computation scheme by using a known transformation. Compared with the existing 2-message delegating computation schemes that are based on standard cryptographic assumptions, our scheme requires preprocessing but has a simpler structure and makes use of different (possibly cheaper) standard cryptographic primitives, namely additive/multiplicative homomorphic encryption schemes.
Semi-Commutative Masking: A Framework for Isogeny-based Protocols, with an Application to Fully Secure Two-Round Isogeny-based OT
We define semi-commutative invertible masking structures which aim to capture the methodology of exponentiation-only protocol design (such as discrete logarithm and isogeny-based cryptography). We discuss two instantiations: the first is based on commutative group actions and captures both the action of exponentiation in the discrete logarithm setting and the action of the class group of commutative endomorphism rings of elliptic curves, in the style of the CSIDH key-exchange protocol; the second is based on the semi-commutative action of isogenies of supersingular elliptic curves, in the style of the SIDH key-exchange protocol.
We then construct two oblivious transfer protocols using this new structure and prove that they UC-securely realise the oblivious transfer functionality in the random-oracle hybrid model against passive adversaries with static corruptions.
Moreover, by starting from one of these two protocols and using the compiler introduced by Döttling et al. (Eurocrypt 2020), we achieve the first fully UC-secure two-round OT protocol based on supersingular isogenies.
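The commutative instantiation named in the abstract can be illustrated concretely in the discrete-logarithm setting, where the masking action is plain exponentiation. The sketch below is an illustrative toy (the group, parameters, and function names are ours, not the paper's): masks commute, and an invertible mask is removed with the inverse exponent modulo the group order.

```python
import math
import secrets

p = 2**127 - 1            # Mersenne prime; the masking acts on the group Z_p^*

def sample_mask():
    # an invertible mask is an exponent coprime to the group order p - 1
    while True:
        a = secrets.randbelow(p - 1)
        if math.gcd(a, p - 1) == 1:
            return a

def mask(x, a):
    # apply the masking action: x -> x^a (mod p)
    return pow(x, a, p)

def unmask(y, a):
    # invert the action using a^{-1} mod (p - 1)
    return pow(y, pow(a, -1, p - 1), p)

x = 123456789
a, b = sample_mask(), sample_mask()
# semi-commutativity (here: full commutativity of exponentiation)
assert mask(mask(x, a), b) == mask(mask(x, b), a)
# invertibility of each mask
assert unmask(mask(x, a), a) == x
```

The isogeny-based instantiations expose the same interface, with the class-group or isogeny action playing the role of exponentiation.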
A new perspective on the powers of two descent for discrete logarithms in finite fields
A new proof is given for the correctness of the powers-of-two descent method for computing discrete logarithms. The result is slightly stronger than the original work, but more importantly we provide a unified geometric argument, eliminating the need to analyse all possible subgroups of $\mathrm{PGL}_2(\mathbb{F}_q)$. Our approach sheds new light on the role of $\mathrm{PGL}_2$, in the hope of eventually leading to a complete proof that discrete logarithms can be computed in quasi-polynomial time in finite fields of fixed characteristic.
Pseudo Flawed-Smudging Generators and Their Application to Indistinguishability Obfuscation
We introduce Pseudo Flawed-smudging Generators (PFGs). A PFG is an expanding function whose outputs $\mathbf Y$ satisfy a weak form of pseudo-randomness. Roughly speaking, for some polynomial bound $B$, and every distribution $\chi$ over $B$-bounded noise vectors, it guarantees that the distribution of $(\mathbf e,\ \mathbf Y + \mathbf e)$ is indistinguishable from that of $(\mathbf e', \mathbf Y + \mathbf e)$, where $\mathbf e \gets \chi$ is a random sample from $\chi$, and $\mathbf e'$ is another independent sample from $\chi$ conditioned on agreeing with $\mathbf e$ at a few, $o(\lambda)$, coordinates. In other words, $\mathbf Y$ "hides" $\mathbf e$ at all but a few coordinates. We show that assuming LWE and the existence of constant-locality Pseudo-Random Generators (PRGs), there is a construction of IO from 1) a PFG that has polynomial stretch and polynomially bounded outputs, and 2) a Functional Encryption (FE) scheme able to compute this PFG. Such an FE scheme can be built from degree-$d$ multilinear maps if the PFG is computable by a degree-$d$ polynomial.
Toward basing IO on bilinear maps, inspired by [Ananth et al., ePrint 2018], we further consider PFGs with partial public input --- they have the form $g(\mathbf{x}, \mathbf{y})$ and satisfy the aforementioned pseudo flawed-smudging property even when $\mathbf{x}$ is public. When using such PFGs, it suffices to replace FE with the weaker notion of partially hiding FE (PHFE), whose decryption reveals the public input $\mathbf{x}$ in addition to the output of the computation. We construct PHFE for polynomials $g$ that are quadratic in the private input $\mathbf{y}$, but have up to polynomial degree in the public input $\mathbf{x}$, subject to certain size constraints, from the SXDH assumption over bilinear map groups.
Regarding candidates of PFGs with partial public input, we note that the family of cubic polynomials proposed by Ananth et al. can serve as candidate PFGs, and can be evaluated by our PHFE from bilinear maps. Toward having more candidates, we present a transformation for converting the private input $\mathbf{x}$ of a constant-degree PFG $g(\mathbf{x}, \mathbf{y})$ into a public input, by hiding $\mathbf{x}$ as noises in LWE samples, provided that $\mathbf{x}$ is sampled from an LWE noise distribution and $g$ satisfies a stronger security property.
Mitigating the One-Use Restriction in Attribute-Based Encryption
We present a key-policy attribute-based encryption scheme that is adaptively secure under a static assumption and is not directly affected by an attribute "one-use restriction." Our construction improves upon the only other such scheme (Takashima '17) by mitigating its downside of a ciphertext size that is dependent on the maximum size of any supported attribute set.
Hide The Modulus: A Secure Non-Interactive Fully Verifiable Delegation Scheme for Modular Exponentiations via CRT
Security protocols using public-key cryptography often require a large number of costly modular exponentiations (MEs). With the proliferation of resource-constrained (mobile) devices and advancements in cloud computing, delegation of such expensive computations to powerful service providers has gained considerable attention. In this paper, we address the problem of verifiably secure delegation of MEs using two servers, at most one of which is assumed to be malicious (the OMTUP model). We first point out verifiability issues in two recent schemes: we show that a scheme from IndoCrypt 2016 does not offer full verifiability, and that a scheme for $n$ simultaneous MEs from AsiaCCS 2016 is verifiable only with probability $0.5909$ for $n=10$, instead of the claimed probability of $0.9955$. We then propose the first non-interactive fully verifiable secure delegation scheme, which hides the modulus via the Chinese Remainder Theorem (CRT). Our scheme also considerably improves the computational efficiency of previous schemes. Hence, we provide a lightweight delegation scheme enabling weak clients to securely and verifiably delegate MEs without any expensive local computation (neither online nor offline). The proposed scheme is highly useful for devices with (a) only ultra-lightweight memory and (b) limited computational power (e.g., sensor nodes and RFID tags).
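The delegation scheme itself is not reproduced here, but the CRT identity it builds on is standard: an exponentiation modulo $N = pq$ splits into two smaller exponentiations modulo $p$ and $q$ (with exponents reduced via Fermat's little theorem), recombined by CRT. A minimal sketch with toy primes (all names and parameters here are illustrative):

```python
# Background sketch of CRT-based modular exponentiation (as in RSA-CRT).
# This only illustrates the underlying identity, not the paper's scheme.
p, q = 10007, 10009          # toy primes
N = p * q

def exp_crt(u, a, p, q):
    # compute u^a mod p and mod q with exponents reduced by Fermat
    xp = pow(u % p, a % (p - 1), p)
    xq = pow(u % q, a % (q - 1), q)
    # Garner-style recombination to a value mod N = p*q
    q_inv = pow(q, -1, p)
    h = (q_inv * (xp - xq)) % p
    return xq + q * h

u, a = 123456, 65537
assert exp_crt(u, a, p, q) == pow(u, a, N)
```

The reduction requires $\gcd(u, N) = 1$; the two half-size exponentiations are roughly four times cheaper in total than the full one.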
XCLAIM: Trustless, Interoperable Cryptocurrency-Backed Assets
Building trustless cross-blockchain trading protocols is challenging. Centralized exchanges thus remain the preferred route to execute transfers across blockchains. However, these services require trust and therefore undermine the very nature of the blockchains on which they operate. To overcome this, several decentralized exchanges have recently emerged which offer support for atomic cross-chain swaps (ACCS). ACCS enable the trustless exchange of cryptocurrencies across blockchains, and are the only known mechanism to do so. However, ACCS suffer significant limitations; they are slow, inefficient and costly, meaning that they are rarely used in practice.
We present XCLAIM: the first generic framework for achieving trustless and efficient cross-chain exchanges using cryptocurrency-backed assets (CbAs). XCLAIM offers protocols for issuing, transferring, swapping and redeeming CbAs securely in a non-interactive manner on existing blockchains. We instantiate XCLAIM between Bitcoin and Ethereum and evaluate our implementation; it costs less than USD 0.50 to issue an arbitrary amount of Bitcoin-backed tokens on Ethereum. We show XCLAIM is not only faster, but also significantly cheaper than atomic cross-chain swaps. Finally, XCLAIM is compatible with the majority of existing blockchains without modification, and enables several novel cryptocurrency applications, such as cross-chain payment channels and efficient multi-party swaps.
Commit-Chains: Secure, Scalable Off-Chain Payments
Current permissionless blockchains suffer from scalability limitations. To scale without changing the underlying blockchain, one avenue is to lock funds into blockchain smart contracts (collateral) and enact transactions off the blockchain via accountable peer-to-peer messages. Disputes among peers are resolved with appropriate collateral redistribution on the blockchain. In this work we lay the foundations for commit-chains, a novel off-chain scaling solution for existing blockchains in which an untrusted and non-custodial operator commits the state of its users' account balances via constant-sized, periodic checkpoints. Users dispute operator misbehavior via a smart contract. The commit-chain paradigm enables, for the first time, off-chain users to receive payments while offline. Moreover, locked funds can be managed efficiently at constant communication cost, alleviating collateral fragmentation.
We instantiate two account-based commit-chain constructions: NOCUST, based on a cost-effective challenge-response dispute mechanism, and NOCUST-ZKP, which provides provably correct operation via zkSNARKs. These constructions offer a trade-off between correctness, verification, and efficiency, while both are practical and ensure key properties such as balance safety; that is, no honest user loses coins. We implemented both constructions on a smart-contract-enabled blockchain. Our evaluation demonstrates that NOCUST's operational costs in terms of computation and communication scale logarithmically in the number of users and transactions, and allow very efficient lightweight clients (a user involved in e.g. 100 daily transactions only needs to store a constant 46 kb of data, allowing secure payments even on mobile devices). NOCUST has been operational in production since March 2019.
Membership Privacy for Fully Dynamic Group Signatures
Group signatures present a trade-off between the traditional goals of digital signatures and the signer's desire for privacy, allowing for the creation of unforgeable signatures in the name of a group that reveal nothing about the actual signer's identity beyond their group membership. Considering the desired properties formally opens up a possibility space of different security goals under various assumptions on the trust placed in the designated entities of any scheme. Many models also differ in their consideration of the variability of group membership, yet a formal treatment of the privacy of group membership status has thus far been lacking in all models.
We address this issue, starting from the vantage point of the comprehensive model due to Bootle et al. (ACNS'16), who prove that any scheme secure in their model is also secure in the previous models. Their model allows for fully dynamic management of group membership by segmenting the scheme's lifetime into epochs during which group membership is static but between which users may join or leave the group.
We extend the model of Bootle et al. by introducing formal notions of membership privacy. We then propose an efficient generic construction for a fully dynamic group signature scheme with membership privacy that is based on signatures with flexible public key (SFPK) and signatures on equivalence classes (SPSEQ). We instantiate the construction using an SFPK scheme based on the bilinear decisional Diffie-Hellman assumption and the SPSEQ scheme of Fuchsbauer and Gay (PKC'18). The resulting scheme provides shorter signatures than existing schemes from standard assumptions, while at the same time achieving stronger security guarantees.
Lower Bounds on Structure-Preserving Signatures for Bilateral Messages
Lower bounds for structure-preserving signature (SPS) schemes based on non-interactive assumptions have only been established in the case of unilateral messages, i.e., schemes signing tuples of group elements all from the same source group. In this paper, we consider the case of bilateral messages, consisting of elements from both source groups. We show that, for Type-III bilinear groups, SPSs must consist of at least 6 group elements: many more than the 4 elements needed in the unilateral case, and optimal, as it matches a known upper bound from the literature. We also obtain the first non-trivial lower bounds for SPSs in Type-II groups: a minimum of 4 group elements, whereas constructions with 3 group elements are known from interactive assumptions.
Function-Dependent Commitments for Verifiable Multi-Party Computation
In cloud computing, delegated computing raises the security issue of guaranteeing data authenticity during a remote computation. Existing solutions do not simultaneously provide fast correctness verification, strong security properties, and information-theoretic confidentiality. We introduce a novel approach, in the form of function-dependent commitments, that combines these strengths. We also provide an instantiation of function-dependent commitments for linear functions that is unconditionally, i.e. information-theoretically, hiding and relies on standard hardness assumptions. This powerful construction can for instance be used to build verifiable computing schemes providing information-theoretic confidentiality. As an example, we introduce a verifiable multi-party computation scheme for shared data providing public verifiability and unconditional privacy towards the servers and parties verifying the correctness of the result. Our scheme can be used to perform verifiable computations on secret shares while requiring only a single party to compute the audit data for verification. Furthermore, our verification procedure is asymptotically even more efficient than performing operations locally on the shared data. Thus, our solution improves the state of the art for authenticated computing, verifiable computing and multi-party computation.
BurnBox: Self-Revocable Encryption in a World of Compelled Access
Dissidents, journalists, and others require technical means to protect their privacy in the face of compelled access to their digital devices (smartphones, laptops, tablets, etc.). For example, authorities increasingly force disclosure of all secrets, including passwords, to search devices upon national border crossings. We therefore present the design, implementation, and evaluation of a new system to help victims of compelled searches. Our system, called BurnBox, provides self-revocable encryption: the user can temporarily disable their access to specific files stored remotely, without revealing which files were revoked during compelled searches, even if the adversary also compromises the cloud storage service. They can later restore access. We formalize the threat model and provide a construction that uses an erasable index, secure erasure of keys, and standard cryptographic tools in order to provide security supported by our formal analysis. We report on a prototype implementation, which showcases the practicality of BurnBox.
Efficient Fully Homomorphic Encryption Scheme
Since Gentry discovered the first fully homomorphic encryption scheme in 2009, the last few years have witnessed dramatic progress on designing more efficient homomorphic encryption schemes, and some of them have been implemented for applications. The main bottlenecks are bootstrapping and large cipher expansion (the ratio of the size of ciphertexts to that of messages). Ducas and Micciancio (2015) showed that homomorphic computation of one bit operation on LWE ciphertexts can be done in less than a second, a time subsequently reduced by Chillotti et al. (2016, 2017) to 13 ms. This paper presents a compact fully homomorphic encryption scheme with the following features: (a) its cipher expansion is 6 with private-key encryption and 20 with public-key encryption; (b) all ciphertexts after any number (unbounded) of homomorphic bit operations have the same size and are always valid with the same error size; (c) its security is based on the LWE and RLWE problems (with binary secret keys), and the cost of breaking the scheme by current approaches is at least $2^{160}$ bit operations. The scheme protects function privacy and provides a simple solution for secure two-party computation and zero-knowledge proofs of any language in NP.
Lattice-Based Dual Receiver Encryption and More
Dual receiver encryption (DRE), proposed by Diament et al. at ACM CCS 2004, is a special extension of public-key encryption that enables two independent receivers to decrypt a ciphertext into the same plaintext. This primitive is quite useful in designing combined public-key cryptosystems and denial-of-service attack-resilient protocols. To date, a series of DRE schemes have been constructed from bilinear pairing groups and lattices. In this work, we introduce a construction of lattice-based DRE. Our scheme is indistinguishable against chosen-ciphertext attacks (IND-CCA) under the standard Learning with Errors (LWE) assumption, with a public key of bit-size about $2nm\log q$, where $m$ and $q$ are small polynomials in $n$. Additionally, for the DRE notion in the identity-based setting, identity-based DRE (IB-DRE), we also give a lattice-based IB-DRE scheme that achieves chosen-plaintext and adaptively chosen-identity security based on the LWE assumption, with public parameter size about $(2\ell+1)nm\log q$, where $\ell$ is the bit-size of the identities in the scheme.
On linear hulls in one round of DES
At Indocrypt 2016, Ashur et al. showed that linear hulls are sometimes formed in a single round of a cipher (exemplified on the Simon ciphers) and that the success rate of an attack may be influenced by the quality of the estimation of one-round correlations. This paper improves our understanding of one-round linear hulls and trails by studying the one-round linear hulls of the DES cipher, more precisely of its $f$-function. It shows that, in the case of DES, the existence of one-round hulls is related to the number of active S-boxes, and their correlation depends on a fixed set of key bits. All the ideas presented in this paper are illustrated with examples and verified experimentally.
Partially Specified Channels: The TLS 1.3 Record Layer without Elision
We advance the study of secure stream-based channels (Fischlin et al., CRYPTO ’15) by considering the multiplexing of many data streams over a single channel, an essential feature of real world protocols such as TLS. Our treatment adopts the definitional perspective of Rogaway and Stegers (CSF ’09), which offers an elegant way to reason about what standardizing documents actually provide: a partial specification of a protocol that admits a collection of compliant, fully realized implementations. We formalize partially specified channels as the component algorithms of two parties communicating over a channel. Each algorithm has an oracle that provides specification details; the algorithms abstract the things that must be explicitly specified, while the oracle abstracts the things that need not be. Our security notions, which capture a variety of privacy and integrity goals, allow the adversary to respond to these oracle queries; security relative to these notions implies that the channel withstands attacks in the presence of worst-case (i.e., adversarial) realizations of the specification details. We apply this framework to a formal treatment of the TLS 1.3 record and, in doing so, show that its security hinges crucially upon details left unspecified by the standard.
New Methods for Indistinguishability Obfuscation: Bootstrapping and Instantiation
Constructing indistinguishability obfuscation (iO) [BGI+01] is a central open question in cryptography. We provide new methods to make progress towards this goal. Our contributions may be summarized as follows:
1. \textbf{Bootstrapping}. In a recent work, Lin and Tessaro [LT17] (LT) show that iO may be constructed using i) Functional Encryption (FE) for polynomials of degree $L$, ii) Pseudorandom Generators (PRG) with blockwise locality $L$ and polynomial expansion, and iii) Learning With Errors (LWE). Since there exist constructions of FE for quadratic polynomials from standard assumptions on bilinear maps [Lin17, BCFG17], the ideal scenario would be to set $L = 2$, yielding iO from widely believed assumptions.
Unfortunately, it was shown soon after [LV17, BBKK17] that PRGs with block locality $2$ and the expansion factor required by the LT construction, concretely $\Omega(n\cdot 2^{b(3+\epsilon)})$, where $n$ is the input length and $b$ is the block length, do not exist. In the worst case, these lower bounds rule out 2-block-local PRGs with stretch $\Omega(n \cdot 2^{b(2+\epsilon)})$. While [LV17, BBKK17] provided strong negative evidence for constructing iO based on bilinear maps, they could not rule out the possibility completely; a tantalizing gap has remained. Given the current state of lower bounds, the existence of 2-block-local PRGs with expansion factor $\Omega(n\cdot 2^{b(1+\epsilon)})$ remains open, although this stretch does not suffice for the LT bootstrapping, and its relevance for iO is hence unclear.
In this work, we improve the state of affairs as follows.
(a) Weakening requirements on PRGs: In this work, we show that the narrow window of expansion factors left open by the lower bounds does suffice for iO. We show a new method to construct FE for $NC_1$ from i) FE for degree $L$ polynomials, ii) PRGs of block locality $L$ and expansion factor $\Omega(n\cdot2^{b(2+\epsilon)})$, and iii) LWE (or RLWE). Our method of bootstrapping is completely different from all known methods and does not go via randomizing polynomials. This re-opens the possibility of realizing iO from 2-block-local PRGs, SXDH on bilinear maps, and LWE.
(b) Broadening the class of sufficient PRGs: Our bootstrapping theorem may be instantiated with a broader class of pseudorandom generators than hitherto considered for iO, and may circumvent lower bounds known for the arithmetic degree of iO-sufficient PRGs [LV17, BBKK17]; in particular, these may admit instantiations with arithmetic degree $2$, yielding iO under the additional assumptions of SXDH on bilinear maps and LWE. In more detail, we may use the following two classes of PRGs:
i) Non-Boolean PRGs: We may use pseudorandom generators whose inputs and outputs need not be Boolean but may be integers restricted to a small (polynomial) range. Additionally, the outputs are not required to be pseudorandom but must only satisfy a milder indistinguishability property. We tentatively propose instantiating these PRGs using the multivariate quadratic assumption (MQ), which has been widely studied in the literature [MI88, Wol05, DY09] and against the general case of which no efficient attacks are known.
We note that our notion of non-Boolean PRGs is qualitatively equivalent to the notion of $\Delta$RGs defined in the concurrent work of Ananth, Jain, Khurana and Sahai [AJKS18], except that $\Delta$RGs are weaker, in that they allow the adversary to win the game with $1/\mathrm{poly}$ probability, whereas we require that the adversary only wins with standard negligible probability. By relying on the security amplification theorem of [AJKS18] in a black-box way, our construction can also make do with the weaker notion of security considered by [AJKS18].
ii) Correlated Noise Generators: We introduce an even weaker class of pseudorandom generators, which we call correlated noise generators (CNG) which may not only be non-Boolean but are required to satisfy an even milder (seeming) indistinguishability property.
(c) Assumptions and Efficiency. Our bootstrapping theorems can be based on the hardness of the Learning With Errors problem (LWE) or its ring variant (RLWE) and can compile FE for degree $L$ polynomials directly to FE for $NC_1$. Previous work compiles FE for degree $L$ polynomials to FE for $NC_0$ to FE for $NC_1$ to iO [LV16,Lin17,AS17,GGHRSW13].
2. Instantiating Primitives. In this work, we provide the first direct candidate of FE for constant-degree polynomials from new assumptions on lattices. Our construction is new and, unlike all previous constructions, does not go via multilinear maps or graded encoding schemes. In more detail, let $\mathcal{F}$ be the class of circuits with depth $d$ and output length $\ell$. Then, for any $f \in \mathcal{F}$, our scheme achieves $\mathsf{Time}(\mathsf{keygen}) = O\big(\mathrm{poly}(\kappa, |f|)\big)$ and $\mathsf{Time}(\mathsf{encrypt}) = O\big(|\mathbf{x}|\cdot 2^d \cdot \mathrm{poly}(\kappa)\big)$, where $\kappa$ is the security parameter. This suffices to instantiate the bootstrapping step above. Our construction is based on the ring learning with errors assumption (RLWE) as well as new, untested assumptions on NTRU rings.
We provide a detailed security analysis and discuss why previously known attacks in the context of multilinear maps, especially zeroizing attacks and annihilation attacks, do not appear to apply to our setting. We caution that the assumptions underlying our construction must be subject to rigorous cryptanalysis before any confidence can be gained in their security. However, their significant departure from known multilinear-map-based constructions makes them, we feel, a potentially fruitful new direction to explore. Additionally, being based entirely on lattices, we believe that security against classical attacks will likely imply security against quantum attacks. Note that this feature is not enjoyed by instantiations that make any use of bilinear maps, even if secure instances of weak PRGs, as defined by the present work, the follow-up by Lin and Matt [LM18] and the independent work by Ananth, Jain, Khurana and Sahai [AJKS18], are found.
CHARIOT: Cloud-Assisted Access Control for the Internet of Things
The Internet of Things (IoT) technology has expanded widely across the world, promising new data management opportunities for industries, companies and individuals in different sectors, such as health services or transport logistics. This trend relies on connecting devices/things to collect, exchange and store data.
The exponentially increasing number of IoT devices, their diverse origins, their limited resources, and the ever-increasing amount of data raise new challenges for security and privacy protection, precluding traditional access control solutions from being integrated into this new environment. In this paper, we propose a reliable server-aided policy-based access control mechanism, named CHARIOT, that enables an IoT platform to verify credentials of different devices requesting access (read/write) to the data stored within it. CHARIOT permits IoT devices to authenticate themselves to the platform without compromising their privacy by using attribute-based signatures. Our solution also allows secure delegation of costly computational operations to a cloud server, hence relieving the workload on the IoT devices' side.
Efficient Construction of the Boomerang Connection Table
Recently, the Boomerang Connection Table (BCT) was introduced by Cid et al. as a tool to better evaluate the probability of a boomerang distinguisher. To compute the BCT of an $n$-bit to $n$-bit S-box, the inventors of the BCT proposed an algorithm that takes $O(2^{3n})$ time. We show that one can construct the same table in only $O(2^{2n})$ time.
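One way to see how the cubic cost can be avoided (a hedged sketch, not necessarily identical to the paper's algorithm) uses the identity $\mathrm{BCT}[a][b] = \#\{x : T_b(x) = T_b(x\oplus a)\}$ with $T_b(x) = x \oplus S^{-1}(S(x)\oplus b)$: for each $b$, bucket the inputs by $T_b$ and charge every in-bucket pair $(x_1, x_2)$ to entry $(x_1\oplus x_2, b)$. The total work equals the sum of all BCT entries, which stays close to $2^{2n}$ for cryptographically good S-boxes:

```python
# Sketch of a fast BCT construction via bucketing (illustrative).
from collections import defaultdict

S = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]  # PRESENT S-box
n = 4
Sinv = [0] * (1 << n)
for x, y in enumerate(S):
    Sinv[y] = x

def bct(S, Sinv, n):
    size = 1 << n
    table = [[0] * size for _ in range(size)]
    for b in range(size):
        buckets = defaultdict(list)
        for x in range(size):
            buckets[x ^ Sinv[S[x] ^ b]].append(x)   # group x by T_b(x)
        for xs in buckets.values():                 # each in-bucket pair hits one entry
            for x1 in xs:
                for x2 in xs:
                    table[x1 ^ x2][b] += 1
    return table

T = bct(S, Sinv, n)
assert all(T[0][b] == 16 for b in range(16))        # trivial first row
assert all(T[a][0] == 16 for a in range(16))        # trivial first column
# cross-check every entry against the O(2^{3n}) definition
for a in range(16):
    for b in range(16):
        direct = sum(Sinv[S[x] ^ b] ^ Sinv[S[x ^ a] ^ b] == a for x in range(16))
        assert T[a][b] == direct
```

The identity follows by substituting $T_b$ into the BCT definition: the defining condition $S^{-1}(S(x)\oplus b) \oplus S^{-1}(S(x\oplus a)\oplus b) = a$ rearranges exactly to $T_b(x) = T_b(x\oplus a)$.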
Characterizing overstretched NTRU attacks
Overstretched NTRU, an NTRU variant with a large modulus, has been used as a building block for several cryptographic schemes in recent years. Recently, two lattice \emph{subfield attacks} and a \emph{subring attack} were proposed that broke some suggested parameters for overstretched NTRU. These attacks work by decreasing the dimension of the lattice to be reduced, which improves the performance of the lattice basis reduction algorithm. However, there are a number of conflicting claims in the literature over which of these attacks has the best performance. These claims are typically based more on experiments than on analysis. Furthermore, the metric for comparison has been unclear in some prior work. In this paper, we argue that the correct metric should be the lattice dimension. We show both analytically and experimentally that the subring attack succeeds on a smaller-dimension lattice than the subfield attack for the same problem parameters, and also succeeds with a smaller modulus when the lattice dimension is fixed.
Context Hiding Multi-Key Linearly Homomorphic Authenticators
Demanding computations are increasingly outsourced to cloud platforms. For such outsourced computations, efficient verifiability of results is a crucial requirement. When sensitive data is involved, the verification of a computation should preserve the privacy of the input values: it should be context hiding. Context hiding verifiability is enabled by existing homomorphic authenticator schemes. However, until now, no context hiding homomorphic authenticator scheme has supported multiple independent clients, i.e., multiple keys. Multi-key support is necessary for datasets involving inputs authenticated by different clients, e.g., multiple hospitals in e-health scenarios. In this paper, we propose the first perfectly context hiding, publicly verifiable multi-key homomorphic authenticator scheme supporting linear functions. Our scheme is provably unforgeable in the standard model and succinct. Verification time depends only linearly on the number of clients, in an amortized sense.
Last updated: 2018-07-23
Dynamic Searchable Symmetric Encryption Schemes Supporting Range Queries with Forward (and Backward) Security
Dynamic searchable symmetric encryption (DSSE) is a useful cryptographic tool for encrypted cloud storage. However, it has been reported that DSSE usually suffers from file-injection attacks and content leakage of deleted documents. To mitigate these attacks, forward security and backward security have been proposed. Nevertheless, the existing forward/backward-secure DSSE schemes can only support single-keyword queries. To address this problem, in this paper, we propose two DSSE schemes supporting range queries. One is forward-secure and supports a large number of documents. The other achieves both forward security and backward security, but only supports a limited number of documents. Finally, we give security proofs of the proposed DSSE schemes in the random oracle model.
Simple Verifiable Delay Functions
We construct a verifiable delay function (VDF) by showing how the Rivest-Shamir-Wagner time-lock puzzle can be made publicly verifiable.
Concretely, we give a statistically sound public-coin protocol to prove that a tuple $(N,x,T,y)$ satisfies $y=x^{2^T}\pmod N$, where the prover does not know the factorization of $N$ and its running time is dominated by solving the puzzle, that is, computing $x^{2^T}$, which is conjectured to require $T$ sequential squarings. To get a VDF, we make this protocol non-interactive using the Fiat-Shamir heuristic.
The motivation for this work comes from the Chia blockchain design, which uses a VDF as a key ingredient. For typical parameters ($T\le 2^{40}$, $N$ of $2048$ bits), our proofs are of size around 10 KB, verification costs around three RSA exponentiations, and computing the proof is $8000$ times faster than solving the puzzle, even without any parallelism.
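The halving structure behind such a proof can be sketched as follows (a toy, with an illustrative hash-based challenge and a small modulus; the paper's concrete protocol and parameters may differ): to prove $y = x^{2^T}$, the prover reveals the midpoint $\mu = x^{2^{T/2}}$, a challenge $r$ folds the claim into one of half the length via $x' = x^r\mu$, $y' = \mu^r y$, and the recursion bottoms out at a single squaring the verifier checks directly.

```python
import hashlib

def sqs(x, t, N):
    # t sequential squarings: x^(2^t) mod N
    for _ in range(t):
        x = x * x % N
    return x

def challenge(x, y, mu, T):
    # Fiat-Shamir challenge (illustrative hash-to-int)
    h = hashlib.sha256(f"{x},{y},{mu},{T}".encode()).digest()
    return int.from_bytes(h[:16], "big")

def prove(x, T, N):
    # T must be a power of two
    y = sqs(x, T, N)
    proof, cx, cy = [], x, y
    while T > 1:
        mu = sqs(cx, T // 2, N)          # midpoint value cx^(2^{T/2})
        proof.append(mu)
        r = challenge(cx, cy, mu, T)
        cx = pow(cx, r, N) * mu % N      # fold the claim in half:
        cy = pow(mu, r, N) * cy % N      # cy = cx^(2^T) is preserved
        T //= 2
    return y, proof

def verify(x, y, T, N, proof):
    cx, cy = x, y
    for mu in proof:
        r = challenge(cx, cy, mu, T)
        cx = pow(cx, r, N) * mu % N
        cy = pow(mu, r, N) * cy % N
        T //= 2
    return T == 1 and cy == cx * cx % N  # base case: one squaring

N = 10007 * 10009                        # toy modulus (real use needs an RSA modulus)
x, T = 5, 1 << 10
y, pi = prove(x, T, N)
assert y == sqs(x, T, N)
assert verify(x, y, T, N, pi)
assert not verify(x, (y + 1) % N, T, N, pi)
```

The invariant is that $cy = cx^{2^T}$ survives each fold: if $\mu = cx^{2^{T/2}}$, then $(cx^r\mu)^{2^{T/2}} = \mu^r \cdot cx^{2^T} = \mu^r cy$.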
Efficient Evaluation of Low Degree Multivariate Polynomials in Ring-LWE Homomorphic Encryption Schemes
Homomorphic encryption schemes allow computations to be performed over encrypted data.
In schemes based on the RLWE assumption, the plaintext is a ring polynomial.
In many use cases of homomorphic encryption, only the degree-0 coefficient of this polynomial is used to encrypt data.
In this context, any computation on encrypted data can be performed.
Performing generic computations becomes trickier when more than one coefficient per ciphertext is used.
In this paper we introduce a method to efficiently evaluate low-degree multivariate polynomials over encrypted data.
The main idea is to encode several messages in the coefficients of a plaintext-space polynomial.
Using ring homomorphism operations and multiplications between ciphertexts, we compute multivariate monomials up to a given degree.
Afterwards, using ciphertext additions, we evaluate the input multivariate polynomial.
We perform extensive experiments with the proposed evaluation method.
As an example, evaluating an arbitrary degree-3 multivariate polynomial with 100 variables over the Boolean space takes under 13 seconds.
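The coefficient-packing idea can be illustrated with plain (unencrypted) polynomial arithmetic; the two-slot layout below is a simplified assumption, not the paper's exact encoding:

```python
# Plain-polynomial sketch of coefficient packing: one polynomial
# multiplication produces all degree-2 monomials of the packed messages,
# mirroring what a single ciphertext-ciphertext multiplication does.
def poly_mul(a, b):
    """Schoolbook product of coefficient lists (lowest degree first)."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

# Pack two messages per plaintext: x encodes (x0, x1), y encodes (y0, y1).
x = [3, 5]
y = [2, 7]
prod = poly_mul(x, y)
# Monomials land at fixed coefficients: x0*y0 at position 0, x1*y1 at
# position 2; the middle coefficient mixes the cross terms x0*y1 + x1*y0.
assert prod[0] == 3 * 2 and prod[2] == 5 * 7
```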
Better Than Advertised: Improved Collision-Resistance Guarantees for MD-Based Hash Functions
The MD transform that underlies the MD and SHA families iterates a compression function $\mathsf{h}$ to get a hash function $\mathsf{H}$. The question we ask is, what property X of $\mathsf{h}$ guarantees collision resistance (CR) of $\mathsf{H}$? The classical answer is that X itself be CR. We show that weaker conditions X, in particular forms of what we call constrained-CR, suffice. This reduces demands on compression functions, to the benefit of security, and also, forensically, explains why collision-finding attacks on compression functions have not, historically, led to immediate breaks of the corresponding hash functions. We obtain our results via a definitional framework called RS security, and a parameterized treatment of MD, that also serve to unify prior work and variants of the transform.
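The MD transform itself is easy to sketch; the compression function below is a toy stand-in for $\mathsf{h}$ (a real instantiation would use, e.g., SHA's compression function):

```python
# Minimal sketch of the MD transform: iterate a compression function h over
# fixed-size blocks, starting from an IV. The toy_h here is illustrative
# only and is certainly not collision resistant.
def toy_h(chain, block):
    """Toy 32-bit compression function (placeholder for a real h)."""
    v = chain
    for byte in block:
        v = (v * 31 + byte) & 0xFFFFFFFF
    return v

def md_hash(msg, block_size=8, iv=0x12345678):
    # MD strengthening: zero-pad to a block boundary, then append a block
    # encoding the message length.
    padded = msg + b"\x00" * ((-len(msg)) % block_size)
    padded += len(msg).to_bytes(block_size, "big")
    chain = iv
    for i in range(0, len(padded), block_size):
        chain = toy_h(chain, padded[i:i + block_size])
    return chain
```

A collision in `toy_h` does not automatically yield a collision in `md_hash`, which is the gap between compression-function CR and hash-function CR that the paper's constrained-CR notions formalize.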
Formal Analysis of Vote Privacy using Computationally Complete Symbolic Attacker
We analyze the FOO electronic voting protocol in the provable security model using the technique of the Computationally Complete Symbolic Attacker (CCSA). The protocol uses commitments, blind signatures and anonymous channels to achieve vote privacy. Unlike the Dolev-Yao analyses of the protocol, we assume neither perfect cryptography nor the existence of perfectly anonymous channels. Our analysis reveals new attacks on vote privacy, including an attack that arises due to the inadequacy of the blindness property of blind signatures and not due to a specific implementation of anonymous channels. With additional assumptions and modifications, we were able to show that the protocol satisfies vote privacy. Our techniques demonstrate the effectiveness of the CCSA technique for both attack detection and verification.
Efficient verifiable delay functions
We construct a verifiable delay function (VDF). A VDF is a function whose evaluation requires running a given number of sequential steps, yet whose result can be efficiently verified. VDFs have applications in decentralised systems, such as the generation of trustworthy public randomness in a trustless environment, or resource-efficient blockchains. To construct our VDF, we actually build a trapdoor VDF. A trapdoor VDF is essentially a VDF which can be evaluated efficiently by parties who know a secret (the trapdoor). By setting up this scheme so that the trapdoor is unknown even to the party running the setup (removing the need for a trusted setup environment), we obtain a simple VDF. Our construction is based on groups of unknown order such as an RSA group, or the class group of an imaginary quadratic field. The output of our construction is very short (the result and the proof of correctness are each a single group element), and the verification of correctness is very efficient.
New techniques for Multi-value input Homomorphic Evaluation and Applications
In this paper, we propose a new technique to perform several homomorphic operations in one bootstrapping call over a multi-value plaintext space. Our construction relies on FHEW-based gate bootstrapping; we analyze its structure and propose a strategy we call multi-value bootstrapping, which allows bootstrapping an arbitrary function in an efficient way.
The security of our scheme relies on the LWE assumption over the torus. We give three possible applications: we first describe how to efficiently evaluate an arbitrary Boolean function (LUT) and combine LUTs in circuits. We also explain how to apply our procedure to optimize the circuit bootstrapping from Asiacrypt 2017, which allows composing circuits in a leveled mode. Finally, we present a simple method which makes use of the multi-value bootstrapping to evaluate an encrypted neural network.
We have implemented the proposed method and were able to evaluate an arbitrary 6-to-6 LUT in under 1.6 seconds. Our implementation is based on the TFHE library but can easily be integrated into other homomorphic libraries based on the same structure, such as FHEW (Eurocrypt 2015). The number of LUT outputs has little influence on the execution time; e.g., evaluating an additional 128 outputs on the same 6 input bits takes only 0.05 seconds more.
Cache-Attacks on the ARM TrustZone implementations of AES-256 and AES-256-GCM via GPU-based analysis
The ARM TrustZone is a security extension which is used in recent Samsung flagship smartphones to create a Trusted Execution Environment (TEE) called a Secure World, which runs secure processes (Trustlets). The Samsung TEE includes cryptographic key storage and functions inside the Keymaster trustlet.
The secret key used by the Keymaster trustlet is derived by a hardware device and is inaccessible to the Android OS. However, the ARM32 AES implementation used by the Keymaster is vulnerable to side-channel cache attacks.
The Keymaster trustlet uses AES-256 in GCM mode, which makes mounting a cache attack against this target much harder. In this paper we show that it is possible to perform a successful cache attack against this AES implementation, in AES-256/GCM mode, using widely available hardware. Using a laptop's GPU to parallelize the analysis, we are able to extract a raw AES-256 key with 7 minutes of measurements and under a minute of analysis time and an AES-256/GCM key with 40 minutes of measurements and 30 minutes of analysis.
STELLAR: A Generic EM Side-Channel Attack Protection through Ground-Up Root-cause Analysis
The threat of side channels is becoming increasingly prominent for resource-constrained internet-connected devices. While numerous power side-channel countermeasures have been proposed, promising approaches to protect against non-invasive electromagnetic (EM) side-channel attacks have been relatively scarce. Today's availability of high-resolution EM probes mandates a low-overhead solution to protect against EM side-channel analysis (SCA) attacks. This work, for the first time, performs a white-box analysis to root-cause the origin of the EM leakage from an integrated circuit. System-level EM simulations with the Intel 32 nm CMOS technology interconnect stack, as an example, reveal that the EM leakage from metals above layer 8 can be detected by an external non-invasive attacker with commercially available state-of-the-art EM probes. Equipped with this `white-box' understanding, this work proposes \textit{STELLAR}: Signature aTtenuation Embedded CRYPTO with Low-Level metAl Routing, a two-stage solution to eliminate the critical signal radiation from the higher-level metal layers. Firstly, we propose routing the entire cryptographic core's power traces using the local lower-level metal layers, whose leakage cannot be picked up by an external attacker. Then, the entire crypto IP is embedded within a Signature Attenuation Hardware (SAH), which in turn suppresses the critical encryption signature before routing the current signature to the highly radiating top-level metal layers. System-level implementation of the STELLAR hardware with local lower-level metal routing in TSMC 65 nm CMOS technology, with an AES-128 encryption engine (as an example cryptographic block) operating at 40 MHz, shows that the system remains secure against EM SCA attacks even after $1M$ encryptions, with $67\%$ energy efficiency and $1.23\times$ area overhead compared to the unprotected AES.
Is there an Oblivious RAM Lower Bound for Online Reads?
Oblivious RAM (ORAM), introduced by Goldreich and Ostrovsky (JACM 1996), can be used to read and write to memory in a way that hides which locations are being accessed. The best known ORAM schemes have an $O(\log n)$ overhead per access, where $n$ is the data size. The work of Goldreich and Ostrovsky gave a lower bound showing that this is optimal for ORAM schemes that operate in a ``balls and bins'' model, where memory blocks can only be shuffled between different locations but not manipulated otherwise. The lower bound even extends to weaker settings such as offline ORAM, where all of the accesses to be performed need to be specified ahead of time, and read-only ORAM, which only allows reads but not writes. But can we get lower bounds for general ORAM, beyond ``balls and bins''?
The work of Boyle and Naor (ITCS '16) shows that this is unlikely in the offline setting. In particular, they construct an offline ORAM with $o(\log n)$ overhead assuming the existence of small sorting circuits. Although we do not have instantiations of the latter, ruling them out would require proving new circuit lower bounds. On the other hand, the recent work of Larsen and Nielsen (CRYPTO '18) shows that there indeed is an $\Omega(\log n)$ lower bound for general online ORAM.
This still leaves the question open for online read-only ORAM or for read/write ORAM where we want very small overhead for the read operations. In this work, we show that a lower bound in these settings is also unlikely. In particular, our main result is a construction of online ORAM where reads (but not writes) have an $o(\log n)$ overhead, assuming the existence of small sorting circuits as well as very good locally decodable codes (LDCs). Although we do not have instantiations of either of these with the required parameters, ruling them out is beyond current lower bounds.
On some methods for constructing almost optimal S-Boxes and their resilience against side-channel attacks
Substitution Boxes (S-Boxes) are crucial components in the design of many symmetric ciphers. The security of these ciphers against linear, differential, and algebraic cryptanalyses and side-channel attacks is therefore strongly dependent on the choice of the S-Boxes. Constructing S-Boxes with good resistance both to classical cryptanalysis and to side-channel attacks is not a trivial task. In this article we propose new methods for generating S-Boxes with strong cryptographic properties and study the resilience of such S-Boxes against side-channel attacks in terms of theoretical metrics and masking possibility.
Two Notions of Differential Equivalence on Sboxes
In this work, we discuss two notions of differential equivalence on Sboxes.
First, we introduce the notion of DDT-equivalence which applies to vectorial Boolean functions that share the same difference distribution table (DDT).
Next, we compare this notion to what we call the $\gamma$-equivalence, applying to vectorial Boolean functions whose DDTs have the same support.
We discuss the relation between these two equivalence notions, demonstrate that the number of DDT- or $\gamma$-equivalent functions is invariant under EA- and CCZ-equivalence and provide an algorithm for computing the DDT-equivalence and the $\gamma$-equivalence classes of a given function.
We study the sizes of these classes for some families of Sboxes.
Finally, we prove a result that shows that the rows of the DDT of an APN permutation are pairwise distinct.
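A DDT as discussed above is straightforward to compute; the sketch below uses the 4-bit PRESENT Sbox as a worked example (an illustration, not taken from this paper):

```python
# Computing the difference distribution table (DDT) of a small Sbox:
# entry DDT[a][b] counts inputs x with S(x XOR a) XOR S(x) = b.
def ddt(sbox):
    n = len(sbox)
    table = [[0] * n for _ in range(n)]
    for a in range(n):
        for x in range(n):
            table[a][sbox[x ^ a] ^ sbox[x]] += 1
    return table

# 4-bit example: the PRESENT Sbox.
S = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]
T = ddt(S)
assert T[0][0] == 16                                # zero difference is deterministic
assert all(v % 2 == 0 for row in T for v in row)    # x and x^a contribute in pairs
assert max(v for row in T[1:] for v in row) == 4    # PRESENT is differentially 4-uniform
```

Two Sboxes are DDT-equivalent when this whole table coincides, and $\gamma$-equivalent when only the pattern of zero/nonzero entries coincides.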
Matrioska: A Compiler for Multi-Key Homomorphic Signatures
Multi-Key Homomorphic Signatures (MKHS) enable clients in a system to sign and upload messages to an untrusted server. At any later point in time, the server can perform a computation $C$ on data provided by $t$ different clients, and return the output $y$ together with a short signature $\sigma_{C, y}$ vouching for the correctness of $y$ as the output of $C$ on the signed data. Interestingly, MKHS enable verifiers to check the validity of the signature using solely the public keys of the signers whose messages were used in the computation. Moreover, the signatures $\sigma_{C, y}$ are succinct, namely their size depends at most linearly on the number of clients, and only logarithmically on the total number of inputs of $C$. Existing MKHS are constructed based either on standard assumptions over lattices (Fiore et al., ASIACRYPT'16), or on non-falsifiable assumptions (SNARKs) (Lai et al., ePrint'16).
In this paper, we investigate connections between single-key and multi-key homomorphic signatures. We propose a generic compiler, called Matrioska, which turns any (sufficiently expressive) single-key homomorphic signature scheme into a multi-key scheme. Matrioska establishes a formal connection between these two primitives and is the first alternative to the only known construction under standard falsifiable assumptions. Our result relies on a novel technique that exploits the homomorphic property of a single-key HS scheme to compress an arbitrary number of signatures from $t$ different users into only $t$ signatures.
Indistinguishability Obfuscation Without Multilinear Maps: iO from LWE, Bilinear Maps, and Weak Pseudorandomness
The existence of secure indistinguishability obfuscators (iO) has far-reaching implications, significantly expanding the scope of problems amenable to cryptographic study. All known approaches to
constructing iO rely on $d$-linear maps which allow the encoding of elements from a large domain, evaluating degree $d$ polynomials on them, and testing if the output is zero. While secure bilinear maps are well established in cryptographic literature, the security of candidates for $d>2$ is poorly understood.
We propose a new approach to constructing iO for general circuits. Unlike all previously known realizations of iO, we avoid the use of $d$-linear maps of degree $d \ge 3$.
At the heart of our approach is the assumption that a new weak pseudorandom object exists, that we call a perturbation resilient generator ($\Delta\mathsf{RG}$). Informally, a $\Delta\mathsf{RG}$ maps $n$ integers to $m$ integers, and has the property that for any sufficiently short vector $a \in \mathbb{Z}^m$, all efficient adversaries must fail to distinguish the distributions $\Delta\mathsf{RG}(s)$ and ($\Delta\mathsf{RG}(s)+a$), with at least some probability that is inverse polynomial in the security parameter. $\Delta\mathsf{RG}$s have further implementability requirements; most notably they must be computable by a family of degree-3 polynomials over $\mathbb{Z}$. We use techniques building upon the Dense Model Theorem to deal with adversaries that have nontrivial but non-overwhelming distinguishing advantage. In particular, we obtain a new security amplification theorem for functional encryption.
As a result, we obtain iO for general circuits assuming:
\begin{itemize}
\item Subexponentially secure LWE
\item Bilinear Maps
\item $\mathrm{poly}(\lambda)$-secure 3-block-local PRGs
\item $(1-1/\mathrm{poly}(\lambda))$-secure $\Delta\mathsf{RG}$s
\end{itemize}
A Note on Key Rank
In recent years key rank has become an important aspect of side-channel analysis, enabling an evaluation lab to analyse the security of a device after a side-channel attack. In particular, it enables the lab to do so when the enumeration effort would be beyond their computing power. Due to its importance there has been a host of work investigating key rank over the last few years. In this work we build upon the existing literature to make progress on understanding various properties of key rank. We begin by showing when two different "scoring methods" will provide the same rank. This has been implicitly used by various algorithms in the past, but here it is shown for a large class of functions. We conclude by giving the computational complexity of key rank, which implies that considerably better algorithms are unlikely to exist.
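A toy instance of a rank-preserving change of scoring method (the scores below are hypothetical, and the paper treats a far larger class of functions and the multi-subkey case):

```python
# Sketch: applying a strictly increasing function to every candidate's
# score cannot change the rank of the correct key, since it preserves
# all pairwise comparisons between candidates.
import math

scores = {0x3A: 0.91, 0x7F: 0.40, 0xC2: 0.65, 0x11: 0.05}  # hypothetical
true_key = 0xC2

def rank(score_map, key):
    # One plus the number of candidates scoring strictly higher.
    return 1 + sum(1 for v in score_map.values() if v > score_map[key])

transformed = {k: math.log(v) for k, v in scores.items()}  # monotone map
assert rank(scores, true_key) == rank(transformed, true_key)
```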
One-Message Zero Knowledge and Non-Malleable Commitments
We introduce a new notion of one-message zero-knowledge (1ZK) arguments that satisfy a weak soundness guarantee — the number of false statements that a polynomial-time non-uniform adversary can convince the verifier to accept is not much larger than the size of its non-uniform advice. The zero-knowledge guarantee is given by a simulator that runs in (mildly) super-polynomial time.
We construct such 1ZK arguments based on the notion of multi-collision-resistant keyless hash functions, recently introduced by Bitansky, Kalai, and Paneth (STOC 2018). Relying on the constructed
1ZK arguments, subexponentially-secure time-lock puzzles, and other standard assumptions, we construct one-message fully-concurrent non-malleable commitments. This is the first construction that is based on assumptions that do not already incorporate non-malleability, as well as the first based on (subexponentially) falsifiable assumptions.
Burning Zerocoins for Fun and for Profit: A Cryptographic Denial-of-Spending Attack on the Zerocoin Protocol
Zerocoin (Miers et al., IEEE S&P’13), designed as an extension to Bitcoin and similar cryptocurrencies, was the first anonymous cryptocurrency proposal to support large anonymity sets. We identify a cryptographic denial-of-spending attack on the original Zerocoin protocol and on a second Zerocoin protocol (Groth and Kohlweiss, EUROCRYPT’15), which enables a network attacker to destroy the money of honest users. The attack leads to real-world vulnerabilities in multiple cryptocurrencies which rely on implementations of the original Zerocoin protocol.
The existence of the attack does not contradict the formal security analyses of the two Zerocoin protocols, but exposes an important property missing from the security model of Zerocoin. While the security definitions model that the attacker should not be able to create money out of thin air or steal money from honest users, they do not model that the attacker cannot destroy the money of honest users. Fortunately, there are simple fixes for the security model and for both protocols.
Is Java Card ready for hash-based signatures?
The current Java Card platform does not seem to allow for fast implementations of hash-based signature schemes. While the underlying implementation of the cryptographic primitives provided by the API can be fast, thanks to implementations in native code or in hardware, the cumulative overhead of the many separate API calls results in prohibitive performance for many common applications. In this work, we present an implementation of XMSS$^{MT}$ on the current Java Card platform, and make suggestions how to improve this platform in future versions.
Hierarchical Attribute-based Signatures
Attribute-based Signatures (ABS) are a powerful tool allowing users with attributes issued by authorities to sign messages while also proving that their attributes satisfy some policy. ABS schemes provide a flexible and privacy-preserving approach to authentication since the signer's identity and attributes remain hidden within the anonymity set of users sharing policy-conform attributes. Current ABS schemes exhibit some limitations when it comes to the management and issue of attributes. In this paper we address the lack of support for hierarchical attribute management, a property that is prevalent in traditional PKIs where certification authorities are organised into hierarchies and signatures are verified along roots of trust.
Hierarchical Attribute-based Signatures (HABS) introduced in this work support delegation of attributes along paths from the top-level authority down to the users while also ensuring that signatures produced by these users do not leak their delegation paths, thus extending the original privacy guarantees of ABS schemes. Our generic HABS construction also ensures unforgeability of signatures in the presence of collusion attacks and contains an extended traceability property allowing a dedicated tracing authority to identify the signer and reveal its attribute delegation paths. We include a public verification procedure for the accountability of the tracing authority.
We anticipate that HABS will be useful for privacy-preserving authentication in applications requiring hierarchical delegation of attribute-issuing rights and where knowledge of delegation paths might leak information about signers and their attributes, e.g., in intelligent transport systems where vehicles may require certain attributes to authenticate themselves to the infrastructure but remain untrackable by the latter.
Improved Results on Factoring General RSA Moduli with Known Bits
We revisit the factoring with known bits problem on general RSA moduli in the forms of $N=p^r q^s$ for $r,s\ge 1$, where two primes $p$ and $q$ are of the same bit-size. The relevant moduli are inclusive of $pq$, $p^r q$ for $r>1$, and $p^r q^s$ for $r,s>1$, which are used in the standard RSA scheme and other RSA-type variants. Previous works acquired the results mainly by solving univariate modular equations.
In contrast, in this paper we investigate how to efficiently factor $N=p^r q^s$ with given leakage of the primes via the integer method, using lattice-based techniques. More precisely, factoring general RSA moduli with known most significant bits (MSBs) of the primes can be reduced to solving bivariate integer equations, an approach first proposed by Coppersmith to factor $N=pq$ with known high bits. Our results provide a unifying solution to the factoring-with-known-bits problem on general RSA moduli. Furthermore, we reveal that there exists an improved factoring attack via the integer method for particular RSA moduli such as $p^3 q^2$ and $p^5 q^3$.
Domain-specific Accelerators for Ideal Lattice-based Public Key Protocols
Post-quantum Lattice-Based Cryptography (LBC) schemes are increasingly gaining attention in traditional and emerging security problems, such as encryption, digital signatures, key exchange, and homomorphic encryption, to address the security needs of both short- and long-lived devices, owing to their foundational properties and ease of implementation. However, LBC schemes induce higher computational demand compared to classic schemes (e.g., DSA, ECDSA) for equivalent security guarantees, making domain-specific acceleration a viable option for improving security and favoring early adoption of LBC schemes by the semiconductor industry. In this paper, we present a workflow to explore the design space of domain-specific accelerators for LBC schemes, targeting a diverse set of host devices, from resource-constrained IoT devices to high-performance computing platforms. We present design-space exploration results on workloads executing the NewHope and BLISSB-I schemes accelerated by our domain-specific accelerators, with respect to a baseline without acceleration. We show that the achieved performance with acceleration makes the execution of NewHope and BLISSB-I comparable to classic key-exchange and digital-signature schemes while retaining some general-purpose programmability. In addition to 44% and 67% improvements in energy-delay product (EDP), we enhance the performance (cycles) of the sign and verify steps of BLISSB-I by 24% and 47%, respectively. The performance of the server and client sides of the NewHope key exchange is improved by 37% and 33% (52% and 48% in EDP), demonstrating the utility of the design-space exploration framework.
SEEMless: Secure End-to-End Encrypted Messaging with less trust
End-to-end encrypted messaging (E2E) is only secure if participants have a way to retrieve the correct public key for the desired recipient. However, to make these systems usable, users must be able to replace their keys (e.g. when they lose or reset their devices, or reinstall their app), and we cannot assume any cryptographic means of authenticating the new keys. In the current E2E systems, the service provider manages the directory of public keys of its registered users; this allows a compromised or coerced service provider to introduce their own keys and execute a man in the middle attack.
Building on the approach of CONIKS (Melara et al, USENIX Security `15), we formalize the notion of a Privacy-Preserving Verifiable Key Directory (VKD): a system which allows users to monitor the keys that the service is distributing on their behalf. We then propose a new VKD scheme which we call SEEMless, which improves on prior work in terms of privacy and scalability. In particular, our new approach allows key changes to take effect almost immediately; we show experimentally that our scheme easily supports delays less than a minute, in contrast to previous work which proposes a delay of one hour.
Continuously Non-Malleable Codes with Split-State Refresh
Non-malleable codes for the split-state model allow encoding a message into two parts, such that arbitrary independent tampering on each part, and subsequent decoding of the corresponding modified codeword, yields either the original message or a completely unrelated value. Continuously non-malleable codes further tolerate an unbounded (polynomial) number of tampering attempts, until a decoding error happens. The drawback is that, after an error happens, the system must self-destruct and stop working, as otherwise generic attacks become possible.
In this paper we propose a solution to this limitation by leveraging a split-state refreshing procedure. Namely, whenever a decoding error happens, the two parts of an encoding can be locally refreshed (i.e., without any interaction), which allows avoiding the self-destruct mechanism in some applications. Additionally, the refreshing procedure can be exploited to obtain security against continual leakage attacks. We give an abstract framework for building refreshable continuously non-malleable codes in the common reference string model, and provide a concrete instantiation based on the external Diffie-Hellman assumption.
Finally, we explore applications in which our notion turns out to be essential. The first application is a signature scheme tolerating an arbitrary polynomial number of split-state tampering attempts, without requiring a self-destruct capability, and in a model where refreshing of the memory happens only after an invalid output is produced. This circumvents an impossibility result from a recent work by Fujisaki and Xagawa (Asiacrypt 2016). The second application is a compiler for tamper-resilient read-only RAM programs. In comparison to other tamper-resilient RAM compilers, ours has several advantages, among them the fact that, in some cases, it does not rely on the self-destruct feature.
N-term Karatsuba Algorithm and its Application to Multiplier designs for Special Trinomials
In this paper, we propose a new type of non-recursive Mastrovito multiplier for $GF(2^m)$ using an $n$-term Karatsuba algorithm (KA), where $GF(2^m)$ is defined by an irreducible trinomial $x^m+x^k+1$, $m=nk$. We show that this type of trinomial combined with the $n$-term KA can fully exploit the spatial correlation of entries in the related Mastrovito product matrices and leads to a low-complexity architecture. The optimal parameter $n$ is further studied.
As the main contribution of this study, the lower bound on the space complexity of our proposal is about $O(\frac{m^2}{2}+m^{3/2})$. Meanwhile, the time complexity matches the best Karatsuba multiplier known to date. To the best of our knowledge, it is the first time that a Karatsuba-based multiplier has reached such a space complexity bound while maintaining a relatively low time delay.
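One recursion level of the classic 2-term Karatsuba algorithm over $GF(2)$, which the paper's $n$-term variant generalizes to $n$ limbs, can be sketched as follows (an illustration of the algorithmic idea, not the proposed Mastrovito hardware architecture):

```python
# Polynomials over GF(2) are represented as Python ints; coefficient
# addition is XOR, so there are no carries.
def clmul(a, b):
    """Carry-less (GF(2)[x]) schoolbook multiplication."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def karatsuba_gf2(a, b, half=8):
    """One level of 2-term Karatsuba: 3 half-size multiplications, not 4."""
    mask = (1 << half) - 1
    a0, a1 = a & mask, a >> half
    b0, b1 = b & mask, b >> half
    lo = clmul(a0, b0)
    hi = clmul(a1, b1)
    # Over GF(2), subtraction is XOR: mid = (a0+a1)(b0+b1) - lo - hi.
    mid = clmul(a0 ^ a1, b0 ^ b1) ^ lo ^ hi
    return lo ^ (mid << half) ^ (hi << (2 * half))

assert karatsuba_gf2(0xBEEF, 0x1234) == clmul(0xBEEF, 0x1234)
```

The saved multiplication is the source of the sub-quadratic space complexity that the paper's Mastrovito construction exploits.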
Attack on Kayawood Protocol: Uncloaking Private Keys
We analyze security properties of a two-party key-agreement protocol recently proposed by I. Anshel, D. Atkins, D. Goldfeld, and P. Gunnels, called Kayawood protocol. At the core of the protocol is an action (called E-multiplication) of a braid group on some finite set. The protocol assigns a secret element of a braid group to each party (private key). To disguise those elements, the protocol uses a so-called cloaking method that multiplies private keys on the left and on the right by specially designed elements (stabilizers for E-multiplication).
We present a heuristic algorithm that allows a passive eavesdropper to recover Alice's private key by removing cloaking elements. Our attack has 100% success rate on randomly generated instances of the protocol for the originally proposed parameter values and for recent proposals that suggest to insert many cloaking elements at random positions of the private key. Our implementation of the attack is available on GitHub.
Actively Secure OT-Extension from q-ary Linear Codes
We consider recent constructions of $1$-out-of-$N$ OT-extension by Kolesnikov and Kumaresan (CRYPTO 2013) and by Orrù et al. (CT-RSA 2017), based on binary error-correcting codes. We generalize their constructions so that $q$-ary codes can be used for any prime power $q$. This makes it possible to reduce the number of base $1$-out-of-$2$ OTs needed to instantiate the construction for any value of $N$, at the cost of increasing the complexity of the remaining part of the protocol. We analyze these trade-offs in some concrete cases.
On the Universally Composable Security of OpenStack
OpenStack is the prevalent open-source, non-proprietary package for managing cloud services and data centers. It is highly complex and consists of multiple inter-related components which are developed by separate, loosely coordinated groups. We initiate an effort to provide a rigorous and holistic security analysis of OpenStack. Our analysis has the following key features:
-It is user-centric: It stresses the security guarantees given to users of the system, in terms of privacy, correctness, and timeliness of the services.
-It provides defense in depth: It considers the security of OpenStack even when some of the components are compromised. This departs from the traditional design approach of OpenStack, which assumes that all services are fully trusted.
-It is modular: It formulates security properties for individual components and uses them to assert security properties of the overall system.
We base our modeling and security analysis in the universally composable (UC) security framework, which has been so far used mainly for analyzing security of cryptographic protocols. Indeed, demonstrating how the UC framework can be used to argue about security-sensitive systems which are mostly non-cryptographic in nature is another main contribution of this work.
Our analysis covers only a number of core components of OpenStack. Still, it uncovers some basic and important security trade-offs in the design. It also naturally paves the way to a more comprehensive analysis of OpenStack.
Verifiable Delay Functions
We study the problem of building a verifiable delay function (VDF). A VDF requires a specified number of sequential steps to evaluate, yet produces a unique output that can be efficiently and publicly verified. VDFs have many applications in decentralized systems, including public randomness beacons, leader election in consensus protocols, and proofs of replication. We formalize the requirements for VDFs and present new candidate constructions that are the first to achieve an exponential gap between evaluation and verification time.
GRANULE: An ultra-lightweight cipher design for embedded security
In this paper we propose GRANULE, an ultra-lightweight cipher based on a Feistel network that encrypts 64-bit blocks with an 80- or 128-bit key. GRANULE requires much less memory than existing lightweight ciphers: 1288 GEs for the 80-bit and 1577 GEs for the 128-bit key size. It shows good resistance against linear and differential cryptanalysis, has a very small footprint area, and provides a robust design that thwarts attacks such as the biclique, zero-correlation, meet-in-the-middle, key-schedule, and key-collision attacks. GRANULE has a strong S-box, which is the key design aspect of any cipher. We propose GRANULE with 32 rounds, which are enough to provide resistance against all possible types of attacks. GRANULE also consumes much less power than other modern lightweight ciphers. We believe GRANULE is well suited to providing robust security in applications such as the IoT.
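As a rough illustration of the structure described above, here is a generic 64-bit, 32-round balanced Feistel skeleton in Python. The S-box (borrowed from PRESENT), round function, and key schedule are placeholders, not the actual GRANULE specification:

```python
# Generic 64-bit balanced Feistel cipher skeleton, 32 rounds.
# S-box, round function F, and key schedule are illustrative placeholders only.

SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]   # PRESENT's S-box, for illustration

def sub_nibbles(x32):
    """Apply the 4-bit S-box to each nibble of a 32-bit word."""
    return sum(SBOX[(x32 >> (4 * i)) & 0xF] << (4 * i) for i in range(8))

def F(half, rk):
    """Placeholder round function: key mix, S-layer, rotate left by 3."""
    y = sub_nibbles(half ^ rk)
    return ((y << 3) | (y >> 29)) & 0xFFFFFFFF

def encrypt(block64, round_keys):
    L, R = block64 >> 32, block64 & 0xFFFFFFFF
    for rk in round_keys:
        L, R = R, L ^ F(R, rk)        # one Feistel round
    return (L << 32) | R

def decrypt(block64, round_keys):
    L, R = block64 >> 32, block64 & 0xFFFFFFFF
    for rk in reversed(round_keys):
        L, R = R ^ F(L, rk), L        # inverse round
    return (L << 32) | R

keys = [(0x9E3779B9 * (i + 1)) & 0xFFFFFFFF for i in range(32)]  # toy key schedule
pt = 0x0123456789ABCDEF
assert decrypt(encrypt(pt, keys), keys) == pt
```

Note that a Feistel network decrypts with the same round function applied with the round keys in reverse order, which is one reason the structure is attractive for low-area hardware.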
CHQS: Publicly Verifiable Homomorphic Signatures Beyond the Linear Case
Sensitive data is often outsourced to cloud servers, with the server performing computation on the data. Computational correctness must be efficiently verifiable by a third party while the input data remains confidential. This paper introduces CHQS, a homomorphic signature scheme from bilinear groups fulfilling these requirements. CHQS is the first such scheme to be both context hiding and publicly verifiable for arithmetic circuits of degree two. It also achieves amortized efficiency: after a precomputation, verification can be faster than the evaluation of the circuit itself.
Trends in design of ransomware viruses
The ransomware nightmare is taking over the internet, impacting common users, small businesses, and large ones. The interest and investment pushed into this market each month tell us a few things about the evolution of both the technical and the social-engineering sides, and what to expect from them in the near future. In this paper we analyze how ransomware programs developed in the last few years and how they were released into certain market segments throughout the deep web via RaaS, exploits, or SPAM, while learning from their own mistakes to bring profit to the next level. We also highlight some mistakes that were made which allowed recovering the encrypted data, the ransomware authors' preferences for specific encryption types, how they came to be distributed, the silent agreement between ransomware, coin-miners, and botnets, and some edge cases of encryption which may prove to be exploitable in the near future.
Consolidating Security Notions in Hardware Masking
In this paper, we revisit the security conditions of masked hardware implementations. We describe a new, succinct, information-theoretic condition called d-glitch immunity which is both necessary and sufficient for security in the presence of glitches. We show that this single condition includes, but is not limited to, previous security notions such as those used in higher-order threshold implementations and in abstractions using ideal gates. As opposed to these previously known necessary conditions, our new condition is also sufficient. On the other hand, it excludes avoidable notions such as uniformity. We also treat the notion of (strong) non-interference from an information-theoretic point-of-view in order to unify the different security concepts and pave the way to the verification of composability in the presence of glitches. We conclude the paper by demonstrating how the condition can be used as an efficient and highly generic flaw detection mechanism for a variety of functions and schemes based on different operations.
Continuous NMC Secure Against Permutations and Overwrites, with Applications to CCA Secure Commitments
Non-Malleable Codes (NMC) were introduced by Dziembowski, Pietrzak and Wichs in ICS 2010 as a relaxation of error correcting codes and error detecting codes. Faust, Mukherjee, Nielsen, and Venturi in TCC 2014 introduced an even stronger notion of non-malleable codes called continuous non-malleable codes where security is achieved against continuous tampering of a single codeword without re-encoding. We construct information theoretically secure CNMC resilient to bit permutations and overwrites, this is the first Continuous NMC constructed outside of the split-state model. In this work we also study relations between the CNMC and parallel CCA commitments. We show that the CNMC can be used to bootstrap a self-destruct parallel CCA bit commitment to a self-destruct parallel CCA string commitment, where self-destruct parallel CCA is a weak form of parallel CCA security. Then we can get rid of the self-destruct limitation obtaining a parallel CCA commitment, requiring only one-way functions.
Last updated: 2018-07-17
Randomness analysis for multiple-recursive matrix generator
Randomness testing of binary sequences generated by any keystream generator is of paramount importance to both designer and attacker. Here we consider a word-oriented keystream generator known as the multiple-recursive matrix generator, which was introduced by Niederreiter (1993). Using the NIST statistical test suite as well as the DieHarder statistical package, we analyze the randomness properties of binary sequences generated by the multiple-recursive matrix generator and show that these sequences are not really adequate for any cryptographic application. We also propose a nonlinearly filtered multiple-recursive matrix generator based on a special nonlinear function and establish that sequences generated by such a nonlinearly filtered multiple-recursive matrix generator provide good randomness results. Moreover, we compare our randomness test results with some of the recent lightweight stream ciphers like Lizard, Fruit, and Plantlet.
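For context, a multiple-recursive matrix generator produces words by a linear matrix recursion over GF(2), roughly s_n = A_1*s_{n-1} + ... + A_k*s_{n-k} with w-bit word states and w x w binary matrices. A toy sketch follows; the word size, matrices, and seed below are arbitrary illustrations, not Niederreiter's parameters:

```python
# Toy multiple-recursive matrix generator over GF(2).
# Words are w-bit ints; a w x w binary matrix is a list of w row bitmasks.

W = 8  # word size in bits (illustrative)

def matvec(rows, v):
    """Multiply a w x w GF(2) matrix (list of row bitmasks) by a w-bit vector."""
    out = 0
    for i, row in enumerate(rows):
        out |= (bin(row & v).count("1") & 1) << i   # bit i = parity of row . v
    return out

def mrmg(matrices, seed, nwords):
    """Yield nwords outputs from the order-k recursion s_n = sum A_j s_{n-j}."""
    state = list(seed)                # s_{n-k}, ..., s_{n-1}
    out = []
    for _ in range(nwords):
        s = 0
        for A, prev in zip(matrices, reversed(state)):
            s ^= matvec(A, prev)      # addition over GF(2) is XOR
        out.append(s)
        state = state[1:] + [s]
    return out

A1 = [0b10010110, 0b01101001, 0b11100010, 0b00011101,
      0b10100101, 0b01011010, 0b11001100, 0b00110011]
A2 = [(1 << i) | (1 << ((i + 3) % W)) for i in range(W)]   # another toy matrix
stream = mrmg([A1, A2], [0x3C, 0xA5], 16)
assert len(stream) == 16 and all(0 <= s < (1 << W) for s in stream)
```

The linearity visible here is precisely why raw outputs of such generators fail statistical and cryptographic scrutiny, motivating the nonlinear filtering the abstract proposes.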
Cryptanalysis of SFN Block Cipher
SFN is a lightweight block cipher designed to be compact in hardware environments and also efficient on software platforms. Compared to conventional block ciphers, which are either Feistel- or Substitution-Permutation (SP) network based, SFN uses a different encryption method that combines the SP network structure and the Feistel network structure.
SFN supports a key length of 96 bits and its block length is 64 bits. In this paper, we propose an attack on full SFN by using a related-key distinguisher. With this attack, we are able to recover the keys with a time complexity of $2^{60.58}$ encryptions. The data and memory complexities of the attack are negligible. In addition, in the single-key mode, we present a meet-in-the-middle attack against the full-round block cipher for which the time complexity is $2^{80}$ SFN computations and the memory complexity is $2^{87}$ bytes. The data complexity of this attack is only a single known plaintext and its corresponding ciphertext.
Ramanujan graphs in cryptography
In this paper we study the security of a proposal for Post-Quantum Cryptography from both a number theoretic and a cryptographic perspective. Charles-Goren-Lauter in 2006 proposed two hash functions based on the hardness of finding paths in Ramanujan graphs. One is based on Lubotzky--Phillips--Sarnak (LPS) graphs and the other on Supersingular Isogeny Graphs. A 2008 paper by Petit-Lauter-Quisquater breaks the hash function based on LPS graphs. Regarding the Supersingular Isogeny Graphs proposal, recent work has continued to build cryptographic applications on the hardness of finding isogenies between supersingular elliptic curves. A 2011 paper by De Feo-Jao-Plût proposed a cryptographic system based on Supersingular Isogeny Diffie--Hellman as well as a set of five hard problems. In this paper we show that the security of the SIDH proposal relies on the hardness of the SIG path-finding problem introduced in [CGL06]. In addition, similarities between the number theoretic ingredients in the LPS and Pizer constructions suggest that the hardness of the path-finding problem in the two graphs may be linked. By viewing both graphs from a number theoretic perspective, we identify the similarities and differences between the Pizer and LPS graphs.
XS-circuits in Block Ciphers
XS-circuits describe block ciphers that utilize 2 operations: X) bitwise modulo 2 addition of binary words and S) substitution of words using key-dependent S-boxes with possibly complicated internal structure. We propose a model of XS-circuits which, despite the simplicity, covers a rather wide range of block ciphers. In our model, several instances of a simple round circuit, which contains only one S~operation, are linked together and form a compound circuit called a cascade. S operations of a cascade are interpreted as independent round oracles. We deal with diffusion characteristics of cascades. These characteristics are related to the cryptographic strength of corresponding block ciphers. We obtain results on invertibility, transitivity and 2-transitivity of mappings induced by round circuits and their cascades. We provide estimates on the first and second activation times where the i-th activation time is the minimum number of rounds which guarantees that at least i round oracles get different queries while processing two different cascade's inputs. The activation times are related to differential cryptanalysis. We introduce the similarity and duality relations between round circuits. Cascades of related circuits have the same or dual diffusion characteristics. We find canonical representatives of classes of similar circuits and show that the duality between circuits is related to duality between differential and linear attacks against corresponding block ciphers. We discuss families of circuits with growing number of inputs. Such families can be used to build wide-block ciphers.
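A minimal example of an XS-cascade: the classic Feistel round (x1, x2) -> (x2, x1 XOR S(x2)) uses one X and one S operation, and a cascade links several such rounds, with the S operations modeled as independent round oracles. The sketch below is illustrative and does not follow the paper's formalism in detail:

```python
# Toy XS-cascade: each round circuit uses only XOR (X) and a single
# key-dependent word substitution (S), here the classic Feistel circuit.
# The S operations of the cascade are modeled as independent random oracles.

import random

W = 16  # word size in bits

def random_oracle():
    """An independent key-dependent substitution on w-bit words."""
    tbl = list(range(1 << W))
    random.shuffle(tbl)
    return tbl

def cascade(x1, x2, oracles):
    for S in oracles:                    # one S operation per round circuit
        x1, x2 = x2, x1 ^ S[x2]
    return x1, x2

def cascade_inv(y1, y2, oracles):        # the round circuit is invertible
    for S in reversed(oracles):
        y1, y2 = y2 ^ S[y1], y1
    return y1, y2

random.seed(1)
oracles = [random_oracle() for _ in range(4)]
assert cascade_inv(*cascade(0x1234, 0xBEEF, oracles), oracles) == (0x1234, 0xBEEF)
```

In the paper's terms, two different cascade inputs eventually cause distinct queries to the round oracles; the activation times bound how many rounds this takes, which connects to differential cryptanalysis.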
4-bit crypto S-boxes: Generation with irreducible polynomials over Galois field GF(2^4) and cryptanalysis.
4-bit crypto S-boxes have played a significant role in the encryption and decryption of many cipher algorithms for the last four decades. The generation and cryptanalysis of 4-bit crypto S-boxes remains one of the major concerns of modern cryptography. In this paper, 48 4-bit crypto S-boxes are generated by adding each possible additive constant to every element of the S-box of multiplicative inverses of all elemental polynomials (EPs) under the concerned irreducible polynomials (IPs) over the Galois field GF(2^4). Cryptanalysis of the 48 generated S-boxes is carried out with all relevant cryptanalysis algorithms for 4-bit crypto S-boxes. The results show the better security of the generated 4-bit crypto S-boxes.
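The generation method can be sketched for a single irreducible polynomial: take the multiplicative inverse of every element of GF(2^4) under, say, x^4 + x + 1, then XOR an additive constant into each entry, giving one S-box per constant. The choice of IP here is just one of the possibilities the paper enumerates:

```python
# Generate 4-bit S-boxes from multiplicative inverses in GF(2^4) under the
# irreducible polynomial x^4 + x + 1, shifted by an additive constant.

IP = 0b10011  # x^4 + x + 1, one irreducible polynomial over GF(2)

def gf16_mul(a, b):
    """Carry-less multiply, then reduce modulo the irreducible polynomial."""
    r = 0
    for i in range(4):
        if (b >> i) & 1:
            r ^= a << i
    for i in range(7, 3, -1):            # clear degree-7..4 terms
        if (r >> i) & 1:
            r ^= IP << (i - 4)
    return r

def gf16_inv(a):
    if a == 0:
        return 0                         # 0 has no inverse; map it to 0 by convention
    return next(b for b in range(1, 16) if gf16_mul(a, b) == 1)

def sbox_with_constant(c):
    return [gf16_inv(x) ^ c for x in range(16)]

# Inversion is a permutation of GF(2^4), and XORing a constant preserves that,
# so every generated table is a valid bijective 4-bit S-box.
assert all(sorted(sbox_with_constant(c)) == list(range(16)) for c in range(16))
```

Repeating this over each irreducible polynomial and each constant yields families of candidate S-boxes of the kind the paper then cryptanalyzes.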
The Twin Conjugacy Search Problem and Applications
We propose a new computational problem over noncommutative groups, called the twin conjugacy search problem. This problem is related to the conjugacy search problem and can be used for almost all of the same cryptographic constructions that are based on the conjugacy search problem. At the same time, our new problem is at least as hard as the conjugacy search problem.
Moreover, the twin conjugacy search problem has many applications. As one of the most important, we propose a trapdoor test that can replace the function of a decision oracle. We also show other applications of the problem, including a non-interactive key exchange protocol, a key exchange protocol, and a new encryption scheme that is secure against chosen-ciphertext attack, with a very simple and tight security proof and short ciphertexts, under a weak assumption, in the random oracle model.
Implementation and Performance Evaluation of RNS Variants of the BFV Homomorphic Encryption Scheme
Homomorphic encryption is an emerging form of encryption that provides the ability to compute on encrypted data without ever decrypting them. Potential applications include aggregating sensitive encrypted data in a cloud environment and computing on the data in the cloud without compromising data privacy. There have been several recent advances resulting in new homomorphic encryption schemes and optimized variants. We implement and evaluate the performance of two optimized variants, namely Bajard-Eynard-Hasan-Zucca (BEHZ) and Halevi-Polyakov-Shoup (HPS), of the most promising homomorphic encryption scheme on CPU and GPU. The most interesting (and also unexpected) result of our performance evaluation is that the HPS variant in practice scales significantly better (typically by 15%-30%) with increasing multiplicative depth of the computation circuit than BEHZ, implying that the HPS variant will always outperform BEHZ for most practical applications. For a multiplicative depth of 98, our fastest GPU implementation performs homomorphic multiplication in 51 ms for 128-bit security settings, which is faster by two orders of magnitude than prior results and already practical for cloud environments supporting GPU computations. The large multiplicative depths supported by our implementations are required for applications involving deep neural networks, logistic regression learning, and other important machine learning problems.
BISEN: Efficient Boolean Searchable Symmetric Encryption with Verifiability and Minimal Leakage
The prevalence and availability of cloud infrastructures has made them the de facto solution for storing and archiving data, both for organizations and individual users. Nonetheless, the cloud's widespread adoption is still hindered by dependability and security concerns, particularly in applications with large data collections where efficient search and retrieval services are also major requirements. This leads to an increased tension between security, efficiency, and search expressiveness, which current state-of-the-art solutions try to balance through complex cryptographic protocols that trade off efficiency and expressiveness for near-optimal security.
In this paper we tackle this tension by proposing BISEN, a new provably secure boolean searchable symmetric encryption scheme that improves on these three complementary dimensions by exploring the design space of isolation guarantees offered by novel commodity hardware such as Intel SGX, abstracted as Isolated Execution Environments (IEEs). BISEN is the first scheme to enable highly expressive and arbitrarily complex boolean queries, with minimal information leakage regarding performed queries and accessed data, and verifiability against fully malicious adversaries. Furthermore, by exploiting trusted hardware and the IEE abstraction, BISEN reduces communication costs between the client and the cloud, boosting query execution performance. Experimental validation and comparison with the state of the art show that BISEN provides better performance with enriched search semantics and security properties.
Offline Witness Encryption from Witness PRF and Randomized Encoding in CRS model
Witness pseudorandom functions (witness PRFs) generate a pseudorandom value corresponding to an instance x of an NP language and the same pseudorandom value can be recomputed if a witness w that x is in the language is known. Zhandry (TCC 2016) introduced the idea of witness PRFs and gave a construction using multilinear maps. Witness PRFs can be interconnected with the recent powerful cryptographic primitive called witness encryption. In witness encryption, a message can be encrypted with respect to an instance x of an NP language and a decryptor that knows a witness w corresponding to the instance x can recover the message from the ciphertext. Mostly, witness encryption was constructed using obfuscation or multilinear maps.
In this work, we build (single relation) witness PRFs using a puncturable pseudorandom function and a randomized encoding in common reference string (CRS) model. Next, we propose construction of an offline witness encryption having short ciphertexts from a public-key encryption scheme, an extractable witness PRF and a randomized encoding in CRS model. Furthermore, we show how to convert our single relation witness PRF into a multi-relation witness PRF and the offline witness encryption into an offline functional witness encryption scheme.
Lower Bounds on Lattice Enumeration with Extreme Pruning
At Eurocrypt '10, Gama, Nguyen and Regev introduced lattice enumeration with extreme pruning: this algorithm is implemented in state-of-the-art lattice reduction software and used in challenge records. They showed that extreme pruning provided an exponential speed-up over full enumeration. However, no limit on its efficiency was known, which was problematic for long-term security estimates of lattice-based cryptosystems. We prove the first lower bounds on lattice enumeration with extreme pruning: if the success probability is lower bounded, we can lower bound the global running time taken by extreme pruning. Our results are based on geometric properties of cylinder intersections and some form of isoperimetry. We discuss their impact on lattice security estimates.
Polynomial Functional Encryption Scheme with Linear Ciphertext Size
In this paper, we suggest a new selectively secure functional encryption scheme for degree-$d$ polynomials. The number of ciphertexts for a message of length $\ell$ in our scheme is $O(\ell)$ regardless of $d$, while it is at least $\ell^{d/2}$ in previous works. Our main idea is to generically combine two abstract encryption schemes that satisfy some special properties. We also give an instantiation of our scheme by combining the ElGamal scheme and a Ring-LWE based homomorphic encryption scheme, whose ciphertext length is exactly $2\ell+1$ for any degree $d$.
Bounded Fully Homomorphic Encryption from Monoid Algebras
We present a new method that produces bounded FHE schemes (see Definition 3), starting with encryption schemes that support one algebraic operation. We use this technique to construct examples of encryption schemes that, theoretically, can handle any algebraic function on encrypted data.
Ring Homomorphic Encryption Schemes
We analyze the structure of commutative ring homomorphic encryption schemes and show that they are not quantum IND-CCA secure.
Pisa: Arbitration Outsourcing for State Channels
State channels are a leading approach for improving the scalability of blockchains and cryptocurrencies. They allow a group of distrustful parties to optimistically execute an application-defined program amongst themselves, while the blockchain serves as a backstop in case of a dispute or abort. This effectively bypasses the congestion, fees and performance constraints of the underlying blockchain in the typical case. However, state channels introduce a new and undesirable assumption that a party must remain online and synchronised with the blockchain at all times to defend against execution fork attacks. An execution fork can revert a state channel's history, potentially causing financial damage to a party that is innocent except for having crashed. To provide security even to parties that may go offline for an extended period of time, we present Pisa, a protocol that enables such parties to delegate to a third party, called the custodian, the task of cancelling execution forks on their behalf. To evaluate Pisa, we provide a proof-of-concept implementation for a simplified Sprites and we demonstrate that it is cost-efficient to deploy on the Ethereum network.
Smart contracts for bribing miners
We present three smart contracts that allow a briber to fairly exchange bribes to miners who pursue a mining strategy benefiting the briber. The first contract, CensorshipCon, highlights that Ethereum’s uncle block reward policy can directly subsidise the cost of bribing miners. The second contract, HistoryRevisionCon, rewards miners via an in-band payment for reversing transactions or enforcing a new state of another contract. The third contract, GoldfingerCon, rewards miners in one cryptocurrency for reducing the utility of another cryptocurrency. This work is motivated by the need to understand the extent to which smart contracts can impact the incentive mechanisms involved in Nakamoto-style consensus protocols.
Secure MPC: Laziness Leads to GOD
Motivated by what we call "honest but lazy" parties in the context of secure multi party computation, we revisit the notion of multi-key FHE schemes (MFHE). In MFHE, any message encrypted using a public key $pk_i$ can be "expanded" so that the resulting ciphertext is encrypted with respect to a set of public keys $(pk_1,..,pk_n)$. Such expanded ciphertexts can be homomorphically evaluated with respect to any circuit to generate a ciphertext $ct$. Then, this ciphertext $ct$ can be partially decrypted using a secret key $sk_i$ (corresponding to the public key $pk_i$) to produce a partial decryption $p_i$. Finally, these partial decryptions $\{p_{i}\}_{i\in [n]}$ can be combined to recover the output. However, this definition of MFHE works only for $n$-out-of-$n$ access structures and, thus, each node in the system is a point of failure. In the context of "honest but lazy" parties, it is necessary to be able to decrypt even when only given a subset of partial decryptions (say $t$ out of $n$). In order to solve this problem, we introduce a new notion of multi-key FHE designed to handle arbitrary access patterns that can reconstruct the output. We call it a threshold multi-key FHE scheme (TMFHE).
Our main contributions are the following:
We formally define and construct TMFHE for any access structure given by a monotone boolean formula, assuming LWE.
We construct the first simulation-extractable multi-string NIZK from polynomially hard LWE.
We use TMFHE and our multi-string NIZK to obtain the first round-optimal (three-round) MPC protocol in the plain model with guaranteed output delivery secure against malicious adversaries or, more generally, mixed adversaries (which supports "honest but lazy" parties), assuming LWE.
Our MPC protocols simultaneously achieve security against the maximum number of corruptions under which guaranteed output delivery is achievable, depth-proportional communication complexity, and reusability.
PIR-PSI: Scaling Private Contact Discovery
An important initialization step in many social-networking applications is contact discovery, which allows a user of the service to identify which of its existing social contacts also use the service. Naive approaches to contact discovery reveal a user's entire set of social/professional contacts to the service, presenting a significant tension between functionality and privacy.
In this work, we present a system for private contact discovery, in which the client learns only the intersection of its own contact list and a server's user database, and the server learns only the (approximate) size of the client's list. The protocol is specifically tailored to the case of a small client set and large user database. Our protocol has provable security guarantees and combines new ideas with state-of-the-art techniques from private information retrieval and private set intersection.
We report on a highly optimized prototype implementation of our system, which is practical on real-world set sizes. For example, contact discovery between a client with 1024 contacts and a server with 67 million user entries takes 1.36 sec (when using server multi-threading) and uses only 4.28 MiB of communication.
Optimizing Authenticated Garbling for Faster Secure Two-Party Computation
Wang et al. (CCS 2017) recently proposed a protocol for malicious secure two-party computation that represents the state-of-the-art with regard to concrete efficiency in both the single-execution and amortized settings, with or without preprocessing. We show here several optimizations of their protocol that result in a significant improvement in the overall communication and running time. Specifically:
- We show how to make the “authenticated garbling” at the heart of their protocol compatible with the half-gate optimization of Zahur et al. (Eurocrypt 2015). We also show how to avoid sending an information-theoretic MAC for each garbled row. These two optimizations give up to a 2.6x improvement in communication, and make the communication of the online phase essentially equivalent to that of state-of-the-art semi-honest secure computation.
- We show various optimizations to their protocol for generating AND triples that, overall, result in a 1.5x improvement in the communication and a 2x improvement in the computation for that step.
Fast Distributed RSA Key Generation for Semi-Honest and Malicious Adversaries
We present two new, highly efficient protocols for securely generating a distributed RSA key pair in the two-party setting. One protocol is semi-honestly secure and the other maliciously secure. Both are constant round, do not rely on any specific number-theoretic assumptions, and improve significantly over the state of the art by allowing a slight leakage (which we show does not affect security).
For our maliciously secure protocol our most significant improvement comes from executing most of the protocol in a ``strong'' semi-honest manner and then doing a single, light, zero-knowledge argument of correct execution. We introduce other significant improvements as well. One such improvement arrives in showing that certain, limited leakage does not compromise security, which allows us to use lightweight subprotocols. Another improvement, which may be of independent interest, comes in our approach for multiplying two large integers using OT, in the malicious setting, without being susceptible to a selective-failure attack.
Finally, we implement our malicious protocol and show that its performance is an order of magnitude better than the best previous protocol, which provided only semi-honest security.
Simpler Constructions of Asymmetric Primitives from Obfuscation
We revisit constructions of asymmetric primitives from obfuscation and give simpler alternatives. We consider public-key encryption, (hierarchical) identity-based encryption ((H)IBE), and predicate encryption. Obfuscation has already been shown to imply PKE by Sahai and Waters (STOC'14) and full-fledged functional encryption by Garg et al. (FOCS'13). We simplify all these constructions and reduce the necessary assumptions on the class of circuits that the obfuscator needs to support. Our PKE scheme relies on just a PRG and does not need any puncturing. Our IBE and bounded HIBE schemes convert natural key-delegation mechanisms from (recursive) applications of puncturable PRFs to IBE and HIBE schemes. Our most technical contribution is an unbounded HIBE, which uses (public-coin) differing-inputs obfuscation for circuits and whose proof relies on a recent pebbling-based hybrid argument by Fuchsbauer et al. (ASIACRYPT'14). All our constructions are anonymous, support arbitrary inputs, and have compact keys and ciphertexts.
An Algorithmic Framework for the Generalized Birthday Problem
The generalized birthday problem (GBP) was introduced by Wagner in 2002 and has been shown to have many applications in cryptanalysis. In its typical variant, we are given access to a function $H:\{0,1\}^{\ell} \rightarrow \{0,1\}^n$ (whose specification depends on the underlying problem) and an integer $K>0$. The goal is to find $K$ distinct inputs to $H$ (denoted by $\{x_i\}_{i=1}^{K}$) such that $\sum_{i=1}^{K}H(x_i) = 0$. Wagner's K-tree algorithm solves the problem in time and memory complexities of about $N^{1/(\lfloor \log K \rfloor + 1)}$ (where $N= 2^n$). Two important open problems raised by Wagner were (1) devise efficient time-memory tradeoffs for GBP, and (2) reduce the complexity of the K-tree algorithm for $K$ which is not a power of 2.
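Wagner's K-tree algorithm for $K=4$ can be sketched as follows: build four lists of hash values, merge pairs of lists on the low $n/3$ bits, then merge the results on the remaining bits. In the toy instance below, $H$ is a stand-in hash built from SHA-256 (not a specific GBP instance), and the list sizes carry a little slack over $N^{1/3}$ so solutions are reliably found:

```python
# Wagner's K-tree algorithm for K = 4: find x1..x4 with
# H(x1) ^ H(x2) ^ H(x3) ^ H(x4) = 0, in time and memory about N^(1/3).

import hashlib
from collections import defaultdict

n = 24                     # output bits; N = 2^n
l = n // 3                 # bits zeroed at the first merge level
size = 1 << (l + 2)        # list size ~ N^(1/3), with slack for this toy instance

def H(x):
    """Stand-in n-bit hash (illustrative, not a specific GBP instance)."""
    return int.from_bytes(hashlib.sha256(x.to_bytes(8, "big")).digest()[:3], "big")

def join(L1, L2, mask):
    """Keep XORs of pairs whose values agree on the masked bits."""
    buckets = defaultdict(list)
    for v, xs in L1:
        buckets[v & mask].append((v, xs))
    return [(v1 ^ v2, xs1 + xs2)
            for v2, xs2 in L2
            for v1, xs1 in buckets[v2 & mask]]

# Four lists of (hash value, [input]) pairs over disjoint input ranges.
L = [[(H(i + j * size), [i + j * size]) for i in range(size)] for j in range(4)]
L12 = join(L[0], L[1], (1 << l) - 1)      # low l bits of each XOR are now zero
L34 = join(L[2], L[3], (1 << l) - 1)
solutions = [xs for v, xs in join(L12, L34, (1 << n) - 1) if v == 0]
assert solutions
x1, x2, x3, x4 = solutions[0]
assert H(x1) ^ H(x2) ^ H(x3) ^ H(x4) == 0
```

The tree structure generalizes to larger powers of two, with $\lfloor \log K \rfloor$ merge levels each zeroing a further slice of bits, which is where the $N^{1/(\lfloor \log K \rfloor + 1)}$ complexity comes from.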
In this paper, we make progress in both directions. First, we improve the best known GBP time-memory tradeoff curve (published independently by Nikolić and Sasaki and also by Biryukov and Khovratovich) for all $K \geq 8$ from $T^2M^{\lfloor \log K \rfloor -1} = N$ to $T^{\lceil (\log K)/2 \rceil + 1 }M^{\lfloor (\log K)/2 \rfloor} = N$, applicable for a large range of parameters. For example, for $K = 8$ we improve the best previous tradeoff from $T^2M^2 = N$ to $T^3M = N$ and for $K = 32$ the improvement is from $T^2M^4 = N$ to $T^4M^2 = N$.
Next, we consider values of $K$ which are not powers of 2 and show that in many cases even more efficient time-memory tradeoff curves can be obtained. Most interestingly, for $K \in \{6,7,14,15\}$ we present algorithms with the same time complexities as the K-tree algorithm, but with significantly reduced memory complexities. In particular, for $K=6$ the K-tree algorithm achieves $T=M=N^{1/3}$, whereas we obtain $T=N^{1/3}$ and $M=N^{1/6}$. For $K=14$, Wagner's algorithm achieves $T=M=N^{1/4}$, while we obtain $T=N^{1/4}$ and $M=N^{1/8}$. This gives the first significant improvement over the K-tree algorithm for small $K$.
Finally, we optimize our techniques for several concrete GBP instances and show how to solve some of them with improved time and memory complexities compared to the state-of-the-art.
Our results are obtained using a framework that combines several algorithmic techniques such as variants of the Schroeppel-Shamir algorithm for solving knapsack problems (devised in works by Howgrave-Graham and Joux and by Becker, Coron and Joux) and dissection algorithms (published by Dinur, Dunkelman, Keller and Shamir). It then builds on these techniques to develop new GBP algorithms.
Correctness and Fairness of Tendermint-core Blockchains
Tendermint-core blockchains (e.g., Cosmos) are considered today one of the most viable alternatives to the highly energy-consuming proof-of-work blockchains such as Bitcoin and Ethereum. Their particularity is that they aim at offering strong consistency (no forks) in an open system by combining two ingredients: (i) a set of validators that generate blocks via a variant of the Practical Byzantine Fault Tolerant (PBFT) consensus protocol and (ii) a selection strategy that dynamically selects nodes to be validators for the next block via a proof-of-stake mechanism.
However, the exact assumptions on the system model under which Tendermint's underlying algorithms are correct, and the exact properties Tendermint verifies, have never been formally analyzed. The contribution of this paper is two-fold. First, while formalizing Tendermint's algorithms we precisely characterize the system model and the exact problem solved by Tendermint. We prove that in eventually synchronous systems a modified version of Tendermint solves (i) under additional assumptions, a variant of one-shot consensus for the validation of a single block and (ii) a variant of the repeated consensus problem for multiple blocks. These results hold even if the set of validators is hit by Byzantine failures, provided that for each one-shot consensus instance fewer than one third of the validators are Byzantine. Our second contribution relates to the fairness of the rewarding mechanism. It is common knowledge that in permissionless blockchain systems the main threat is the tragedy of the commons, which may lead the system to collapse if the rewarding mechanism is not adequate. At minimum, the rewarding mechanism must be $fair$, i.e., distribute the rewards in proportion to the merit of participants. We prove, for the first time in blockchain systems, that in repeated-consensus-based blockchains there exists an (eventually) fair rewarding mechanism if and only if the system is (eventually) synchronous.
We also show that the original Tendermint rewarding mechanism is not fair; however, a modification of the original protocol makes it eventually fair.
Improved Lightweight Implementations of CAESAR Authenticated Ciphers
Authenticated ciphers offer potential benefits to resource-constrained devices in the Internet of Things (IoT). The CAESAR competition seeks optimal authenticated ciphers based on several criteria, including performance in resource-constrained (i.e., low-area, low-power, and low-energy) hardware. Although the competition specified a "lightweight" use case for Round 3, most hardware submissions to Round 3 were not lightweight implementations, in that they employed architectures optimized for the best throughput-to-area (TP/A) ratio, and used the Pre- and PostProcessor modules from the CAESAR Hardware (HW) Development Package designed for high-speed applications. In this research, we provide true lightweight implementations of selected ciphers (ACORN, NORX, CLOC-AES, SILC-AES, and SILC-LED). These implementations use an improved version of the CAESAR HW Development Package designed for lightweight applications, and are fully compliant with the CAESAR HW Application Programming Interface for Authenticated Ciphers. Our lightweight implementations achieve an average 55% reduction in area and 40% reduction in power compared to their corresponding high-speed versions. Although the average energy per bit of the lightweight ciphers increases by a factor of 3.6, the lightweight version of NORX actually uses 47% less energy per bit than its corresponding high-speed implementation.
Round-Optimal Secure Multiparty Computation with Honest Majority
We study the exact round complexity of secure multiparty computation (MPC) in the honest majority setting. We construct several round-optimal $n$-party protocols, tolerating any $t<\frac{n}{2}$ corruptions.
- Security with abort: We give the first construction of two round MPC for general functions that achieves security with abort against malicious adversaries in the plain model. The security of our protocol only relies on one-way functions.
- Guaranteed output delivery: We also construct protocols that achieve security with guaranteed output delivery: (i) Against fail-stop adversaries, we construct two round MPC either in the (bare) public-key infrastructure model with no additional assumptions, or in the plain model assuming two-round semi-honest oblivious transfer. In three rounds, however, we can achieve security assuming only one-way functions. (ii) Against malicious adversaries, we construct three round MPC in the plain model, assuming public-key encryption and Zaps. Previously, such protocols were only known based on specific learning assumptions and required the use of common reference strings.
All of our results are obtained via general compilers that may be of independent interest.
Limits of Practical Sublinear Secure Computation
Secure computations on big data call for protocols that have sublinear communication complexity in the input length. While fully homomorphic encryption (FHE) provides a general solution to the problem, employing it on a large scale is currently quite far from being practical. This is also the case for secure computation tasks that reduce to weaker forms of FHE such as ''somewhat homomorphic encryption'' or single-server private information retrieval (PIR).
Quite unexpectedly, Aggarwal, Mishra, and Pinkas (Eurocrypt 2004), Brickell and Shmatikov (Asiacrypt 2005), and shelat and Venkitasubramaniam (Asiacrypt 2015) have shown that in several natural instances of secure computation on big data, there are practical sublinear communication protocols that only require sublinear local computation and minimize the use of expensive public-key operations. This raises the question of whether similar protocols exist for other natural problems.
In this paper we put forward a framework for separating ''practical'' sublinear protocols from ''impractical'' ones, and establish a methodology for identifying ''provably hard'' big-data problems that do not admit practical protocols. This is akin to the use of NP-completeness to separate hard algorithmic problems from easy ones. We show that while the previous protocols of Aggarwal et al., Brickell and Shmatikov, and shelat and Venkitasubramaniam are indeed classified as being ''practical'' in this framework, slight variations of the problems they solve and other natural computational problems on big data are hard.
Our negative results are established by showing that the problem at hand is ''PIR-hard'' in the sense that any secure protocol for the problem implies PIR on a large database. This imposes a barrier on the local computational cost of secure protocols for the problem. We also identify a new natural relaxation of PIR that we call semi-PIR, which is useful for establishing ''intermediate hardness'' of several practically motivated secure computation tasks. We show that semi-PIR implies slightly sublinear PIR via an adaptive black-box reduction and that ruling out a stronger black-box reduction would imply a major breakthrough in complexity theory. We also establish information-theoretic separations between semi-PIR and PIR, showing that some problems that we prove to be semi-PIR-hard are not PIR-hard.
Fast Large-Scale Honest-Majority MPC for Malicious Adversaries
Protocols for secure multiparty computation enable a set of parties to compute a function of their inputs without revealing anything but the output. The security properties of the protocol must be preserved in the presence of adversarial behavior. The two classic adversary models considered are semi-honest (where the adversary follows the protocol specification but tries to learn more than allowed by examining the protocol transcript) and malicious (where the adversary may follow any arbitrary attack strategy). Protocols for semi-honest adversaries are often far more efficient, but in many cases the security guarantees are not strong enough.
In this paper, we present new protocols for securely computing any functionality represented by an arithmetic circuit. We utilize a new method for verifying that the adversary does not cheat, which yields a cost of just twice that of semi-honest protocols in some settings. Our protocols are information-theoretically secure in the presence of malicious adversaries, assuming an honest majority. We present protocol variants for small and large fields, and show how to efficiently instantiate them based on replicated secret sharing and Shamir sharing. As with previous works in this area aiming to achieve high efficiency, our protocol is secure with abort and does not achieve fairness, meaning that the adversary may receive output while the honest parties do not.
We implemented our protocol and ran experiments for different numbers of parties, different network configurations and different circuit depths. Our protocol significantly outperforms the previous best for this setting (Lindell and Nof, CCS 2017); for a large number of parties, our implementation runs almost an order of magnitude faster than theirs.
Dissection-BKW
The slightly subexponential algorithm of Blum, Kalai and Wasserman (BKW) provides a basis for assessing LPN/LWE security. However, its huge memory consumption strongly limits its practical applicability, thereby preventing precise security estimates for cryptographic LPN/LWE instantiations.
We provide the first time-memory trade-offs for the BKW algorithm. For instance, we show how to solve LPN in dimension $k$ in time $2^{\frac 43\frac k{\log k}}$ and memory $2^{\frac 23\frac k{\log k}}$. Using the Dissection technique due to Dinur et al. (Crypto ’12) and a novel, slight generalization thereof, we obtain fine-grained trade-offs for any available (subexponential) memory while the running time remains subexponential.
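To get a concrete feel for the stated exponents, the following back-of-the-envelope sketch (illustrative only; the dimension $k=512$ is our own example choice, and all logarithms are base 2 as in the formulas above) evaluates the time and memory exponents of the trade-off:

```python
import math

def bkw_tradeoff_exponents(k):
    """Exponents (in bits) of the stated BKW trade-off:
    time 2^((4/3) k / log k), memory 2^((2/3) k / log k), log base 2."""
    t = (4.0 / 3.0) * k / math.log2(k)
    m = (2.0 / 3.0) * k / math.log2(k)
    return t, m

# Illustrative LPN dimension k = 512 (log2 k = 9)
t, m = bkw_tradeoff_exponents(512)
print(f"time ~ 2^{t:.1f}, memory ~ 2^{m:.1f}")
```

Note that in this trade-off point the memory exponent is exactly half the time exponent, i.e., memory roughly the square root of the running time.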
Reducing the memory consumption of BKW below its running time also allows us to propose a first quantum version QBKW for the BKW algorithm.
Finding Small Solutions of the Equation $Bx-Ay=z$ and Its Applications to Cryptanalysis of the RSA Cryptosystem
In this paper, we study the conditions for finding small solutions $(x,y,z)=(x_0, y_0, z_0)$ of the equation $Bx-Ay=z$. The framework is derived from Wiener's small private exponent attack on RSA and May-Ritzenhofen's investigation of the implicit factorization problem, both of which can be generalized to solve the above equation. We show that these two methods, together with Coppersmith's method, are equivalent for solving $Bx-Ay=z$ in the general case. Then, based on Coppersmith's method, we present two improvements for solving $Bx-Ay=z$ in some special cases. The first improvement addresses the case where either $\gcd(x_0,z_0,A)$ or $\gcd(y_0,z_0,B)$ is large enough. As applications of this improvement, we propose new cryptanalysis of RSA, such as new results on the generalized implicit factorization problem, attacks with known bits of the prime factor, and so on. The motivation for these applications comes from the oracle-based complexity of factorization problems. The second improvement assumes that the value of $C \equiv z_0\ (\mathrm{mod}\ x_0)$ is known. We present two attacks on RSA as its applications. One focuses on the case with known bits of the private exponent together with the prime factor, and the other considers the case with a small difference between the two prime factors. Our new attacks on RSA improve on the corresponding previous results, and the correctness of the approach is verified by experiments.
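As a small, hedged illustration of the Wiener-style starting point mentioned above (not the paper's improved method): the continued-fraction convergents $p/q$ of $B/A$ are natural candidates for small solutions, since setting $(x,y)=(q,p)$ makes $z = Bq - Ap$ small. The numbers below are a toy example of our own choosing:

```python
def convergents(num, den):
    """Continued-fraction convergents p/q of num/den over the integers."""
    p0, p1 = 0, 1   # p_{-2}, p_{-1}
    q0, q1 = 1, 0   # q_{-2}, q_{-1}
    out = []
    while den:
        a, num, den = num // den, den, num % den
        p0, p1 = p1, a * p1 + p0
        q0, q1 = q1, a * q1 + q0
        out.append((p1, q1))
    return out

B, A = 649, 200
for p, q in convergents(B, A):
    # each convergent p/q of B/A yields a candidate (x, y) = (q, p)
    # with z = B*x - A*y small (|z| < A/q for proper convergents)
    print(p, q, B * q - A * p)
```

Here the convergent $159/49$ gives $649 \cdot 49 - 200 \cdot 159 = 1$, a genuinely small solution long before the trivial exact convergent $649/200$.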
On the Security Properties of e-Voting Bulletin Boards
In state-of-the-art e-voting systems, a bulletin board (BB) is a critical component for preserving election integrity and availability. Although it is common in the literature to assume that a BB is a centralized entity that is trusted, in the recent works of Culnane and Schneider [CSF 2014] and Chondros et al. [ICDCS 2016], the importance of removing BB as a single point of failure has been extensively discussed.
Motivated by these works, we introduce a framework for the formal security analysis of the BB functionality modeled as a distributed system that comprises
(i) a subsystem of item collection (IC) peers that receive and store the submitted items, and
(ii) a subsystem of audit board (AB) peers, where the IC subsystem publishes the stored items for verification.
Our framework treats a secure BB as a robust public transaction ledger, defined by Garay et al. [Eurocrypt 2015], that additionally supports the generation of receipts for successful posting. Namely, in our model, a secure BB system achieves Persistence and Liveness that are confirmable, in the sense that any malicious behavior of the BB system can be detected via a verification mechanism.
As a case study for our framework, we analyze the BB system of Culnane and Schneider and point out its weaknesses.
We demonstrate an attack revealing that the said system does not achieve Confirmable Liveness, even in the case where the adversary is computationally bounded and covert, i.e., it may deviate from the protocol specification but does not want to be detected. In addition, we show that special care should be taken in the choice of the underlying cryptographic primitives, so that the claimed fault-tolerance threshold of N/3 out of N corrupted IC peers is preserved.
Next, based on our analysis, we introduce a new BB protocol that upgrades the [CSF 2014] protocol. We prove that it tolerates any number less than N/3 out of N corrupted IC peers, both for Persistence and Confirmable Liveness, against a computationally bounded general Byzantine adversary. Furthermore, Persistence can also be Confirmable, if we distribute the AB (originally a centralized entity in [CSF 2014]) as a replicated service with honest majority.
Private Circuits: A Modular Approach
We consider the problem of protecting general computations against constant-rate random leakage. That is, the computation is performed by a randomized boolean circuit that maps a randomly encoded input to a randomly encoded output, such that even if the value of every wire is independently leaked with some constant probability $p > 0$, the leakage reveals essentially nothing about the input.
In this work we provide a conceptually simple, modular approach for solving the above problem, providing a simpler and self-contained alternative to previous constructions of Ajtai (STOC 2011) and Andrychowicz et al.\ (Eurocrypt 2016). We also obtain several extensions and generalizations of this result. In particular, we show that for every leakage probability $p<1$, there is a finite basis $\mathbb{B}$ such that leakage-resilient computation with leakage probability $p$ can be realized using circuits over the basis $\mathbb{B}$.
We obtain similar positive results for the stronger notion of leakage tolerance, where the input is not encoded, but the leakage from the entire computation can be simulated given random $p'$-leakage of input values alone, for any $p<p'<1$. Finally, we complement this by a negative result, showing that for every basis $\mathbb{B}$ there is some leakage probability $p<1$ such that for any $p'<1$, leakage tolerance as above cannot be achieved in general.
Last updated: 2018-10-10
Homomorphic Encryption for Approximate Matrix Arithmetic
Homomorphic Encryption for Arithmetic of Approximate Numbers (HEAAN), with its vector packing technique, has proved its potential in cryptographic applications. In this paper, we propose MHEAAN, a generalization of the HEAAN scheme to the multivariate case. Our design retains the advantage of the HEAAN scheme that precision losses during evaluation are bounded by the depth of the circuit and exceed those of unencrypted approximate arithmetic, such as floating-point operations, by at most one bit. In addition, with the multivariate structure of the plaintext space, we suggest a general method for packing multidimensional structures such as matrices and tensors in a single ciphertext. We provide a concrete two-dimensional construction and show the efficiency of our scheme on several matrix operations, such as matrix transposition, matrix multiplication, and inversion.
Impossibility on Tamper-Resilient Cryptography with Uniqueness Properties
In this work, we show negative results on the tamper-resilience of a wide class of cryptographic primitives with uniqueness properties, such as unique signatures, verifiable random functions, signatures with unique keys, injective one-way functions, and encryption schemes with a property we call unique-message property. Concretely, we prove that for these primitives, it is impossible to derive their (even extremely weak) tamper-resilience from any common assumption, via black-box reductions. Our proofs exploit the simulatable attack paradigm proposed by Wichs (ITCS ’13), and the tampering model we treat is the plain model, where there is no trusted setup.
Multi-client Predicate-only Encryption for Conjunctive Equality Tests
We propose the first multi-client predicate-only encryption scheme capable of efficiently testing the equality of two encrypted vectors. Our construction can be used for the privacy-preserving monitoring of relations among multiple clients. Since both the clients’ data and the predicates are encrypted, our system is suitable for situations in which this information is considered sensitive. We prove our construction plaintext and predicate private in the generic bilinear group model using random oracles, and secure under chosen-plaintext attack with unbounded corruptions under the symmetric external Diffie-Hellman assumption. Additionally, we provide a proof-of-concept implementation that is capable of evaluating one thousand predicates defined over the inputs of ten clients in less than a minute on commodity hardware.
maskVerif: automated analysis of software and hardware higher-order masked implementations
Power and electromagnetic based side-channel attacks are serious threats against the security of cryptographic embedded devices. In order to mitigate these attacks, implementations use countermeasures, among which masking is currently the most investigated and deployed choice. Unfortunately, commonly studied forms of masking rely on underlying assumptions that are difficult to satisfy in practice. This is due to physical defaults, such as glitches or transitions, which can recombine the masked data in a way that concretely reduces an implementation’s security.
We develop and implement an automated approach for verifying security of masked implementations in presence of physical defaults (glitches or transitions). Our approach helps to recover the main strengths of masking: rigorous foundations, composability guarantees, automated verification under more realistic assumptions. Our work follows the approach of (Barthe et al, EUROCRYPT 2015) and thus contributes to demonstrate the benefits of language-based approaches (specifically probabilistic information flow) for masking.
Blockchain Abstract Data Type
The presented work continues the line of recent distributed computing community efforts dedicated to the theoretical aspects of blockchains. This paper is the first to specify blockchains as a composition of abstract data types, together with a hierarchy of consistency criteria that formally characterizes the histories admissible for distributed programs that use them. Our work is based on an original oracle-based construction that, along with new consistency definitions, captures the eventual convergence process in blockchain systems. The paper also presents results on the implementability of the presented abstractions and a mapping of representative existing blockchains, from both academia and industry, into our framework.
Sub-Linear Lattice-Based Zero-Knowledge Arguments for Arithmetic Circuits
We propose the first zero-knowledge argument with sub-linear communication complexity for arithmetic circuit satisfiability over a prime $p$ whose security is based on the hardness of the short integer solution (SIS) problem. For a circuit with $N$ gates, the communication complexity of our protocol is $O\left(\sqrt{N\lambda\log^3{N}}\right)$, where $\lambda$ is the security parameter. A key component of our construction is a surprisingly simple zero-knowledge proof for pre-images of linear relations whose amortized communication complexity depends only logarithmically on the number of relations being proved. This latter protocol is a substantial improvement, both theoretically and in practice, over the previous results in this line of research of Damgård et al. (CRYPTO 2012), Baum et al. (CRYPTO 2016), Cramer et al. (EUROCRYPT 2017) and del Pino and Lyubashevsky (CRYPTO 2017), and we believe it to be of independent interest.
Proofs of Work from Worst-Case Assumptions
We give Proofs of Work (PoWs) whose hardness is based on well-studied worst-case assumptions from fine-grained complexity theory. This extends the work of (Ball et al., STOC '17), which presents PoWs based on the Orthogonal Vectors, 3SUM, and All-Pairs Shortest Path problems. These, however, were presented as a 'proof of concept' of provably secure PoWs and did not fully meet the requirements of a conventional PoW: namely, it was not shown that multiple proofs could not be generated faster than generating each individually. We use the considerable algebraic structure of these PoWs to prove that this non-amortizability of multiple proofs does in fact hold, and further show that the PoWs' structure can be exploited in ways previous heuristic PoWs could not.
This creates full PoWs that are provably hard from worst-case assumptions (previously, PoWs were either only based on heuristic assumptions or on much stronger cryptographic assumptions (Bitansky et al., ITCS '16)) while still retaining significant structure to enable extra properties of our PoWs. Namely, we show that the PoWs of (Ball et al, STOC '17) can be modified to have much faster verification time, can be proved in zero knowledge, and more.
Finally, as our PoWs are based on evaluating low-degree polynomials originating from average-case fine-grained complexity, we prove an average-case direct sum theorem for the problem of evaluating these polynomials, which may be of independent interest. For our context, this implies the required non-amortizability of our PoWs.
Simplifying Game-Based Definitions: Indistinguishability up to Correctness and Its Application to Stateful AE
Often the simplest way of specifying game-based cryptographic definitions is apparently barred because the adversary would have some trivial win. Disallowing or invalidating these wins can lead to complex or unconvincing definitions. We suggest a generic way around this difficulty. We call it indistinguishability up to correctness, or IND|C. Given games G and H and a correctness condition C, we define an advantage measure Adv_{G,H,C}^indc wherein G/H distinguishing attacks are effaced to the extent that they are inevitable due to C. We formalize this in the language of oracle silencing, an alternative to exclusion-style and penalty-style definitions.
We apply our ideas to a domain where game-based definitions have been cumbersome: stateful authenticated encryption (sAE). We rework existing sAE notions and encompass new ones, like replay-free AE permitting a specified degree of out-of-order message delivery.
Non-Interactive Zero-Knowledge Proofs for Composite Statements
The two most common ways to design non-interactive zero-knowledge (NIZK) proofs are based on Sigma protocols and QAP-based SNARKs. The former is highly efficient for proving algebraic statements while the latter is superior for arithmetic representations.
Motivated by applications such as privacy-preserving credentials and privacy-preserving audits in cryptocurrencies, we study the design of NIZKs for composite statements that compose algebraic and arithmetic statements in arbitrary ways. Specifically, we provide a framework for proving statements that consist of ANDs, ORs and function compositions of a mix of algebraic and arithmetic components. This allows us to explore the full spectrum of trade-offs between proof size, prover cost, and CRS size/generation cost. This leads to proofs for statements of the form: knowledge of $x$ such that $SHA(g^x)=y$ for some public $y$, where the prover performs 500 times fewer exponentiations than with a QAP-based SNARK at the cost of increasing the proof size to 2404 group and field elements. Applied to anonymous credentials, our techniques result in 8 times fewer exponentiations for the prover at the cost of increasing the proof size to 298 elements.
The Curse of Small Domains: New Attacks on Format-Preserving Encryption
Format-preserving encryption (FPE) produces ciphertexts which have the same format as the plaintexts. Building secure FPE is very challenging, and recent attacks (Bellare, Hoang, Tessaro, CCS'16; Durak and Vaudenay, CRYPTO'17) have highlighted security deficiencies in the recent NIST SP800-38G standard. This has left open the question of whether practical schemes with high security exist.
In this paper, we continue the investigation of attacks against FPE schemes. Our first contribution is new known-plaintext message recovery attacks against Feistel-based FPEs (such as FF1/FF3 from the NIST SP800-38G standard) which improve upon previous work in terms of amortized complexity in multi-target scenarios, where multiple ciphertexts are to be decrypted. Our attacks are also qualitatively better in that they make no assumptions on the correlation between the targets to be decrypted and the known plaintexts. We also surface a new vulnerability specific to FF3 and how it handles odd-length domains, which leads to a substantial speedup in our attacks.
We also show the first attacks against non-Feistel based FPEs. Specifically, we show a strong message-recovery attack for FNR, a construction proposed by Cisco which replaces two rounds in the Feistel construction with a pairwise-independent permutation, following the paradigm by Naor and Reingold (JoC,'99). We also provide a strong ciphertext-only attack against a variant of the DTP construction by Brightwell and Smith, which is deployed by Protegrity within commercial applications.
All of our attacks show that existing constructions fall short of achieving desirable security levels. For Feistel and the FNR schemes, our attacks become feasible on small domains, e.g., 8 bits, for suggested round numbers. Our attack against the DTP construction is practical even for large domains. We provide proof-of-concept implementations of our attacks that verify our theoretical findings.
Limits on the Power of Garbling Techniques for Public-Key Encryption
Understanding whether public-key encryption can be based on one-way functions is a fundamental open problem in cryptography. The seminal work of Impagliazzo and Rudich [STOC'89] shows that black-box constructions of public-key encryption from one-way functions are impossible. However, this impossibility result leaves open the possibility of using non-black-box techniques for achieving this goal.
One of the most powerful classes of non-black-box techniques, which can be based on one-way functions (OWFs) alone, is Yao's garbled circuit technique [FOCS'86]. As for the non-black-box power of this technique, the recent work of Dottling and Garg [CRYPTO'17] shows that the use of garbling allows us to circumvent known black-box barriers in the context of identity-based encryption.
We prove that garbling of circuits that contain OWF (or even random oracle) gates is insufficient for obtaining public-key encryption. Additionally, we show that this model also captures (non-interactive) zero-knowledge proofs for relations with OWF gates. This indicates that currently known OWF-based non-black-box techniques are perhaps insufficient for realizing public-key encryption.
A new class of irreducible pentanomials for polynomial based multipliers in binary fields
We introduce a new class of irreducible pentanomials over ${\mathbb F}_{2^m}$ of the form $f(x) = x^{2b+c} + x^{b+c} + x^b + x^c + 1$. Let $m=2b+c$ and use $f$ to define the finite field extension of degree $m$. We give the exact number of operations required for computing the reduction modulo $f$. We also provide a multiplier based on the Karatsuba algorithm in $\mathbb{F}_2[x]$ combined with our reduction process. We give the total cost of the multiplier and find that the bit-parallel multiplier defined by this new class of polynomials has improved XOR and AND complexity. Our multiplier has a time delay comparable to that of other multipliers based on the Karatsuba algorithm.
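A quick sanity check of the polynomial family is easy to script. The sketch below (illustrative only; the parameter choice $b=2$, $c=1$, giving $x^5+x^3+x^2+x+1$, is ours and not taken from the paper) represents GF(2) polynomials as integer bitmasks and tests irreducibility by brute-force trial division, which is adequate only for small degrees:

```python
def gf2_mod(a, b):
    """Remainder of polynomial a modulo b over GF(2), bitmask encoding."""
    db = b.bit_length() - 1
    while a and a.bit_length() - 1 >= db:
        a ^= b << (a.bit_length() - 1 - db)
    return a

def is_irreducible(f):
    """Brute-force irreducibility test: trial-divide by every polynomial
    of degree 1 .. deg(f)//2 (practical for small degrees only)."""
    deg = f.bit_length() - 1
    if deg < 1 or not (f & 1):   # degree 0, or divisible by x
        return False
    for g in range(2, 1 << (deg // 2 + 1)):
        if gf2_mod(f, g) == 0:
            return False
    return True

def pentanomial(b, c):
    """f(x) = x^(2b+c) + x^(b+c) + x^b + x^c + 1 as a bitmask."""
    return (1 << (2 * b + c)) | (1 << (b + c)) | (1 << b) | (1 << c) | 1

# b = 2, c = 1 gives x^5 + x^3 + x^2 + x + 1, which is irreducible
print(is_irreducible(pentanomial(2, 1)))
```

Not every choice of $b$ and $c$ yields an irreducible pentanomial, which is exactly why the paper characterizes the class; a routine like this can enumerate candidate pairs for small $m$.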
Optimal Channel Security Against Fine-Grained State Compromise: The Safety of Messaging
We aim to understand the best possible security of a (bidirectional) cryptographic channel against an adversary that may arbitrarily and repeatedly learn the secret state of either communicating party. We give a formal security definition and a proven-secure construction. This construction provides better security against state compromise than the Signal Double Ratchet Algorithm or any other known channel construction. To facilitate this we define and construct new forms of public-key encryption and digital signatures that update their keys over time.
On the Complexity of Compressing Obfuscation
Indistinguishability obfuscation has become one of the most exciting cryptographic primitives due to its far-reaching applications in cryptography and other fields. However, to date, obtaining a plausibly secure construction has been an elusive task, thus motivating the study of seemingly weaker primitives that imply it, with the possibility that they will be easier to construct.
In this work, we provide a systematic study of compressing obfuscation, one of the most natural and simple to describe primitives that is known to imply indistinguishability obfuscation when combined with other standard assumptions. A compressing obfuscator is roughly an indistinguishability obfuscator that outputs just a slightly compressed encoding of the truth table. This generalizes notions introduced by Lin et al.~(PKC 2016) and Bitansky et al.~(TCC 2016) by allowing for a broader regime of parameters.
We view compressing obfuscation as an independent cryptographic primitive and show various positive and negative results concerning its power and plausibility of existence, demonstrating significant differences from full-fledged indistinguishability obfuscation.
First, we show that as a cryptographic building block, compressing obfuscation is weak. In particular, when combined with one-way functions, it cannot be used (in a black-box way) to achieve public-key encryption, even under (sub-)exponential security assumptions. This is in sharp contrast to indistinguishability obfuscation, which together with one-way functions implies almost all cryptographic primitives.
Second, we show that to construct compressing obfuscation with perfect correctness, one only needs to assume its existence with a very weak correctness guarantee and polynomial hardness. Namely, we show a correctness amplification transformation with optimal parameters that relies only on polynomial hardness assumptions. This implies a universal construction assuming only polynomially secure compressing obfuscation with approximate correctness. In the context of indistinguishability obfuscation, we know how to achieve such a result only under sub-exponential security assumptions together with derandomization assumptions.
Lastly, we characterize the existence of compressing obfuscation with \emph{statistical} security. We show that in some range of parameters and for some classes of circuits such an obfuscator exists, whereas it is unlikely to exist with better parameters or for larger classes of circuits. These positive and negative results reveal a deep connection between compressing obfuscation and various concepts in complexity theory and learning theory.
Structured Encryption and Leakage Suppression
Structured encryption (STE) schemes encrypt data structures in such a way that they can be privately queried. One aspect of STE that is still poorly understood is its leakage. In this work, we describe a general framework to design STE schemes that do not leak the query/search pattern (i.e., if and when a query was previously made).
Our framework consists of two compilers. The first can be used to make any dynamic STE scheme rebuildable in the sense that the encrypted structures it produces can be rebuilt efficiently using only O(1) client storage. The second transforms any rebuildable scheme that leaks the query/search pattern into a new scheme that does not. Our second compiler is a generalization of Goldreich and Ostrovsky's square root oblivious RAM (ORAM) solution but does not make use of black-box ORAM simulation. We show that our framework produces STE schemes with query complexity that is asymptotically better than ORAM simulation in certain (natural) settings and comparable to special-purpose oblivious data structures.
We use our framework to design a new STE scheme that is ``almost" zero-leakage in the sense that it reveals an, intuitively-speaking, small amount of information. We also show how the scheme can be used to achieve zero-leakage queries when one can tolerate a probabilistic guarantee of correctness. This construction results from applying our compilers to a new STE scheme we design called the piggyback scheme. This scheme is a general-purpose STE construction (in the sense that it can encrypt any data structure) that leaks the search/query pattern but hides the response length on non-repeating queries.
PRank: Fast Analytical Rank Estimation via Pareto Distributions
Rank estimation is an important tool for a side-channel evaluations laboratories. It allows estimating the remaining security after an attack has been performed, quantified as the time complexity and the memory consumption required to brute force the key given the leakages as probability distributions over $d$ subkeys (usually key bytes). These estimations are particularly useful where the key is not reachable with exhaustive search. We propose a new method called PRank for rank estimation, that is conceptually simple, and more time and memory efficient than previous proposals. Our main idea is to bound each subkey distribution by a Pareto-like function: since these are analytical functions, we can then estimate the rank by a closed formula. We evaluated the performance of PRank through extensive simulations based on two real SCA data corpora, and compared it to the currently-best histogram-based algorithm. We show that PRank gives a good rank estimation with much improved time and memory efficiency, especially for large ranks: For ranks between $2^{80}-2^{100}$ PRank estimation is at most 10 bits above the histogram rank and for ranks beyond $2^{100}$ the PRank estimation is only 4 bits above the histogram rank---yet it runs faster, and uses negligible memory. PRank gives a new and interesting method to solve the rank estimation problem based on reduction to analytical functions and calculating one closed formula hence using negligible time and space.