All papers in 2017 (Page 6 of 1262 results)
Private Collaborative Neural Network Learning
Machine learning algorithms, such as neural networks, create better predictive models when they have access to larger datasets. In many domains, such as medicine and finance, each institution has access to only limited amounts of data, and creating larger datasets typically requires collaboration. However, such collaborations are constrained by privacy requirements arising from legal, ethical, and competitive concerns. In this work, we present a feasible protocol for learning neural networks collaboratively while preserving the privacy of each record. This is achieved by combining Differential Privacy and Secure Multi-Party Computation with Machine Learning.
Anti-SAT: Mitigating SAT Attack on Logic Locking
Logic locking is a technique proposed to protect outsourced IC designs from piracy and counterfeiting by untrusted foundries. A locked IC preserves the correct functionality only when a correct key is provided. Recently, the security of logic locking has been threatened by a new attack called the SAT attack, which can decipher the correct key of most logic locking techniques within a few hours, even for a reasonably large key size. This attack iteratively solves SAT formulas that progressively eliminate the incorrect keys until the circuit is unlocked. In this paper, we present a circuit block (referred to as the Anti-SAT block) to enhance the security of existing logic locking techniques against the SAT attack. We show, using a mathematical proof, that the number of SAT attack iterations needed to reveal the correct key in a circuit comprising an Anti-SAT block is an exponential function of the key size, thereby making the SAT attack computationally infeasible. In addition, we address the vulnerability of the Anti-SAT block to various removal attacks and investigate obfuscation techniques to prevent these removal attacks. More importantly, we provide a proof showing that these obfuscation techniques for making the Anti-SAT block un-removable do not weaken its resistance to the SAT attack. Through our experiments, we illustrate the effectiveness of our approach in securing modern chips fabricated in untrusted foundries.
GIFT: A Small Present (Full version)
In this article, we revisit the design strategy of PRESENT, leveraging all the advances provided by the research community in construction and cryptanalysis since its publication, to push the design to its limits. We obtain an improved version, named GIFT, that provides much increased efficiency in all domains (smaller and faster), while correcting the well-known weakness of PRESENT with regard to linear hulls.
GIFT is a very simple and clean design that outperforms even SIMON or SKINNY for round-based implementations, making it one of the most energy-efficient ciphers to date. It reaches a point where almost the entire implementation area is taken by the storage and the Sboxes, and any cheaper choice of Sbox would lead to a very weak proposal. In essence, GIFT is composed of only Sbox and bit-wiring, but its natural bitslice data flow ensures excellent performance in all scenarios, from area-optimised hardware implementations to very fast software implementations on high-end platforms.
We conducted a thorough analysis of our design with regard to state-of-the-art cryptanalysis, and we provide strong bounds against differential/linear attacks.
Simple Amortized Proofs of Shortness for Linear Relations over Polynomial Rings
For a public value $y$ and a linear function $f$, giving a zero-knowledge proof of knowledge of a secret value $x$ that satisfies $f(x)=y$ is a key ingredient in many cryptographic protocols. Lattice-based constructions, in addition, require proofs of ``shortness'' of $x$. Of particular interest are constructions where $f$ is a function over polynomial rings, since these are the ones that result in efficient schemes with short keys and outputs.
All known approaches for such lattice-based zero-knowledge proofs are not very practical because they involve a basic protocol that needs to be repeated many times in order to achieve negligible soundness error. In the amortized setting, where one needs to give zero-knowledge proofs for many equations for the same function $f$, the situation is more promising, though still not yet fully satisfactory. Current techniques either result in proofs of knowledge of $x$'s that are exponentially larger than the $x$'s actually used for the proof (i.e. the \emph{slack} is exponential), or they have polynomial slack but require the number of proofs to be in the several thousands before the amortization advantages ``kick in''.
In this work, we give a new approach for constructing amortized zero-knowledge proofs of knowledge of short solutions over polynomial rings. Our proof has small polynomial slack and is practical even when the number of relations is as small as the security parameter.
On Improving Integer Factorization and Discrete Logarithm Computation using Partial Triangulation
The number field sieve is the best-known algorithm for factoring integers and solving the discrete logarithm problem in prime fields. In this paper, we present some new improvements to various steps of the number field sieve. We apply these improvements to the current 768-bit discrete logarithm record and show that the overall computation can be performed in about 1260 core$\cdot$years using these improvements, instead of 2350 core$\cdot$years using the best previously known parameters for this problem. Moreover, we show that the pre-computation phase for a 768-bit discrete logarithm problem, which would allow, for example, building a massive decryption tool for IPsec traffic protected by the Oakley group 1, was feasible in reasonable time using technologies available before the year 2000.
CAKE: Code-based Algorithm for Key Encapsulation
Current widely-used key exchange (KE) mechanisms will be vulnerable to quantum attacks when sufficiently strong quantum computers become available. Therefore, devising quantum-resistant replacements that combine efficiency with solid security guarantees is an important and challenging task. This paper proposes several contributions towards this goal. First, we introduce CAKE, a key encapsulation algorithm based on the QC-MDPC McEliece encryption scheme, with two major improvements: a) the use of ephemeral keys that defeats a recent reaction attack against MDPC decoding of the corresponding encryption scheme and b) a highly efficient key generation procedure for QC-MDPC-based cryptosystems. Then, we present an authenticated key exchange protocol based on CAKE, which is suitable for the Internet Key Exchange (IKE) standard. We prove that CAKE is IND-CPA secure, that the protocol is SK-Secure, and suggest practical parameters. Compared to other post-quantum schemes, we believe that CAKE is a promising candidate for post-quantum key exchange standardization.
Verifiable Private Polynomial Evaluation
Delegating the computation of a polynomial to a server in a verifiable way is challenging. An even more challenging problem is ensuring that this polynomial remains hidden from clients who are able to query such a server. In this paper, we formally define the notion of \emph{Private Polynomial Evaluation} (PPE). Our main contribution is to design a rigorous security model along with relations between the different security properties. We define \emph{polynomial protection} (PP), \emph{proof unforgeability} (UNF), and \emph{indistinguishability against chosen function attack} (INDCFA), which formalizes the resistance of a PPE against attackers trying to guess which polynomial is used among two polynomials of their choice. As a second contribution, we give a cryptanalysis of two PPE schemes from the literature. Finally, we design a PPE scheme called PIPE and we prove that it is PP-, UNF- and INDCFA-secure under the decisional Diffie-Hellman assumption in the random oracle model.
Efficient, Reusable Fuzzy Extractors from LWE
A fuzzy extractor (FE), proposed for deriving cryptographic keys from biometric data, enables reproducible generation of high-quality randomness from noisy inputs having sufficient min-entropy. In their operation, FEs rely on a public "helper string" that is guaranteed not to leak too much information about the original input. Unfortunately, this guarantee may not hold when multiple independent helper strings are generated from correlated inputs, as would occur if a user registers their biometric data with multiple servers; reusable FEs are needed in that case. Although the notion of reusable FEs was introduced in 2004, it has received relatively little attention since then.
We first analyze an FE proposed by Fuller et al. (Asiacrypt 2013) based on the learning-with-errors (LWE) assumption, and show that it is not reusable. We then show how to adapt their construction to obtain a weakly reusable FE. We also show a generic technique for turning any weakly reusable FE to a strongly reusable one, in the random-oracle model. Finally, we give a direct construction of a strongly reusable FE based on the LWE assumption, that does not rely on random oracles.
Long-Term Secure Time-Stamping using Preimage-Aware Hash Functions
Commonly used digital signature schemes have a limited lifetime because their security is based on computational assumptions that may break in the future when more powerful computers become available. In 1993, Bayer et al. proposed to renew a digital signature by time-stamping the signature together with the signed document. Based on their idea, long-term timestamp schemes have been proposed and standardized that allow data integrity to be protected over long periods of time. To minimize the risk of a design failure that affects the security of these schemes, it is important to formally analyze their security. However, many of the proposed schemes have not yet been subject to a formal security analysis. In this paper, we address this issue by formally analyzing the security of a hash-based long-term timestamp scheme that is based on the ideas of Bayer et al. Our analysis shows that the security level of this scheme degrades cubically over time, a security loss that needs to be taken into account when the scheme is used in practice.
CryptHOL: Game-based Proofs in Higher-order Logic
Game-based proofs are a well-established paradigm for structuring security arguments and simplifying their understanding. We present a novel framework, CryptHOL, for rigorous game-based proofs that is supported by mechanical theorem proving. CryptHOL is based on a new semantic domain with an associated functional programming language for expressing games. We embed our framework in the Isabelle/HOL theorem prover and, using the theory of relational parametricity, we tailor Isabelle’s existing proof automation to game-based proofs.
By basing our framework on a conservative extension of higher-order logic and providing automation support, the resulting proofs are trustworthy and comprehensible, and the framework is extensible and widely applicable.
We evaluate our framework by formalizing different game-based proofs from the literature and comparing the results with existing formal-methods tools.
A Note on Attribute-Based Group Homomorphic Encryption
Group Homomorphic Encryption (GHE), formally defined by Armknecht, Katzenbeisser and Peter, is a public-key encryption primitive where the decryption algorithm is a group homomorphism. Hence it supports homomorphic evaluation of a single algebraic operation such as modular addition or modular multiplication. Most classical homomorphic encryption schemes, such as Goldwasser-Micali and Paillier, are instances of GHE. In this work, we extend GHE to the attribute-based setting. We introduce and formally define the notion of Attribute-Based GHE (ABGHE) and explore its properties. We then examine the algebraic structure on attributes induced by the group operation in an ABGHE. This algebraic structure is a bounded semilattice. We consider some possible semilattices and how they can be realized by an ABGHE supporting inner product predicates. We then examine existing schemes from the literature and show that they meet our definition of ABGHE for either an additive or multiplicative homomorphism. Some of these schemes are in fact Identity-Based Group Homomorphic Encryption (IBGHE) schemes, i.e., instances of ABGHE whose class of access policies consists of point functions. We then present a possibility result for IBGHE from indistinguishability obfuscation for any group for which a (public-key) GHE scheme exists.
Twisting Lattice and Graph Techniques to Compress Transactional Ledgers
Keeping track of financial transactions (e.g., in banks and blockchains) means keeping track of an ever-increasing list of exchanges between accounts. In fact, many of these transactions can be safely “forgotten”, in the sense that purging a set of them that compensate each other does not impact the network’s semantic meaning (e.g., the accounts’ balances). We call a collection of transactions having no effect on a network’s semantics a nilcatenation. Such exchanges may be archived and removed, yielding a smaller but equivalent ledger. Motivated by the computational and analytic benefits obtained from more compact representations of numerical data, we formalize the problem of finding nilcatenations and propose detection methods based on graph and lattice-reduction techniques. On top of interesting applications of this work (e.g., decoupling of centralized and distributed databases), we also discuss the original idea of a “community-serving proof of work”: finding nilcatenations constitutes a proof of useful work, as the periodic removal of nilcatenations reduces the transactional graph’s size.
Adaptive-Secure VRFs with Shorter Keys from Static Assumptions
Verifiable random functions are pseudorandom functions producing publicly verifiable proofs for their outputs, allowing for efficient checks of the correctness of their computation. In this work, we introduce a new computational hypothesis, the n-Eigen-Value assumption, which can be seen as a relaxation of the U_n-MDDH assumption, and prove its equivalence with the n-Rank assumption. Based on the newly introduced computational hypothesis, we build the core of a verifiable random function having an exponentially large input space and reaching adaptive security under a static assumption. The final construction achieves shorter public and secret keys compared to the existing schemes reaching the same properties.
Searchable Encryption with Optimal Locality: Achieving Sublogarithmic Read Efficiency
We propose the first linear-space searchable encryption scheme with constant locality and \emph{sublogarithmic} read efficiency, strictly improving the previously best known read efficiency bound (Asharov et al., STOC 2016) from $\Theta(\log N \log \log N)$ to $O(\log ^{\gamma} N)$ where $\gamma=\frac{2}{3}+\delta$ for any fixed $\delta>0$. Our scheme employs four different allocation algorithms for storing the keyword lists, depending on the size of the list considered each time. For our construction we develop (i) new probability bounds for the offline two-choice allocation problem and (ii) a new I/O-efficient oblivious RAM with $\tilde{O}(n^{1/3})$ bandwidth overhead and zero failure probability, both of which may be of independent interest.
Efficient reductions in cyclotomic rings - Application to R-LWE based FHE schemes
With Fully Homomorphic Encryption (FHE), it is possible to process encrypted data without having access to the private key. This has a wide range of applications, most notably the offloading of sensitive data processing. Most research on FHE has focused on improving its efficiency, namely by introducing schemes based on the Ring-Learning With Errors (R-LWE) problem, and techniques such as batching, which allows for the encryption of multiple messages in the same ciphertext. Much of the related research has focused on R-LWE relying on power-of-two cyclotomic polynomials. While it is possible to achieve efficient arithmetic with such polynomials, one cannot exploit batching. Herein, the efficiency of ring arithmetic underpinned by non-power-of-two cyclotomic polynomials is analysed and improved. Two methods for polynomial reduction are proposed, one based on the Barrett reduction and the other on a Montgomery representation. Speed-ups of up to 2.66 are obtained for the reduction operation on an i7-5960X processor when compared with a straightforward implementation of the Barrett reduction. Moreover, the proposed methods are exploited to enhance the homomorphic multiplication of the FV and BGV encryption schemes, producing experimental speed-ups of up to 1.37.
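A minimal baseline sketch, assuming toy parameters not taken from the paper ($m=9$, so $\Phi_9(x)=x^6+x^3+1$, and $q=97$): schoolbook reduction of a product modulo a non-power-of-two cyclotomic polynomial. This is the straightforward reduction that Barrett- or Montgomery-style methods, such as those proposed above, aim to speed up.

```python
# Baseline sketch only (not the paper's optimized Barrett/Montgomery methods):
# schoolbook reduction modulo a non-power-of-two cyclotomic polynomial, with
# coefficients mod q. Toy parameters: m = 9, Phi_9 = x^6 + x^3 + 1, q = 97.

q = 97
phi = [1, 0, 0, 1, 0, 0, 1]            # Phi_9(x) = 1 + x^3 + x^6, lowest degree first

def poly_mul(a, b):
    """Schoolbook product of two coefficient lists (lowest degree first), mod q."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % q
    return out

def poly_reduce(a, mod):
    """Plain long division by the monic polynomial `mod`; returns the remainder."""
    r = list(a)
    n = len(mod) - 1                    # degree of the modulus
    for i in range(len(r) - 1, n - 1, -1):
        c = r[i]
        if c:
            for j, mj in enumerate(mod):   # subtract c * x^(i-n) * mod(x)
                r[i - n + j] = (r[i - n + j] - c * mj) % q
    return r[:n]

# Multiply two elements of Z_q[x]/(Phi_9) and reduce the product back into the ring.
a = [3, 1, 4, 1, 5, 9]
b = [2, 6, 5, 3, 5, 8]
print(poly_reduce(poly_mul(a, b), phi))
```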
sLiSCP: Simeck-based Permutations for Lightweight Sponge Cryptographic Primitives
In this paper, we propose a family of lightweight cryptographic permutations called sLiSCP, with the sole aim of providing a realistic minimal design that suits a variety of lightweight device applications. More precisely, we argue that for such devices the chip area dedicated to security purposes should not only be consumed by an encryption or hashing algorithm, but should also provide as many cryptographic functionalities as possible. Our main contribution is the design of a lightweight permutation employing a 4-subblock Type-2 Generalized Feistel-like Structure (GFS) and round-reduced unkeyed Simeck with either 48- or 64-bit block length as the two round functions, resulting in two lightweight instances of the permutation, sLiSCP-192 and sLiSCP-256. We leverage the extensive security analysis of both Simeck (Simon-like functions) and Type-2 GFSs and present bounds against differential and linear cryptanalysis. In particular, we provide an estimate of the maximum differential probability of the round-reduced Simeck and use it for bounding the maximum expected differential/linear characteristic probability for our permutation. Due to the iterated nature of the Simeck round function and the simple XOR and cyclic shift mixing layer of the GFS, which fosters the propagation of long trails, the long trail strategy is adopted to provide tighter bounds on both characteristics. Moreover, we analyze sLiSCP against a wide range of distinguishing attacks, and accordingly claim that there exist no structural distinguishers for sLiSCP with complexity below $2^{b/2}$, where $b$ is the state size. We demonstrate how sLiSCP can be used as a unified round function in the duplex sponge construction to build (authenticated) encryption and hashing functionalities.
The parallel hardware implementation area of the unified duplex mode of sLiSCP-192 (resp. sLiSCP-256) in CMOS $65\,nm$ ASIC is 2289 (resp. 3039) GEs with a throughput of 29.62 (resp. 44.44) kbps, and their areas in CMOS $130\, nm$ are 2498 (resp. 3319) GEs.
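To make the structure described above concrete, here is a schematic sketch of one step of a 4-subblock Type-2 GFS whose round function is a small unkeyed Simeck-like map. This is an illustration of the general construction only, not the sLiSCP permutation itself: the word size, rotation amounts, round count, constants, and test values are placeholder assumptions and do not correspond to the sLiSCP-192/256 parameters.

```python
# Schematic sketch only (not sLiSCP itself): one step of a 4-subblock Type-2 GFS
# whose round function is a small unkeyed Simeck-like map. Word size, rotations,
# round counts and constants are illustrative placeholders.

W = 16                                  # half of one subblock, in bits (toy value)
MASK = (1 << W) - 1

def rol(x, r):
    return ((x << r) | (x >> (W - r))) & MASK

def simeck_like(block, rounds=4):
    """Unkeyed Simeck-like function on a 2W-bit subblock, with toy round constants."""
    x, y = block >> W, block & MASK
    for rc in range(rounds):
        x, y = (y ^ (x & rol(x, 5)) ^ rol(x, 1) ^ rc) & MASK, x
    return (x << W) | y

def gfs_type2_step(x0, x1, x2, x3):
    """One Type-2 GFS step: apply F to two subblocks, XOR into their neighbours, rotate."""
    return x1 ^ simeck_like(x0), x2, x3 ^ simeck_like(x2), x0

# Iterating the step permutes the 4 * 2W = 128-bit toy state.
state = (0x01234567, 0x89ABCDEF, 0x0F1E2D3C, 0x4B5A6978)
for _ in range(6):
    state = gfs_type2_step(*state)
print([hex(s) for s in state])
```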
On the Tightness of Forward-Secure Signature Reductions
In this paper, we revisit the security of factoring-based signature schemes built via the Fiat-Shamir transform and show that they can admit tighter reductions to certain decisional complexity assumptions such as the quadratic-residuosity, the high-residuosity, and the $\phi$-hiding assumptions. We do so by proving that the underlying identification schemes used in these schemes are a particular case of the lossy identification notion recently introduced by Abdalla et al. at Eurocrypt 2012. Next, we show how to extend these results to the forward-security setting based on ideas from the Itkis-Reyzin forward-secure signature scheme. Unlike the original Itkis-Reyzin scheme, our construction can be instantiated under different decisional complexity assumptions and has a much tighter security reduction. Moreover, we also show that the tighter security reductions provided by our proof methodology can result in concrete efficiency gains in practice, both in the standard and forward-security setting, as long as the use of stronger security assumptions is deemed acceptable. Finally, we investigate the design of forward-secure signature schemes whose security reductions are fully tight.
A Quantum ``Magic Box'' for the Discrete Logarithm Problem
In their foundational paper on pseudorandom bit generation, Blum and Micali showed that the discrete logarithm problem could be solved efficiently given a ``magic box'' oracle that computes the most significant bit of the discrete logarithm with a slight advantage over guessing. This magic box can be realized on a quantum computer with a new, simplified variant of Shor's algorithm. The resulting combination of Blum and Micali's reduction and this new quantum magic box offers an intriguing hybrid approach to solving the discrete logarithm problem with a quantum computer. Because the only requirement on the quantum portion of the algorithm is that it provide an approximate estimate of a single bit of the discrete logarithm, the new algorithm may be easier to implement, more resilient to errors, and more amenable to optimization than previous approaches. Further analysis is needed to quantify the extent of these benefits in practice. The result applies to the discrete logarithm problem over both finite fields and elliptic curves. (The views expressed are my own and do not necessarily reflect those of my employer.)
Binary Hash Tree based Certificate Access Management
We present a certificate access management system to support the USDOT's proposed rule on Vehicle-to-Vehicle (V2V) communications, Federal Motor Vehicle Safety Standard (FMVSS) No. 150. Our proposal, which we call Binary Hash Tree based Certificate Access Management (BCAM), eliminates the need for vehicles to have bidirectional connectivity with the Security Credential Management System (SCMS) for certificate update. BCAM significantly improves the ability of the SCMS to manage large-scale software and/or hardware compromise events. Vehicles are provisioned at the start of their lifetime with all the certificates they will need. However, the certificates and corresponding private key reconstruction values are provided to the vehicle encrypted, and the keys to decrypt them are only made available to the vehicles shortly before the start of the validity periods of those certificates. Vehicles that are compromised can be effectively removed from the V2V system by preventing them from decrypting the certificates. We demonstrate that the system is feasible with a broadcast channel for decryption keys and other revocation information, even if that channel has a relatively low capacity.
Cryptanalysis of 22 1/2 rounds of Gimli
Bernstein et al. have proposed a new permutation, Gimli, which aims to provide simple and performant implementations on a wide variety of platforms. One of the tricks used to make Gimli performant is that it processes data mostly in 96-bit columns, only occasionally swapping 32-bit words between them.
Here we show that this trick is dangerous by presenting a distinguisher for reduced-round Gimli. Our distinguisher takes the form of an attack on a simple and practical PRF that should be nearly 192-bit secure. Gimli has 24 rounds. Against $15\frac12$ of those rounds, our distinguisher uses two known plaintexts, takes about $2^{64}$ time and uses enough memory for a set with $2^{64}$ elements. Against $19\frac12$ rounds, the same attack uses three non-adaptively chosen plaintexts, and uses twice as much memory and about $2^{128}$ time. Against $22\frac12$ rounds, it requires about $2^{138.5}$ work, $2^{129}$ bits of memory and $2^{10.5}$ non-adaptively chosen plaintexts. The same attack would apply to $23\frac12$ rounds if Gimli had more rounds.
Our attack does not use the structure of the SP-box at all, other than that it is invertible, so there may be room for improvement. On the bright side, our toy PRF puts keys and data in different positions than a typical sponge mode would do, so the attack might not work against sponge constructions.
Cryptanalysis of Compact-LWE
As an invited speaker at the ACISP 2017 conference, Dongxi Liu recently introduced a new lattice-based encryption scheme (joint work with Li, Kim and Nepal) designed for lightweight IoT applications, and announced plans to submit it to the NIST post-quantum competition. The new scheme is based on a variant of standard LWE called Compact-LWE, but is claimed to achieve high security levels in considerably smaller dimensions than usual lattice-based schemes. In fact, the proposed parameters, allegedly suitable for 138-bit security, involve the Compact-LWE assumption in dimension only 13.
In this note, we show that this particularly aggressive choice of parameters fails to achieve the stated security level. More precisely, we show that ciphertexts in the new encryption scheme can be decrypted using the public key alone with >99.9% probability in a fraction of a second on a standard PC, which is not quite as fast as legitimate decryption, but not too far off.
Dynamic Searchable Public-Key Ciphertexts with Fast Performance and Practical Security
Public-key encryption with keyword search (PEKS) allows a sender to generate keyword-searchable ciphertexts using a receiver’s public key and upload them to a server. Upon receiving a keyword-search trapdoor from the receiver, the server finds all matching ciphertexts. Due to the characteristics of public-key encryption, PEKS is inherently suitable for applications with numerous senders. Hence, PEKS is a well-known method to achieve secure keyword search over encrypted email systems. However, we find that even without a keyword-search trapdoor, the traditional concept of PEKS still gives the server an obvious advantage in distinguishing ciphertexts in practice. In other words, traditional PEKS cannot guarantee the well-recognized semantic security in practice. To solve this problem, this paper defines a new concept called dynamic searchable public-key encryption (DSPE). It can hide the relationships between keyword-searchable ciphertexts and their corresponding encrypted files, and guarantee semantic security in both theory and practice. In addition, it allows the server to delete the intended ciphertexts according to the receiver’s requirement. Then, we construct a DSPE instance with provable semantic security in the random oracle model. In terms of performance, the proposed instance also has the advantage that it only requires sublinear complexity to determine all matching ciphertexts or to delete the intended ciphertexts. Finally, we experimentally demonstrate the practicability of the instance.
Convolutional Neural Networks with Data Augmentation against Jitter-Based Countermeasures -- Profiling Attacks without Pre-Processing --
In the context of the security evaluation of cryptographic implementations, profiling attacks (aka Template Attacks) play a fundamental role. Nowadays the most popular Template Attack strategy consists in approximating the information leakages by Gaussian distributions. Nevertheless, this approach suffers from the difficulty of dealing with both trace misalignment and the high dimensionality of the data. This forces the attacker to perform critical preprocessing phases, such as the selection of points of interest and the realignment of measurements. Some software and hardware countermeasures have been conceived precisely to create such misalignment. In this paper we propose an end-to-end profiling attack strategy based on Convolutional Neural Networks: this strategy greatly facilitates the attack roadmap, since it requires neither a previous trace realignment nor a precise selection of points of interest. To significantly increase the performance of the CNN, we moreover propose to equip it with the data augmentation technique that is classical in other applications of Machine Learning. As a validation, we present several experiments against traces misaligned by different kinds of countermeasures, including the augmentation of the clock jitter effect in a secure hardware implementation on a modern chip. The excellent results achieved in these experiments show that the Convolutional Neural Network approach combined with data augmentation provides a very efficient alternative to state-of-the-art profiling attacks.
Secure Storage with Replication and Transparent Deduplication
We seek to answer the following question: To what extent can we deduplicate replicated storage? To answer this question, we design ReDup, a secure storage system that provides users with strong integrity, reliability, and transparency guarantees about data that is outsourced at cloud storage providers. Users store multiple replicas of their data at different storage servers, and the data at each storage server is deduplicated across users. Remote data integrity mechanisms are used to check the integrity of replicas. We consider a strong adversarial model, in which collusions are allowed between storage servers and also between storage servers and dishonest users of the system. A cloud storage provider (CSP) could store fewer replicas than agreed upon by contract, unbeknownst to honest users. ReDup defends against such adversaries by making replica generation time-consuming, so that a dishonest CSP cannot generate replicas on the fly when challenged by the users.
In addition, ReDup employs transparent deduplication, which means that users get a proof attesting to the deduplication level used for their files at each replica server, and thus are able to benefit from the storage savings provided by deduplication. The proof is obtained by aggregating individual proofs from replica servers, and has a constant size regardless of the number of replica servers. Our solution scales better than the state of the art and is provably secure under standard assumptions.
Private Intersection-Sum Protocol with Applications to Attributing Aggregate Ad Conversions
In this work, we consider the Intersection-Sum problem: two parties hold datasets containing user identifiers, and the second party additionally has an integer value associated with each user identifier. The parties want to learn the number of users they have in common, and the sum of the associated integer values, but “nothing more”. We present a novel protocol tackling this problem using Diffie-Hellman style Private Set Intersection techniques together with Paillier homomorphic encryption. We prove security of our protocol in the honest-but-curious model. We also discuss applications for the protocol for attributing aggregate ad conversions. Finally, we present a variant of the protocol, which allows aborting if the intersection is too small, in which case neither party learns the intersection-sum.
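As a rough illustration of the matching step only (a toy sketch, not the paper's protocol): the Diffie-Hellman-style technique lets the two parties learn which masked identifiers coincide, because a common identifier ends up mapped to the same doubly-masked value $H(id)^{ab}$. The group, hash-to-group mapping, and identifiers below are placeholder assumptions; the actual protocol additionally attaches Paillier ciphertexts of the associated values so that only the intersection-sum, not the intersection itself, is revealed.

```python
# Toy sketch of DDH-style double masking for the matching step. Group parameters,
# hashing and identifiers are illustrative only (and far too small to be secure).
# The real protocol also attaches additively homomorphic (Paillier) ciphertexts of
# the values so that only the sum over the intersection is learned.
import hashlib
import secrets

p = (1 << 127) - 1          # a Mersenne prime, used here only as a toy modulus
g = 3

def hash_to_group(identifier: str) -> int:
    """Map an identifier into the group by hashing it to an exponent for g."""
    e = int.from_bytes(hashlib.sha256(identifier.encode()).digest(), "big")
    return pow(g, e, p)

ids_a = ["alice@example.com", "bob@example.com", "carol@example.com"]
ids_b = ["bob@example.com", "dave@example.com", "carol@example.com"]

a = secrets.randbelow(p - 2) + 1        # party A's secret exponent
b = secrets.randbelow(p - 2) + 1        # party B's secret exponent

# Round 1: each party sends its identifiers masked with its own exponent
# (shuffled in the real protocol).
masked_a = [pow(hash_to_group(x), a, p) for x in ids_a]
masked_b = [pow(hash_to_group(x), b, p) for x in ids_b]

# Round 2: each party re-masks the other's elements, so a common identifier
# collides on the value H(id)^(a*b) regardless of the order of masking.
double_a = {pow(x, b, p) for x in masked_a}    # computed by B
double_b = {pow(x, a, p) for x in masked_b}    # computed by A

print("intersection size:", len(double_a & double_b))   # -> 2
```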
SecReach: Secure Reachability Computation on Encrypted Location Check-in Data
Reachability, which answers whether one person is reachable from another through a sequence of contacts within a period of time, is of great importance in many domains such as social behavior analysis. Recently, with the prevalence of various location-based services (LBSs), a great amount of spatiotemporal location check-in data is generated by individual GPS-equipped mobile devices and collected by LBS companies, which stimulates research on reachability queries in these location check-in datasets. Meanwhile, a growing trend is for LBS companies to use scalable and cost-effective clouds to collect, store, and analyze data, which makes it necessary to encrypt location check-in data before outsourcing due to privacy concerns. In this paper, for the first time, we propose a scheme, SecReach, to securely evaluate reachability queries on encrypted location check-in data by using somewhat homomorphic encryption (SWHE). We prove that our scheme is secure against a semi-honest cloud server. We also present a proof-of-concept implementation using the state-of-the-art SWHE library (i.e., HElib), which shows the efficiency and feasibility of our scheme.
SGX Remote Attestation is not Sufficient
Intel SGX enclaves provide hardware-enforced confidentiality and integrity guarantees for running pure computations (i.e., OS-level side-effect-free code) in the cloud environment. In addition, SGX remote attestation enables enclaves to prove that a claimed enclave is indeed running inside genuine SGX hardware and not some (adversary-controlled) SGX simulator.
Since cryptographic protocols do not compose well, especially when run concurrently, SGX remote attestation is only a necessary pre-condition for securely instantiating an enclave. In practice, one needs to analyze all the different interacting enclaves as a \textit{single protocol} and make sure that no sub-computation of the protocol can be simulated outside of the enclave. In this paper we describe protocol design problems under (a) sequential-composition, (b) concurrent-composition, and (c) enclave state malleability that must be taken into account while designing new enclaves. We analyze Intel provided EPID \textsf{Provisioning} and \textsf{Quoting} enclave and report our (largely positive) findings. We also provide details about how SGX uses EPID Group Signatures and report (largely negative) results about claimed anonymity guarantees.
Faster Bootstrapping with Multiple Addends
As an important cryptographic primitive in cloud computing and outsourced computation, fully homomorphic encryption (FHE) is an active area of modern cryptography. However, the efficiency of FHE has been a bottleneck impeding its application. According to Gentry's blueprint, bootstrapping, which is used to reduce ciphertext noise, is the most important process in FHE. However, bootstrapping is also the most expensive process affecting the efficiency of the whole system. Firstly, we notice that hundreds of serial homomorphic additions take up most of the time of bootstrapping. We make use of the properties of Boolean circuits to reduce the number of serial homomorphic additions by two-thirds, and thus construct an efficient FHE scheme with bootstrapping in 10ms. Secondly, the most expensive parts of our bootstrapping, EHCM and the addition of multiple matrices, can be accelerated in parallel, which may speed up bootstrapping further. Finally, we find a more efficient combination of parameters. As a result, our scheme reaches a 128-bit security level with improved correctness, compared with the TFHE scheme from ASIACRYPT 2016. Experiments show that the running time of our bootstrapping is 10ms, which is only 52 percent of that of TFHE, and is less than that of CGGI17.
Round Optimal Concurrent Non-Malleability from Polynomial Hardness
Non-malleable commitments are a central cryptographic primitive that guarantee security against man-in-the-middle adversaries, and their exact round complexity has been a subject of great interest. Pass (TCC 2013, CC 2016) proved that non-malleable commitments with respect to commitment are impossible to construct in less than three rounds, via black-box reductions to polynomial hardness assumptions. Obtaining a matching positive result has remained an open problem so far.
While three-round constructions of non-malleable commitments have been achieved, beginning with the work of Goyal, Pandey and Richelson (STOC 2016), current constructions require super-polynomial assumptions.
In this work, we settle the question of whether three-round non-malleable commitments can be based on polynomial hardness assumptions. We give constructions based on the polynomial hardness of the Decisional Diffie-Hellman, Quadratic Residuosity, or Nth Residuosity assumptions, together with ZAPs. Our protocols also satisfy concurrent non-malleability.
Decoding Generalized Reed-Solomon Codes and Its Application to RLCE Encryption Scheme
This paper compares the efficiency of various algorithms for implementing the public key encryption scheme RLCE on 64-bit CPUs. By optimizing various algorithms for polynomial and matrix operations over finite fields, we obtained several interesting (or even surprising) results. For example, it is well known (e.g., Moenck 1976) that Karatsuba's algorithm outperforms the classical polynomial multiplication algorithm from degree 15 and above (in practice, Karatsuba's algorithm only outperforms the classical polynomial multiplication algorithm from degree 35 and above). Our experiments show that the 64-bit optimized Karatsuba's algorithm only outperforms the 64-bit optimized classical polynomial multiplication algorithm for polynomials of degree 115 and above over the finite field $GF(2^{10})$. The second interesting (surprising) result shows that the 64-bit optimized Chien's search algorithm outperforms all other 64-bit optimized polynomial root finding algorithms, such as BTA and FFT, for polynomials of all degrees over the finite field $GF(2^{10})$. The third interesting (surprising) result shows that the 64-bit optimized Strassen matrix multiplication algorithm only outperforms the 64-bit optimized classical matrix multiplication algorithm for matrices of dimension 750 and above over the finite field $GF(2^{10})$. It should be noted that the existing literature and practice recommend Strassen's matrix multiplication algorithm for matrices of dimension 40 and above. All experiments were done on a 64-bit MacBook Pro with an i7 CPU using a single thread. The reported results should be applicable to CPUs with 64-bit or larger words. For CPUs with 32-bit or smaller words, these results may not be applicable. The source code and library for the algorithms covered in this paper will be available at http://quantumca.org/.
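For readers unfamiliar with the recursion whose crossover point is being measured above, the sketch below is a generic Karatsuba polynomial multiplication over coefficient lists. It is an illustration only: the coefficients live in a toy prime field rather than $GF(2^{10})$, and none of the 64-bit packing or optimizations discussed in the paper appear here.

```python
# Generic Karatsuba sketch over coefficient lists (lowest degree first), with a
# toy prime modulus instead of GF(2^10) and none of the paper's 64-bit optimizations.
import random

q = 1021  # toy prime modulus

def schoolbook(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % q
    return out

def add(a, b):
    n = max(len(a), len(b))
    return [((a[i] if i < len(a) else 0) + (b[i] if i < len(b) else 0)) % q
            for i in range(n)]

def sub(a, b):
    n = max(len(a), len(b))
    return [((a[i] if i < len(a) else 0) - (b[i] if i < len(b) else 0)) % q
            for i in range(n)]

def karatsuba(a, b, cutoff=8):
    """a*b via Karatsuba's three-multiplication recursion; schoolbook below the cutoff."""
    if min(len(a), len(b)) <= cutoff:
        return schoolbook(a, b)
    m = max(len(a), len(b)) // 2
    a0, a1 = a[:m], a[m:]                 # a = a0 + x^m * a1
    b0, b1 = b[:m], b[m:]
    z0 = karatsuba(a0, b0, cutoff)                                   # a0*b0
    z2 = karatsuba(a1, b1, cutoff)                                   # a1*b1
    z1 = sub(sub(karatsuba(add(a0, a1), add(b0, b1), cutoff), z0), z2)
    out = [0] * (len(a) + len(b) - 1)
    for shift, z in ((0, z0), (m, z1), (2 * m, z2)):
        for i, c in enumerate(z):
            out[i + shift] = (out[i + shift] + c) % q
    return out

a = [random.randrange(q) for _ in range(120)]
b = [random.randrange(q) for _ in range(120)]
assert karatsuba(a, b) == schoolbook(a, b)   # same result, fewer multiplications
```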
Privacy-Preserving Ridge Regression Without Garbled Circuits
Ridge regression is an algorithm that takes as input a large number of data points and finds the best-fit linear curve through these points. It is a building block for many machine-learning operations. This report presents a system for privacy-preserving ridge regression. The system outputs the best-fit curve in the clear, but exposes no other information about the input data.
This problem was elegantly addressed by Nikolaenko et al. (S\&P 2013). They suggest an approach that combines homomorphic encryption and Yao garbled circuits. The solution presented in this report only involves homomorphic encryption. This improves the performance as Yao circuits were the main bottleneck in the previous solution.
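For context, the plaintext computation that such a system evaluates under encryption is the standard ridge regression closed form, $w = (X^{T}X + \lambda I)^{-1}X^{T}y$. The sketch below computes it in the clear on synthetic data; it is not the privacy-preserving protocol itself, and the data and parameters are illustrative.

```python
# The plaintext target computation: closed-form ridge regression
#   w = (X^T X + lambda * I)^(-1) X^T y
# shown here in the clear on synthetic data; the protocols above compute the same
# quantity over encrypted inputs without revealing them.
import numpy as np

rng = np.random.default_rng(0)
n, d, lam = 200, 5, 0.1

X = rng.normal(size=(n, d))
true_w = rng.normal(size=d)
y = X @ true_w + 0.01 * rng.normal(size=n)

A = X.T @ X + lam * np.eye(d)     # regularized normal equations
b = X.T @ y
w = np.linalg.solve(A, b)

print(np.round(w, 3))             # close to true_w for small noise and mild regularization
```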
Revisiting Difficulty Control for Blockchain Systems
The Bitcoin whitepaper states that the security of the system is guaranteed as long as honest miners control more than half of the total computational power. The whitepaper assumes a static difficulty, so that it is equally hard to solve a cryptographic proof-of-work puzzle at any given moment of the system's history. However, the real Bitcoin network uses an adaptive difficulty adjustment mechanism.
In this paper we introduce and analyze a new kind of attack on the mining difficulty retargeting function used in Bitcoin. A malicious miner increases his mining profits through this attack, which we name the coin-hopping attack, and, as a side effect, the average delay between blocks increases.
We propose an alternative difficulty adjustment algorithm in order to reduce the incentive to perform coin-hopping and to improve the stability of inter-block delays. Finally, we evaluate the presented approach and show that the novel algorithm performs better than the original Bitcoin algorithm.
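For background, the retargeting rule that such attacks exploit can be sketched as follows (a simplified, Bitcoin-style version with illustrative constants; the paper's proposed alternative algorithm is not reproduced here): every epoch, the difficulty is rescaled by the ratio of the intended epoch duration to the observed one, clamped to a factor of four.

```python
# Simplified Bitcoin-style difficulty retargeting sketch. Constants are illustrative;
# the alternative algorithm proposed in the paper is not reproduced here.

TARGET_BLOCK_TIME = 600             # seconds (10 minutes)
EPOCH_LENGTH = 2016                 # blocks per retargeting epoch

def retarget(difficulty: float, observed_epoch_seconds: float) -> float:
    expected = TARGET_BLOCK_TIME * EPOCH_LENGTH
    ratio = expected / observed_epoch_seconds
    ratio = max(0.25, min(4.0, ratio))      # clamp the adjustment, as Bitcoin does
    return difficulty * ratio

# Example: the last epoch was mined 20% faster than intended, so difficulty rises by 25%.
print(retarget(1.0e12, 0.8 * TARGET_BLOCK_TIME * EPOCH_LENGTH))
```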
Second Order Statistical Behavior of LLL and BKZ
The LLL algorithm (from Lenstra, Lenstra and Lovász) and its generalization BKZ (from Schnorr and Euchner) are widely used in cryptanalysis, especially for lattice-based cryptography.
Precisely understanding their behavior is crucial for deriving appropriate key sizes for cryptographic schemes subject to lattice-reduction attacks. Current models, e.g. the Geometric Series Assumption and Chen-Nguyen's BKZ simulator, have provided a decent first-order analysis of the behavior of LLL and BKZ. However, they only focus on the average behavior and are not perfectly accurate. In this work, we initiate a second-order analysis of this behavior. We confirm and quantify discrepancies between models and experiments ---in particular in the head and tail regions--- and study their consequences. We also provide statistics on variations around the mean and on correlations, and study their impact. While mostly based on experiments, by pointing at and quantifying unaccounted-for phenomena, our study sets the ground for a theoretical and predictive understanding of LLL and BKZ performance at the second order.
Success Probability of Multiple/Multidimensional Linear Cryptanalysis Under General Key Randomisation Hypotheses
This work considers statistical analysis of attacks on block ciphers using several linear approximations. A general and unified approach is adopted. To this end, the general key randomisation hypotheses for multidimensional and multiple linear cryptanalysis are introduced. Expressions for the success probability in terms of the data complexity and the advantage are obtained using the general key randomisation hypotheses for both multidimensional and multiple linear cryptanalysis, and under settings where the plaintexts are sampled with or without replacement. Particularising to standard/adjusted key randomisation hypotheses gives rise to success probabilities in 16 different cases, of which expressions for success probabilities had previously been reported in only five cases. Even in these five cases, the expressions for success probabilities that we obtain are more general than what was previously obtained. A crucial step in the analysis is the derivation of the distributions of the underlying test statistics. While we carry out the analysis formally to the extent possible, there are certain inherently heuristic assumptions that need to be made. In contrast to previous works, which have implicitly made such assumptions, we carefully highlight these and discuss why they are unavoidable. Finally, we provide a complete characterisation of the dependence of the success probability on the data complexity.
Evidence-Based Trust Mechanism Using Clustering Algorithms for Distributed Storage Systems
In distributed storage systems, documents are shared among multiple Cloud providers and stored within their respective storage servers. In social secret sharing-based distributed storage systems, shares of the documents are allocated according to the trustworthiness of the storage servers. This paper proposes a trust mechanism using machine learning techniques to compute evidence-based trust values. Our mechanism mitigates the effect of colluding storage servers. More precisely, it becomes possible to detect unreliable evidence and establish countermeasures in order to discourage the collusion of storage servers. Furthermore, this trust mechanism is applied to the social secret sharing protocol AS$^3$, showing that this new evidence-based trust mechanism enhances the protection of the stored documents.
A note on the implementation of the Number Theoretic Transform
The Number Theoretic Transform (NTT) is the time-critical function required by cryptographic protocols based on the Ring Learning With Errors problem (RLWE), a popular choice for post-quantum cryptography.
Here we apply a simple methodology to convert the NTT and its inverse from a mathematically correct (but side-channel vulnerable) description, to an efficient constant-time side-channel resistant version.
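As a minimal illustration of the transform in question (toy parameters, and without the constant-time hardening that is the subject of the note): a radix-2 NTT and its inverse over the integers mod 17, with $n=8$ and $\omega=9$, a primitive 8th root of unity. A side-channel resistant version must additionally avoid secret-dependent branches and table lookups, which this recursive sketch does not attempt.

```python
# Minimal radix-2 NTT sketch with toy parameters (q = 17, n = 8, omega = 9, a
# primitive 8th root of unity mod 17). A deployed side-channel resistant version
# must also avoid secret-dependent branches and table lookups, which this
# straightforward recursive form does not address.

q = 17
omega = 9            # order 8 modulo 17
omega_inv = 2        # 9 * 2 = 18 = 1 (mod 17)
n_inv = 15           # 8 * 15 = 120 = 1 (mod 17)

def ntt(a, w):
    """Forward transform: hat(a)[k] = sum_j a[j] * w^(j*k) mod q, via Cooley-Tukey."""
    if len(a) == 1:
        return a[:]
    even = ntt(a[0::2], w * w % q)
    odd = ntt(a[1::2], w * w % q)
    half = len(a) // 2
    out = [0] * len(a)
    wk = 1
    for k in range(half):
        t = wk * odd[k] % q
        out[k] = (even[k] + t) % q
        out[k + half] = (even[k] - t) % q
        wk = wk * w % q
    return out

def intt(a):
    return [x * n_inv % q for x in ntt(a, omega_inv)]

coeffs = [3, 1, 4, 1, 5, 9, 2, 6]
assert intt(ntt(coeffs, omega)) == coeffs
print(ntt(coeffs, omega))
```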
Proposal of primitive polynomials for Linux kernel PRNG
The polynomials defining the LFSRs of the Linux kernel PRNG are irreducible but not primitive. As a result, the space of numbers generated by these LFSRs does not fill the whole space. In this paper we propose primitive polynomials which increase the space of random numbers generated by these LFSRs by a factor of 3. Neither the polynomials used in the current implementation of the PRNG nor the observations presented here lead to a practical attack on the PRNG.
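A tiny illustration of the irreducible-versus-primitive distinction at stake (using degree-4 toy polynomials, not the kernel's actual LFSR polynomials): the period of a Galois LFSR equals the multiplicative order of $x$ modulo its feedback polynomial, and is maximal, $2^n - 1$, exactly when that polynomial is primitive.

```python
# Toy illustration of irreducible vs. primitive feedback polynomials (degree-4
# examples, NOT the Linux kernel's actual LFSR polynomials). The LFSR period is
# the multiplicative order of x modulo the feedback polynomial.

def lfsr_period(poly: int, degree: int) -> int:
    """Order of x modulo the GF(2) polynomial `poly` (bit i = coefficient of x^i)."""
    state, steps = 1, 0
    while True:
        state <<= 1                     # multiply the current polynomial by x
        if (state >> degree) & 1:
            state ^= poly               # reduce modulo the feedback polynomial
        steps += 1
        if state == 1:
            return steps

print(lfsr_period(0b11111, 4))   # x^4+x^3+x^2+x+1: irreducible, not primitive -> 5
print(lfsr_period(0b10011, 4))   # x^4+x+1: primitive -> maximal period 2^4 - 1 = 15
```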
AS$^3$: Adaptive Social Secret Sharing for Distributed Storage Systems
Distributed storage allows outsourcing a document to the cloud such that multiple users can easily access the file. The protection of the stored document relies on secret sharing, which generates and distributes shares of the document to the storage servers. However, the users have to trust that a sufficient number of storage servers behave honestly and neither lose (retrievability) nor reveal (confidentiality) the document. To address this, so-called social secret sharing schemes were developed that allow the distribution of shares to be adjusted according to the experience made with the involved storage servers. In this work, we provide a framework called AS$^3$ that allows building social secret sharing schemes based on dynamic secret sharing. The resulting protocol has more freedom in adjusting the parameters of the share distribution and therefore leads to more efficient and accurate solutions as well as optimal storage consumption. Furthermore, we provide measures to detect and prevent the document being lost or accidentally revealed to individual storage servers. We also demonstrate how to compute trust values for storage servers, how to initialize trust values for newcomers, and provide a proof-of-concept implementation.
Dynamic and Verifiable Hierarchical Secret Sharing
In this work we provide a framework for dynamic secret sharing and present the first dynamic and verifiable hierarchical secret sharing scheme based on Birkhoff interpolation. Since the scheme is dynamic, it allows shareholders to be added and removed, shares to be renewed, and the conditions for accessing the message to be modified, all without reconstructing the distributed message. Furthermore, each shareholder can verify the share it receives during these algorithms, protecting itself against malicious dealers and shareholders. While these algorithms were already available for classical secret sharing based on Lagrange interpolation, corresponding techniques for schemes based on Birkhoff interpolation were missing. Note that Birkhoff interpolation is currently the only technique available that allows constructing hierarchical secret sharing schemes that are efficient and provide shares of equal size for all shareholders in the hierarchy. Thus, our scheme is an important contribution to hierarchical secret sharing.
An Equivalence Between Attribute-Based Signatures and Homomorphic Signatures, and New Constructions for Both
In Attribute-Based Signatures (ABS; first defined by Maji, Prabhakaran and Rosulek, CT-RSA 2011) an authority can generate multiple signing keys, where each key is associated with a constraint $f$. A key respective to $f$ can sign a message $x$ only if $f(x) = 0$. The security requirements are unforgeability and key privacy (signatures should not expose the specific signing key used). In Homomorphic Signatures (HS; first defined by Boneh and Freeman, PKC 2011), given a signature for a data-set $x$, one can evaluate a signature for the pair $(f(x),f)$, for functions $f$. In context-hiding HS, evaluated signatures do not reveal information about the pre-evaluated signature.
In this work we start by showing that these two notions are in fact equivalent. The first implication of this equivalence is a new lattice-based ABS scheme for polynomial-depth circuits, based on the HS construction of Gorbunov, Vaikuntanathan and Wichs (GVW; STOC 2015).
We then construct a new ABS candidate from a worst case lattice assumption (SIS), with different parameters. Using our equivalence again, now in the opposite direction, our new ABS implies a new lattice-based HS scheme with different parameter trade-off, compared to the aforementioned GVW.
A Simpler Rate-Optimal CPIR Protocol
In PETS 2015, Kiayias, Leonardos, Lipmaa, Pavlyk, and Tang proposed the first $(n, 1)$-CPIR protocol with rate $1 - o (1)$. They use advanced techniques from multivariable calculus (like the Newton-Puiseux algorithm) to establish optimal rate among a large family of different CPIR protocols. It is only natural to ask whether one can achieve similar rate but with a much simpler analysis. We propose parameters to the earlier $(n, 1)$-CPIR protocol of Lipmaa (ISC 2005), obtaining a CPIR protocol that is asymptotically almost as communication-efficient as the protocol of Kiayias et al. However, for many relevant parameter choices, it is slightly more communication-efficient, due to the cumulative rounding errors present in the protocol of Kiayias et al. Moreover, the new CPIR protocol is simpler to understand, implement, and analyze. The new CPIR protocol can be used to implement (computationally inefficient) FHE with rate $1 - o (1)$.
On Making U2F Protocol Leakage-Resilient via Re-keying
The Universal 2nd Factor (U2F) protocol is an open authentication standard that strengthens the two-factor authentication process. It augments the existing password-based infrastructure by using a specialized USB device, termed the U2F authenticator, as the 2nd factor. The U2F authenticator is assigned two fixed keys at the time of manufacture, namely the device secret key and the attestation private key. These secret keys are later used by the U2F authenticator during the Registration phase to encrypt and digitally sign data that helps in proper validation of the user and the web server. However, the use of fixed keys for this processing leaks information about both secrets through side channels.
In this work we show why the U2F protocol is not secure against side channel attacks (SCA). We then present a countermeasure for the SCA based on a re-keying technique that prevents the repeated use of the device secret key for encryption and signing. We also recommend a modification to the existing U2F protocol to minimise the effect of signing with the fixed attestation private key. Incorporating our proposed countermeasure and recommended modification, we then present a new variant of the U2F protocol that has improved security guarantees. We also briefly explain how the side channel attacks on the U2F protocol and the corresponding proposed countermeasures similarly apply to the Universal Authentication Framework (UAF) protocol.
Computing Low-Weight Discrete Logarithms
We propose some new baby-step giant-step algorithms for computing "low-weight" discrete logarithms; that is, for computing discrete logarithms in which the radix-b representation of the exponent is known to have only a small number of nonzero digits. Prior to this work, such algorithms had been proposed for the case where the exponent is known to have low Hamming weight (i.e., the radix-2 case). Our new algorithms (i) improve the best-known deterministic complexity for the radix-2 case, and then (ii) generalize from radix-2 to arbitrary radixes b>1. We also discuss how our new algorithms can be used to attack several recent Verifier-based Password Authenticated Key Exchange (VPAKE) protocols from the cryptographic literature with the conclusion that the new algorithms render those constructions completely insecure in practice.
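For context, the sketch below is the generic (full-exponent) baby-step giant-step algorithm that the low-weight variants above refine; it solves $g^x = h \bmod p$ in roughly $\sqrt{\mathrm{ord}(g)}$ group operations. Parameters are toys, and none of the paper's radix-$b$ low-weight optimizations appear here.

```python
# Generic baby-step giant-step sketch (toy parameters; the paper's low-weight
# radix-b refinements are not reproduced here). Solves g^x = h (mod p).
from math import isqrt

def bsgs(g, h, p, order):
    m = isqrt(order) + 1
    baby = {}                                  # g^j -> j  for j = 0 .. m-1
    cur = 1
    for j in range(m):
        baby.setdefault(cur, j)
        cur = cur * g % p
    giant = pow(g, -m, p)                      # g^(-m) mod p (Python 3.8+)
    gamma = h % p
    for i in range(m):
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * giant % p
    return None

p, g = 101, 2           # 2 generates the full multiplicative group mod 101 (order 100)
x = 57
h = pow(g, x, p)
assert bsgs(g, h, p, 100) == x
print(bsgs(g, h, p, 100))
```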
Efficient Proactive Secret Sharing
The secure storage of long-lived sensitive data is constantly growing in relevance due to the ever-increasing digitization of documents. One very important challenge in this research field is to provide confidentiality for the stored data even in the long term. The only known approach to achieve this, as required, for instance, for medical records, is to use proactive secret sharing. However, all currently known schemes suffer from being inefficient: they require information-theoretically secure communication channels between any two shareholders and between the client and each shareholder, and they come with a high communication complexity. This work therefore addresses the scenario where only a subset of the servers holding shares is connected via private channels. Furthermore, it is sufficient if there is only one private channel between the client and one shareholder. In addition to improving practicability, the presented proactive secret sharing solution, called EPSS, performs data aggregation to provide an efficient solution with respect to communication complexity. Nevertheless, it still provides unconditional confidentiality for the data at rest and against external attackers eavesdropping on the communication channels.
Conditionally Secure Secrecy Computation using Secret Sharing Scheme for n<2k-1 (full paper)
Typically, when secrecy multiplication is performed in multiparty computation using Shamir's (k,n) threshold secret sharing scheme, the result is a polynomial of degree 2k-2 instead of k-1. This causes a problem: in order to reconstruct a multiplication result, the number of shares needed increases from k to 2k-1. Shingu et al. proposed a method to solve the degree-increase problem by using the (scalar value×polynomial) approach instead of the typical (polynomial×polynomial) one. However, this method is not secure when a combined operation, such as a product-sum operation, is performed. In this paper, we propose a multiparty computation that uses a secret sharing scheme that is secure against a product-sum operation but does not increase the degree of the output polynomial. We prove that all combinations of the basic operations (addition, subtraction, multiplication, and division) can be performed securely using this scheme. We also propose three preconditions and finally show that our proposed method is information-theoretically secure against a passive adversary.
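The degree-growth issue described above can be seen directly in a few lines (a toy sketch with illustrative parameters, unrelated to the proposed scheme): multiplying Shamir shares point-wise produces a sharing by a polynomial of degree $2k-2$, so $k$ shares no longer suffice and $2k-1$ are needed to recover the product.

```python
# Toy demonstration of the degree-growth problem: point-wise multiplication of
# Shamir shares gives a degree-(2k-2) sharing, so 2k-1 shares are needed to
# reconstruct the product. Parameters are illustrative only.
import random

p = 2003                                  # toy prime field
k, n = 3, 7                               # threshold k, number of shareholders n

def share(secret):
    coeffs = [secret] + [random.randrange(p) for _ in range(k - 1)]
    return {x: sum(c * pow(x, i, p) for i, c in enumerate(coeffs)) % p
            for x in range(1, n + 1)}

def reconstruct(shares):
    """Lagrange interpolation at 0 from the given {x: y} points."""
    total = 0
    for xi, yi in shares.items():
        num = den = 1
        for xj in shares:
            if xj != xi:
                num = num * (-xj) % p
                den = den * (xi - xj) % p
        total = (total + yi * num * pow(den, -1, p)) % p
    return total

a_shares, b_shares = share(123), share(456)
prod_shares = {x: a_shares[x] * b_shares[x] % p for x in a_shares}

first = lambda shares, t: {x: shares[x] for x in sorted(shares)[:t]}
print(reconstruct(first(prod_shares, k)))          # k shares: generally wrong
print(reconstruct(first(prod_shares, 2 * k - 1)))  # 2k-1 shares: 123*456 mod p = 4
```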
Fault Attacks on XEX Mode with Application to certain Authenticated Encryption Modes
The XOR-Encrypt-XOR (XEX) block cipher mode was introduced by Rogaway in 2004. XEX mode uses nonce-based secret masks $(L)$ that are distinct for each message. The existence of secret masks in XEX mode prevents the application of conventional fault attack techniques, such as differential fault analysis. This work investigates other types of fault attacks against XEX mode that either eliminate the effect of the secret masks or retrieve their values. Either of these outcomes enables existing fault attack techniques to then be applied to recover the secret key. To estimate the success rate and feasibility, we ran simulations of ciphertext-only fault attacks against 128-bit AES in XEX mode. The paper also discusses the relevance of the proposed fault attacks to certain authenticated encryption modes based on XEX, such as OCB2, OTR, COPA, SHELL and ElmD. Finally, we suggest effective countermeasures to provide resistance to these fault attacks.
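For reference, the masking in question is the standard XEX construction from Rogaway's 2004 paper (restated here only as background): block $i$ of a message with nonce $N$ is processed as $C_i = E_K(P_i \oplus \Delta_i) \oplus \Delta_i$, where $\Delta_i = \alpha^i L$, $L = E_K(N)$, and $\alpha$ is a fixed constant of $GF(2^{128})$. A fault that cancels or discloses the masks $\Delta_i$ reduces the mode to plain ECB-style encryption of the affected blocks, at which point classical differential fault analysis becomes applicable.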
Anonymous Post-Quantum Cryptocash
In this paper, we construct an anonymous and decentralized cryptocash protocol which is secure in the quantum computation model. To achieve this, a linkable ring signature based on ideal lattices is proposed. The size of a signature in our scheme is O(log N), where N is the number of participants in the ring. The framework of our cryptocash system follows that of CryptoNote, with some modifications. By adopting the logarithmic-size quantum-resistant linkable ring signature scheme, our protocol is efficient and anonymous. We also introduce how to generate the verifying and signing key pairs of the linkable ring signature temporarily. With these techniques, both the sender's and the receiver's privacy in transactions are protected even though the transactions are published in the public ledger.
Privacy-Preserving Deep Learning via Additively Homomorphic Encryption
We build a privacy-preserving deep learning system in which many learning participants perform neural network-based deep learning over a combined dataset of all, without actually revealing the participants' local data. To that end, we revisit the previous work by Shokri and Shmatikov (ACM CCS 2015) and point out that local data information may actually be leaked to an honest-but-curious server. We then move on to fix that problem by building an enhanced system with the following properties: (1) no information is leaked to the server; and (2) accuracy is kept intact, compared to that of the ordinary deep learning system also over the combined dataset.
Our system is a bridge between deep learning and cryptography: we utilise stochastic gradient descent (SGD) applied to neural networks, in combination with additively homomorphic encryption. We show that our usage of encryption adds tolerable overhead to the ordinary deep learning system.
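A minimal sketch of the additive-aggregation idea, under stated assumptions (a toy textbook Paillier with insecure parameter sizes; the encoding of real-valued gradients and the system details are illustrative, not the paper's construction): each participant encrypts its integer-scaled update, the server multiplies ciphertexts to add the underlying plaintexts without decrypting, and only the aggregate is ever decrypted.

```python
# Toy sketch of additively homomorphic gradient aggregation with textbook Paillier.
# Parameter sizes are insecure and the gradient encoding is illustrative only.
import math
import random

# --- toy Paillier key generation ---
p_, q_ = 2357, 2551                    # tiny primes, for illustration only
n = p_ * q_
n2 = n * n
lam = math.lcm(p_ - 1, q_ - 1)
mu = pow(lam, -1, n)                   # valid because we use g = n + 1

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return pow(n + 1, m, n2) * pow(r, n, n2) % n2

def decrypt(c):
    u = pow(c, lam, n2)
    return (u - 1) // n * mu % n

def he_add(c1, c2):
    return c1 * c2 % n2                # product of ciphertexts = sum of plaintexts

# Three participants' quantized updates for one weight (scaled integers, negatives mod n).
updates = [127, -45 % n, 310]
aggregate = encrypt(0)
for u in updates:
    aggregate = he_add(aggregate, encrypt(u))      # the server never sees plaintexts

print(decrypt(aggregate))              # -> 392, i.e. 127 - 45 + 310
```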
The Edited Truth
We introduce two new cryptographic notions in the realm of public and symmetric key encryption.
Encryption with invisible edits is an encryption scheme with two tiers of users:
"privileged" and "unprivileged". Privileged users know a key pair $(pk, sk)$ and "unprivileged" users know a key pair $(pk_e, sk_e)$ which is associated with an underlying edit $e$ to be applied to messages encrypted. Each key pair on its own works exactly as in standard public-key encryption, but when an unprivileged user attempts to decrypt a ciphertext generated by a privileged user of an underlying plaintext $m$, it will be decrypted to an edited $m' = Edit(m,e)$. Here, $Edit$ is some supported edit function and $e$ is a description of the particular edit to be applied. For example, we might want the edit to overwrite several sensitive blocks of data, replace all occurrences of one word with a different word, airbrush an encrypted image, etc. A user shouldn't be able to tell whether he's an unprivileged or a privileged user.
An encryption with deniable edits is an encryption scheme which allows a user who owns a ciphertext $c$ encrypting a large corpus of data $m$ under a secret key $sk$, to generate an alternative but legitimate looking secret key $sk_{e,c}$ that decrypts $c$ to an "edited" version of the data $m'=Edit(m,e)$. This generalizes classical receiver deniable encryption, which can be viewed as a special case of deniable edits where the edit function performs a complete replacement of the original data. The new flexibility allows us to design solutions with much smaller key sizes than required in classical receiver deniable encryption, and in particular allows the key size to only scale with the description size of the edit $e$ which can be much smaller than the size of the plaintext data $m$.
We construct encryption schemes with deniable and invisible edits for any polynomial-time computable edit function under minimal assumptions: in the public-key setting we only require the existence of standard public-key encryption and in the symmetric-key setting we only require the existence of one-way functions.
The solutions to both problems use common ideas; however, there is a significant conceptual difference between deniable edits and invisible edits.
Whereas encryption with deniable edits enables a user to modify the meaning of a single ciphertext in hindsight, the goal of encryption with invisible edits is to enable ongoing modifications of multiple ciphertexts.
More is Less: On the End-to-End Security of Group Chats in Signal, WhatsApp, and Threema
Secure instant messaging is utilized in two variants: one-to-one communication and group communication. While the first variant has received much attention lately (Frosch et al., EuroS&P16; Cohn-Gordon et al., EuroS&P17; Kobeissi et al., EuroS&P17), little is known about the cryptographic mechanisms and security guarantees of secure group communication in instant messaging.
To approach an investigation of group instant messaging protocols, we first provide a comprehensive and realistic security model. This model combines security and reliability goals from the related literature to capture relevant properties for communication in dynamic groups. The definitions thereby take into account their satisfiability with respect to the instant delivery of messages. To show its applicability, we analyze three widely used real-world protocols: Signal, WhatsApp, and Threema. Since these protocols and their implementations are mostly undocumented for the public, and two of the three applications are closed source, we first describe the group protocols employed in Signal, WhatsApp, and Threema. By applying our model, we reveal several shortcomings with respect to the security definitions. We therefore propose generic countermeasures to enhance the protocols regarding the required security and reliability goals. Our systematic analysis reveals that (1) the communications' integrity – represented by the integrity of all exchanged messages – and (2) the groups' closeness – represented by the members' ability to manage the group – are not end-to-end protected.
We additionally show that strong security properties, such as Future Secrecy which is a core part of the one-to-one communication in the Signal protocol, do not hold for its group communication.
On desynchronised El Gamal algorithm
Families of stable cyclic groups of nonlinear polynomial transformations of affine spaces $K^n$ over a general commutative ring $K$, whose order grows with $n$, can be used in key exchange protocols and in the related multivariate El Gamal cryptosystems. We suggest exploiting the high degree of noncommutativity of the affine Cremona group and modifying the multivariate El Gamal algorithm via the use of conjugations for two polynomials of the kind $g^k$ and $g^{-1}$ given by the key holder (Alice), or by giving them as elements of different transformation groups. We present key exchange protocols based on the twisted discrete logarithm problem, which uses the noncommutativity of the semigroup. Recent results on the existence of families of stable transformations of prescribed degree and density and of exponential order over finite fields can be used to implement such schemes with feasible computational complexity. We introduce an example of a new implemented quadratic multivariate cryptosystem based on the above ideas.
Composable Masking Schemes in the Presence of Physical Defaults and the Robust Probing Model
Composability and robustness against physical defaults (e.g., glitches) are two highly desirable properties for secure implementations of masking schemes. While tools exist to guarantee them separately, no current formalism enables their joint investigation. In this paper, we solve this issue by introducing a new model, the robust probing model, that is naturally suited to capture the combination of these properties. We first motivate this formalism by analyzing the excellent robustness and low randomness requirements of first-order threshold implementations, and by highlighting the difficulty of extending them to higher orders. Next, and most importantly, we use our theory to design higher-order secure, robust and composable multiplication gadgets. While admittedly inspired by existing approaches to masking (e.g., Ishai-Sahai-Wagner-like, threshold, domain-oriented), these gadgets exhibit subtle implementation differences from these state-of-the-art solutions (none of which is provably composable and robust). Hence, our results illustrate how sound theoretical models can guide practically-relevant implementations.
Distributed Computing with Channel Noise
A group of $n$ users want to run a distributed protocol $\pi$ over a network where communication occurs via private point-to-point channels. Unfortunately, an adversary, who knows $\pi$, is able to maliciously flip bits on the channels. Can we efficiently simulate $\pi$ in the presence of such an adversary?
We show that this is possible, even when $L$, the number of bits sent in $\pi$, and $T$, the number of bits flipped by the adversary are not known in advance. In particular, we show how to create a robust version of $\pi$ that 1) fails with probability at most $\delta$, for any $\delta > 0$; and 2) sends $\tilde{O}(L+T)$ bits, where the $\tilde{O}$ notation hides a $\log(nL/\delta)$ term multiplying $L$.
Additionally, we show how to improve this result when the average message size $\alpha$ is not constant. In particular, we give an algorithm that sends $O(L(1 + (1/\alpha) \log(nL/\delta)) + T)$ bits. This algorithm is adaptive in that it does not require a priori knowledge of $\alpha$. We note that if $\alpha$ is $\Omega(\log(nL/\delta))$, then this improved algorithm sends only $O(L + T)$ bits, and is therefore within a constant factor of optimal.
spKEX: An optimized lattice-based key exchange
The prospect of large-scale quantum computers has resulted in significant interest in quantum-safe cryptographic primitives. Lattice-based cryptography is one of the most attractive post-quantum cryptographic families due to its well-understood security, efficient operation and versatility. However, LWE-based schemes are still relatively bulky and slow.
In this work, we present spKEX, a forward-secret, post-quantum, unauthenticated lattice-based key-exchange scheme that combines four techniques to optimize performance. spKEX relies on Learning with Rounding (LWR) to reduce bandwidth; it uses sparse and ternary secrets to speed up computations and reduce the failure probability; it applies an improved key reconciliation scheme to reduce bandwidth and failure probability; and it computes the public matrix A by means of a permutation to improve performance while allowing for a fresh A in each key exchange. For a quantum security level of 128 bits, our scheme requires 30% less bandwidth than the LWE-based key-exchange proposal Frodo [9] and allows for a fast implementation of the key exchange.
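For readers unfamiliar with LWR, the following illustrative-only Python sketch generates a single Learning-with-Rounding sample with a sparse ternary secret, the two ingredients named above; the parameters are toy values and have nothing to do with spKEX's actual parameter sets.

    import random

    n, q, p = 64, 2**12, 2**8          # dimension, large modulus, rounding modulus
    h = 8                              # Hamming weight of the sparse ternary secret

    def sparse_ternary_secret(n, h):
        """n-vector with h nonzero entries drawn from {-1, +1}."""
        s = [0] * n
        for idx in random.sample(range(n), h):
            s[idx] = random.choice((-1, 1))
        return s

    def lwr_sample(s):
        a = [random.randrange(q) for _ in range(n)]
        inner = sum(ai * si for ai, si in zip(a, s)) % q
        b = (inner * p + q // 2) // q % p   # deterministic rounding replaces added noise
        return a, b

    s = sparse_ternary_secret(n, h)
    a, b = lwr_sample(s)
    print(a[:4], b)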
Reconsidering the Security Bound of AES-GCM-SIV
We make a number of remarks about the AES-GCM-SIV nonce-misuse resistant authenticated encryption scheme currently considered for standardization by the Crypto Forum Research Group (CFRG). First, we point out that the security analysis proposed in the ePrint report 2017/168 is incorrect, leading to overly optimistic security claims. We correct the bound and re-assess the security guarantees offered by the scheme for various parameters. Second, we suggest a simple modification to the key derivation function which would improve the security of the scheme with virtually no efficiency penalty.
Privacy-Preserving Ridge Regression on Distributed Data
Linear regression is an important statistical tool that models the relationship between some explanatory values and an outcome value using a linear function.
In many current applications (e.g. predictive modelling in personalized healthcare), these values represent sensitive data owned by several different parties that are unwilling to share them. In this setting, training a linear regression model becomes challenging and needs specific cryptographic solutions. In this work, we propose a new system that can train a linear regression model with 2-norm regularization (i.e. ridge regression) on a dataset obtained by merging a finite number of private datasets. Our system is composed of two phases: The first one is based on a simple homomorphic encryption scheme and takes care of securely merging the private datasets. The second phase is a new ad-hoc two-party protocol that computes a ridge regression model solving a linear system where all coefficients are encrypted. The efficiency of our system is evaluated both on synthetically generated and real-world datasets.
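As a plaintext reference for the functionality being protected (assuming numpy; this is the target computation, not the cryptographic protocol), merged-dataset ridge regression simply solves the regularized normal equations $(X^T X + \lambda I)w = X^T y$:

    import numpy as np

    def ridge_fit(datasets, lam=1.0):
        X = np.vstack([X_i for X_i, _ in datasets])      # "merged" design matrix
        y = np.concatenate([y_i for _, y_i in datasets])
        d = X.shape[1]
        return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

    rng = np.random.default_rng(0)
    party_a = (rng.normal(size=(50, 3)), rng.normal(size=50))
    party_b = (rng.normal(size=(70, 3)), rng.normal(size=70))
    w = ridge_fit([party_a, party_b], lam=0.5)
    print(w)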
SCATTER: A New Dimension in Side-Channel
Side-channel techniques have progressed considerably over the last few years, leading to the creation of a variety of statistical tools for extracting the secrets used in cryptographic algorithms. Such techniques take advantage of the side-channel traces collected during the executions of secret computations in the product. Noticeably, the vast majority of side-channel attacks require that the traces be aligned beforehand. This prerequisite turns out to be more and more challenging in the practical realisation of attacks, as many devices include hardware or software countermeasures to increase this difficulty, typically by applying random jitter or random executions with fake operations. In this paper, we introduce \textsl{scatter}, a new attack which has the potential to tackle most of these alignment issues.
scatter brings a new dimension to improve the efficiency of existing attacks and opens the door to a large set of potential new attack techniques. The effectiveness of scatter has been proven on both simulated traces and real-world secure products. As a result, scatter is a new side-channel technique that is particularly powerful when trace alignment is an issue, or when considering low-cost attacks, as the requirements in terms of equipment and expertise are significantly reduced.
Multi-Hop Distance Estimation: How Far are You?
Several access control systems are based on the users' physical location/proximity to the access point. Distance-Bounding (DB) protocols constitute a classical solution to calculate the distance between a trusted verifier (e.g., an access point) and an untrusted prover (e.g., a pervasive device). The main limitation of DB is that the prover and the verifier need to lie within each other's communication range. In this paper, we introduce the concept of Multi-Hop Distance-Estimation (MHDE) protocols, which enable a verifier to authenticate a possibly far-away prover and estimate its distance to this prover when they are not in each other's communication range, using an ad-hoc network of pervasive devices. More precisely, our contributions are three-fold, since we provide: (1) a formal definition for MHDE; (2) a threat model for MHDE that considers a powerful and distributed adversary; and (3) implementations of MHDE protocols with different settings. Additionally, we demonstrate our protocol to be secure in the considered threat model, and we provide a performance analysis regarding the accuracy of the distance estimation and the tolerance of limited mobility of the nodes. The results are promising for adopting MHDE in a distributed setting.
A Key Backup Scheme Based on Bitcoin
Since it was first introduced by Satoshi Nakamoto in 2008, Bitcoin has become the biggest and most well-known decentralized digital currency. Its anonymity allows users all over the world to make transactions with each other while keeping their identities hidden. However, protecting the private key is a very important issue, because it is the only means of access to an account and can hardly be recovered if lost. Storing an encrypted backup of the private key and its corresponding advanced key is a traditional but effective approach, and many other techniques help to make the backup harder for attackers to obtain. In this paper, we introduce a new backup scheme that provides protection even when an attacker manages to obtain the backup. It is based on the Bitcoin system and the ECDSA signature scheme. The main difference is that both the generation and the recovery of the backup are tied to specific transactions on the blockchain, which creates a gap between legitimate users and attackers who obtain the backup when it comes to recovering the key. The gap is determined by the number of accounts and transactions on the blockchain, which grows rapidly with Bitcoin's popularity and therefore strengthens the security of our scheme over time. Moreover, our technique can be combined with earlier ones to achieve better security.
Optimally Sound Sigma Protocols Under DCRA
Given a well-chosen additively homomorphic cryptosystem and a $\Sigma$ protocol with a linear answer, Damgård, Fazio, and Nicolosi proposed a non-interactive designated-verifier zero knowledge argument in the registered public key model that is sound under non-standard complexity-leveraging assumptions.
In 2015, Chaidos and Groth showed how to achieve the weaker yet reasonable culpable soundness notion under standard assumptions but only if the plaintext space order is prime. It makes use of $\Sigma$ protocols that satisfy what we call the \emph{optimal culpable soundness}. Unfortunately, most of the known additively homomorphic cryptosystems (like the Paillier Elgamal cryptosystem that is secure under the standard Decisional Composite Residuosity Assumption) have composite-order plaintext space. We construct optimally culpable sound $\Sigma$ protocols and thus culpably sound non-interactive designated-verifier zero knowledge protocols for NP under standard assumptions given that the least prime divisor of the plaintext space order is large.
Better Bounds for Block Cipher Modes of Operation via Nonce-Based Key Derivation
Block cipher modes of operation provide a way to securely encrypt using a block cipher. The main factors in analyzing modes of operation are the level of security achieved (chosen-plaintext security, authenticated encryption, nonce-misuse resistance, and so on) and performance. When measuring the security level of a mode of operation, it does not suffice to consider asymptotics, and a concrete analysis is necessary. This is especially the case today, when encryption rates can be very high, and so birthday bounds may be approached or even reached.
In this paper, we show that key-derivation at every encryption significantly improves the security bounds in many cases. We present a new key-derivation method that utilizes a truncated block cipher, and show that this is far better than standard block-cipher based key derivation.
We prove that by using our key derivation method, we obtain greatly improved bounds for many modes of operation, with the result that the lifetime of a key can be significantly extended. We demonstrate this for AES-CTR (CPA-security), AES-GCM (authenticated encryption) and AES-GCM-SIV (nonce-misuse resistance). Finally, we demonstrate that when using modern hardware with AES instructions (AES-NI), the performance penalty of deriving keys at each encryption is insignificant for most uses.
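A minimal sketch of the general idea, under the assumption that the per-message key is obtained from block-cipher calls on the nonce with truncated outputs; the derivation constants and the truncation below are illustrative placeholders, not the paper's exact KDF, and AES/AES-GCM come from the pyca/cryptography package.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    def derive_key(master_key, nonce12):
        ecb = Cipher(algorithms.AES(master_key), modes.ECB()).encryptor()
        blocks = [ecb.update(nonce12 + bytes([0, 0, 0, i])) for i in range(2)]
        ecb.finalize()
        # keep half of each 128-bit block, mimicking truncated-block-cipher derivation
        return b"".join(block[:8] for block in blocks)     # 128-bit per-message key

    master_key = os.urandom(16)
    nonce = os.urandom(12)
    msg_key = derive_key(master_key, nonce)                # fresh key for this nonce only
    ct = AESGCM(msg_key).encrypt(nonce, b"data encrypted under the derived key", b"")
    pt = AESGCM(msg_key).decrypt(nonce, ct, b"")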
Improved Reconstruction Attacks on Encrypted Data Using Range Query Leakage
We analyse the security of database encryption schemes supporting range queries against persistent adversaries. The bulk of our work applies to a generic setting, where the adversary's view is limited to the set of records matched by each query (known as access pattern leakage). We also consider a more specific setting where certain rank information is also leaked. The latter is inherent to multiple recent encryption schemes supporting range queries, including Kerschbaum's FH-OPE scheme (CCS 2015), Lewi and Wu's order-revealing encryption scheme (CCS 2016), and the recently proposed Arx scheme of Poddar et al. (IACR eprint 2016/568, 2016/591). We provide three attacks.
First, we consider full reconstruction, which aims to recover the value of every record, fully negating encryption. We show that for dense datasets, full reconstruction is possible within an expected number of queries $N \log N + O(N)$, where $N$ is the number of distinct plaintext values. This directly improves on a $O(N^2 \log N)$ bound in the same setting by Kellaris et al. (CCS 2016). We also provide very efficient, data-optimal algorithms that succeed with the minimum possible number of queries (in a strong, information-theoretic sense), and prove a matching lower bound on the number of queries required.
Second, we present an approximate reconstruction attack recovering all plaintext values in a dense dataset within a constant ratio of error (such as a 5% error), requiring the access pattern leakage of only $O(N)$ queries. We also prove a matching lower bound.
Third, we devise an attack in the common setting where the adversary has access to an auxiliary distribution for the target dataset. This third attack proves highly effective on age data from real-world medical data sets. In our experiments, observing only 25 queries was sufficient to reconstruct a majority of records to within 5 years.
In combination, our attacks show that current approaches to enabling range queries offer little security when the threat model goes beyond snapshot attacks to include a persistent server-side adversary.
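A small simulation (not the paper's algorithms) conveys why access-pattern leakage from range queries is so damaging on dense data: a record with value v is matched by exactly v(N - v + 1) of the N(N + 1)/2 possible ranges, so the observed match frequency alone pins the value down up to reflection around (N + 1)/2. The query distribution and parameters below are toy assumptions.

    import random
    from math import sqrt

    N = 100                                     # plaintext values are 1..N (dense)
    records = [random.randrange(1, N + 1) for _ in range(40)]
    all_ranges = [(a, b) for a in range(1, N + 1) for b in range(a, N + 1)]
    num_queries = 20000

    hits = [0] * len(records)
    for _ in range(num_queries):
        a, b = random.choice(all_ranges)        # a uniformly random range query
        for i, v in enumerate(records):
            if a <= v <= b:                     # this membership is the leaked access pattern
                hits[i] += 1

    for i, v in enumerate(records[:5]):
        m = hits[i] / num_queries * len(all_ranges)   # estimate of v * (N - v + 1)
        disc = max((N + 1) ** 2 - 4 * m, 0.0)
        low = (N + 1 - sqrt(disc)) / 2
        high = (N + 1 + sqrt(disc)) / 2
        print(f"true value {v:3d}  ->  estimated {low:.1f} or its reflection {high:.1f}")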
Linearly Homomorphic Authenticated Encryption with Provable Correctness and Public Verifiability
In this work the first linearly homomorphic authenticated encryption scheme with public verifiability and provable correctness, called LEPCoV, is presented. It improves the initial proposal by avoiding false negatives during the verification algorithm. This work provides a detailed description of LEPCoV, a comparison with the original scheme, a security and correctness proof, and a performance analysis showing that all algorithms run in reasonable time for parameters that are currently considered secure. The scheme presented here allows a user to outsource computations on encrypted data to the cloud, such that any third party can verify the correctness of the computations without having access to the original data. This makes this work an important contribution to cloud computing and applications where operations on sensitive data have to be performed, such as statistics on medical records and tallying of electronically cast votes.
Runtime Code Polymorphism as a Protection Against Side Channel Attacks
We present a generic framework for runtime code polymorphism, applicable to a broad range of computing platforms including embedded systems with low computing resources (e.g. microcontrollers with a few kilobytes of memory). Code polymorphism is defined as the ability to change the observable behaviour of a software component without changing its functional properties. In this paper we present the implementation of code polymorphism with runtime code generation, which offers many code transformation possibilities: we describe the use of random register allocation, random instruction selection, instruction shuffling and insertion of noise instructions. We evaluate the effectiveness of our framework against correlation power analysis: compared to an unprotected implementation of AES where the secret key could be recovered in fewer than 50 traces on average, in our protected implementation we increased the number of traces necessary to achieve the same attack by more than 20000x. With regards to the state of the art, our implementation shows a moderate impact in terms of performance overhead.
δ-subgaussian Random Variables in Cryptography
In the Ring-LWE literature, there are several works that use a statistical framework based on $\delta$-subgaussian random variables. These were introduced by Micciancio and Peikert (Eurocrypt 2012) as a relaxation of subgaussian random variables. In this paper, we completely characterise $\delta$-subgaussian random variables. In particular, we show that this relaxation from a subgaussian random variable corresponds only to a shifting of the mean. Next, we give an alternative noncentral formulation for a $\delta$-subgaussian random variable, which we argue is more statistically natural. This formulation enables us to extend prior results on sums of $\delta$-subgaussian random variables, and on their discretisation.
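For reference, the definition being relaxed (following Micciancio and Peikert, up to notational conventions) is: a random variable $X$ over $\mathbb{R}$ is $\delta$-subgaussian with parameter $s > 0$ if for all $t \in \mathbb{R}$,
\[
  \mathbb{E}\bigl[\exp(2\pi t X)\bigr] \le e^{\delta}\,\exp(\pi s^2 t^2),
\]
so that the ordinary subgaussian case corresponds to $\delta = 0$.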
On Internal Re-keying
In this paper we introduce a classification of existing re-keying-based approaches to increase the security of block cipher operation modes. We introduce the concepts of external and internal re-keying putting the focus on the second one. Whereas the external re-keying approach is widely used and provides the mechanism of key usage control on a message stream processing level, the internal re-keying approach is the first known mechanism providing such a control on a single message processing level. These approaches can be applied completely independently. The internal re-keying approach was already applied to the CTR encryption mode and yielded the CTR-ACPKM mode. This mode is currently being standardized in ISO and in IETF/IRTF (CFRG).
In the current paper we apply the internal re-keying approach to the well-known GCM authenticated encryption mode. The main results of this paper are the new internally re-keyed GCM-ACPKM mode and its security bounds. The proposed mode is also passing through the last formal standardization stages in the IETF (CFRG). We estimate the security of the GCM-ACPKM mode with respect to standard security notions. We compare both the security and the performance of the GCM-ACPKM and GCM modes. The results show that extending the GCM mode with the ACPKM internal re-keying procedure increases security, significantly extending the lifetime of a key with a negligible loss in performance. We also show how the re-keying approaches could increase the security of TLS 1.3 cipher suites.
A Humble Theory and Application for Logic Encryption
Logic encryption is an important hardware security technique that introduces keys to modify a given combinational circuit in order to lock the functionality from unauthorized uses. Traditional methods are all ad hoc approaches based on inserting lock gates with keys on randomly selected signals in the original circuit. Thus, it was not a surprise that a SAT-based attack developed by Subramanyan et al. successfully defeated almost all of them. New approaches such as SARLock and Anti-SAT were then developed to defend against the SAT-based attack. However, they are still vulnerable due to their extremely low error rates.
In this paper, we present a theory of logic encryption that provides a complete understanding of the design space and the trade-off between error rate and attack complexity. An efficient general design scheme is derived from the theory and some specific designs are also suggested. We also suggest a method to insert a one-way function to burden the SAT engine, and a method to obfuscate the whole design. In addition, as an application of the theory, we develop a scientific encryption benchmark for approximate attacks. We test our encryption designs and obfuscation techniques with the SAT-based attack, and the results verify the robustness of our approach.
Updatable Tokenization: Formal Definitions and Provably Secure Constructions
Tokenization is the process of consistently replacing sensitive elements, such as credit card numbers, with non-sensitive surrogate values. As tokenization is mandated for any organization storing credit card data, many practical solutions have been introduced and are in commercial operation today. However, all existing solutions are static, i.e., they do not allow for efficient updates of the cryptographic keys while maintaining the consistency of the tokens. This lack of updatability is a burden for most practical deployments, as cryptographic keys must also be changed periodically to ensure continued security. This paper introduces a model for updatable tokenization with key evolution, in which a key exposure does not disclose relations among tokenized data in the past, and where the updates to the tokenized data set can be made by an untrusted entity while preserving the consistency of the data. We formally define the desired security properties guaranteeing unlinkability of tokens among different time epochs and one-wayness of the tokenization process. Moreover, we construct two highly efficient updatable tokenization schemes and prove that they achieve our security notions.
Atomically Trading with Roger: Gambling on the success of a hardfork
We present atomic trade protocols for Bitcoin and Ethereum that can bind two parties to swap coins in the event that two blockchains emerge from a single “pre-fork” blockchain. This work is motivated by a bet between two members of the Bitcoin community, Loaded and Roger Ver, to trade 60,000 bitcoins in the event that Bitcoin Unlimited's planned hardfork occurs and the blockchain splits into two distinct forks. Additionally, we study several ways to provide replay protection in the event of a hardfork, alongside a novel mechanism called migration inputs. We provide a detailed survey and history of previous softforks and hardforks in Ethereum and Bitcoin.
Cryptanalysis of Deoxys and its Internal Tweakable Block Ciphers
In this article, we provide the first independent security analysis of Deoxys, a third-round authenticated encryption candidate of the CAESAR competition, and its internal tweakable block ciphers Deoxys-BC-256 and Deoxys-BC-384. We show that the related-tweakey differential bounds provided by the designers can be greatly improved thanks to a MILP-based search tool. In particular, we develop a new method to incorporate linear incompatibility in the MILP model. We use this tool to generate valid differential paths for reduced-round versions of Deoxys-BC-256 and Deoxys-BC-384, later combining them into broader boomerang or rectangle attacks. Here, we also develop a new MILP model which optimises the two paths by taking into account the effect of the ladder switch technique. Interestingly, with the tweak in Deoxys-BC providing extra input as opposed to a classical block cipher, we can even consider beyond full-codebook attacks, and we analyse how our results can be improved in this setting. As these primitives are based on the TWEAKEY framework, we further study how the security of the cipher is impacted when playing with the tweak/key sizes. All in all, we are able to attack 10 rounds of Deoxys-BC-256 (out of 14) and 14 rounds of Deoxys-BC-384 (out of 16). The extra rounds specified in Deoxys-BC to balance the tweak input (when compared to AES) seem to provide about the same security margin as AES-128. Finally we analyse why the authenticated encryption modes of Deoxys mostly prevent our attacks on Deoxys-BC to apply to the authenticated encryption primitive.
Towards Characterizing Securely Computable Two-Party Randomized Functions
A basic question of cryptographic complexity is to combinatorially characterize all randomized functions which have information-theoretic semi-honest secure 2-party computation protocols. The corresponding question for deterministic functions was answered almost three decades back, by Kushilevitz (FOCS 1989). In this work, we make progress towards understanding securely computable `randomized' functions. We bring tools developed in the study of completeness to bear on this problem. In particular, our characterizations are obtained by considering only symmetric functions with a combinatorial property called `simplicity' (Maji et al., Indocrypt 2012).
Our main result is a complete combinatorial characterization of randomized functions with `ternary output' kernels that have information-theoretic semi-honest secure 2-party computation protocols. In particular, we show that there exist simple randomized functions with ternary output that do not have secure computation protocols. (For deterministic functions, the smallest output alphabet size of such a function is 5, due to an example given by Beaver, DIMACS Workshop on Distributed Computing and Cryptography 1989.)
Also, we give a complete combinatorial characterization of randomized functions that have `2-round' information-theoretic semi-honest secure 2-party computation protocols.
We also give a counter-example to a natural conjecture for the full characterization, namely, that all securely computable simple functions have secure protocols with a unique transcript for each output value. This conjecture is in fact true for deterministic functions, and -- as our results above show -- for ternary functions and for functions with 2-round secure protocols.
Universal Forgery and Key Recovery Attacks: Application to FKS, FKD and Keyak
In this paper, we provide a security analysis of the Full-State Keyed Sponge (FKS), the Full-State Keyed Duplex (FKD) and Keyak, one of the third-round CAESAR candidates, in the classic setting and the quantum model, respectively. In the classic setting, we present a universal forgery attack that can be implemented in $O(2^{c/2})$ queries, where $c$ is the capacity.
In the quantum model, by utilizing Simon's algorithm, we propose an efficient universal forgery attack against FKS, FKD and Keyak with complexity $O(c)$. Moreover, we also propose an efficient key recovery attack that can be implemented in $O(c)$. Such attacks show that FKS, FKD and Keyak are completely broken in the quantum model.
High Performance Post-Quantum Key Exchange on FPGAs
Lattice-based cryptography is a highly promising candidate for protecting against the threat of quantum attacks. At Usenix Security 2016, Alkim, Ducas, Pöppelmann, and Schwabe proposed a post-quantum key exchange scheme called NewHope, based on a variant of the lattice problem, the ring-learning-with-errors (RLWE) problem.
In this work, we propose a high-performance hardware architecture for NewHope. Our implementation requires 6,680 slices, 9,412 FFs, 18,756 LUTs, 8 DSPs and 14 BRAMs on a Xilinx Zynq-7000 SoC equipped with a 28nm Artix-7 (7020) FPGA fabric. In our hardware design of the NewHope key exchange, the three phases of the key exchange cost 51.9, 78.6 and 21.1 microseconds, respectively. It achieves a more than 4.8 times better area-time product compared to the previous hardware implementation of NewHope-Simple by Oder and Güneysu at Latincrypt 2017.
On the security of HMFEv
In this short report, we study the security of the new multivariate signature scheme HMFEv proposed at PQCrypto 2017.
Quantum Collision-Finding in Non-Uniform Random Functions
We give a complete characterization of quantum attacks for finding a collision in a non-uniform random function whose outputs are drawn according to a distribution of min-entropy $k$. This can be viewed as showing generic security of hash functions under relaxed assumptions, in contrast to the standard heuristic of assuming uniformly random outputs. It also has applications in analyzing the quantum security of the Fujisaki-Okamoto transformation [TU TCC16B]. In particular, our results close a gap in the lower bound left open in [TTU PQCrypto16].
Specifically, let $D$ be a min-entropy $k$ distribution on a set $Y$ of size $N$. Let $f: X\to Y$ be a function whose output $f(x)$ is drawn according to $D$ for each $x \in X$ independently. We show that $\Omega(2^{k/3})$ quantum queries are necessary to find a collision in $f$, improving the previous bound $\Omega(2^{k/9})$. In fact we show a stronger lower bound $2^{k/2}$ in some special case. For all cases, we also describe explicit quantum algorithms that find a collision with a number of queries matching the corresponding lower bounds.
Last updated: 2017-07-23
Impossibility of Secure Multi-Party Products in Non-Abelian Groups
Suppose $n$ parties have respective inputs $x_1, \ldots, x_n \in \mathbb{G}$, where $\mathbb{G}$ is a finite group. The parties would like to privately compute $x_1 x_2 \cdots x_n$ (where multiplication refers to the group operation in $\mathbb{G}$). There is a well-known secure protocol that works for any number of parties $n$ when $\mathbb{G}$ is abelian.
In this note we consider private group-product protocols for non-abelian groups. We show that such protocols are possible if and only if $n$ (the number of parties) is less than 4.
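The well-known abelian protocol referred to above can be sketched in a few lines; here it is written for the additive group $\mathbb{Z}_q$ as a toy illustration (the modulus is an arbitrary choice): each party splits its input into n random summands, hands one to each party, and only per-party sums of received summands are ever published, with commutativity making the reassembly work.

    import random

    q = 2**31 - 1     # work in the abelian group (Z_q, +)

    def split(x, n):
        parts = [random.randrange(q) for _ in range(n - 1)]
        parts.append((x - sum(parts)) % q)
        return parts

    def secure_group_sum(inputs):
        n = len(inputs)
        dealt = [split(x, n) for x in inputs]                  # dealt[i][j]: from party i to party j
        published = [sum(dealt[i][j] for i in range(n)) % q    # party j's public value
                     for j in range(n)]
        return sum(published) % q                              # equals the group "product" of all inputs

    xs = [123, 456, 789, 1011]
    assert secure_group_sum(xs) == sum(xs) % q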
On the Necessity of a Prescribed Block Validity Consensus: Analyzing Bitcoin Unlimited Mining Protocol
Bitcoin has not only attracted many users but has also been considered a technical breakthrough by academia. However, the expanding potential of Bitcoin is largely untapped due to its limited throughput. The Bitcoin community is now facing its biggest crisis in history as it splits on how to increase the throughput. Among various proposals, Bitcoin Unlimited (BU) recently became the most popular candidate, as it allows miners to collectively decide the block size limit according to the real network capacity. However, the security of BU is heatedly debated and no consensus has been reached, as the issue is discussed under different miner incentive models. In this paper, we systematically evaluate BU's security under three incentive models by testing the two major arguments of BU supporters: (1) that a block validity consensus is not necessary for BU's security; and (2) that such a consensus would emerge in BU out of economic incentives. Our results invalidate both arguments and therefore disprove BU's security claims. Our paper further contributes to the field by addressing the necessity of a prescribed block validity consensus for cryptocurrencies.
Compact-LWE: Enabling Practically Lightweight Public Key Encryption for Leveled IoT Device Authentication
Leveled authentication allows resource-constrained IoT devices to be authenticated at different strength levels according to the particular types of communication. To achieve efficient leveled authentication, we propose a lightweight public key encryption scheme that can produce very short ciphertexts without sacrificing its security.
The security of our scheme is based on the Learning With Secretly Scaled Errors in Dense Lattice (referred to as Compact-LWE) problem. We prove the hardness of Compact-LWE by reducing Learning With Errors (LWE) to Compact-LWE. However, unlike LWE, even if the closest vector problem (CVP) in lattices can be solved, Compact-LWE is still hard, due to the high density of lattices constructed from Compact-LWE samples and the relatively longer error vectors. By using a lattice-based attack tool, we verify that the attacks, which are successful on LWE instantly, cannot succeed on Compact-LWE, even for a small dimension parameter like $n=13$, hence allowing small dimensions for short ciphertexts.
We have implemented our scheme on the Contiki operating system for IoT and used it to instantiate a leveled Needham-Schroeder-Lowe public-key authentication protocol. On a small IoT device with an 8 MHz MSP430 16-bit processor and 10 KB of RAM, our experiments show that our scheme can complete 50 encryptions and 500 decryptions per second at a security level above 128 bits, with a public key of 2368 bits, generating 176-bit ciphertexts for 16-bit messages. With two small IoT devices communicating over IEEE 802.15.4 and 6LoWPAN, the total time to complete an authentication varies from 640 ms (the 1st authentication level) to 8373 ms (the 16th authentication level), of which the execution of our encryption scheme takes only a very small fraction, from 46 ms to 445 ms.
Z-Channel: Scalable and Efficient Scheme in Zerocash
Decentralized ledger-based cryptocurrencies like Bitcoin present a way to construct payment systems without trusted banks. However, the anonymity of Bitcoin is fragile. Many altcoins and protocols are designed to improve Bitcoin on this issue, among which Zerocash is the first full-fledged anonymous ledger-based currency, using zero-knowledge proofs, specifically zk-SNARKs, to protect privacy. However, Zerocash suffers from two problems: poor scalability and low efficiency. In this paper, we address the above issues by constructing a micropayment system in Zerocash called Z-Channel. First, we improve Zerocash to support multisignature and time-lock functionalities, and prove that the reconstructed scheme is secure. Then we construct Z-Channel based on the improved Zerocash scheme. Our experiments demonstrate that Z-Channel significantly improves the scalability and reduces the confirmation time for Zerocash payments.
Efficient Privacy-Preserving General Edit Distance and Beyond
Edit distance is an important non-linear metric that has many applications ranging from matching patient genomes to text-based intrusion detection. Depending on the application, related string-comparison metrics, such as weighted edit distance, Needleman-Wunsch distance, longest common subsequences, and heaviest common subsequences, can often be a better fit than the basic edit distance. When these metrics need to be calculated on sensitive input strings supplied by mutually distrustful parties, it is more desirable but also more challenging to compute them in privacy-preserving ways. In this paper, we propose efficient secure computation protocols for private edit distance as well as several generalized applications including weighted edit distance (with potentially content-dependent weights), longest common subsequence, and heaviest common subsequence. Our protocols run 20+ times faster and use an order of magnitude less bandwidth than their best previous counterparts. Alongside, we propose a garbling scheme that allows free arithmetic addition, free multiplication with constants, and low-cost comparison/minimum for inputs of restricted relative differences. Moreover, the encodings (i.e. wire labels) in our garbling scheme can be converted from and to encodings used by traditional binary circuit garbling schemes with light to moderate costs. Therefore, while being extremely efficient on certain kinds of computations, the new garbling scheme remains composable and capable of handling generic computational tasks.
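For context, the plaintext recurrence that such protocols evaluate obliviously inside garbled circuits is the classic edit-distance dynamic program; the sketch below shows only this target functionality, not any secure protocol.

    def edit_distance(a: str, b: str) -> int:
        prev = list(range(len(b) + 1))              # distances from "" to prefixes of b
        for i, ca in enumerate(a, start=1):
            curr = [i]
            for j, cb in enumerate(b, start=1):
                curr.append(min(prev[j] + 1,                   # delete ca
                                curr[j - 1] + 1,               # insert cb
                                prev[j - 1] + (ca != cb)))     # substitute (free if equal)
            prev = curr
        return prev[-1]

    assert edit_distance("kitten", "sitting") == 3
    assert edit_distance("GATTACA", "GCATGCU") == 4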
Conditional Blind Signatures
We propose a novel cryptographic primitive called conditional blind signatures. Our primitive allows a user to request blind signatures on messages of her choice. The signer has a secret Boolean input which determines whether the supplied signature is valid or not. The user should not be able to distinguish between valid and invalid signatures. A designated verifier, however, can tell which signatures verify correctly, and is in fact the only entity who can learn the secret input associated with the (unblinded) signed message. We instantiate our primitive as an extension of the Okamoto-Schnorr blind signature scheme and provide variations to fit different usage scenarios. Finally, we analyze and prove the security properties of the new scheme and explore potential applications.
Logical loophole in random 3-bit sequence generator
In this note, we provide an information-theoretic framework for the random 3-bit sequence {111, 110, 101, 100, 011, 010, 001, 000} that demonstrates (using Wigner's inequality) a logical loophole in the Bell-CHSH inequality [1, 2, 3], as has also recently been found experimentally for triple measurements [4]. As a consequence, both classical and quantum regimes can share their bounds within the same environment, which shows that maximally entangled states used in cryptographic security systems can be critically subverted via the EPR (Einstein-Podolsky-Rosen) paradox [5].
SOFIA: MQ-based signatures in the QROM
We propose SOFIA, the first MQ-based signature scheme provably secure in the quantum-accessible random oracle model (QROM). Our construction relies on an extended version of Unruh's transform for 5-pass identification schemes that we describe and prove secure both in the ROM and QROM.
Based on a detailed security analysis, we provide concrete parameters for SOFIA that achieve 128-bit post-quantum security. The result is SOFIA-4-128, with parameters carefully optimized to minimize signature size and maximize performance. SOFIA-4-128 comes with an implementation targeting recent Intel processors with the AVX2 vector-instruction set; the implementation is fully protected against timing attacks.
Searchable Encryption with Access Control
Outsourcing data to the cloud is becoming increasingly prevalent. To ensure data confidentiality, encrypting the data before outsourcing it is advised. While encryption protects the secrets in the data, it also prevents operations on the data. For example, in a multi-user setting, data is often accessed via search, but encryption prevents search. Searchable encryption solves this dilemma. However, in a multi-user setting not all users may be allowed to access all data, requiring some means of access control. We address the question of how searchable encryption and access control can be combined. Combining these technologies is required to achieve strong notions of confidentiality: if a ciphertext occurs as a search result, we learn something about the underlying document, even if access control does not let us access the document. This illustrates a need to link search and access control, so that search results presented to users only feature data the users are allowed to access. Our searchable encryption scheme with access control establishes that link.
Differential Fault Attack on Grain v1, ACORN v3 and Lizard
Differential Fault Attack (DFA) is by now a very well known technique for evaluating the security of a stream cipher, based on the assumption that the cipher can be weakened by the injection of faults. In this paper we study DFA on three ciphers, namely Grain v1, Lizard and ACORN v3. We show that Grain v1 (an eStream cipher) can be attacked with the injection of only 5 faults instead of the 10 reported in 2012. For the first time, we mount a fault attack on Lizard, a very recent design, and show that one requires only 5 faults to obtain the state. ACORN v3 is a third-round candidate of CAESAR and there is only one hard fault attack on an earlier version of this cipher. However, the `hard fault' model requires far stronger assumptions than generic DFA. In this paper, we mount a DFA on ACORN v3 that requires 9 faults to obtain the state. In the case of Grain v1 and ACORN v3, we can obtain the secret key once the state is known; however, that is not immediate in the case of Lizard. While we use the basic framework of DFA that appears frequently in the literature, specific tweaks that were not used earlier have to be explored to mount the actual attacks. To the best of our knowledge, these are the best known DFAs on these three ciphers.
Faster Unbalanced Private Set Intersection
Protocols for Private Set Intersection (PSI) are important cryptographic primitives that perform joint operations on datasets in a privacy-preserving way. They allow two parties to compute the intersection of their private sets without revealing any additional information beyond the intersection itself. Unfortunately, PSI implementations in the literature do not usually employ the best possible cryptographic implementation techniques. This results in protocols presenting computational and communication complexities that are prohibitive, particularly in the case when one of the participants is a low-powered device and there are bandwidth restrictions. This paper builds on modern cryptographic engineering techniques and proposes optimizations for a promising one-way PSI protocol based on public-key cryptography. For the case when one of the parties holds a set much smaller than the other (a realistic assumption in many scenarios) we show that our improvements and optimizations yield a protocol that outperforms the communication complexity and the run time of previous proposals by around one thousand times.
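As background on the kind of public-key, one-way PSI being optimized, the folklore Diffie-Hellman-style blueprint looks roughly as follows; the modulus, the hashing into the group, and the absence of shuffling are toy simplifications for illustration, not secure parameters and not this paper's protocol.

    import hashlib
    import random

    p = 2**127 - 1                       # a Mersenne prime; toy group Z_p^*

    def H(item: bytes) -> int:
        return int.from_bytes(hashlib.sha256(item).digest(), "big") % p

    client_set = [b"alice@example.com", b"bob@example.com", b"carol@example.com"]
    server_set = [b"bob@example.com", b"dave@example.com", b"carol@example.com"]

    a = random.randrange(2, p - 1)       # client's secret exponent
    b = random.randrange(2, p - 1)       # server's secret exponent

    # Round 1: client sends blinded items H(x)^a; server raises them to b and also
    # sends its own blinded items H(y)^b.
    client_blinded = [pow(H(x), a, p) for x in client_set]
    double_blinded = [pow(c, b, p) for c in client_blinded]          # H(x)^(ab)
    server_blinded = [pow(H(y), b, p) for y in server_set]

    # Round 2: client raises the server's items to a; equal items collide because
    # the two exponentiations commute, and nothing else matches.
    server_double = {pow(s, a, p) for s in server_blinded}
    intersection = [x for x, d in zip(client_set, double_blinded) if d in server_double]
    print(intersection)    # [b'bob@example.com', b'carol@example.com']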
Dynamic Verifiable Encrypted Keyword Search Using Bitmap Index and Homomorphic MAC
Outsourcing data storage to the cloud securely and retrieving the remote data efficiently is a very significant research topic, with high relevance to secure cloud deployment. With ever-growing security and privacy concerns, encrypting remotely stored data is inevitable, but traditional encryption prevents search operations on the encrypted data. Encrypted keyword search is a cryptographic setting which offers search functionality and, at the same time, ensures the security and privacy of the remotely stored data.
Searchable Symmetric Encryption (SSE) is a technique to securely outsource data encrypted using symmetric-key primitives while maintaining search functionality. While several solutions have been proposed to realize SSE over various data structures, the efficient solution using an inverted index is due to Curtmola et al. Hwang et al. introduced an SSE scheme based on bitmaps in order to reduce the index size.
In this paper, we consider Searchable Symmetric Encryption (SSE) in the presence of a Semi-Honest-But-Curious Cloud Service Provider (SHBC-CSP). We define a new security notion for SSE in the presence of an SHBC-CSP, construct two new SSE schemes and prove their security formally under the proposed notion.
Dynamic Verifiable Encrypted Keyword Search (DVSSE) is, to the best of our knowledge, the first SSE scheme that is both dynamic and verifiable. We have implemented our schemes and compared their performance and complexity with existing schemes.
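The plaintext data structure underlying bitmap-based SSE can be pictured as follows (a rough illustration only; the actual scheme stores this index encrypted and authenticates results with a homomorphic MAC, none of which is shown here):

    class BitmapIndex:
        def __init__(self, num_docs):
            self.num_docs = num_docs
            self.index = {}                       # keyword -> integer used as a bitmap

        def add(self, doc_id, keywords):
            for kw in keywords:
                self.index[kw] = self.index.get(kw, 0) | (1 << doc_id)

        def remove(self, doc_id, keywords):
            for kw in keywords:
                self.index[kw] = self.index.get(kw, 0) & ~(1 << doc_id)

        def search(self, keyword):
            bitmap = self.index.get(keyword, 0)
            return [d for d in range(self.num_docs) if bitmap >> d & 1]

    idx = BitmapIndex(num_docs=4)
    idx.add(0, ["cloud", "storage"])
    idx.add(2, ["cloud", "privacy"])
    idx.add(3, ["privacy"])
    print(idx.search("cloud"))     # [0, 2]
    idx.remove(2, ["cloud"])       # a dynamic update is just a bit flip
    print(idx.search("cloud"))     # [0]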
Memory-Tight Reductions
Cryptographic reductions typically aim to be tight by transforming an adversary A into an algorithm that uses essentially the same resources as A. In this work we initiate the study of memory efficiency in reductions. We argue that the amount of working memory used (relative to the initial adversary) is a relevant parameter in reductions, and that reductions that are inefficient with memory will sometimes yield less meaningful security guarantees. We then point to several common techniques in reductions that are memory-inefficient and give a toolbox for reducing memory usage. We review common cryptographic assumptions and their sensitivity to memory usage. Finally, we prove an impossibility result showing that reductions between some assumptions must unavoidably be either memory- or time-inefficient. This last result follows from a connection to data streaming algorithms for which unconditional memory lower bounds are known.
Transparent Memory Encryption and Authentication
Security features of modern (SoC) FPGAs permit the confidentiality of hardware and software IP to be protected when the devices are powered off, and the authenticity of IP to be validated when it is loaded at startup. However, these approaches are insufficient, since attackers with physical access can also perform attacks during runtime, demanding additional security measures. In particular, the RAM used by modern (SoC) FPGAs is under threat, since RAM stores software IP as well as all kinds of other sensitive information during runtime.
To solve this issue, we present an open-source framework for building transparent RAM encryption and authentication pipelines, suitable for both FPGAs and ASICs. The framework supports various ciphers and modes of operation as shown by our comprehensive evaluation on a Xilinx Zynq-7020 SoC. For encryption, the ciphers Prince and AES are used in the ECB, CBC and XTS mode. Additionally, the authenticated encryption cipher Ascon is used both standalone and within a TEC tree. Our results show that the data processing of our encryption pipeline is highly efficient with up to 94% utilization of the read bandwidth that is provided by the FPGA interface. Moreover, the use of a cryptographically strong primitive like Ascon yields highly practical results with 54% bandwidth utilization.
Differential Fault Analysis Automation
Characterization of all possible faults in a cryptosystem exploitable for fault attacks is a problem which is of both theoretical and practical interest for the cryptographic community. Complete knowledge of the exploitable fault space is desirable while designing optimal countermeasures for any given crypto-implementation. In this paper, we address the exploitable fault characterization problem in the context of Differential Fault Analysis (DFA) attacks on block ciphers. The formidable size of the fault spaces demands an automated albeit fast mechanism for verifying each individual fault instance, and neither the traditional, cipher-specific, manual DFA techniques nor the generic and automated Algebraic Fault Attacks (AFA) [10] fulfill these criteria. Further, the diversified structures of different block ciphers suggest that such automation should be equally applicable to any block cipher. This work presents an automated framework for DFA identification, fulfilling all aforementioned criteria, which, instead of performing the attack, just estimates the attack complexity for each individual fault instance. A generic and extendable data-mining-assisted dynamic analysis framework capable of capturing a large class of DFA distinguishers is devised, along with a graph-based complexity analysis scheme. The framework significantly outperforms another recently proposed one [6] in terms of attack class coverage and automation effort. Experimental evaluation on AES and PRESENT establishes the effectiveness of the proposed framework in detecting most of the known DFAs, which eventually enables the characterization of the exploitable fault space.
Coding for interactive communication beyond threshold adversaries
We revisit the setting of coding for interactive communication (CIC), initiated by Schulman ('96), for non-threshold tampering functions. In a nutshell, in (the special case of) the communication complexity setting, Alice and Bob, holding inputs $x,y$, wish to compute a function $g(x,y)$ on their inputs over the identity channel using an interactive protocol. The goal here is to minimize the total communication complexity (CC). A "code" for interactive communication is a compiler transforming any $\pi_0$ working in the communication complexity setting into a protocol $\pi$ evaluating the same function over any channel $f$ picked from a family $\mathcal{F}$. Here $f$ is a function modifying the entire communication transcript. The goal is to minimize the code's \emph{rate}, which is the CC overhead $CC(\pi)/CC(\pi_0)$ incurred by the compiler.
All previous work in coding for interactive communication considered error correction (that is, $g(x,y)$ must be recovered correctly with high probability), which puts a limit of corrupting up to a $1/4$ fraction of the symbols (Braverman and Rao '11). In this work, we initiate the study of CIC for non-threshold families. We first come up with a robustness notion that is both meaningful and achievable by CIC for interesting non-threshold families. As a test case, we consider $\mathcal{F}_{\text{bit}}$, where each bit of the codeword is modified independently of the other bits (and all bits can be modified). Our robustness notion is an enhanced form of error detection, where the output of the protocol is distributed over $\{\bot, g(x,y)\}$, and the distribution does not depend on $x,y$. This definition can be viewed as enhancing error detection by non-malleability (as in the setting of non-malleable codes introduced by Dziembowski et al. '10). We devise CIC for several interesting tampering families (including $\mathcal{F}_{\text{bit}}$). As a building block, we introduce the notion of MNMC (non-malleable codes for multiple messages), which may be of independent interest.
Guru: Universal Reputation Module for Distributed Consensus Protocols
In this paper we describe how to couple reputation systems with distributed consensus protocols to provide high-throughput highly-scalable consensus for large peer-to-peer networks of untrusted validators.
We introduce reputation module Guru, which can be laid on top of various consensus protocols such as PBFT or HoneyBadger. It ranks nodes based on the outcomes of consensus rounds run by a small committee, and adaptively selects the committee based on the current reputation. The protocol can also take external reputation ranking as input.
Guru can tolerate a larger threshold of malicious nodes (up to slightly above 1/2) compared to the 1/3 limit of BFT consensus algorithms.
Private Set Intersection for Unequal Set Sizes with Mobile Applications
Private set intersection (PSI) is a cryptographic technique that is applicable to many privacy-sensitive scenarios. For decades, researchers have been focusing on improving its efficiency in both communication and computation. However, most of the existing solutions are inefficient for an unequal number of inputs, which is common in conventional client-server settings.
In this paper, we analyze and optimize the efficiency of existing PSI protocols to support precomputation so that they can efficiently deal with such input sets. We transform four existing PSI protocols into the precomputation form such that in the setup phase the communication is linear only in the size of the larger input set, while in the online phase the communication is linear in the size of the smaller input set. We implement all four protocols and run experiments between two PCs and between a PC and a smartphone and give a systematic comparison of their performance. Our experiments show that a protocol based on securely evaluating a garbled AES circuit achieves the fastest setup time by several orders of magnitudes, and the fastest online time in the PC setting where AES-NI acceleration is available. In the mobile setting, the fastest online time is achieved by a protocol based on the Diffie-Hellman assumption.
Speeding up Elliptic Curve Scalar Multiplication without Precomputation
This paper presents a series of Montgomery scalar multiplication algorithms on general short Weierstrass curves over odd-characteristic fields, which need only 12 field multiplications plus 12 to 20 field additions per scalar bit using 8 to 10 field registers, and thus significantly outperform the binary NAF method on average. Over binary fields, the Montgomery scalar multiplication algorithm presented at the first CHES workshop by López and Dahab has been a favorite of ECC implementors, due to its nice properties such as high efficiency (outperforming the binary NAF), natural SPA resistance, generality (covering all ordinary curves) and ease of implementation. Over odd-characteristic fields, the new scalar multiplication algorithms are the first ones featuring all these properties. The building blocks of our contribution are new efficient differential addition-and-doubling formulae and a novel conception of on-the-fly adaptive coordinates, which softly represent points occurring during a scalar multiplication in accordance not only with the base point but also with bits of the given scalar. Importantly, the new algorithms are equipped with built-in countermeasures against known side-channel attacks, while it is shown that previous Montgomery ladder algorithms with the randomized addressing countermeasure fail to thwart attacks exploiting address-dependent leakage.
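For orientation, the textbook Montgomery-ladder structure referred to above, written with plain affine short-Weierstrass arithmetic over a toy curve, looks as follows; the paper's differential addition-and-doubling formulae, adaptive coordinates and countermeasures are not reproduced here, and the curve is a small textbook example chosen only for checkability.

    # Toy curve y^2 = x^3 + 2x + 2 over F_17 with base point (5, 1) of order 19.
    P_MOD = 17
    A = 2
    G = (5, 1)
    INF = None                           # point at infinity

    def inv(x):
        return pow(x, P_MOD - 2, P_MOD)

    def add(P, Q):
        if P is INF: return Q
        if Q is INF: return P
        (x1, y1), (x2, y2) = P, Q
        if x1 == x2 and (y1 + y2) % P_MOD == 0:
            return INF
        if P == Q:
            lam = (3 * x1 * x1 + A) * inv(2 * y1) % P_MOD
        else:
            lam = (y2 - y1) * inv(x2 - x1) % P_MOD
        x3 = (lam * lam - x1 - x2) % P_MOD
        return (x3, (lam * (x1 - x3) - y1) % P_MOD)

    def ladder(k, P):
        """Montgomery ladder: one addition and one doubling per scalar bit."""
        R0, R1 = INF, P
        for bit in bin(k)[2:]:
            if bit == '0':
                R1 = add(R0, R1); R0 = add(R0, R0)
            else:
                R0 = add(R0, R1); R1 = add(R1, R1)
        return R0

    # sanity check against repeated addition
    Q = INF
    for k in range(1, 20):
        Q = add(Q, G)
        assert ladder(k, G) == Q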
Spot the Black Hat in a Dark Room: Parallelized Controlled Access Searchable Encryption on FPGAs
The advent of cloud computing offers clients the opportunity to outsource storage and processing of large volumes of shared data to third-party service providers, thereby enhancing overall accessibility and operational productivity. However, security concerns arising from the threat of insider and external attacks often require the data to be stored in encrypted form. Secure and efficient keyword search over such large volumes of encrypted data is an important yet highly challenging service to realize in practice. Even more challenging is to incorporate fine-grained client-specific access control, a commonly encountered requirement in cloud applications, in such searchable encryption solutions. Existing searchable encryption schemes in the literature tend to focus on specialized data structures for efficiency and are not explicitly designed to address controlled-access scenarios. In this paper, we propose a novel controlled access searchable encryption (CASE) scheme. As the name suggests, CASE inherently embeds access control in its key management process, and scales efficiently with the volume of encrypted data handled by the system. We provide a concrete construction for CASE that is privacy-preserving under well-known cryptographic assumptions. We then present a prototype implementation of our proposed construction on an ensemble of Artix 7 FPGAs. The architecture for our implementation exploits the massively parallel capabilities of hardware, especially in the design of data structures for efficient storage and retrieval of data. The implementation requires a total of 192 FPGAs to support a document collection comprising 100 documents with a dictionary of 1000 keywords. In addition, the hardware implementation of CASE is found to outperform its software counterpart in terms of both search efficiency and scalability. To the best of our knowledge, this is the first hardware implementation of a searchable encryption scheme to be reported in the literature.
High-speed key encapsulation from NTRU
This paper presents software demonstrating that the 20-year-old NTRU cryptosystem is competitive with more recent lattice-based cryptosystems in terms of speed, key size, and ciphertext size. We present a slightly simplified version of textbook NTRU, select parameters for this encryption scheme that target the 128-bit post-quantum security level, construct a KEM that is CCA2-secure in the quantum random oracle model, and present highly optimized software targeting Intel CPUs with the AVX2 vector instruction set. This software takes only 307914 cycles for the generation of a keypair, 48646 for encapsulation, and 67338 for decapsulation. It is, to the best of our knowledge, the first NTRU software with full protection against timing attacks.
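CCA-secure KEMs of this kind are typically obtained by wrapping a deterministic public-key encryption scheme in a Fujisaki-Okamoto-style transform. The sketch below shows that generic pattern only; the stand-in toy_enc/toy_dec are NOT NTRU and are not secure, and the paper's actual transform, which targets the quantum random-oracle model, differs in its details.

import hashlib, os

def toy_enc(pk, m):
    """Deterministic stand-in for a public-key encryption; placeholder only."""
    return bytes(a ^ b for a, b in zip(m, hashlib.sha256(pk).digest()))

def toy_dec(sk, c):
    return bytes(a ^ b for a, b in zip(c, hashlib.sha256(sk).digest()))

def encapsulate(pk):
    m = os.urandom(32)                               # random message
    c = toy_enc(pk, m)
    K = hashlib.sha256(m + c).digest()               # session key bound to the ciphertext
    return c, K

def decapsulate(sk, pk, c):
    m = toy_dec(sk, c)
    if toy_enc(pk, m) != c:                          # re-encryption check rejects malformed ciphertexts
        raise ValueError("invalid ciphertext")
    return hashlib.sha256(m + c).digest()

pk = sk = b"toy keypair where pk == sk (placeholder only)"
c, K1 = encapsulate(pk)
assert K1 == decapsulate(sk, pk, c)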
On Ends-to-Ends Encryption: Asynchronous Group Messaging with Strong Security Guarantees
In the past few years secure messaging has become mainstream, with over a billion active users of end-to-end encryption protocols through apps such as WhatsApp, Signal, Facebook Messenger, Google Allo, Wire and many more. While these users' two-party communications now enjoy very strong security guarantees, it turns out that many of these apps provide, without notifying the users, a weaker property for group messaging: an adversary who compromises a single group member can intercept communications indefinitely.

One reason for this discrepancy in security guarantees is that most existing group messaging protocols are fundamentally synchronous, and thus cannot be used in the asynchronous world of mobile communications. In this paper we show that this is not necessary, presenting a design for a tree-based group key exchange protocol in which no two parties ever need to be online at the same time, which we call Asynchronous Ratcheting Tree (ART). ART achieves strong security guarantees, in particular including post-compromise security. We give a computational security proof for ART's core design as well as a proof-of-concept implementation, showing that ART scales efficiently even to large groups.

Our results show that strong security guarantees for group messaging are achievable even in the modern, asynchronous setting, without resorting to using inefficient point-to-point communications for large groups. By building on standard and well-studied constructions, our hope is that many existing solutions can be applied while still respecting the practical constraints of mobile devices.
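To illustrate the tree-structured key derivation at a high level, here is a toy, centralized sketch (insecure toy group, simplified derivation; not the actual ART protocol or its security mechanisms): each parent secret is derived from a Diffie-Hellman shared secret between its two children, so the root of the tree yields a common group key.

import hashlib

P = 2**127 - 1      # toy modulus, not a secure group
G = 3

def derive_parent(sk_left, pk_right):
    """Parent secret = hash of the DH shared secret between the two child nodes."""
    shared = pow(pk_right, sk_left, P)
    return int.from_bytes(hashlib.sha256(shared.to_bytes(16, "big")).digest(), "big") % P

def tree_root(leaf_secrets):
    """Fold a list of leaf secrets (length a power of two) up to the root secret."""
    level = leaf_secrets
    while len(level) > 1:
        nxt = []
        for sk_l, sk_r in zip(level[::2], level[1::2]):
            nxt.append(derive_parent(sk_l, pow(G, sk_r, P)))   # left secret + right public key
        level = nxt
    return level[0]

group_key = tree_root([11, 22, 33, 44])
# In a tree-based protocol of this shape, a member who replaces its own leaf secret only
# needs to publish the new public keys on its leaf-to-root path; the other members can
# recompute the new root (group key) asynchronously from those public values.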
Lower bounds on communication for multiparty computation of multiple «AND» instances with secret sharing
The present report contains a proof of a linear lower bound for a typical three-party secure computation scheme of $n$ independent $AND$ functions. The goal is to prove a linear communication lower bound for a maximally broad definition of «typical».

The article [DNPR] contains various communication lower bounds for unconditionally secure multiparty computation. In particular, it contains a linear lower bound on the communication complexity of a regular parallel multiplication protocol using an ideal secret sharing scheme. These conditions mean that the protocol starts with the input being secret-shared, with each share of each input field element being a single field element, that all share combinations are used, and that the output is shared in the same way as the input.

In this report we present a weaker property of the secret sharing scheme that still allows one to prove a lower bound on communication that is linear in the number of multiplications. Namely, if two (out of three) parties each have two possible shares, and three of the four resulting combinations decode to the same value, then the remaining combination must also be a valid pair of shares and must reveal the same value.
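One way to state this property (notation assumed here, not taken from the report): writing $\mathrm{Rec}$ for the map that decodes a pair of shares held by the two parties, the requirement is that
\[
\mathrm{Rec}(a_1,b_1)=\mathrm{Rec}(a_1,b_2)=\mathrm{Rec}(a_2,b_1)=v
\;\Longrightarrow\;
(a_2,b_2)\ \text{is a valid share pair and}\ \mathrm{Rec}(a_2,b_2)=v,
\]
for any shares $a_1,a_2$ of the first party and $b_1,b_2$ of the second.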
Message Franking via Committing Authenticated Encryption
We initiate the study of message franking, recently introduced in Facebook's end-to-end encrypted message system. It targets verifiable reporting of abusive messages to Facebook without compromising security guarantees. We capture the goals of message franking via a new cryptographic primitive: compactly committing authenticated encryption with associated data (AEAD). This is an AEAD scheme for which a small part of the ciphertext can be used as a cryptographic commitment to the message contents. Decryption provides, in addition to the message, a value that can be used to open the commitment. Security for franking mandates more than that required of traditional notions associated with commitment. Nevertheless, and despite the fact that AEAD schemes are in general not committing (compactly or otherwise), we prove that many in-use AEAD schemes can be used for message franking by using secret keys as openings. An implication of our results is the first proofs that several in-use symmetric encryption schemes are committing in the traditional sense. We also propose and analyze schemes that retain security even after openings are revealed to an adversary. One is a generalization of the scheme implicitly underlying Facebook's message franking protocol, and another is a new construction that offers improved performance.
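A toy, standard-library-only sketch of the franking interface (not one of the constructions analyzed in the paper, and not a secure AEAD; the hash-based keystream, the unpadded concatenation, and the key derivation are all placeholders): the HMAC tag acts as the compact commitment to the message, and the MAC key is the opening that decryption hands back for reporting.

import hashlib, hmac, os

def _keystream(key, nonce, length):
    """Toy counter-mode keystream from SHA-256; placeholder for a real cipher."""
    out, counter = b"", 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key, nonce, ad, msg):
    enc_key = hashlib.sha256(b"enc" + key).digest()
    mac_key = hashlib.sha256(b"mac" + key).digest()                       # the opening
    ct = bytes(m ^ k for m, k in zip(msg, _keystream(enc_key, nonce, len(msg))))
    tag = hmac.new(mac_key, nonce + ad + msg, hashlib.sha256).digest()    # compact commitment
    return ct, tag

def decrypt(key, nonce, ad, ct, tag):
    enc_key = hashlib.sha256(b"enc" + key).digest()
    mac_key = hashlib.sha256(b"mac" + key).digest()
    msg = bytes(c ^ k for c, k in zip(ct, _keystream(enc_key, nonce, len(ct))))
    if not hmac.compare_digest(tag, hmac.new(mac_key, nonce + ad + msg, hashlib.sha256).digest()):
        raise ValueError("authentication failed")
    return msg, mac_key                                                   # message plus opening

def verify_report(tag, opening, nonce, ad, msg):
    """The service checks a reported message against the commitment using the opening."""
    return hmac.compare_digest(tag, hmac.new(opening, nonce + ad + msg, hashlib.sha256).digest())

key, nonce = os.urandom(32), os.urandom(12)
ct, tag = encrypt(key, nonce, b"header", b"hello")
msg, opening = decrypt(key, nonce, b"header", ct, tag)
assert verify_report(tag, opening, nonce, b"header", msg)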
Securing Memory Encryption and Authentication Against Side-Channel Attacks Using Unprotected Primitives
Memory encryption is used in many devices to protect memory content from attackers with physical access to a device. However, many current memory encryption schemes can be broken using Differential Power Analysis (DPA). In this work, we present MEAS, the first Memory Encryption and Authentication Scheme providing security against DPA attacks. The scheme combines ideas from fresh re-keying and authentication trees by storing encryption keys in a tree structure, thwarting first-order DPA without the need for DPA-protected cryptographic primitives. To this end, the design strictly limits the use of every key to encrypting at most two different plaintext values. MEAS prevents higher-order DPA without changes to the cipher implementation by masking the plaintext values. MEAS is applicable to all kinds of memory, e.g., NVM and RAM, and has memory overhead comparable to existing memory authentication techniques without DPA protection, e.g., 7.3% for a block size fitting standard disk sectors.