All papers in 2016 (Page 7 of 1195 results)
A High Throughput/Gate AES Hardware Architecture by Compressing Encryption and Decryption Datapaths --- Toward Efficient CBC-Mode Implementation
This paper proposes a highly efficient AES hardware architecture that supports both encryption and decryption for the CBC mode. Some conventional AES architectures employ pipelining techniques to enhance the throughput and efficiency. However, such pipelined architectures are often unsuitable because many practical cryptographic applications work in the CBC mode, where block-wise parallelism is not available for encryption. In this paper, we present an efficient AES encryption/decryption hardware design suitable for such block-chaining modes. In particular, new operation-reordering and register-retiming techniques allow us to unify the inversion circuits for encryption and decryption (i.e., SubBytes and InvSubBytes) without any delay overhead. A new unification technique for linear mappings further reduces both the total area and the critical-path delay. Our design employs a common loop architecture and can therefore perform efficiently even in the CBC mode. We also present a shared key scheduling datapath that can work on-the-fly in the proposed architecture. To the best of our knowledge, the proposed architecture has the shortest critical-path delay and is the most efficient in terms of throughput per area among conventional AES encryption/decryption architectures with tower-field S-boxes. We evaluate the performance of the proposed and some conventional datapaths using logic-synthesis results with the TSMC 65-nm standard-cell library and NanGate 45- and 15-nm open-cell libraries. As a result, we confirm that our proposed architecture achieves approximately 53--72% higher efficiency (i.e., a higher bps/GE) than any other conventional counterpart.
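The CBC bottleneck referred to above is easy to see in code: each ciphertext block feeds into the next encryption, so the blocks cannot be computed in parallel. The following minimal Python sketch (our illustration, not the paper's hardware datapath) shows the data dependency, with a stand-in one-way block function where a real design would use AES:

    import hashlib

    BLOCK = 16

    def block_encrypt(key: bytes, block: bytes) -> bytes:
        # Stand-in for AES: NOT a real block cipher, only illustrates data flow.
        return hashlib.sha256(key + block).digest()[:BLOCK]

    def cbc_encrypt(key: bytes, iv: bytes, blocks: list) -> list:
        out, prev = [], iv
        for p in blocks:
            # Each block needs the previous ciphertext block first: this
            # chaining is why block-wise pipelining does not help CBC
            # encryption, motivating the round-based loop architecture above.
            prev = block_encrypt(key, bytes(a ^ b for a, b in zip(p, prev)))
            out.append(prev)
        return out

    print(cbc_encrypt(b"k" * BLOCK, b"\x00" * BLOCK, [b"A" * BLOCK, b"B" * BLOCK]))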
"Make Sure DSA Signing Exponentiations Really are Constant-Time''
TLS and SSH are two of the most commonly used protocols for securing Internet traffic. Many of the implementations of these protocols rely on the cryptographic primitives provided in the OpenSSL library. In this work we disclose a vulnerability in OpenSSL, affecting all versions and forks (e.g. LibreSSL and BoringSSL) since roughly October 2005, which renders the implementation of the DSA signature scheme vulnerable to cache-based side-channel attacks. Exploiting the software defect, we demonstrate the first published cache-based key-recovery attack on these protocols: 260 SSH-2 handshakes to extract a 1024/160-bit DSA host key from an OpenSSH server, and 580 TLS 1.2 handshakes to extract a 2048/256-bit DSA key from an stunnel server.
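To illustrate the class of defect at issue, in toy form only (this is not OpenSSL's code), compare a textbook square-and-multiply, whose secret-dependent branch creates the kind of cache and timing signal such attacks exploit, with a Montgomery-ladder-style exponentiation that performs the same operation sequence for every key bit:

    def modexp_leaky(g: int, k: int, p: int) -> int:
        # Textbook square-and-multiply: the extra multiply on 1-bits is the
        # sort of secret-dependent behaviour a cache attacker can observe.
        r = 1
        for bit in bin(k)[2:]:
            r = r * r % p
            if bit == "1":          # secret-dependent branch
                r = r * g % p
        return r

    def modexp_ladder(g: int, k: int, p: int) -> int:
        # Montgomery ladder: one squaring and one multiplication per bit,
        # whatever the bit's value. (Real constant-time code must also
        # replace this `if` with constant-time swaps.)
        r0, r1 = 1, g % p
        for bit in bin(k)[2:]:
            if bit == "0":
                r1, r0 = r0 * r1 % p, r0 * r0 % p
            else:
                r0, r1 = r0 * r1 % p, r1 * r1 % p
        return r0

    k = 0b101101
    assert modexp_leaky(5, k, 1009) == modexp_ladder(5, k, 1009) == pow(5, k, 1009)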
No Place to Hide: Contactless Probing of Secret Data on FPGAs
Field Programmable Gate Arrays (FPGAs) have been the target of different physical attacks in recent years. Many different countermeasures have already been integrated into these devices to mitigate the existing vulnerabilities. However, semi-invasive attacks from the IC backside have not received enough attention, for the following reasons. First, the conventional semi-invasive attacks from the IC backside --- such as laser fault injection and photonic emission analysis --- cannot be scaled down without further effort to the very latest nanoscale technologies of modern FPGAs and programmable SoCs. Second, the more advanced solutions for secure storage, such as controlled Physically Unclonable Functions (PUFs), make the conventional memory-readout techniques almost impossible. In this paper, however, novel approaches have been explored: attacks based on Laser Voltage Probing (LVP) and its derivatives, as commonly used in Integrated Circuit (IC) debug for nanoscale low-voltage technologies, are successfully launched against a 60 nm technology FPGA. We discuss how these attacks can be used to break modern bitstream encryption implementations. Our attacks were carried out on a proof-of-concept PUF-based key generation implementation. To the best of our knowledge this is the first time that LVP has been used to perform an attack on secure ICs.
Subspace Trail Cryptanalysis and its Applications to AES
We introduce subspace trail cryptanalysis, a generalization of invariant subspace cryptanalysis. With this more generic treatment of subspaces, we no longer rely on specific choices of round constants or subkeys, and the resulting method is thus a potentially more powerful attack vector. Interestingly, subspace trail cryptanalysis in fact includes techniques based on impossible or truncated differentials and integrals as special cases.
Choosing AES-128 as perhaps the most studied cipher, we describe distinguishers for up to 5 rounds of AES with a single unknown key. We report (and practically verify) competitive key-recovery attacks with very low data complexity on 2, 3 and 4 rounds of AES.
Additionally, we consider AES with a secret S-box and present a (generic) technique that allows one to recover the secret key directly, without finding any information about the secret S-box. This approach makes it possible to use, e.g., truncated differential, impossible differential, and integral attacks to find the secret key. Moreover, the technique also works for other AES-like constructions, provided some very common conditions on the S-box and on the MixColumns matrix (or its inverse) hold. As a consequence, such attacks better highlight the security impact of the linear mappings inside an AES-like block cipher.
Finally, we show that our impossible differential attack on 5 rounds of AES with a secret S-box can be turned into a distinguisher for AES in the same setting as the one recently proposed by Sun, Liu, Guo, Qu and Rijmen at CRYPTO 2016.
Arx: An Encrypted Database using Semantically Secure Encryption
In recent years, encrypted databases have emerged as a promising direction that provides data confidentiality without sacrificing functionality: queries are executed on encrypted data. However, many practical proposals rely on a set of weak encryption schemes that have been shown to leak sensitive data.
In this paper, we propose Arx, a practical and functionally rich database system that encrypts the data only with semantically secure encryption schemes. We show that Arx supports real applications such as ShareLaTeX with a modest performance overhead.
Mitigating SAT Attack on Logic Locking
Logic locking is a technique that has been proposed to protect outsourced IC designs from piracy and counterfeiting by untrusted foundries. It can hide the functionality of an IC by inserting key-controlled logic gates into the original design. The locked IC preserves the correct functionality only when a correct key is provided. Recently, the security of logic locking has been threatened by a new type of attack, the satisfiability (SAT) based attack, which can decipher the correct key of most logic locking techniques within a few hours [11], even for a reasonably large number of key bits. This type of attack iteratively solves SAT formulas that progressively eliminate incorrect keys until the circuit is unlocked. In this paper, we present a specially designed circuit block (referred to as the Anti-SAT block) to thwart the SAT attack. We show that the number of SAT attack iterations needed to reveal the correct key in a circuit comprising an Anti-SAT block is an exponential function of the key size, thereby making the SAT attack computationally infeasible. This is a substantial result because a) we illustrate how to construct the functionality of the Anti-SAT block, and b) we use a mathematically rigorous approach to prove that, if chosen correctly, the Anti-SAT block makes the SAT attack computationally infeasible (exponential in the key size). Moreover, through our experiments, we illustrate the effectiveness of our approach in securing modern chips fabricated in untrusted foundries.
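As we read the construction, the Anti-SAT block combines two complementary key-controlled copies of a function g with a single satisfying input, so that the block outputs 0 on every input exactly when the two key halves agree, while a wrong key corrupts only very few input patterns; this is why each SAT iteration can rule out so few wrong keys. A toy Python sketch of that flavor of block (our illustration, with g chosen as an n-input AND; the paper's details may differ):

    def g(x: int, n: int) -> int:
        return int(x == (1 << n) - 1)          # n-input AND: one satisfying input

    def anti_sat(x: int, k1: int, k2: int, n: int) -> int:
        # B(X) = g(X ^ K1) AND NOT g(X ^ K2); XORed into an internal wire.
        return g(x ^ k1, n) & (1 ^ g(x ^ k2, n))

    n, key = 4, 0b1010
    # Correct key pair: the block is constantly 0, the circuit is unchanged.
    assert all(anti_sat(x, key, key, n) == 0 for x in range(2 ** n))
    # Wrong key pair: only a single input pattern is corrupted, so each SAT
    # iteration eliminates very few of the wrong key pairs.
    print(sum(anti_sat(x, key, 0b0110, n) for x in range(2 ** n)))  # -> 1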
Dimension-Preserving Reductions from LWE to LWR
The Learning with Rounding (LWR) problem was first introduced by Banerjee, Peikert, and Rosen (Eurocrypt 2012) as a \emph{derandomized} form of the standard Learning with Errors (LWE) problem. The original motivation of LWR was as a building block for constructing efficient, low-depth pseudorandom functions on lattices. It has since been used to construct reusable computational extractors, lossy trapdoor functions, and deterministic encryption.
In this work we show two (incomparable) dimension-preserving reductions from LWE to LWR in the case of a \emph{polynomial-size modulus}. Prior works either required a superpolynomial modulus $q$, or lost at least a factor $\log(q)$ in the dimension of the reduction. A direct consequence of our improved reductions is an improvement in parameters (i.e. security and efficiency) for each of the known applications of poly-modulus LWR.
Our results directly generalize to the ring setting. Indeed, our formal analysis is performed over ``module lattices,'' as defined by Langlois and Stehlé (DCC 2015), which generalize both the general lattice setting of LWE and the ideal lattice setting of RLWE as the single notion M-LWE. We hope that taking this broader perspective will lead to further insights of independent interest.
Secure obfuscation in a weak multilinear map model: A simple construction secure against all known attacks
All known candidate indistinguishability obfuscation (iO) schemes rely on candidate multilinear maps. Until recently, the strongest proofs of security available for iO candidates were in a generic model that only allows "honest" use of the multilinear map. Most notably, in this model the zero-test procedure only reveals whether an encoded element is 0, and nothing more.
However, this model is inadequate: there have been several attacks on multilinear maps that exploit extra information revealed by the zero-test procedure. In particular, the authors [Crypto’16] recently gave a polynomial-time attack on several iO candidates when instantiated with the multilinear maps of Garg, Gentry, and Halevi [Eurocrypt’13], and also proposed a new "weak multilinear map model" that captures all known polynomial-time attacks on GGH13.
Subsequent to those attacks, Garg, Mukherjee, and Srinivasan [ePrint’16] gave a beautiful new candidate iO construction, using a new variant of the GGH13 multilinear map candidate, and proved its security in the weak multilinear map model assuming an explicit PRF in NC^1.
In this work, we give a simpler candidate iO construction, which can be seen as a small modification or generalization of the original iO candidate of Garg, Gentry, Halevi, Raykova, Sahai, and Waters [FOCS’13], and we prove its security in the weak multilinear map model. Our work has a number of benefits over that of GMS16.
• Our construction and analysis are simpler. In particular, the proof of our security theorem is 4 pages, versus 15 pages in GMS16.
• We do not require any change to the original GGH13 multilinear map candidate.
• We prove the security of our candidate under a more general assumption. One way that our assumption can be true is if there exists a PRF in NC^1.
• GMS16 required an explicit PRF in NC^1 to be "hard-wired" into their obfuscation candidate. In contrast, our scheme does not require any such hard-wiring. In fact, roughly speaking, our obfuscation candidate will depend only on the minimal size of such a PRF, and not on any other details of the PRF.
Bash-f: another LRX sponge function
We present the Bash family of hashing algorithms based on the sponge paradigm. A core element of this family is the Bash-f sponge function, which belongs to the LRX (Logical-Rotation-Xor) class of symmetric cryptography schemes. We describe the components of Bash-f: a nonlinear mapping, linear diffusion mappings, and a permutation of the words of the hash state. For each component, we establish quality criteria that are as detailed as possible, to make the choice of the component maximally objective and transparent.
A Modular Treatment of Cryptographic APIs: The Symmetric-Key Case
Application Programming Interfaces (APIs) to cryptographic tokens like smartcards and Hardware Security Modules (HSMs) provide users with commands to manage and use cryptographic keys stored on trusted hardware. Their design is mainly guided by industrial standards without clear security promises.
In this paper we propose cryptographic models for the security of such APIs. The key feature of our approach is that it enables modular analysis. Specifically, we show that a secure cryptographic API can be obtained by combining a secure API for key-management together with secure implementations of, for instance, encryption or message authentication.
Our models are the first to provide such compositional guarantees while considering realistic adversaries that can adaptively corrupt keys stored on tokens. We also provide a proof-of-concept instantiation (from a deterministic authenticated-encryption scheme) of the key-management portion of a cryptographic API.
Breaking the Circuit Size Barrier for Secure Computation Under DDH
Under the Decisional Diffie-Hellman (DDH) assumption, we present a 2-out-of-2 secret sharing scheme that supports a compact evaluation of branching programs on the shares. More concretely, there is an evaluation algorithm $\mathsf{Eval}$ with a single bit of output, such that if an input $w\in\{0,1\}^n$ is shared into $(w^0,w^1)$, then for any deterministic branching program $P$ of size $S$ we have that $\mathsf{Eval}(P,w^0)\oplus \mathsf{Eval}(P,w^1)=P(w)$, except with at most $\delta$ failure probability. The running time of the sharing algorithm is polynomial in $n$ and the security parameter $\lambda$, and that of $\mathsf{Eval}$ is polynomial in $S$, $\lambda$, and $1/\delta$. This applies as a special case to boolean formulas of size $S$ or boolean circuits of depth $\log S$. We also present a public-key variant that enables homomorphic computation on inputs contributed by multiple clients.
The above result implies the following DDH-based applications:
- A secure 2-party computation protocol for evaluating any branching program of size $S$, where the communication complexity is linear in the input size and only the running time grows with $S$.
- A secure 2-party computation protocol for evaluating any layered boolean circuit of size $S$ and $m$ outputs with communication complexity $O(S/\log S)+m\cdot\mathrm{poly}(\lambda)$.
- A 2-party {\em function secret sharing} scheme, as defined by Boyle et al. (Eurocrypt 2015), for general branching programs (with inverse polynomial error probability).
- A 1-round 2-server {\em private information retrieval} scheme supporting general searches expressed by branching programs.
Network Oblivious Transfer
Motivated by the goal of improving the concrete efficiency of secure multiparty computation (MPC), we study the possibility of implementing an infrastructure for MPC. We propose an infrastructure based on oblivious transfer (OT), which would consist of OT channels between some pairs of parties in the network. We devise information-theoretically secure protocols that allow additional pairs of parties to establish secure OT correlations using the help of other parties in the network in the presence of a dishonest majority. Our main technical contribution is an upper bound that matches a lower bound of Harnik, Ishai, and Kushilevitz (Crypto 2007), who studied the number of OT channels necessary and sufficient for MPC. In particular, we characterize which n-party OT graphs G allow t-secure computation of OT correlations between all pairs of parties, showing that this is possible if and only if the complement of G does not contain the complete bipartite graph K_{n-t,n-t} as a subgraph.
Efficient Zero-Knowledge Proof of Algebraic and Non-Algebraic Statements with Applications to Privacy Preserving Credentials
Practical anonymous credential systems are generally built around sigma-protocol ZK proofs. This requires that credentials be based on specially formed signatures. Here we ask whether we can instead use a standard signature (say, RSA or (EC)DSA) that involves message formatting and hashing as a credential, and still provide privacy. Existing techniques do not provide efficient solutions for proving knowledge of such a signature: on the one hand, ZK proofs based on garbled circuits (Jawurek et al. 2013) give efficient proofs for checking the formatting of messages and evaluating hash functions; on the other hand, they are expensive for checking algebraic relations such as RSA or discrete log, which can be done efficiently with sigma protocols.
We design new constructions obtaining the best of both worlds: combining the efficiency of the garbled circuit approach for non-algebraic statements and that of sigma protocols for algebraic ones. We then discuss how to use these as building-blocks to construct privacy-preserving credential systems based on standard RSA and (EC)DSA signatures.
Other applications of our techniques include anonymous credentials with more complex policies, the ability to efficiently switch between commitments (and signatures) in different groups, and secure two-party computation on committed/signed inputs.
TV-PUF : A Fast Lightweight Analog Physically Unclonable Function
A Physical Unclonable Function (PUF) is a hardware analog of a one-way function that can address hardware security issues such as device authentication, generating secret keys, producing seeds for random number generators, etc. Traditional silicon PUFs are based on delay (ring-oscillator PUFs and arbiter PUFs) or memory structures (like SRAM). In this paper, we propose a novel idea for a very fast, lightweight and robust analog PUF that exploits the susceptibility of the threshold voltage ($V_{th}$) of MOSFETs to process variations. We call this the Threshold Voltage PUF (TV-PUF). Extensive implementations and simulations show improvements in quality metrics such as uniformity of the PUF, intra-die distance (the reliability metric) and inter-die distance (the uniqueness metric) for 64-bit key generation. For a 1 GHz sense-amplifier clock, our design consumes 0.18 $\mu$W/bit with 50% uniqueness and 51% reliability. We also show that the TV-PUF characteristics are independent of the technology node.
UC Commitments for Modular Protocol Design and Applications to Revocation and Attribute Tokens
Complex cryptographic protocols are often designed from simple cryptographic primitives, such as signature schemes, encryption schemes, verifiable random functions, and zero-knowledge proofs, by bridging between them with commitments to some of their inputs and outputs. Unfortunately, the known universally composable (UC) functionalities for commitments and the cryptographic primitives mentioned above do not allow such constructions of higher-level protocols as hybrid protocols. Therefore, protocol designers typically resort to primitives with property-based definitions, often resulting in complex monolithic security proofs that are prone to mistakes and hard to verify.
We address this gap by presenting a UC functionality for non-interactive commitments that enables modular constructions of complex protocols within the UC framework. We also show how the new functionality can be used to construct hybrid protocols that combine different UC functionalities and use commitments to ensure that the same inputs are provided to different functionalities.
We further provide UC functionalities for attribute tokens and revocation that can be used as building blocks together with our UC commitments. As an example of building a complex system from these new UC building blocks, we provide a construction (a hybrid protocol) of anonymous attribute tokens with revocation. Unlike existing accumulator-based schemes, our scheme allows one to accumulate several revocation lists into a single commitment value and to hide the revocation status of a user from other users and verifiers.
Fine-grained Cryptography
Fine-grained cryptographic primitives are ones that are secure against adversaries with a-priori bounded polynomial resources (time, space or parallel-time), where the honest algorithms use less resources than the adversaries they are designed to fool. Such primitives were previously studied in the context of time-bounded adversaries (Merkle, CACM 1978), space-bounded adversaries (Cachin and Maurer, CRYPTO 1997) and parallel-time-bounded adversaries (Håstad, IPL 1987). Our goal is to show unconditional security of these constructions when possible, or base security on widely believed separation of worst-case complexity classes. We show:
NC$^1$-cryptography: Under the assumption that NC$^1 \neq \oplus$L/poly, we construct one-way functions, pseudo-random generators (with sub-linear stretch), collision-resistant hash functions and most importantly, public-key encryption schemes, all computable in NC$^1$ and secure against all NC$^1$ circuits. Our results rely heavily on the notion of randomized encodings pioneered by Applebaum, Ishai and Kushilevitz, and crucially, make {\em non-black-box} use of randomized encodings for logspace classes.
AC$^0$-cryptography: We construct (unconditionally secure) pseudo-random generators with arbitrary polynomial stretch, weak pseudo-random functions, secret-key encryption and, perhaps most interestingly, {\em collision-resistant hash functions}, computable in AC$^0$ and secure against all AC$^0$ circuits.
Previously, one-way permutations and pseudo-random generators (with linear stretch) computable in AC$^0$ and secure against AC$^0$ circuits were known from the works of Håstad and Braverman.
Automatic Search of Meet-in-the-Middle and Impossible Differential Attacks
Tracking bits through block ciphers and optimizing the attacks at hand is one of the tedious tasks symmetric cryptanalysts have to deal with. It would be nice if a program could automatically handle them, at least for well-known attack techniques, so that cryptanalysts could focus only on finding new attacks. However, current automatic tools cannot be used as is, either because they are tailored to specific ciphers or because they only recover a specific part of an attack and cryptographers are still needed to finalize the analysis.
In this paper we describe a generic algorithm that exhaustively searches for the best meet-in-the-middle and impossible differential attacks on a very large class of block ciphers, from byte-oriented to bit-oriented ones, covering SPN, Feistel, and Lai-Massey constructions. Contrary to previous tools, which aim to find the best differential/linear paths in the cipher and leave it to the cryptanalyst to turn these paths into an attack, we automatically find the best attacks by considering both the cipher and the key schedule algorithms. The building blocks of our algorithm are two sub-algorithms designed to find the best simple meet-in-the-middle attacks and the best impossible truncated differential attacks, respectively.
We recover and improve many attacks on AES, mCRYPTON, SIMON, IDEA, KTANTAN, PRINCE and ZORRO. We show that this tool can be used by designers to improve their analysis.
Key-alternating Ciphers and Key-length Extension: Exact Bounds and Multi-user Security
The best existing bounds on the concrete security of key-alternating ciphers (Chen and Steinberger, EUROCRYPT '14) are only asymptotically tight, and the quantitative gap with the best existing attacks remains numerically substantial for concrete parameters. Here, we prove exact bounds on the security of key-alternating ciphers and extend them to XOR cascades, the most efficient construction for key-length extension. Our bounds essentially match, for any possible query regime, the advantage achieved by the best existing attack.
Our treatment also extends to the multi-user regime. We show that the multi-user security of key-alternating ciphers and XOR cascades is very close to the single-user case, i.e., given enough rounds, it does not substantially decrease as the number of users increases. Along the way, we also provide the first explicit treatment of multi-user security for key-length extension, which is particularly relevant given the significant security loss of block ciphers (even ideal ones) in the multi-user setting.
The common denominator behind our results is a set of new techniques for information-theoretic indistinguishability proofs that both extend and refine existing proof techniques like the H-coefficient method.
Backdoors in Pseudorandom Number Generators: Possibility and Impossibility Results
Inspired by the Dual EC DRBG incident, Dodis et al. (Eurocrypt 2015) initiated the formal study of backdoored PRGs, showing that backdoored PRGs are equivalent to public-key encryption schemes, giving constructions for backdoored PRGs (BPRGs), and showing how BPRGs can be ``immunised'' by careful post-processing of their outputs. In this paper, we continue the foundational line of work initiated by Dodis et al., providing both positive and negative results.
We first revisit the backdoored PRG setting of Dodis et al., showing that PRGs can be more strongly backdoored than was previously envisaged. Specifically, we give efficient constructions of BPRGs for which, given a single generator output, Big Brother can recover the initial state and, therefore, all outputs of the BPRG. Moreover, our constructions are forward-secure in the traditional sense for a PRG, resolving an open question of Dodis et al. in the negative.
We then turn to the question of the effectiveness of backdoors in robust PRNGs with input (cf. Dodis et al., ACM CCS 2013): generators in which the state can be regularly refreshed using an entropy source, and in which, provided sufficient entropy has been made available since the last refresh, the outputs will appear pseudorandom. The presence of a refresh procedure might suggest that Big Brother could be defeated, since he would not be able to predict the values of the PRNG state backwards or forwards through the high-entropy refreshes. Unfortunately, we show that this intuition is not correct: we are also able to construct robust PRNGs with input that are backdoored in a backwards sense. Namely, given a single output, Big Brother is able to rewind through a number of refresh operations to earlier ``phases'', and recover all the generator's outputs in those earlier phases.
Finally, and ending on a positive note, we give an impossibility result: we provide a bound on the number of previous phases that Big Brother can compromise as a function of the state-size of the generator: smaller states provide more limited backdooring opportunities for Big Brother.
Last updated: 2016-08-14
Indistinguishability Obfuscation Does Not Reduce to Structured Languages
We prove that indistinguishability obfuscation (iO) and one-way functions do not naturally reduce to any language within $NP \cap coNP$. This is proved within the framework introduced by Asharov and Segev (FOCS '15) that captures the vast majority of techniques that have been used so far in iO-based constructions.
Our approach is based on a two-fold generalization of a classic result due to Impagliazzo and Naor (Structure in Complexity Theory '88) on non-deterministic computations via decision trees. First, we generalize their approach from the, rather restrictive, model of decision trees to arbitrary computations. Then, we further generalize the argument within the Asharov-Segev framework.
Our result explains why iO does not seem to suffice for certain applications, even when combined with the assumption that one-way functions exist. In these cases it appears that additional, more structured, assumptions are necessary. In addition, our result suggests that any attempt for ruling out the existence of iO by reducing it "ad absurdum" to a potentially efficiently-decidable language may encounter significant difficulties.
TumbleBit: An Untrusted Bitcoin-Compatible Anonymous Payment Hub
This paper presents TumbleBit, a new unidirectional unlinkable payment hub that is fully compatible with today's Bitcoin protocol. TumbleBit allows parties to make fast, anonymous, off-blockchain payments through an untrusted intermediary called the Tumbler. TumbleBit's anonymity properties are similar to classic Chaumian eCash: no one, not even the Tumbler, can link a payment from its payer to its payee. Every payment made via TumbleBit is backed by bitcoins, and comes with a guarantee that Tumbler can neither violate anonymity, nor steal bitcoins, nor ``print money'' by issuing payments to itself. We prove the security of TumbleBit using the real/ideal world paradigm and the random oracle model. Security follows from the standard RSA assumption and ECDSA unforgeability. We implement TumbleBit, mix payments from 800 users and show that TumbleBit's off-blockchain payments can complete in seconds.
Structure vs Hardness through the Obfuscation Lens
Much of modern cryptography, starting from public-key encryption and going beyond, is based on the hardness of structured (mostly algebraic) problems like factoring, discrete log, or finding short lattice vectors. While structure is perhaps what enables advanced applications, it also puts the hardness of these problems in question. In particular, this structure often puts them in low (and so-called structured) complexity classes such as NP$\cap$coNP or statistical zero-knowledge (SZK).
Is this structure really necessary? For some cryptographic primitives, such as one-way permutations and homomorphic encryption, we know that the answer is yes — they imply hard problems in NP$\cap$coNP and SZK, respectively. In contrast, one-way functions do not imply such hard problems, at least not by black-box reductions. Yet, for many basic primitives such as public-key encryption, oblivious transfer, and functional encryption, we do not have any answer.
We show that the above primitives, and many others, do not imply hard problems in NP$\cap$coNP or SZK via black-box reductions. In fact, we first show that even the very powerful notion of Indistinguishability Obfuscation (IO) does not imply such hard problems, and then deduce the same for a large class of primitives that can be constructed from IO.
Towards Sound Fresh Re-Keying with Hard (Physical) Learning Problems
Most leakage-resilient cryptographic constructions aim at limiting the information adversaries can obtain about secret keys. In the case of asymmetric algorithms, this is usually obtained by secret sharing (aka masking) the key, which is made easy by their algebraic properties. In the case of symmetric algorithms, it is rather key evolution that is exploited. While more efficient, the scope of this second solution is limited to stateful primitives that easily allow for key evolution, such as stream ciphers. Unfortunately, it seems generally hard to avoid the need for (at least one) execution of a stateless primitive, both for encryption and authentication protocols.
As a result, fresh re-keying has emerged as an alternative solution, in which a block cipher that is hard to protect against side-channel attacks is re-keyed with a stateless function that is easy to mask. While previous proposals in this direction were all based on heuristic arguments, we propose two new constructions that, for the first time, allow a more formal treatment of fresh re-keying. More precisely, we reduce the security of our re-keying schemes to two building blocks that can be of independent interest. The first one is an assumption of Learning Parity with Leakage, which leverages the noise that is available in side-channel measurements. The second one is based on the Learning With Rounding assumption, which can be seen as an alternative solution for low-noise implementations. Both constructions are efficient and easy to mask, since they are key homomorphic or almost key homomorphic.
Faster Evaluation of SBoxes via Common Shares
We describe a new technique for improving the efficiency of the masking countermeasure against side-channel attacks. Our technique is based on using common shares between secret variables, in order to reduce the number of finite field multiplications. Our algorithms are proven secure in the ISW probing model with $n \geq t+1$ shares against $t$ probes. For AES, we get an equivalent of $2.8$ non-linear multiplications for every SBox evaluation, instead of $4$ in the Rivain-Prouff countermeasure. We obtain similar improvements for other block ciphers. Our technique is easy to implement and performs relatively well in practice, with roughly a 20% speed-up compared to existing algorithms.
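For context, the expensive operation being counted here is the standard ISW multiplication gadget (the paper's contribution is reducing how many of these are needed via common shares). A minimal sketch of that classical building block over GF(2), not the paper's optimized algorithm:

    import secrets

    def share(x: int, n: int) -> list:
        # Split bit x into n XOR-shares.
        s = [secrets.randbits(1) for _ in range(n - 1)]
        s.append(x ^ (sum(s) & 1))
        return s

    def isw_and(a: list, b: list) -> list:
        # ISW multiplication (Ishai-Sahai-Wagner, Crypto 2003): returns
        # XOR-shares of (XOR a) AND (XOR b), secure against t < n probes.
        n = len(a)
        c = [a[i] & b[i] for i in range(n)]
        for i in range(n):
            for j in range(i + 1, n):
                r = secrets.randbits(1)
                c[i] ^= r
                c[j] ^= r ^ (a[i] & b[j]) ^ (a[j] & b[i])
        return c

    a, b = share(1, 3), share(1, 3)
    assert sum(isw_and(a, b)) & 1 == 1      # reconstructs 1 AND 1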
Simple Key Enumeration (and Rank Estimation) using Histograms: an Integrated Approach
The main contribution of this paper is a new key enumeration algorithm that combines the conceptual simplicity of the rank estimation algorithm of Glowacz et al. (FSE 2015) with the parallelizability of the enumeration algorithms of Bogdanov et al. (SAC 2015) and Martin et al. (ASIACRYPT 2015). Our new algorithm is based on histograms. It allows obtaining simple bounds on the (small) rounding errors that it introduces, and it leads to straightforward parallelization. We further show that it can minimize the bandwidth of distributed key testing by selecting parameters that maximize the factorization of the lists of key candidates produced by the enumeration, which can be highly beneficial, e.g., if these tests are performed by a hardware coprocessor. We also put forward that the conceptual simplicity of our algorithm translates into efficient implementations (that slightly improve the state of the art). As an additional consolidating effort, we finally describe an open-source implementation of this new enumeration algorithm, combined with the FSE 2015 rank estimation one, that we make available with the paper.
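The histogram idea is compact enough to sketch: quantize each subkey's log-likelihood scores onto a common grid, convolve the per-subkey histograms so that bin indices add like scores, and read off how many full keys score at least as well as the true key. The following Python sketch is our simplified illustration of this flavor of rank estimation, not the paper's implementation:

    import numpy as np

    def rank_estimate(score_lists, true_key_scores, nbins=1000):
        # Quantize scores to integer grid indices of a common bin width.
        all_scores = np.concatenate(score_lists)
        width = (all_scores.max() - all_scores.min()) / nbins

        def hist(scores):
            idx = np.round(np.asarray(scores) / width).astype(int)
            h = np.zeros(idx.max() - idx.min() + 1)
            np.add.at(h, idx - idx.min(), 1)
            return h, idx.min()

        # Convolving histograms adds quantized scores across subkeys.
        total, offset = np.array([1.0]), 0
        for s in score_lists:
            h, o = hist(s)
            total, offset = np.convolve(total, h), offset + o

        # Count full keys scoring at least as well as the true key
        # (accurate up to the quantization error of the binning).
        true_idx = int(round(sum(true_key_scores) / width)) - offset
        return total[max(true_idx, 0):].sum()

    scores = [np.log(np.random.dirichlet(np.ones(256))) for _ in range(4)]
    print(rank_estimate(scores, [s[0] for s in scores]))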
Design in Type-I, Run in Type-III: Fast and Scalable Bilinear-Type Conversion using Integer Programming
Bilinear-type conversion transforms cryptographic schemes designed over symmetric pairing groups, instantiated with imperilled curves, into schemes that run over more secure and efficient asymmetric groups. In this paper we introduce a novel type-conversion method called {\em IPConv} using 0-1 Integer Programming. Instantiated with a widely available IP solver, it instantly converts existing intricate schemes, and can process large-scale schemes that involve more than a thousand variables and hundreds of pairings.
Such a quick and scalable method allows a new approach to designing cryptographic schemes over asymmetric bilinear groups: designers can work without paying much attention to the asymmetry of the computation, and the converted scheme still runs well in the asymmetric setting. We demonstrate the usefulness of conversion-aided design by presenting somewhat counter-intuitive examples where converted DLIN-based Groth-Sahai proofs are more compact than manually built SXDH-based proofs.
FourQ on FPGA: New Hardware Speed Records for Elliptic Curve Cryptography over Large Prime Characteristic Fields
We present fast and compact implementations of FourQ (ASIACRYPT 2015) on field-programmable gate arrays (FPGAs), and demonstrate, for the first time, the high efficiency of this new elliptic curve on reconfigurable hardware. By adapting FourQ's algorithms to hardware, we design FPGA-tailored architectures that are significantly faster than any other ECC alternative over large prime characteristic fields. For example, we show that our single-core and multi-core implementations can compute at a rate of 6389 and 64730 scalar multiplications per second, respectively, on a Xilinx Zynq-7020 FPGA, which represent factor-2.5 and factor-2 speedups in comparison with the corresponding variants of the fastest Curve25519 implementation on the same device. These results show the potential of deploying FourQ on hardware for high-performance and embedded security applications. All the presented implementations exhibit regular, constant-time execution, protecting against timing and simple side-channel attacks.
A Secure One-Roundtrip Index for Range Queries
We present the first one-roundtrip protocol for performing range, range-aggregate, and order-by-limit queries over encrypted data that both provides semantic security and is efficient. We accomplish this task by chaining garbled circuits over a search tree, using branch-chained garbled circuits, as well as by carefully designing the garbled circuits themselves. We then show how to build a database index that can answer order comparison queries. We implemented and evaluated our index. We demonstrate that queries as well as inserts and updates are efficient, and that our index outperforms previous interactive constructions. This index is part of the Arx database system, whose source code will be released in the near future.
Adversary-dependent Lossy Trapdoor Function from Hardness of Factoring Semi-smooth RSA Subgroup Moduli
Lossy trapdoor functions (LTDFs), proposed by Peikert and Waters (STOC'08), are known to have a number of applications in cryptography. They have been constructed based on various assumptions, which include the quadratic residuosity (QR) and decisional composite residuosity (DCR) assumptions, which are factoring-based {\it decision} assumptions. However, there is no known construction of an LTDF based on the factoring assumption or other factoring-related search assumptions. In this paper, we first define a notion of {\it adversary-dependent lossy trapdoor functions} (ad-LTDFs) that is a weaker variant of LTDFs. Then we construct an ad-LTDF based on the hardness of factorizing RSA moduli of a special form called semi-smooth RSA subgroup (SS) moduli proposed by Groth (TCC'05). Moreover, we show that ad-LTDFs can replace LTDFs in many applications. In particular, we obtain the first factoring-based deterministic encryption scheme that satisfies the security notion defined by Boldyreva et al. (CRYPTO'08) without relying on a decision assumption. Besides direct applications of ad-LTDFs, by a similar technique, we construct a chosen-ciphertext secure public key encryption scheme whose ciphertext overhead is the shortest among existing schemes based on the factoring assumption w.r.t. SS moduli.
Concurrent Non-Malleable Commitments (and More) in 3 Rounds
The round complexity of commitment schemes secure against man-in-the-middle attacks has been the focus of extensive research for about 25 years. The recent breakthrough of Goyal, Pandey and Richelson [STOC 2016] showed that 3 rounds are sufficient for (one-left, one-right) non-malleable commitments. This result matches a lower bound of [Pas13]. This state of affairs still leaves open the intriguing problem of constructing 3-round concurrent non-malleable commitment schemes.
In this paper we solve the above open problem by showing how to transform any 3-round (one-left, one-right) non-malleable commitment scheme (with some extractability property) into a 3-round concurrent non-malleable commitment scheme. Our transform makes use of complexity leveraging and, when instantiated with the construction of [GPR16], gives a 3-round concurrent non-malleable commitment scheme from one-way permutations secure w.r.t. subexponential-time adversaries.
We also show how our 3-round concurrent non-malleable commitment scheme can be used for 3-round arguments of knowledge and in turn for 3-round identification schemes secure against concurrent man-in-the-middle attacks.
Bounded Indistinguishability and the Complexity of Recovering Secrets
Motivated by cryptographic applications, we study the notion of {\em bounded indistinguishability}, a natural relaxation of the well-studied notion of bounded independence. We say that two distributions $\mu$ and $\nu$ over $\Sigma^n$ are {\em $k$-wise indistinguishable} if their projections to any $k$ symbols are identical. We say that a function $f\colon \Sigma^n \to \{0,1\}$ is {\em $\epsilon$-fooled by $k$-wise indistinguishability} if $f$ cannot distinguish with advantage $\epsilon$ between any two $k$-wise indistinguishable distributions $\mu$ and $\nu$ over $\Sigma^n$.
We are interested in characterizing the class of functions that are fooled by $k$-wise indistinguishability. While the case of $k$-wise independence (corresponding to one of the distributions being uniform) is fairly well understood, the more general case remained unexplored. When $\Sigma = \{0,1\}$, we observe that whether $f$ is fooled is closely related to its approximate degree. For larger alphabets $\Sigma$, we obtain several positive and negative results. Our results imply the first efficient secret sharing schemes with a high secrecy threshold in which the secret can be reconstructed in AC$^0$. More concretely, we show that for every $0 < \sigma < \rho \leq 1$ it is possible to share a secret among $n$ parties so that any set of fewer than $\sigma n$ parties can learn nothing about the secret, any set of at least $\rho n$ parties can reconstruct the secret, and where both the sharing and the reconstruction are done by constant-depth circuits of size $\mathrm{poly}(n)$. We present additional cryptographic applications of our results to low-complexity secret sharing, visual secret sharing, leakage-resilient cryptography, and protecting against ``selective failure'' attacks.
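The central definition can be checked by brute force at toy sizes. The sketch below (ours) verifies the classic example behind such schemes: the uniform distributions on even-parity and odd-parity strings are 2-wise indistinguishable, yet looking at all $n=3$ symbols (i.e., computing the parity) tells them apart.

    from itertools import combinations

    def projection(dist, coords):
        # Marginal of a distribution (dict: tuple -> prob) on given coordinates.
        marg = {}
        for x, p in dist.items():
            key = tuple(x[i] for i in coords)
            marg[key] = marg.get(key, 0.0) + p
        return marg

    def k_wise_indistinguishable(mu, nu, k, n):
        # Definition checked directly: all projections to k symbols agree.
        for coords in combinations(range(n), k):
            pm, pn = projection(mu, coords), projection(nu, coords)
            if any(abs(pm.get(t, 0) - pn.get(t, 0)) > 1e-12
                   for t in set(pm) | set(pn)):
                return False
        return True

    mu = {x: 0.25 for x in [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]}  # even parity
    nu = {x: 0.25 for x in [(1, 1, 1), (1, 0, 0), (0, 1, 0), (0, 0, 1)]}  # odd parity
    assert k_wise_indistinguishable(mu, nu, 2, 3)
    assert not k_wise_indistinguishable(mu, nu, 3, 3)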
The Multi-User Security of Authenticated Encryption: AES-GCM in TLS 1.3
We initiate the study of multi-user (mu) security of authenticated encryption (AE) schemes as a way to rigorously formulate, and answer, questions about the "randomized nonce" mechanism proposed for the use of the AE scheme GCM in TLS 1.3. We (1) give definitions of mu ind (indistinguishability) and mu kr (key recovery) security for AE, (2) characterize the intent of nonce randomization as improved mu security that serves as a defense against mass surveillance, (3) cast the method as a (new) AE scheme RGCM, (4) analyze and compare the mu security of both GCM and RGCM in the model where the underlying block cipher is ideal, showing that the mu security of the latter is indeed superior in many practical contexts to that of the former, and (5) propose an alternative AE scheme XGCM having the same efficiency as RGCM but better mu security and a simpler, more modular design.
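Concretely, the mechanism in question derives each record's GCM nonce by XORing the sequence number with a secret random per-connection IV, rather than sending a counter in the clear. A minimal sketch of that derivation using the pyca/cryptography AEAD API (our illustration of the mechanism; the paper's formal scheme RGCM models this random mask as part of the key):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)
    write_iv = os.urandom(12)                  # secret random nonce mask

    def seal(seq: int, plaintext: bytes, aad: bytes = b"") -> bytes:
        # Per-record nonce = mask XOR sequence number (TLS 1.3 style), so
        # nonces never repeat under one key yet look random from outside.
        nonce = bytes(a ^ b for a, b in zip(write_iv, seq.to_bytes(12, "big")))
        return AESGCM(key).encrypt(nonce, plaintext, aad)

    ct = seal(0, b"hello")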
Garbling Scheme for Formulas with Constant Size of Garbled Gates
We provide a garbling scheme which creates garbled circuits of a very small constant size (four bits per gate) for circuits with fan-out one (formulas). For arbitrary fan-out, we additionally need only two ciphertexts per additional connection of each gate output wire. We make use of a trapdoor permutation for which we define a generalized notion of correlation robustness. We show that our notion is implied by PRIV-security, a notion for deterministic (searchable) encryption. We prove our scheme secure in the programmable random oracle model.
Deniable Attribute Based Encryption for Branching Programs from LWE
Deniable encryption (Canetti et al., CRYPTO '97) is an intriguing primitive that provides a security guarantee not only against eavesdropping attacks, as required by semantic security, but also against stronger coercion attacks performed after the fact. The concept of deniability has since proven useful and powerful in many other contexts, such as leakage resilience, adaptive security of protocols, and security against selective opening attacks. Despite its conceptual usefulness, our understanding of how to construct deniable primitives under standard assumptions is restricted.
In particular, from standard lattice assumptions, i.e., Learning with Errors (LWE), we have only flexibly deniable public-key encryption schemes with non-negligible detection advantage, whereas under the much stronger assumption of indistinguishability obfuscation we can obtain at least fully sender-deniable PKE and computation. How to achieve deniability for other, more advanced encryption schemes under standard assumptions remains an interesting open question.
In this work, we construct a flexibly bi-deniable Attribute-Based Encryption (ABE) scheme for all polynomial-size Branching Programs from LWE.
Our techniques involve new ways of manipulating Gaussian noise that may be of independent interest, and lead to a significantly sharper analysis of noise growth in Dual Regev type encryption schemes. We hope these ideas give insight into achieving deniability and related properties for further, advanced cryptographic systems from lattice assumptions.
Compactness vs Collusion Resistance in Functional Encryption
We present two general constructions that can be used to combine any two functional encryption (FE) schemes (supporting a bounded number of key queries) into a new functional encryption scheme supporting a larger number of key queries. By using these constructions iteratively, we transform any primitive FE scheme that supports a single functional key query (from a sufficiently general class of functions) and has certain weak compactness properties into a collusion-resistant FE scheme with the same or slightly weaker compactness properties.
Together with previously known reductions, this shows that the compact, weakly compact, collusion-resistant, and weakly collusion-resistant versions of FE are all equivalent under polynomial-time reductions. These are all FE variants known to imply the existence of indistinguishability obfuscation, and they were previously thought to offer slightly different avenues toward the realization of obfuscation from general assumptions. Our results show that they are indeed all equivalent, improving our understanding of the minimal assumptions on functional encryption required to instantiate indistinguishability obfuscation.
Memory-Efficient Algorithms for Finding Needles in Haystacks
One of the most common tasks in cryptography and cryptanalysis is to find some interesting event (a needle) in an exponentially large collection (haystack) of $N=2^n$ possible events, or to demonstrate that no such event is likely to exist. In particular, we are interested in finding needles, defined as events that happen with an unusually high probability of $p \gg 1/N$, in a haystack that is an almost uniform distribution on $N$ possible events. When the search algorithm can only sample values from this distribution, the best known time/memory tradeoff for finding such an event requires $O(1/(Mp^2))$ time given $O(M)$ memory.
In this paper we develop much faster needle-searching algorithms in the common cryptographic setting in which the distribution is defined by applying some deterministic function $f$ to random inputs. Such a distribution can be modelled by a random directed graph with $N$ vertices in which almost all vertices have $O(1)$ predecessors, while the vertex we are looking for has an unusually large number of predecessors, $O(pN)$.
When we are given only a constant amount of memory, we propose a new search methodology which we call \textbf{NestedRho}. As $p$ increases, such random graphs undergo several subtle phase transitions, and thus the log-log dependence of the time complexity $T$ on $p$ becomes a piecewise linear curve which bends four times. Our new algorithm is faster than the $O(1/p^2)$ time complexity of the best previous algorithm in the full range of $1/N<p<1$; in particular, it improves the previous time complexity by a significant factor of $\sqrt{N}$ for any $p$ in the range $N^{-0.75}<p< N^{-0.5}$. When we are given more memory, we show how to combine the \textbf{NestedRho} technique with the parallel collision search technique in order to further reduce its time complexity. Finally, we show how to apply our new search technique to more complicated distributions with multiple peaks, when we want to find all the peaks whose probabilities are higher than $p$.
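The constant-memory primitive being nested here is classical rho-style cycle finding: iterate $x \mapsto f(x)$ from a starting point and detect where the tail of the path meets its cycle, which yields two distinct preimages of the same value using $O(1)$ memory. A minimal sketch of this standard building block (not the full \textbf{NestedRho} algorithm):

    def rho_collision(f, x0):
        # Floyd's tortoise-and-hare on the path x0, f(x0), f(f(x0)), ...
        tortoise, hare = f(x0), f(f(x0))
        while tortoise != hare:                 # phase 1: meet inside the cycle
            tortoise, hare = f(tortoise), f(f(hare))
        tortoise = x0
        while f(tortoise) != f(hare):           # phase 2: walk to the collision
            tortoise, hare = f(tortoise), f(hare)
        return tortoise, hare                   # distinct a, b with f(a) == f(b)

    f = lambda x: (x * x + 1) % 100
    a, b = rho_collision(f, 3)
    assert a != b and f(a) == f(b)              # here (a, b) == (10, 30)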
Quantum homomorphic encryption for polynomial-sized circuits
We present a new scheme for quantum homomorphic encryption which is compact and allows for efficient evaluation of arbitrary polynomial-sized quantum circuits. Building on the framework of Broadbent and Jeffery and recent results in the area of instantaneous non-local quantum computation, we show how to construct quantum gadgets that allow perfect correction of the errors which occur during the homomorphic evaluation of T gates on encrypted quantum data. Our scheme can be based on any classical (leveled) fully homomorphic encryption (FHE) scheme and requires no computational assumptions besides those already used by the classical scheme. The size of our quantum gadget depends on the space complexity of the classical decryption function, which aligns well with the current efforts to minimize the complexity of the decryption function.
Our scheme (or slight variants of it) offers a number of additional advantages such as ideal compactness, the ability to supply gadgets "on demand", circuit privacy for the evaluator against passive adversaries, and a three-round scheme for blind delegated quantum computation which puts only very limited demands on the quantum abilities of the client.
From Cryptomania to Obfustopia through Secret-Key Functional Encryption
Functional encryption lies at the frontiers of current research in cryptography; some variants have been shown sufficiently powerful to yield indistinguishability obfuscation (IO), while other variants have been constructed from standard assumptions such as LWE. Indeed, most variants have been classified as belonging to either the former or the latter category. However, one mystery that has remained is the case of \emph{secret-key functional encryption} with an unbounded number of keys and ciphertexts. On the one hand, this primitive is not known to imply anything outside of minicrypt, the land of secret-key crypto; on the other hand, we do not know how to construct it without the heavy hammers of obfustopia.
In this work, we show that (subexponentially secure) secret-key functional encryption is powerful enough to construct indistinguishability obfuscation if we additionally assume the existence of (subexponentially secure) plain public-key encryption. In other words, secret-key functional encryption provides a bridge from cryptomania to obfustopia.
On the technical side, our result relies on two main components. As our first contribution, we show how to use secret-key functional encryption to get ``exponentially-efficient indistinguishability obfuscation'' (XIO), a notion recently introduced by Lin et al. (PKC '16) as a relaxation of IO. Lin et al. show how to use XIO and the LWE assumption to build IO. As our second contribution, we improve on this result by replacing its reliance on the LWE assumption with any plain public-key encryption scheme.
Lastly, we ask whether secret-key functional encryption can be used to construct public-key encryption itself and therefore take us all the way from minicrypt to obfustopia. A result of Asharov and Segev (FOCS '15) shows that this is not the case under black-box constructions, even for exponentially secure functional encryption. We show, through a non-black-box construction, that subexponentially secure secret-key functional encryption indeed leads to public-key encryption. The resulting public-key encryption scheme, however, is at most quasi-polynomially secure, which is insufficient to take us to obfustopia.
On the Multiplicative Complexity of Boolean Functions and Bitsliced Higher-Order Masking
Higher-order masking is a widely used countermeasure to make software implementations of blockciphers achieve high security levels against side-channel attacks. Unfortunately, it often comes with a strong impact in terms of performance, which may be prohibitive in some contexts. This situation has motivated the research for efficient schemes that apply higher-order masking with minimal performance overheads. The most widely used approach is based on a polynomial representation of the cipher s-box(es) allowing the application of standard higher-order masking building blocks such as the ISW scheme (Ishai-Sahai-Wagner, Crypto 2003). Recently, an alternative approach has been considered which is based on a bitslicing of the s-boxes. This approach has been shown to enjoy important efficiency benefits, but it has only been applied to specific blockciphers such as AES, PRESENT, or custom designs. In this paper, we present a generic method to find a Boolean representation of an s-box with efficient bitsliced higher-order masking. Specifically, we propose a method to construct a circuit with low \emph{multiplicative complexity}. Compared to previous work on this subject, our method can be applied to any s-box of common size and not necessarily to small s-boxes. We use it to derive higher-order masked s-box implementations that achieve important performance gains compared to optimized state-of-the-art implementations.
Network-Hiding Communication and Applications to Multi-Party Protocols
As distributed networks are heavily used in modern applications, new security challenges emerge. In a multi-party computation (in short, MPC) protocol over an incomplete network, such a challenge is to hide, to the extent possible, the topology of the underlying communication network. Such a topology-hiding (aka network hiding) property is in fact very relevant in applications where anonymity is needed.
To our knowledge, with the exception of two recent works by Chandran et al. [ITCS 2015] and by Moran et al. [TCC 2015], existing MPC protocols do not hide the topology of the underlying communication network. Moreover, the above two solutions are either not applicable to arbitrary networks (as is the case for [ITCS 2015]) or, as in [TCC 2015], they make non-black-box and recursive use of cryptographic primitives, resulting in unrealistic communication and computation complexity even for simple, i.e., low-degree and low-diameter, networks.
Our work suggests the first topology-hiding communication protocol for incomplete networks which makes black-box use of the underlying cryptographic assumption (in particular, a public-key encryption scheme) and tolerates any adversary who passively corrupts arbitrarily many network nodes. Our solutions are based on a new, enhanced variant of threshold homomorphic encryption, in short TH-PKE, that requires no a-priori setup and makes it possible to circulate an encrypted message over any (unknown) incomplete network and then decrypt it without revealing any network information to intermediate nodes. We show how to realize this enhanced TH-PKE from the DDH assumption. The black-box nature of our scheme, along with some optimization tricks that we employ, makes our communication protocol more efficient than existing solutions.
We then use our communication protocol to make any semi-honest secure MPC protocol topology-hiding with reasonable (i.e., for simple networks, polynomial with small constants) communication and computation overhead. We further show how to construct anonymous broadcast without using expensive MPCs to set up the original pseudonyms.
On the Security and Performance of Proof of Work Blockchains
Proof of Work (PoW) powered blockchains currently account for more than 90% of the total market capitalization of existing digital currencies. Although the security provisions of Bitcoin have been thoroughly analysed, the security guarantees of variant (forked) PoW blockchains (which were instantiated with different parameters) have not received much attention in the literature.
In this paper, we introduce a novel quantitative framework to analyse the security and performance implications of various consensus and network parameters of PoW blockchains. Based on our framework, we devise optimal adversarial strategies for double-spending and selfish mining while taking into account real world constraints such as network propagation, different block sizes, block generation intervals, information propagation mechanism, and the impact of eclipse attacks. Our framework therefore allows us to capture existing PoW-based deployments as well as PoW blockchain variants that are instantiated with different parameters, and to objectively compare the tradeoffs between their performance and security provisions.
Another view of the division property
A new distinguishing property against block ciphers, called the division property, was introduced by Todo at Eurocrypt 2015. Our work gives a new approach to it by introducing the notion of parity sets. First of all, this new notion permits us to formulate and characterize in a simple way the division property of any order. As a second step, we are interested in the way distinguishers on a block cipher can be built by considering further properties of parity sets, generalising the division property. We detail this approach in particular for substitution-permutation networks. To illustrate our method, we provide low-data distinguishers against reduced-round PRESENT. These distinguishers reach a much higher number of rounds than generic distinguishers based on the division property and demonstrate, amongst others, how distinguishers can be improved when the properties of the linear layer and of the Sbox layer are taken into account. Finally, this work provides an analysis of the resistance of Sboxes against this type of attack, demonstrates links with the algebraic normal form of an Sbox as well as of its inverse, and exhibits design criteria for Sboxes to resist such attacks.
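The parity set of a set $X \subseteq \mathbb{F}_2^n$ collects the exponents $u$ for which the monomial $x^u = \prod_i x_i^{u_i}$ sums to $1$ over $X$; in these terms, $X$ fulfils the division property of order $k$ exactly when every element of its parity set has Hamming weight at least $k$. A brute-force Python sketch for toy sizes (our illustration of the definition):

    def parity_set(X, n):
        # u is in U(X) iff the number of x in X whose support covers u is odd
        # (x^u = 1 exactly when u & x == u).
        return [u for u in range(2 ** n)
                if sum((x & u) == u for x in X) % 2 == 1]

    # The full space F_2^3 leaves only the top monomial unbalanced...
    assert parity_set(range(8), 3) == [7]
    # ...while removing a single point unbalances all lower-degree monomials.
    print(parity_set(range(7), 3))   # -> [0, 1, 2, 3, 4, 5, 6]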
Storage Efficient Substring Searchable Symmetric Encryption
We address the problem of substring searchable encryption. A single user produces a big stream of data and later on wants to learn the positions in the string at which some patterns occur. Although current techniques exploit auxiliary data structures to achieve efficient substring search on the server side, the cost at the user side may be prohibitive. We revisit the work on substring searchable encryption in order to reduce the storage cost of the auxiliary data structures. Our solution relies on a suffix array, which allows optimal storage cost $O(n)$ with a small hidden factor in the size of the string $n$. On top of that we build an encrypted index that allows the server to answer substring queries without learning either the query or the result. We identify the leakages of the scheme following the work of Curtmola et al. \cite{sse2} and we analyze the security of the protocol in the real/ideal framework. Moreover, we demonstrate the practicality of the protocol by searching a stream of one million characters in less than one second within the GPU computing paradigm. The total speedup approximates a factor of $4\times$ compared with a naive CPU implementation.
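Since the construction hinges on the suffix array, a minimal (naive) sketch of the underlying plaintext data structure may help orient the reader; the encrypted index the paper builds on top of it is not attempted here. The `key` arguments to `bisect` require Python 3.10+.

```python
# Minimal suffix-array sketch (naive construction, for illustration only;
# production code would use an O(n) or O(n log n) construction).
import bisect

def suffix_array(s: str) -> list[int]:
    # Sort suffix start positions by the suffix they begin.
    return sorted(range(len(s)), key=lambda i: s[i:])

def find_occurrences(s: str, sa: list[int], pattern: str) -> list[int]:
    # Binary search for the contiguous range of suffixes starting with `pattern`.
    lo = bisect.bisect_left(sa, pattern, key=lambda i: s[i:i + len(pattern)])
    hi = bisect.bisect_right(sa, pattern, key=lambda i: s[i:i + len(pattern)])
    return sorted(sa[lo:hi])

s = "banana"
sa = suffix_array(s)
print(sa)                              # [5, 3, 1, 0, 4, 2]
print(find_occurrences(s, sa, "ana"))  # [1, 3]
```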
Provably Secure Password Authenticated Key Exchange Based on RLWE for the Post-Quantum World
Authenticated Key Exchange (AKE) is a cryptographic scheme with the aim of establishing a high-entropy and secret session key over an insecure communications network. \emph{Password}-Authenticated Key Exchange (PAKE) assumes that the parties in play share a simple password, which is cheap and human-memorable and is used to achieve authentication. PAKEs are practically relevant as these features are extremely appealing in an age where most people access sensitive personal data remotely from more-and-more pervasive hand-held devices. Theoretically, PAKEs allow the secure computation and authentication of a high-entropy piece of data using a low-entropy string as a starting point. In this paper, we apply the recently proposed technique introduced in~\cite{DXX2012} to construct two lattice-based PAKE protocols enjoying a very simple and elegant design that is a parallel extension of the class of Random Oracle Model (ROM)-based protocols \msf{PAK} and \msf{PPK}~\cite{BMP2000,M2002}, but in the lattice-based setting. The new protocol resembling \msf{PAK} is three-pass and provides \emph{mutual explicit authentication}, while the protocol following the structure of \msf{PPK} is two-pass and provides \emph{implicit authentication}. Our protocols rely on the Ring-Learning-with-Errors (RLWE) assumption and exploit the additive structure of the underlying ring. They have a comparable level of efficiency to \msf{PAK} and \msf{PPK}, which makes them highly attractive. We present a preliminary implementation of our protocols to demonstrate that they are both efficient and practical. We believe they are suitable quantum-safe replacements for \msf{PAK} and \msf{PPK}.
Improved Factorization of $N=p^rq^s$
Boneh et al. showed at Crypto '99 that moduli of the form $N=p^rq$ can be factored in polynomial time when $r \geq \log p$. Their algorithm is based on Coppersmith's technique for finding small roots of polynomial equations. Recently, Coron et al. showed that $N=p^rq^s$ can also be factored in polynomial time, but under the stronger condition $r \geq \log^3 p$. In this paper, we show that $N=p^rq^s$ can actually be factored in polynomial time when $r \geq \log p$, the same condition as for $N=p^rq$.
Antikernel: A Decentralized Secure Hardware-Software Operating System Architecture
The ``kernel" model has been part of operating system architecture for decades, but upon closer inspection it clearly violates the principle of least required privilege. The kernel is a single entity which provides many services (memory management, interfacing to drivers, context switching, IPC) which have no real relation to each other, and has the ability to observe or tamper with all state of the system. This work presents Antikernel, a novel operating system architecture consisting of both hardware and software components and designed to be fundamentally more secure than the state of the art. To make formal
verification easier, and improve parallelism, the Antikernel system is highly modular and consists of many independent hardware state machines (one or more of which may be a general-purpose CPU running application or systems software) connected by a packet-switched network-on-chip (NoC). We create and verify an FPGA-based prototype of the system.
Short and Adjustable Signatures
Motivated by the problem of one-time password generation with security against server breaches, we introduce the notion of {\em adjustable signature schemes} that allow the length of a signature to be adjusted (at the setup, signing or verification stages, depending on the application). Defining security for such schemes poses several challenges, such as: (i) different signature lengths should provide different levels of security, and (ii) the effort required for forging a very short signature (e.g., 6 bytes) should not be reusable for forging additional signatures. We provide security definitions that concretely capture the trade-off between signature length, number of forgeries and level of security provided by the scheme.
The above requirements rule out all existing solutions for short signatures.
In this paper, as a feasibility result, we provide the first instantiation of all variants of adjustable signatures based on indistinguishability obfuscation. Our starting point is the state-of-the-art construction by Ramchen and Waters [ACM CCS 2014]. We observe that their scheme fails to meet our requirements for an adjustable signature scheme, and enhance it to obtain adjustable signatures with {\em shorter} signatures, {\em faster} signing and {\em strong} unforgeability. We also employ new proof techniques in order to obtain the above-mentioned notions of security.
For the simpler case where the adversarial effort does not grow with the number of forgeries, we also provide a concrete construction based on the BLS signature scheme, by instantiating it with smaller group sizes that yield shorter signature lengths while providing reasonable security. We implement this scheme for various signature sizes and report on its efficiency.
Linicrypt: A Model for Practical Cryptography
A wide variety of objectively practical cryptographic schemes can be constructed using only symmetric-key operations and linear operations. To formally study this restricted class of cryptographic algorithms, we present a new model called {\em Linicrypt}. A Linicrypt program has access to a random oracle whose inputs and outputs are field elements, and otherwise manipulates data only via fixed linear combinations.
Our main technical result is that it is possible to decide {\em in polynomial time} whether two given Linicrypt programs induce computationally indistinguishable distributions (against arbitrary PPT adversaries, in the random oracle model).
We show also that indistinguishability of Linicrypt programs can be expressed as an existential formula, making the model amenable to {\em automated program synthesis.} In other words, it is possible to use a SAT/SMT solver to automatically generate Linicrypt programs satisfying a given security constraint. Interestingly, the properties of Linicrypt imply that this synthesis approach is both sound and complete.
We demonstrate this approach by synthesizing Linicrypt constructions of garbled circuits.
Efficient High-Speed WPA2 Brute Force Attacks using Scalable Low-Cost FPGA Clustering
WPA2-Personal is widely used to protect Wi-Fi networks against illicit access.
While attackers typically use GPUs to speed up the discovery of weak network passwords, attacking random passwords is considered to quickly become infeasible with increasing password length.
Professional attackers may thus turn to commercial high-end FPGA-based cluster solutions to significantly increase the speed of those attacks.
Well-known manufacturers such as Elcomsoft have succeeded in creating the world's fastest commercial FPGA-based WPA2 password recovery system, but since they rely on high-performance FPGAs, the costs of these systems are well beyond the reach of amateurs.
In this paper, we present a highly optimized low-cost FPGA cluster-based WPA2-Personal password recovery system that can not only achieve similar performance at a cost affordable by amateurs, but, in comparison, our implementation would also be more than $5$ times as fast on the original hardware.
Since the currently fastest system is not only significantly slower but proprietary as well, we believe that we are the first to present the internals of a highly optimized and fully pipelined FPGA WPA2 password recovery system.
In addition, we evaluate our approach with respect to performance and power usage and compare it to GPU-based systems.
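For context on why such attacks are compute-bound: WPA2-Personal derives its pairwise master key from the passphrase with PBKDF2-HMAC-SHA1 over 4096 iterations, so every password candidate costs thousands of SHA-1 calls. A minimal sketch of this candidate-evaluation core (full handshake verification against the captured MIC is omitted):

```python
# Sketch of the expensive core of a WPA2-Personal dictionary attack:
# deriving the 256-bit pairwise master key (PMK) for each candidate.
# WPA2 uses PBKDF2-HMAC-SHA1 with 4096 iterations and the SSID as salt.
import hashlib

def wpa2_pmk(passphrase: str, ssid: str) -> bytes:
    return hashlib.pbkdf2_hmac("sha1",
                               passphrase.encode(),
                               ssid.encode(),
                               4096,   # iteration count fixed by the standard
                               32)     # 256-bit PMK

# A brute-force engine (FPGA, GPU or CPU) evaluates this for every candidate
# and then checks the derived keys against a captured 4-way handshake.
print(wpa2_pmk("password123", "MyHomeWiFi").hex())
```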
An Unconditionally Hiding Auditing Procedure for Multi-Party Computations
In this work an unconditionally hiding auditing procedure for computations on documents stored in distributed fashion is introduced. To date there is only one multi-party computation (MPC) scheme providing auditability, and it protects the inputs of the computations only computationally. Building on this, we propose a computationally hiding solution that uses bilinear maps and therefore produces no additional overhead in the online phase. In addition, we introduce a second variant that is the first auditable MPC scheme providing unconditional (or information-theoretic) hiding. We achieve this by combining bilinear maps with unconditionally hiding commitments, leading to only a small overhead in the online phase. We prove our solutions secure and give arguments for practicability and efficiency. The auditing procedures presented here are an important contribution since distributed storage solutions, e.g. clouds of clouds, allow for information-theoretic confidentiality. Using our technique, they can be extended to perform auditable computations on the stored data.
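The abstract does not name the commitment scheme used, but for intuition, a Pedersen-style commitment is the textbook unconditionally hiding (and computationally binding) commitment; a toy sketch over a small prime-order subgroup:

```python
# Toy Pedersen-style commitment (illustrative only; the paper's actual
# unconditionally hiding commitment is not specified in the abstract, and
# real parameters would use a cryptographically large group).
import secrets

p = 1019               # safe prime: p = 2*509 + 1 (toy size)
q = 509                # prime order of the quadratic-residue subgroup
g = 4                  # 4 = 2^2 is a QR, hence generates the order-q subgroup
h = pow(g, 77, p)      # second base; in practice log_g h must be unknown

def commit(m: int, r: int = None):
    if r is None:
        r = secrets.randbelow(q)
    return pow(g, m % q, p) * pow(h, r, p) % p, r

c, r = commit(42)
# Unconditional hiding: for every m' there is an r' with commit(m', r') == c,
# so c reveals nothing about m even to an unbounded observer.
# Computational binding: two openings of c would reveal log_g h mod q.
print(c, r)
```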
On Trees, Chains and Fast Transactions in the Blockchain
A fundamental open problem in the area of blockchain protocols is whether the Bitcoin protocol is the only solution for building a secure transaction ledger. A recently proposed and widely considered alternative is the \GHOST protocol which, notably, was proposed to be at the core of Ethereum as well as other recent proposals for improved Bitcoin-like systems. The \GHOST variant is touted as offering superior performance compared to Bitcoin (potentially offering a block production speed-up by a factor of more than 40) without a security loss. Motivated by this, in this work we study the \GHOST protocol from a provable security point of view. We introduce a new formal framework for the analysis of blockchain protocols that relies on trees (rather than chains) and we showcase the power of the framework by providing a unified description of the \GHOST and Bitcoin protocols, the former of which we extract and formally describe. We then prove that \GHOST implements a ``robust transaction ledger'' (i.e., possesses liveness and persistence) and hence is a provably secure alternative to Bitcoin; moreover, our bound for the liveness parameter is superior to that proven for the Bitcoin backbone, in line with the original expectation for \GHOST.
Compared to previous works, our proof follows a novel methodology for establishing that \GHOST is a robust transaction ledger, which may be of independent interest and applicable to other blockchain variants.
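For readers unfamiliar with the protocol, the defining difference from Bitcoin's longest-chain rule is GHOST's greedy heaviest-observed-subtree fork choice: starting from genesis, repeatedly descend into the child whose subtree contains the most blocks. A minimal sketch (the block-tree representation and tie-breaking are ours):

```python
# Minimal sketch of the GHOST fork-choice rule (greedy heaviest-observed
# subtree). The block-tree representation is our own; ties are broken
# arbitrarily here, whereas real implementations fix a deterministic rule.

def subtree_size(children: dict, block: str) -> int:
    return 1 + sum(subtree_size(children, c) for c in children.get(block, []))

def ghost_tip(children: dict, genesis: str) -> str:
    block = genesis
    while children.get(block):
        # Descend into the child rooted at the heaviest subtree.
        block = max(children[block], key=lambda c: subtree_size(children, c))
    return block

# Example: B's subtree holds 4 blocks vs. A's 3, so GHOST follows B even
# though the longest chain (G-A-A1-A2) runs through A.
children = {"G": ["A", "B"], "A": ["A1"], "A1": ["A2"],
            "B": ["B1", "B2", "B3"]}
print(ghost_tip(children, "G"))   # -> "B1"
```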
New Protocols for Secure Equality Test and Comparison
Protocols for securely comparing private values are among the most fundamental building blocks of multiparty computation. Introduced by Yao under the name of the millionaires' problem, they have found numerous applications in a variety of privacy-preserving protocols; however, due to their inherent non-arithmetic structure, existing constructions often remain an important bottleneck in large-scale secure protocols.
In this work, we introduce new protocols for securely computing the greater-than and the equality predicate between two parties. Our protocols rely solely on the existence of oblivious transfer, and are UC-secure against passive adversaries. Furthermore, our protocols are well suited for use in large-scale secure computation protocols, where secure comparisons (SC) and equality tests (ET) are commonly used as basic routines: they perform particularly well in an amortized setting, and can be preprocessed efficiently (they enjoy an extremely efficient, information-theoretic online phase). We perform a detailed comparison of our protocols to the state of the art, showing that they improve over the most practical existing solutions regarding both communication and computation, while matching the asymptotic efficiency of the best theoretical constructions.
ObliviSync: Practical Oblivious File Backup and Synchronization
Oblivious RAM (ORAM) protocols are powerful techniques that hide a client's data as well as access patterns from untrusted service providers. We present an oblivious cloud storage system, ObliviSync, that specifically targets one of the most widely-used personal cloud storage paradigms: synchronization and backup services, popular examples of which are Dropbox, iCloud Drive, and Google Drive. This setting provides a unique opportunity because the above privacy properties can be achieved with a simpler form of ORAM called write-only ORAM, which allows for dramatically increased efficiency compared to related work. Our solution is asymptotically optimal and practically efficient, with a small constant overhead of approximately 4x compared with non-private file storage, depending only on the total data size and parameters chosen according to the usage rate, and not on the number or size of individual files. Our construction also offers protection against timing-channel attacks, which has not been previously considered in ORAM protocols. We built and evaluated a full implementation of ObliviSync that supports multiple simultaneous read-only clients and a single concurrent read/write client whose edits automatically and seamlessly propagate to the readers. We show that our system functions under high workloads, with realistic file size distributions, and with small additional latency (as compared to a baseline encrypted file system) when paired with Dropbox as the synchronization service.
MPC-Friendly Symmetric Key Primitives
We discuss the design of symmetric primitives, in particular Pseudo-Random Functions (PRFs), which are suitable for use in a secret-sharing based MPC system. We consider three different PRFs: the Naor-Reingold PRF, a PRF based on the Legendre symbol, and a specialized block cipher design called MiMC. We present protocols for implementing these PRFs within a secret-sharing based MPC system, and discuss possible applications. We then compare the performance of our protocols. Depending on the application, different PRFs may offer different optimizations and advantages over the classic AES benchmark. Thus, we cannot conclude that there is one optimal PRF to be used in all situations.
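Of the three candidates, the Legendre-symbol PRF is the simplest to state: the output bit is derived from the Legendre symbol of $k+x$ modulo a public prime. A toy sketch (the encoding of the symbol into a bit is our assumption):

```python
# Toy sketch of a Legendre-symbol PRF: F_k(x) is derived from the Legendre
# symbol of (k + x) mod p. The mapping of the symbol {-1, 0, +1} to a bit
# below is one common convention, assumed here for illustration.

p = 2**127 - 1          # public prime (a Mersenne prime; size illustrative)

def legendre(a: int, p: int) -> int:
    # Euler's criterion: a^((p-1)/2) mod p equals 1, p-1, or 0.
    t = pow(a % p, (p - 1) // 2, p)
    return -1 if t == p - 1 else t

def legendre_prf_bit(k: int, x: int) -> int:
    return 1 if legendre(k + x, p) == 1 else 0

key = 0x1234567890ABCDEF
print([legendre_prf_bit(key, x) for x in range(16)])
```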
Big-Key Symmetric Encryption: Resisting Key Exfiltration
Uncategorized
Uncategorized
This paper aims to move research in the bounded retrieval model (BRM) from theory to practice by considering symmetric (rather than public-key) encryption, giving efficient schemes, and providing security analyses with sharp, concrete bounds. The threat addressed is malware that aims to exfiltrate a user's key. Our schemes aim to thwart this by using an enormously long key, yet paying for this almost exclusively in storage cost, not speed. Our main result is a general-purpose lemma, the subkey prediction lemma, that gives a very good bound on an adversary's ability to guess a (modest length) subkey of a big-key, the subkey consisting of the bits of the big-key found at random, specified locations, after the adversary has exfiltrated partial information about the big key (e.g., half as many bits as the big-key is long). We then use this to design a new kind of key encapsulation mechanism, and, finally, a symmetric encryption scheme. Both are in the random-oracle model. We also give a less efficient standard-model scheme that is based on universal computational extractors (UCE). Finally, we define and achieve hedged BRM symmetric encryption, which provides authenticity in the absence of leakage.
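To make the object of the subkey prediction lemma concrete, here is a toy sketch of the probing step it analyzes: the subkey is the big-key bits at random, publicly revealed positions, hashed (in the random-oracle spirit of the paper) to encapsulate a session key. The packaging below is illustrative, not the paper's scheme:

```python
# Toy sketch of big-key subkey extraction (illustrative packaging only,
# not the paper's KEM). A subkey is the big-key bits at t random, publicly
# revealed positions; hashing it with the probe description yields a key.
import secrets, hashlib

def getbit(bigkey: bytes, i: int) -> int:
    return (bigkey[i // 8] >> (i % 8)) & 1

def encapsulate(bigkey: bytes, t: int = 256):
    nbits = 8 * len(bigkey)
    probes = [secrets.randbelow(nbits) for _ in range(t)]   # sent in clear
    subkey_bits = bytes(getbit(bigkey, i) for i in probes)  # stays secret
    session_key = hashlib.sha256(str(probes).encode() + subkey_bits).digest()
    return probes, session_key

bigkey = secrets.token_bytes(1 << 20)   # a 1 MiB "big key"
probes, k = encapsulate(bigkey)
print(len(probes), k.hex()[:16])
```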
Horizontal Side-Channel Attacks and Countermeasures on the ISW Masking Scheme
A common countermeasure against side-channel attacks consists in using the masking scheme originally introduced by Ishai, Sahai and Wagner (ISW) at Crypto 2003, and further generalized by Rivain and Prouff at CHES 2010. The countermeasure is provably secure in the probing model, and it was shown by Duc, Dziembowski and Faust at Eurocrypt 2014 that the proof can be extended to the more realistic noisy leakage model. However, the extension only applies if the leakage noise $\sigma$ increases at least linearly with the masking order $n$, which is not necessarily possible in practice.
In this paper we investigate the security of an implementation when the previous condition is not satisfied, for example when the masking order $n$ increases for a constant noise $\sigma$. We exhibit two (template) horizontal side-channel attacks against the Rivain-Prouff secure multiplication scheme and we analyze their efficiency through several simulations and experiments. We also describe a variant of the Rivain-Prouff multiplication that is still provably secure in the original ISW model, and also heuristically secure against our new attacks. Finally, we describe a new mask refreshing algorithm with complexity ${\cal O}(n \log n)$, instead of ${\cal O}(n^2)$ for the classical algorithm.
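For orientation, the classical quadratic refresh the authors improve upon can be sketched as follows (this is the standard ISW-style pairwise refresh; the paper's ${\cal O}(n \log n)$ algorithm is not reproduced here):

```python
# Classical O(n^2) mask refreshing for an XOR sharing (for orientation;
# the paper's new refresh runs in O(n log n) instead).
# A sharing of x is (a_0, ..., a_{n-1}) with a_0 ^ ... ^ a_{n-1} == x.
import secrets

def refresh(shares: list[int], bits: int = 8) -> list[int]:
    a = list(shares)
    n = len(a)
    for i in range(n):
        for j in range(i + 1, n):
            r = secrets.randbits(bits)
            a[i] ^= r
            a[j] ^= r          # each pairwise mask cancels out of the XOR
    return a

x = 0x2A
shares = [secrets.randbits(8) for _ in range(3)]
shares.append(x ^ shares[0] ^ shares[1] ^ shares[2])
fresh = refresh(shares)
assert x == fresh[0] ^ fresh[1] ^ fresh[2] ^ fresh[3]
```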
Cryptanalysis of a Theorem: Decomposing the Only Known Solution to the Big APN Problem (Full Version)
The existence of Almost Perfect Non-linear (APN) permutations operating on an even number of bits was a long-standing open question until Dillon et al., who work for the NSA, provided an example on 6 bits in 2009.
In this paper, we apply methods intended to reverse-engineer S-Boxes with unknown structure to this permutation and find a simple decomposition relying on the cube function over $GF(2^3)$. More precisely, we show that it is a particular case of a permutation structure we introduce, the butterfly. Such butterflies are $2n$-bit mappings with two CCZ-equivalent representations: one is a quadratic non-bijective function and one is a degree $n+1$ permutation. We show that these structures always have differential uniformity at most 4 when $n$ is odd. A particular case of this structure is actually a 3-round Feistel network with similar differential and linear properties. These functions also have excellent non-linearity for $n=3,5,7$.
Furthermore, we deduce a bitsliced implementation and significantly reduce the hardware cost of a 6-bit APN permutation using this decomposition, thus simplifying the use of such a permutation as a building block for a cryptographic primitive.
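To make the building block tangible: the cube map over $GF(2^3)$ is a permutation (since $\gcd(3, 2^3-1)=1$) and is APN. A toy check, assuming the reduction polynomial $x^3+x+1$ (the abstract does not fix a basis):

```python
# The cube function over GF(2^3), the core of the decomposition. We assume
# the reduction polynomial x^3 + x + 1 for illustration; the abstract does
# not fix a particular representation of the field.

def gf8_mul(a: int, b: int) -> int:
    r = 0
    for _ in range(3):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000:            # reduce modulo x^3 + x + 1
            a ^= 0b1011
    return r

cube = [gf8_mul(x, gf8_mul(x, x)) for x in range(8)]
print(cube)                       # a permutation of {0, ..., 7}
assert sorted(cube) == list(range(8))

# APN check: every nonzero input difference maps to each output difference
# at most twice.
for d in range(1, 8):
    counts = {}
    for x in range(8):
        out = cube[x] ^ cube[x ^ d]
        counts[out] = counts.get(out, 0) + 1
    assert max(counts.values()) <= 2
```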
How to prove knowledge of small secrets
We propose a new zero-knowledge protocol applicable to additively homomorphic functions that map integer vectors to an Abelian group. The protocol demonstrates knowledge of a short preimage and achieves amortised efficiency comparable to the approach of Cramer and Damgård from Crypto 2010, but gives a much tighter bound on what we can extract from a dishonest prover. Towards achieving this result, we develop an analysis for bins-and-balls games that might be of independent interest. We also provide a general analysis of rewinding of a cut-and-choose protocol, as well as a method to use Lyubashevsky's rejection sampling technique efficiently in an interactive protocol when many proofs are given simultaneously.
Our new protocol yields improved proofs of plaintext knowledge for (Ring-)LWE-based cryptosystems, where such general techniques were not known before. Moreover, they can be extended to prove preimages of homomorphic hash functions as well.
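For readers unfamiliar with the technique, here is a minimal sketch of the simplest (uniform) variant of Lyubashevsky-style rejection sampling: mask the secret-dependent value with wide uniform noise and abort whenever the sum could leak its position. Parameter names are illustrative:

```python
# Minimal sketch of uniform rejection sampling ("Fiat-Shamir with aborts"
# style, simplest variant; parameters illustrative). The prover reveals
# z = y + c*s without leaking s: y is uniform in [-B, B] and z is released
# only if it lies in [-(B - T), B - T], where T bounds |c*s|. Conditioned
# on acceptance, z is uniform on that range, independent of s.
import secrets

B, T = 1 << 20, 1 << 8        # masking bound and bound on |c*s|

def sample_response(cs: int):
    assert abs(cs) <= T
    y = secrets.randbelow(2 * B + 1) - B      # uniform in [-B, B]
    z = y + cs
    return z if abs(z) <= B - T else None     # abort (rerun) otherwise

# Expected number of attempts is (2B + 1) / (2(B - T) + 1), close to 1 here.
while (z := sample_response(cs=123)) is None:
    pass
print(z)
```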
A Generalisation of the Conjugation Method for Polynomial Selection for the Extended Tower Number Field Sieve Algorithm
In a recent work, Kim and Barbulescu showed how to combine previous polynomial selection methods with the extended tower number field sieve algorithm to obtain improved complexity for the discrete logarithm problem on finite fields $\mathbb{F}_{p^n}$ in the medium prime case, where $n$ is composite and not a prime power. A follow-up work by Sarkar and Singh presented a general polynomial selection method and showed how to lower the complexity in the medium prime case even when $n$ is composite and a prime power. This complexity, though, was higher than what was reported for the case of $n$ composite and not a prime power. By suitably combining the Conjugation method of polynomial selection proposed earlier by Barbulescu et al. with the extended tower number field sieve algorithm, Jeong and Kim showed that the same asymptotic complexity is achieved for any composite $n$. The present work generalises the polynomial selection method of Jeong and Kim to all composite $n$. Though the best achievable complexity is not lowered, there is a significant range of finite fields for which the new algorithm achieves a complexity lower than that of all previously proposed methods.
Position-Based Cryptography and Multiparty Communication Complexity
\emph{Position based cryptography (PBC)}, proposed in the seminal work of Chandran, Goyal, Moriarty, and Ostrovsky {(SIAM J. Computing, 2014)}, aims at constructing cryptographic schemes in which the identity of the user is his geographic position. Chandran et al.~construct PBC schemes for \emph{secure positioning} and \emph{position-based key agreement} in the \emph{bounded-storage model} (Maurer, J. Cryptology, 1992). Apart from bounded memory, their security proofs need a strong additional restriction on the power of the adversary: he cannot compute \emph{joint} functions of his inputs. Removing this assumption is left as an open problem.
We show that an answer to this question would resolve a long standing open problem in multiparty communication complexity: finding a function that is hard to compute with low communication complexity in the simultaneous message model, but easy to compute in the fully adaptive model.
On the more positive side, we also show some implications in the other direction, i.e., we prove that lower bounds on the communication complexity of certain multiparty problems imply the existence of PBC primitives. Using this result we then show two attractive ways to ``bypass'' our hardness result: the first uses the random oracle model, the second weakens the \emph{locality} requirement in the bounded-storage model to \emph{online computability}. The random oracle construction is arguably one of the simplest proposed so far in this area. Our results indicate that constructing improved provably secure protocols for PBC requires a better understanding of multiparty communication complexity. This is yet another example where \emph{negative} results in one area (in our case: lower bounds in multiparty communication complexity) can be used to construct secure cryptographic schemes.
Impossible Differential Cryptanalysis of Midori
Midori is a lightweight block cipher recently presented by Banik et al. at ASIACRYPT 2015. There are two versions of Midori, with state sizes of 64 and 128 bits respectively. The round function is based on a Substitution-Permutation Network (SPN). In this paper, we give an impossible differential cryptanalysis of Midori64. We study the non-linear layer of the cipher and give two useful properties. We also find the first 6-round impossible differential paths with two non-zero and equal input cells and one non-zero output cell, and then mount a 10-round attack. This is the first impossible differential attack on Midori.
Damaging, Simplifying, and Salvaging p-OMD
One of the submissions to the CAESAR competition for the design of a new authenticated encryption scheme is Offset Merkle-Damgård (OMD). At FSE 2015, Reyhanitabar et al. introduced p-OMD, an improvement of OMD that processes the associated data almost for free. As an extra benefit, p-OMD was claimed to offer integrity against nonce-misusing adversaries, a property that OMD does not have.
In this work we show how a nonce-misusing adversary can forge a message for the original p-OMD using only 3 queries (including the forgery). As a second contribution, we generalize and simplify p-OMD. This is done via the introduction of the authenticated encryption scheme Spoed. The most important difference is the usage of a generalized padding function GPAD, which neatly eliminates the need for a case distinction in the design specification and therewith allows for a significantly shorter description of the scheme and a better security bound. Finally, we introduce the authenticated encryption scheme Spoednic, a variant of Spoed providing authenticity against a nonce-misusing adversary at a modest price.
New Insights on AES-like SPN Ciphers
It has been proved at Eurocrypt 2016 that if the details of the S-boxes are not exploited, an impossible differential and a zero-correlation hull can extend over at most 4 rounds of the AES. This paper concentrates on distinguishing attacks on AES-like SPN ciphers by investigating the details of both the S-boxes and the MDS matrices, and illustrates some new insights on the security of these schemes. Firstly, we construct several types of $5$-round zero-correlation linear hulls for AES-like ciphers that adopt identical S-boxes to construct the round function and that have two identical elements in a column of the inverse of their MDS matrices. We then use these linear hulls to construct 5-round integrals, provided that the difference of two sub-key bytes is known. Furthermore, we prove that we can always distinguish 5 rounds of such ciphers from random permutations even when the difference of the sub-keys is unknown. Secondly, the constraints on the S-boxes and the special property of the MDS matrices can be removed if the cipher is used as a building block of the Miyaguchi-Preneel hash function. As an example, we construct two types of 5-round distinguishers for the hash function Whirlpool. Finally, we show that, in the chosen-ciphertext mode, there exist some nontrivial distinguishers for 5-round AES. To the best of our knowledge, this is the longest distinguishing attack on round-reduced AES in the secret-key setting. Since the 5-round distinguisher for the AES can only be constructed in the chosen-ciphertext mode, the security margin of round-reduced AES under chosen-plaintext attacks may differ from that under chosen-ciphertext attacks.
Cryptanalysis of GOST2
GOST 28147 is a 256-bit key 64-bit block cipher developed by the USSR, later adopted by the Russian government as a national standard. In 2010, GOST was suggested to be included in ISO-18033, but was rejected due to weaknesses found in its key schedule.
In 2015, a new version of GOST was suggested with the purpose of mitigating such attacks. In this paper, we show that similar weaknesses exist in the new version as well. More specifically, we present a fixed-point attack on the full cipher with time complexity of $2^{237}$ encryptions. We also present a reflection attack, which improves on exhaustive search by a factor of $2e$, with time complexity of $2^{192}$ for a key that is chosen from a class of $2^{224}$ weak keys. Finally, we discuss an impossible reflection attack and several possible related-key attacks.
Reducing number field defining polynomials: An application to class group computations
In this paper, we describe how to compute smallest monic polynomials that define a given number field $\mathbb K$. We make use of the one-to-one correspondence between monic defining polynomials of $\mathbb K$ and algebraic integers that generate $\mathbb K$. Thus, a smallest polynomial corresponds to a vector in the lattice of integers of $\mathbb K$ and this vector is short in some sense. The main idea is to consider weighted coordinates for the vectors of the lattice of integers of $\mathbb K$. This allows us to find the desired polynomial by enumerating short vectors in these weighted lattices. In the context of the subexponential algorithm of Biasse and Fieker for computing class groups, this algorithm can be used as a precomputation step that speeds up the rest of the computation. It also widens the applicability of their faster conditional method -- which requires a defining polynomial of small height -- to a much larger set of number field descriptions.
Generic Semantic Security against a Kleptographic Adversary
Notable recent security incidents have generated intense interest in adversaries which attempt to subvert (perhaps covertly) cryptographic algorithms. In this paper we develop (IND-CPA) semantically secure encryption in this challenging setting.
This fundamental encryption primitive has been previously studied in the ``kleptographic setting,'' though existing results must relax the model by introducing trusted components or otherwise constraining the subversion power of the adversary: designing a public-key system that is kleptographically semantically secure (with minimal trust) has remained elusive to date.
In this work, we finally achieve such systems, even when all relevant cryptographic algorithms are subject to adversarial (kleptographic) subversion. To this end we exploit novel inter-component randomized cryptographic checking techniques (with an offline checking component), combined with common and simple software engineering modular programming techniques (applied to the system's black box specification level). Moreover, our methodology yields a strong generic technique for the preservation of any semantically secure cryptosystem when incorporated into the strong kleptographic adversary setting.
Efficient Public-Key Cryptography with Bounded Leakage and Tamper Resilience
We revisit the question of constructing public-key encryption and signature schemes with security in the presence of bounded leakage and tampering memory attacks.
For signatures we obtain the first construction in the standard model; for public-key encryption we obtain the first pairing-free construction (avoiding non-interactive zero-knowledge proofs).
Our constructions are based on generic building blocks, and, as we show, also admit efficient instantiations under fairly standard number-theoretic assumptions.
The model of bounded tamper resistance was recently put forward by Damgård {\em et al.} (Asiacrypt 2013) as an attractive path to achieve security against arbitrary memory tampering attacks without making hardware assumptions (such as the existence of a protected self-destruct or key-update mechanism), the only restriction being on the number of allowed tampering attempts (which is a parameter of the scheme).
This allows one to circumvent known impossibility results for unrestricted tampering (Gennaro {\em et al.}, TCC 2010), while still being able to capture realistic tampering attacks.
Certified lattice reduction
Quadratic form reduction and lattice reduction are fundamental tools in computational number theory and in computer science, especially in cryptography. The celebrated Lenstra-Lenstra-Lovász reduction algorithm (so-called LLL) has been improved in many ways through the past decades and remains one of the central methods for reducing integral lattice bases. In particular, its floating-point variants, where the rational arithmetic required by Gram-Schmidt orthogonalization is replaced by floating-point arithmetic, are now the fastest known. However, the systematic study of the reduction theory of real quadratic forms or, more generally, of real lattices is not widely represented in the literature. When the problem arises, the lattice is usually replaced by an integral approximation of (a multiple of) the original lattice, which is then reduced. While practically useful and proven in some special cases, this method does not offer any guarantee of success in general. In this work, we present an adaptive-precision version of a generalized LLL algorithm that covers this case in all generality. In particular, we replace floating-point arithmetic by interval arithmetic to certify the behavior of the algorithm. We conclude by giving a typical application of the result in algebraic number theory for the reduction of ideal lattices in number fields.
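To illustrate the certification idea: interval arithmetic replaces each floating-point value by an enclosing interval with outward-rounded endpoints, so every computed enclosure provably contains the exact real result, and an enclosure that straddles a decision boundary signals that precision must be increased. A minimal sketch (real implementations use directed rounding modes or a library such as MPFI):

```python
# Minimal interval-arithmetic sketch: operations return intervals that
# enclose the exact real result, by nudging the lower endpoint down and
# the upper endpoint up (math.nextafter, Python 3.9+). For intuition only.
import math
from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __add__(self, o: "Interval") -> "Interval":
        return Interval(math.nextafter(self.lo + o.lo, -math.inf),
                        math.nextafter(self.hi + o.hi, math.inf))

    def __mul__(self, o: "Interval") -> "Interval":
        ps = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(math.nextafter(min(ps), -math.inf),
                        math.nextafter(max(ps), math.inf))

    def contains_zero(self) -> bool:
        # If an enclosure excludes zero, a sign test is certified; if it
        # contains zero, the algorithm must retry at higher precision.
        return self.lo <= 0.0 <= self.hi

x = Interval(1.0, 1.0) + Interval(1e-17, 1e-17)
print(x, x.contains_zero())
```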
Secure Outsourcing of Circuit Manufacturing
The fabrication process of integrated circuits (ICs) is complex and requires the use of off-shore foundries to lower the costs and to have access to leading-edge manufacturing facilities. Such an outsourcing trend leaves the possibility of inserting malicious circuitry (a.k.a. hardware Trojans) during the fabrication process, causing serious security concerns. Hardware Trojans are very hard and expensive to detect and can disrupt the entire circuit or covertly leak sensitive information via a subliminal channel.
In this paper, we propose a formal model for assessing the security of ICs whose fabrication has been outsourced to an untrusted off-shore manufacturer. Our model captures the fact that the IC specification and design are trusted but the fabrication facility(ies) may be malicious. Our objective is to investigate security in an ideal sense, and we follow a simulation-based approach that ensures that Trojans cannot release any sensitive information to the outside. It follows that the Trojans' impact on the overall IC operation, in case they exist, will be negligible up to simulation. We then establish that such a level of security is in fact achievable for the case of a single and of multiple outsourcing facilities. We present two compilers for ICs for the single outsourcing facility case relying on verifiable computation (VC) schemes, and another two compilers for the multiple outsourcing facilities case, one relying on multi-server VC schemes and the other relying on secure multiparty computation (MPC) protocols with certain suitable properties that are attainable by existing schemes.
Extended Tower Number Field Sieve with Application to Finite Fields of Arbitrary Composite Extension Degree
We propose a generalization of the exTNFS algorithm recently introduced by Kim and Barbulescu (CRYPTO 2016). The exTNFS algorithm is a state-of-the-art algorithm for discrete logarithms in $\mathbb{F}_{p^n}$ in the medium prime case, but it only applies when $n=\eta\kappa$ is a composite with nontrivial factors $\eta$ and $\kappa$ such that $\gcd(\eta,\kappa)=1$. Our generalization shows that the exTNFS algorithm can also be adapted to the setting of an arbitrary composite $n$ while maintaining its best asymptotic complexity. We show that one can solve the discrete logarithm problem in the medium prime case in running time $L_{p^n}(1/3, \sqrt[3]{48/9})$ (resp. $L_{p^n}(1/3, 1.71)$ if multiple number fields are used), where $n$ is an \textit{arbitrary composite}. This should be compared with a recent variant by Sarkar and Singh (Asiacrypt 2016) that has the fastest running time of $L_{p^n}(1/3, \sqrt[3]{64/9})$ (resp. $L_{p^n}(1/3, 1.88)$) when $n$ is a prime power. When $p$ is of special form, the complexity is further reduced to $L_{p^n}(1/3, \sqrt[3]{32/9})$. On the practical side, we emphasize that the key size of pairing-based cryptosystems should be updated in light of our algorithm if the embedding degree $n$ remains composite.
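For reference, the $L$-notation used in these complexity statements is $L_Q(a, c) = \exp\big((c + o(1))\,(\ln Q)^{a}\,(\ln\ln Q)^{1-a}\big)$; for instance, $L_{p^n}(1/3, \sqrt[3]{48/9})$ denotes a runtime subexponential in $\log p^n$ with second constant $\sqrt[3]{48/9} \approx 1.75$.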
EWCDM: An Efficient, Beyond-Birthday Secure, Nonce-Misuse Resistant MAC
We propose a nonce-based MAC construction called EWCDM (Encrypted Wegman-Carter with Davies-Meyer), based on an almost xor-universal hash function and a block cipher, with the following properties: (i) it is simple and efficient, requiring only two calls to the block cipher, one of which can be carried out in parallel to the hash function computation; (ii) it is provably secure beyond the birthday bound when nonces are not reused; (iii) it provably retains security up to the birthday bound in case of nonce misuse. Our construction is a simple modification of the Encrypted Wegman-Carter construction, which is known to achieve only (i) and (iii) when based on a block cipher. Underlying our new construction is a new PRP-to-PRF conversion method coined Encrypted Davies-Meyer, which turns a pair of secret random permutations into a function which is provably indistinguishable from a perfectly random function up to at least $2^{2n/3}$ queries, where $n$ is the bit-length of the domain of the permutations.
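Unpacking the name: the tag is computed as $E_{k_2}(E_{k_1}(N) \oplus N \oplus H_{k_h}(M))$, i.e. a Davies-Meyer feed-forward on the encrypted nonce followed by a second encryption of the Wegman-Carter sum; this matches the components named above, but consult the paper for the reference specification. A toy sketch in which the polynomial hash is a simple AXU stand-in and the packaging is ours:

```python
# Toy sketch of EWCDM: tag = E_k2( E_k1(N) xor N xor H_kh(M) ), two block-
# cipher calls plus an almost-XOR-universal hash. The polynomial hash below
# is a simple stand-in, not the paper's reference instantiation.
# Requires pycryptodome: pip install pycryptodome
from Crypto.Cipher import AES

P = 2**130 - 5   # prime for a poly1305-style polynomial hash (stand-in)

def axu_hash(kh: int, msg: bytes) -> int:
    acc = 0
    for i in range(0, len(msg), 16):
        block = int.from_bytes(msg[i:i + 16], "little") + (1 << 128)
        acc = (acc + block) * kh % P
    return acc % (1 << 128)          # fold into the 128-bit block space

def ewcdm_tag(k1: bytes, k2: bytes, kh: int, nonce: bytes, msg: bytes) -> bytes:
    E1 = AES.new(k1, AES.MODE_ECB).encrypt
    E2 = AES.new(k2, AES.MODE_ECB).encrypt
    n = int.from_bytes(nonce, "big")
    x = int.from_bytes(E1(nonce), "big") ^ n ^ axu_hash(kh, msg)
    return E2(x.to_bytes(16, "big"))

tag = ewcdm_tag(b"K1" * 8, b"K2" * 8, kh=0x1B2C3D,
                nonce=b"N" * 16, msg=b"an example message")
print(tag.hex())
```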
Single-Key to Multi-Key Functional Encryption with Polynomial Loss
Functional encryption (FE) enables fine-grained access to encrypted data. In an FE scheme, the holder of a secret key $\SK_f$ (associated with a function $f$) and a ciphertext $c$ (encrypting plaintext $x$) can learn $f(x)$ but nothing more.
An important parameter in the security model for FE is the number of secret keys that the adversary has access to. In this work, we give a transformation from an FE scheme for which the adversary gets access to a single secret key (with ciphertext size sub-linear in the circuit for which this secret key is issued) to one that is secure even if the adversary gets access to an {unbounded} number of secret keys. A novel feature of our transformation is that its security proof incurs only a {\em polynomial} loss.
Programmable Hash Functions from Lattices: Short Signatures and IBEs with Small Key Sizes
Driven by the open problem raised by Hofheinz and Kiltz (Journal of Cryptology, 2012), we study the formalization of lattice-based programmable hash function (PHF), and give two types of constructions by using several techniques such as a novel combination of cover-free sets and lattice trapdoors. Under the Inhomogeneous Small Integer Solution (ISIS) assumption, we show that any (non-trivial) lattice-based PHF is collision-resistant, which gives a direct application of this new primitive. We further demonstrate the power of lattice-based PHF by giving generic constructions of signature and identity-based encryption (IBE) in the standard model, which not only provide a way to unify several previous lattice-based schemes using the partitioning proof techniques, but also allow us to obtain a new short signature scheme and a new fully secure IBE scheme with keys consisting of a logarithmic number of matrices/vectors in the security parameter $\kappa$. Besides, we also give a refined way of combining two concrete PHFs to construct an improved short signature scheme with short verification keys from weaker assumptions. In particular, our methods depart from the confined guessing technique of Böhl et al. (Eurocrypt'13) that was used to construct previous standard model short signature schemes with short verification keys by Ducas and Micciancio (Crypto'14) and by Alperin-Sheriff (PKC'15), and allow us to achieve existential unforgeability against chosen message attacks (EUF-CMA) without resorting to chameleon hash functions.
A Novel Methodology for Testing Hardware Security and Trust Exploiting On-Chip Power Noise Measurements (Extended Version)
Testing of electronic components is indispensable to minimize the malfunction and failure of complex electronic systems. Currently, functionality and performance of these electronic components are the main parameters tested. However, validation of performance is not enough when the applications are safety- or security-critical. Therefore, the security and trust of devices must also be tested before validation for such applications. In this paper, we promote the use of On-Chip Power noise Measurements (OCM) in order to test security using side-channel techniques. We then propose for the first time a standard side-channel measurement setup using OCM. Finally, we provide some key ideas on a methodology to integrate the validation of hardware security and trust into the standard testing flow, exploiting OCM.
SAT-based cryptanalysis of ACORN
The CAESAR competition aims to provide a portfolio of authenticated encryption algorithms. SAT solvers represent powerful tools to verify automatically and efficiently (among other properties) the confidentiality and authenticity claims of cryptographic primitives. In this work, we study the security of the CAESAR candidate ACORN against SAT-based cryptanalysis. We provide the first practical and efficient attacks on the first and the last versions of ACORN. More precisely, we achieve state recovery, key recovery, state collision as well as forgery attacks. All our results demonstrate the usefulness of SAT solvers for cryptanalysing the candidates of the CAESAR competition, thereby accelerating the "test of time".
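As a flavor of the workflow, SAT-based cryptanalysis encodes cipher relations as CNF clauses and asks a solver for a satisfying assignment (a state, a key, or a forgery). A minimal sketch with a generic SAT library; the toy clauses encode a single XOR relation, not ACORN itself:

```python
# Flavor of SAT-based cryptanalysis: cipher relations become CNF clauses
# and the solver searches for a consistent assignment. The toy formula
# below encodes c = a XOR b for single bits, not ACORN itself.
# Requires python-sat: pip install python-sat
from pysat.solvers import Glucose3

a, b, c = 1, 2, 3               # DIMACS-style variable ids
solver = Glucose3()
# CNF encoding of c <-> (a xor b):
solver.add_clause([-a, -b, -c])
solver.add_clause([a, b, -c])
solver.add_clause([a, -b, c])
solver.add_clause([-a, b, c])
solver.add_clause([a])          # constrain a = 1 ("known keystream bit")
solver.add_clause([-b])         # constrain b = 0
print(solver.solve())           # True
print(solver.get_model())       # [1, -2, 3] -> c is forced to 1
```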
Universally Composable Two-Server PAKE
Two-Server Password Authenticated Key Exchange (2PAKE) protocols apply secret sharing techniques to achieve protection against server-compromise attacks. 2PAKE protocols eliminate the need for password hashing and remain secure as long as one of the servers remains honest. This concept has also been explored in connection with two-server password authenticated secret sharing (2PASS) protocols for which game-based and universally composable versions have been proposed. In contrast, universally composable PAKE protocols exist currently only in the single-server scenario and all proposed 2PAKE protocols use game-based security definitions.
In this paper we propose the first construction of a universally composable 2PAKE protocol, along with its ideal functionality. The protocol is proven UC-secure in the standard model, assuming a common reference string, which is a common assumption in many UC-secure PAKE and PASS protocols. The proposed protocol remains secure for arbitrary password distributions. As one of the building blocks we define and construct a new cryptographic primitive, called Trapdoor Distributed Smooth Projective Hash Function (TD-SPHF), which could be of independent interest.
On the Relationship between Statistical Zero-Knowledge and Statistical Randomized Encodings
\emph{Statistical Zero-knowledge proofs} (Goldwasser, Micali and Rackoff, SICOMP 1989) allow a computationally-unbounded server to convince a computationally-limited client that an input $x$ is in a language $\Pi$ without revealing any additional information about $x$ that the client cannot compute by herself. \emph{Randomized encoding} (RE) of functions (Ishai and Kushilevitz, FOCS 2000) allows a computationally-limited client to publish a single (randomized) message, $\enc(x)$, from which the server learns whether $x$ is in $\Pi$ and nothing else.
It is known that $SRE$, the class of problems that admit statistically private randomized encoding with polynomial-time client and computationally-unbounded server, is contained in the class $SZK$ of problems that have statistical zero-knowledge proof. However, the exact relation between these two classes, and, in particular, the possibility of equivalence was left as an open problem.
In this paper, we explore the relationship between $SRE$ and $SZK$, and derive the following results:
* In a non-uniform setting, statistical randomized encoding with one-side privacy ($1RE$) is equivalent to non-interactive statistical zero-knowledge ($NISZK$). These variants were studied in the past as natural relaxation/strengthening of the original notions. Our theorem shows that proving $SRE=SZK$ is equivalent to showing that $1RE=RE$ and $SZK=NISZK$. The latter is a well-known open problem (Goldreich, Sahai, Vadhan, CRYPTO 1999).
* If $SRE$ is non-trivial (not in $BPP$), then infinitely-often one-way functions exist. The analog hypothesis for $SZK$ yields only \emph{auxiliary-input} one-way functions (Ostrovsky, Structure in Complexity Theory, 1991), which is believed to be a significantly weaker implication.
* If there exists an average-case hard language with \emph{perfect randomized encoding}, then collision-resistance hash functions (CRH) exist. Again, a similar assumption for $SZK$ implies only constant-round statistically-hiding commitments, a primitive which seems weaker than CRH.
We believe that our results sharpen the relationship between $SRE$ and $SZK$ and illuminate the core differences between these two classes.
Attribute-based Key Exchange with General Policies
Attribute-based methods provide authorization to parties based on whether their set of attributes (e.g., age, organization, etc.) fulfills a policy. In attribute-based encryption (ABE), authorized parties can decrypt, and in attribute-based credentials (ABCs), authorized parties can authenticate themselves. In this paper, we combine elements of ABE and ABCs together with garbled circuits to construct attribute-based key exchange (ABKE). Our focus is on an interactive solution involving a client that holds a certificate (issued by an authority) vouching for that client's attributes and a server that holds a policy computable on such a set of attributes. The goal is for the server to establish a shared key with the client but only if the client's certified attributes satisfy the policy. Our solution enjoys strong privacy guarantees for both the client and the server, including attribute privacy and unlinkability of client sessions.
Our main contribution is a construction of ABKE for arbitrary circuits with high (concrete) efficiency. Specifically, we support general policies expressible as boolean circuits computed on a set of attributes. Even for policies containing hundreds of thousands of gates the performance cost is dominated by two pairing computations per policy input. Put another way, for a similar cost to prior ABE/ABC solutions, which can only support small formulas efficiently, we can support vastly richer policies.
We implemented our solution and report on its performance. For policies with 100,000 gates and 200 inputs over a realistic network, the server and client spend 957 ms and 176 ms on computation, respectively. When using offline preprocessing and batch signature verification, this drops to only 243 ms and 97 ms.
Towards Practical Tools for Side Channel Aware Software Engineering: `Grey Box' Modelling for Instruction Leakages
Power (along with EM, cache and timing) leaks are of considerable concern for developers who have to deal with cryptographic components as part of their overall software implementation, in particular in the context of embedded devices. Whilst there exist some compiler tools to detect timing leaks, similar progress towards pinpointing power and EM leaks has been hampered by limits on the amount of information available about the physical components from which such leaks originate.
We suggest a novel modelling technique capable of producing high-quality instruction-level power (and/or EM) models without requiring a detailed hardware description of a processor or information about the process technology used (access to both of which is typically restricted). We show that our methodology is effective at capturing differential data-dependent effects as neighbouring instructions in a sequence vary. We also explore register effects, and verify our models across several measurement boards to comment on board effects and portability. We confirm the technique's versatility by demonstrating it on two processors (the ARM Cortex-M0 and M4), and use the M0 models to develop ELMO, the first leakage simulator for the ARM Cortex-M0.
Boneh-Gentry-Hamburg's Identity-based Encryption Schemes Revisited
BasicIBE and AnonIBE are two space-efficient identity-based encryption (IBE) schemes based on quadratic residues, proposed by Boneh, Gentry, and Hamburg, and closely related to Cocks' IBE scheme. BasicIBE is secure in the random oracle model under the quadratic residuosity assumption, while AnonIBE is secure in the standard model under the interactive quadratic residuosity assumption. In this paper we revise the BasicIBE scheme and we show that if the requirements for the deterministic algorithms used to output encryption and decryption polynomials are slightly changed, then the scheme's security margin can be slightly improved.
RSA Weak Public Keys available on the Internet
It is common knowledge that RSA can fail when used with weak random number generators. In this paper we present two algorithms that we used to find vulnerable public keys, together with a simple procedure for recovering the private key from a broken public key. Our study focused on RSA keys of 512- and 1024-bit length, which are not considered safe and for which GCD computations are relatively fast. One database that we used in our study is made from 42 million public keys discovered when scanning TCP port 443 for raw X.509 certificates, between June 6, 2012 and August 4, 2013. Another database used in the study was made by crawling GitHub and retrieving the keys used by users to authenticate themselves when pushing to repositories they contribute to. We show that the percentage of broken keys with 512 bits is 3.7%, while the percentage of broken keys with 1024 bits is 0.05%. The smaller value is due to the fact that the factors of the larger keys include new primes that do not occur among the smaller keys.
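The recovery procedure alluded to is simple enough to sketch: if two moduli generated with a weak RNG share a prime, one gcd exposes it and the private exponent follows. A minimal pairwise version (at the scale of millions of keys a product-tree batch GCD is used instead):

```python
# Sketch of the shared-prime attack on RSA moduli: a gcd between two
# moduli that share a prime factor recovers that prime, and with it the
# private key. Pairwise gcds shown for clarity; a product-tree "batch GCD"
# is used at scale.
from math import gcd

def find_shared_primes(moduli: list[int]) -> dict:
    broken = {}
    for i in range(len(moduli)):
        for j in range(i + 1, len(moduli)):
            g = gcd(moduli[i], moduli[j])
            if 1 < g < moduli[i]:
                broken[moduli[i]] = g
                broken[moduli[j]] = g
    return broken

def private_exponent(n: int, p: int, e: int = 65537) -> int:
    q = n // p
    return pow(e, -1, (p - 1) * (q - 1))   # d = e^-1 mod phi(n)

# Toy example: the first two moduli share the prime 1009.
mods = [1009 * 2003, 1009 * 3019, 4001 * 4003]
for n, fac in find_shared_primes(mods).items():
    print(n, "factored:", fac, "d =", private_exponent(n, fac))
```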
Cryptography with Auxiliary Input and Trapdoor from Constant-Noise LPN
Dodis, Kalai and Lovett (STOC 2009) initiated the study of the Learning Parity with Noise (LPN) problem with (static) exponentially hard-to-invert auxiliary input. In particular, they showed that under a new assumption (called Learning Subspace with Noise) the above is quasi-polynomially hard in the high (polynomially close to uniform) noise regime.
Inspired by the ``sampling from subspace'' technique of Yu (ePrint 2009/467) and Goldwasser et al. (ITCS 2010), we show that standard LPN can work in a mode (reducible to itself) in which constant-noise LPN (with its matrix sampled from a random subspace) is robust against sub-exponentially hard-to-invert auxiliary input, with security comparable to that of the underlying LPN. Plugging this into the framework of [DKL09], we obtain the same applications as considered in [DKL09] (i.e., CPA/CCA secure symmetric encryption schemes, average-case obfuscators, reusable and robust extractors) with resilience to a more general class of leakages, improved efficiency and better security under standard assumptions.
As a main contribution, under constant-noise LPN with certain sub-exponential hardness (i.e., $2^{\omega(n^{1/2})}$ for secret size $n$) we obtain a variant of the LPN with security on poly-logarithmic entropy sources, which in turn implies CPA/CCA secure public-key encryption (PKE) schemes and oblivious transfer (OT) protocols. Prior to this, basing PKE and OT on constant-noise LPN had been an open problem since Alekhnovich's work (FOCS 2003).
Applying TVLA to Public Key Cryptographic Algorithms
Test Vector Leakage Assessment (TVLA) has been proposed as a method of determining whether a side-channel attack is feasible, for a given implementation of a block cipher, by looking for leakage without conducting an attack. The thresholds chosen for the evaluation of leakage are such that passing the tests gives a strong indication that no leakage is present. In this document, we describe how TVLA can be adapted to public-key cryptographic algorithms, with a specific focus on RSA, ECDSA and ECDH.
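For orientation, the core statistic in TVLA is Welch's t-test computed pointwise between two classes of traces (classically fixed-versus-random inputs), with $|t| > 4.5$ at any sample flagging leakage; choosing appropriate classes for RSA, ECDSA and ECDH is the subject of the paper. A minimal sketch of that core on synthetic data:

```python
# Core of a TVLA evaluation: Welch's t-statistic, computed pointwise over
# two sets of power traces (e.g. "fixed" vs. "random" input classes).
# |t| exceeding ~4.5 anywhere is taken as evidence of leakage. The trace
# data here is synthetic.
import numpy as np

def welch_t(traces_a: np.ndarray, traces_b: np.ndarray) -> np.ndarray:
    ma, mb = traces_a.mean(axis=0), traces_b.mean(axis=0)
    va, vb = traces_a.var(axis=0, ddof=1), traces_b.var(axis=0, ddof=1)
    na, nb = len(traces_a), len(traces_b)
    return (ma - mb) / np.sqrt(va / na + vb / nb)

rng = np.random.default_rng(0)
fixed = rng.normal(0.0, 1.0, size=(5000, 100))
fixed[:, 40] += 0.3                      # inject a small "leak" at sample 40
random_ = rng.normal(0.0, 1.0, size=(5000, 100))

t = welch_t(fixed, random_)
print(np.flatnonzero(np.abs(t) > 4.5))   # -> [40]
```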
Security Analysis of ePrint Report 2016/500 "Efficient Identity-Based Encryption and Public-Key Signature from Trapdoor Subgroups"
In this short report we analyse the security of three schemes proposed by J. H. Park et al. in "Efficient Identity-Based Encryption and Public-Key Signature from Trapdoor Subgroups".
The schemes make use of trapdoor subgroups of $\ZZ_n^*$ and are secure under new assumptions called $q$-Trapdoor Subgroup Diffie-Hellman (TSDH) and $q$-Trapdoor Subgroup Exponent Inversion (TSEI). We show that, given several secret keys in the case of IBE or several signatures in the case of PKS, one can easily extract the trapdoor and break the security of the proposed schemes.
Optimal-Rate Non-Committing Encryption in a CRS Model
Non-committing encryption (NCE) implements secure channels under adaptive corruptions in situations when data erasures are not trustworthy. In this paper we are interested in the rate of NCE, i.e. in how many bits the sender and receiver need to send per plaintext bit.
In initial constructions (e.g. Canetti, Feige, Goldreich and Naor, STOC 96) the length of both the receiver message, namely the public key, and the sender message, namely the ciphertext, is m * poly(k) for an m-bit message, where k is the security parameter. Subsequent works improve efficiency significantly, achieving rate polylog(k).
We construct the first constant-rate NCE. In fact, our scheme has rate 1+o(1), which is comparable to the rate of plain semantically secure encryption. Our scheme operates in the common reference string (CRS) model. Our CRS has size poly(m, k), but it is reusable for an arbitrary polynomial number of m-bit messages. In addition, ours is the first NCE protocol with perfect correctness. We assume one-way functions and indistinguishability obfuscation for circuits.
As an essential step in our construction, we develop a technique for dealing with adversaries that modify the inputs to the protocol adaptively depending on a public key or CRS that contains obfuscated programs, while assuming only standard (polynomial) hardness of the obfuscation mechanism. This technique may well be useful elsewhere.
A Full RNS Variant of FV like Somewhat Homomorphic Encryption Schemes
Since Gentry's breakthrough work in 2009, homomorphic cryptography has received widespread attention. Implementation of a fully homomorphic cryptographic scheme is, however, still highly expensive. Somewhat Homomorphic Encryption (SHE) schemes, on the other hand, allow only a limited number of arithmetical operations in the encrypted domain, but are more practical. Many SHE schemes have been proposed, among which the most competitive ones rely on (Ring-) Learning With Errors (RLWE), where operations occur on high-degree polynomials with large coefficients. This work focuses in particular on the Chinese Remainder Theorem representation (a.k.a. Residue Number Systems) applied to the large coefficients. In SHE schemes like that of Fan and Vercauteren (FV), such a representation remains hardly compatible with procedures involving coefficient-wise division and rounding, as required in decryption and homomorphic multiplication. This paper suggests a way to entirely eliminate the need for multi-precision arithmetic, and presents techniques to enable a full RNS implementation of FV-like schemes. For dimensions between $2^{11}$ and $2^{15}$, we report speed-ups from $5\times$ to $20\times$ for decryption, and from $2\times$ to $4\times$ for multiplication.
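To illustrate the representation at stake: in an RNS, a large integer is stored as its residues modulo a basis of pairwise coprime word-size moduli, and additions and multiplications act componentwise; division-and-rounding has no such componentwise form, which is exactly the obstacle this paper works around. A toy sketch:

```python
# Toy Residue Number System: big integers become tuples of residues modulo
# pairwise-coprime word-size moduli; + and * act componentwise. Division
# with rounding has no componentwise form, which is the obstacle the paper
# removes for FV decryption and multiplication.
from math import prod

BASIS = [2**13 - 1, 2**17 - 1, 2**19 - 1]   # pairwise coprime (Mersenne primes)
M = prod(BASIS)

def to_rns(x: int) -> tuple:
    return tuple(x % m for m in BASIS)

def rns_mul(a: tuple, b: tuple) -> tuple:
    return tuple(ai * bi % m for ai, bi, m in zip(a, b, BASIS))

def from_rns(r: tuple) -> int:
    # CRT reconstruction: x = sum_i r_i * M_i * (M_i^-1 mod m_i) mod M.
    x = 0
    for ri, m in zip(r, BASIS):
        Mi = M // m
        x += ri * Mi * pow(Mi, -1, m)
    return x % M

x, y = 123456789, 987654321
assert from_rns(rns_mul(to_rns(x), to_rns(y))) == (x * y) % M
print(to_rns(x))
```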
Chosen-Key Distinguishers on 12-Round Feistel-SP and 11-Round Collision Attacks on Its Hashing Modes
Since Knudsen and Rijmen proposed the $known$-$key$ attacks at ASIACRYPT 2007, the $open$-$key$ model has become more and more popular. As the other component of the $open$-$key$ model, the $chosen$-$key$ model was applied to the full attacks on AES-256 by Biryukov \emph{et al.} at CRYPTO 2009. In this paper, we explore how the $chosen$-$key$ model affects real-world cryptography in practice and show that the 11-round generic Feistel-SP block cipher is no longer safe in its hashing modes (MMO and MP mode), as there exist collision attacks. This work improves Sasaki and Yasuda's collision attacks by 2 rounds with two interesting techniques. First, we use, for the first time, the available degrees of freedom in the key to reduce the complexity of the inbound phase, which extends the previous 5-round inbound differential to a 7-round one. This results in a 12-round $chosen$-$key$ distinguisher for the Feistel-SP block cipher. Second, inspired by the idea of Wang \emph{et al.}, we construct collisions using two blocks. The \emph{rebound attack} is used in the second compression function. We carefully balance the freedom of the first block and the complexity of the \emph{rebound attack}, and extend the $chosen$-$key$ attack to an 11-round collision attack on its hashing modes (MMO and MP mode).
Collapse-binding quantum commitments without random oracles
We construct collapse-binding commitments in the standard
model. Collapse-binding commitments were introduced by Unruh
(Eurocrypt 2016) to model the computational-binding property of commitments
against quantum adversaries, but only constructions in the random
oracle model were known.
Furthermore, we show that collapse-binding commitments imply
selected other security definitions for quantum commitments,
answering an open question by Unruh (Eurocrypt 2016).
Solving discrete logarithms on a 170-bit MNT curve by pairing reduction
Pairing-based cryptography is in a dangerous position following the breakthroughs on discrete logarithm computations in finite fields of small characteristic. Remaining instances are built over finite fields of large characteristic, and their security relies on the fact that the embedding field of the underlying curve is relatively large. How large is debatable. The aim of our work is to sustain the claim that the combination of a degree-3 embedding with too-small finite fields clearly does not provide enough security. As a computational example, we solve the DLP on a 170-bit MNT curve by exploiting the pairing embedding to a 508-bit, degree-3 extension of the base field.
TOR - Didactic pluggable transport
Considering that access to information is one of the most important aspects of modern society, the actions of certain governments or Internet providers to control or, even worse, deny their citizens'/users' access to selected data sources have led to the implementation of new communication protocols. TOR is such a protocol: the path between the original source and destination is randomly generated using a network of globally connected routers, so the client is not identified as actually accessing the resource. However, if the ISP knows that the first hop is part of TOR, or if it can identify the contents of the exchanged packets as TOR packets using advanced detection algorithms, it can still enforce its denial policies. These types of detection are circumvented by the use of bridges (TOR routers which aren't publicly known) and pluggable transports (content-changing protocols that let the traffic pass as innocent-looking). The development of a didactic pluggable transport in a simulated TOR network is the main purpose of this paper, in order to investigate the current state of the art of TOR development and analysis.
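As a deliberately naive illustration of what a pluggable transport does (a toy codec of our own invention; real transports speak the Tor pluggable-transport interface and use far stronger obfuscation), the sketch below re-frames opaque bytes as an innocuous-looking HTTP request so that a trivial pattern matcher sees no TOR framing:

```python
import base64

def wrap(payload: bytes) -> bytes:
    # Disguise an opaque payload (e.g., a TOR cell) as an HTTP POST.
    body = base64.b64encode(payload)
    header = (b"POST /upload HTTP/1.1\r\n"
              b"Content-Type: text/plain\r\n"
              b"Content-Length: " + str(len(body)).encode() + b"\r\n\r\n")
    return header + body

def unwrap(blob: bytes) -> bytes:
    # Recover the original payload on the bridge side.
    _, _, body = blob.partition(b"\r\n\r\n")
    return base64.b64decode(body)

cell = bytes(range(16))        # stand-in for a TOR cell
assert unwrap(wrap(cell)) == cell
```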
MASCOT: Faster Malicious Arithmetic Secure Computation with Oblivious Transfer
We consider the task of secure multi-party computation of arithmetic
circuits over a finite field. Unlike Boolean circuits, arithmetic
circuits allow natural computations on integers to be expressed easily
and efficiently. In the strongest setting of malicious security with a
dishonest majority — where any number of parties may deviate
arbitrarily from the protocol — most existing protocols require
expensive public-key cryptography for each multiplication in the
preprocessing stage of the protocol, which leads to a high total cost.
We present a new protocol that overcomes this limitation by using
oblivious transfer to perform secure multiplications in general finite
fields with reduced communication and computation. Our protocol is
based on an arithmetic view of oblivious transfer, with careful
consistency checks and other techniques to obtain malicious security
at a cost of less than 6 times that of semi-honest security. We
describe a highly optimized implementation together with experimental
results for up to five parties. By making extensive use of parallelism
and SSE instructions, we improve upon previous runtimes for MPC over
arithmetic circuits by more than 200 times.
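For intuition about the "arithmetic view of oblivious transfer," here is a semi-honest Python sketch of Gilboa-style share multiplication, the kind of OT-based building block that protocols in this line harden with consistency checks; the OT is simulated locally, and all of MASCOT's malicious-security machinery is omitted:

```python
import random

p = 2**61 - 1                  # toy prime field (illustrative parameter)
K = p.bit_length()

def ot(msgs, choice):
    # Ideal 1-out-of-2 oblivious transfer, simulated locally for the demo.
    return msgs[choice]

def gilboa_mul(a, b):
    # Returns additive shares (s_a, s_b) with s_a + s_b = a*b (mod p).
    s_a, s_b = 0, 0
    for i in range(K):
        r = random.randrange(p)                   # sender's random pad
        q = ot((r, (r + a * (1 << i)) % p),       # correlated message pair
               (b >> i) & 1)                      # receiver selects by bit of b
        s_a = (s_a - r) % p
        s_b = (s_b + q) % p
    return s_a, s_b

a, b = random.randrange(p), random.randrange(p)
s_a, s_b = gilboa_mul(a, b)
assert (s_a + s_b) % p == a * b % p
```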
Speeding up the Number Theoretic Transform for Faster Ideal Lattice-Based Cryptography
The Number Theoretic Transform (NTT) provides efficient algorithms for cyclic and nega-cyclic convolutions, which have many applications in computer arithmetic, e.g., for multiplying large integers and large degree polynomials. It is commonly used in cryptographic schemes that are based on the hardness of the Ring Learning With Errors (R-LWE) problem to efficiently implement modular polynomial multiplication.
We present a new modular reduction technique that is tailored for the special moduli required by the NTT. Based on this reduction, we speed up the NTT and propose faster, multi-purpose algorithms. We present two implementations of these algorithms: a portable C implementation and a high-speed implementation using assembly with AVX2 instructions. To demonstrate the improved efficiency in an application example, we benchmarked the algorithms in the context of the R-LWE key exchange protocol that has recently been proposed by Alkim, Ducas, Pöppelmann and Schwabe. In this case, our C and assembly implementations compute the full key exchange 1.49 and 1.13 times faster, respectively. These results are achieved with full protection against timing attacks.
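As a concrete illustration of the arithmetic being accelerated, the toy Python sketch below multiplies polynomials modulo x^n + 1 (the negacyclic convolution used in R-LWE schemes) via an NTT. The parameters n = 8, q = 257 are our own illustrative choices, the transform is a naive O(n^2) one, and the paper's contributions, the specialized modular reduction and AVX2 code, are not reproduced here:

```python
import random

n, q = 8, 257        # toy parameters: q prime, q ≡ 1 (mod 2n)
# psi: a primitive 2n-th root of unity mod q (so psi^n ≡ -1 mod q).
psi = next(x for x in range(2, q) if pow(x, n, q) == q - 1)
w = pow(psi, 2, q)   # n-th root of unity for the cyclic NTT

def ntt(a, root):
    # Naive O(n^2) transform: evaluate a at the powers of `root`.
    return [sum(a[j] * pow(root, i * j, q) for j in range(n)) % q
            for i in range(n)]

def negacyclic_mul(a, b):
    # Twist by powers of psi to fold the x^n = -1 wraparound into a plain
    # cyclic convolution, multiply pointwise, then untwist.
    aw = ntt([a[i] * pow(psi, i, q) % q for i in range(n)], w)
    bw = ntt([b[i] * pow(psi, i, q) % q for i in range(n)], w)
    cw = ntt([x * y % q for x, y in zip(aw, bw)], pow(w, q - 2, q))
    ninv = pow(n, q - 2, q)
    return [c * ninv % q * pow(psi, q - 1 - i, q) % q
            for i, c in enumerate(cw)]

def schoolbook(a, b):
    # Reference result: multiply and reduce mod x^n + 1 directly.
    c = [0] * n
    for i in range(n):
        for j in range(n):
            s = -1 if i + j >= n else 1
            c[(i + j) % n] = (c[(i + j) % n] + s * a[i] * b[j]) % q
    return c

a = [random.randrange(q) for _ in range(n)]
b = [random.randrange(q) for _ in range(n)]
assert negacyclic_mul(a, b) == schoolbook(a, b)
```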
MQSAS - A Multivariate Sequential Aggregate Signature Scheme
(Sequential) aggregate signature schemes enable a group of users $u_1, \dots, u_k$ with messages $m_1, \dots, m_k$ to produce a single signature $\Sigma$ which attests to the integrity and authenticity of all the messages $m_1, \dots, m_k$. The signature $\Sigma$ is thereby significantly shorter than a concatenation of individual signatures. Aggregate signatures can therefore improve the efficiency of numerous applications, e.g., the BGPsec protocol for Internet routing, and the development of new efficient aggregate signature schemes is an important task for cryptographic research. On the other hand, multivariate cryptography offers a huge variety of practical signature schemes, yet there is a lack of multivariate signature schemes with special properties such as aggregation. In this paper, we propose a technique to extend the HFEv- signature scheme to a sequential aggregate signature scheme, thereby creating the first multivariate signature scheme of this kind. Our scheme is very efficient and offers compression rates that outperform current lattice-based constructions for practical parameters.
Key Recovery Attack against 2.5-round pi-Cipher
In this paper, we propose a guess-and-determine attack against some variants of the π-Cipher family of authenticated ciphers. This family of ciphers is a second-round candidate of the CAESAR competition. More precisely, we show a key recovery attack with time complexity little higher than 2^{4ω}, and low data complexity, against variants of the cipher with ω-bit words, when the internal permutation is reduced to 2.5 rounds. In particular, this gives an attack with time complexity 2^72 against the variant π16-Cipher096 (using 16-bit words) reduced to 2.5 rounds, while the authors claim 96 bits of security with 3 rounds in their second-round submission. Therefore, the security margin for this variant of π-Cipher is very limited.
The attack can also be applied to lightweight variants that are not included in the CAESAR proposal, and use only two rounds. The lightweight variants π16-Cipher096 and π16-Cipher128 claim 96 bits and 128 bits of security respectively, but our attack can break the full 2 rounds with complexity 2^72.
Finally, the attack can be applied to reduced versions of two more variants of π-Cipher that were proposed in the first-round submission with 4 rounds: π16-Cipher128 (using 16-bit words) and π32-Cipher256 (using 32-bit words). The attack on 2.5 rounds has complexity 2^72 and 2^137, respectively, while the security claims for 4 rounds are 128 bits and 256 bits.
Certificateless Key Insulated Encryption: Cryptographic Primitive for Achieving Key-escrow free and Key-exposure Resilience
Certificateless encryption (CLE) alleviates the heavy certificate management of traditional public key encryption and the key escrow problem of ID-based encryption simultaneously. Current CLE schemes assume that the user's secret key is absolutely secure. Unfortunately, this assumption is too strong when CLE is deployed in a hostile setting where leakage of the secret key is inevitable. In this paper, we present a new concept called a certificateless key insulated encryption scheme (CL-KIE). We argue that this is an important cryptographic primitive that can be used to achieve key-escrow freeness and key-exposure resilience. We also present an efficient CL-KIE scheme based on bilinear pairing. After that, the security of our scheme is proved under the Bilinear Diffie-Hellman assumption in the random oracle model.
Efficient Identity-Based Encryption and Public-Key Signature from Trapdoor Subgroups
We present a new Identity-Based Encryption (IBE) scheme from a trapdoor subgroup of $\mathbb{Z}^*_{n}$ for an RSA modulus $n$. In a trapdoor subgroup of $\mathbb{Z}^*_{n}$, the subgroup order is hidden and can be used as a trapdoor. Our IBE scheme is efficient in both performance and space. Compared to practical pairing-based IBE schemes, ours is more efficient, particularly in terms of computational performance. Following Naor's observation, we also suggest a new Public-Key Signature (PKS) scheme from a trapdoor subgroup of $\mathbb{Z}^*_{n}$. A favorable feature of our PKS scheme is that the signing algorithm is exponentiation-free and requires only one modular inversion. This enables our PKS scheme to provide faster signing than practical signature schemes such as RSA and ECDSA. We prove the security of our schemes in the random oracle model under new computational hardness problems that arguably hold in the trapdoor subgroup of $\mathbb{Z}^*_{n}$.
Drone Targeted Cryptography
As flying, camera-bearing drones get smaller and lighter, they increasingly struggle with the computational burden of common ciphers while interpreting their commands and sending back their footage. A new cryptographic paradigm, offering minimal power consumption and security adjustable through randomness, can step in and enable this emerging technology to spy, follow, track, and detect, e.g., to find survivors in a collapsed structure. We describe here a cryptographic premise where intensive computation is avoided, and security is achieved via non-complex processing of keys of arbitrary size. The proposed approach is to increase the role of randomness, and to build ciphers that can handle any key size without choking on computation. Orthodox cryptography seeks to create a thorough mix between key bits and message bits, resulting in heavy-duty computation; we instead explore simple, fast ciphers that allow their user to adjust the security of the ciphertext by deciding how much randomness to use. We present the “Walk in the Park” cipher, in which the “walk” may be described either through the series of visited spots (the plaintext) or, equivalently, through the list of traversed walkways (the ciphertext). The “walking park,” being the key, determines security by its size; yet the length of the “walk” is determined by the size of the plaintext, not the size of the “park.” We describe a use scenario for the proposed cipher: a drone taking videos of variable sensitivity, and hence variable required security, handled by the size of the “park.”
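As a toy rendering of this premise (our own sketch, not the authors' construction), the "park" below is a complete labeled digraph over a small alphabet: the plaintext is the sequence of visited spots and the ciphertext is the sequence of traversed walkway labels. In the paper's setting the park, and hence the key, can grow independently of the alphabet, which is where the adjustable randomness would come from.

```python
import random

ALPHABET = [chr(c) for c in range(32, 127)]   # toy "spots" in the park

def keygen(seed):
    # The key: for every spot u, a secret labeling of the walkways u -> v.
    rng = random.Random(seed)
    fwd, inv = {}, {}
    for u in ALPHABET:
        labels = list(range(len(ALPHABET)))
        rng.shuffle(labels)
        for v, lab in zip(ALPHABET, labels):
            fwd[(u, v)] = lab                 # walkway from u to v
            inv[(u, lab)] = v
    return fwd, inv

def encrypt(fwd, plaintext, start=' '):
    pos, walk = start, []
    for spot in plaintext:                    # the walk visits the plaintext
        walk.append(fwd[(pos, spot)])         # ...but we record the walkways
        pos = spot
    return walk

def decrypt(inv, walkways, start=' '):
    pos, spots = start, []
    for lab in walkways:
        pos = inv[(pos, lab)]                 # follow the walkway back
        spots.append(pos)
    return ''.join(spots)

fwd, inv = keygen(2016)
assert decrypt(inv, encrypt(fwd, "hello drone")) == "hello drone"
```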
Towards Tightly Secure Short Signature and IBE
Constructing short signatures with tight security from standard assumptions is a long-standing open problem. We present an adaptively secure, short (and stateless) signature scheme, featuring a constant security loss relative to a conservative hardness assumption, Short Integer Solution (SIS), and the security of a concretely instantiated pseudorandom function (PRF).
This gives a class of tightly secure short lattice signature schemes whose security is based on SIS and the underlying assumption of the instantiated PRF.
Our signature construction further extends to give a class of tightly and adaptively secure ``compact'' Identity-Based Encryption (IBE) schemes, reducible with constant security loss
from Regev's vanilla Learning With Errors (LWE) hardness assumption and the security of a concretely instantiated PRF. Our approach is a novel combination of a number of techniques, including the Katz-Wang signature, the lattice-based IBE of Agrawal et al., and the key-homomorphic encryption of Boneh et al.
Our results eliminate, for the first time, the dependency between the number of the adversary's queries and the security of short signature/IBE schemes
in the context of lattice-based cryptography. They also indicate that tightly secure PRFs (with constant security loss) would imply tightly, adaptively secure short signature and IBE schemes (with constant security loss).
Secure Computation from Elastic Noisy Channels
Noisy channels enable unconditionally secure multi-party computation even against parties with unbounded computational power. But inaccurate noise estimation and adversarially determined channel characteristics render known protocols insecure. Such channels are known as unreliable noisy channels. A large body of work in the last three decades has attempted to construct secure multi-party computation from unreliable noisy channels, but this previous work has not been able to deal with most parameter settings.
In this work, we study a form of unreliable noisy channels where the unreliability is one-sided, which we name elastic noisy channels: in one form of elastic noisy channel, an adversarial receiver can increase the reception reliability unbeknownst to the sender, but the sender cannot change the channel characteristic.
Our work shows feasibility results for a large set of parameters for the elastic binary symmetric channel, significantly improving upon the best results obtainable using prior techniques. In a key departure from existing approaches, we use a more elemental correlated private randomness as an intermediate cryptographic primitive that exhibits only a rudimentary essence of oblivious transfer. Toward this direction, we introduce new information-theoretic techniques that are potentially applicable to other cryptographic settings involving unreliable noisy channels.
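A minimal Python model of the channel notion (parameters are ours and purely illustrative): the sender budgets for crossover probability 0.25, while an adversarial receiver secretly enjoys a lower 0.10, and nothing in the sender's view reveals the difference.

```python
import random

P_HONEST, P_ADV = 0.25, 0.10   # illustrative crossover probabilities

def transmit(bit, adversarial, rng=random):
    # Elastic binary symmetric channel: a cheating receiver flips each
    # bit less often, unbeknownst to the sender.
    p = P_ADV if adversarial else P_HONEST
    return bit ^ (rng.random() < p)

n = 100_000
errors = sum(transmit(0, adversarial=True) for _ in range(n))
print(f"adversarial receiver's error rate ~ {errors / n:.3f} "
      f"(sender still assumes {P_HONEST})")
```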
All Complete Functionalities are Reversible
Crépeau and Santha, in 1991, posed the question of the reversibility of functionalities, that is, which functionalities, when used in one direction, can securely implement the identical functionality in the reverse direction. Wolf and Wullschleger, in 2006, showed that oblivious transfer is reversible. We study the problem of reversibility among 2-party SFE functionalities, which also enable general multi-party computation, in the information-theoretic setting.
We show that any functionality that enables general multi-party computation, when used in both directions, is reversible. In fact, we show that any such functionality can securely realize oblivious transfer when used in an a priori fixed direction. This result enables secure computation using physical setups that parties can only use in a particular direction due to inherent asymmetries in them.
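For concreteness, here is a Python sketch of the Wolf-Wullschleger reversal cited above, with the underlying OT simulated locally: one OT call from Alice to Bob, plus a single bit sent back by Bob, realizes an OT in the opposite direction.

```python
import random

def ot(msgs, choice):
    # Ideal 1-out-of-2 OT from Alice (sender) to Bob (receiver), simulated.
    return msgs[choice]

def reversed_ot(y0, y1, c):
    # Target: Bob holds bits (y0, y1); Alice holds choice c and learns y_c.
    a = random.randrange(2)          # Alice's random pad
    t = ot((a, a ^ c), y0 ^ y1)      # Bob receives t = a ^ (c & (y0 ^ y1))
    z = y0 ^ t                       # Bob's one-bit reply
    return z ^ a                     # Alice unpads: y0 ^ (c & (y0 ^ y1)) = y_c

for y0 in (0, 1):
    for y1 in (0, 1):
        for c in (0, 1):
            assert reversed_ot(y0, y1, c) == (y0, y1)[c]
```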