All papers in 2015 (Page 6 of 1255 results)
Stream Cipher Operation Modes with Improved Security against Generic Collision Attacks
Most stream ciphers used in practice are vulnerable to generic collision attacks, which allow an attacker to compute the secret initial state from O(2^{n/2}) keystream bits in time and space O(2^{n/2}), where n denotes the inner state length of the underlying keystream generator. This implies the well-known rule that, to reach n-bit security, the inner state length should be at least 2n. Accordingly, the inner state length of recent proposals for practically used stream ciphers is quite large (e.g., n=288 for Trivium and n=160 for Grain v1).
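The generic collision attack described above can be illustrated with a toy birthday-style state recovery. The 16-bit LFSR below is a hypothetical stand-in for a keystream generator (taps and sizes are illustrative, not from any deployed cipher); the attack tabulates keystream prefixes of ~2^{n/2} random states and slides a window over ~2^{n/2} observed keystream bits until a prefix collision reveals an inner state.

```python
import random

N = 16  # toy inner-state length (real designs: n = 288 for Trivium, n = 160 for Grain v1)
MASK = (1 << N) - 1

def step(state):
    # Toy invertible state update: a 16-bit Fibonacci LFSR (illustrative taps).
    fb = ((state >> 15) ^ (state >> 13) ^ (state >> 12) ^ (state >> 10)) & 1
    return ((state << 1) | fb) & MASK

def prefix(state, length=2 * N):
    # 2n output bits identify an inner state with high probability.
    bits = 0
    for _ in range(length):
        state = step(state)
        bits = (bits << 1) | (state & 1)
    return bits

def generic_collision_attack(seed=0):
    rng = random.Random(seed)
    m = 1 << (N // 2 + 2)  # ~2^(n/2) table entries and keystream bits
    # Offline phase: tabulate keystream prefixes of random inner states.
    table = {}
    for _ in range(m):
        s = rng.getrandbits(N)
        table[prefix(s)] = s
    # Online phase: observe ~2^(n/2) keystream bits from a secret state and
    # slide a 2n-bit window over them; a birthday collision reveals the
    # inner state at that keystream offset.
    secret = rng.getrandbits(N)
    state, bits = secret, []
    for _ in range(m + 2 * N):
        state = step(state)
        bits.append(state & 1)
    for i in range(m):
        window = 0
        for b in bits[i:i + 2 * N]:
            window = (window << 1) | b
        if window in table:
            return i, table[window]  # inner state at keystream offset i
    return None
```

With n = 16 the table and the observed keystream each hold about 2^10 entries, so a collision occurs with overwhelming probability, mirroring the O(2^{n/2}) time/data trade-off stated above.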
In this paper, we suggest a simple stream cipher operation mode, or rather a simple way to modify existing operation modes such as the one used in the Bluetooth system, which provides provable security of about 2^{2n/3} against generic collision attacks. Our suggestion applies to stream ciphers (like E0 in Bluetooth) that generate keystreams partitioned into packets, where the initial state for each packet is computed from a packet-IV and the secret session key using a resynchronization algorithm.
Our security analysis is based on modeling the resynchronization algorithm in terms of the FP(1)-construction E(x,k)=F(P(x+k)+k), where k denotes an n-bit secret key (corresponding to the symmetric session key), F denotes a publicly known n-bit function (corresponding to the output function of the underlying keystream generator), P denotes a publicly known n-bit permutation (corresponding to the iterated state update function of the generator), and the input x is an n-bit public initial value. Our security bounds follow from the results presented in [Cryptology ePrint Archive: Report 2015/636], where a tight 2n/3 security bound for the FP(1)-construction in the random oracle model was proved.
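Instantiated with toy stand-ins, the FP(1)-construction can be sketched directly from its definition. Below, "+" is taken as XOR, F is a hash-based placeholder for the generator's output function, and P is a trivial byte rotation standing in for the iterated state update; none of these are the actual E0 components.

```python
import hashlib

N_BYTES = 16  # toy n = 128 bits

def F(x: bytes) -> bytes:
    # Public n-bit function (illustrative stand-in for the output function).
    return hashlib.sha256(b"F" + x).digest()[:N_BYTES]

def P(x: bytes) -> bytes:
    # Public n-bit permutation (illustrative stand-in for the iterated
    # state update); a byte-wise rotation is trivially invertible.
    return x[1:] + x[:1]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(u ^ v for u, v in zip(a, b))

def E(x: bytes, k: bytes) -> bytes:
    # FP(1)-construction: E(x, k) = F(P(x + k) + k), with "+" taken as XOR.
    # x is the public packet-IV, k the secret session key.
    return F(xor(P(xor(x, k)), k))
```

Because P and XOR are bijective, distinct IVs always reach F on distinct inputs, which is the structural property the security analysis of the FP(1)-construction builds on.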
Cryptanalysis of an Improved One-Way Hash Chain Self-Healing Group Key Distribution Scheme
In 2014, Chen et al. proposed a one-way hash self-healing group key distribution scheme for resource-constrained wireless networks in Journal of Sensors (14(14):24358-24380, DOI: 10.3390/s141224358). They asserted that their scheme 2 has constant storage overhead and low communication overhead, and is secure, i.e., it achieves mt-revocation capability, mt-wise forward secrecy, any-wise backward secrecy, and mt-wise collusion attack resistance. Unfortunately, this paper presents an attack against Chen et al.'s scheme 2 that exposes several security flaws. More precisely, a revoked user can recover other legitimate users' personal secrets, which directly breaks the mt-wise forward secrecy, mt-revocation capability, and mt-wise collusion attack resistance. Thus, Chen et al.'s scheme 2 is insecure.
Revisiting TESLA in the quantum random oracle model
We study a scheme of Bai and Galbraith (CT-RSA’14), also known as TESLA. TESLA was thought to have a tight security reduction from the learning with errors problem (LWE) in the random oracle model (ROM). Moreover, a variant using chameleon hash functions was lifted to the quantum random oracle model (QROM). However, both reductions were later found to be flawed and hence it remained unresolved until now whether TESLA can be proven to be tightly secure in the (Q)ROM.
In the present paper we provide an entirely new, tight security reduction for TESLA from LWE in the QROM (and thus in the ROM). Our security reduction involves the adaptive re-programming of a quantum oracle. Furthermore, we propose parameter sets targeting 128 bits of security against both classical and quantum adversaries and compare TESLA’s performance with state-of-the-art signature schemes.
Related-Key Attack on Full-Round PICARO
Side-channel cryptanalysis is a very efficient class of attacks that recovers secret information by exploiting the physical leakage of a device executing a cryptographic computation. To address this type of attack, many countermeasures have been proposed, and some papers have addressed the question of constructing an efficient masking scheme for existing ciphers. In their work, G.~Piret, T.~Roche and C.~Carlet took the problem the other way around and specifically designed a cipher that would be easy to mask. Their careful analysis, which started with the design of an adapted Sbox, led to the construction of a 12-round Feistel cipher named PICARO. In this paper, we present the first full-round cryptanalysis of this cipher and show how to recover the key in the related-key model. Our analysis takes advantage of the low diffusion of the key schedule together with the non-bijectivity of the PICARO Sbox. Our best trade-off has a time complexity equivalent to $2^{107.4}$ encryptions, a data complexity of $2^{99}$ plaintexts, and requires storing $2^{17}$ (plaintext, ciphertext) pairs.
Differential Computation Analysis: Hiding your White-Box Designs is Not Enough
Although all current scientific white-box approaches for standardized cryptographic primitives are broken, a large number of companies still sell "secure" white-box products. In this paper, we present a new approach to assessing the security of white-box implementations which requires neither knowledge of the look-up tables used nor any reverse engineering effort. This differential computation analysis (DCA) attack is the software counterpart of the differential power analysis attack as applied by the cryptographic hardware community.
We developed plugins to widely available dynamic binary instrumentation frameworks to produce software execution traces which contain information about the memory addresses being accessed. We show how DCA can extract the secret key from all publicly (non-commercial) available white-box programs implementing standardized cryptography by analyzing these traces to identify secret-key dependent correlations.
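The correlation step behind a DPA-style analysis can be sketched on a toy single-nibble example. The PRESENT S-box and the noiseless Hamming-weight leak below are illustrative stand-ins for the memory-access values a DBI plugin would actually record; the "implementation" is simulated rather than instrumented.

```python
# PRESENT S-box, standing in for one nibble of a white-boxed cipher.
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
HW = [bin(v).count("1") for v in range(16)]  # Hamming weight table

def pearson(xs, ys):
    # Plain Pearson correlation coefficient.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def collect_traces(secret_key):
    # Stand-in for the instrumentation step: one noiseless "trace sample"
    # per plaintext, here the Hamming weight of the S-box output.
    pts = list(range(16))
    return pts, [HW[SBOX[p ^ secret_key]] for p in pts]

def dca_recover(pts, traces):
    # Rank key guesses by correlation between predicted and observed
    # leakage; only the correct guess predicts the traces perfectly.
    return max(range(16),
               key=lambda g: pearson([HW[SBOX[p ^ g]] for p in pts], traces))
```

In real DCA the traces are longer and the leak is noisier, but the ranking principle is the same: key-dependent correlations in software execution traces single out the correct subkey.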
On Constructing One-Way Permutations from Indistinguishability Obfuscation
We prove that there is no black-box construction of a one-way permutation family from a one-way function and an indistinguishability obfuscator for the class of all oracle-aided circuits, where the construction is "domain invariant" (i.e., where each permutation may have its own domain, but these domains are independent of the underlying building blocks).
Following the framework of Asharov and Segev (FOCS '15), by considering indistinguishability obfuscation for oracle-aided circuits we capture the common techniques that have been used so far in constructions based on indistinguishability obfuscation. These include, in particular, non-black-box techniques such as the punctured programming approach of Sahai and Waters (STOC '14) and its variants, as well as sub-exponential security assumptions. For example, we fully capture the construction of a trapdoor permutation family from a one-way function and an indistinguishability obfuscator due to Bitansky, Paneth and Wichs (TCC '16). Their construction is not domain invariant and our result shows that this, somewhat undesirable property, is unavoidable using the common techniques.
In fact, we observe that constructions which are not domain invariant circumvent all known negative results for constructing one-way permutations based on one-way functions, starting with Rudich's seminal work (PhD thesis '88). We revisit this classic and fundamental problem, and resolve this somewhat surprising gap by ruling out all such black-box constructions -- even those that are not domain invariant.
Fast Garbling of Circuits Under Standard Assumptions
Protocols for secure computation enable mutually distrustful parties to jointly compute on their private inputs without revealing anything but the result. Over recent years, secure computation has become practical and considerable effort has been made to make it more and more efficient. A highly important tool in the design of two-party protocols is Yao's garbled circuit construction (Yao 1986), and multiple optimizations on this primitive have led to performance improvements of orders of magnitude over the last years. However, many of these improvements come at the price of making very strong assumptions on the underlying cryptographic primitives being used (e.g., that AES is secure for related keys, that it is circular secure, and even that it behaves like a random permutation when keyed with a public fixed key). The justification behind making these strong assumptions has been that otherwise it is not possible to achieve fast garbling and thus fast secure computation. In this paper, we take a step back and examine whether it is really the case that such strong assumptions are needed. We provide new methods for garbling that are secure solely under the assumption that the primitive used (e.g., AES) is a pseudorandom function. Our results show that in many cases, the penalty incurred is not significant, and so a more conservative approach to the assumptions being used can be adopted.
Anonymous Traitor Tracing: How to Embed Arbitrary Information in a Key
In a traitor tracing scheme, each user is given a different decryption key. A content distributor can encrypt digital content using a public encryption key and each user in the system can decrypt it using her decryption key. Even if a coalition of users combines their decryption keys and constructs some ``pirate decoder'' that is capable of decrypting the content, there is a public tracing algorithm that is guaranteed to recover the identity of at least one of the users in the coalition given black-box access to such a decoder.
In prior solutions, the users are indexed by numbers $1,\ldots,N$ and the tracing algorithm recovers the index $i$ of a user in a coalition. Such solutions implicitly require the content distributor to keep a record that associates each index $i$ with the actual identifying information for the corresponding user (e.g., name, address, etc.) in order to ensure accountability. In this work, we construct traitor tracing schemes where all of the identifying information about the user can be embedded directly into the user's key and recovered by the tracing algorithm. In particular, the content distributor does not need to separately store any records about the users of the system, and honest users can even remain anonymous to the content distributor.
The main technical difficulty comes in designing tracing algorithms that can handle an exponentially large universe of possible identities, rather than just a polynomial set of indices $i \in [N]$. We solve this by abstracting out an interesting algorithmic problem that has surprising connections with seemingly unrelated areas in cryptography. We also extend our solution to a full ``broadcast-trace-and-revoke'' scheme in which the traced users can subsequently be revoked from the system. Depending on parameters, some of our schemes can be based only on the existence of public-key encryption while others rely on indistinguishability obfuscation.
Affine Equivalence and its Application to Tightening Threshold Implementations
Motivated by the development of Side-Channel Analysis (SCA) countermeasures which can provide security up to a certain order, defeating higher-order attacks has become one of the most challenging issues. For instance, Threshold Implementation (TI), which nicely solves the problem of glitches in masked hardware designs, is able to avoid first-order leakages. Hence, its extension to higher orders aims at counteracting SCA attacks at higher orders, which might be limited to univariate scenarios. Although the higher the order, the harder the attack is to mount, with respect to both the number of traces and the sensitivity to noise, a d-th order TI design is vulnerable to an attack at order d+1. In this work we look at the feasibility of higher-order attacks on first-order TI from another perspective. Instead of increasing the order of resistance by employing higher-order TIs, we move toward introducing structured randomness into the implementation.
Our construction, a combination of masking and hiding, is dedicated to TI designs and builds on the concept of "affine equivalence" of Boolean functions. Such a combination hardens a design against higher-order attacks in practice, so that these attacks cannot be successfully mounted. We show that the area overhead of our construction is paid off by its ability to prevent higher-order leakages from being practically exploitable.
A More Cautious Approach to Security Against Mass Surveillance
At CRYPTO 2014 Bellare, Paterson, and Rogaway (BPR) presented a formal treatment of symmetric encryption in the light of algorithm substitution attacks (ASAs), which may be employed by `big brother' entities for the scope of mass surveillance. Roughly speaking, in ASAs big brother may bias ciphertexts to establish a covert channel to leak vital cryptographic information. In this work, we identify a seemingly benign assumption implicit in BPR's treatment and argue that it artificially (and severely) limits big brother's capabilities. We then demonstrate the critical role that this assumption plays by showing that even a slight weakening of it renders the security notion completely unsatisfiable by any, possibly deterministic and/or stateful, symmetric encryption scheme. We propose a refined security model to address this shortcoming, and use it to restore the positive result of BPR, but caution that this defense does not stop most other forms of covert-channel attacks.
Self-bilinear Map from One Way Encoding System and Indistinguishability Obfuscation
A bilinear map whose domain and target sets are identical is called a self-bilinear map. The original self-bilinear maps are defined over cyclic groups, which imposes severe limitations on constructing secure self-bilinear schemes: since the map itself reveals information about the underlying cyclic group, hardness assumptions such as DDHP and CDHP may no longer hold. In this paper, we use $i\mathcal{O}$ to construct a self-bilinear map from generic sets that must satisfy several properties. We propose a new notion, the One Way Encoding System (OWES), to formally describe the properties these sets should satisfy, and we define an Encoding Division Problem (EDP) to complete the security proof. As an instance of the generic construction, we propose a concrete scheme built on the GGH graded encoding system and state that any $1$-graded encoding system may satisfy the requirements of an OWES. Finally, we discuss the hardness of the EDP in the GGH graded encoding system.
A 2^{70} Attack on the Full MISTY1
MISTY1 is a block cipher designed by Matsui in 1997. It is widely deployed in Japan, and is recognized internationally as a European
NESSIE-recommended cipher and an ISO standard. After almost 20 years of unsuccessful cryptanalytic attempts, a first attack on the full MISTY1 was presented at CRYPTO 2015 by Todo. The attack, using a new technique called {\it division property}, requires almost the full codebook and has time complexity of 2^{107.3} encryptions.
In this paper we present a new attack on the full MISTY1. It is based on a modified variant of Todo's division property, along with a variety of refined key-recovery techniques. Our attack requires the full codebook, but allows retrieving 49 bits of the secret key in time complexity of only 2^{64} encryptions, and the full key in time complexity of 2^{69.5} encryptions.
While our attack is clearly impractical due to its large data complexity, it shows that MISTY1 provides security of only 2^{70} --- significantly less than what was considered before.
Faster ECC over F2571 (feat. PMULL)
In this paper, we present efficient elliptic curve cryptography implementations for B-571 over ARMv8. We improve previous binary field multiplication with finely aligned multiplication and incomplete reduction techniques, taking advantage of the advanced 64-bit polynomial multiplication instruction (\texttt{PMULL}) supported by ARMv8. This approach improves performance by a factor of 1.34 over previous binary field implementations. For point addition and doubling, the special types of multiplication, squaring and addition operations are combined and optimized, with one reduction operation saved in each case. The scalar multiplication is implemented with the constant-time Montgomery ladder algorithm, which is secure against timing attacks. Finally, the proposed implementation achieves 759,630/331,944 clock cycles for random/fixed scalar multiplications, respectively, for B-571 over ARMv8.
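The structure of the Montgomery ladder can be sketched abstractly. The version below works in a multiplicative group (modular exponentiation) rather than on B-571 point arithmetic, and its data-dependent branch would be replaced by a branchless conditional swap in a real constant-time implementation; it is a structural sketch only.

```python
def montgomery_ladder_pow(base, scalar, modulus, bits):
    # Montgomery ladder: every iteration performs the same two group
    # operations (one "add", one "double") regardless of the scalar bit,
    # which is the source of the constant-time property. Invariant:
    # r1 == r0 * base (mod modulus) after every step.
    r0, r1 = 1, base % modulus
    for i in reversed(range(bits)):
        if (scalar >> i) & 1:
            r0, r1 = (r0 * r1) % modulus, (r1 * r1) % modulus
        else:
            r0, r1 = (r0 * r0) % modulus, (r0 * r1) % modulus
    return r0
```

On an elliptic curve the same ladder is run with point addition and doubling in place of multiplication and squaring, and the scalar bits drive only which register is updated, never which operations execute.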
BitCryptor: Bit-Serialized Compact Crypto Engine on Reconfigurable Hardware
There is a significant effort in building lightweight cryptographic operations, yet the proposed solutions are typically single-purpose modules that implement a single functionality. In contrast, we propose BitCryptor, a multi-purpose, bit-serialized compact processor for cryptographic applications on reconfigurable hardware. The proposed crypto engine can perform pseudo-random number generation, strong collision-resistant hashing, and variable-key block cipher encryption. The hardware architecture utilizes SIMON, a recent lightweight block cipher, as its core. The complete engine uses a bit-serial design methodology to minimize the area. Implementation results on the Xilinx Spartan-3 s50 FPGA show that the proposed architecture occupies 95 slices (187 LUTs, 102 registers), which is 10$\times$ smaller than the nearest comparable multi-purpose design. BitCryptor is also smaller than the majority of recently proposed lightweight single-purpose designs. Therefore, it is a very efficient cryptographic IP block for resource-constrained domains, providing good performance at minimal area overhead.
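The SIMON round function at BitCryptor's core uses only AND, rotation, and XOR, which is what makes a bit-serial datapath so compact. A functional sketch of one round and its inverse (word size 16, as in Simon32/64; the key schedule and the bit-serial scheduling are omitted):

```python
def rotl(x, r, n=16):
    # Left rotation of an n-bit word.
    return ((x << r) | (x >> (n - r))) & ((1 << n) - 1)

def simon_round(x, y, k, n=16):
    # One SIMON Feistel round: f(x) = (x<<<1 & x<<<8) ^ (x<<<2),
    # built entirely from AND, rotations and XOR.
    f = (rotl(x, 1, n) & rotl(x, 8, n)) ^ rotl(x, 2, n)
    return y ^ f ^ k, x

def simon_round_inv(x_new, y_new, k, n=16):
    # The round is trivially invertible, as in any Feistel network.
    x = y_new
    f = (rotl(x, 1, n) & rotl(x, 8, n)) ^ rotl(x, 2, n)
    return x, x_new ^ f ^ k
```

In a bit-serial design each of these word operations is evaluated one bit per clock cycle, trading throughput for the very small slice count reported above.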
Short Group Signatures via Structure-Preserving Signatures: Standard Model Security from Simple Assumptions
Group signatures are a central cryptographic primitive which allows users to sign messages while hiding their identity within a crowd of group members. In the standard model (without the random oracle idealization), the most efficient constructions rely on the Groth-Sahai proof systems (Eurocrypt'08). The structure-preserving signatures of Abe et al. (Asiacrypt'12) make it possible to design group signatures based on well-established, constant-size number theoretic assumptions (a.k.a. ``simple assumptions'') like the Symmetric eXternal Diffie-Hellman or Decision Linear assumptions. While much more efficient than group signatures built on general assumptions, these constructions incur a significant overhead w.r.t.
constructions secure in the idealized random oracle model. Indeed, the best known solution based on simple assumptions requires 2.8 kB per signature for currently recommended parameters. Reducing this size and presenting techniques for shorter signatures are thus natural questions. In this paper, our first contribution is to significantly reduce this overhead. Namely, we obtain the first fully anonymous group signatures based on simple assumptions with signatures shorter than 2 kB at the 128-bit security level. In dynamic (resp. static) groups, our signature length drops to 1.8 kB (resp. 1 kB). This improvement is enabled by two technical tools. As a result of independent interest, we first construct a new structure-preserving signature based on simple assumptions which shortens the best previous scheme by 25%. Our second tool is a new method for attaining anonymity in the strongest sense using a new CCA2-secure encryption scheme which is simultaneously a Groth-Sahai commitment.
A Matrix Decomposition Method for Optimal Normal Basis Multiplication
We introduce a matrix decomposition method and prove that multiplication in GF(2^k) with a Type 1 optimal normal basis can be performed using k^2-1 XOR gates, irrespective of the choice of the irreducible polynomial generating the field. Previous results achieved this bound only with special irreducible polynomials. Furthermore, the decomposition method performs the multiplication operation using 1.5k(k-1) XOR gates for Type 2a and 2b optimal normal bases, which matches previous bounds.
On Generic Constructions of Circularly-Secure, Leakage-Resilient Public-Key Encryption Schemes
We propose generic constructions of public-key encryption schemes, satisfying key-dependent message (KDM) security for projections and different forms of key-leakage resilience, from CPA-secure private-key encryption schemes with two main abstract properties: (1) additive homomorphism with respect to both messages and randomness, and (2) reproducibility, providing a means for reusing encryption randomness across independent secret keys. More precisely, our construction transforms a private-key scheme with the stated properties (and one more mild condition) into a public-key one, providing:
- n-KDM-projection security, an extension of circular security, where the adversary may also ask for encryptions of negated secret key bits;
- a (1-o(1)) resilience rate in the bounded-memory leakage model of Akavia et al. (TCC 2009); and
- auxiliary-input security against subexponentially-hard functions.
We introduce homomorphic weak pseudorandom functions, a homomorphic version of the weak PRFs proposed by Naor and Reingold (FOCS ’95), and use them to realize our base encryption scheme. We obtain homomorphic weak PRFs under assumptions including subgroup indistinguishability (implied, in particular, by QR and DCR) and homomorphic hash-proof systems (HHPS). As corollaries of our results, we obtain (1) a projection-secure encryption scheme (as well as a scheme with a (1-o(1)) resilience rate) based solely on the HHPS assumption, and (2) a unifying approach explaining the results of Boneh et al. (CRYPTO ’08) and Brakerski and Goldwasser (CRYPTO ’10). Finally, by observing that Applebaum’s KDM amplification method (EUROCRYPT ’11) preserves both types of leakage resilience, we obtain schemes providing at the same time high leakage resilience and KDM security against any fixed polynomial-sized circuit family.
Predictable Arguments of Knowledge
We initiate a formal investigation of the power of {\em predictability} for argument of knowledge systems for \NP. Specifically, we consider private-coin argument systems where the answer of the prover can be predicted, given the private randomness of the verifier; we call such protocols Predictable Arguments of Knowledge (PAoK). Our study encompasses a full characterization of PAoK, showing that such arguments can be made extremely laconic, with the prover sending a single bit, and that without loss of generality they can be assumed to have only one round (i.e., two messages) of communication. We additionally explore PAoK satisfying further properties (including zero-knowledge and the possibility of re-using the same challenge across multiple executions with the prover), present several constructions of PAoK relying on different cryptographic tools, and discuss applications to cryptography.
Last updated: 2017-02-23
Practical and Scalable Sharing of Encrypted Data in Cloud Storage with Key Aggregation
We study a sensor network setting in which samples are encrypted individually using different keys and maintained on cloud storage. For large systems, e.g., those that generate several million samples per day, fine-grained sharing of encrypted samples is challenging. Existing solutions, such as Attribute-Based Encryption (ABE) and Key Aggregation Cryptosystem (KAC), can be utilized to address the challenge, but only to a certain extent. They are often computationally expensive and thus unlikely to operate at scale. We propose an algorithmic enhancement and two heuristics to improve KAC's key reconstruction cost, while preserving its provable security. The improvement is particularly significant for range and down-sampling queries, accelerating the reconstruction cost from quadratic to linear running time. An experimental study shows that for queries of size 2^15 samples, the proposed fast reconstruction techniques speed up the original KAC by at least 90 times on range and down-sampling queries, and by eight times on general (arbitrary) queries. It also shows that, at the expense of splitting the query into 16 sub-queries and correspondingly issuing that number of different aggregated keys, reconstruction time can be reduced by 19 times. As such, the proposed techniques make KAC more applicable in practical scenarios such as sensor networks or the Internet of Things.
Authenticated Encryption with Small Stretch (or, How to Accelerate AERO)
The standard form of authenticated encryption (AE) requires the ciphertext to be expanded by a nonce and an authentication tag. These expansions can be problematic when messages are relatively short and communication cost is high. To overcome the problem, we propose a new form of AE scheme, MiniAE, which expands the ciphertext only by a single variable that integrates the nonce and tag. An important feature of MiniAE is that it requires the receiver to be stateful, not only for detecting replays but also for detecting forgery of any type. McGrew and Foley already proposed a scheme with this feature, called AERO; however, it lacks a formal security guarantee in the provable security framework. We provide a provable security analysis for MiniAE and present several provably-secure schemes using standard symmetric crypto primitives. These cover a generalization of AERO, so our results imply the provable security of AERO. Moreover, one of our schemes has a structure similar to the OCB mode of operation and enables rate-1 operation, i.e., only one blockcipher call to process one input block. This implies that the computation cost of MiniAE can be as small as that of encryption-only schemes.
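The stateful single-variable idea can be sketched with a toy scheme. SHA-256 and HMAC below are hypothetical stand-ins for the blockcipher-based constructions in the paper: the counter (nonce role) is never transmitted, and the single value w simultaneously authenticates the ciphertext (tag role) and tells the stateful receiver which counter was used.

```python
import hashlib
import hmac

def send(key, ctr, msg):
    # Sender keeps its own counter; neither nonce nor tag is sent in the
    # clear. Toy keystream from SHA-256; msg must fit in 32 bytes.
    n = ctr.to_bytes(8, "big")
    ks = hashlib.sha256(key + b"enc" + n).digest()
    ct = bytes(m ^ k for m, k in zip(msg, ks))
    w = hmac.new(key, n + ct, hashlib.sha256).digest()[:8]
    return ct, w

def recv(key, state, ct, w, window=4):
    # Stateful receiver: tries the next few expected counter values. A
    # valid w both authenticates ct and identifies the counter used;
    # anything else (replay or forgery) is rejected.
    for ctr in range(state["ctr"], state["ctr"] + window):
        n = ctr.to_bytes(8, "big")
        exp = hmac.new(key, n + ct, hashlib.sha256).digest()[:8]
        if hmac.compare_digest(exp, w):
            state["ctr"] = ctr + 1  # old counters can no longer be replayed
            ks = hashlib.sha256(key + b"enc" + n).digest()
            return bytes(c ^ k for c, k in zip(ct, ks))
    return None
```

Replays fail because the receiver's counter has already advanced past the replayed value, and forgeries fail because no expected counter yields a matching w.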
New multilinear maps from ideal lattices
Recently, Hu and Jia presented an efficient attack on the GGH13 map, showing that the MPKE and WE schemes based on GGH13 with public tools of encoding are not secure. Fixing GGH13 while preserving its functionality is currently an open problem. By modifying the zero-testing parameter and using the modulus-switching method, we present a new construction of a multilinear map from ideal lattices. Our construction maintains the functionality of GGH13 with public tools of encoding, including applications such as GGH13-based MPKE and WE. The security of our construction relies on a new hardness assumption.
Last updated: 2015-07-30
Solving LWE via List Decoding
Learning with errors (LWE), introduced by Regev in 2005, enjoys attractive worst-case hardness properties and has served as the foundation for a variety of cryptographic schemes. There are two main types of attacks against LWE: one for the decision version, the other for the search version.
In this paper, we apply the list decoding method to solve the search version of LWE. Our algorithm runs in probabilistic polynomial time and yields specific security estimates for a large range of parameters. To our knowledge, this is the first application of the list decoding method to recovering the key of an LWE instance. Our algorithm improves on Laine and Lauter's result.
Cutting-Edge Cryptography Through the Lens of Secret Sharing
Secret sharing is a mechanism by which a trusted dealer holding a secret "splits" the secret into many "shares" and distributes the shares to a collection of parties. Associated with the sharing is a monotone access structure, that specifies which parties are "qualified" and which are not: any qualified subset of parties can (efficiently) reconstruct the secret, but no unqualified subset can learn anything about the secret. In the most general form of secret sharing, the access structure can be any monotone NP language.
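The classic threshold instance of the mechanism just described is Shamir's scheme, where the qualified sets are exactly those containing at least t of the n parties. A minimal sketch over a prime field (the modulus and seed below are illustrative):

```python
import random

P = 2**127 - 1  # a Mersenne prime, used as the field modulus

def share(secret, t, n, rng=random.Random(0)):
    # Dealer: sample a random degree-(t-1) polynomial whose constant
    # term is the secret; party i receives the point (i, f(i)).
    coeffs = [secret] + [rng.randrange(P) for _ in range(t - 1)]
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    # Any t (or more) parties: Lagrange interpolation at x = 0.
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret
```

Fewer than t shares determine nothing about the constant term, which is the information-theoretic secrecy property; the distributed and functional variants studied in this paper generalize exactly this dealer/reconstruction interface.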
In this work, we consider two very natural extensions of secret sharing. In the first, which we call distributed secret sharing, there is no trusted dealer at all, and instead the role of the dealer is distributed amongst the parties themselves. Distributed secret sharing can be thought of as combining the features of multiparty non-interactive key exchange and standard secret sharing, and may be useful in settings where the secret is so sensitive that no one individual dealer can be trusted with the secret. Our second notion is called functional secret sharing, which incorporates some of the features of functional encryption into secret sharing by providing more fine-grained access to the secret. Qualified subsets of parties do not learn the secret, but instead learn some function applied to the secret, with each set of parties potentially learning a different function.
Our main result is that both of the extensions above are equivalent to several recent cutting-edge primitives. In particular, general-purpose distributed secret sharing is equivalent to witness PRFs, and general-purpose functional secret sharing is equivalent to indistinguishability obfuscation. Thus, our work shows that it is possible to view some of the recent developments in cryptography through a secret sharing lens, yielding new insights about both these cutting-edge primitives and secret sharing.
Last updated: 2016-03-31
On the Security of Extended Generalized Feistel Networks
In this paper, we analyze the security claims of the Extended Generalized Feistel Networks (EGFNs) proposed by Berger et al. [1]. We provide impossible differentials for 10 rounds of EGFNs with 16 branches, adding one round to the designers' claimed 9-round impossible differential trail. The impossible differential trail therefore covers 10 rounds of the EGFN scheme, which is the best result on impossible differentials of EGFNs so far. We also provide several 10-round impossible differential trails to attack new cipher proposals based on EGFNs. The u-method was used by the authors to support their claim for the maximum number of rounds in impossible differential trails of EGFNs; our analysis indicates that the u-method does not provide optimal results for this scheme.
Fully Homomorphic Encryption on Octonion Ring
In previous work (2015/474 in the Cryptology ePrint Archive), I proposed a fully homomorphic encryption scheme without bootstrapping which has a weak point in its enciphering function. In this paper, I propose an improved fully homomorphic encryption scheme on the non-associative octonion ring over a finite field, again without the bootstrapping technique. I improve the previous scheme by (1) adopting an enciphering function that is difficult to express simply using matrices, and (2) composing the plaintext p from two sub-plaintexts u and v. The improved scheme is immune to the “p and -p attack”. It is based on multivariate algebraic equations of high degree or with many variables, whereas almost all multivariate cryptosystems proposed so far are based on quadratic equations to avoid an explosion of the coefficients. The improved scheme also resists the Gröbner basis attack.
The key size of this scheme and the complexity of enciphering/deciphering are small enough to be practical.
Compact Implementations of LEA Block Cipher for Low-End Microprocessors
In WISA'13, a novel lightweight block cipher named LEA was released. This algorithm has several features that are useful for hardware and software implementations, i.e., simple ARX operations, a non-S-box architecture, and a 32-bit word size. These features have been exploited on several platforms for practical usage with high performance and low overhead. In this paper, we further improve 128-, 192- and 256-bit LEA encryption for low-end embedded processors. Firstly, we present speed-optimization methods that split a 32-bit word operation into four byte-wise operations and avoid several rotation operations by taking advantage of efficient byte-wise rotations. Secondly, we reduce the code size: we find the minimal inner loops, optimize them at the instruction-set level, and then construct the whole algorithm in a partly unrolled fashion with reasonable speed. Finally, we achieve the fastest LEA implementations known, improving performance by 10.9% over the previous best results. For size optimization, our implementation occupies only 280 bytes for LEA encryption. After scaling, this is the smallest ARX implementation so far, compared with other state-of-the-art ARX block ciphers such as SPECK and SIMON.
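As a rough illustration of the byte-wise rotation trick described above (in Python rather than AVR assembly, with hypothetical helper names), a 32-bit rotation by a multiple of 8 bits reduces to a pure byte permutation, which costs nothing but register renaming on an 8-bit processor:

```python
def rotl32(x, r):
    """Rotate a 32-bit word left by r bits (reference implementation)."""
    return ((x << r) | (x >> (32 - r))) & 0xFFFFFFFF

def rotl32_bytewise(x, r):
    """Rotation by a multiple of 8 bits expressed as a byte permutation.
    On an 8-bit microcontroller this needs no shift instructions at all."""
    assert r % 8 == 0
    b = x.to_bytes(4, "little")
    k = r // 8
    # after rotating left by 8*k bits, output byte i is input byte i-k
    rot = bytes(b[(i - k) % 4] for i in range(4))
    return int.from_bytes(rot, "little")
```

Rotations by other amounts can then be decomposed into one such byte permutation plus a small residual shift, which is the essence of the speed optimization.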
Same Value Analysis on Edwards Curves
Recently, several research groups in cryptography have presented new elliptic curve models based on Edwards curves.
These new curves were selected for their good performance and security perspectives.
Cryptosystems based on elliptic curves in embedded devices can be vulnerable to Side-Channel Attacks (SCA), such as the Simple Power Analysis (SPA) or the Differential Power Analysis (DPA).
In this paper, we analyze, for Edwards curves, the existence of special points whose use in SCA is known as Same Value Analysis (SVA). These special points show up as internal collisions under power analysis. Our results indicate that no Edwards curve is safe from such attacks.
Indistinguishability Obfuscation from Functional Encryption for Simple Functions
We show how to construct indistinguishability obfuscation (iO) for circuits from any non-compact functional encryption (FE) scheme with sub-exponential security against unbounded collusions. We accomplish this by giving a generic transformation from any such FE scheme into a compact FE scheme. By composing this with the transformation from sub-exponentially secure compact FE to iO (Ananth and Jain [CRYPTO'15], Bitansky and Vaikuntanathan [FOCS'15]), we obtain our main result.
Our result provides a new pathway to iO.
We use our technique to identify a simple function family for FE that suffices for our general result. We show that the function family F is complete, where every f in F consists of three evaluations of a Weak PRF followed by finite operations. We believe that this may be useful for realizing iO from weaker assumptions in the future.
Provably-Secure Remote Memory Attestation to Prevent Heap Overflow Attacks
We initiate the study of provably secure remote memory attestation. We present two protocols, offering different efficiency and security trade-offs, that detect the presence of injected malicious code in remotely stored heap memory. While our solutions offer protection only against a specific class of attacks, our novel formal security definitions are general enough to cover a wide range of attacks and settings, and should be useful for further research on the subject.
Provable Virus Detection: Using the Uncertainty Principle to Protect Against Malware
Protecting software from malware injection is the holy grail of modern computer security. Despite intensive efforts by the scientific and engineering community, the number of successful attacks continues to increase.
We present a novel approach to provably detect malware injection. The key idea is to use the very insertion of the malware itself to allow the system to detect it. This is, in our opinion, close in spirit to the famous Heisenberg Uncertainty Principle: the attackers, no matter how clever and no matter when or how they insert their malware, change the state of the system they are attacking. Our system does not rely on heuristics; instead, it is proved secure in a formal and precise mathematical sense, and achieves strong provable security guarantees with minimal and realistic CPU modification. We therefore anticipate that our system and its formal mathematical security treatment will open new directions in software protection.
DPA, Bitslicing and Masking at 1 GHz
We present DPA attacks on an ARM Cortex-A8 processor running at 1 GHz. This high-end processor is typically found in portable devices such as phones and tablets. In our case, the processor sits in a single board computer and runs a full-fledged Linux operating system. The targeted AES implementation is bitsliced and runs in constant time and constant flow. We show that, despite the complex hardware and software, high clock frequencies and practical measurement issues, the implementation can be broken with DPA starting from a few thousand measurements of the electromagnetic emanation of a decoupling capacitor near the processor. To harden the bitsliced implementation against DPA attacks, we mask it using principles of hardware gate-level masking. We evaluate the security of our masked implementation against first-order and second-order attacks. Our experiments show that successful attacks require roughly two orders of magnitude more measurements.
Compositions of linear functions and applications to hashing
Cayley hash functions are based on a simple idea of using a pair of
(semi)group elements, $A$ and $B$, to hash the 0 and 1 bit, respectively, and then to hash an arbitrary bit string in the
natural way, by using multiplication of elements in the (semi)group.
In this paper, we focus on hashing with linear functions of one
variable over F_p. The corresponding hash functions are very efficient. In particular, we show that hashing a bit string of length n with our method requires, in general, at most 2n multiplications in F_p, but
with particular pairs of linear functions that we suggest, one does
not need to perform any multiplications at all. We also give explicit lower bounds on the length of collisions for hash functions corresponding to these particular pairs of linear functions over F_p.
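A minimal sketch of the hashing idea, assuming bits are hashed left-to-right by composing the corresponding linear maps $x \mapsto ax + b$ over F_p (the coefficient pairs and modulus below are illustrative, not the particular pairs suggested in the paper):

```python
def cayley_hash(bits, p, f0, f1):
    """Hash a bit string as a composition of linear maps over F_p.
    f0 and f1 are (a, b) coefficient pairs, hashing the bits 0 and 1.
    The digest is the coefficient pair of the composed map."""
    a, b = 1, 0                      # identity map x -> x
    for bit in bits:
        c, d = f1 if bit else f0
        # compose: apply the new map inside the accumulated one,
        # (a*x + b) o (c*x + d) = (a*c)*x + (a*d + b)
        a, b = (a * c) % p, (a * d + b) % p
    return a, b
```

Note the homomorphic structure this convention gives: the hash of a concatenation is the composition of the two hashes, which is what makes parallel evaluation and the collision analysis in the paper possible.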
The self-blindable U-Prove scheme from FC'14 is forgeable
Recently an unlinkable version of the U-Prove attribute-based credential scheme was proposed at Financial Crypto '14. Unfortunately, the new scheme is forgeable: if sufficiently many users work together then they can construct new credentials, containing any set of attributes of their choice, without any involvement of the issuer. In this note we show how they can achieve this and we point out the error in the unforgeability proof.
A masked ring-LWE implementation
Lattice-based cryptography has been proposed as a post-quantum public-key cryptosystem. In this paper, we present a masked ring-LWE decryption implementation resistant to first-order side-channel attacks. Our solution has the peculiarity that the entire computation is performed in the masked domain. This is achieved thanks to a new, bespoke masked decoder implementation. The output of the ring-LWE decryption consists of Boolean shares suitable for derivation of a symmetric key. We have implemented a hardware architecture of the masked ring-LWE processor on a Virtex-II FPGA, and have performed side-channel analysis to confirm the soundness of our approach. The area of the protected architecture is around $2000$ LUTs, a $20\%$ increase with respect to the unprotected architecture. The protected implementation takes $7478$ cycles to compute, which is only a factor $\times2.6$ larger than the unprotected implementation.
Cryptanalysis of Feistel Networks with Secret Round Functions
Generic distinguishers against Feistel Networks with up to 5 rounds exist in the regular setting, and with up to 6 rounds in a multi-key setting. We present new cryptanalyses of Feistel Networks with 5, 6 and 7 rounds which are not mere distinguishers but completely recover the unknown Feistel functions.
When an exclusive-or is used to combine the output of the round function with the other branch, we use the so-called \textit{yoyo game} which we improved using a heuristic based on particular cycle structures. The complexity of a complete recovery is equivalent to $O(2^{2n})$ encryptions where $n$ is the branch size. This attack can be used against 6- and 7-round Feistel Networks in time respectively $O(2^{n2^{n-1}+2n})$ and $O(2^{n2^{n}+2n})$. However when modular addition is used, this attack does not work. In this case, we use an optimized guess-and-determine strategy to attack 5 rounds with complexity $O(2^{n2^{3n/4}})$.
Our results are, to the best of our knowledge, the first recovery attacks against generic 5-, 6- and 7-round Feistel Networks.
Oblivious Substring Search with Updates
We are the first to address the problem of efficient oblivious substring search over encrypted data supporting updates. Our two new protocols, SA-ORAM and ST-ORAM, obliviously search for substrings in an outsourced set of n encrypted strings. Both protocols are efficient, requiring communication complexity that is only poly-logarithmic in n. Compared to a straightforward solution for substring search using recent “oblivious data structures” [30], we demonstrate that our tailored solutions improve communication complexity by a factor of log n. The idea behind SA-ORAM and ST-ORAM is to employ a new, hierarchical ORAM tree structure that takes advantage of data dependency and optimizes the size of ORAM blocks and the tree height. Based on oblivious suffix arrays, SA-ORAM targets efficiency, yet does not allow updates to the outsourced set of strings. ST-ORAM, based on oblivious suffix trees, allows updates at an additional communication cost of a factor of log log n. We implement and benchmark SA-ORAM to show its feasibility for practical deployments: even for huge datasets of 2^40 strings, an oblivious substring search can be performed with only hundreds of KBytes of communication cost.
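As a (non-oblivious) baseline for the data structure underlying SA-ORAM, a plain suffix-array substring search looks as follows; the protocol's contribution is storing and traversing such a structure obliviously inside an ORAM tree:

```python
def suffix_array(s):
    """Naive O(n^2 log n) suffix array construction: sort suffix
    start positions by the suffixes they index."""
    return sorted(range(len(s)), key=lambda i: s[i:])

def contains(s, sa, pat):
    """Binary search for pat among the lexicographically sorted
    suffixes: find the first suffix >= pat, then check the prefix."""
    lo, hi = 0, len(sa)
    while lo < hi:
        mid = (lo + hi) // 2
        if s[sa[mid]:sa[mid] + len(pat)] < pat:
            lo = mid + 1
        else:
            hi = mid
    return lo < len(sa) and s[sa[lo]:sa[lo] + len(pat)] == pat
```

Each probe of the binary search touches one array cell, and it is exactly this log n access pattern that SA-ORAM must hide from the server.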
KDM-Security via Homomorphic Smooth Projective Hashing
We present new frameworks for constructing public-key encryption schemes satisfying key-dependent message (KDM) security and that yield efficient, universally composable oblivious transfer (OT) protocols via the dual-mode cryptosystem framework of Peikert, Waters and Vaikuntanathan (Crypto 2008).
– Our first framework yields a conceptually simple and unified treatment of the KDM-secure schemes of Boneh et al. (Crypto 2008), Brakerski and Goldwasser (Crypto 2010) and Brakerski, Goldwasser and Kalai (TCC 2011) in the single-key setting.
– Using our second framework, we obtain new dual-mode cryptosystems based on the d-linear, quadratic residuosity and decisional composite residuosity assumptions.
Both of these frameworks build on the notion of smooth projective hashing introduced by Cramer and Shoup (Eurocrypt 2002), with the additional requirement that the hash function is homomorphic, as is the case for all known instantiations.
Output-Compressing Randomized Encodings and Applications
We consider randomized encodings (RE) that enable encoding a Turing machine Pi and input x into its ``randomized encoding'' \hat{Pi}(x) in time sublinear, or even polylogarithmic, in the running time of Pi(x), independent of its output length. We refer to the former as sublinear RE and to the latter as compact RE. For such efficient RE, the standard simulation-based notion of security is impossible, and we thus consider a weaker (distributional) indistinguishability-based notion of security: roughly speaking, we require indistinguishability of \hat{Pi}_0(x_0) and \hat{Pi}_1(x_1) as long as Pi_0,x_0 and Pi_1,x_1 are sampled from distributions such that Pi_0(x_0),Time(Pi_0(x_0)) and Pi_1(x_1),Time(Pi_1(x_1)) are indistinguishable.
We first observe that compact RE is equivalent to a variant of the notion of indistinguishability obfuscation (iO)---which we refer to as puncturable iO---for the class of Turing machines without inputs. For the case of circuits, puncturable iO and iO are equivalent (and this fact is implicitly used in the powerful ``punctured program'' paradigm by Sahai and Waters [SW13]).
We next show the following:
- Impossibility in the Plain Model: Assuming the existence of subexponentially secure one-way functions, subexponentially secure sublinear RE does not exist. (If we additionally assume subexponentially secure iO for circuits, we can also rule out polynomially secure sublinear RE.) As a consequence, we also rule out puncturable iO for Turing machines (even those without inputs).
- Feasibility in the CRS model and Applications to iO for circuits: Subexponentially secure sublinear RE in the CRS model and one-way functions imply iO for circuits through a simple construction generalizing GGM's PRF construction. Additionally, any compact (even sublinearly compact) functional encryption essentially directly yields a sublinear RE in the CRS model, and as such we get an alternative, modular, and simpler proof of the results of [AJ15,BV15] showing that subexponentially secure sublinearly compact FE implies iO. We further show other ways of instantiating sublinear RE in the CRS model (and thus also iO): under the subexponential LWE assumption, it suffices to have a subexponentially secure FE scheme with just sublinear ciphertext size (as opposed to sublinear encryption time).
- Applications to iO for Unbounded-input Turing machines: Subexponentially-secure compact RE for natural restricted classes of distributions over programs and inputs (which are not ruled out by our impossibility result, and for which we can give candidate constructions) imply iO for unbounded-input Turing machines. This yields the first construction of iO for unbounded-input Turing machines that does not rely on (public-coin) differing-input obfuscation.
Consolidating masking schemes
In this paper we investigate relations between several masking schemes. We show that the Ishai--Sahai--Wagner private circuits construction is closely related to Threshold Implementations and the Trichina gate. The implications of this observation are manifold. We point out a higher-order weakness in higher-order Threshold Implementations, suggest a mitigation and provide new sharings that use a lower number of input shares.
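A minimal sketch of the kind of gadget these schemes have in common, here a first-order two-share masked AND in the ISW/Trichina style (Python model of a hardware gate, not one of the paper's concrete sharings):

```python
import secrets

def share(x):
    """Split a bit into two Boolean shares whose XOR equals x."""
    r = secrets.randbits(1)
    return r, x ^ r

def masked_and(a, b):
    """First-order masked AND on 2-share inputs a = (a0, a1),
    b = (b0, b1).  Fresh randomness r refreshes the output mask;
    in hardware the parenthesization (evaluation order) of the
    cross terms matters, so r is folded in before a1 & b0."""
    a0, a1 = a
    b0, b1 = b
    r = secrets.randbits(1)
    c0 = (a0 & b0) ^ r
    c1 = (((a0 & b1) ^ r) ^ (a1 & b0)) ^ (a1 & b1)
    # c0 ^ c1 == (a0 ^ a1) & (b0 ^ b1), i.e. the AND of the real bits
    return c0, c1
```

Threshold Implementations, the Trichina gate and ISW private circuits all refine this basic pattern with different non-completeness and refreshing requirements, which is the relationship the paper makes precise.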
Efficient Asynchronous Accumulators for Distributed PKI
Cryptographic accumulators are a tool for compact set representation and secure set membership proofs. When an element is added to a set by means of an accumulator, a membership witness is generated. This witness can later be used to prove the membership of the element. Typically, the membership witness has to be synchronized with the accumulator value: it has to be updated every time another element is added to the accumulator, and it cannot be used with outdated accumulator values. However, in many distributed applications (such as blockchain-based public key infrastructures), requiring strict synchronization is prohibitive. We define low update frequency, which means that a witness only needs to be updated a small number of times, and old-accumulator compatibility, which means that a witness can be used with outdated accumulator values. Finally, we propose an accumulator that achieves both of those properties.
Towards Secure Cryptographic Software Implementation Against Side-Channel Power Analysis Attacks
Side-channel attacks have been a real threat to many critical embedded systems that rely on cryptographic algorithms as their security engine. A commonly used algorithmic countermeasure, random masking, incurs large execution delay and resource overhead. The other countermeasure, operation shuffling or permutation, can mitigate side-channel leakage effectively with minimal overhead. In this paper, we exploit the independence among operations in cryptographic algorithms and randomize their execution order. We design a tool to automatically detect such independence between statements at the source-code level and devise an algorithm for automatic operation shuffling. We test our algorithm on the new SHA-3 standard, Keccak. The results show that the tool implements operation shuffling effectively, reducing side-channel leakage significantly, and can therefore guide automatic secure cryptographic software implementation against differential power analysis attacks.
Linear Cryptanalysis of Reduced-Round SIMECK Variants
SIMECK is a family of three lightweight block ciphers designed by Yang et al. They follow the framework used by Beaulieu et al. of the United States National Security Agency (NSA) to design SIMON and SPECK. A cipher in this family with a K-bit key and an N-bit block is called SIMECK N/K. We show that the security of this block cipher family against linear cryptanalysis is not as good as that of its predecessor SIMON. More precisely, while the best known linear attack on SIMON32/64 using Matsui's algorithm 1 covers 13 rounds, we present a linear attack in this scenario covering 14 rounds of SIMECK32/64. Similarly, using Matsui's algorithm 1, we present attacks on 19 and 22 rounds of SIMECK48/96 and SIMECK64/128 respectively, and compare them with the known attacks on 16 and 19 rounds of SIMON48/96 and SIMON64/128. In addition, we use Matsui's algorithm 2 to attack 18, 23 and 27 rounds of SIMECK32/64, SIMECK48/96 and SIMECK64/128 respectively, and compare them with the known attacks on 18, 19 and 21 rounds of SIMON32/64, SIMON48/96 and SIMON64/128.
New Circular Security Counterexamples from Decision Linear and Learning with Errors
We investigate new constructions of n-circular counterexamples, with a focus on the case n=2. We are particularly interested in what qualities a cryptosystem must have to separate such circular security from IND-CPA or IND-CCA security. To start, we ask whether there is something special about the asymmetry of the bilinear groups inherent in the works of ABBC10 and CGH12, or whether it is the bilinearity itself that matters. We further explore whether such counterexamples can be derived from other assumptions, such as the Learning with Errors (LWE) problem. If such counterexamples were difficult to find, this might bolster our confidence in using 2-circular encryption as a method of bootstrapping Fully Homomorphic Encryption systems based on lattice assumptions.
The results of this paper broadly expand the class of assumptions under which we can build 2-circular counterexamples. We first show for any constant k >= 2 how to build counterexamples from a bilinear group under the decision k-linear assumption. Recall that the decision k-linear assumption becomes progressively weaker as k becomes larger. This means that we can instantiate counterexamples
from symmetric bilinear groups and shows that asymmetric groups do not have any inherently special property needed for this problem.
We then show how to create 2-circular counterexamples from the Learning with Errors problem. This extends the reach of these systems beyond bilinear groups and obfuscation.
New classes of public key cryptosystem K(XVI)SE(1)PKC constructed based on Reed-Solomon code over extension field of m=8 and K(XVI)SE(2)PKC, based on binary cyclic code.
In this paper, we first present a new class of code-based public key cryptosystems (PKC) based on Reed-Solomon codes over extension fields of degree less than 9, referred to as K(XVI)SE(1)PKC.
We then present a new class of quadratic multivariate PKC, K(XVI)SE(2)PKC, based on binary cyclic code.
We show that both K(XVI)SE(1)PKC and K(XVI)SE(2)PKC can be made secure against various linear-transformation attacks, such as the Gröbner basis attack, due to a non-linear structure introduced when constructing the ciphertexts. Namely, thanks to the non-linear transformation introduced in the construction of K(XVI)SE(1)PKC and K(XVI)SE(2)PKC, the ciphertexts can be made very secure against all sorts of linear-transformation attacks, even though every multivariate polynomial used for the public key has degree 1.
The new scheme presented in this paper, K(II)TS, which transforms the message variables in order to realize a non-linear transformation, yields a brand-new technique for substantially improving the security of both code-based PKC and multivariate PKC.
We show that K(XVI)SE(1)PKC can be effectively constructed based on the Reed-Solomon code with m=8, which is extensively used in present-day storage systems and various digital transmission systems.
Last updated: 2016-10-31
Light-hHB: A New Version of hHB with Improved Session Key Exchange
This paper offers a new version of the hHB protocol, denoted Light-hHB. This proposal uses the same framework as hHB, that is, a two-stage protocol: the first stage establishes a session key between the reader and the tag, and the second is similar to HB+. We also introduce a novel and lightweight key exchange protocol inspired by the BB84 protocol, named the non-quantum key exchange protocol. Using a practical implementation of the latter protocol in the first stage of Light-hHB, the transmission cost is drastically reduced compared to that of hHB, which is its main drawback. In the context of RFID tags, Light-hHB is significantly more practical than hHB and achieves the same security goals.
Adaptive Proofs have Straightline Extractors (in the Random Oracle Model)
The concept of adaptive security for proofs of knowledge was recently studied by Bernhard et al. They formalised adaptive security in the ROM and showed that the non-interactive version of the Schnorr protocol obtained via the Fiat-Shamir transformation is not adaptively secure unless the one-more discrete logarithm problem is easy. Their only construction of adaptively secure protocols used the Fischlin transformation [3], which yields protocols with straight-line extractors. In this paper we provide two further key insights. Our main result shows that any adaptively secure protocol must have a straight-line extractor: even the most clever rewinding strategies cannot offer any benefit against adaptive provers.
Then, we show that any Fiat-Shamir transformed SIGMA-protocol is not adaptively secure unless a related problem which we call the SIGMA-one-wayness problem is easy. This assumption concerns not just Schnorr but applies to a whole class of SIGMA-protocols including e.g. Chaum-Pedersen and representation proofs. We also prove that SIGMA-one-wayness is hard in the generic group model. Taken together, these results suggest that Fiat-Shamir transformed SIGMA-protocols should not be used in settings where adaptive security is important.
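For concreteness, a toy sketch of the Fiat-Shamir transformed Schnorr protocol that these results concern (a tiny demo group and hypothetical helper names; real parameters use groups of at least 256-bit order):

```python
import hashlib
import random

# toy group: p = 2q + 1 with q prime; G = 4 generates the order-q
# subgroup of squares mod p
P, Q, G = 2027, 1013, 4

def fs_hash(y, t):
    """Fiat-Shamir challenge c = H(g, y, t) mod q, replacing the
    verifier's random challenge in the interactive Sigma-protocol."""
    data = f"{G},{y},{t}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def schnorr_prove(x):
    """Non-interactive proof of knowledge of x such that y = g^x."""
    r = random.randrange(Q)          # prover's ephemeral randomness
    t = pow(G, r, P)                 # commitment
    c = fs_hash(pow(G, x, P), t)     # hashed challenge
    s = (r + c * x) % Q              # response
    return t, s

def schnorr_verify(y, proof):
    """Accept iff g^s == t * y^c (mod p) for the recomputed c."""
    t, s = proof
    return pow(G, s, P) == t * pow(y, fs_hash(y, t), P) % P
```

The extractor question studied in the paper is about this exact shape: recovering x from a prover here classically requires rewinding to obtain two responses for distinct challenges, which is what adaptive security rules out.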
Construction of Lightweight S-Boxes using Feistel and MISTY structures (Full Version)
The aim of this work is to find large S-Boxes, typically operating on 8
bits, having both good cryptographic properties and a low implementation
cost. Such S-Boxes are suitable building-blocks in many lightweight
block ciphers since they may achieve a better security level than
designs based directly on smaller S-Boxes. We focus on S-Boxes
corresponding to three rounds of a balanced Feistel and of a balanced
MISTY structure, and generalize the recent results by Li and Wang on the
best differential uniformity and linearity offered by such a
construction. Most notably, we prove that Feistel networks supersede
MISTY networks for the construction of 8-bit permutations. Based on
these results, we also provide a particular instantiation of an 8-bit
permutation with better properties than the S-Boxes used in several
ciphers, including Robin, Fantomas or CRYPTON.
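The construction under study can be sketched as follows; the round-function tables here are arbitrary placeholders rather than the paper's optimized choices, but any choice of 4-bit round functions yields an 8-bit permutation:

```python
def feistel3_sbox(f1, f2, f3):
    """Build an 8-bit S-box from three rounds of a balanced Feistel
    network over two 4-bit branches.  f1, f2, f3 are 4-bit round
    functions given as 16-entry lookup tables."""
    sbox = []
    for x in range(256):
        l, r = x >> 4, x & 0xF
        for f in (f1, f2, f3):
            l, r = r, l ^ f[r]       # one balanced Feistel round
        sbox.append((l << 4) | r)
    return sbox
```

The cryptographic work in the paper is choosing f1, f2, f3 so that the resulting table attains the best achievable differential uniformity and linearity; the structural guarantee shown here is only that the table is a permutation.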
Privacy-Preserving Content-Based Image Retrieval in the Cloud (Extended Version)
Storage requirements for visual data have been increasing in recent years, following the emergence of many new highly interactive multimedia services and applications for both personal and corporate use. This has been a key driving factor for the adoption of cloud-based data outsourcing solutions. However, outsourcing data storage to the Cloud also leads to new challenges that must be carefully addressed, especially regarding privacy. In this paper we propose a secure framework for outsourced privacy-preserving storage and retrieval in large image repositories. Our proposal is based on IES-CBIR, a novel Image Encryption Scheme that displays Content-Based Image Retrieval properties. Our solution enables both encrypted storage and searching using CBIR queries while preserving privacy. We have built a prototype of the proposed framework, formally analyzed and proven its security properties, and experimentally evaluated its performance and precision. Our results show that IES-CBIR is provably secure, allows more efficient operations than existing proposals, both in terms of time and space complexity, and enables more reliable practical application scenarios.
Detecting Mobile Application Spoofing Attacks by Leveraging User Visual Similarity Perception
Mobile application spoofing is an attack where a malicious mobile application
mimics the visual appearance of another one. If such an attack is successful,
the integrity of what the user sees as well as the confidentiality of what she
inputs into the system can be violated by the adversary. A common example of
mobile application spoofing is a phishing attack where the adversary tricks the
user into revealing her password to a malicious application that resembles the
legitimate one.
In this work, we propose a novel approach for addressing mobile application
spoofing attacks by leveraging the visual similarity of application screens. We
use deception rate as a novel metric for measuring how many users would confuse
a spoofing application for the genuine one. We conducted a large-scale online
study where participants evaluated spoofing samples of popular mobile
applications. We used the study results to design and implement a prototype
spoofing detection system, tailored to the estimation of deception rate for
mobile application login screens.
Choosing Parameters for NTRUEncrypt
We describe a method for generating parameter sets and calculating security estimates for NTRUEncrypt. Analyses are provided for the standardized product-form parameter sets from IEEE 1363.1-2008 and for the NTRU Challenge parameter sets.
Reconciling User Privacy and Implicit Authentication for Mobile Devices
In an implicit authentication system, a user profile is used as an additional factor to strengthen the authentication of mobile users. The profile consists of features that are constructed using the history of user actions on her mobile device over time. The profile is stored on the server and is used to authenticate an access request originated from the device at a later time. An access request will include a vector of recent measurements of the features on the device, that will be subsequently matched against the features stored at the server, to accept or reject the request. The features however include private information such as user location or web sites that have been visited. We propose a privacy-preserving implicit authentication system that achieves implicit authentication without revealing information about the usage profiles of the users to the server. We propose an architecture, give a formal security model and a construction with provable security in two settings where: (i) the device follows the protocol, and (ii) the device is captured and behaves maliciously.
A Brief Comparison of Simon and Simeck
Simeck is a new lightweight block cipher design based on combining the design principles of the Simon and Speck block ciphers. While the design allows a smaller and more efficient hardware implementation, its security margins are not well understood. The lack of design rationale for its predecessors leaves further uncertainty about the security of Simeck.
In this work we give a short analysis of the impact of the design changes by comparing the upper bounds on the probability of differential and linear trails with those of Simon. We also compare the effort of finding those bounds, which, surprisingly, is significantly lower for Simeck while covering a larger number of rounds at the same time.
Furthermore, we provide new differentials for Simeck that cover more rounds than previous results on Simon, study how to choose good differentials for attacks, and show that one can find better differentials by building them from a larger set of trails with initially lower probability.
We also provide experimental results for the differentials of Simon32 and Simeck32 which show that there exist keys for which the probability of the differential is significantly higher than expected.
Based on this we mount key-recovery attacks on 19/26/33 rounds of Simeck32/48/64, which also give insights into the reduced key-guessing effort due to the different set of rotation constants.
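For reference, the round functions being compared differ only in their rotation constants; a sketch with key schedules omitted, assuming Simeck32's 16-bit words:

```python
def rotl(x, r, w=16):
    """Rotate a w-bit word left by r bits."""
    return ((x << r) | (x >> (w - r))) & ((1 << w) - 1)

def simeck_round(l, r, k, w=16):
    """One Simeck round: the Simon-style AND-RX function, but with
    rotation constants (0, 5, 1) instead of Simon's (1, 8, 2)."""
    f = (l & rotl(l, 5, w)) ^ rotl(l, 1, w)
    return r ^ f ^ k, l

def simon_round(l, r, k, w=16):
    """One Simon round with rotation constants (1, 8, 2), for comparison."""
    f = (rotl(l, 1, w) & rotl(l, 8, w)) ^ rotl(l, 2, w)
    return r ^ f ^ k, l
```

The differential and linear trail bounds discussed above are driven entirely by how these rotation constants diffuse differences through the AND gate across rounds.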
Linear Overhead Optimally-resilient Robust MPC Using Preprocessing
We present a new technique for robust secret reconstruction with $O(n)$ communication complexity. By applying this technique, we achieve $O(n)$ communication complexity per multiplication for a wide class of robust practical Multi-Party Computation (MPC) protocols. In particular, our technique applies to robust threshold computationally secure protocols in the case of $t<n/2$ in the pre-processing model. Previously in the pre-processing model, $O(n)$ communication complexity per multiplication was only known for computationally secure non-robust protocols in the dishonest majority setting (i.e. with $t<n$) and for perfectly secure robust protocols with $t<n/3$. A similar protocol was sketched by Damgård and Nielsen, but no details were given that would enable an estimate of its communication complexity. Surprisingly, our robust reconstruction protocol applies in both the synchronous and asynchronous settings.
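The non-robust core of threshold secret reconstruction is standard Shamir sharing over a prime field; a toy sketch (the paper's contribution, reconstructing correctly with $O(n)$ communication even when up to $t$ shares are corrupted, is not shown):

```python
import random

def shamir_share(secret, t, n, p):
    """Split secret into n shares; any t+1 of them reconstruct it
    (random degree-t polynomial over F_p with constant term secret)."""
    coeffs = [secret] + [random.randrange(p) for _ in range(t)]
    return [(x, sum(c * pow(x, k, p) for k, c in enumerate(coeffs)) % p)
            for x in range(1, n + 1)]

def reconstruct(shares, p):
    """Lagrange interpolation of the shared polynomial at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num = den = 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * xj % p
                den = den * (xj - xi) % p
        secret = (secret + yi * num * pow(den, -1, p)) % p
    return secret
```

Naive robust reconstruction would have every party broadcast its share for error correction, costing $O(n^2)$ communication overall, which is exactly the overhead the paper's technique removes.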
Indistinguishability Obfuscation: from Approximate to Exact
We show general transformations from subexponentially-secure approximate indistinguishability obfuscation (IO) where the obfuscated circuit agrees with the original circuit on a $1/2+\epsilon$ fraction of inputs, into exact indistinguishability obfuscation where the
obfuscated circuit and the original circuit agree on all inputs (except for a negligible probability over the coin tosses of the obfuscator). As a step towards our results, which is of independent interest, we also obtain an approximate-to-exact transformation for functional encryption. At the core of our techniques is a method for ``fooling'' the obfuscator into giving us the correct answer, while preserving the indistinguishability-based security. This is achieved based on various types of secure computation protocols that can be obtained from different standard assumptions.
Put together with the recent results of Canetti, Kalai and Paneth (TCC 2015), Pass and Shelat (Eprint 2015), and Mahmoody, Mohammed and Nemathaji (Eprint 2015), we show how to convert indistinguishability obfuscation schemes in various ideal models into exact obfuscation schemes in the plain model.
Point-Function Obfuscation: A Framework and Generic Constructions
We give a definitional framework for point-function obfuscation in which security is parameterized by a class of algorithms we call target generators. Existing and new notions are captured and explained as corresponding to different choices of this class. This leads to an elegant question: Is it possible to provide a generic construction, meaning one that takes an arbitrary class of target generators and returns a point-function obfuscator secure for it? We answer this in the affirmative with three generic constructions, the first based on indistinguishability obfuscation, the second on deterministic public-key encryption and the third on universal computational extractors. By exploiting known constructions of the primitives assumed, we obtain new point-function obfuscators, including many under standard assumptions. We end with a broader look that relates different known and possible notions of point function obfuscation to each other and to ours.
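One well-known style of point-function obfuscator, sketched here under a random-oracle-like assumption on the hash function (this is an illustration of the primitive being defined, not one of the paper's three generic constructions):

```python
import hashlib
import secrets

def obfuscate_point(target: bytes):
    """Obfuscate the point function I_target (1 on target, 0 elsewhere)
    by publishing only a salted hash of the target."""
    salt = secrets.token_bytes(16)
    digest = hashlib.sha256(salt + target).digest()
    return salt, digest

def evaluate(obf, x: bytes) -> bool:
    """Evaluate the obfuscated point function on input x."""
    salt, digest = obf
    return hashlib.sha256(salt + x).digest() == digest
```

The definitional subtlety the paper parameterizes is visible even here: whether (salt, digest) hides the target depends on the distribution the target is drawn from, i.e., on the class of target generators.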
Demystifying incentives in the consensus computer
Cryptocurrencies like Bitcoin and the more recent Ethereum system allow users to specify scripts in transactions and contracts to support applications beyond simple cash transactions. In this work, we analyze the extent to which these systems can enforce the correct semantics of scripts. We show that when a script execution requires nontrivial computation effort, practical attacks exist which either waste miners' computational resources or lead miners to accept incorrect script results. These attacks drive miners to an ill-fated choice, which we call the {\em verifier's dilemma}, whereby rational miners are well-incentivized to accept unvalidated blockchains. We call the framework of computation through a scriptable cryptocurrency a consensus computer and develop a model that captures incentives for verifying computation in it. We propose a resolution to the verifier's dilemma which incentivizes correct execution of certain applications, including outsourced computation, where scripts require minimal time to verify. Finally, we discuss two distinct, practical implementations of our consensus computer in real cryptocurrency networks like Ethereum.
Differentially private instance-based noise mechanisms in practice
Differential privacy is a widely used privacy model today, whose privacy guarantees are obtained at the price of a random perturbation of the result. In some situations, basic differentially private mechanisms may add too much noise to reach a reasonable level of privacy. To address this shortcoming, several works have provided more technically involved mechanisms, using a new paradigm of differentially private mechanisms called instance-based noise mechanisms.
In this paper, we exhibit for the first time theoretical conditions for an instance-based noise mechanism to be (epsilon, delta) differentially private. We exploit the simplicity of these conditions to design a novel instance-based noise differentially private mechanism. Conducting experimental evaluations, we show that our mechanism compares favorably to existing instance-based noise mechanisms, regarding either time complexity or accuracy of the sanitized result. In contrast with some prior works, our algorithms do not involve the computation of all local sensitivities, a computational task which was proved to be NP-hard in some cases, namely for statistic queries on graphs.
Our framework is as general as possible and can be used to answer any query, in contrast with recent designs of instance-based noise mechanisms where only graph statistics queries are considered.
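For background, the basic global-sensitivity mechanism that instance-based noise mechanisms aim to improve on is the standard Laplace mechanism. A minimal Python sketch (function names ours; this is the textbook baseline, not the paper's instance-based mechanism):

```python
import math
import random

def laplace_noise(scale):
    """Sample from Laplace(0, scale) via the inverse-CDF method."""
    u = random.random() - 0.5
    return -scale * math.copysign(math.log(1.0 - 2.0 * abs(u)), u)

def laplace_mechanism(true_answer, sensitivity, epsilon):
    """Standard epsilon-DP Laplace mechanism: perturb the exact query
    answer with noise calibrated to the query's *global* sensitivity."""
    return true_answer + laplace_noise(sensitivity / epsilon)
```

When the global sensitivity is large (as for many graph statistics), the noise scale `sensitivity / epsilon` drowns out the answer, which is exactly the situation instance-based noise mechanisms address.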
Four Neighbourhood Cellular Automata as Better Cryptographic Primitives
Three-neighbourhood Cellular Automata (CA) are widely studied and accepted as suitable cryptographic primitives. Rule 30, a 3-neighbourhood CA rule, was proposed by Wolfram as an ideal candidate for a cryptographic primitive. However, rule 30 was shown to be weak against the Meier-Staffelbach attack. Cryptographic properties like diffusion and randomness increase with the neighbourhood radius, which opens the avenue of exploring the cryptographic properties of 4-neighbourhood CA. This work explores whether four-neighbourhood CA can be better cryptographic primitives. We construct a class of cryptographically suitable 4-neighbourhood nonlinear CA rules that resemble rule 30. One 4-neighbourhood nonlinear CA from this class is shown to be resistant against the Meier-Staffelbach attack on rule 30, justifying the applicability of 4-neighbourhood CA as better cryptographic primitives.
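For concreteness, rule 30 maps each neighbourhood (left, center, right) to left XOR (center OR right). A minimal sketch of one synchronous update step, assuming a periodic boundary (function name ours):

```python
def rule30_step(state):
    """One synchronous update of an elementary rule-30 CA.
    Each cell becomes left XOR (center OR right); periodic boundary."""
    n = len(state)
    return [state[(i - 1) % n] ^ (state[i] | state[(i + 1) % n])
            for i in range(n)]

# A single 1 widens into the characteristic asymmetric rule-30 pattern:
print(rule30_step([0, 0, 0, 1, 0, 0, 0]))  # → [0, 0, 1, 1, 1, 0, 0]
```

A 4-neighbourhood rule of the kind the paper constructs would take one extra neighbour as input, giving a truth table over 16 rather than 8 neighbourhood patterns.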
FURISC: FHE Encrypted URISC Design
This paper proposes the design of a Fully Homomorphic Ultimate RISC (FURISC) based processor. The FURISC architecture supports arbitrary operations on data encrypted with Fully Homomorphic Encryption (FHE) and allows the execution of encrypted programs stored in processors with encrypted memory addresses. The FURISC architecture is designed around fully homomorphic single RISC instructions like {\em Subtract Branch if Negative} (SBN) and {\em MOVE}. This paper explains how the use of FHE for designing the ultimate RISC processor improves security compared to previously proposed somewhat homomorphic encryption (SHE) based processors. The absence of randomization in SHE can lead to Chosen Plaintext Attacks (CPA), which is alleviated by the use of the FHE based Ultimate RISC instruction. Furthermore, FURISC helps to develop fully homomorphic applications by tackling the {\em termination} problem, a major obstacle for FHE processor design. The paper compares the MOVE based FHE RISC processor with the SBN alternative, and shows that the latter is more efficient in terms of the number of instructions and the time required for the execution of a program. Finally, an SBN based FURISC processor simulator has been designed to demonstrate that various algorithms can indeed be executed on data encrypted with FHE, providing a solution to the termination problem for FHE based processors and the CPA insecurity of SHE processors simultaneously.
Chosen IV Cryptanalysis on Reduced Round ChaCha and Salsa
Recently, ChaCha20 (the stream cipher ChaCha with 20 rounds) has been in the process of becoming a standard, and it therefore attracts serious interest in cryptanalysis. The most significant effort to analyse Salsa and ChaCha was presented by Aumasson et al. long back (FSE 2008), and since then only minor improvements have been achieved. In this paper, we first revisit the work of Aumasson et al. to provide a clearer insight into the existing attack (2^{248} complexity for ChaCha7, i.e., 7 rounds) and show certain improvements (complexity around 2^{243}) by exploiting additional Probabilistic Neutral Bits. More importantly, we describe a novel idea that explores proper choices of IVs corresponding to the keys, for which the complexity can be improved further (2^{239}). The choice of IVs corresponding to the keys is the prime observation of this work. We systematically show how a single difference propagates after one round and how the differences can be reduced with proper choices of IVs. For Salsa too (Salsa20/8, i.e., 8 rounds), we improve the complexity, reducing it to 2^{245.5} from the 2^{247.2} reported by Aumasson et al.
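For reference, each ChaCha round applies the quarter-round below to four words of the 16-word state (column positions in odd rounds, diagonals in even rounds); a round-reduced variant like ChaCha7 simply stops after 7 such rounds. A Python sketch of the quarter-round as defined in the ChaCha specification:

```python
def rotl32(x, n):
    """Rotate a 32-bit word left by n bits."""
    return ((x << n) | (x >> (32 - n))) & 0xffffffff

def quarter_round(a, b, c, d):
    """The ChaCha quarter-round: four add-rotate-xor steps with
    rotation amounts 16, 12, 8, 7."""
    a = (a + b) & 0xffffffff; d = rotl32(d ^ a, 16)
    c = (c + d) & 0xffffffff; b = rotl32(b ^ c, 12)
    a = (a + b) & 0xffffffff; d = rotl32(d ^ a, 8)
    c = (c + d) & 0xffffffff; b = rotl32(b ^ c, 7)
    return a, b, c, d
```

On the RFC 7539 test inputs (0x11111111, 0x01020304, 0x9b8d6f43, 0x01234567) this yields (0xea2a92f4, 0xcb1cf8ce, 0x4581472e, 0x5881c4bb). The single-bit IV differences studied in the attack are injected into the four IV words of the initial state before these rounds run.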
On the Security of a Self-healing Group Key Distribution Scheme
Recently, in the Journal of Security and Communication Networks (5(12):1363-1374, DOI: 10.1002/sec.429), Wang et al. proposed a group key distribution scheme with the self-healing property for resource-constrained wireless networks. They claimed that their key distribution scheme satisfies forward security and backward security and can resist collusion attacks. Unfortunately, we found some security flaws in their scheme. In this paper, we present a method to attack this scheme. The attack illustrates that the scheme does not satisfy forward security, which also directly breaks its collusion resistance capability.
Novel algorithms and hardware architectures for Montgomery Multiplication over GF(p)
This report describes the design and FPGA implementation results of a scalable hardware architecture for computing modular multiplication in prime fields GF($p$), based on the Montgomery multiplication (MM) algorithm. Starting from an existing digit-serial version of the MM algorithm, a novel {\it digit-digit} based MM algorithm is derived and two hardware architectures that compute that algorithm are described. In the proposed approach, the input operands (multiplicand, multiplier and modulus) are represented using the radix $\beta = 2^k$. Operands of arbitrary size can be multiplied with modular reduction using almost the same hardware, since the multiplier's kernel module that performs the modular multiplication depends only on $k$. The novel hardware architectures proposed in this paper were verified by modeling them in VHDL and implementing them on the Xilinx Spartan and Virtex5 FPGAs. Design trade-offs are analyzed considering different operand sizes commonly used in cryptography and different values for $k$. The proposed designs for MM are well suited to modern FPGAs, making use of the available dedicated multiplier and memory blocks and drastically reducing the use of the FPGA's standard logic while keeping acceptable performance compared with other implementation approaches. On the Virtex5, the proposed MM multiplier reaches a throughput of 242 Mbps using only 219 FPGA slices, computing a 1024-bit modular multiplication in 4.21 $\mu$s.
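The algorithmic core is Montgomery multiplication, which replaces trial division by cheap shifts. A textbook radix-2 software sketch (function name ours; the paper's digit-digit hardware algorithm generalizes this loop to radix $\beta = 2^k$):

```python
def mont_mul(a, b, m, n):
    """Radix-2 Montgomery multiplication: returns a*b*2^(-n) mod m,
    for odd modulus m < 2^n and 0 <= a, b < m."""
    r = 0
    for i in range(n):
        r += ((a >> i) & 1) * b   # add b if bit i of a is set
        if r & 1:                 # make r even by adding the odd modulus m
            r += m
        r >>= 1                   # exact division by 2 (one shift per bit)
    return r - m if r >= m else r
```

A plain modular product is recovered by a second Montgomery multiplication with $R^2 \bmod m$ (here $R = 2^n$), which is how operands are moved in and out of the Montgomery domain.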
Cliptography: Clipping the Power of Kleptographic Attacks
Kleptography, introduced 20 years ago by Young and Yung [Crypto ’96], considers the (in)security of malicious implementations (or instantiations) of standard cryptographic primitives that embed a “backdoor” into the system. Remarkably, crippling subliminal attacks are possible even if the subverted cryptosystem produces output indistinguishable from a truly secure “reference implementation.” Bellare, Paterson, and Rogaway [Crypto ’14] recently initiated a formal study of such attacks on symmetric key encryption algorithms, demonstrating that a kleptographic attack can be mounted in broad generality against randomized components of cryptographic systems.
We enlarge the scope of current work on the problem by permitting adversarial subversion of (randomized) key generation; in particular, we initiate the study of cryptography in the complete subversion model, where all relevant cryptographic primitives are subject to kleptographic attacks. We construct secure one-way permutations and trapdoor one-way permutations in this “complete subversion” model, describing a general, rigorous immunization strategy to clip the power of kleptographic subversions. Our strategy can be viewed as a formal treatment of the folklore “nothing up my sleeve” wisdom in cryptographic practice. We also describe a related “split program” model that can directly inform practical deployment. We additionally apply our general immunization strategy to directly yield a backdoor-free PRG. This notably amplifies previous results of Dodis, Ganesh, Golovnev, Juels, and Ristenpart [Eurocrypt ’15], which require an honestly generated random key.
We then examine two standard applications of (trapdoor) one-way permutations in this complete subversion model and construct “higher level” primitives via black-box reductions. We showcase a digital signature scheme that preserves existential unforgeability when all algorithms (including key generation, which was not considered to be under attack before) are subject to kleptographic attacks. Additionally, we demonstrate that the classic Blum–Micali pseudorandom generator (PRG), using an “immunized” one-way permutation, yields a backdoor-free PRG.
Alongside development of these secure primitives, we set down a hierarchy of kleptographic attack models which we use to organize past results and our new contributions; this taxonomy may be valuable for future work.
On the Complexity of Additively Homomorphic UC Commitments
We present a new constant-round additively homomorphic commitment scheme with (amortized) computational and communication complexity linear in the size of the string committed to. Our scheme is based on the non-homomorphic commitment scheme of Cascudo \emph{et al.} presented at PKC 2015. However, we manage to add the additive homomorphic property while at the same time reducing the constants. In fact, when opening a large enough batch of commitments we achieve an amortized communication complexity converging to the length of the message committed to, i.e., we achieve close to rate 1, as does the commitment protocol by Garay \emph{et al.} from Eurocrypt 2014. As a main technical improvement over the scheme mentioned above, and over other schemes based on using error correcting codes for UC commitment, we develop a new technique which allows basing the extraction property on erasure decoding as opposed to error correction. This allows us to use a code with significantly smaller minimum distance, and to use codes without efficient decoding.
Our scheme only relies on standard assumptions. Specifically we require a pseudorandom number generator, a linear error correcting code and an ideal oblivious transfer functionality. Based on this we prove our scheme secure in the Universal Composability (UC) framework against a static and malicious adversary corrupting any number of parties.
On a practical note, our scheme improves significantly on the non-homomorphic scheme of Cascudo \emph{et al.} Based on their observations regarding the efficiency of using linear error correcting codes for commitments, we conjecture that our commitment scheme might in practice be more efficient than all existing constructions of UC commitment, even non-homomorphic constructions and even constructions in the random oracle model. In particular, the amortized price of computing one of our commitments is less than that of evaluating a hash function once.
Foundations of Reactive Garbling Schemes
Garbled circuits are a cryptographic technique which has been used, among other things, for the construction of two- and three-party secure computation, private function evaluation and secure outsourcing. Garbling schemes are a primitive which formalizes the syntax and security properties of garbled circuits. We define a generalization of garbling schemes called \emph{reactive garbling schemes}. We consider functions and garbled functions taking multiple inputs and giving multiple outputs. Two garbled functions can be linked together: an encoded output of one garbled function can be transformed into an encoded input of the other garbled function without communication between the parties. Reactive garbling schemes also allow partial evaluation of garbled functions even when only some of the encoded inputs are provided. It is possible to further evaluate the linked garbled functions when more garbled inputs become available. It is also possible to later garble more functions and link them to the ongoing garbled evaluation. We provide rigorous definitions for reactive garbling schemes. We define a new notion of security for reactive garbling schemes called confidentiality. We provide both simulation-based and indistinguishability-based notions of security, and we show that the simulation-based notion implies the indistinguishability-based one. We present an instantiation of reactive garbling schemes, and an application of reactive garbling schemes to reactive two-party computation secure against a malicious adversary. We demonstrate how garbling schemes can be used to give abstract black-box descriptions and proofs of several advanced applications of garbled circuits in the literature, including MiniLEGO and Lindell's forge-and-loose technique.
Fast and Secure Linear Regression and Biometric Authentication with Security Update
We explicitly present a homomorphic encryption scheme with a flexible encoding of plaintexts. We prove its security under the LWE assumption, and innovatively show how the scheme can be used to handle computations over both binary strings and real numbers. In addition, using the scheme and its features, we build fast and secure systems of
- linear regression using gradient descent, namely finding a reasonable linear relation between data items which remain encrypted. Compared to the best previous work over a simulated dataset of $10^8$ records each with 20 features, our system dramatically reduces the server running time from about 8.75 hours (of the previous work) to only about 10 minutes.
- biometric authentication, in which we show how to reduce ciphertext sizes by half and to do the computation at the server very fast, compared with the state-of-the-art.
Moreover, as key rotation is a vital task in practice and is recommended by many authorized organizations for key management,
- we show how to do key rotation over encrypted data, without any decryption involved, and yet homomorphic properties of ciphertexts remain unchanged. In addition, our method of doing key rotation handles keys of different security levels (e.g., 80- and 128-bit securities), so that the security of ciphertexts and keys in our scheme can be "updated", namely can be changed into a higher security level.
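The plaintext computation evaluated homomorphically in the first application is ordinary least-squares fitting by gradient descent. A minimal unencrypted sketch for reference (function name and parameters ours; the paper performs these same iterations over LWE ciphertexts):

```python
def linreg_gd(X, y, lr=0.01, iters=1000):
    """Least-squares linear regression by batch gradient descent.
    X: list of feature rows, y: list of targets; returns weight vector."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(iters):
        grad = [0.0] * d
        for xi, yi in zip(X, y):
            err = sum(wj * xj for wj, xj in zip(w, xi)) - yi
            for j in range(d):
                grad[j] += err * xi[j]
        for j in range(d):
            w[j] -= lr * grad[j] / n  # average-gradient step
    return w
```

Each iteration needs only additions and multiplications of the (encrypted) data, which is what makes gradient descent a natural fit for homomorphic evaluation.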
SpecTre: A Tiny Side-Channel Resistant Speck Core for FPGAs
Emerging applications such as the Internet of Things require security solutions that are small and low cost, yet feature solid protection against a wide range of sophisticated attacks. Lightweight cryptographic schemes such as the Speck cipher that was recently proposed by the NSA aim to solve some of these challenges. However, before using Speck in any practical application, sound protection against side-channel attacks must be in place. In this work, we propose a bit-serialized implementation of Speck, to achieve minimal area footprint. We further propose a Speck core that is provably secure against first-order side-channel attacks using a threshold implementation technique which depends on secure multiparty computation. The resulting design is a tiny crypto core that provides AES-like security in under 45 slices on a low-cost Xilinx Spartan 3 FPGA. The first-order side-channel resistant version of the same core needs less than 100 slices. The security of the protected core is validated by state-of-the-art side-channel leakage detection tests.
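For reference, the Speck round function being bit-serialized is extremely simple: one modular addition, two rotations, and two XORs per round. A sketch for 64-bit words, using the rotation amounts 8 and 3 of the larger Speck variants (key schedule omitted; a concrete FPGA core would of course serialize these word operations bit by bit):

```python
MASK = 2**64 - 1

def ror(x, r): return ((x >> r) | (x << (64 - r))) & MASK
def rol(x, r): return ((x << r) | (x >> (64 - r))) & MASK

def speck_round(x, y, k):
    """One Speck round on a 64-bit word pair (x, y) with round key k."""
    x = ((ror(x, 8) + y) & MASK) ^ k
    y = rol(y, 3) ^ x
    return x, y

def speck_round_inv(x, y, k):
    """Inverse round, as used for decryption."""
    y = ror(y ^ x, 3)
    x = rol(((x ^ k) - y) & MASK, 8)
    return x, y
```

The small, uniform round structure is what makes both the bit-serial implementation and the threshold-implementation masking of the single nonlinear operation (the addition) tractable.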
Systematic Reverse Engineering of Cache Slice Selection in Intel Processors
Dividing last level caches into slices is a popular method to prevent memory accesses from becoming a bottleneck on modern multicore processors. In order to assess and understand the benefits of cache slicing in detail, precise knowledge of implementation details, such as the slice selection algorithm, is of high importance.
However, slice selection methods are mostly unstudied, and processor manufacturers choose not to publish their designs, nor their design rationale.
In this paper, we present a tool that allows recovering the slice selection algorithm for Intel processors. The tool uses cache access information to derive equations that allow the reconstruction of the applied slice selection algorithm. Thereby, the tool provides essential information for performing last level cache attacks and enables further exploration of the behavior of modern caches.
The tool is successfully applied to a range of Intel CPUs with different slices and architectures. Results show that slice selection algorithms have become more complex over time by involving an increasing number of bits of the physical address. We also demonstrate that among the most recent processors, the slice selection algorithm depends on the number of CPU cores rather than the processor model.
Counting Keys in Parallel After a Side Channel Attack
Side channels provide additional information to skilled adversaries that reduce the effort to determine an unknown key. If sufficient side channel information is available, identification of the secret key can even become trivial. However, if not enough side information is available, some effort is still required to find the key in the key space (which now has reduced entropy). To understand the security implications of side channel attacks it is then crucial to evaluate this remaining effort in a meaningful manner. Quantifying this effort can be done by looking at two key questions: first, how `deep' (at most) is the unknown key in the remaining key space, and second, how `expensive' is it to enumerate keys up to a certain depth?
We provide results for these two challenges. Firstly, we show how to construct an extremely efficient algorithm that accurately computes the rank of a (known) key in the list of all keys, when ordered according to some side channel attack scores. Secondly, we show how our approach can be tweaked such that it can be also utilised to enumerate the most likely keys in a parallel fashion. We are hence the first to demonstrate that a smart and parallel key enumeration algorithm exists.
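The rank-computation idea can be illustrated on integer-valued side-channel scores: convolving the per-subkey score histograms yields, in one pass, the number of full keys scoring strictly higher than the real key. A toy sketch of this principle (names ours; the authors' algorithm is a far more efficient and scalable realization):

```python
from collections import Counter

def key_rank(score_lists, key_indices):
    """Rank of the known key among all subkey combinations.
    score_lists[j] holds integer scores for subkey j's candidates;
    key_indices[j] is the correct candidate's index in that list."""
    key_score = sum(scores[i] for scores, i in zip(score_lists, key_indices))
    hist = Counter({0: 1})                 # histogram of total scores
    for scores in score_lists:             # convolve in one subkey at a time
        new = Counter()
        for total, count in hist.items():
            for s in scores:
                new[total + s] += count
        hist = new
    # number of full keys with a strictly higher total score
    return sum(c for s, c in hist.items() if s > key_score)
```

Because the histograms for different subkeys are independent, the convolutions (and likewise enumeration of the highest-scoring keys) can be distributed across workers, which is the parallelism the abstract refers to.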
Binary Field Multiplication on ARMv8
In this paper, we show efficient implementations of binary field multiplication over ARMv8.
We exploit an advanced 64-bit polynomial multiplication (\texttt{PMULL}) supported by ARMv8
and conduct multiple levels of asymptotically faster Karatsuba multiplication.
Finally, our method conducts binary field multiplication within 57 clock cycles for B-251.
Our proposed method on ARMv8 improves performance by a factor of $5.5$ over previous techniques on ARMv7.
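The two ingredients can be sketched in software: carry-less (polynomial) multiplication, which is the operation the 64-bit `PMULL` instruction computes in one step, and one Karatsuba level, which builds a double-width product from three half-width ones (function names ours):

```python
def clmul(a, b):
    """Carry-less multiplication: product of a and b as GF(2) polynomials.
    This is what PMULL computes in hardware for 64-bit inputs."""
    r = 0
    while b:
        if b & 1:
            r ^= a       # XOR replaces addition: no carries
        a <<= 1
        b >>= 1
    return r

def karatsuba_clmul(a, b, w):
    """One Karatsuba level over GF(2): multiply 2w-bit operands using
    three w-bit carry-less multiplications (subtraction is XOR)."""
    mask = (1 << w) - 1
    a0, a1 = a & mask, a >> w
    b0, b1 = b & mask, b >> w
    lo = clmul(a0, b0)
    hi = clmul(a1, b1)
    mid = clmul(a0 ^ a1, b0 ^ b1) ^ lo ^ hi
    return (hi << (2 * w)) ^ (mid << w) ^ lo
```

Applying such levels recursively over 64-bit `PMULL` limbs is what yields the asymptotically faster multi-level Karatsuba multiplication for a 251-bit field element.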
Classical Cryptographic Protocols in a Quantum World
Cryptographic protocols, such as protocols for secure function evaluation (SFE), have played a crucial role in the development of modern cryptography. The extensive theory of these protocols, however, deals almost exclusively with classical attackers. If we accept that quantum information processing is the most realistic model of physically feasible computation, then we must ask: what classical protocols remain secure against quantum attackers?
Our main contribution is showing the existence of classical two-party protocols for the secure evaluation of any polynomial-time function under reasonable computational assumptions (for example, it suffices that the learning with errors problem be hard for quantum polynomial time). Our result shows that the basic two-party feasibility picture from classical cryptography remains unchanged in a quantum world.
Cryptanalysis for Secure and Efficient Smart-Card-Based Remote User Authentication Scheme for Multi-server Environment
Multi-server authentication is going to be an integral part of remote authentication with the passage of time. Remote authentication has been part and parcel of internet-based communication. In the last decade, several multi-server authentication techniques have been presented. However, there is still a need for more efficient and robust techniques. Lately, Saraswathi et al. presented a multi-server authentication scheme that has been found to suffer from several vulnerabilities, such as the stolen-card attack, the misrepresentation attack, and forward secrecy attacks. This paper presents the cryptanalysis of the Saraswathi et al. scheme and provides a review analysis.
On the discrete logarithm problem in finite fields of fixed characteristic
For $q$ a prime power, the discrete logarithm problem (DLP) in $\mathbb{F}_{q}^{\times}$ consists in finding, for any $g \in \mathbb{F}_{q}^{\times}$ and $h \in \langle g \rangle$, an integer $x$ such that $g^x = h$. For each prime $p$ we exhibit infinitely many extension fields $\mathbb{F}_{p^n}$ for which the DLP in $\mathbb{F}_{p^n}^{\times}$ can be solved in expected quasi-polynomial time.
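For context, the generic square-root baseline that the quasi-polynomial result dramatically improves on is the baby-step giant-step algorithm, which solves the DLP in any group in $O(\sqrt{q})$ time and memory. A minimal sketch for $\mathbb{F}_p^{\times}$ (function name ours; `pow(g, -m, p)` for modular inversion needs Python 3.8+):

```python
import math

def bsgs(g, h, p):
    """Baby-step giant-step: find x with g^x = h (mod p), or None.
    Writes x = i*m + j and matches g^j against h * g^(-m*i)."""
    m = math.isqrt(p - 1) + 1
    baby = {pow(g, j, p): j for j in range(m)}   # baby steps g^j
    factor = pow(g, -m, p)                       # g^(-m) mod p
    gamma = h
    for i in range(m):                           # giant steps
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * factor % p
    return None
```

The interest of the abstract's result is precisely that, for the exhibited extension fields $\mathbb{F}_{p^n}$, the DLP falls in expected quasi-polynomial time rather than the exponential $O(\sqrt{q})$ of generic methods like this one.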
A One-time Stegosystem and Applications to Efficient Covert Communication
We present the first information-theoretic steganographic protocol with an asymptotically optimal ratio of key length to message length that operates on arbitrary covertext distributions with constant min-entropy. Our results are also applicable to the computational setting: our stegosystem can be composed over a pseudorandom generator to send longer messages in a computationally secure fashion. In this respect our scheme offers a significant improvement in terms of the number of pseudorandom bits generated by the two parties in comparison to previous results known in the computational setting. Central to our approach for improving the overhead for general distributions is the use of combinatorial constructions that have been found to be useful in other contexts for derandomization: almost $t$-wise independent function families.
Security of Linear Secret-Sharing Schemes against Mass Surveillance
Following the line of work presented recently by Bellare, Paterson and Rogaway, we formalize and investigate the resistance of linear secret-sharing schemes to mass surveillance. This primitive is widely used to design IT systems in the modern computer world, and often it is implemented by a proprietary code that the provider (“big brother”) could manipulate to covertly violate the privacy of the users (by implementing Algorithm-Substitution Attacks or ASAs). First, we formalize the security notion that expresses the goal of big brother and prove that for any linear secret-sharing scheme there exists an undetectable subversion of it that efficiently allows surveillance. Second, we formalize the security notion that assures that a sharing scheme is secure against ASAs and construct the first sharing scheme that meets this notion. This work could serve as an important building block towards constructing systems secure against mass surveillance.
Integral Cryptanalysis on Full MISTY1
MISTY1 is a block cipher designed by Matsui in 1997. It was well evaluated and standardized by projects such as CRYPTREC, ISO/IEC, and NESSIE. In this paper, we propose a key recovery attack on the full MISTY1, i.e., we show that 8-round MISTY1 with 5 FL layers does not have 128-bit security. Many attacks against MISTY1 have been proposed, but there is no attack against the full MISTY1. Therefore, our attack is the first cryptanalysis against the full MISTY1. We construct a new integral characteristic by using the propagation characteristic of the division property, which was proposed in 2015. We first improve the division property by optimizing a public S-box and then construct a 6-round integral characteristic on MISTY1. Finally, we recover the secret key of the full MISTY1 with $2^{63.58}$ chosen plaintexts and $2^{121}$ time complexity. Moreover, if we can use $2^{63.994}$ chosen plaintexts, the time complexity for our attack is reduced to $2^{107.9}$. Note that our cryptanalysis is a theoretical attack. Therefore, the practical use of MISTY1 will not be affected by our attack.
ANONIZE: A Large-Scale Anonymous Survey System
A secure ad-hoc survey scheme enables a survey authority to independently (without any interaction) select an ad-hoc group of registered users based only on their identities (e.g., their email addresses), and create a survey where only selected users can anonymously submit exactly one response.
We present a formalization of secure ad-hoc surveys and present:
* an abstract provably-secure implementation based on standard cryptographic building blocks (which in particular are implied by the existence of enhanced trapdoor permutations in the CRS model);
* a practical instantiation of our abstract protocol, called ANONIZE, which is provably-secure in the random oracle model based on cryptographic assumptions on groups with bilinear maps.
As far as we know, ANONIZE constitutes the first implementation of a large-scale secure computation protocol (of non-trivial functionalities) that can scale to millions of users.
Indifferentiability of Confusion-Diffusion Networks
We show the first positive results for the indifferentiability security of confusion-diffusion networks (which are extensively used in the design of block ciphers and hash functions). In particular, our result shows that a constant number of confusion-diffusion rounds is sufficient to extend the domain of a public random permutation.
Another Look at Normal Approximations in Cryptanalysis
Statistical analysis of attacks on symmetric ciphers often requires assuming the normal behaviour of a test statistic. Typically such an assumption is made in an asymptotic sense. In this work, we consider concrete versions of some important normal approximations that have been made in the literature. To do this, we use the Berry-Esséen theorem to derive explicit bounds on the approximation errors. Analysing these error bounds in the cryptanalytic context throws up several surprising results. One important implication is that this puts in doubt the applicability of the order statistics based approach for analysing key recovery attacks on block ciphers. This approach has been earlier used to obtain several results on the data complexities of (multiple) linear and differential cryptanalysis. The non-applicability of the order statistics based approach puts a question mark on the data complexities obtained using this approach. Fortunately, we are able to recover all of these results by utilising the hypothesis testing framework. Detailed consideration of the error in normal approximation also has implications for the $\chi^2$ and the log-likelihood ratio (LLR) based test statistics. The normal approximation of the $\chi^2$ test statistic has some serious and counter-intuitive restrictions. One such restriction is that for multiple linear cryptanalysis, as the number of linear approximations grows, so does the requirement on the number of plaintext-ciphertext pairs for the approximation to be proper. The issue of satisfactorily addressing the problems with the application of the $\chi^2$ test statistic remains open. For the LLR test statistic, previous work used a normal approximation followed by another approximation to simplify the parameters of the normal approximation. We derive the error bound for the normal approximation, which turns out to be difficult to interpret. We show that the approximation required for simplifying the parameters restricts the applicability of the result. Further, we argue that this approximation is actually not required. More generally, the message of our work is that all cryptanalytic attacks should properly derive and interpret the error bounds for any normal approximation that is made.
Optimizing MAKWA on GPU and CPU
We present here optimized implementations of the MAKWA password hashing function on an AMD Radeon HD 7990 GPU, and compare its efficiency with an Intel i7 4770K CPU for systematic dictionary attacks. We find that the GPU seems to get more hashing done for a given budget, but not by a large amount (the GPU is less than twice as efficient as the CPU). Raising the MAKWA modulus size to 4096 bits, instead of the default 2048 bits, should restore the balance in favour of the CPU. We also find that power consumption, not hardware retail price, is likely to become the dominant factor for industrialized, long-term attacking efforts.
EdDSA for more curves
The original specification of EdDSA was suitable only for finite fields Fq with q mod 4 = 1. The main purpose of this document is to extend EdDSA to allow finite fields Fq with any odd q. This document also extends EdDSA to support prehashing, i.e., signing the hash of a message.
Quantum Cryptanalysis of NTRU
This paper explores some attacks that someone with a Quantum Computer may be able to perform against NTRUEncrypt, and in particular NTRUEncrypt as implemented by the publicly available library from Security Innovation. We show four attacks that an attacker with a Quantum Computer might be able to perform against encryption performed by this library. Two of these attacks recover the private key from the public key with less effort than expected; one takes advantage of how the published library is implemented, and the other is an academic attack that works against four of the parameter sets defined for NTRUEncrypt. In addition, we also show two attacks that are able to recover plaintext from the ciphertext and public key with less than expected effort. This has potential implications for the use of NTRU within TOR, as suggested by Whyte and Schanck.
Hawk: The Blockchain Model of Cryptography and Privacy-Preserving Smart Contracts
Emerging smart contract systems over decentralized cryptocurrencies allow mutually distrustful parties to transact safely with each other without trusting a third-party intermediary. In the event of contractual breaches or aborts, the decentralized blockchain ensures that other honest parties obtain commensurate remuneration. Existing systems, however, lack transactional privacy. All transactions, including the flow of money between pseudonyms and the amounts transacted, are exposed in the clear on the blockchain.
We present Hawk, a decentralized smart contract system that does not store financial transactions in the clear on the blockchain, thus retaining transactional privacy from the public’s view. A Hawk programmer can write a private smart contract in an intuitive manner without having to implement cryptography, and our compiler automatically generates an efficient cryptographic protocol in which contractual parties interact with the blockchain using cryptographic primitives such as succinct zero-knowledge proofs.
To formally define and reason about the security of our protocols, we are the first to formalize the blockchain model of secure computation. The formal modeling is of independent interest. We advocate that the community adopt such a formal model when designing interesting applications atop decentralized blockchains.
Preprocessing-Based Verification of Multiparty Protocols with Honest Majority
This paper presents a generic “GMW-style” method for turning passively secure protocols into protocols secure against covert attacks, adding relatively cheap offline preprocessing and post-execution verification phases. In the preprocessing phase, each party generates and shares a sufficient number of verified multiplication triples that will later be used to assist that party’s proof. The execution phase, after which the computed result is already available to the parties, has only negligible overhead that comes from signatures on sent messages. In the postprocessing phase, the verifiers repeat the computation of the prover in a secret-shared manner, checking that they obtain the same messages that the prover sent out during execution. The verification preserves the privacy guarantees of the original protocol. It is applicable to protocols doing computations over finite rings, even if the same protocol performs its computation over several distinct rings. We apply our verification method to the Sharemind platform for secure multiparty computations (SMC), evaluate its performance, and compare it to other existing SMC platforms offering security against stronger-than-passive attackers.
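The multiplication triples mentioned above are Beaver-style triples. As a toy illustration of why preprocessed triples are useful (this is a generic sketch, not the paper's protocol; the modulus, function names, and dealer-style triple generation are all illustrative assumptions), the following Python snippet multiplies two additively secret-shared values using one triple:

```python
import random

P = 2**31 - 1  # prime modulus for additive sharing (illustrative choice)

def share(x, n=3):
    # Additively share x among n parties modulo P.
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

def beaver_multiply(x_shares, y_shares, triple):
    # Multiply shared x and y using a preprocessed triple (a, b, c), c = a*b.
    # Parties open d = x - a and e = y - b (these reveal nothing about x, y),
    # then locally compute shares of x*y = c + d*b + e*a + d*e.
    a_sh, b_sh, c_sh = triple
    d = (reconstruct(x_shares) - reconstruct(a_sh)) % P  # opened value
    e = (reconstruct(y_shares) - reconstruct(b_sh)) % P  # opened value
    z_sh = [(c + d * b + e * a) % P for a, b, c in zip(a_sh, b_sh, c_sh)]
    z_sh[0] = (z_sh[0] + d * e) % P  # one party adds the public term d*e
    return z_sh

# Dealer-style triple generation; in the paper, triples are instead
# generated and verified during the preprocessing phase.
a, b = random.randrange(P), random.randrange(P)
triple = (share(a), share(b), share(a * b % P))
x, y = 12345, 67890
assert reconstruct(beaver_multiply(share(x), share(y), triple)) == (x * y) % P
```

The point of the preprocessing is visible here: once triples exist, the online multiplication needs only openings and local arithmetic.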
Decaf: Eliminating cofactors through point compression
We propose a new unified point compression format for Edwards, Twisted Edwards and Montgomery curves over large-characteristic fields, which effectively divides the curve's cofactor by 4 at very little cost to performance. This allows cofactor-4 curves to efficiently implement prime-order groups.
Function-Hiding Inner Product Encryption
We extend the reach of functional encryption schemes that are provably secure under simple assumptions against unbounded collusion to include function-hiding inner product schemes. Our scheme is a private key functional encryption scheme, where ciphertexts and secret keys correspond to vectors and a decryptor learns the value of the inner product of ciphertext and secret key vectors. Our scheme employs asymmetric bilinear maps and relies only on the SXDH assumption to satisfy a natural indistinguishability-based security notion where arbitrarily many key and ciphertext vectors can be simultaneously changed as long as the key-ciphertext dot product relationships are all preserved.
Privacy-preserving Frequent Itemset Mining for Sparse and Dense Data
Frequent itemset mining is a task that can in turn be used for other purposes, such as association rule mining. One problem is that the data may be sensitive, and its owner may refuse to release it for analysis in plaintext. Many privacy-preserving solutions for frequent itemset mining exist, but in every case enhancing privacy inevitably degrades efficiency. Leaking some less sensitive information, such as data density, can improve efficiency. In this paper, we devise an approach that works better for sparse matrices and compare it to related work that uses similar security requirements on a similar secure multiparty computation platform.
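For readers unfamiliar with the underlying (non-private) task, here is a minimal Apriori-style frequent itemset miner in Python — the plaintext computation that the privacy-preserving protocols must carry out securely. The function names and the level-wise candidate generation are illustrative, not taken from the paper:

```python
from itertools import combinations

def frequent_itemsets(transactions, min_support):
    # Level-wise Apriori: count candidate k-itemsets, keep those meeting
    # min_support, and build (k+1)-candidates from the survivors.
    items = sorted({i for t in transactions for i in t})
    level = [frozenset([i]) for i in items]
    frequent = {}
    k = 1
    while level:
        counts = {c: sum(1 for t in transactions if c <= t) for c in level}
        survivors = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(survivors)
        # Candidates of size k+1 are unions of surviving k-itemsets; any
        # candidate with an infrequent subset is filtered out by counting.
        keys = list(survivors)
        level = {a | b for a, b in combinations(keys, 2) if len(a | b) == k + 1}
        k += 1
    return frequent
```

For example, on the transactions `[{'a','b','c'}, {'a','b'}, {'a','c'}, {'b','c'}, {'a','b','c'}]` with `min_support=3`, all singletons and all pairs are frequent but the triple `{'a','b','c'}` (support 2) is not.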
Smart Security Management in Secure Devices
Among other threats, secure components are subjected to physical attacks whose aim is to recover the secret information they store. Most of the work carried out to protect these components generally consists in developing protections (or countermeasures) taken one by one. But this ``countermeasure-centered'' approach drastically decreases the performance of the chip in terms of power, speed and availability. In order to overcome this limitation, we propose a complementary approach: smart dynamic management of the whole set of countermeasures embedded in the component. Two main specifications for such management are required in a real world application (for example, a conditional access system for Pay-TV): it has to provide capabilities for the chip to distinguish between attacks and normal use cases (without the help of a human being and in a robust but versatile way); it also has to be based on mechanisms which dynamically find a trade-off between security and performance.
In this article, a prototype which enables such security management is described. The solution is based on a double-processor architecture: one processor embeds a representative set of countermeasures (and mechanisms to define their parameters) and executes the application code. The second processor, on the same chip, applies a given security strategy, but without requesting sensitive data from the first processor. The chosen strategy is based on fuzzy logic reasoning to enable the designer to describe, using a fairly simple formalism, both the attack paths and the normal use cases. A proof of concept has been proposed for the smart card part of a conditional access for Pay-TV, but it could easily be fine-tuned for other applications.
GMU Hardware API for Authenticated Ciphers
In this paper, we propose a universal hardware API for authenticated ciphers, which can be used in any future implementations of authenticated ciphers submitted to the CAESAR competition. A common interface and communication protocol would help reduce any potential biases, and would make comparisons in hardware more reliable and fair. By design, our proposed API is equally suitable for hardware implementations of authenticated ciphers developed manually (at the register-transfer level) and those obtained using high-level synthesis tools. Our implementation of the proposed interface and communication protocol includes universal, open-source pre-processing and post-processing units, common to all CAESAR candidates. Apart from the full documentation, examples, and the source code of the pre-processing and post-processing units, we are making available in the public domain a) a universal testbench to verify the functionality of any CAESAR candidate implemented using the GMU hardware API, b) a Python script used to automatically generate test vectors for this testbench, c) VHDL wrappers used to determine the maximum clock frequency and the resource utilization of all implementations, and d) RTL VHDL source code of high-speed implementations of AES and the Keccak permutation F. We hope that the existence of these resources will substantially reduce the time necessary to develop hardware implementations of all CAESAR candidates for the purpose of evaluation, comparison, and future deployment in real products.
The Fallacy of Composition of Oblivious RAM and Searchable Encryption
Oblivious RAM (ORAM) is a tool proposed to hide access pattern leakage, and there has been a lot of progress in the efficiency of ORAM schemes; however, less attention has been paid to studying the applicability of ORAM to cloud applications such as symmetric searchable encryption (SSE). Although searchable encryption is one of the motivations for ORAM research, no in-depth study of the applicability of ORAM to searchable encryption exists as of June 2015. In this work, we initiate the formal study of using ORAM to reduce the access pattern leakage in searchable encryption.
We propose four new leakage classes and develop a systematic methodology to study the applicability of ORAM to SSE. We develop a worst-case communication baseline for SSE. We show that completely eliminating leakage in SSE is impossible. We propose single keyword schemes for our leakage classes and show that either they perform worse than streaming the entire outsourced data (for a large fraction of queries) or they do not provide meaningful reduction in leakage. We present detailed evaluation using the Enron email corpus and the complete English Wikipedia corpus.
De Bruijn Sequences from Joining Cycles of Nonlinear Feedback Shift Registers
De Bruijn sequences are a class of nonlinear recurring sequences that have wide applications in cryptography and modern communication systems. One main method for constructing them is to join the cycles of a feedback shift register (FSR) into a full cycle, which is called the cycle joining method. Jansen et al. (IEEE Trans. on Information Theory, 1991) proposed an algorithm for joining the cycles of an arbitrary FSR. This classical algorithm is further studied in this paper. Motivated by their work, we propose a new algorithm for joining cycles, which doubles the efficiency of the classical cycle joining algorithm. Since both algorithms need FSRs that only generate short cycles, we also propose efficient ways to construct short-cycle FSRs. These FSRs are nonlinear and are easy to obtain. As a result, a large number of de Bruijn sequences are constructed from them. We explicitly determine the size of these de Bruijn sequences. In addition, we show a property of the pure circulating register, which is important for searching for short-cycle FSRs.
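To give a concrete feel for the objects being constructed (though via a different method than the cycle joining technique of the paper), here is the classic greedy "prefer-one" construction of a binary de Bruijn sequence, sketched in Python:

```python
def debruijn_prefer_one(n):
    # Greedy "prefer-one" construction of a binary de Bruijn sequence B(2, n):
    # every binary n-gram appears exactly once in the cyclic sequence.
    seq = [0] * n            # start with the all-zero n-gram
    seen = {tuple(seq)}
    while True:
        window = seq[-(n - 1):] if n > 1 else []
        for bit in (1, 0):   # prefer appending 1
            cand = tuple(window + [bit])
            if cand not in seen:
                seen.add(cand)
                seq.append(bit)
                break
        else:
            break            # neither extension is new: sequence complete
    return seq[:2 ** n]      # cyclic sequence of length 2^n
```

For instance, `debruijn_prefer_one(3)` yields `00011101`, whose eight cyclic windows of length 3 are exactly the eight binary 3-grams.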
Improved Linear Hull Attack on Round-Reduced \textsc{Simon} with Dynamic Key-guessing Techniques
\textsc{Simon} is a lightweight block cipher family proposed by the NSA in 2013. It has drawn many cryptanalysts' attention, and a variety of cryptanalysis results have been published, including differential, linear, impossible differential, and integral cryptanalysis.
In this paper, we give improved linear attacks on all reduced versions of \textsc{Simon} using the dynamic key-guessing technique, which was recently proposed to improve differential attacks on \textsc{Simon}.
By establishing the Boolean function of the parity bit in the linear hull distinguisher and reducing this function according to the properties of the AND operation, we can guess different subkeys (or equivalent subkeys) in different situations, which decreases the number of key bits involved in the attack and further reduces the time complexity.
As a result, 23-round \textsc{Simon}32/64, 24-round \textsc{Simon}48/72, 25-round \textsc{Simon}48/96, 30-round \textsc{Simon}64/96, 31-round \textsc{Simon}64/128, 37-round \textsc{Simon}96/96, 38-round \textsc{Simon}96/144, 49-round \textsc{Simon}128/128, 51-round \textsc{Simon}128/192 and 53-round \textsc{Simon}128/256 can be attacked.
As far as we know, our attacks on most reduced versions of \textsc{Simon} are the best among previously published cryptanalysis results. However, they do not threaten the security of the full-round \textsc{Simon} family.
Communication Complexity of Conditional Disclosure of Secrets and Attribute-Based Encryption.
We initiate a systematic treatment of the communication complexity of conditional disclosure of secrets (CDS), where two parties want to disclose a secret to a third party if and only if their respective inputs satisfy some predicate. We present a general upper bound and the first non-trivial lower bounds for conditional disclosure of secrets. Moreover, we achieve tight lower bounds for many interesting settings of parameters for CDS with linear reconstruction, the latter being a requirement in the application to attribute-based encryption. In particular, our lower bounds explain the trade-off between ciphertext and secret key sizes of several existing attribute-based encryption schemes based on the dual system methodology.
Secure Multi-Party Shuffling
In secure multi-party shuffling, multiple parties, each holding an input, want to agree on a random permutation of their inputs while keeping the permutation secret. This problem is important as a primitive in many privacy-preserving applications such as anonymous communication, location-based services, and electronic voting.
Known techniques for solving this problem suffer from poor scalability, load-balancing issues, trusted party assumptions, and/or weak security guarantees.
In this paper, we propose an unconditionally-secure protocol for multi-party shuffling that scales well with the number of parties and is load-balanced. In particular, we require each party to send only a polylogarithmic number of bits and perform a polylogarithmic number of operations while incurring only a logarithmic round complexity. We show security under universal composability against up to about n/3 fully-malicious parties. We also provide simulation results showing that our protocol improves significantly over previous work. For example, for one million parties, when compared to the state of the art, our protocol reduces the communication and computation costs by at least three orders of magnitude and slightly decreases the number of communication rounds.
Analyzing the Efficiency of Biased-Fault Based Attacks
The traditional fault analysis techniques developed over the past decade rely on a fault model, a rigid assumption about the nature of the fault. A practical challenge for all fault attacks is to identify a fault injection method that achieves the presumed fault model.
In this paper, we analyze a class of more recently proposed fault analysis techniques, which adopt a biased fault model. Biased fault attacks enable a more flexible fault model and are therefore easier to apply in practice.
The purpose of our analysis is to evaluate the relative efficiency of several recently proposed biased-fault attacks, including Fault Sensitivity Analysis (FSA), Non-Uniform Error Value Analysis (NUEVA), Non-Uniform Faulty Value Analysis (NUFVA), and Differential Fault Intensity Analysis (DFIA).
We compare the relative performance of each technique in a common framework, using a common circuit and a common fault injection method. We show that, for an identical circuit and an identical fault injection method, the number of faults per attack varies greatly with the analysis technique.
In particular, DFIA is more efficient than FSA, and FSA is more efficient than both NUEVA and NUFVA. In terms of number of fault injections until full key disclosure, for a typical case, FSA uses 8x more faults than DFIA, and NUEVA uses 33x more faults than DFIA. Hence, the post-processing technique selected in a biased-fault attack has a significant impact on the probability of a successful attack.
Strong Security of the Strongly Multiplicative Ramp Secret Sharing based on Algebraic Curves
We introduce a coding-theoretic criterion for Yamamoto's strong security of ramp secret sharing schemes. Using this criterion, we then show the strong security of the strongly multiplicative ramp secret sharing scheme proposed by Chen et al. in 2008.
Cryptanalysis of a modern rotor machine in a multicast setting
At FSE '93, Anderson presented a modern byte-oriented rotor machine that is suitable for fast software implementation. Building on a combination of chosen ciphertexts and chosen plaintexts, we show that in a setting with multiple recipients, recovery of an (equivalent) secret key can be feasible within minutes in a standard computer algebra system.
Last updated: 2016-08-12
A Hybrid Gaussian Sampler for Lattices over Rings
Gaussian sampling over lattices is a cornerstone of lattice-based cryptography, as it makes it possible to build numerous cryptographic primitives. There are two main algorithms performing this task. The first is due to Klein (SODA 2000) and Gentry, Peikert and Vaikuntanathan (STOC 2008), and outputs vectors of good quality but runs rather slowly, in quadratic time. The second is due to Peikert (CRYPTO 2010) and outputs vectors of slightly worse quality, but can be made to run in quasilinear time in the ring setting.
We present a Gaussian sampler optimized for lattices over the ring of integers of a cyclotomic number field. At a high level it works like Klein's sampler, but uses an efficient variant of Peikert's sampler as a subroutine. The result is a new sampler that samples vectors with a quality close to Klein's sampler and achieves the same quasilinear complexity as Peikert's sampler. In practice, we get close to the best of both worlds.
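Both samplers above reduce, at their core, to sampling from one-dimensional discrete Gaussians. As a hedged illustration of that building block (this is a plain rejection sampler, not Klein's or Peikert's algorithm, and the tail-cut parameter is an illustrative choice; it is also not constant-time, so unsuitable for production use):

```python
import math, random

def sample_discrete_gaussian(sigma, center=0.0, tail=12):
    # Rejection-sample z from the discrete Gaussian D_{Z, sigma, c}:
    # Pr[z] proportional to exp(-(z - c)^2 / (2 sigma^2)).
    # Integers beyond `tail` standard deviations are cut off; their total
    # probability mass is negligible for tail = 12.
    lo = int(math.floor(center - tail * sigma))
    hi = int(math.ceil(center + tail * sigma))
    while True:
        z = random.randint(lo, hi)            # uniform candidate
        rho = math.exp(-(z - center) ** 2 / (2 * sigma ** 2))
        if random.random() < rho:             # accept with Gaussian weight
            return z
```

Klein-style samplers call such a routine once per basis vector, with a center determined by the previous coordinates; Peikert-style samplers instead use a fixed (offline) perturbation plus a round-off step.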
Diversity and Transparency for ECC
Generating and standardizing elliptic curves for use in a cryptographic context is a hard task. In this note, we don’t make an explicit proposal for an elliptic curve, but we deal with the following issues.
Security: We give a list of criteria that should be satisfied by a secure elliptic curve. Although a few of these criteria are incompatible, we detail what we think are the best choices for optimal security.
Transparency: We sketch a way to generate a curve in a fully transparent way, so that it can be trusted and not suspected to belong to a (not publicly known to be) vulnerable class. In particular, since the computational cost of verifying the output of such a process may be quite high, we sketch out the format of a certificate that eases the computations. We think this format might deserve to be standardized.
Single-Cycle Implementations of Block Ciphers
Security mechanisms to protect our systems and data from malicious adversaries have become essential. Strong encryption algorithms are an important building block of these solutions. However, each application has its own requirements and it is not always possible to find a cipher that meets them all. This work compares unrolled combinatorial hardware implementations of six lightweight block ciphers, along with an AES implementation as a baseline. Up until now, the majority of such ciphers were designed for area-constrained environments where speed is often not crucial, but recently the need for single-cycle, low-latency block ciphers with limited area requirements has arisen to build security architectures for embedded systems. Our comparison shows that some designers are already on this track, but a lot of work still remains to be done.