All papers in 2015 (1255 results)
Last updated: 2016-03-25
Security Attack on CloudBI: Practical privacy-preserving outsourcing of biometric identification in the cloud
In ESORICS 2015, Wang et al. proposed CloudBI, a privacy-preserving outsourcing design for biometric identification using public cloud platforms. CloudBI introduces two designs: CloudBI-I and CloudBI-II. CloudBI-I is more efficient, while CloudBI-II offers stronger privacy protection. Under the threat model of CloudBI, CloudBI-II is claimed to be secure even when the cloud service provider can act as a user and submit fingerprint information for identification. However, this security argument does not hold: CloudBI-II can be completely broken once the cloud service provider submits a small number of identification requests. In this technical report, we review the design of CloudBI-II and introduce a security attack that efficiently breaks it.
Mitigating Multi-Target Attacks in Hash-based Signatures
This work introduces XMSS-T, a new hash-based signature scheme with tight security. Previous hash-based signature schemes suffer a security loss that is linear in performance parameters such as the total tree height. Our new scheme can use hash functions with a smaller output length at the same security level, immediately leading to a smaller signature size. XMSS-T is stateful; however, the same techniques also apply directly to the recent stateless hash-based signature scheme SPHINCS (Eurocrypt 2015), improving its signature size as well.
Being a little more specific and technical, the tight security stems from new multi-target notions of hash-function properties which we define and analyze. We give precise complexity bounds for breaking these security properties under both classical and quantum generic attacks, thus establishing a reliable estimate of the quantum security of XMSS-T. In particular, we prove quantum upper and lower bounds on the query complexity tailored for cryptographic applications, whereas standard techniques in quantum query complexity have limitations, such as usually considering only worst-case complexity. Our proof techniques may be useful elsewhere.
We also implement XMSS-T and compare its performance to that of the most recent stateful hash-based signature scheme XMSS (PQCrypto 2011).
Functional Encryption for Inner Product with Full Function Privacy
Functional encryption (FE) supports constrained decryption keys that allow decrypters to learn specific functions of encrypted messages. In numerous practical applications of FE, confidentiality must be assured not only for the encrypted data but also for the functions for which functional keys are provided. This paper presents a non-generic simple private key FE scheme for the inner product functionality, also known as inner product encryption (IPE). In contrast to the existing similar schemes, our construction achieves the strongest indistinguishability-based notion of function privacy in the private key setting without employing any computationally expensive cryptographic tool or non-standard complexity assumption. Our construction is built in the asymmetric bilinear pairing group setting of prime order. The security of our scheme is based on the well-studied Symmetric External Diffie-Hellman (SXDH) assumption.
Identity-based Hierarchical Key-insulated Encryption without Random Oracles
Key-insulated encryption is one of the effective solutions to the key exposure problem. At Asiacrypt'05, Hanaoka et al. proposed an identity-based hierarchical key-insulated encryption (hierarchical IKE) scheme. Although their scheme is secure only in the random oracle model, it has a ``hierarchical key-updating structure,'' an attractive functionality that enhances key exposure resistance.
In this paper, we propose the first hierarchical IKE scheme without random oracles. Our hierarchical IKE scheme is secure under the symmetric external Diffie-Hellman (SXDH) assumption, which is known to be simple and static. In particular, in the non-hierarchical case, our construction is the first IKE scheme that achieves constant-size parameters, including public parameters, secret keys, and ciphertexts.
Furthermore, using our technique, we also propose the first public-key-based key-insulated encryption (PK-KIE) scheme in the hierarchical setting.
Non-Malleable Functions and Their Applications
We formally study ``non-malleable functions'' (NMFs), a general cryptographic primitive which simplifies and relaxes ``non-malleable one-way/hash functions'' (NMOWHFs) introduced by Boldyreva et al. (Asiacrypt 2009) and refined by Baecher et al. (CT-RSA 2010). NMFs focus on basic functions, rather than the one-way/hash functions considered in the literature on NMOWHFs.
We mainly follow Baecher et al. to formalize a game-based definition for NMFs. Roughly, a function $f$ is non-malleable if given an image $y^* \leftarrow f(x^*)$ for a randomly chosen $x^*$, it is hard to output a mauled image $y$ with a transformation $\phi$ from some prefixed transformation class s.t. $y = f(\phi(x^*))$. A distinctive strengthening of our non-malleable notion is that $\phi$ such that $\phi(x^*) = x^*$ is allowed. We also consider adaptive non-malleability, which stipulates that non-malleability holds even when an inversion oracle is available.
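In display form, the game sketched above reads as follows (our transcription, with $\Phi$ denoting the prefixed transformation class and $\mathcal{A}$ the adversary):
$$x^* \xleftarrow{\$} \mathrm{Dom}(f); \quad y^* \leftarrow f(x^*); \quad (\phi, y) \leftarrow \mathcal{A}(y^*); \quad \mathcal{A} \text{ wins iff } \phi \in \Phi \text{ and } y = f(\phi(x^*)),$$
where, unlike in earlier notions, a winning $\phi$ with $\phi(x^*) = x^*$ is not excluded; in the adaptive variant, $\mathcal{A}$ additionally gets an inversion oracle for $f$.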
We investigate the relations between non-malleability and one-wayness in depth. In the non-adaptive setting, we show that for any achievable transformation class, non-malleability implies one-wayness for poly-to-one functions but not vice versa. In the adaptive setting, we show that for most algebra-induced transformation classes, adaptive non-malleability (ANM) is equivalent to adaptive one-wayness (AOW) for injective functions. These results establish theoretical connections between non-malleability and one-wayness for functions, which extend to trapdoor functions as well, and thus resolve the open problems left by Kiltz et al. (Eurocrypt 2010). We also study the relations between standard OW/NM and hinted OW/NM, where the latter notions are typically more useful in practice. Towards efficient realizations of NMFs, we give a deterministic construction from adaptive trapdoor functions and a randomized construction from all-but-one lossy functions and one-time signatures. This partially solves an open problem posed by Boldyreva et al. (Asiacrypt 2009).
Finally, we explore applications of NMFs in security against related-key attacks (RKA). We first show that the implication AOW $\Rightarrow$ ANM provides key conceptual insight into addressing non-trivial copy attacks in RKA security. We then show that NMFs give rise to a generic construction of continuous non-malleable key derivation functions, which have proven to be very useful in achieving RKA security for numerous cryptographic primitives. In particular, our construction simplifies and clarifies the construction by Qin et al. (PKC 2015).
Improved Test Pattern Generation for Hardware Trojan Detection using Genetic Algorithm and Boolean Satisfiability
Test generation for \emph{Hardware Trojan Horses} (HTH) detection is extremely challenging, as Trojans are designed to be triggered by very rare logic conditions at internal nodes of the circuit.
In this paper, we propose a \textit{Genetic Algorithm} (GA) based Automatic Test Pattern Generation (ATPG) technique, enhanced by an automated solution to an associated \textit{Boolean Satisfiability} problem. The main insight is that, given a specific internal trigger condition, it is not possible to attack an arbitrary node (payload) of the circuit, as the effect of the logic malfunction induced by the HTH might not propagate to the output. Based on this observation, we propose a fault-simulation-based framework which enumerates the feasible payload nodes for a specific triggering condition. Subsequently, a compact set of test vectors is selected based on their ability to detect the logic malfunction at the feasible payload nodes, thus increasing their effectiveness. Test vectors generated by the proposed scheme were found to achieve higher detection coverage over a large population of HTHs in ISCAS benchmark circuits, compared to a previously proposed logic-testing-based Trojan detection technique.
Comment on Quantum Cryptography---Which is More Important, Signal Security, Information Security or Communication Reliability
Signal security aims to prevent the adversary from copying communication signals---so it is with quantum cryptography. Information security focuses on preventing the adversary from knowing the plaintext or cheating users---so it is with classical cryptography. Communication reliability means that the intended receiver can recover the right communication signals sent by the sender.
In this note, we stress that in the presence of an adversary quantum cryptography can do nothing except detect the adversary's presence, because the intrusion of the adversary has to disturb the communication signals, so that the intended receiver cannot recover the right signals. Classical cryptography, by contrast, works well in the presence of eavesdropping although it cannot detect it. We also remark that in the past decades the ability of quantum cryptography to detect eavesdropping has been overstated. The plan to build a large quantum photonic network is infeasible.
Adaptively Secure Garbled Circuits from One-Way Functions
A garbling scheme is used to garble a circuit $C$ and an input $x$ in a way that reveals the output $C(x)$ but hides everything else. In many settings, the circuit can be garbled off-line without strict efficiency constraints, but the input must be garbled very efficiently on-line, with much lower complexity than evaluating the circuit. Yao's scheme has essentially optimal on-line complexity, but only achieves selective security, where the adversary must choose the input $x$ prior to seeing the garbled circuit. It has remained an open problem to achieve adaptive security, where the adversary can choose $x$ after seeing the garbled circuit, while preserving on-line efficiency.
In this work, we modify Yao's scheme in a way that allows us to prove adaptive security under one-way functions. As our main instantiation, we get a scheme where the on-line complexity is only proportional to the width $w$ of the circuit, which corresponds to the space complexity of the computation, but is independent of the circuit's depth $d$. Alternatively, we can get an instantiation where the on-line complexity is only proportional to the input/output size and the depth $d$ of the circuit but independent of its width $w$, albeit in this case we incur a $2^{O(d)}$ security loss in our reduction. More broadly, we relate the on-line complexity of adaptively secure garbling schemes in our framework to a certain type of pebble complexity of the circuit.
As our main tool, of independent interest, we develop a new notion of somewhere equivocal encryption, which allows us to efficiently equivocate on a small subset of the message bits.
Trap Me If You Can -- Million Dollar Curve
A longstanding problem in cryptography is the generation of publicly verifiable randomness. In particular, public verifiability makes it possible to generate parameters for a cryptosystem in a way people can legitimately trust. There are many examples of standards using arbitrary constants which are now challenged and criticized for this reason, some of which are even suspected of containing a trap. Several sources of public entropy have already been proposed, such as lotteries, stock market prices, the bitcoin blockchain, board games, or even Twitter and live webcams.
In this article, we propose a way of combining lotteries from several different countries, which would require an adversary to manipulate several independent draws in order to introduce a trap in the generated cryptosystem. Each and every time a new source of public entropy is suggested, it receives its share of criticism for being "easy to manipulate". We do not expect our solution to be an exception in this respect, and will gladly receive any suggestion allowing us to increase the confidence in the cryptosystem parameters we generate.
Our method allows us to build what we call a publicly verifiable RNG, from which we extract a seed that is used to instantiate and initialize a Blum-Blum-Shub random generator. We then use the binary stream produced by this generator as input to a filtering function which deterministically outputs secure and uniformly distributed parameters from uniform bitstreams.
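As a rough illustration of this pipeline (seed, then a Blum-Blum-Shub stream, then filtering), here is a minimal Python sketch; the toy primes and the trivial use of the output are our own placeholders, not the parameters or filter used for Million Dollar Curve:

# Minimal Blum-Blum-Shub sketch (toy parameters, for illustration only).
def bbs_bits(seed, p, q, nbits):
    """x_{i+1} = x_i^2 mod N with N = p*q and p, q = 3 (mod 4);
    output the least significant bit of each state."""
    assert p % 4 == 3 and q % 4 == 3
    n = p * q
    x = seed % n
    out = []
    for _ in range(nbits):
        x = (x * x) % n
        out.append(x & 1)
    return out

# Toy Blum primes; a real instantiation uses large random primes and a
# publicly verifiable seed derived from the lottery draws.
bits = bbs_bits(seed=271828182845, p=100003, q=100019, nbits=256)
candidate = int("".join(map(str, bits)), 2)   # material fed to the filter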
We apply our methodology to the ECDH cryptosystem, and propose the "Million Dollar Curve" as an alternative to curves P-256 and Curve25519.
Generic Transformation of a CCA2-Secure Public-Key Encryption Scheme to an eCK-Secure Key Exchange Protocol in the Standard Model
LaMacchia, Lauter and Mityagin presented a strong security model for authenticated key agreement, namely the eCK model. They also constructed a protocol, namely the NAXOS protocol, that enjoys a simple security proof in the eCK model. However, the NAXOS protocol uses a random-oracle-based technique to combine the long-term secret key and the per-session randomness, the so-called NAXOS trick, in order to achieve the eCK security definition. For NAXOS-trick-based protocols, the leakage of per-session randomness modelled in the eCK model is somewhat unnatural, because the eCK model leaks the per-session randomness while the output of the NAXOS-trick computation remains safe. In this work, we present a standard model eCK-secure protocol construction that eliminates the NAXOS trick. Moreover, our protocol is a generic construction, which can be instantiated with arbitrary suitable cryptographic primitives. Thus, we present a generic eCK-secure, NAXOS-free, standard model key exchange protocol. To the best of our knowledge, this is the first generic transformation of a CCA2-secure public key encryption scheme into an eCK-secure key exchange protocol in the standard model.
Missing a trick: Karatsuba variations
There are a variety of ways of applying the Karatsuba idea to multi-digit multiplication. These apply particularly well in the context where digits do not use the full word-length of the computer, so that partial products can be safely accumulated without fear of overflow. Here we re-visit the ``arbitrary degree'' version of Karatsuba and show that the cost of this little-known variant has been over-estimated in the past. We also attempt to definitively answer the question as to the cross-over point where Karatsuba performs better than the classic method.
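For reference, a minimal Python sketch of the classic 2-way Karatsuba recursion follows; it illustrates the basic trick of trading four half-size multiplications for three (our own illustration of the classic method, not the arbitrary-degree variant studied in the paper):

def karatsuba(x, y):
    """Classic 2-way Karatsuba multiplication of non-negative integers."""
    if x < 2**32 or y < 2**32:                  # small operands: schoolbook
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    x1, x0 = x >> m, x & ((1 << m) - 1)         # x = x1*2^m + x0
    y1, y0 = y >> m, y & ((1 << m) - 1)         # y = y1*2^m + y0
    z2 = karatsuba(x1, y1)
    z0 = karatsuba(x0, y0)
    z1 = karatsuba(x1 + x0, y1 + y0) - z2 - z0  # one multiplication saved
    return (z2 << (2 * m)) + (z1 << m) + z0

assert karatsuba(12345678901234567890, 98765432109876543210) == \
       12345678901234567890 * 98765432109876543210

In the reduced-radix setting the paper considers, digits occupy less than a machine word, so the partial products arising in such recursions can be accumulated without fear of overflow.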
Universally Composable Direct Anonymous Attestation
Direct Anonymous Attestation (DAA) is one of the most complex cryptographic algorithms that has been deployed in practice. In spite of this, and the long body of work on the subject, there is still no fully satisfactory security definition for DAA. This was already acknowledged by Bernard et al. (IJIC'13) who showed that in existing models even fully insecure protocols may be deemed secure. Bernard et al. therefore proposed an extensive set of security games, which however aimed only at a simplified setting, termed pre-DAA. In pre-DAA the host platform that runs the TPM is assumed to be trusted too. Consequently, their notion does not guarantee any security if the TPM is embedded in a potentially corrupt host, which is a significant restriction. In this paper, we give a comprehensive security definition for full DAA in the form of an ideal functionality in the Universal Composability model. Our definition considers the host and TPM to be individual entities that can be in different corruption states. None of the existing DAA schemes immediately satisfies our strong security notion, and we therefore also propose a realization that is based on a DAA scheme supported by the TPM 2.0 standard and rigorously prove it secure in our model.
Variation of GGH15 Multilinear Maps
Recently, Coron presented an attack on GGH15 multilinear maps, which breaks the multipartite Diffie-Hellman key exchange protocol based on GGH15. In this paper, we describe a variation of GGH15 which seems to thwart the known attacks.
On values of vectorial Boolean functions and related problems in APN functions
In this paper we prove that any function at distance 1 from an APN function is differentially 4-uniform. We also prove that, up to dimension 5, there are no APN functions at distance 1 from another APN function. We determine some properties of the set of values of an arbitrary vectorial Boolean function from F_2^n to F_2^n in connection with the set of values of its derivatives. These results are connected to several open questions concerning metric properties of APN functions.
Verifiable ASICs
A manufacturer of custom hardware (ASICs) can undermine the intended execution of that hardware; high-assurance execution thus requires controlling the manufacturing chain. However, a trusted platform might be orders of magnitude worse in performance or price than an advanced, untrusted platform. This paper initiates exploration of an alternative: using verifiable computation (VC), an untrusted ASIC computes proofs of correct execution, which are verified by a trusted processor or ASIC. In contrast to the usual VC setup, here the prover and verifier together must impose less overhead than the alternative of executing directly on the trusted platform.
We instantiate this approach by designing and implementing physically realizable, area-efficient, high throughput ASICs (for a prover and verifier), in fully synthesizable Verilog. The system, called Zebra, is based on the CMT and Allspice interactive proof protocols, and required new observations about CMT, careful hardware design, and attention to architectural challenges. For a class of real computations, Zebra meets or exceeds the performance of executing directly on the trusted platform.
Quantum Cryptography Beyond Quantum Key Distribution
Quantum cryptography is the art and science of exploiting quantum mechanical effects in order to perform cryptographic tasks. While the most well-known example of this discipline is quantum key distribution (QKD), there exist many other applications such as quantum money, randomness generation, secure two- and multi-party computation and delegated quantum computation. Quantum cryptography also studies the limitations and challenges resulting from quantum adversaries---including the impossibility of quantum bit commitment, the difficulty of quantum rewinding and the definition of quantum security models for classical primitives.
In this review article, aimed primarily at cryptographers unfamiliar with the quantum world, we survey the area of theoretical quantum cryptography, with an emphasis on the constructions and limitations beyond the realm of QKD.
Verifiable side-channel security of cryptographic implementations: constant-time MEE-CBC
We provide further evidence that implementing software countermeasures against timing attacks is a non-trivial task and requires domain-specific software development processes: we report an implementation bug in the S2N library, recently released by AWS Labs. This bug (now fixed) allowed bypassing the balancing countermeasures against timing attacks deployed in the implementation of the MAC-then-Encode-then-CBC-Encrypt (MEE-CBC) component, creating a timing side-channel similar to that exploited by Lucky 13.
Although such an attack could only be launched when the MEE-CBC component is used in isolation (Albrecht and Paterson recently confirmed in independent work that S2N's second line of defence provides adequate mitigation), its existence shows that conventional software validation processes are not effective in this domain. To solve this problem, we define a methodology for proving security of implementations in the presence of timing attackers: first, prove black-box security of an algorithmic description of a cryptographic construction; then, establish functional correctness of an implementation with respect to the algorithmic description; and finally, prove that the implementation is leakage secure.
We present a proof-of-concept application of our methodology to MEE-CBC, bringing together three different formal verification tools to produce an assembly implementation of this construction that is verifiably secure against adversaries with access to some timing leakage. Our methodology subsumes previous work connecting provable security and side-channel analysis at the implementation level, and supports the verification of a much larger case study. Our case study itself provides the first provable security validation of complex timing countermeasures deployed, for example, in OpenSSL.
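As a minimal Python illustration of the class of flaw at issue (an illustration only, not the actual S2N bug), compare an early-exit byte comparison, whose running time depends on secret data, with a constant-time one:

import hmac

def leaky_equal(a: bytes, b: bytes) -> bool:
    # Early exit: the running time reveals the position of the first
    # mismatching byte, the kind of signal Lucky 13-style attacks exploit.
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):
        if x != y:
            return False
    return True

def constant_time_equal(a: bytes, b: bytes) -> bool:
    # hmac.compare_digest runs in time independent of where inputs differ.
    return hmac.compare_digest(a, b)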
Last updated: 2016-12-29
Exploiting PUF Unreliability to Secure Wireless Sensing
Wireless sensors attract increased attention from both academia and industry owing to emerging applications such as the Internet of Things (IoT), smart homes, e-health, etc. It is widely accepted that the security of this super-large population of distributed ubiquitous devices and the privacy of the collected data are of ultimate importance. Sensor security that relies on traditional cryptography is vulnerable to various attacks and usually does not lend itself to low-cost and lightweight applications. To overcome this, this paper proposes an alternative secure wireless sensing approach that is suitable for resource-restricted IoT devices. In particular, we exploit the unreliability of a physical unclonable function (PUF) that is sensitive to ambient environmental variations to guarantee the veracity of the sensed value. In this case, the PUF itself acts as a sensor or is integrated with a sensor, called a PUF sensor. Thus, for a PUF sensor, the processes of cryptography and sensing are inseparable. In our security analyses, it is assumed that i) the PUF sensor is located in a hostile environment, ii) the communication channel is insecure, and iii) no pricey crypto module relying on stored secret keys is involved. Even in such cases, the PUF sensor still provides a high level of security at low cost. In addition, the PUF sensor is inherently unclonable. We validate this alternative wireless sensing approach with a proof-of-concept experimental implementation of the proposed PUF sensor.
Secure Goods Supply Chain and Key Exchange with Virtual Proof of Reality
A new security protocol of {\it virtual proof of reality} (VP) was recently proposed by Ruhrmair {\it et al.} A VP allows one party, the prover, to make a physical statement to the other party, the verifier, over a digital communication channel without using any secret keys other than the messages sent between these two parties. The physical statement could be a physical feature---e.g., temperature---or phenomenon---e.g., destruction---of the hardware in the prover's system. We present two applications---secure key exchange and a secure goods supply chain---building on the VPs of temperature, location, and destruction. Moreover, we experimentally demonstrate the first electrical circuit-based VP of destruction through the proposed hardware security primitive---a hybrid memristor and physical unclonable function (memristor-PUF) architecture---which takes advantage of the PUF extracted from static variations of CMOS devices inherent to the fabrication process and dynamic variations attributed to switching variabilities of nano memristors.
Asynchronous Secure Multiparty Computation in Constant Time
In the setting of secure multiparty computation, a set of mutually distrusting parties wish to securely compute a joint function. It is well known that if the communication model is asynchronous, meaning that messages can be arbitrarily delayed by an unbounded (yet finite) amount of time, secure computation is feasible if and only if at least two-thirds of the parties are honest, as was shown by Ben-Or, Canetti, and Goldreich [STOC'93] and by Ben-Or, Kelmer, and Rabin [PODC'94]. The running time of all currently known protocols depends on the function being evaluated. In this work we present the first asynchronous MPC protocol that runs in constant time.
Our starting point is the asynchronous MPC protocol of Hirt, Nielsen, and Przydatek [Eurocrypt'05, ICALP'08]. We integrate \emph{threshold fully homomorphic encryption} in order to reduce the interactions between the parties, thus completely removing the need for the expensive \emph{king-slaves} approach taken by Hirt et al. Initially, assuming an honest majority, we construct a constant-time protocol in the asynchronous Byzantine agreement (ABA) hybrid model. Using a concurrent ABA protocol that runs in constant expected time, we obtain a constant expected time asynchronous MPC protocol, secure against static malicious adversaries, assuming t<n/3.
On the Security of One Password Authenticated Key Exchange Protocol
In this paper the Security Evaluated Standardized Password Authenticated Key Exchange (SESPAKE) protocol is presented (this protocol is approved in the standardization system of the Russian Federation) and its cryptographic properties are analyzed. The SESPAKE protocol includes a key agreement step and a key authentication step. We define a new indistinguishability-based adversary model with a threat of false authentication, which extends the original indistinguishability-based model to the case of protocols with an authentication step and without key diversification. We prove the protocol's security under two types of threats: the classic threat of distinguishing a generated session key from a random string, and the threat of false authentication. This protocol is the first password authenticated key exchange (PAKE) protocol without key diversification for whose full version a security proof has been obtained. The paper also contains a brief review of the known results dedicated to the analysis of cryptographic properties of PAKE protocols.
A Bounded-Space Near-Optimal Key Enumeration Algorithm for Multi-Dimensional Side-Channel Attacks
Enumeration of cryptographic keys in order of likelihood based on side-channel leakages has a significant importance in cryptanalysis. Previous algorithms enumerate the keys in optimal order; however, their space complexity is $\Omega(n^{d/2})$ when there are $d$ subkeys and $n$ candidate values per subkey. We propose a new key enumeration algorithm that has a space complexity bounded by $O(d^2 w+dn)$, where $w$ is a design parameter, which allows the enumeration of many more keys without exceeding the available space. The trade-off is that the enumeration order is only near-optimal, with a bounded ratio between optimal and near-optimal ranks.
Before presenting our algorithm we provide bounds on the guessing entropy of the full key in terms of the easy-to-compute guessing entropies of the individual subkeys. We use these results to quantify the near-optimality of our algorithm's ranking, and to bound its guessing entropy.
We evaluated our algorithm through extensive simulations. We show that, in realistic SCA scenarios, our algorithm continues its near-optimal-order enumeration far beyond the rank at which the optimal algorithm fails due to insufficient memory. Our simulations utilize a new model of the true rank distribution, based on long-tail Pareto distributions, that is validated by empirical data and may be of independent interest.
Constant-round Leakage-resilient Zero-knowledge from Collision Resistance
In this paper, we present a constant-round leakage-resilient zero-knowledge argument system for NP under the assumption that a collision-resistant hash function family exists. That is, using collision-resistant hash functions, we construct a constant-round zero-knowledge argument system that has the following zero-knowledge property: even against any cheating verifier that obtains an arbitrary amount of leakage on the prover's internal secret state, a simulator can simulate the verifier's view by obtaining the same amount of leakage on the witness. Previously, leakage-resilient zero-knowledge proofs/arguments for NP were constructed only under a relaxed security definition (Garg, Jain, and Sahai, CRYPTO'11) or under the DDH assumption (Pandey, TCC'14).
Our leakage-resilient zero-knowledge argument system satisfies an additional property that it is simultaneously leakage-resilient zero-knowledge, meaning that both zero-knowledgeness and soundness hold in the presence of leakage.
On Cryptographic Anonymity and Unpredictability in Secret Sharing
We revisit the notions of cryptographic anonymity and share unpredictability in secret sharing, introducing more systematic and fine-grained definitions. We derive tight negative and positive results characterizing access structures with respect to the generalized definitions.
Degenerate Curve Attacks
Invalid curve attacks are a well-known class of attacks against implementations of elliptic curve cryptosystems, in which an adversary tricks the cryptographic device into carrying out scalar multiplication not on the expected secure curve, but on some other, weaker elliptic curve of his choosing. In their original form, however, these attacks only affect elliptic curve implementations using addition and doubling formulas that are independent of at least one of the curve parameters. This property is typically satisfied for elliptic curves in Weierstrass form but not for newer models that have gained increasing popularity in recent years, like Edwards and twisted Edwards curves. It has therefore been suggested (e.g. in the original paper on invalid curve attacks) that such alternate models could protect against those attacks.
In this paper, we dispel that belief and present the first attack of this nature against (twisted) Edwards curves, Jacobi quartics, Jacobi intersections and more. Our attack differs from invalid curve attacks proper in that the cryptographic device is tricked into carrying out a computation not on another elliptic curve, but on a group isomorphic to the multiplicative group of the underlying base field. This often makes it easy to recover the secret scalar with a single invalid computation.
We also show how our result can be used constructively, especially on curves over random base fields, as a fault attack countermeasure similar to Shamir's trick.
Extend FHEW to General Case
In the context of FHE, the refresh process is slightly different from the bootstrapping process. Bootstrapping means that a scheme homomorphically evaluates its own decryption process, while refreshing means using another scheme, usually of larger scale, to perform the decryption process. At EUROCRYPT 2015, Ducas and Micciancio proposed an FHE scheme, called DM14, which can perform the refresh process in less than a second; however, the scheme only supports a binary plaintext space, which is cumbersome for many applications. Extending DM14 to a larger plaintext space has remained an open problem. In order to solve it, we improve the msbExtract process to support a larger base, by mapping elements to positions. As a result, we construct an efficient FHE scheme with a large plaintext space and a quick refresh process. We implemented our scheme and compared its performance with DM14: the running time is almost the same when the plaintext space is extended from 2 to 8.
When are Identification Protocols with Sparse Challenges Safe? The Case of the Coskun and Herley Attack
Cryptographic identification protocols enable a prover to prove its identity to a verifier. A subclass of such protocols are shared-secret challenge-response identification protocols, in which the prover and the verifier share the same secret and the prover has to respond to a series of challenges from the verifier. When the prover is a human, as opposed to a machine, such protocols are called human identification protocols. To make human identification protocols usable, protocol designers have proposed different techniques in the literature. One such technique is to make the challenges sparse, in the sense that only a subset of the shared secret is used to compute the response to each challenge. Coskun and Herley demonstrated a generic attack on shared-secret challenge-response type identification protocols which use sparse challenges. They showed that if the subset of the secret used is too small, an eavesdropper can learn the secret after observing a small number of challenge-response pairs. Unfortunately, from their results, it is not possible to find the safe number of challenge-response pairs a sparse-challenge protocol can be used for, without actually implementing the attack on the protocol and weeding out unsafe parameter sizes. Such a task can be time-consuming and computationally infeasible if the subset of the secret used is not small enough. In this work, we give an analytical estimate of the number of challenge-response pairs required by an eavesdropper to find the secret through the Coskun and Herley attack. Alongside this number, we also give an analytical estimate of the time complexity of the attack. Our results will help protocol designers to choose safe parameter sizes for identification protocols that employ sparse challenges.
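To make the "sparse challenge" notion concrete, here is a hypothetical toy protocol in Python of the kind such attacks target; the parameters and the response rule are our own invented illustration, not a protocol from Coskun and Herley:

import secrets

# Toy sparse-challenge protocol: the secret is a vector of digits; each
# challenge names K positions and the response is the sum of the selected
# digits mod 10. Every observed (challenge, response) pair leaks
# information about the secret, which is what the attack exploits.
N, K = 30, 4                                   # secret length, sparsity
secret = [secrets.randbelow(10) for _ in range(N)]

def challenge():
    return secrets.SystemRandom().sample(range(N), K)

def respond(sec, chal):
    return sum(sec[i] for i in chal) % 10

chal = challenge()
resp = respond(secret, chal)   # an eavesdropper observes (chal, resp)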
Indistinguishable Proofs of Work or Knowledge
We introduce a new class of protocols called Proofs of Work or Knowledge (PoWorKs). In a PoWorK, a prover can convince a verifier that she has either performed work or that she possesses knowledge of a witness to a public statement without the verifier being able to distinguish which of the two has taken place.
We formalize PoWorK in terms of three properties, completeness, f-soundness and indistinguishability (where f is a function that determines the tightness of the proof of work aspect), and present a construction that transforms 3-move HVZK protocols into 3-move public-coin PoWorKs. To formalize the work aspect in a PoWorK protocol, we define cryptographic puzzles that adhere to certain uniformity conditions, which may also be of independent interest. We instantiate our puzzles in the random oracle (RO) model as well as via constructing ``dense'' versions of suitably hard one-way functions.
We then showcase PoWorK protocols by presenting a number of applications. We first show how non-interactive PoWorKs can be used to reduce spam email by forcing users sending an e-mail to either prove to the mail server that they are approved contacts of the recipient or to perform computational work. As opposed to previous approaches that applied proofs of work to this problem, our proposal of using PoWorKs is privacy-preserving, as it hides the list of the receiver's approved contacts from the mail server. Our second application shows how PoWorKs can be used to compose cryptocurrencies that are based on proofs of work (``Bitcoin-like'') with cryptocurrencies that are based on knowledge relations (these include cryptocurrencies that are based on ``proof of stake'', and others). The resulting PoWorK-based cryptocurrency inherits the robustness properties of the underlying two systems, while PoWorK-indistinguishability ensures a uniform population of miners. Finally, we show that PoWorK protocols imply straight-line quasi-polynomial simulatable arguments of knowledge; based on our construction, we obtain an efficient straight-line concurrent 3-move statistically quasi-polynomial simulatable argument of knowledge.
Cryptanalysis of a public key cryptosystem based on Diophantine equations via weighted LLL reduction
Post-quantum cryptography now plays a central role in cryptography. Many candidate post-quantum cryptosystems (PQC) have already been proposed, but they require public keys of large sizes. Constructing PQC with public keys of small sizes is strongly desired. In [Oku15], Okumura proposed a public key cryptosystem based on the difficulty of solving Diophantine equations of degree increasing type (DEC for short). DEC was proposed as an analogue of the Algebraic Surface Cryptosystem [AGM09] and has been expected to avoid the analogues of all attacks against ASC (and the previous versions of ASC). Moreover, DEC has been expected to be a candidate for PQC achieving high security with public keys of small sizes, e.g., about 1,200 bits for 128-bit security.
In this paper, we propose a polynomial time attack against DEC. We show that the security of DEC depends on the difficulty of finding special (relatively) short vectors in some lattices obtained from a public key and a ciphertext. The most important target vector in our attack is not necessarily a shortest vector in a lattice of low rank, but a vector in which only some entries are relatively small. In our attack, the LLL algorithm with respect to well-known norms such as the $p$-norms ($1 \leq p \leq \infty$) does not seem to work well for finding such vectors. The most technical point of our method is to heuristically find a special norm, which we call a weighted norm, such that the most important target vector becomes a (nearly) shortest vector in a lattice of low rank. We call the LLL algorithm with respect to a weighted norm the ``weighted LLL algorithm'' in this paper. Our experimental results, obtained on a standard PC with Magma, suggest that our attack via the weighted LLL algorithm can break the one-wayness of DEC for the 128-bit security level proposed in [Oku15] with sufficiently high probability.
Privacy protection in electronic education based on polymorphic pseudonymization
In [13], the Dutch government proposes an identity scheme supporting personal data exchange of pupils with private e-textbook publishers.
This design propagates sharing personal numbers of pupils among private parties, violating the data minimisation principle in privacy laws. We describe a privacy friendly alternative, giving pupils (and their parents) control over the exchange of their personal data.
Three generic forms based on homomorphic encryption are used as building blocks. These forms do not yield personal numbers, or even personal data from a legal perspective, and have strong unlinkability properties. Only if required does a school provide a party with a party-specific {\em pseudonym} identifying a pupil. For this, the school is provided an {\em encrypted pseudonym} by a central party, based on a {\em polymorphic pseudonym} formed by the school. Only the intended parties, not even the schools, have access to pseudonyms. Different publishers can send pupil test results to a school without being able to assess whether the pupils are identical.
We also describe support for privacy friendly attributes and user inspection as required by privacy laws.
Single Key Recovery Attacks on 9-round Kalyna-128/256 and Kalyna-256/512
The Kalyna block cipher was established as the Ukrainian encryption standard in June 2015. It was selected in a Ukrainian National Public Cryptographic Competition running from 2007 to 2010.
Kalyna supports block sizes and key lengths of 128, 256 and 512 bits. Denoting the variants of Kalyna as Kalyna-$b/k$, where $b$ denotes the block size and $k$ denotes the key length, the design specifies $k \in \{b, 2b\}$. In this work, we re-evaluate the security bound of some reduced-round Kalyna variants, specifically Kalyna-$128/256$ and Kalyna-$256/512$, against key recovery attacks in the single key model. We first construct new 6-round distinguishers and then use these distinguishers to demonstrate 9-round attacks on these Kalyna variants. These attacks improve upon the previous best 7-round attacks on the same variants.
Our 9-round attack on Kalyna-128/256 has data, time and memory complexity of $2^{105}$, $2^{245.83}$ and $2^{226.86}$ respectively. For our 9-round attack on Kalyna-256/512, the data/time/memory complexities are $2^{217}$, $2^{477.83}$ and $2^{443.45}$ respectively. The time and data complexities for Kalyna-256/512 reported in this work improve upon the previous best 7-round attack complexities on the same variant. The attacks presented in this work are currently the best on Kalyna. We apply the multiset attack, a variant of the meet-in-the-middle attack, to achieve these results.
Cryptoleq: A Heterogeneous Abstract Machine for Encrypted and Unencrypted Computation
The rapid expansion and increased popularity of cloud computing comes with no shortage of privacy concerns about outsourcing computation to semi-trusted parties. Leveraging the power of encryption, in this paper we introduce Cryptoleq: an abstract machine based on the concept of One Instruction Set Computer, capable of performing general-purpose computation on encrypted programs. The program operands are protected using the Paillier partially homomorphic cryptosystem, which supports addition on the encrypted domain. Full homomorphism over addition and multiplication, which is necessary for enabling general-purpose computation, is achieved by inventing a heuristically obfuscated software re-encryption module written using Cryptoleq instructions and blended into the executing program. Cryptoleq is heterogeneous, allowing mixing encrypted and unencrypted instruction operands in the same program memory space. Programming with Cryptoleq is facilitated using an enhanced assembly language that allows development of any advanced algorithm on encrypted datasets. In our evaluation, we compare Cryptoleq's performance against a popular fully homomorphic encryption library, and demonstrate correctness using a typical Private Information Retrieval problem.
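Since the encrypted operands rely on Paillier's additive homomorphism, a minimal Python sketch of that property may help; it uses toy primes and the common g = n + 1 simplification, and it is our own illustration, not Cryptoleq's re-encryption module:

import math, secrets

# Toy Paillier: E(m) = g^m * r^n mod n^2 with g = n + 1. Multiplying two
# ciphertexts adds the plaintexts: D(E(m1) * E(m2) mod n^2) = m1 + m2 mod n.
p, q = 1000003, 1000033              # toy primes; real keys are far larger
n = p * q
n2 = n * n
g = n + 1
lam = math.lcm(p - 1, q - 1)         # Python 3.9+
L = lambda x: (x - 1) // n
mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse (Python 3.8+)

def enc(m):
    r = secrets.randbelow(n - 1) + 1
    while math.gcd(r, n) != 1:
        r = secrets.randbelow(n - 1) + 1
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def dec(c):
    return (L(pow(c, lam, n2)) * mu) % n

c = (enc(20) * enc(22)) % n2         # addition on the encrypted domain
assert dec(c) == 42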
ECC on Your Fingertips: A Single Instruction Approach for Lightweight ECC Design in GF(p)
Lightweight implementation of Elliptic Curve Cryptography on FPGAs has been a popular research topic due to the boom of ubiquitous computing. In this paper we propose a novel single-instruction-based ultra-light ECC crypto-processor coupled with dedicated hard-IPs of the FPGAs. We show that by using the proposed single instruction framework and the available block RAMs and DSPs of FPGAs, we can design an ECC crypto-processor for NIST curve P-256 requiring only 81 and 72 logic slices on Virtex-5 and Spartan-6 devices respectively. To the best of our knowledge, this is the first implementation of ECC which requires less than 100 slices on any FPGA device family.
Twisted Polynomials and Forgery Attacks on GCM
Polynomial hashing as an instantiation of universal hashing is a widely employed method for the construction of MACs and authenticated encryption (AE) schemes, the ubiquitous GCM being a prominent example. It is also used in recent AE proposals within the CAESAR competition which aim at providing nonce misuse resistance, such as POET. The algebraic structure of polynomial hashing has given rise to security concerns: at CRYPTO 2008, Handschuh and Preneel describe key recovery attacks, and at FSE 2013, Procter and Cid provide a comprehensive framework for forgery attacks. Both approaches rely heavily on the ability to construct \emph{forgery polynomials} having disjoint sets of roots, with many roots (``weak keys'') each. Constructing such polynomials beyond naïve approaches is crucial for these attacks, but still an open problem.
In this paper, we comprehensively address this issue. We propose to use \emph{twisted polynomials} from Ore rings as forgery polynomials. We show how to construct sparse forgery polynomials with full control over the sets of roots. We also achieve complete and explicit disjoint coverage of the key space by these polynomials. We furthermore leverage this new construction in an improved key recovery algorithm.
As cryptanalytic applications of our twisted polynomials, we develop the first universal forgery attacks on GCM in the weak-key model that do not require nonce reuse. Moreover, we present universal weak-key forgery attacks for the recently proposed nonce-misuse resistant AE schemes POET, Julius, and COBRA.
Chosen-Ciphertext Security from Subset Sum
We construct a public-key encryption (PKE) scheme whose security is polynomial-time equivalent to the hardness of the Subset Sum problem. Our scheme achieves the standard notion of indistinguishability against chosen-ciphertext attacks (IND-CCA) and can be used to encrypt messages of arbitrary polynomial length, improving upon a previous construction by Lyubashevsky, Palacio, and Segev (TCC 2010) which achieved only the weaker notion of semantic security (IND-CPA) and whose concrete security decreases with the length of the message being encrypted.
At the core of our construction is a trapdoor technique which originates in the work of Micciancio and Peikert (Eurocrypt 2012).
On the Asymptotic Complexity of Solving LWE
We provide for the first time an asymptotic comparison of all known algorithms for the search version of the Learning with Errors (LWE) problem. This includes an analysis of several lattice-based approaches as well as the combinatorial BKW algorithm. Our analysis of the lattice-based approaches defines a general framework, in which the algorithms of Babai, Lindner-Peikert and several pruning strategies appear as special cases. We show that within this framework, all lattice algorithms achieve the same asymptotic complexity.
For the BKW algorithm, we present a refined analysis for the case of only a polynomial number of samples via amplification, which allows for a fair comparison with lattice-based approaches. Somewhat surprisingly, such a small number of samples does not make the asymptotic complexity significantly inferior, but only affects the constant in the exponent.
As the main result we obtain that both lattice-based techniques and BKW with a polynomial number of samples achieve running time $2^{O(n)}$ for $n$-dimensional LWE, where we make the constant hidden in the big-$O$ notation explicit as a simple and easy-to-handle function of all LWE parameters. In the lattice case this function also depends on the time to compute a BKZ lattice basis with block size $\Theta(n)$. Thus, from a theoretical perspective our analysis reveals how LWE's complexity changes as a function of the LWE parameters, and from a practical perspective our analysis is a useful tool to choose LWE parameters resistant to all known attacks.
Last updated: 2017-01-08
Unclonable encryption revisited ($4 \times 2 = 8$)
Withdrawn. Superseded by preprint 2016/1122.
Two-Round Man-in-the-Middle Security from LPN
Secret-key authentication protocols have recently received a considerable amount of attention, and a long line of research has been devoted to devising efficient protocols with security based on the hardness of the learning parity with noise (LPN) problem, with the goal of achieving low communication and round complexities, as well as the highest possible security guarantees.
In this paper, we construct 2-round authentication protocols that are secure against sequential man-in-the-middle (MIM) attacks with tight reductions to LPN, Field-LPN, or other problems. The best prior protocols either had loose reductions and required 3 rounds (Lyubashevsky and Masny, CRYPTO'13) or had a much larger key (Kiltz et al., EUROCRYPT'11 and Dodis et al., EUROCRYPT'12). Our constructions follow from a new generic deterministic and round-preserving transformation that enhances actively-secure protocols of a special form to be sequentially MIM-secure, while only adding a limited amount of key material and computation.
Robust Pseudo-Random Number Generators with Input Secure Against Side-Channel Attacks
A pseudo-random number generator (PRNG) is a deterministic algorithm that produces numbers whose distribution is indistinguishable from uniform. In this paper, we extend the formal model of PRNG with input defined by Dodis et al. at CCS 2013 to deal with partial leakage of sensitive information. The resulting security notion, termed leakage-resilient robust PRNG with input, encompasses all the previous notions, but also allows the adversary to continuously get some leakage on the manipulated data. Dodis et al. also proposed an efficient construction, based on simple operations in a finite field and a classical deterministic pseudo-random generator PRG. Here, we analyze this construction with respect to our new stronger security model, and prove that with a stronger PRG, it also resists leakage. We show that this stronger PRG can be obtained by tweaking some existing constructions based on AES. We also propose a new instantiation which may be better in specific cases. Finally, we show that the resulting scheme remains quite efficient in spite of its new security properties. It can thus be recommended in contexts where side-channel resistance is required.
Last updated: 2017-04-17
Area-Time Efficient Hardware Implementation of Elliptic Curve Cryptosystem
Uncategorized
Uncategorized
The strength of ECC lies in the hardness of the elliptic curve discrete logarithm problem (ECDLP) and the high level of security achieved with significantly smaller keys. Using smaller key sizes is thus a gain in terms of speed, power, bandwidth, and storage. Point multiplication is the most common operation in ECC, and the most used method to compute it is the Montgomery algorithm. This paper describes an area-efficient hardware implementation of Elliptic Curve Cryptography (ECC) over $GF(2^m)$. We used the Montgomery modular multiplication method for a low-cost implementation. Then, to accelerate elliptic curve point multiplication, we first adopted projective coordinates and then reduced the number of multiplication blocks used, gaining in both area occupation and execution time. We detail our optimized hardware architecture and show that it outperforms existing ones regarding area, power, and energy consumption. Our hardware implementation, on a Xilinx Virtex 5 ML 50 FPGA, uses only 9670 slices, achieves a maximum frequency of 221 MHz, and computes a scalar multiplication in only 2.58 $\mu$s. FPGA implementations generally represent the first step towards faster ASIC implementations. Further, we implemented our design in an ASIC CMOS 45 nm technology; it uses 0.121 $mm^2$ of cell area, runs at a frequency of 990 MHz and consumes 39 mW.
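For readers unfamiliar with it, the Montgomery algorithm mentioned above can be sketched generically in Python; the group operations are left abstract, so this shows only the generic ladder structure, not the paper's hardware datapath:

# Generic Montgomery ladder for computing k*P. Both branches perform one
# add and one double per bit, a regular pattern that suits hardware
# pipelines and resists simple power analysis.
def montgomery_ladder(k, P, add, double, identity):
    R0, R1 = identity, P                  # invariant: R1 = R0 + P
    for i in reversed(range(k.bit_length())):
        if (k >> i) & 1:
            R0, R1 = add(R0, R1), double(R1)
        else:
            R1, R0 = add(R0, R1), double(R0)
    return R0

# Sanity check with integers standing in for curve points:
assert montgomery_ladder(25, 7, lambda a, b: a + b, lambda a: 2 * a, 0) == 175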
Two-faced processes and existence of RNG with proven properties
Random and pseudorandom number generators (RNG and PRNG) are used for many purposes, including cryptographic, modeling and simulation applications. For such applications a generated bit sequence should mimic true randomness, i.e., by definition, the sequence could be interpreted as the result of flips of a ``fair'' coin with sides labeled ``0'' and ``1''. It is known that the Shannon entropy of this process is 1 per letter, whereas for any other stationary process with binary alphabet the Shannon entropy is strictly less than 1. On the other hand, the entropy of the PRNG output should be much less than 1 bit (per letter), yet the output sequence should look truly random. We describe random processes for which these properties, contradictory at first glance, both hold.
More precisely, it is shown that there exist binary-alphabet random processes whose entropy is less than 1 bit (per letter), but in which the frequency of occurrences of any word $u$ goes to $2^{-|u|}$, where $|u|$ is the length of $u$. In turn, this makes it possible to construct RNGs and PRNGs which possess theoretical guarantees. This, in turn, is important for applications such as those in cryptography.
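For reference, the two benchmark quantities above are the per-letter Shannon entropy and the limiting empirical word frequency (the notation $\nu_u$ for the latter is ours):
$$H(p) = -p \log_2 p - (1-p) \log_2 (1-p) = 1 \ \text{ iff } p = \tfrac{1}{2}, \qquad \lim_{n \to \infty} \nu_u(x_1 \dots x_n) = 2^{-|u|} \ \text{ for every word } u,$$
where $\nu_u(x_1 \dots x_n)$ denotes the fraction of positions of $x_1 \dots x_n$ at which the word $u$ occurs.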
Non-Transferable Proxy Re-Encryption
Proxy re-encryption (PRE) allows a semi-trusted proxy to transform a ciphertext for Alice into a ciphertext of the same message for Bob. The traditional security notion of PRE focuses on preventing the proxy with the re-encryption key learning anything about the encrypted messages. However, such a basic security requirement is clearly not enough for many scenarios where the proxy can collude with Bob. A desirable security goal is therefore to prevent a malicious proxy colluding with Bob to re-delegate Alice’s decryption right. In 2005, Ateniese, Fu, Green and Hohenberger first proposed this intriguing problem called non-transferability, in the sense that the only way for Bob to transfer Alice’s decryption capability is to expose his own secret key. It captures the notion that Bob cannot collude with the proxy and transfer Alice’s decryption right without compromising his own decryption capability. However, over the last decade, no solutions have achieved this property.
In this paper, we positively resolve this open problem. In particular, we give the first construction of non-transferable proxy re-encryption where the attacker is allowed to obtain one pair of keys consisting of Bob’s secret key and the corresponding re-encryption key. Using indistinguishability obfuscation and k-unforgeable authentication as main tools, our scheme is provably secure in the standard model. The essential idea behind our approach is to allow Bob’s secret key to be invoked in the process of decrypting Alice’s ciphertext while hiding the fact that only Bob could decrypt it by the obfuscated program. In addition, we also show a negative result: a CPA-secure proxy re-encryption scheme with the “error-freeness” property cannot be non-transferable.
Simpler, Faster, and More Robust T-test Based Leakage Detection
The TVLA procedure using the t-test has become a popular leakage detection method. To protect against environmental fluctuation in laboratory measurements, we propose a paired t-test to improve the standard procedure. We take advantage of the statistical matched-pairs design to remove the effect of environmental noise in leakage detection. Higher-order leakage detection is further improved with a moving-average method. We compare the proposed test with the standard t-test on synthetic data and physical measurements. Our results show that the proposed tests are robust to environmental noise.
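A minimal Python sketch of the contrast between the standard (unpaired) t-test and a paired t-test on synthetic traces that share an environmental drift; the drift model and parameters are our own illustration:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 10000
drift = np.cumsum(rng.normal(0.0, 0.05, n))         # shared environmental drift
fixed_cls  = 1.0 + drift + rng.normal(0.0, 1.0, n)  # "fixed" class traces
random_cls = 1.1 + drift + rng.normal(0.0, 1.0, n)  # "random" class, small leak

t_unpaired, _ = stats.ttest_ind(fixed_cls, random_cls, equal_var=False)
t_paired, _   = stats.ttest_rel(fixed_cls, random_cls)  # pairing cancels drift

# TVLA convention: |t| > 4.5 is taken as evidence of leakage.
print(abs(t_unpaired), abs(t_paired))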
Simple Security Definitions for and Constructions of 0-RTT Key Exchange
Zero Round-Trip Time (0-RTT) key exchange protocols allow for the transmission of cryptographically protected payload data without requiring the prior exchange of messages of a cryptographic key exchange protocol, while providing perfect forward secrecy. The 0-RTT KE concept was first realized by Google in the QUIC Crypto protocol, and a 0-RTT mode has been intensively discussed for inclusion in TLS 1.3.
In 0-RTT KE two keys are generated, typically using a Diffie-Hellman key exchange. The first key is a combination of an ephemeral client share and a long-lived server share. The second key is computed using an ephemeral server share and the same ephemeral client share.
In this paper, we propose simple security models, which capture the intuition behind known 0-RTT KE protocols; namely, that the first (respectively, second) key should remain indistinguishable from a random value, even if the second (respectively, first) key is revealed. We call this property strong key independence. We also give the first constructions of 0-RTT KE which are provably secure in these models, based on the generic assumption that secure non-interactive key exchange (NIKE) exists.
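Schematically, the two-key structure described above can be written as a toy Diffie-Hellman computation in Python; the group parameters are a toy illustration of our own, and real protocols use elliptic-curve groups and proper key derivation:

import secrets

# Toy 0-RTT key schedule over Z_p^*: k1 combines the client ephemeral with
# the server long-lived share; k2 combines it with the server ephemeral.
p = 2**127 - 1          # toy prime modulus (illustration only)
g = 3

s = secrets.randbelow(p - 2) + 1   # server long-lived secret, share S = g^s
y = secrets.randbelow(p - 2) + 1   # server ephemeral secret,  share Y = g^y
c = secrets.randbelow(p - 2) + 1   # client ephemeral secret,  share C = g^c
S, Y, C = pow(g, s, p), pow(g, y, p), pow(g, c, p)

k1_client, k1_server = pow(S, c, p), pow(C, s, p)   # first key (0-RTT data)
k2_client, k2_server = pow(Y, c, p), pow(C, y, p)   # second key (session)
assert k1_client == k1_server and k2_client == k2_server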
Footprint scheduling for Dining-Cryptographer networks
In many communication scenarios it is not sufficient to protect only the content of the communication; it is necessary to also protect the identity of the communicating parties. Various protocols and technologies have been proposed to offer such protection, for example, anonymous proxies, mix-networks, or onion routing. The protocol that offers the strongest anonymity guarantees, namely unconditional sender and recipient untraceability, is the Dining Cryptographers (DC) protocol proposed by Chaum in 1988. Unfortunately the strong anonymity guarantees come at the price of limited performance and scalability, and multiple issues that make deployment complicated in practice.
In this paper we address one of those issues, namely slot reservation. We propose footprint scheduling as a new technique for participants to negotiate communication slots without losing anonymity and at the same time hiding the number of actively sending users. Footprint scheduling is simple, efficient and yields excellent results, in particular in very dynamic networks with a frequently changing set of participants and a frequently changing activity rate.
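For background, here is a one-round Python toy of Chaum's DC-net for three participants (our own minimal illustration); the slot-reservation problem that footprint scheduling addresses arises when several would-be senders must share such rounds:

import secrets
from functools import reduce
from operator import xor

# One DC-net round: each pair of participants shares a random key. Every
# participant broadcasts the XOR of its shared keys; the sender also XORs
# in its message. XORing all broadcasts cancels every key and reveals the
# message without revealing who sent it.
N_PARTIES, MESSAGE = 3, 0b10110
keys = {(i, j): secrets.randbits(8)
        for i in range(N_PARTIES) for j in range(i + 1, N_PARTIES)}

def broadcast(i, msg=0):
    shared = [k for (a, b), k in keys.items() if i in (a, b)]
    return reduce(xor, shared, msg)

rounds = [broadcast(0, MESSAGE), broadcast(1), broadcast(2)]
assert reduce(xor, rounds) == MESSAGE   # every key cancels out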
Choosing and generating parameters for low level pairing implementation on BN curves
Many hardware and software pairing implementations can be found in the literature and some pairing friendly parameters are given. However, depending on the situation, it could be useful to generate other nice parameters (e.g. resistance to subgroup attacks, larger security levels, database of pairing friendly curves). The main purpose of this paper is to describe explicitly and exhaustively what should be done to generate the best possible parameters and to make the best choices depending on the implementation context (in terms of pairing algorithm, ways to build the tower field, $\mathbb{F}_{p^{12}}$ arithmetic, groups involved and their generators, system of coordinates).
We focus on low-level implementations, assuming that $\mathbb{F}_p$ additions have a significant cost compared to other $\mathbb{F}_p$ operations. However, the results obtained remain valid in the case where $\mathbb{F}_p$ additions can be neglected. We also explain why the best choice of the polynomials defining the tower field $\mathbb{F}_{p^{12}}$ depends only on the value of the BN parameter $u$ modulo small integers like $12$, as a nice application of old elementary arithmetic results. Moreover, we use this opportunity to give some new improvements on $\mathbb{F}_{p^{12}}$ arithmetic (in a pairing context) in terms of $\mathbb{F}_p$-additions, saving around $10\%$ of them depending on the context.
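As a rough illustration of the role played by $u$, the following sketch (ours, not the paper's code) computes the BN curve parameters from $u$ and prints the residues that steer such choices; the value of $u$ below is a commonly used BN254-style parameter:

    # BN family: p(u) = 36u^4 + 36u^3 + 24u^2 + 6u + 1, t(u) = 6u^2 + 1, r = p + 1 - t.
    def bn_params(u):
        p = 36*u**4 + 36*u**3 + 24*u**2 + 6*u + 1  # field characteristic
        t = 6*u**2 + 1                             # trace of Frobenius
        r = p + 1 - t                              # (prime) group order
        return p, r, t

    u = -(2**62 + 2**55 + 1)  # a widely used BN parameter
    p, r, t = bn_params(u)
    print(u % 12, p % 4, p % 6)  # residues relevant to the tower-field choice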
Log Analysis of Estonian Internet Voting 2013--2015
In this report we describe our efforts in analysing log files produced by the Estonian i-voting system in the KOV2013, EP2014 and RK2015 elections in combination with other information available, so as to detect attacks against the i-voting system, detect system malfunctions and study voter behaviour.
Quantum Security of the Fujisaki-Okamoto and OAEP Transforms
In this paper, we present a hybrid encryption scheme that is chosen
ciphertext secure in the quantum random oracle model. Our scheme is a
combination of an asymmetric and a symmetric encryption scheme that are
secure in a weak sense. It is a slight modification of the Fujisaki-Okamoto transform that is secure against classical adversaries. In addition, we modify the OAEP-cryptosystem and prove its security in the quantum random oracle model based on the existence of a partial-domain one-way injective function secure against quantum adversaries.
Fast Optimistically Fair Cut-and-Choose 2PC
Secure two-party computation (2PC) is a well-studied problem with many real-world applications. Due to Cleve's result on the general impossibility of fairness, however, the state-of-the-art solutions only provide security with abort. We investigate fairness for 2PC in the presence of a trusted Arbiter, in an optimistic setting where the Arbiter is not involved if the parties act fairly. Existing fair solutions in this setting are by far less efficient than the fastest unfair 2PC.
We close this efficiency gap by designing protocols for fair 2PC with covert and malicious security that have competitive performance with the state-of-the-art unfair constructions. In particular, our protocols only require the exchange of a few extra messages with sizes that depend only on the output length; the Arbiter's load is independent of the computation size; and a malicious Arbiter can only break fairness, but not covert/malicious security, even if he colludes with a party. Finally, our solutions are designed to work with the state-of-the-art optimizations applicable to garbled circuits and cut-and-choose 2PC, such as free-XOR, half-gates, and the cheating-recovery paradigm.
Two Kinds of Biclique Attacks on Lightweight Block Cipher PRINCE
Inspired by [10], and using better differential characteristics in the biclique construction, we give another balanced biclique attack on full-round PRINCE with lower complexity. Our balanced biclique attack has a computational complexity of 2^62.67 and a data complexity of 2^32. Furthermore, we give the first star-based biclique attack on full-round PRINCE. It has a computational complexity of 2^63.02, and the required data can be reduced to only a single plaintext-ciphertext pair; this is the optimal data complexity among existing full-round attacks on PRINCE.
Comment on Demonstrations of Shor's Algorithm in the Past Decades
We remark that the experimental demonstrations of Shor's algorithm in the past decades are falsely claimed and flawed, because they used too few qubits in the first quantum register to accomplish the Continued Fraction Expansion step of Shor's algorithm. Worse, the number of qubits used in some experiments is too small to represent all residues modulo n, which means that the number n cannot truly be involved in the related computations.
Simple Photonic Emission Attack with Reduced Data Complexity
This work proposes substantial algorithmic enhancements to the SPEA attack of Schlosser et al. by adding cryptographic post-processing and improved signal processing to the photonic measurement phase. Our improved approach provides three crucial benefits: (1) For some SBox/SRAM configurations the original SPEA method is unable to identify a unique key and terminates with up to 2^48 key candidates; using our new solver we are able to find the correct key regardless of the respective SBox/SRAM configuration. (2) Our methods reduce the number of required (complex photonic) measurements by an order of magnitude, thereby shortening the duration of the attack significantly. (3) Due to the unavailability of the attack equipment of Schlosser et al., we additionally developed a novel Photonic Emission Simulator which we matched against the real equipment of the original SPEA work. With this simulator we were able to verify our enhanced SPEA attack by a full AES key recovery which uses only a small number of photonic measurements.
Deniable Functional Encryption
Deniable encryption, first introduced by Canetti et al. (CRYPTO 1997), allows a sender and/or receiver of encrypted communication to produce fake but authentic-looking coins and/or secret keys that ``open'' the communication to a different message. Here we initiate its study for the more general case of functional encryption (FE), as introduced by Boneh et al. (TCC 2011), wherein a receiver in possession of a key k can compute from any encryption of a message x the value F(k,x) according to the scheme's functionality F. Our results are summarized as follows:
We put forth and motivate the concept of deniable FE, for which we consider two models. In the first model, as previously considered by O'Neill et al. (CRYPTO 2011) in the case of identity-based encryption, a receiver gets assistance from the master authority to generate a fake secret key. In the second model, there are ``normal'' and ``deniable'' secret keys, and a receiver in possession of a deniable secret key can produce a fake but authentic-looking normal key on its own. This parallels the ``multi-distributional'' model of deniability previously considered for public-key encryption.
In the first model, we show that any FE scheme for the general circuit functionality (as several recent candidate constructions achieve) can be converted into an FE scheme having receiver deniability, without introducing any additional assumptions. In addition, we show an efficient receiver-deniable FE for Boolean formulae from bilinear maps. In the second (multi-distributional) model, we show a specific FE scheme for the general circuit functionality having receiver deniability. This result additionally assumes differing-inputs obfuscation and relies on a new technique we call delayed trapdoor circuits. To our knowledge, a scheme in the multi-distributional model was not previously known even in the simpler case of identity-based encryption.
Finally, we show that receiver deniability for FE implies some form of simulation security, further motivating study of the latter and implying optimality of our results.
Secret, verifiable auctions from elections
Auctions and elections are seemingly
disjoint.
Nevertheless, similar cryptographic primitives are used
in both domains. For instance, mixnets, homomorphic encryption and trapdoor
bit-commitments have been used by state-of-the-art schemes in both domains.
These developments have appeared independently. For example, the adoption of
mixnets in elections preceded a similar adoption in auctions by over two decades.
In this paper, we demonstrate a relation between auctions and elections:
we present a generic construction for auctions from election schemes. Moreover,
we show that the construction guarantees secrecy and verifiability,
assuming the underlying election scheme satisfies analogous security properties.
We demonstrate the applicability of our work by deriving auction schemes from
the Helios family of election schemes.
Our results advance the unification of auctions and elections, thereby facilitating the progression of both domains.
The graph of minimal distances of bent functions and its properties
A notion of the graph of minimal distances of bent functions is introduced. It is an undirected graph ($V$, $E$) where $V$ is the set of all bent functions in $2k$ variables and $(f, g) \in E$ if the Hamming distance between $f$ and $g$ is equal to $2^k$ (it is the minimal possible distance between two different bent functions). The maximum degree of the graph is obtained and it is shown that all its vertices of maximum degree are quadratic. It is proven that a subgraph of the graph induced by all functions affinely equivalent to Maiorana-McFarland bent functions is connected.
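For small $k$ these definitions are easy to check computationally. The sketch below (ours) verifies bentness via the Walsh-Hadamard spectrum and exhibits two Maiorana-McFarland bent functions at distance exactly $2^k$, i.e. an edge of the graph:

    def walsh(f, n):
        # Walsh-Hadamard spectrum of a Boolean function given by its truth table.
        return [sum((-1) ** (f[x] ^ (bin(a & x).count('1') % 2))
                    for x in range(2 ** n)) for a in range(2 ** n)]

    def is_bent(f, n):
        # Bent iff the whole spectrum has absolute value 2^(n/2).
        return all(abs(w) == 2 ** (n // 2) for w in walsh(f, n))

    k = 2
    n = 2 * k

    def ip(x, y):  # inner product of k-bit vectors
        return bin(x & y).count('1') % 2

    # Maiorana-McFarland: f(x, y) = <x, y> and g(x, y) = <x, y> + [y == 0];
    # g flips f on exactly the 2^k points with y == 0.
    f = [ip(x, y) for y in range(2 ** k) for x in range(2 ** k)]
    g = [ip(x, y) ^ (1 if y == 0 else 0) for y in range(2 ** k) for x in range(2 ** k)]

    assert is_bent(f, n) and is_bent(g, n)
    print(sum(a ^ b for a, b in zip(f, g)))  # 4 == 2^k, so (f, g) is an edge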
CCA Security for Self-Updatable Encryption: Protecting Cloud Data When Clients Read/Write Ciphertexts
Self-updatable encryption (SUE) is a new kind of public-key encryption, motivated by cloud computing, which enables anyone (i.e. a cloud server with no access to private keys) to update a past ciphertext to a future ciphertext by using a public key. The main application of SUE is revocable-storage attribute-based encryption (RS-ABE), which provides efficient and secure access control to encrypted data stored in cloud storage. In this setting, there is a new threat: a revoked user can still access past ciphertexts given to him by a storage server. RS-ABE solves this problem by combining user revocation and ciphertext updating functionalities. The mechanism was designed with semantic security (CPA).
Here, we propose the first SUE and RS-ABE schemes secure against a relevant form of CCA, which allows attackers to submit ciphertexts to decryption servers. Because some ciphertexts are easily derived from others, we employ a different notion of CCA which avoids easily related challenge messages (we note that this type of idea was employed in other contexts before). Specifically, we define "time extended challenge" (TEC) CCA security for SUE, which excludes ciphertexts that are easily derived from the challenge (over time periods) from being queried on (namely, once a challenge is decided by an adversary, no easy modification of this challenge to future and past time periods is allowed to be queried upon). We then propose an efficient SUE scheme with such CCA security, and we also define similar CCA security for RS-ABE and present an RS-ABE scheme with this CCA security.
A Star-based Independent Biclique Attack on Full Rounds SQUARE
SQUARE is an iterated block cipher proposed by Daemen et al. at FSE 1997. Inspired by Bogdanov et al.'s recent work [12], we first present an improved biclique attack, i.e. a star-based independent biclique attack, on full-round SQUARE. We construct a one-round star-based independent biclique for the initial round, and utilize matching-with-precomputation techniques to recover the whole key from the remaining rounds. The computational complexity of our attack is about $2^{126.17}$ encryptions, and the required data can be reduced to a single plaintext-ciphertext pair. To the best of our knowledge, our attack achieves the best computational and data complexity among biclique attacks on full-round SQUARE.
Heuristic Tool for Linear Cryptanalysis with Applications to CAESAR Candidates
Differential and linear cryptanalysis are general-purpose tools for analyzing various cryptographic primitives. Both techniques have in common that they rely on the existence of good differential or linear characteristics. The difficulty of finding such characteristics depends on the primitive. For instance, AES is designed to be resistant against differential and linear attacks and therefore provides upper bounds on the probability of possible linear characteristics. On the other hand, we have primitives like SHA-1, SHA-2, and Keccak, where finding good and useful characteristics is an open problem. This becomes particularly interesting when considering, for example, competitions like CAESAR. In such competitions, many cryptographic primitives are waiting for analysis. Without suitable automatic tools, this is a virtually infeasible job. In recent years, various tools have been introduced to search for characteristics. The majority of these only deal with differential characteristics. In this work, we present a heuristic search tool which is capable of finding linear characteristics even for primitives with a relatively large state, and without a strongly aligned structure. As a proof of concept, we apply the presented tool to the underlying permutations of the first-round CAESAR candidates Ascon, Icepole, Keyak, Minalpher and Proest.
A compression method for homomorphic ciphertexts
In this work we describe a message packing and unpacking method for homomorphic ciphertexts. Messages are packed into the coefficients of plaintext polynomials. We propose an unpacking procedure which makes it possible to obtain a ciphertext for each packed message. The packing and unpacking of ciphertexts offers a solution for reducing the transmission bottleneck in cloud-based applications, in particular when sending the results of homomorphic calculations. The results we obtain (packing ratio, unpacking time) are compared to existing packing methods based on trans-ciphering.
Symmetric and Dual PRFs from Standard Assumptions: A Generic Validation of a Prevailing Assumption
A two-input function is a dual PRF if it is a PRF when keyed by either of its inputs. Dual PRFs are assumed in the design and analysis of numerous primitives and protocols including HMAC, AMAC, TLS 1.3 and MLS. But, not only do we not know whether particular functions on which the assumption is made really are dual PRFs; we do not know if dual PRFs even exist. What if the goal is impossible? This paper addresses this with a foundational treatment of dual PRFs, giving constructions based on standard assumptions. This provides what we call a generic validation of the dual PRF assumption. Our approach is to introduce and construct symmetric PRFs, which imply dual PRFs and may be of independent interest. We give a general construction of a symmetric PRF based on a function having a weak form of collision resistance coupled with a leakage hardcore function, a strengthening of the usual notion of hardcore functions we introduce. We instantiate this general construction in two ways to obtain two specific symmetric and dual PRFs, the first assuming any collision-resistant hash function, and the second assuming any one-way permutation. A construction based on any one-way function evades us and is left as an intriguing open problem.
On-the-fly Homomorphic Batching/Unbatching
We introduce a homomorphic batching technique that can be used to pack multiple messages into one ciphertext for parallel processing. One can use the method to batch or unbatch messages homomorphically, further improving the flexibility of encrypted-domain evaluations. In particular, we show various approaches to implement the Number Theoretic Transform (NTT) homomorphically at FFT speed. We also present the limitations that we encounter in applying these methods. We implement homomorphic batching in various settings and present concrete performance figures. Finally, we present an implementation of a homomorphic NTT method in which each element is processed in an independent ciphertext. The advantage of this method is that we are able to batch independent homomorphic NTT evaluations and achieve better amortized time.
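For reference, the transform evaluated under encryption is the ordinary NTT; a minimal cleartext radix-2 sketch (ours, not the paper's implementation), with toy parameters $q = 17$ and an 8-point transform, since $\omega = 2$ has order 8 modulo 17:

    def ntt(a, omega, q):
        # Recursive Cooley-Tukey NTT of a (length a power of two) modulo q.
        n = len(a)
        if n == 1:
            return a
        even = ntt(a[0::2], omega * omega % q, q)
        odd = ntt(a[1::2], omega * omega % q, q)
        out, w = [0] * n, 1
        for i in range(n // 2):
            t = w * odd[i] % q
            out[i] = (even[i] + t) % q
            out[i + n // 2] = (even[i] - t) % q
            w = w * omega % q
        return out

    print(ntt([1, 2, 3, 4, 0, 0, 0, 0], omega=2, q=17))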
Secure Distributed Computation on Private Inputs
The recent notion of encryption switching protocol (ESP) allows two players to obliviously switch between two encryption schemes. Instantiated from multiplicatively homomorphic encryption and additively homomorphic encryption, ESPs provide a generic solution to two-party computation and lead to particularly efficient protocols for arithmetic circuits in terms of interaction and communication.
In this paper, we further investigate their applications and show how ESPs can be used as an alternative to fully-homomorphic encryption (FHE) to outsource computation on sensitive data to cloud providers. Our interactive solution relies on two non-colluding servers which obliviously perform the operations on encrypted data, and eventually send back the outcome in an encrypted form to the appropriate players. Our solution makes use of a nice combination of the Paillier encryption scheme and the Damgard-Jurik variant with multiple trapdoors, which notably allows cross-user evaluations on encrypted data.
Arithmetic Using Word-Wise Homomorphic Encryption
Homomorphic encryption has progressed rapidly in both efficiency and versatility since its emergence in 2009. Meanwhile, a multitude of pressing privacy needs --- ranging from cloud computing to healthcare management to the handling of shared databases such as those containing genomics data --- call for immediate solutions that apply fully homomorphic encryption (FHE) and somewhat homomorphic encryption (SHE) technologies. Further progress towards these ends requires new ideas for the efficient implementation of algebraic operations on word-based (as opposed to bit-wise) encrypted data. Whereas handling data encrypted at the bit level leads to prohibitively slow algorithms for the arithmetic operations that are essential for cloud computing, the word-based approach hits its bottleneck when operations such as integer comparison are needed.
In this work, we tackle this challenging problem, proposing solutions to problems --- including comparison and division --- in
word-based encryption via a leveled FHE scheme. We present concrete performance figures for all proposed primitives.
Homomorphic Autocomplete
With the rapid progress in fully homomorphic encryption (FHE) and somewhat homomorphic encryption (SHE) schemes, we are witnessing renewed efforts to revisit privacy-preserving protocols. Several works have already appeared in the literature that provide solutions to these problems by employing FHE or SHE techniques. These applications range from cloud computing to computation over confidential patient data to several machine learning problems such as classifying privatized data. One application where privacy is a major concern is web search, a task carried out on a daily basis by billions of users around the world. In this work, we focus on a more surmountable yet essential version of the search problem, i.e. autocomplete. By utilizing an SHE scheme we propose concrete solutions to a homomorphic autocomplete problem. To investigate the real-life viability, we tackle a number of problems on the way towards a practical implementation, such as communication and computational efficiency.
Collision Attacks against CAESAR Candidates -- Forgery and Key-Recovery against AEZ and Marble
In this paper we study authenticated encryption algorithms inspired by the OCB mode (Offset Codebook). These algorithms use secret offsets (masks derived from a whitening key) to turn a block cipher into a tweakable block cipher, following the XE or XEX construction.
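For concreteness, a minimal sketch of the XEX construction being targeted (our illustration, assuming the pycryptodome package; the mask indexing is simplified relative to the actual modes):

    from Crypto.Cipher import AES

    def xtimes(L):
        # Doubling in GF(2^128) with the usual modulus x^128 + x^7 + x^2 + x + 1.
        v = int.from_bytes(L, 'big') << 1
        if v >> 128:
            v = (v & ((1 << 128) - 1)) ^ 0x87
        return v.to_bytes(16, 'big')

    def xex_encrypt_block(key, block, i):
        # Tweakable cipher E~(x) = E_K(x xor Delta) xor Delta with Delta = 2^i * L
        # and whitening mask L = E_K(0); recovering L is the attacks' main step.
        E = AES.new(key, AES.MODE_ECB)
        delta = E.encrypt(bytes(16))
        for _ in range(i):
            delta = xtimes(delta)
        x = bytes(a ^ b for a, b in zip(block, delta))
        return bytes(a ^ b for a, b in zip(E.encrypt(x), delta))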
OCB has a security proof up to 2^{n/2} queries, and a matching forgery attack was described by Ferguson, where the main step of the attack recovers the whitening key. In this work we study recent authenticated encryption algorithms inspired by OCB, such as Marble, AEZ, and COPA. While Ferguson's attack is not applicable to those algorithms, we show that it is still possible to recover the secret mask with birthday complexity. Recovering the secret mask easily leads to a forgery attack, but it also leads to more devastating attacks: key-recovery attacks against Marble and against AEZ v2 and v3, still with birthday complexity.
For Marble, this clearly violates the claim of full n-bit security. For AEZ, this matches the security proof, but we believe it is nonetheless a quite undesirable property that collision attacks allow recovery of the master key, and more robust designs would be desirable.
Our attack against AEZ is generic and independent of the internal permutation (in particular, it still works with the full AES), but the key-recovery is specific to the key derivation used in AEZ v2 and v3. Against Marble, the forgery attack is generic, but the key-recovery exploits the structure of the E permutation (4 AES rounds). In particular, we introduce a novel cryptanalytic method to attack 3 AES rounds followed by 3 inverse AES rounds, which can be of independent interest.
A Guide to Fully Homomorphic Encryption
Fully homomorphic encryption (FHE) has been dubbed the holy grail of cryptography, an elusive goal which could solve the IT world's problems of security and trust. Research in the area exploded after 2009 when Craig Gentry showed that FHE can be realised in principle. Since that time considerable progress has been made in finding more practical and more efficient solutions. Whilst research quickly developed, terminology and concepts became diverse and confusing so that today it can be difficult to understand what the achievements of different works actually are. The purpose of this paper is to address three fundamental questions: What is FHE? What can FHE be used for? What is the state of FHE today? As well as surveying the field, we clarify different terminology in use and prove connections between different FHE notions.
A Formal Analysis of Prefetching in Profiled Cache-Timing Attacks on Block Ciphers
Formally bounding side-channel leakage is important to bridge the gap between theory and practice in cryptography. However, bounding side-channel leakage is difficult because leakage in a crypto-system can come from several sources, and the amount of leakage from a source may vary depending on the implementation of the cipher and the form of attack. To formally analyze the security of a crypto-system against a form of attack, it is therefore essential to consider each source of leakage independently. This paper considers data prefetching, which is used in most modern-day cache memories to reduce the miss penalty. To the best of our knowledge, we show for the first time that micro-architectural features like prefetching are a major source of leakage in profiled cache-timing attacks. We further quantify the leakage due to important data prefetching algorithms, namely sequential and arbitrary-stride prefetching. The analytical results, supported by experiments, bring out interesting facts, such as the effect of the placement of tables in memory and of the cipher's implementation on the leakage in profiled cache-timing attacks.
Private Large-Scale Databases with Distributed Searchable Symmetric Encryption
With the growing popularity of remote storage, the ability to outsource a large private database yet be able to search on this encrypted data is critical. Searchable symmetric encryption (SSE) is a practical method of encrypting data so that natural operations such as searching can be performed on this data. It can be viewed as an efficient private-key alternative to powerful tools such as fully homomorphic encryption, oblivious RAM, or secure multiparty computation. The main drawbacks of existing SSE schemes are the limited types of search available to them and their leakage. In this paper, we present a construction of a private outsourced database in the two-server model (e.g. two cloud services) which can be thought of as an SSE scheme on a B-tree that allows for a wide variety of search features such as range queries, substring queries, and more. Our solution can hide all leakage due to access patterns (``metadata'') between queries and features a tunable parameter that provides a smooth tradeoff between privacy and efficiency. This allows us to implement a solution that supports databases which are terabytes in size and contain millions of records with only a 5x slowdown compared to MySQL when the query result size is around 10% of the database, though the fixed costs dominate smaller queries resulting in over 100x relative slowdown (under 1 second actual).
In addition, our solution also provides a mechanism for allowing data owners to set filters that prevent prohibited queries from returning any results, without revealing the filtering terms. Finally, we also present the benchmarks of our prototype implementation. This is the full version of the extended abstract to appear at CT-RSA 2016.
Invariant Subspace Attack Against Full Midori64
In this paper, we present an invariant subspace attack against block cipher Midori64 which has recently been proposed by Banik et al. at Asiacrypt 2015 to achieve low energy consumption. We show that when each nibble of the key has the value 0 or 1 and each nibble of the plaintext has the value 8 or 9, each nibble of the ciphertext also has the value 8 or 9 with probability one regardless of the number of rounds applied. This fact indicates that Midori64 has a class of $2^{32}$ weak keys that can be distinguished with a single query. It also indicates that the number of keys generated uniformly at random for Midori64 must not exceed $2^{96}$, i.e., the pseudorandom-permutation security of Midori64 is only up to 96 bits instead of 128 bits. Interestingly, given the information that the key is from the $2^{32}$ weak key subspace, key recovery can be performed within time complexity $2^{16}$ and data complexity $2^1$. We have confirmed the correctness of the analysis by implementing the attack. At the current stage, our attacks do not apply to Midori128.
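The resulting distinguisher is simple enough to state in a few lines; a sketch (ours, against a hypothetical encryption oracle) of the single-query membership test:

    def nibbles_in_class(state64, allowed={8, 9}):
        # True iff all 16 nibbles of the 64-bit state lie in the allowed set.
        return all(((state64 >> (4 * i)) & 0xF) in allowed for i in range(16))

    def distinguish(encrypt_oracle):
        pt = int('98' * 8, 16)  # any 64-bit value whose 16 nibbles lie in {8, 9}
        ct = encrypt_oracle(pt)
        # True with probability 1 under a weak key, and with probability
        # (2/16)^16 = 2^-48 for a random permutation.
        return nibbles_in_class(ct)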
Compact Attribute-Based Encryption and Signcryption for General Circuits from Multilinear Maps
Designing attribute-based systems supporting highly expressive access policies has been one of the principal focuses of research in attribute-based cryptography. While attribute-based encryption (ABE) enables fine-grained access control over encrypted data in a multi-user environment, attribute-based signature (ABS) provides a powerful tool for preserving signer anonymity. Attribute-based signcryption (ABSC), on the other hand, is a combination of ABE and ABS into a unified cost-effective primitive. In this paper, we start by presenting a key-policy ABE supporting general
polynomial-size circuit realizable decryption policies and featuring compactness. More specifically,
our ABE construction exhibits short ciphertexts and shorter decryption keys compared to existing
similar works. We then proceed to design a key-policy ABSC scheme which enjoys several interesting
properties that were never achievable before. It supports arbitrary polynomial-size circuits, thereby
handles highly sophisticated control over signing and decryption rights. Besides, it generates short
ciphertext as well. Our ABE construction employs multilinear map of level $n + l + 1$, while that
used for our ABSC scheme has level $n + n' + l + 1$, where $n$, $n'$, and $l$ represent respectively the
input length of decryption policy circuits, input size of signing policy circuits, and depth of both
kinds of circuits. Selective security of our constructions is proven in the standard model under the
Multilinear Decisional Diffie-Hellman and Multilinear Computational Diffie-Hellman assumptions
which are standard complexity assumptions in the multilinear map setting. Our key-policy constructions
can be converted to the corresponding ciphertext-policy variants achieving short ciphertext by
utilizing the technique of universal circuits.
On an almost-universal hash function family with applications to authentication and secrecy codes
Universal hashing, discovered by Carter and Wegman in 1979, has many important applications in computer science. MMH$^*$, which was shown to be $\Delta$-universal by Halevi and Krawczyk in 1997, is a well-known universal hash function family. We introduce a variant of MMH$^*$, which we call GRDH, where we use an arbitrary integer $n>1$ instead of the prime $p$ and let the keys $\mathbf{x}=\langle x_1, \ldots, x_k \rangle \in \mathbb{Z}_n^k$ satisfy the conditions $\gcd(x_i,n)=t_i$ ($1\leq i\leq k$), where $t_1,\ldots,t_k$ are given positive divisors of $n$. Then, by connecting the universal hashing problem to the number of solutions of restricted linear congruences, we prove that the family GRDH is an $\varepsilon$-almost-$\Delta$-universal family of hash functions for some $\varepsilon<1$ if and only if $n$ is odd and $\gcd(x_i,n)=t_i=1$ $(1\leq i\leq k)$. Furthermore, if these conditions are satisfied then GRDH is $\frac{1}{p-1}$-almost-$\Delta$-universal, where $p$ is the smallest prime divisor of $n$. Finally, as an application of our results, we propose an authentication code with secrecy scheme which strongly generalizes the scheme studied by Alomair et al. [{\it J. Math. Cryptol.} {\bf 4} (2010), 121--148], and [{\it J.UCS} {\bf 15} (2009), 2937--2956].
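For concreteness, evaluating GRDH is a single inner product modulo $n$; a toy sketch (ours, with hypothetical parameters):

    from math import gcd

    def grdh(x, m, n):
        # h_x(m) = sum_i m_i * x_i mod n, with keys satisfying gcd(x_i, n) = t_i.
        return sum(mi * xi for mi, xi in zip(m, x)) % n

    n, t = 15, (1, 3)  # toy modulus and divisor pattern
    x = (7, 6)         # gcd(7, 15) = 1 and gcd(6, 15) = 3, as required
    print(grdh(x, (2, 11), n))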
Restricted linear congruences
In this paper, using properties of Ramanujan sums and of the discrete Fourier transform of arithmetic functions, we give an explicit formula for the number of solutions of the linear congruence $a_1x_1+\cdots +a_kx_k\equiv b \pmod{n}$, with $\gcd(x_i,n)=t_i$ ($1\leq i\leq k$), where $a_1,t_1,\ldots,a_k,t_k, b,n$ ($n\geq 1$) are arbitrary integers. As a consequence, we derive necessary and sufficient conditions under which the above restricted linear congruence has no solutions. The number of solutions of this kind of congruence was first considered by Rademacher in 1925 and Brauer in 1926, in the special case of $a_i=t_i=1$ $(1\leq i \leq k)$. Since then, this problem has been studied, in several other special cases, in many papers; in particular, Jacobson and Williams [{\it Duke Math. J.} {\bf 39} (1972), 521--527] gave a nice explicit formula for the number of such solutions when $(a_1,\ldots,a_k)=t_i=1$ $(1\leq i \leq k)$. The problem is very well-motivated and has found intriguing applications in several areas of mathematics, computer science, and physics, and there is promise for more applications/implications in these or other directions.
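For small parameters the count can be verified by brute force and compared against the explicit formula; a toy sketch (ours):

    from itertools import product
    from math import gcd

    def count_restricted(a, t, b, n):
        # Solutions of a_1 x_1 + ... + a_k x_k = b (mod n) with gcd(x_i, n) = t_i.
        return sum(1 for xs in product(range(n), repeat=len(a))
                   if all(gcd(x, n) == ti for x, ti in zip(xs, t))
                   and sum(ai * x for ai, x in zip(a, xs)) % n == b % n)

    print(count_restricted(a=(1, 1), t=(1, 1), b=0, n=6))  # 2: (1, 5) and (5, 1)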
Efficient Pseudorandom Functions via On-the-Fly Adaptation
Pseudorandom functions (PRFs) are one of the most fundamental building blocks in cryptography, with numerous applications such as message authentication codes and private-key encryption. In this work, we propose a new paradigm to construct PRFs, with the overall goal of building efficient PRFs from standard assumptions with an almost tight proof of security. We start from a PRF for any small domain (i.e.~poly-sized domain) and turn it into a bounded pseudorandom function (bPRF). Recall that a function $F$ is an $\ell$-bounded pseudorandom function if the outputs of $F$ are pseudorandom for the first $\ell$ distinct queries to $F$. In the second step, we apply a novel technique which we call \emph{on-the-fly adaptation} that turns any bPRF into a fully-fledged (large-domain) PRF. Both steps of our paradigm have a tight security reduction, meaning that any successful attacker can be turned into an efficient algorithm for the underlying hard computational problem without any significant increase in the running time or loss of success probability.
Instantiating our paradigm with specific number theoretic assumptions, we construct a PRF based on $k$-LIN (and thus DDH) that is faster than all known constructions, which reduces almost tightly to the underlying problem, and which has shorter keys.
Instantiating our paradigm with general assumptions, we construct a PRF with very flat circuits whose security almost tightly reduces to the security of some small-domain PRF. As an exemplifying instance, we use the LWE-based PRF of Banerjee, Peikert, and Rosen (EUROCRYPT'12) as the underlying small-domain PRF, and we obtain a construction that is secure (for large domains) under a weaker assumption, with a tighter proof of security, and whose resulting circuit is shallower than instantiating the BPR scheme with a large domain directly.
Extension Field Cancellation: a New Central Trapdoor for Multivariate Quadratic Systems
This paper introduces a new central trapdoor for multivariate quadratic (MQ) public-key cryptosystems that allows for encryption, in contrast to time-tested MQ primitives such as Unbalanced Oil and Vinegar or Hidden Field Equations which only allow for signatures. Our construction is a mixed-field scheme that exploits the commutativity of the extension field to dramatically reduce the complexity of the extension field polynomial implicitly present in the public key. However, this reduction can only be performed by the user who knows concise descriptions of two simple polynomials, which constitute the private key. After applying this transformation, the plaintext can be recovered by solving a linear system. We use the minus and projection modifiers to inoculate our scheme against known attacks. A straightforward C++ implementation confirms the efficient operation of the public key algorithms.
Authenticated Range & Closest Point Queries in Zero-Knowledge
We present an efficient method for answering one-dimensional range and closest-point queries in a verifiable and privacy-preserving manner. We consider a model where a data owner outsources a dataset of key-value pairs to a server, who answers range and closest-point queries issued by a client and provides proofs of the answers. The client verifies the correctness of the answers while learning nothing about the dataset besides the answers to the current and previous queries. Our work yields for the first time a zero-knowledge privacy assurance to authenticated range and closest-point queries. Previous work leaked the size of the dataset and used an inefficient proof protocol. Our construction is based on hierarchical identity-based encryption. We prove its security and analyze its efficiency both theoretically and with experiments.
Chaskey: a MAC Algorithm for Microcontrollers -- Status Update and Proposal of Chaskey-12 --
The Chaskey MAC algorithm was presented by Mouha et al. at SAC 2014. It is designed for real-world applications where 128-bit keys are required, but standard cryptographic algorithms cannot be implemented because of stringent requirements on speed, energy consumption, or code size. Shortly after its publication, Chaskey was considered for standardization by ISO/IEC JTC 1/SC 27/WG 2. At the October 2015 meeting, the ISO/IEC committee decided to terminate the study period on Chaskey, and to circulate a first working draft. Since Chaskey was introduced, many follow-up results were published, including improved cryptanalysis results, new security proofs and more efficient implementations. This paper gives a comprehensive overview of those results, and introduces a twelve-round variant of Chaskey: Chaskey-12. Although the original eight-round Chaskey remains unbroken, Chaskey-12 has a much more conservative design, while reducing the performance by only 15% to 30%, depending on the platform.
Construction of Transition Matrices for Binary FCSRs
Stream ciphers based on Linear Feedback Shift Registers (LFSRs) have been subject to algebraic attacks. To avoid this kind of attack, Feedback with Carry Shift Registers (FCSRs) have been proposed as an alternative. In order to eliminate a so-called LFSRization weakness, FCSRs have been implemented using the ring representation instead of the Galois one. A ring FCSR is determined by its transition matrix $A$. Its connection integer, which is related to the properties of the output sequences, is $q=\det(I-2A)$.
In this paper, we show how to calculate the determinant $\mbox{det}(I-2A)$ of transition matrices with a critical path of length 1 and fan-out 2. Moreover, we propose algorithms to construct such transition matrices (binary case) based on searching target connection integers.
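For concreteness, computing the connection integer from a given transition matrix is immediate; a toy sketch (ours, with a hypothetical 3-bit matrix):

    def det(M):
        # Cofactor expansion; adequate for the small matrices used here.
        if len(M) == 1:
            return M[0][0]
        return sum((-1) ** j * M[0][j] * det([row[:j] + row[j+1:] for row in M[1:]])
                   for j in range(len(M)))

    def connection_integer(A):
        # q = det(I - 2A) for a binary transition matrix A.
        n = len(A)
        I2A = [[(1 if i == j else 0) - 2 * A[i][j] for j in range(n)] for i in range(n)]
        return det(I2A)

    A = [[0, 1, 0],
         [0, 0, 1],
         [1, 1, 0]]
    print(connection_integer(A))  # -11, an odd connection integer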
Secure Comparator: a ZKP-Based Authentication System
This paper presents Secure Comparator, a way to implement a zero-knowledge proof protocol for the Socialist Millionaires' Problem in order to compare secrets between two parties. Compared to existing implementations, Secure Comparator provides better security guarantees, stronger cryptographic math, and, possibly, a more integration-friendly architecture.
A construction of 3-dimensional lattice sieve for number field sieve over F_{p^n}
The security of pairing-based cryptography is based on the hardness of solving the discrete logarithm problem (DLP) over extension field F_{p^n} of characteristic p and degree n. Joux et al. proposed an asymptotically fastest algorithm for solving DLP over F_{p^n} (JLSV06-NFS) as the extension of the number field sieve over prime field F
_p (JL03-NFS). The lattice sieve is often used for a large-scaled experiment of solving DLP over F_p by the number field sieve. Franke and Kleinjung proposed a 2-dimensional lattice sieve which efficiently enumerates all the points in a given sieve region of the lattice. However, we have to consider a sieve region of more than 2 dimensions in the lattice sieve of JLSV06-NFS.
In this paper, we extend the Franke-Kleinjung method to a 3-dimensional sieve region. We construct an appropriate basis using the Hermite normal form, which can enumerate the points in a given sieve region of the 3-dimensional lattice. In our experiments on F_{p^{12}} of 303 bits, we are able to enumerate more than 90\% of the points in a sieve region in the lattice generated by a special-q. Moreover, we implement the number field sieve using the proposed 3-dimensional lattice sieve. Our implementation of JLSV06-NFS over F_{p^6} of 240 bits is about as efficient as the current record over F_{p^6}, which uses a 3-dimensional line sieve, by Zajac.
Textbook Non-Malleable Commitments
We present a new non-malleable commitment protocol. Our protocol has the following features:
\begin{itemize} \item The protocol has only \emph{three rounds} of interaction. Pass (TCC 2013) showed an impossibility result for two-round non-malleable commitment schemes w.r.t. a black-box reduction to any ``standard'' intractability assumption. Thus, this resolves the round complexity of non-malleable commitment, at least w.r.t. black-box security reductions. Our construction is secure as per the standard notion of non-malleability w.r.t. commitment.
\item Our protocol is \emph{truly efficient}. In our basic protocol, the entire computation of the committer is dominated by just three invocations of a non-interactive statistically binding commitment scheme, while the receiver's computation (in the commitment stage) is limited to just sampling a random string. Unlike many previous works, we directly construct a protocol for large tags and hence avoid any non-malleability amplification steps.
\item Our protocol makes black-box use of its underlying cryptographic primitives. Previously, the best known black-box construction of non-malleable commitments required a larger (constant) number of rounds. Our basic protocol secure against synchronizing adversaries is based on black-box use of any non-interactive statistically binding commitment (which, in turn, can be based on any one-to-one one-way function). Our extended protocol requires a mildly stronger assumption and more invocations of the underlying non-interactive commitment scheme.
\item Our construction is public-coin and makes use of only black-box simulation. Prior to our work, no public-coin constant round non-malleable commitment schemes were known based on black-box simulation. \end{itemize}
Our techniques depart \emph{significantly} from the techniques used previously to construct non-malleable commitment schemes. As a main technical tool, we rely on non-malleable codes in the split state model. Our proofs of security are purely combinatorial in nature.
In addition, we also present a simple construction of constant round non-malleable commitments from any one-way function. While this result is not new, the main feature is its simplicity compared to \emph{any} previous construction of non-malleable commitments (in any number of rounds). We believe the construction is simple enough to be covered in a graduate level course on cryptography. The construction uses non-malleable codes in the split state model in a black-box way.
On the CCA (in)security of MTProto
Telegram is a popular messaging app which supports end-to-end encrypted communication. In Spring 2015 we performed an audit of Telegram's source code. This short paper summarizes our findings.
Our main discovery is that the symmetric encryption scheme used in Telegram -- known as MTProto -- is not IND-CCA secure, since it is possible to turn any ciphertext into a different ciphertext that decrypts to the same message.
We stress that this is a theoretical attack on the definition of security and we do not see any way of turning the attack into a full plaintext-recovery attack. At the same time, we see no reason why one should use a less secure encryption scheme when more secure (and at least as efficient) solutions exist.
The take-home message (once again) is that well-studied, provably secure encryption schemes that achieve strong definitions of security (e.g., authenticated-encryption) are to be preferred to home-brewed encryption schemes.
On the Efficiency of FHE-based Private Queries
Private query processing is a very attractive problem in the fields of both cryptography and databases. In this work, we restrict our attention to the efficiency aspect of the problem, particularly for basic queries with conditions on various combinations of \emph{equality}. Without loss of generality, these conditions can be regarded as a Boolean function, and this Boolean function can then be evaluated at ciphertexts produced by a fully homomorphic encryption (FHE) scheme \emph{without decryption}. From the efficiency perspective, the remaining concern is to efficiently test the equality function without severely downgrading the performance of FHE-based querying solutions.
To this end, we first analyze the multiplicative depth required for an equality test algorithm with respect to the plaintext space inhabited by general FHE schemes. The primary reason for this approach is that given an equality test algorithm, its efficiency is measured in terms of the multiplicative depth required to construct its arithmetic circuit expression. Indeed, the implemented equality test algorithm dominates the entire performance of FHE-based query solutions, apart from the performance of the underlying FHE scheme. Then, we measure the multiplicative depth considering an FHE scheme that takes an extension field as its plaintext space and that supports the depth-free evaluation of Frobenius maps. According to our analysis, when the plaintext space of an FHE scheme is a field of characteristic $2$, the equality test algorithm for $\ell$-bit messages requires the lowest multiplicative depth $\lceil\log{\ell}\rceil$. Furthermore, we design a set of private query protocols for conjunctive, disjunctive, and threshold queries based on the equality test algorithm. Similarly, applying the equality test algorithm over $\mathbb{F}_{2^{\ell}}$, our querying protocols require the minimum depths. More specifically, a multiplicative depth of $\lceil\log{\ell}\rceil+\lceil\log{(1+\rho)}\rceil$ is required for conjunctive and disjunctive queries, and a depth of $\lceil\log{\ell}\rceil+2\lceil\log{(1+\rho)}\rceil$ is required for threshold conjunctive queries, when their query conditions have $\rho$ attributes to be compared. Finally, we provide a communication-efficient version of our solutions, though with additional computational costs, when an upper bound $\delta$ ($0\leq \delta\leq 1$) on the selectivity of a database is given. Consequently, we reduce the communication cost from $n$ to approximately $\lfloor\delta n\rfloor$ ciphertexts with $\lceil\log{n}\rceil$ additional depth when the database consists of $n$ tuples.
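Over characteristic 2 the depth bound comes from multiplying the $\ell$ terms $(x_i + y_i + 1)$ in a balanced tree; a sketch (ours) that evaluates the circuit on cleartext bits and reports the multiplicative depth:

    from math import ceil, log2

    def equality_circuit(x_bits, y_bits):
        # EQ(x, y) = prod_i (x_i + y_i + 1); additions (XORs) are depth-free.
        terms = [xi ^ yi ^ 1 for xi, yi in zip(x_bits, y_bits)]
        depth = 0
        while len(terms) > 1:  # one multiplicative level per pass
            nxt = [terms[i] * terms[i + 1] for i in range(0, len(terms) - 1, 2)]
            if len(terms) % 2:
                nxt.append(terms[-1])
            terms, depth = nxt, depth + 1
        return terms[0], depth

    l = 16
    eq, depth = equality_circuit([1] * l, [1] * l)
    print(eq, depth, ceil(log2(l)))  # 1 4 4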
Improved Data Confidentiality of Audit Trail Data in Multi-Tenant Cloud
Cloud computing is the delivery of services rather than a product, and among the different cloud deployment models, the public cloud provides improved scalability and cost reduction compared to the others. Security and privacy of data is one of the key factors in transitioning to the cloud. Typically, cloud providers have a demilitarized zone protecting the data center, along with a reverse proxy setup. The reverse proxy gateway acts as the initial access point and provides additional capabilities like load balancing, caching, and security monitoring, capturing events and syslogs related to hosts residing in the cloud. The audit-trail logs captured by the reverse proxy server comprise important information related to all the tenants. While PKI infrastructure works in the cloud scenario, it becomes cumbersome from a manageability point of view and lacks flexibility in providing controlled access to data. In this paper we evaluate the risks associated with the security and privacy of audit logs produced by a reverse proxy server. We provide a two-phase approach for sharing the audit logs with users, allowing fine-grained access. We evaluate certain Identity-Based and Attribute-Based Encryption schemes and provide a detailed analysis of their performance.
On Data Complexity of Distinguishing Attacks vs. Message Recovery Attacks on Stream Ciphers
We revisit the different approaches used in the literature to estimate the data complexity of distinguishing attacks on stream ciphers and analyze their inter-relationships. In the process, we formally argue which approach is applicable (or not applicable) in which scenario. To our knowledge, this is the first exposition of its kind. We also perform a rigorous statistical analysis of the message recovery attack that exploits a distinguisher and show that in practice there is a significant gap between the data complexities of a message recovery attack and the underlying distinguishing attack. This gap is not necessarily determined by a constant factor as a function of the false positive and negative rates, as one would expect. Rather, this gap is also a function of the number of samples of the distinguishing attack. We perform a case study on the RC4 stream cipher to demonstrate that the typical complexities for message recovery attacks inferred in the literature are underestimates, and that the actual estimates are considerably larger.
Secure Multiparty Computation with General Interaction Patterns
We present a unified framework for studying secure multi-party computation (MPC) with *arbitrarily restricted interaction patterns*. Our study generalizes both standard MPC and recent models for MPC with specific restricted interaction patterns (such as a chain or a star), which were studied by Halevi et al. (Crypto 2011), Goldwasser et al. (Eurocrypt 2014), and Beimel et al. (Crypto 2014).
Since restricted interaction patterns cannot always yield full security for MPC, we start by formalizing the notion of "best possible security" for any interaction pattern. We then obtain the following results:
* Completeness theorem. We prove that the star interaction pattern is *complete* for the problem of MPC with general interaction patterns.
* Positive results. We present both information-theoretic and computationally secure protocols for computing arbitrary functions with general interaction patterns. We also present more efficient protocols for computing symmetric functions and for computing arbitrary functions over a chain.
* Negative results. We give evidence that our information-theoretic protocols for general functions will be hard to substantially improve on.
All of our protocols rely on a correlated randomness setup, which is *necessary* for computing general functions in our setting. In the computational case, we also present a generic procedure to make any correlated randomness setup *reusable*, in the common random string model.
Although most of our information-theoretic protocols have exponential complexity, they may be practical for functions on small domains (e.g., {0,1}^20), where they are concretely faster than their computational counterparts.
Last updated: 2016-03-24
An Application Specific Instruction Set Processor (ASIP) for the Niederreiter Cryptosystem
The Niederreiter public-key cryptosystem is based on the security assumption that decoding generic linear binary codes is NP-complete, and it is therefore regarded as an alternative post-quantum solution to resist quantum computing. Current hardware implementations of the Niederreiter cryptosystem focus on data encryption/decryption, but few of them consider digital signature generation, given that the signature scheme is quite different from encryption/decryption and complicated to integrate. In this work, we address the problem of achieving efficient Niederreiter digital signatures and of extending the design to execute encryption/decryption on reconfigurable hardware. We first present a new parameter selection method by which both encryption/decryption and signatures can be performed with the same hardware configuration. Then we design a compact ASIP architecture with the proposed parameter selection and an elaborated resource sharing. FPGA experiments show that the proposed unified architecture can achieve encryption, decryption and signature in $1.41~\mu s$, $798.57~\mu s$ and $14.07~s$ respectively, while maintaining acceptable area costs ($4254$ slices, $29$ 36Kb-BRAMs and $3$ DSP48E1s) on Virtex-6 devices.
Last updated: 2015-12-15
On the Security of an Access-Polynomial-Based Self-Healing Key Management Scheme in Wireless Sensor Networks
Secure group communication in wireless sensor networks (WSNs) is a critical issue. The self-healing mechanism allows nodes in a WSN to recover lost session keys without requesting anything from the group manager (GM). Self-healing can be applied to key management, which can efficiently achieve secure group communication. In 2015, Sun {\em et al.} proposed a self-healing group key distribution (SGKD) scheme based on access polynomials and a sliding window in WSNs, and they claim that the proposed scheme achieves any-wise backward secrecy, any-wise forward secrecy and $\delta$ collusion resistance. However, we find that the scheme cannot resist the collusion attack of newly joined nodes and revoked nodes, and that legitimate nodes cannot recover the session key in some cases.
Characterizing NTRU-Variants Using Group Ring and Evaluating their Lattice Security
The encryption scheme NTRU is designed over a quotient ring of a polynomial ring. Basically, if the ring is changed to any other ring, an NTRU-like cryptosystem is constructible. In this paper, we propose a variant of NTRU using group rings, which we call GR-NTRU. GR-NTRU includes NTRU as a special case. Moreover, we analyze and compare the security of GR-NTRU for several concrete groups. It is easy to investigate the algebraic structure of a group ring by using group representation theory, and we apply this fact to the security analysis of GR-NTRU. We show that the original NTRU and multivariate NTRU are the most secure among the GR-NTRU instances we investigated.
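For concreteness, group ring multiplication generalizes NTRU's convolution; a toy sketch (ours) multiplying two elements of Z_q[G] given a multiplication table for G, which for a cyclic group reproduces exactly NTRU's cyclic convolution:

    def group_ring_mul(f, g, mul_table, q):
        # f, g: coefficient vectors indexed by group elements 0..|G|-1;
        # mul_table[i][j] is the index of the product of elements i and j.
        n = len(f)
        h = [0] * n
        for i in range(n):
            for j in range(n):
                h[mul_table[i][j]] = (h[mul_table[i][j]] + f[i] * g[j]) % q
        return h

    N, q = 4, 11
    cyclic = [[(i + j) % N for j in range(N)] for i in range(N)]  # table of C_4
    print(group_ring_mul([1, 2, 0, 3], [0, 1, 1, 0], cyclic, q))  # [3, 4, 3, 2]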
Strength in Numbers: Threshold ECDSA to Protect Keys in the Cloud
Side-channel attacks utilize information leakage in the implementation of an otherwise secure cryptographic algorithm to extract secret information. For example, adversaries can extract the secret key used in a cryptographic algorithm by observing cache-timing data. Threshold cryptography enables the division of private keys into shares, distributed among several nodes; the knowledge of a subset of shares does not leak information about the private key, thereby defending against memory disclosure and side-channel attacks. This work shows that applying threshold cryptography to ECDSA—the elliptic curve variant of DSA—yields a fully distributive signature protocol that does not feature a single point of failure. Our security analysis shows that Threshold ECDSA protects against a wide range of side-channel attacks, including cache attacks, and counteracts memory disclosure attacks. We further provide the first performance analysis of Threshold ECDSA, and provide a proof of concept of the protocol in practice.
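The key-splitting idea can be illustrated with plain Shamir secret sharing of a signing-key scalar (a sketch of the underlying primitive only, not the threshold ECDSA protocol itself; the modulus below is secp256k1's group order):

    import secrets

    P = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

    def share(secret, t, n):
        # Random degree-(t-1) polynomial with constant term = secret.
        coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
        def f(x):
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(points):
        # Lagrange interpolation at 0; any t shares suffice, fewer reveal nothing.
        secret = 0
        for i, (xi, yi) in enumerate(points):
            num = den = 1
            for j, (xj, _) in enumerate(points):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, -1, P)) % P
        return secret

    key = secrets.randbelow(P)
    shares = share(key, t=2, n=3)
    assert reconstruct(shares[:2]) == key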
Last updated: 2016-08-23
SCP: A Computationally-Scalable Byzantine Consensus Protocol For Blockchains
In this paper, we design a new blockchain Byzantine consensus protocol, SCP, whose throughput scales nearly linearly with the computation: the more computing power available, the more blocks selected per unit time. SCP is also efficient in that the number of messages it requires is nearly linear in the network size. The {\em computational scalability} property offers the flexibility to tune bandwidth consumption by adjusting computational parameters (e.g. proof-of-work difficulty). The key ideas lie in securely establishing identities for network participants, randomly placing them in several committees, and running a classical consensus protocol within each committee to propose blocks in {\em parallel}. We further design a mechanism to allow reaching consensus on blocks without broadcasting the actual block data, while still enabling efficient block verification. We prove that our protocol is secure, efficient and applicable to several case studies. We conduct scalability experiments on Amazon EC2 with up to 80 cores, and confirm that SCP matches its theoretical scaling properties.
Constraining Pseudorandom Functions Privately
In a constrained pseudorandom function (PRF), the master secret key can be
used to derive constrained keys, where each constrained key k is constrained
with respect to some Boolean circuit C. A constrained key k can be used to
evaluate the PRF on all inputs x for which C(x) = 1. In almost all existing
constrained PRF constructions, the constrained key k reveals its constraint C.
In this paper we introduce the concept of private constrained PRFs, which
are constrained PRFs with the additional property that a constrained key
does not reveal its constraint. Our main notion of privacy captures the
intuition that an adversary, given a constrained key k for one of two
circuits C_0 and C_1, is unable to tell which circuit is associated with the
key k. We show that constrained PRFs have natural applications to
searchable symmetric encryption, cryptographic watermarking, and much more.
To construct private constrained PRFs we first demonstrate that our strongest
notions of privacy and functionality can be achieved using
indistinguishability obfuscation. Then, for our main constructions, we build
private constrained PRFs for bit-fixing constraints and for puncturing
constraints from concrete algebraic assumptions.
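As background for the privacy notion, the standard (non-private) GGM-style prefix-constrained PRF makes the leakage explicit: the constrained key carries its constraint in the clear. A minimal sketch (ours):

    import hashlib

    def step(seed, bit):
        return hashlib.sha256(seed + bytes([bit])).digest()

    def prf(key, bits):
        # GGM evaluation: walk down the binary tree bit by bit.
        s = key
        for b in bits:
            s = step(s, b)
        return s

    def constrain(key, prefix):
        s = key
        for b in prefix:
            s = step(s, b)
        return (prefix, s)  # the constraint (prefix) is visible in the key

    def eval_constrained(ckey, bits):
        prefix, s = ckey
        assert bits[:len(prefix)] == prefix, "input outside the constrained set"
        for b in bits[len(prefix):]:
            s = step(s, b)
        return s

    k = bytes(32)
    ck = constrain(k, [1, 0])
    assert prf(k, [1, 0, 1, 1]) == eval_constrained(ck, [1, 0, 1, 1])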
Ceremonies for End-to-End Verifiable Elections
State-of-the-art e-voting systems rely on voters to perform certain actions to ensure that the election authorities are not manipulating the election result. This so-called ``end-to-end (E2E) verifiability'' is the hallmark of current e-voting protocols; nevertheless, a thorough analysis of current systems is still far from being complete.
In this work, we initiate the study of e-voting protocols as ceremonies. A ceremony, as introduced by Ellison in 2007, is an extension of the notion of a protocol that includes human participants as separate nodes of the system, which should be taken into account when performing the security analysis. We put forth a model for e-voting ceremonies that centers on the two properties of end-to-end verifiability and voter privacy and allows the consideration of arbitrary behavioural distributions for the human participants.
We then analyse the Helios system as an e-voting ceremony. Security in the e-voting ceremony model requires the specification of a class of human behaviours with respect to which the security properties can be preserved. We show how end-to-end verifiability and voter privacy are sensitive to human behaviour in the protocol by characterizing the set of behaviours under which the security can be preserved and also showing explicit scenarios where it fails.
We then provide experimental evaluation with human subjects from two different sources where people used Helios: the elections of the International Association for Cryptologic Research (IACR) and a poll of senior year computer science students. We report on the auditing behaviour of the participants as we measured it and we discuss the effects on the level of certainty that can be given by each of the two electorates.
The outcome of our analysis is a negative one: the auditing behaviour of people (including cryptographers) is not sufficient to ensure the correctness of the tally with good probability in either case studied. The same holds true even for simulated data capturing the case of relatively well-trained participants; finally, the security of the ceremony can be shown, but only under the assumption of essentially ideally behaving human subjects. We note that while our results are stated for Helios, they transfer automatically to various other e-voting systems that, like Helios, rely on client-side encryption to encode the voter's choice.
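As a back-of-the-envelope illustration of why auditing rates matter, consider a toy model in which each voter audits independently; the 3% audit rate below is hypothetical and this is not the paper's formal ceremony analysis.

    def detection_probability(p_audit: float, k_manipulated: int) -> float:
        """Probability that at least one of k manipulated ballots is audited,
        assuming each voter audits independently with probability p_audit."""
        return 1 - (1 - p_audit) ** k_manipulated

    # With a 3% audit rate, manipulating 10 ballots is caught only ~26% of
    # the time; an adversary flipping few votes in a close race runs little risk.
    for k in (1, 10, 50):
        print(k, round(detection_probability(0.03, k), 3))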
Meet-in-the-Middle Attacks on Reduced-Round Midori-64
Midori is a lightweight block cipher designed by Banik et al. at ASIACRYPT 2015. One version of Midori uses a 64-bit state, another uses a 128-bit state; we denote these versions Midori-64 and Midori-128. Each version uses a 128-bit key. In this paper, we focus on key-recovery attacks on reduced-round Midori-64 using the meet-in-the-middle method. We apply the differential enumeration and key-dependent sieve techniques, which are widely used in the analysis of AES, to attack Midori-64. We propose a 6-round distinguisher and achieve a 10-round attack with time complexity of 2^{99.5} 10-round Midori-64 encryptions, data complexity of 2^{61.5} chosen plaintexts and memory complexity of 2^{92.7} 64-bit blocks. By adding one round at the end, we obtain an 11-round attack with time complexity of 2^{122} 11-round Midori-64 encryptions, data complexity of 2^{53} chosen plaintexts and memory complexity of 2^{89.2} 64-bit blocks. Finally, with a 7-round distinguisher, we obtain an attack on 12-round Midori-64 with time complexity of 2^{125.5} 12-round Midori-64 encryptions, data complexity of 2^{55.5} chosen plaintexts and memory complexity of 2^{106} 64-bit blocks. To the best of our knowledge, this is currently the best attack on Midori-64.
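For readers unfamiliar with the general method, the following toy sketch shows a generic meet-in-the-middle attack on double encryption. The 16-bit cipher is hypothetical (not Midori), and this is the textbook MITM idea, not the differential-enumeration refinement used in the paper.

    def rotl16(x: int) -> int:
        return ((x << 3) | (x >> 13)) & 0xFFFF

    def rotr16(x: int) -> int:
        return ((x >> 3) | (x << 13)) & 0xFFFF

    def toy_encrypt(key: int, block: int) -> int:
        # hypothetical 16-bit toy cipher, used only to illustrate the method
        for r in range(4):
            block = rotl16(block ^ key ^ r)
        return block

    def toy_decrypt(key: int, block: int) -> int:
        for r in reversed(range(4)):
            block = rotr16(block) ^ key ^ r
        return block

    def mitm(pt: int, ct: int, keybits: int = 8):
        """Recover candidate pairs (k1, k2) with E_{k2}(E_{k1}(pt)) = ct using
        ~2 * 2^keybits cipher evaluations instead of 2^(2*keybits)."""
        forward = {}
        for k1 in range(1 << keybits):
            forward.setdefault(toy_encrypt(k1, pt), []).append(k1)
        return [(k1, k2) for k2 in range(1 << keybits)
                for k1 in forward.get(toy_decrypt(k2, ct), [])]

    k1, k2, pt = 0x2A, 0x91, 0x1234
    ct = toy_encrypt(k2, toy_encrypt(k1, pt))
    assert (k1, k2) in mitm(pt, ct)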
Beyond Bitcoin - Part I: A critical look at blockchain-based systems
More than six years after the launch of Bitcoin, it has become evident that the decentralized transaction ledger functionality implemented through the blockchain technology can be used not only for cryptocurrencies, but to register, confirm and transfer any kind of contract and property. In this work we analyze the most relevant functionalities and known issues of this technology, with the intent of pointing out the aspects of its behaviour that are less efficient than they should be when viewed with a broader outlook. Our analysis is the starting point for the introduction of a new approach to blockchain creation and management, which will be the subject of a forthcoming paper.
A Guess-and-Determine Attack on Reduced-Round Khudra and Weak Keys of Full Cipher
Khudra is a lightweight block cipher designed for Field Programmable Gate Array (FPGA) based platforms.
The cipher has an 18-round generalized type-2 Feistel structure with 64-bit block size.
The key schedule takes an 80-bit master key and produces 32-bit round keys using very simple operations.
In this work, we analyze the security of Khudra.
We first show that the effective round-key length is 16 bits.
With the help of this observation, we improve the 14-round MITM attack proposed by Youssef et al., reducing the memory complexity from $2^{64.8}$ to $2^{32.8}$.
We also propose a new guess-and-determine attack on 14 rounds in which only 2 known plaintext-ciphertext pairs are required to mount the attack with a time complexity of $2^{64}$ encryption operations.
To the best of our knowledge, this is the best attack in the single-key model in terms of time, memory and data complexities, with the data complexity matching the minimum theoretical data requirement (a counting sketch appears after this abstract).
Moreover, we present two observations, on the differential probabilities of the round function and on the symmetric structure of the cipher.
We introduce $2^{40}$ weak keys for the full cipher by exploiting this symmetric structure.
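A quick sanity check of the minimum-data claim, using only generic counting over Khudra's 80-bit key and 64-bit block (nothing cipher-specific):

    import math

    key_bits, block_bits = 80, 64

    # Each known 64-bit plaintext-ciphertext pair filters wrong keys by a
    # factor of 2^{-64}, so the minimum number of pairs that can pin down
    # an 80-bit key information-theoretically is:
    min_pairs = math.ceil(key_bits / block_bits)   # = 2
    print("theoretical minimum:", min_pairs, "pairs")

    for n in (1, 2):
        print(f"after {n} pair(s): ~2^{key_bits - n * block_bits} wrong keys survive")

After one pair roughly $2^{16}$ wrong keys survive; a second pair reduces the expected number of survivors far below one, which is why 2 known pairs are the theoretical minimum.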
The Moral Character of Cryptographic Work
Cryptography rearranges power: it configures who can do what, from what. This makes cryptography an inherently \textit{political} tool, and it confers on the field an intrinsically \textit{moral} dimension. The Snowden revelations motivate a reassessment of the political and moral positioning of cryptography. They lead one to ask if our inability to effectively address mass surveillance constitutes a failure of our field. I believe that it does. I call for a community-wide effort to develop more effective means to resist mass surveillance. I plead for a reinvention of our disciplinary culture to attend not only to puzzles and math, but also to the societal implications of our work.
Cyber and Physical Access Control in Legacy System Using Passwords
Passwords---secret combinations of symbols---have played an important role in physical-world security since ancient times (e.g., watchwords preventing unauthorized entry into forbidden military areas). With the emergence and advance of digital computers and computer networks, passwords have also been widely adopted for security protection in the cyber world. In most applications, password protection stands on the frontier of cyber/physical security defense: compromise of passwords might render the whole system insecure and make sophisticated cryptographic solutions deployed thereafter ineffective. However, securely managing many random passwords is a great challenge for human memory. We propose a visual cryptography technique that allows users to store and manage ciphertexts of randomly chosen passwords on a mobile phone and decrypt them \emph{manually} on demand. The stored passwords remain confidential even if the mobile phone is infected by spyware (assuming the spyware can capture the phone screen and monitor the phone's CPU and RAM).
We also analyze the security and feasibility of the proposed method.
Leveraging this technique, we present a simple password-based access control system, which provides a low-cost alternative for legacy systems alongside smart-card-based solutions.
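For background, here is a minimal sketch of the classic Naor-Shamir 2-out-of-2 visual secret sharing scheme. This is the textbook construction, not necessarily the paper's exact manual-decryption design: each secret pixel expands to two subpixels per share; stacking the transparencies (pixel-wise OR) makes black pixels fully black and white pixels half black, which the eye decodes by contrast, with no computation needed.

    import secrets

    PATTERNS = [(0, 1), (1, 0)]  # the two possible subpixel patterns

    def share_image(secret):  # secret: 2D list of 0 (white) / 1 (black)
        share1, share2 = [], []
        for row in secret:
            r1, r2 = [], []
            for pixel in row:
                p = PATTERNS[secrets.randbelow(2)]
                r1.extend(p)
                # white: same pattern (stack = half black); black: complement (all black)
                r2.extend(p if pixel == 0 else (1 - p[0], 1 - p[1]))
            share1.append(r1)
            share2.append(r2)
        return share1, share2

    def stack(s1, s2):  # physical stacking = pixel-wise OR
        return [[a | b for a, b in zip(r1, r2)] for r1, r2 in zip(s1, s2)]

    secret = [[1, 0, 1],
              [0, 1, 0]]
    s1, s2 = share_image(secret)
    for row in stack(s1, s2):
        print("".join("#" if v else "." for v in row))

Each share on its own is uniformly random, so a spyware-captured share leaks nothing about the password.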
Cryptanalysis of multi-HFE
Multi-HFE (Chen et al., 2009) is a cryptosystem whose public key is a set of multivariate quadratic forms over a finite field. Its quadratic forms are constructed from a set of multivariate quadratic forms over an extension field. Recently, Bettale et al. (2013) studied the security of HFE and multi-HFE against the min-rank attack and found that multi-HFE is no more secure than HFE of similar size. In the present paper, we propose a new attack on multi-HFE using a diagonalization approach. As a result, our attack can recover equivalent secret keys of multi-HFE in polynomial time in the odd-characteristic case. In fact, we experimentally succeeded in recovering equivalent secret keys of several examples of multi-HFE in about fifteen seconds on average, whereas the min-rank attack required about nine days.
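For background on the diagonalization step, the following generic sketch diagonalizes a symmetric matrix (equivalently, a quadratic form) by congruence over a small odd-characteristic field. It illustrates only the basic linear-algebra operation, not the authors' key-recovery attack on the extension-field structure; the prime and the example matrix are arbitrary.

    P = 7  # small odd prime for illustration

    def inv(a: int) -> int:
        return pow(a, P - 2, P)

    def diagonalize(M):
        """Return (S, D) with D = S^T M S diagonal over GF(P), P an odd prime."""
        n = len(M)
        A = [[a % P for a in row] for row in M]
        S = [[int(i == j) for j in range(n)] for i in range(n)]

        def add_col(dst, src, c):
            # congruently add c * (column src) to column dst: A <- E^T A E
            for r in range(n):
                A[r][dst] = (A[r][dst] + c * A[r][src]) % P
            for r in range(n):
                A[dst][r] = (A[dst][r] + c * A[src][r]) % P
            for r in range(n):
                S[r][dst] = (S[r][dst] + c * S[r][src]) % P

        for i in range(n):
            if A[i][i] == 0:
                for j in range(i + 1, n):
                    if A[i][j] != 0:
                        # pick c = +1 or -1 so the new pivot 2c*A_ij + c^2*A_jj
                        # is nonzero; one of the two works since char(F) is odd
                        c = 1 if (2 * A[i][j] + A[j][j]) % P else P - 1
                        add_col(i, j, c)
                        break
            if A[i][i] == 0:
                continue  # the remaining row/column i is entirely zero
            for j in range(i + 1, n):
                if A[i][j]:
                    add_col(j, i, (-A[i][j] * inv(A[i][i])) % P)
        return S, A

    A0 = [[0, 1, 2], [1, 3, 0], [2, 0, 5]]
    S, D = diagonalize(A0)
    assert all(D[i][j] == 0 for i in range(3) for j in range(3) if i != j)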
Students and Taxes: a Privacy-Preserving Social Study Using Secure Computation
We describe the use of secure multi-party computation for performing a large-scale privacy-preserving statistical study on real government data. In 2015, statisticians from the Estonian Center of Applied Research (CentAR) conducted a big-data study looking for correlations between working during university studies and failing to graduate on time. The study was conducted by linking the database of individual tax payments from the Estonian Tax and Customs Board with the database of higher-education events from the Ministry of Education and Research. Data collection, preparation and analysis were conducted using the Sharemind secure multi-party computation system, which provided end-to-end cryptographic protection for the analysis. With ten million tax records and half a million education records analysed, this is the largest cryptographically private statistical study ever conducted on real data.
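As a minimal illustration of the kind of primitive underlying such systems, here is a sketch of 3-party additive secret sharing. This is illustrative only: Sharemind's actual protocol suite additionally provides secure multiplication, comparisons and database linking, and the names below are hypothetical.

    import secrets

    Q = 2 ** 64  # share arithmetic modulo 2^64

    def share(value: int) -> tuple[int, int, int]:
        s1, s2 = secrets.randbelow(Q), secrets.randbelow(Q)
        return s1, s2, (value - s1 - s2) % Q

    def reconstruct(s1: int, s2: int, s3: int) -> int:
        return (s1 + s2 + s3) % Q

    # Each party sums its shares of many records locally; only the final
    # aggregate is reconstructed, so individual tax records are never revealed.
    salaries = [1200, 950, 2300, 1780]
    shares = [share(v) for v in salaries]
    party_sums = [sum(col) % Q for col in zip(*shares)]
    assert reconstruct(*party_sums) == sum(salaries)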
A note on the optimality of frequency analysis vs. $\ell_p$-optimization
Naveed, Kamara, and Wright's recent paper "Inference Attacks on Property-Preserving Encrypted Databases" (ACM-CCS 2015) evaluated four attacks on encrypted databases, such as those based on the design of CryptDB (Popa et al., SOSP 2011). Two of these attacks---frequency analysis and $\ell_p$-optimization---apply to deterministically encrypted columns when there is a publicly available auxiliary data set that is ``well-correlated'' with the ciphertext column. In their experiments, frequency analysis performed at least as well as $\ell_p$-optimization for $p = 1, 2, 3$. We use maximum likelihood estimation to confirm their intuition and show that frequency analysis is an optimal cryptanalytic technique in this scenario.
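A minimal sketch of the frequency-analysis attack on a deterministically encrypted column: rank ciphertexts and auxiliary plaintexts by frequency and match them rank-for-rank. The data and names below are toy values, not the paper's real hospital and auxiliary distributions.

    from collections import Counter

    def frequency_attack(ciphertexts, auxiliary):
        # deterministic encryption preserves frequencies, so the i-th most
        # frequent ciphertext is guessed to be the i-th most frequent plaintext
        ct_ranked = [c for c, _ in Counter(ciphertexts).most_common()]
        pt_ranked = [p for p, _ in Counter(auxiliary).most_common()]
        return dict(zip(ct_ranked, pt_ranked))

    aux = ["O+"] * 38 + ["A+"] * 34 + ["B+"] * 9 + ["AB+"] * 3        # public stats
    column = ["c1"] * 370 + ["c2"] * 350 + ["c3"] * 95 + ["c4"] * 30  # encrypted DB
    print(frequency_attack(column, aux))  # {'c1': 'O+', 'c2': 'A+', ...}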