All papers in 2015 (Page 7 of 1255 results)
Randomizing the Montgomery Powering Ladder
In this paper, we present novel randomized techniques to enhance the Montgomery powering ladder. The proposed techniques increase resistance against side-channel attacks, especially the recently published correlation collision attacks in the horizontal setting. The first technique operates by randomly changing the state such that the difference between the registers varies, unpredictably, between two states. The second takes a random walk, albeit a tightly bounded one, along the possible addition chains required to compute an exponentiation. We also generalize the Montgomery powering ladder and present randomized (both left-to-right and right-to-left) $m$-ary exponentiation algorithms.
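For reference, the classical (unrandomized) ladder that these techniques randomize can be sketched as follows; its fixed pattern of one multiplication and one squaring per key bit, with both registers updated in every iteration, is the regularity the randomizations build on:

```python
def montgomery_ladder(g, k, p):
    """Compute g^k mod p with the Montgomery powering ladder.

    Invariant: R1 = R0 * g (mod p) after every iteration, and both
    registers are updated regardless of the key bit.
    """
    R0, R1 = 1, g % p
    for bit in bin(k)[2:]:          # scan key bits left to right
        if bit == '0':
            R1 = (R0 * R1) % p      # R1 <- R0 * R1
            R0 = (R0 * R0) % p      # R0 <- R0^2
        else:
            R0 = (R0 * R1) % p      # R0 <- R0 * R1
            R1 = (R1 * R1) % p      # R1 <- R1^2
    return R0

assert montgomery_ladder(7, 123, 1009) == pow(7, 123, 1009)
```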
Cryptanalysis of a Markov Chain Based User Authentication Scheme
Session key agreement protocols using smart cards are extremely popular in client-server environments for secure communication. Remote user authentication protocols play a crucial role in daily-life services such as e-banking, bill payment, online games, e-recharge, wireless sensor networks, medical systems, ubiquitous devices, etc. Recently, Djellali et al. proposed a session key agreement protocol using smart cards for ubiquitous devices. The main focus of this paper is to analyze the security pitfalls of this smart-card- and password-based user authentication scheme. We have carefully reviewed Djellali et al.'s scheme and found that it suffers from several security weaknesses, such as an off-line password guessing attack and a privileged insider attack. Moreover, we demonstrate that Djellali et al.'s scheme does not properly protect the secret key of the server and has an inefficient password change phase.
An Efficient Multi-Message Multi-Receiver Signcryption Scheme with Forward Secrecy on Elliptic Curves
Secure multicast communication is needed in a growing number of applications. Forward secrecy is of prime importance and ensures message confidentiality even if the long-term private key is compromised. We present an efficient construction of multi-message multi-receiver signcryption with forward secrecy on elliptic curves. It provides confidentiality, integrity, authenticity, non-repudiation, public verifiability, unforgeability and forward secrecy of multi-message multicast. It is efficient in computation cost and communication overhead, and is suitable for resource-constrained, IP-based secure multi-message multicast systems.
DAA-TZ: An Efficient DAA Scheme for Mobile Devices using ARM TrustZone
Direct Anonymous Attestation (DAA) has been studied for application to mobile devices based on ARM TrustZone. However, current solutions bring in extra performance overheads and security risks when adapting existing DAA schemes originally designed for PC platforms. In this paper, we propose a complete and efficient DAA scheme (DAA-TZ) specifically designed for mobile devices using TrustZone. By considering the application scenarios, DAA-TZ extends the interactive model of original DAA and provides anonymity for a device and its user against remote service providers. The proposed scheme requires only a one-time switch of TrustZone for the signing phase and elaborately takes pre-computation into account. Consequently, the frequent on-line signing needs at most three exponentiations on an elliptic curve. Moreover, we present the architecture for trusted mobile devices. The issues of key derivation and sensitive data management relying on a root of trust from an SRAM Physical Unclonable Function (PUF) are discussed. We implement a prototype system and execute DAA-TZ using MNT and BN curves with different security levels. The comparison results and performance evaluation indicate that our scheme meets the demanding requirements of mobile users in respect of both security and efficiency.
Homomorphic Signature Schemes - A survey
Homomorphic signature schemes are an important primitive for many applications, and since their introduction numerous solutions have been presented. In this work we provide the first exhaustive, complete, and up-to-date survey of the state of the art of homomorphic signature schemes. First, the general framework in which homomorphic signatures are defined is described, and it is shown how the currently available types of homomorphic signatures (linearly homomorphic signature schemes, homomorphic schemes supporting polynomial functions, fully homomorphic signature schemes, and homomorphic aggregate signature schemes) can be derived from this framework. In addition, this work presents a description of each scheme proposed so far together with the properties it provides. Furthermore, three use cases where homomorphic signature schemes can be employed are described: electronic voting, smart grids, and electronic health records. For each of these applications, the requirements that a homomorphic signature scheme should fulfill are defined and the suitable schemes already available are listed. This also highlights the shortcomings of current solutions. The work therefore concludes with several ideas for future research in the direction of homomorphic signature schemes.
Modelling ciphersuite and version negotiation in the TLS protocol
Real-world cryptographic protocols such as the widely used Transport Layer Security (TLS) protocol support many different combinations of cryptographic algorithms (called ciphersuites) and simultaneously support different versions. Recent advances in provable security have shown that most modern TLS ciphersuites are secure authenticated and confidential channel establishment (ACCE) protocols, but these analyses generally focus on single ciphersuites in isolation. In this paper we extend the ACCE model to cover protocols with many different sub-protocols, capturing both multiple ciphersuites and multiple versions, and define a security notion for secure negotiation of the optimal sub-protocol. We give a generic theorem that shows how secure negotiation follows, with some additional conditions, from the authentication property of secure ACCE protocols. Using this framework, we analyse the security of ciphersuite negotiation and of three variants of version negotiation in TLS, including a recently proposed mechanism for detecting fallback attacks.
Secure Execution Architecture based on PUF-driven Instruction Level Code Encryption
A persistent problem with program execution, despite numerous mitigation attempts, is its inherent vulnerability to the injection of malicious code. Equally unsolved is the susceptibility of firmware to reverse engineering, which undermines the manufacturer's code confidentiality. We propose an approach that solves both kinds of security problems by employing instruction-level code encryption combined with the use of a physical unclonable function (PUF). Our novel Secure Execution PUF-based Processor (SEPP) architecture is designed to minimize the attack surface, as well as the performance impact, and requires no significant changes to the development process. This is possible thanks to a tight integration of a PUF directly into the processor's instruction pipeline. Furthermore, cloud scenarios and distributed embedded systems alike inherently depend on remote execution; our approach supports this, as the secure execution environment need not be locally available at the developer's site. We implemented an FPGA-based prototype based on the OpenRISC Reference Platform. To assess our results, we performed a security analysis of the processor and evaluated the performance impact of the encryption. We show that the attack surface is significantly reduced compared to previous approaches, while the performance penalty is a reasonable factor of about 1.5.
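To make the mechanism concrete, here is a minimal, hypothetical sketch of instruction-level code encryption bound to a device-unique PUF secret. The key derivation below is our own illustrative choice, not SEPP's; the real design performs decryption inside the instruction pipeline and never exposes the PUF response to software:

```python
import hashlib

def ks_word(puf_secret: bytes, addr: int) -> int:
    # Hypothetical keystream derivation: one 32-bit word per instruction
    # address, bound to the device via the PUF-derived secret.
    d = hashlib.sha256(puf_secret + addr.to_bytes(8, "big")).digest()
    return int.from_bytes(d[:4], "big")

def encrypt_firmware(words, puf_secret):
    # Encrypt each 32-bit instruction word (XOR stream cipher).
    return [w ^ ks_word(puf_secret, a) for a, w in enumerate(words)]

def fetch_and_decrypt(ciphertext, puf_secret, addr):
    # Model of the fetch stage: decrypt each instruction just-in-time.
    return ciphertext[addr] ^ ks_word(puf_secret, addr)

puf_secret = b"\x13\x37" * 16                 # stand-in for a PUF response
plain = [0x15000000, 0x9C21FFFC, 0xD7E117FC]  # arbitrary example words
enc = encrypt_firmware(plain, puf_secret)
assert all(fetch_and_decrypt(enc, puf_secret, a) == w
           for a, w in enumerate(plain))
```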
A New Encryption Standard of Ukraine: The Kalyna Block Cipher
The Kalyna block cipher was selected during the Ukrainian National Public Cryptographic Competition (2007-2010), and a slight modification of it was approved as the new encryption standard of Ukraine in 2015. The main requirements for Kalyna were both a high security level and high performance of software implementations on general-purpose 64-bit CPUs. The cipher has an SPN-based (Rijndael-like) structure with an increased MDS matrix size, a new set of four different S-boxes, pre- and post-whitening using addition modulo $2^{64}$, and a new construction of the key schedule. Kalyna supports block sizes and key lengths of 128, 256 and 512 bits (the key length can be either equal to or double the block size). At the time of publication, no cryptanalytic attacks more effective than exhaustive search are known. In this paper we present the adapted English translation of the Kalyna specification as given in the national standard of Ukraine.
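The whitening operation is easy to illustrate: the state is viewed as 64-bit words, and the pre- and post-whitening keys are injected by word-wise addition modulo $2^{64}$ (the inner rounds combine round keys by XOR instead). A minimal sketch with made-up example values; the standard's word ordering and the rest of the round function are not modelled:

```python
MASK64 = (1 << 64) - 1

def whiten(state, key_words):
    # Pre-/post-whitening key injection: word-wise addition mod 2^64.
    return [(s + k) & MASK64 for s, k in zip(state, key_words)]

# A 128-bit block as two 64-bit words (illustrative values only).
state = [0x0123456789ABCDEF, 0xFEDCBA9876543210]
key   = [0x0F0F0F0F0F0F0F0F, 0xF0F0F0F0F0F0F0F0]
print([f"{w:016x}" for w in whiten(state, key)])
```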
On the Hardness of Proving CCA-security of Signed ElGamal
The well-known Signed ElGamal scheme consists of ElGamal encryption with a non-interactive Schnorr proof of knowledge. While this scheme should intuitively be secure against chosen-ciphertext attacks in the random oracle model, its security has neither been proven nor disproven without relying on further non-standard assumptions like the generic group model. Currently, the best known positive result is that Signed ElGamal is non-malleable under chosen-plaintext attacks. In this paper we provide evidence that Signed ElGamal may not be CCA secure in the random oracle model. That is, building on previous work of Shoup and Gennaro (Eurocrypt'98), Seurin and Treger (CT-RSA 2013), and Bernhard et al. (PKC 2015), we exclude a large class of potential reductions that could be used to establish CCA security of the scheme.
Adaptive Proofs of Knowledge in the Random Oracle Model
We formalise the notion of adaptive proofs of knowledge in the random oracle model, where the extractor has to recover witnesses for multiple, possibly adaptively chosen statements and proofs. We also discuss extensions to simulation soundness, as typically required for the "encrypt-then-prove" construction of strongly secure encryption from IND-CPA schemes.
Utilizing our model we show three results: (1) Simulation-sound adaptive proofs exist. (2) The "encrypt-then-prove" construction with a simulation-sound adaptive proof yields CCA security. This appears to be a "folklore" result, but one that has never been proven in the random oracle model; as a corollary, we obtain a new class of CCA-secure encryption schemes. (3) The Fiat-Shamir transformed Schnorr protocol is _not_ adaptively secure, and we discuss the implications of this limitation.
Our result not only separates adaptive proofs from proofs of knowledge, but also gives a strong hint why Signed ElGamal, as the most prominent encrypt-then-prove example, has not been proven CCA-secure without making further assumptions.
Efficient ephemeral elliptic curve cryptographic keys
We show how any pair of authenticated users can on-the-fly agree on an elliptic curve group that is unique to their communication session, unpredictable to outside observers, and secure against known attacks. Our proposal is suitable for deployment on constrained devices such as smartphones, allowing them to efficiently generate ephemeral parameters that are unique to any single cryptographic application such as symmetric key agreement. For such applications it thus offers an alternative to long-term usage of standardized or otherwise pre-generated elliptic curve parameters, obtaining security against cryptographic attacks aimed at other users, and eliminating the need to trust elliptic curves generated by third parties.
Decomposition attack on SASASASAS
We demonstrate the first attacks on SPN ciphers with 6, 7, 8, and 9 secret layers. In particular, we show a decomposition attack on the SASASASAS scheme when the S-box size $M$ and the block length $N$ satisfy the condition $M^2 < N$ (for example, an 8-bit S-box and a 128-bit block).
New Dynamic Provable Data Possession Protocols with Public Verifiability and Data Privacy
An efficient Dynamic Provable Data Possession scheme with Public Verifiability and Data Privacy was recently published at ACISP'15. It appears that three attacks threaten this scheme. The first enables the server to store only one block of a file $m$ and still pass the data integrity verification on any number of file blocks. The second permits the server to keep the old version of a file block $m_{i}$ and the corresponding verification metadata $T_{m_{i}}$, after the client has asked to modify them by sending the new versions of these elements, and still pass the data integrity verification. The last attack allows the Third Party Auditor (TPA) to distinguish files when performing data integrity checks, without accessing their contents.
In this paper, we propose several solutions to overcome all the aforementioned issues. For the first two attacks, we give two new constructions of the scheme, one using Index Hash Tables and the other based on Merkle Hash Trees, and we compare the efficiency of these two new systems with the previous one. For the third attack, we suggest a weaker security model for data privacy that applies to the new construction based on Index Hash Tables, and we use the existing strong model to prove data privacy for the new construction using Merkle Hash Trees.
The Pythia PRF Service
Conventional cryptographic services such as hardware security modules and software-based key-management systems offer the ability to apply a pseudorandom function (PRF) such as HMAC to inputs of a client's choosing. These services are used, for example, to harden stored password hashes against offline brute-force attacks.
We propose a modern PRF service called PYTHIA, designed to offer a level of flexibility, security, and ease of deployment lacking in prior approaches. The keystone of PYTHIA is a new cryptographic primitive called a verifiable partially-oblivious PRF that reveals a portion of an input message to the service but hides the rest. We give a construction that additionally supports efficient bulk rotation of previously obtained PRF values to new keys. Performance measurements show that our construction, which relies on bilinear pairings and zero-knowledge proofs, is highly practical. We also give accompanying formal definitions and proofs of security.
We implement PYTHIA as a multi-tenant, scalable PRF service that can scale up to hundreds of millions of distinct client applications on commodity systems. In our prototype implementation, query latencies are 15 ms in local-area settings and throughput is within a factor of two of a standard HTTPS server. We further report on implementations of two applications using PYTHIA, showing how to bring its security benefits to a new enterprise password storage system and a new brainwallet system for Bitcoin.
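Pythia's primitive is a verifiable partially-oblivious PRF over bilinear pairing groups; the following toy sketch shows only the blinding round trip that keeps the message hidden from the service, transplanted into a plain prime-order subgroup with illustrative parameters (no tweak, no verifiability, and far-too-small numbers):

```python
import hashlib, secrets

# Toy parameters: p = 2q + 1 with p and q prime, so the squares mod p
# form a subgroup of prime order q.
q, p = 53, 107

def hash_to_group(msg: bytes) -> int:
    # Hash into the order-q subgroup of squares mod p (illustrative).
    h = int.from_bytes(hashlib.sha256(msg).digest(), "big") % (p - 1) + 1
    return pow(h, 2, p)

k = secrets.randbelow(q - 1) + 1            # server-held PRF key

# Client: blind the password so the service never sees H(m).
m = b"hunter2"
r = secrets.randbelow(q - 1) + 1
blinded = pow(hash_to_group(m), r, p)

# Server: exponentiate the blinded element with its key.
response = pow(blinded, k, p)

# Client: strip the blinding with r^{-1} mod q, leaving H(m)^k.
prf_value = pow(response, pow(r, -1, q), p)
assert prf_value == pow(hash_to_group(m), k, p)
```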
Short Accountable Ring Signatures Based on DDH
Ring signatures and group signatures are prominent cryptographic primitives offering a combination of privacy and authentication. They enable individual users to anonymously sign messages on behalf of a group of users. In ring signatures, the group, i.e., the ring, is chosen in an ad hoc manner by the signer. In group signatures, group membership is controlled by a group manager.
Group signatures additionally enforce accountability by providing the group manager with a secret tracing key that can be used to identify the otherwise anonymous signer when needed.
Accountable ring signatures, introduced by Xu and Yung (CARDIS 2004), bridge the gap between the two notions. They provide maximal flexibility in choosing the ring, and at the same time maintain accountability by supporting a designated opener that can identify signers when needed.
We revisit accountable ring signatures and offer a formal security model for the primitive. Our model offers strong security definitions incorporating protection against maliciously chosen keys and at the same time flexibility both in the choice of the ring and the opener.
We give a generic construction using standard tools.
We give a highly efficient instantiation of our generic construction in the random oracle model by meticulously combining Camenisch's group signature scheme (CRYPTO 1997) with a generalization of the one-out-of-many proofs of knowledge by Groth and Kohlweiss (EUROCRYPT 2015). Our instantiation yields signatures of logarithmic size (in the size of the ring) while relying solely on the well-studied decisional Diffie-Hellman assumption.
In the process, we offer a number of optimizations for the recent Groth and Kohlweiss one-out-of-many proofs, which may be useful for other applications.
Accountable ring signatures imply traditional ring and group signatures. We therefore also obtain highly efficient instantiations of those primitives with signatures shorter than all existing ring signatures as well as existing group signatures relying on standard assumptions.
A New Partial Key Exposure Attack on Multi-power RSA
An important attack on multi-power RSA ($N=p^rq$) was introduced by Sarkar in 2014, extending the small private exponent attack of Boneh and Durfee on classical RSA. In particular, he showed that $N$ can be factored efficiently for $r=2$ with a private exponent $d$ satisfying $d<N^{0.395}$. In this paper, we generalize this work by introducing a new partial key exposure attack for finding small roots of polynomials using Coppersmith's algorithm and Gröbner basis computation. Our attack works for all multi-power RSA exponents $e$ (resp. $d$) when the exponent $d$ (resp. $e$) has full-size bit length. The attack requires prior knowledge of the least significant bits (LSBs), and has the property that the required known portion of LSBs becomes smaller as the size of $e$ grows. For practical validation of our attack, we demonstrate several computer algebra experiments.
Noise-Free Symmetric Fully Homomorphic Encryption Based on Non-Commutative Rings
A framework for noise-free symmetric fully homomorphic encryption (FHE) is proposed in this work. Different from frameworks that are defined over non-commutative groups, our framework is constructed from matrices over non-commutative rings. The scheme is one-way secure against chosen plaintext attacks (OW-CPA), based on the factorization problem of matrices over non-commutative rings as well as the hardness of an overdefined system of multivariate polynomial equations over the given non-commutative algebraic structure. On the basis of this framework, a verifiable FHE is proposed, where the receiver can check the validity of ciphertexts.
Very-efficient simulatable flipping of many coins into a well
Secure two-party parallel coin-flipping is a cryptographic functionality that allows two mutually distrustful parties to agree on a common random bit-string of a certain target length. In coin-flipping into-a-well, one party learns the bit-string and then decides whether to abort or to allow the other party to learn it. It is well known that this functionality can be securely achieved in the ideal/real simulation paradigm, using commitment schemes that are simultaneously extractable (X) and equivocable (Q).
This paper presents two new constant-round simulatable coin-flipping protocols, based explicitly on one or a few X-commitments of short seeds and a Q-commitment of a short hash, independently of the large target length. A pseudo-random generator and a collision-resistant hash function are used to combine the separate X and Q properties (associated with short bit-strings) into a unified X&Q property amplified to the target length, thus amortizing the cost of the base commitments. In this way, the new protocols are significantly more efficient than an obvious batching or extension of coin-flippings designed (in the same security setting) for short bit-strings and based on inefficient X&Q commitments.
The first protocol, simulatable with rewinding, deviates from the traditional coin-flipping template in order to improve simulatability in case of unknown adversarial probabilities of abort, without having to use a X&Q commitment scheme. The second protocol, one-pass simulatable, derives from a new construction of a universally composable X&Q commitment scheme for large bit-strings, achieving communication-rate asymptotically close to 1. Besides the base X and Q commitments, the new commitment scheme only requires corresponding collision-resistant hashing, pseudo-random generation and application of a threshold erasure code. Alternative constructions found in recent work with comparable communication complexity require explicit use of oblivious transfer and use different encodings of the committed value.
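For orientation, the traditional commit-then-reveal template for coin-flipping into a well, which the first protocol deviates from, can be sketched as follows. A plain hash commitment stands in for the extractable-and-equivocable (X&Q) commitments that actual simulatability requires:

```python
import hashlib, secrets

TARGET_LEN = 32   # bytes of common randomness to agree on

def commit(value: bytes):
    # Plain hash commitment: hiding/binding only heuristically; the
    # simulatable protocols need extractable AND equivocable commitments.
    opening = secrets.token_bytes(32)
    return hashlib.sha256(opening + value).digest(), opening

# Party A commits to its random contribution.          (A -> B: com)
a_contrib = secrets.token_bytes(TARGET_LEN)
com, opening = commit(a_contrib)

# Party B answers with its contribution in the clear.  (B -> A: b_contrib)
b_contrib = secrets.token_bytes(TARGET_LEN)

# A now knows the result (the coins are "in the well") and decides
# whether to abort or to open, letting B learn the result too.
result = bytes(x ^ y for x, y in zip(a_contrib, b_contrib))

# (A -> B: a_contrib, opening)  B verifies and derives the same string.
assert hashlib.sha256(opening + a_contrib).digest() == com
assert bytes(x ^ y for x, y in zip(a_contrib, b_contrib)) == result
```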
Polynomial Time Reduction from Approximate Shortest Vector Problem to Principal Ideal Problem for Lattices in Some Cyclotomic Rings
Many cryptographic schemes have been established based on the hardness of lattice problems. For asymptotic efficiency, ideal lattices in rings of cyclotomic integers are suggested for use in most such schemes. On the other hand, one of the main problems in computational algebraic number theory is the principal ideal problem (PIP): to find a generator of any principal ideal in the ring of algebraic integers of any number field. In this paper we give a polynomial time reduction from the approximate shortest vector problem for principal ideal lattices to their PIPs in cyclotomic integer rings of extension degrees $\phi(n)=2^{k-1}, k=2,3,\ldots$. Thus, if a polynomial time quantum algorithm for the PIP of arbitrary number fields could be devised, this would imply that the approximate SVP for principal ideal lattices, within a polynomial factor, in this kind of cyclotomic integer ring can be solved by a polynomial time quantum algorithm.
An Efficient Many-Core Architecture for Elliptic Curve Cryptography Security Assessment
Elliptic Curve Cryptography (ECC) is a popular tool to construct public-key cryptosystems. The security of ECC is based on the hardness of the elliptic curve discrete logarithm problem (ECDLP). Implementing and analyzing the performance of the best known methods to solve the ECDLP is useful to assess the security of ECC and choose security parameters in practice. We present a novel many-core hardware architecture implementing the parallel version of Pollard's rho algorithm to solve the ECDLP. This architecture results in a speed-up of almost 300% compared to the state of the art, and we use it to estimate the monetary cost of solving the Certicom ECCp-131 challenge using FPGAs.
A Novel Cyberspace-Oriented Access Control Model
With the development of mobile communication, networks, and information technology, many new information service patterns and dissemination modes have emerged, bringing security and privacy threats in access control: the ownership of data is separated from its administration, there is secondary/multiple information distribution, etc. Existing access control models, which are usually proposed for specific scenarios, can hardly achieve fine-grained and adaptive access control. In this paper, we propose a novel Cyberspace-oriented Access Control model, termed CoAC, which avoids the aforementioned threats by comprehensively considering vital factors such as the access requesting entity, general tense, access point, resource, device, networks, internet-based interactive graph, and chain of resource transmission. By appropriately adjusting these factors, CoAC covers most typical access control models and fulfills the requirements of new information service patterns and dissemination modes. We also present the administrative model of CoAC and formally describe its administrative functions and methods using Z-notation. CoAC is flexible and scalable, and can be further refined and expanded to address new opportunities and challenges in upcoming access control techniques.
On Stream Ciphers with Provable Beyond-the-Birthday-Bound Security against Time-Memory-Data Tradeoff Attacks
We propose and analyze the LIZARD-construction, a way to construct keystream generator (KSG) based stream ciphers with provable $\frac{2}{3} n$-security with respect to generic time-memory-data tradeoff attacks. Note that for the vast majority of known practical KSG-based stream ciphers, such attacks reduce the effective key length to the birthday bound $n/2$, where $n$ denotes the inner state length of the underlying KSG. This implies that practical stream ciphers must have a comparatively large inner state length (e.g., $n=288$ bits for Trivium and $n=160$ bits for Grain v1).
The LIZARD-construction proposes a state initialization algorithm for stream ciphers working in packet mode (like the GSM cipher A5/1 or the Bluetooth cipher $E_0$). The proposal is that for each packet $i$ the packet initial state $q^i_{init}$ is computed from the secret session key $k$ and the packet initial value $IV^{i}$ via $q^i_{init}=P(k\oplus IV^{i})\oplus k$, where $P$ denotes a state mixing algorithm. Note that the recently published cipher LIZARD (see ePrint 2016/926), a stream cipher having inner state length of only $121$ bit, is a lightweight practical instantiation of our proposal, which is competitive w.r.t. the usual hardware and power consumption metrics.
The main technical contribution of this paper is to introduce a formal ideal primitive model for KSG-based stream ciphers and to show the sharp $\frac{2}{3} n$-bound for the security of the LIZARD-construction against generic time-memory-data tradeoff attacks.
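A small sketch of the packet-mode state initialization described above, with a toy stand-in for the mixing algorithm $P$ (LIZARD's real $P$ is the cipher's own state mixing; the final XOR with $k$ is the keying step that the $\frac{2}{3} n$-bound rests on):

```python
import secrets

STATE_BITS = 121          # inner state length of the LIZARD instantiation
MASK = (1 << STATE_BITS) - 1

def toy_P(x: int) -> int:
    # Toy stand-in for the state-mixing permutation P (NOT LIZARD's real
    # mixing): odd-constant multiplication and xorshift, both bijective.
    for _ in range(5):
        x = (x * 0x9E3779B97F4A7C15) & MASK
        x ^= x >> 61
    return x

def packet_init(key: int, iv: int) -> int:
    # q_init = P(k XOR IV) XOR k, as in the LIZARD-construction.
    return toy_P(key ^ iv) ^ key

key = secrets.randbits(STATE_BITS)
for packet in range(3):
    iv = secrets.randbits(STATE_BITS)   # fresh per-packet IV
    q_init = packet_init(key, iv)       # distinct initial state per packet
```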
Microcash: Efficient Off-Line Small Payments
An off-line electronic cash scheme is proposed that is suitable for small payments. The approach is innovative in that each coin may be efficiently verified by the same or different merchants during payment. The scheme relies on a batch signature technique to efficiently sign and verify individually spent coins; coins may also be deposited in a batch manner. The scheme outlined differs considerably from conventional micropayment schemes by providing a number of cash-like properties, such as off-line processing, detection of double-spent coins, and the ability to spend at different merchants. Additionally, the scheme eliminates a number of processing overheads that are apparent in some existing micropayment schemes.
Phasing: Private Set Intersection using Permutation-based Hashing
Private Set Intersection (PSI) allows two parties to compute the intersection of private sets while revealing nothing more than the intersection itself. PSI needs to be applied to large data sets in scenarios such as measurement of ad conversion rates, data sharing, or contact discovery. Existing PSI protocols do not scale up well, and therefore some applications use insecure solutions instead.
We describe a new approach for designing PSI protocols based on permutation-based hashing, which makes it possible to reduce the length of the items mapped to bins while ensuring that no collisions occur. We denote this approach as Phasing, for Permutation-based Hashing Set Intersection. Phasing can dramatically improve the performance of PSI protocols whose overhead depends on the length of the representations of input items.
We apply Phasing to design a new approach for circuit-based PSI protocols. The resulting protocol is up to 5 times faster than the previously best Sort-Compare-Shuffle circuit of Huang et al. (NDSS 2012). We also apply Phasing to the OT-based PSI protocol of Pinkas et al. (USENIX Security 2014), which is the fastest PSI protocol to date. Together with additional improvements that reduce the computation complexity by a logarithmic factor, the resulting protocol improves run-time by a factor of up to 20 and can also have communication overhead similar to the previously best PSI protocol in that respect. The new protocol is only moderately less efficient than an insecure PSI protocol that is currently used by real-world applications, and is therefore the first secure PSI protocol that is scalable to the demands and the constraints of current real-world settings.
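The length reduction rests on a hashing trick in which part of an item's bits are folded into the choice of bin. A small sketch, with illustrative sizes:

```python
import hashlib

D = 8                                   # bins are indexed by D bits
bins = [[] for _ in range(1 << D)]

def f(right: int) -> int:
    # Stand-in for the random function used by permutation-based hashing.
    h = hashlib.sha256(right.to_bytes(8, "big")).digest()
    return int.from_bytes(h[:4], "big") & ((1 << D) - 1)

def insert(x: int, item_bits: int = 32) -> None:
    # Split x into a D-bit left part and an (item_bits - D)-bit right
    # part; the bin index is left XOR f(right), and only 'right' is stored.
    left = x >> (item_bits - D)
    right = x & ((1 << (item_bits - D)) - 1)
    bins[left ^ f(right)].append(right)

# The map x -> (bin index, stored value) is injective: from the bin index
# and f(right) one can recover 'left'.  Hence comparing the shortened
# stored values within a bin is as good as comparing the full items,
# which is exactly the length reduction that Phasing exploits.
for x in (0xDEADBEEF, 0x12345678, 0xCAFEBABE):
    insert(x)
```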
An Efficient ID-Based Message Recoverable Privacy-Preserving Auditing Scheme
One of the most important benefits of public cloud storage is the outsourcing of management and maintenance with easy accessibility and retrievability over the internet. However, outsourcing data to the cloud brings new challenges such as integrity verification and privacy of data. More concretely, once users outsource their data to the cloud they no longer have physical control over it, which leads to the integrity protection issue. Hence, it is crucial to guarantee proof of data storage and integrity of the outsourced data. Several pairing-based auditing solutions have been proposed utilizing Boneh-Lynn-Shacham (BLS) short signatures; they basically provide the desirable and efficient property of non-repudiation. In this work, we propose the first ID-based privacy-preserving public auditing scheme with message-recoverable signatures. Thanks to the message-recoverable signatures, the message itself is implicitly included in the verification step, which was not possible in previously proposed auditing schemes. Furthermore, we point out that the algorithm suites of existing schemes are either insecure or very inefficient due to the choice of the underlying bilinear map and its baseline parameter selections. We show that our scheme is more efficient than recently proposed auditing schemes based on BLS-like short signatures.
On the Impossibility of Virtual Black-Box Obfuscation in Idealized Models
The celebrated work of Barak et al. (Crypto'01) ruled out the possibility of virtual black-box (VBB) obfuscation for general circuits. The recent work of Canetti, Kalai, and Paneth (TCC'15) extended this impossibility to the random oracle model, assuming the existence of trapdoor permutations (TDPs). On the other hand, the works of Barak et al. (Crypto'14) and Brakerski and Rothblum (TCC'14) showed that general VBB obfuscation is indeed possible in idealized graded encoding models. The recent work of Pass and Shelat (Cryptology ePrint 2015/383) complemented this result by ruling out general VBB obfuscation in idealized graded encoding models that enable evaluation of constant-degree polynomials in finite fields.
In this work, we extend the above two impossibility results for general VBB obfuscation in idealized models. In particular, we prove the following two results, both assuming the existence of trapdoor permutations:
* There is no general VBB obfuscation in the generic group model of Shoup (Eurocrypt'97) for any abelian group. By applying our techniques to the setting of Pass and Shelat we extend their result to any (even non-commutative) finite ring.
* There is no general VBB obfuscation in the random trapdoor permutation oracle model. Note that, as opposed to the random oracle, which is an idealized symmetric primitive, a random trapdoor permutation is an idealized public-key primitive.
Accelerating Homomorphic Evaluation on Reconfigurable Hardware
Homomorphic encryption allows computation on encrypted data and makes it possible to securely outsource computational tasks to untrusted environments. However, all proposed schemes are quite inefficient and homomorphic evaluation of ciphertexts usually takes several seconds on high-end CPUs, even for evaluating simple functions. In this work we investigate the potential of FPGAs for speeding up those evaluation operations. We propose an architecture to accelerate schemes based on the ring learning with errors (RLWE) problem and specifically implemented the somewhat homomorphic encryption scheme YASHE, which was proposed by Bos, Lauter, Loftus, and Naehrig in 2013. Due to the large size of ciphertexts and evaluation keys, on-chip storage of all data is not possible and external memory is required. For efficient utilization of the external memory we propose an efficient double-buffered memory access scheme and a polynomial multiplier based on the number theoretic transform (NTT). For the parameter set (n=16384,log_2(q)=512) capable of evaluating 9 levels of multiplications, we can perform a homomorphic addition in 48.67 and a homomorphic multiplication in 0.94 ms.
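The NTT at the core of such a polynomial multiplier is easy to sketch. Below is a naive $O(n^2)$ transform over toy parameters, computing a cyclic convolution mod $x^n - 1$; YASHE itself works with far larger, negacyclic parameters, and the hardware uses an FFT-style butterfly NTT rather than this schoolbook form:

```python
q, n = 257, 8                      # q prime with n | q - 1
w = pow(3, (q - 1) // n, q)        # primitive n-th root of unity
                                   # (3 generates the full group mod 257)

def ntt(a, root):
    # Naive O(n^2) number theoretic transform: evaluate at root^i.
    return [sum(a[j] * pow(root, i * j, q) for j in range(n)) % q
            for i in range(n)]

def poly_mul(a, b):
    # Pointwise multiply in the NTT domain, then invert and rescale.
    A, B = ntt(a, w), ntt(b, w)
    C = [x * y % q for x, y in zip(A, B)]
    n_inv, w_inv = pow(n, -1, q), pow(w, -1, q)
    return [c * n_inv % q for c in ntt(C, w_inv)]

a = [1, 2, 3, 4, 0, 0, 0, 0]
b = [5, 6, 7, 0, 0, 0, 0, 0]
print(poly_mul(a, b))              # -> [5, 16, 34, 52, 45, 28, 0, 0]
```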
Unconditionally Secure Computation with Reduced Interaction
We study the question of how much interaction is needed for unconditionally secure multiparty computation. We first consider the number of messages that need to be sent to compute a Boolean function with semi-honest security, where all $n$ parties learn the result. We consider two classes of functions called $t$-difficult and $t$-very difficult functions, where $t$ refers to the number of corrupted players; one class is contained in the other. For instance, the AND of an input bit from each player is $t$-very difficult, while the XOR is $t$-difficult but not $t$-very difficult. We show lower bounds on the message complexity of both types of functions, considering two notions of message complexity called conservative and liberal, where the conservative one is the more standard. In all cases the bounds are $\Omega(nt)$. We also show upper bounds for $t=1$ and functions in deterministic log-space, as well as a stronger upper bound for the XOR function. This matches the lower bound for conservative complexity, so we find that the conservative message complexity of $1$-very difficult functions in deterministic log-space is $2n$, while the conservative message complexity for XOR (and $t=1$) is $2n-1$.
Next, we consider round complexity. It is a long-standing open problem to determine whether all efficiently computable functions can also be efficiently computed in constant rounds with _unconditional_ security. Motivated by this, we consider the question of whether we can compute any function securely while minimizing the interaction of _some of_ the players, and if so, how many players this can apply to. Note that we still want the standard security guarantees (correctness, privacy, termination), and we consider the standard communication model with secure point-to-point channels. We answer the questions as follows: for passive security, with $n=2t+1$ players and $t$ corruptions, up to $t$ players can have minimal interaction, i.e., they send one message in the first round to each of the $t+1$ remaining players and receive one message from each of them in the last round. Using our result on message complexity, we show that this is (unconditionally) optimal. For malicious security with $n=3t+1$ players and $t$ corruptions, up to $t$ players can have minimal interaction, and we show that this is also optimal.
BeleniosRF: A Non-interactive Receipt-Free Electronic Voting Scheme
We propose a new voting scheme, BeleniosRF, that offers both receipt-freeness and end-to-end verifiability. It is receipt-free in a strong sense, meaning that even dishonest voters cannot prove how they voted. We provide a game-based definition of receipt-freeness for voting protocols with non-interactive ballot casting, which we name strong receipt-freeness (sRF). To our knowledge, sRF is the first game-based definition of receipt-freeness in the literature, and it has the merit of being particularly concise and simple. Built upon the Helios protocol, BeleniosRF inherits its simplicity and does not require any anti-coercion strategy from the voters. We implement BeleniosRF and show its feasibility on a number of platforms, including desktop computers and smartphones.
An Unconditionally Hiding and Long-Term Binding Post-Quantum Commitment Scheme
Commitment schemes are among cryptography's most important building blocks. Besides their basic properties, hiding and binding, for many applications it is important that the schemes support proofs of knowledge. However, all existing solutions that have been proven to provide such protocols are only computationally hiding or are not resistant against quantum adversaries. This is not suitable for long-lived systems, such as long-term archives, where commitments have to provide security in the long run as well. Thus, in this work we present a new post-quantum unconditionally hiding commitment scheme that supports (statistical) zero-knowledge protocols and allows the binding property to be refreshed over time. The binding property of our construction relies on the approximate shortest vector problem, a lattice problem which is conjectured to be hard for polynomial approximation factors, even for a quantum adversary. Furthermore, we provide a protocol that allows the committer to prolong the binding property of a given commitment while showing in zero-knowledge that the committed value did not change. In addition, our construction yields two more interesting features: one is the ability to "convert" a Pedersen commitment into a lattice-based one, and the other is the construction of a hybrid approach whose binding property relies on both the discrete logarithm and approximate shortest vector problems.
On Necessary Padding with IO
We show that the common proof technique of padding a circuit before IO obfuscation is sometimes necessary. That is, assuming indistinguishability obfuscation (IO) and one-way functions exist, we define samplers Sam_0, which outputs (aux_0, C_0), and Sam_1, which outputs (aux_1, C_1) such that:
- The distributions (aux_0, iO(C_0)) and (aux_1, iO(C_1)) are perfectly distinguishable.
- For padding s = poly(lambda), the distributions (aux_0, iO(C_0||0^s)) and (aux_1, iO(C_1||0^s)) are computationally indistinguishable.
We note this refutes the recent "Superfluous Padding Assumption" of Brzuska and Mittelbach.
Practical Round-Optimal Blind Signatures in the Standard Model
Round-optimal blind signatures are notoriously hard to construct in the standard model, especially in the malicious-signer model, where blindness must hold under adversarially chosen keys. This is substantiated by several impossibility results. The only construction that can be termed theoretically efficient, by Garg and Gupta (Eurocrypt'14), requires complexity leveraging, inducing an exponential security loss.
We present a construction of practically efficient round-optimal blind signatures in the standard model. It is conceptually simple and builds on the recent structure-preserving signatures on equivalence classes (SPS-EQ) from Asiacrypt'14. While the traditional notion of blindness follows from standard assumptions, we prove blindness under adversarially chosen keys under an interactive variant of DDH. However, we neither require non-uniform assumptions nor complexity leveraging.
We then show how to extend our construction to partially blind signatures and to blind signatures on message vectors, which yield a construction of one-show anonymous credentials à la "anonymous credentials light" (CCS'13) in the standard model.
Furthermore, we give the first SPS-EQ construction under non-interactive assumptions and show how SPS-EQ schemes imply conventional structure-preserving signatures, which allows us to apply optimality results for the latter to SPS-EQ.
Ed448-Goldilocks, a new elliptic curve
Many papers have proposed elliptic curves which are faster and easier to implement than the NIST prime-order curves. Most of these curves have had fields of size around $2^{256}$, and thus security estimates of around 128 bits. Recently there has been interest in a stronger curve, prompting designs such as Curve41417 and Microsoft's pseudo-Mersenne-prime curves.
Here I report on the design of another strong curve, called Ed448-Goldilocks. Implementations of this curve can perform very well for its security level on many architectures. As of this writing, this curve is favored by IRTF CFRG for inclusion in future versions of TLS along with Curve25519.
Automated Analysis and Synthesis of Authenticated Encryption Schemes
Authenticated encryption (AE) schemes are symmetric-key encryption schemes ensuring strong notions of confidentiality and integrity. Although various AE schemes are known, there remains significant interest in developing schemes that are more efficient, meet even stronger security notions (e.g., misuse-resistance), or satisfy certain non-cryptographic properties (e.g., being patent-free).
We present an automated approach for analyzing and synthesizing blockcipher-based AE schemes, significantly extending prior work by Malozemoff et al. (CSF 2014) who synthesize encryption schemes satisfying confidentiality only. Our main insight is to restrict attention to a certain class of schemes that is expressive enough to capture several known constructions yet also admits automated reasoning about security. We use our approach to generate thousands of AE schemes with provable security guarantees, both known (e.g., variants of OCB and CCM) and new. Implementing two of these new schemes, we find their performance competitive with state-of-the-art AE schemes.
Design, Evaluation and Optimization of Physical Unclonable Functions based on Transient Effect Ring Oscillators
This paper proposes a theoretical study and a full overview of the design, evaluation and optimization of a PUF based on transient effect ring oscillators (TERO-PUF). We show how, by following some simple design rules and strategies, designers can build and optimize a TERO-PUF with state-of-the-art PUF characteristics in a standard CMOS technology. To this end, we analyzed the uniqueness, steadiness and randomness of responses generated from 30 test chips in a CMOS 350nm process under nominal and corner voltage and temperature conditions. Response generation schemes are proposed and discussed to optimize the PUF performance and reduce its area without noticeable loss in its output quality. In particular, we show that the large area of the basic blocks in the TERO-PUF is balanced by the high level of entropy extracted from each basic block. Guidelines are provided to balance reliability and randomness of the responses and the design area.
Random Digit Representation of Integers
Modular exponentiation is core to today's mainstream public key cryptographic systems. In this article, we generalize the classical fractional $w$NAF method for modular exponentiation: the classical method uses a digit set of the form $\{1,3,\dots,m\}$, which is extended here to any set of odd integers of the form $\{1,d_2,\dots, d_n\}$. We give a formula for the average density of non-zero terms in this new representation and discuss its asymptotic behavior when the digits are randomly chosen from a given set. We also propose a specific method for the precomputation phase of the exponentiation algorithm.
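A simple right-to-left recoding in this spirit, choosing a random admissible odd digit whenever the remainder is odd, can be sketched as follows (a generic illustration of such representations, not the paper's exact recoding or precomputation method):

```python
import secrets

def random_digit_recode(n: int, digits=(1, 5, 7, 13)):
    """Recode n over {0} plus a set of odd digits containing 1.
    When the remainder is odd, an admissible digit is chosen at random,
    so repeated recodings of the same n generally differ."""
    assert 1 in digits and all(d % 2 == 1 for d in digits)
    out = []
    while n > 0:
        if n % 2 == 0:
            out.append(0)
            n //= 2
        else:
            d = secrets.choice([d for d in digits if d <= n])
            out.append(d)
            n = (n - d) // 2        # n - d is even; shift to next position
    return out                      # little-endian: n == sum(c * 2**i)

n = 0b1101101011
rep = random_digit_recode(n)
assert sum(c << i for i, c in enumerate(rep)) == n
```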
Who watches the watchmen?: Utilizing Performance Monitors for Compromising keys of RSA on Intel Platforms
Asymmetric-key cryptographic algorithms, when implemented on systems with branch predictors, are subject to side-channel attacks exploiting the deterministic branch predictor behavior due to their key-dependent input sequences. We show that branch predictors can also leak information through the hardware performance monitors, which are accessible to an adversary at the user-privilege level. This paper presents an iterative attack which targets the key bits of 1024-bit RSA: in the offline phase, the system's underlying branch predictor is approximated by a theoretical predictor from the literature. Sub-simulations are performed to classify the message space into distinct partitions based on the branch misprediction event and the target key-bit value. In the online phase, we ascertain the secret key bit using branch mispredictions obtained from the hardware performance monitors, which reflect the branch misses of the underlying predictor hardware. We theoretically show that the success probability of the attack is determined by how accurately the theoretical predictor models the underlying system predictor. Experiments reveal that the success rate increases with the message count and reaches a value significant enough that side-channel leakage from the performance counters must be considered a real threat to RSA-like ciphers on systems with branch predictors, and needs to be taken into account when developing secure systems.
Statistical Concurrent Non-malleable Zero-knowledge from One-way Functions
Concurrent non-malleable zero-knowledge (CNMZK) protocols are zero-knowledge protocols that provide security even when adversaries interact with multiple provers and verifiers simultaneously. It is known that CNMZK arguments for NP can be constructed in the plain model. Furthermore, it was recently shown that statistical CNMZK arguments for NP can also be constructed in the plain model. However, although the former requires only the existence of one-way functions, the latter requires the DDH assumption.
In this paper, we construct a statistical CNMZK argument for NP assuming only the existence of one-way functions. The security is proven via black-box simulation, and the round complexity is poly($n$). Furthermore, under the existence of collision-resistant hash functions, the round complexity is reduced to $\omega(\log n)$, which is essentially optimal for black-box concurrent zero-knowledge protocols.
Construction of Arithmetic Secret Sharing Schemes by Using Torsion Limits
Recent results of Cascudo, Cramer, and Xing on the construction of arithmetic secret sharing schemes are improved by using some new bounds on the torsion limits of algebraic function fields. Furthermore, new bounds on the torsion limits of certain towers of function fields are given.
An Authentication Code over Galois Rings with Optimal Impersonation and Substitution Probabilities
Two new systematic authentication codes based on the Gray map over a Galois ring are introduced. The first code attains optimal impersonation and substitution probabilities. The second code improves the space sizes, but does not attain optimal probabilities; moreover, it is conditioned on the existence of a special class of bent maps on Galois rings.
Generalised tally-based decoders for traitor tracing and group testing
We propose a new type of score function for Tardos traitor tracing codes. It is related to the recently introduced tally-based score function, but it utilizes more of the information available to the decoder. It does this by keeping track of sequences of symbols in the distributed codewords instead of looking at columns of the code matrix individually.
We derive our new class of score functions from a Neyman-Pearson hypothesis test and illustrate its performance with simulation results.
Finally we derive a score function for (medical) group testing applications.
The leaking battery: A privacy analysis of the HTML5 Battery Status API
We highlight the privacy risks associated with the HTML5 Battery Status API, with special focus on its implementation in the Firefox browser. Our study shows that websites can discover the capacity of users' batteries by exploiting the high-precision readouts provided by Firefox on Linux. The capacity of the battery, as well as its level, exposes a fingerprintable surface that can be used to track web users over short time intervals. Our analysis shows that the risk is much higher for old or used batteries with reduced capacities, as the battery capacity may potentially serve as a tracking identifier. The fingerprintable surface of the API could be drastically reduced, without any loss in the API's functionality, by reducing the precision of the readings. We propose minor modifications to the Battery Status API and its implementation in the Firefox browser to address the privacy issues presented in the study. Our bug report for Firefox was accepted and a fix has been deployed.
Security Analysis of Niu et al. Authentication and Ownership Management Protocol
Over the past decade, besides authentication, ownership management protocols have been suggested to transfer or delegate the ownership of RFID-tagged items. Recently, Niu et al. proposed an authentication and ownership management protocol based on 16-bit pseudo-random number generators and exclusive-or operations, both of which can be easily implemented on low-cost RFID passive tags in the EPCglobal Class-1 Generation-2 standard. They claim that their protocol offers location and data privacy and also resists desynchronization attacks. In this paper, we analyze the security of their proposed authentication and ownership management protocol and show that the protocol is vulnerable to secret disclosure and desynchronization attacks. The complexity of most of the attacks is only two runs of the protocol, and their success probability is almost 1.
Bit Security of the Hyperelliptic Curves Diffie-Hellman Problem
The Diffie-Hellman problem as a cryptographic primitive plays an important role in modern cryptology. The bit security, or hard-core bits, of the Diffie-Hellman problem in an arbitrary finite cyclic group is a long-standing open problem in cryptography; until now, only a few groups have been studied. Hyperelliptic curve cryptography is an alternative to elliptic curve cryptography: recent cryptanalytic results show that the best known algorithms to attack hyperelliptic curve cryptosystems of genus $g<3$ are the generic methods, and recent implementation results show that hyperelliptic curve cryptography in genus 2 has the potential to be competitive with its elliptic curve counterpart. In this paper, we generalize Boneh and Shparlinski's method and result on elliptic curves to the case of Jacobians of hyperelliptic curves. We prove that the least significant bit of each coordinate of the hyperelliptic curve Diffie-Hellman secret value in genus 2 is as hard as the entire Diffie-Hellman value, and we then show that any bit is as hard as the entire Diffie-Hellman value. Finally, we extend our techniques and results to hyperelliptic curves of any genus.
Accountable Authority Ciphertext-Policy Attribute-Based Encryption with White-Box Traceability and Public Auditing in the Cloud
As a sophisticated mechanism for secure fine-grained access control, ciphertext-policy attribute-based encryption (CP-ABE) is a highly promising solution for commercial applications such as cloud computing. However, one major issue remains to be solved: the prevention of key abuse. Most of the existing CP-ABE systems lack this critical functionality, hindering the wide utilization and commercial application of CP-ABE systems to date. In this paper, we address two practical problems concerning key abuse in CP-ABE: (1) the key escrow problem of the semi-trusted authority, and (2) the malicious key delegation problem of the users. For the semi-trusted authority, its misbehavior (i.e., illegal key (re-)distribution) should be caught and prosecuted. And for a user, his/her malicious behavior (i.e., illegal key sharing) needs to be traced. We affirmatively solve both key abuse problems by proposing the first accountable authority CP-ABE with white-box traceability that supports policies expressed in any monotone access structure. Moreover, we provide an auditor to judge publicly whether a suspected user is guilty or has been framed by the authority.
The Simeck Family of Lightweight Block Ciphers
Two lightweight block cipher families, SIMON and SPECK, were recently proposed by researchers from the NSA. In this paper, we introduce Simeck, a new family of lightweight block ciphers that combines the good design components of both SIMON and SPECK, in order to devise even more compact and efficient block ciphers. For Simeck32/64, we can achieve 505 GEs (before the Place and Route phase) and 549 GEs (after the Place and Route phase) with a power consumption of 0.417 $\mu W$ in CMOS 130nm ASIC, and 454 GEs (before the Place and Route phase) and 488 GEs (after the Place and Route phase) with a power consumption of 1.292 $\mu W$ in CMOS 65nm ASIC. Furthermore, all of the instances of Simeck are smaller than those of the hardware-optimized cipher SIMON in terms of area and power consumption in both CMOS 130nm and CMOS 65nm technologies. In addition, we also give a security evaluation of Simeck with respect to many traditional cryptanalysis methods, including differential attacks, linear attacks, impossible differential attacks, meet-in-the-middle attacks, and slide attacks. Overall, all of the instances of Simeck can satisfy the area, power, and throughput requirements of passive RFID tags.
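As a rough illustration of what Simeck borrows from SIMON, its round function follows SIMON's Feistel structure but, to our reading of the design, with rotation amounts (5, 0, 1). A sketch for the 16-bit word size of Simeck32/64, with made-up round keys (the real key schedule, which reuses the round function, is omitted):

```python
MASK16 = 0xFFFF

def rol16(x: int, r: int) -> int:
    # 16-bit rotate left.
    return ((x << r) | (x >> (16 - r))) & MASK16

def simeck_round(left: int, right: int, round_key: int):
    # One Feistel step: f(x) = (x AND (x <<< 5)) XOR (x <<< 1).
    f = (left & rol16(left, 5)) ^ rol16(left, 1)
    return (right ^ f ^ round_key) & MASK16, left

# Toy usage with illustrative (non-standard) round keys.
l, r = 0x6565, 0x6877
for rk in (0x0100, 0x0302, 0x0504, 0x0706):
    l, r = simeck_round(l, r, rk)
```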
A Unified Security Analysis of Two-phase Key Exchange Protocols in TPM 2.0
The Trusted Platform Module (TPM) version 2.0 provides an authenticated key exchange functionality by a single key exchange primitive, which can be called to implement three key exchange protocols (denoted as two-phase key exchange protocols in TPM 2.0): the Full Unified Model, the MQV, and the SM2 key exchange protocols. However, vulnerabilities have been found in all of these protocols. Fortunately, it seems that the protections provided by the TPM can deal with the vulnerabilities of these protocols. This paper investigates whether the TPM key exchange primitive provides a secure key exchange functionality under the protections of the TPM. We first perform an informal analysis of the TPM key exchange primitive, which helps us model it in a precise way. Then we formally analyze the TPM key exchange primitive in a security model for AKE, based on which all the protocols adopted by TPM 2.0 can be analyzed in a unified way. Our analysis indicates under what conditions TPM 2.0 can provide a provably secure key exchange functionality. In the end, we give suggestions on how to leverage the TPM key exchange primitive properly, and on how to improve the security of the current primitive to enable its wide use in practice.
McBits: fast constant-time code-based cryptography
This paper presents extremely fast algorithms for code-based public-key cryptography, including full protection against timing attacks. For example, at a 2^128 security level, this paper achieves a reciprocal decryption throughput of just 60493 cycles (plus cipher cost etc.) on a single Ivy Bridge core. These algorithms rely on an additive FFT for fast root computation, a transposed additive FFT for fast syndrome computation, and a sorting network to avoid cache-timing attacks.
Experimental Study of DIGIPASS GO3 and the Security of Authentication
Based on the analysis of $6$-digit one-time passwords (OTPs) generated by DIGIPASS GO3, we were able to reconstruct the synchronisation system of the token, the OTP generating algorithm, and the verification protocol in the details essential for an attack. The OTPs are more predictable than expected. A forgery attack is described. We argue that the attack success probability is $8^{-5}$. That is much higher than the $10^{-6}$ that would be expected if all the digits were independent and uniformly distributed. Under natural assumptions, even in a relatively small bank or company with $10^4$ customers, the number of compromised accounts during a year may be more than $100$.
Fully Secure Functional Encryption for Inner Products, from Standard Assumptions
Functional encryption is a modern public-key paradigm where a master secret key can be used to derive sub-keys $SK_F$ associated with certain functions $F$ in such a way that the decryption operation reveals $F(M)$, if $M$ is the encrypted message, and nothing else. Recently, Abdalla et al. gave simple and efficient realizations of the primitive for the computation of linear functions on encrypted data: given an encryption of a vector $\vec{y}$ over some specified base ring, a secret key $SK_{\vec{x}}$ for the vector $\vec{x}$ allows computing $\langle \vec{x} ,\vec{y} \rangle$. Their technique surprisingly allows for instantiations under standard assumptions, like the hardness of the Decision Diffie-Hellman (DDH) and Learning-with-Errors (LWE) problems. Their constructions, however, are only proved secure against selective adversaries, which have to declare the challenge messages $M_0$ and $M_1$ at the outset of the game.
In this paper, we provide constructions that provably achieve security against more realistic adaptive attacks (where the messages $M_0$ and $M_1$ may be chosen in the challenge phase, based on the previously collected information) for the same inner product functionality. Our constructions are obtained from hash proof systems endowed with homomorphic properties over the key space. They are (almost) as efficient as those of Abdalla et al. and rely on the same hardness assumptions.
In addition, we obtain a solution based on Paillier's composite residuosity assumption, which was an open problem even in the case of selective adversaries. We also propose LWE-based schemes that allow evaluation of inner products modulo a prime $p$, as opposed to the schemes of Abdalla et al. that are restricted to evaluations of integer inner products of short integer vectors. We finally propose a solution based on Paillier's composite residuosity assumption that enables evaluation of inner products modulo an RSA integer $N = pq$.
We demonstrate that the functionality of inner products over a prime field is powerful and can be used to construct bounded collusion FE for all circuits.
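To make the functionality concrete, the following is a minimal Python sketch of a DDH-style inner-product scheme in the spirit of the Abdalla et al. construction described above; the toy group parameters and the brute-force discrete logarithm at the end are our own simplifications for illustration, not the paper's adaptively secure schemes.

```python
# Toy inner-product FE: a key for x is the scalar <s, x>; decryption
# recovers g^{<x, y>} and solves a small discrete log. NOT secure parameters.
import random

p, q, g = 2039, 1019, 4      # toy safe-prime group; g generates the order-q subgroup

def setup(n):
    s = [random.randrange(q) for _ in range(n)]       # master secret key
    mpk = [pow(g, si, p) for si in s]                 # h_i = g^{s_i}
    return mpk, s

def encrypt(mpk, y):
    r = random.randrange(q)
    ct0 = pow(g, r, p)                                # g^r
    cts = [pow(h, r, p) * pow(g, yi, p) % p for h, yi in zip(mpk, y)]
    return ct0, cts                                   # ct_i = g^{r s_i + y_i}

def keygen(s, x):
    return sum(si * xi for si, xi in zip(s, x)) % q   # sk_x = <s, x> mod q

def decrypt(ct0, cts, x, sk_x):
    num = 1
    for c, xi in zip(cts, x):
        num = num * pow(c, xi, p) % p                 # g^{r<s,x> + <x,y>}
    val = num * pow(ct0, q - sk_x, p) % p             # divide out g^{r<s,x>}
    return next(v for v in range(q) if pow(g, v, p) == val)

mpk, s = setup(3)
y, x = [1, 2, 3], [2, 0, 1]                           # <x, y> = 5
ct0, cts = encrypt(mpk, y)
print(decrypt(ct0, cts, x, keygen(s, x)))             # 5
```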
Netcoin - A Traceable P2P Electronic Cash System
This paper introduces a new P2P electronic cash system called Netcoin. The purpose of Netcoin is to facilitate inexpensive peer-to-peer monetary transactions on the Web. Its salient features are that it is a traceable system with an efficient mechanism for verifying transactions. Netcoins are reusable and can be easily passed from one user to another. The issuing of virtual currency and the verification of transactions are performed by trusted mints, which act as the gateway between the fiat and virtual currency worlds. There is no need to maintain a public ledger, which would inhibit use on a global scale because of rapidly increasing memory and bandwidth requirements. The system is neither inflationary nor deflationary in nature and does not purport to introduce a new economic model. As a fiat-backed currency it should not suffer the volatility issues associated with Bitcoin. In this paper the two most prominent electronic payment systems of the last forty years, namely Ecash and Bitcoin, are examined. Netcoin is then introduced in detail, designed to address the shortcomings of these payment systems.
Constructing Efficient PAKE Protocols from Identity-Based KEM/DEM
In this paper, we propose an efficient identity-based password authenticated key exchange (IBPAKE) protocol using identity-based KEM/DEM. In IBPAKE, a client conducts authentication based on a human-memorable password and a server's identity. A distinctive feature of IBPAKE protocols, compared to the well-known EKE-type PAKE protocols, is that even an adversary who has acquired a user's password cannot impersonate the server to further harvest the user's sensitive information. We first construct a new IBPAKE protocol using the Boneh-Franklin identity-based encryption (IBE) scheme, and then generalize the protocol by presenting a generic method to obtain an efficient IBPAKE protocol from identity-based KEM/DEM. Our fine-grained approach has concrete advantages in terms of performance. First, unnecessary parameters can be removed easily. This allows a straightforward improvement in computational cost and communication bandwidth. In addition, using the essential feature of identity-based KEM/DEM, we can construct an IBPAKE protocol which runs in a single pass. Our protocol gives better performance than previously known IBPAKE protocols.
Computing Elliptic Curve Discrete Logarithms with Improved Baby-step Giant-step Algorithm
The negation map can be used to speed up the computation of elliptic curve discrete logarithms using either the baby-step giant-step algorithm (BSGS) or Pollard rho. Montgomery's simultaneous modular inversion can also be used to speed up Pollard rho when running many walks in parallel. We generalize these ideas and exploit the fact that for any two elliptic curve points $X$ and $Y$, we can efficiently get $X-Y$ when we compute $X+Y$. We apply these ideas to speed up the baby-step giant-step algorithm. Compared to the previous methods, the new methods can achieve a significant speedup for computing elliptic curve discrete logarithms in small groups or small intervals.
Another contribution of our paper is to give an analysis of the average-case running time of Bernstein and Lange's ``grumpy giants and a baby'' algorithm, and also to consider this algorithm in the case of groups with efficient inversion.
Our conclusion is that, in the fully-optimised context, both the interleaved BSGS and grumpy-giants algorithms have superior average-case running time compared with Pollard rho. Furthermore, for the discrete logarithm problem in an interval, the interleaved BSGS algorithm is considerably faster than the Pollard kangaroo or Gaudry-Schost methods.
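For readers who want the baseline in front of them, here is a plain baby-step giant-step solver in a toy multiplicative group; the negation-map and interleaving improvements discussed in the paper refine this skeleton for elliptic-curve groups and are not reproduced here.

```python
# Generic BSGS: solve g^x = h (mod p) in ~sqrt(order) group operations.
# Requires Python 3.8+ for pow(g, -m, p) (modular inverse).
from math import isqrt

def bsgs(g, h, p, order):
    m = isqrt(order) + 1
    baby = {pow(g, j, p): j for j in range(m)}    # baby steps: g^j -> j
    step = pow(g, -m, p)                          # giant step: multiply by g^{-m}
    gamma = h
    for i in range(m):
        if gamma in baby:                         # h * g^{-im} = g^j
            return i * m + baby[gamma]
        gamma = gamma * step % p
    return None

p = 2039                                          # toy prime; 7 is a primitive root
print(bsgs(7, pow(7, 1234, p), p, p - 1))         # 1234
```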
Structure-Preserving Signatures from Standard Assumptions, Revisited
Structure-preserving signatures (SPS) are pairing-based signatures where all the messages, signatures and public keys are group elements, with numerous applications in public-key cryptography. We present new, simple and improved SPS constructions under standard assumptions via a conceptually different approach. Our constructions significantly narrow the gap between existing constructions from standard assumptions and optimal schemes in the generic group model.
Complementary Dual Codes for Counter-measures to Side-Channel Attacks
We recall why linear codes with complementary duals (LCD codes) play a role in counter-measures to passive and active side-channel analyses on embedded cryptosystems. The rate and the minimum distance of such LCD codes must be as large as possible. We investigate primary constructions of such codes, in particular with cyclic codes, specifically with generalized residue codes, and we study their idempotents. We study those secondary constructions which preserve the LCD property, and we characterize conditions under which codes obtained by puncturing, shortening or extending codes, or obtained by the Plotkin sum, can be LCD.
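As background, the LCD property is mechanical to test from a generator matrix: by Massey's criterion, a linear code is LCD exactly when $G G^T$ is nonsingular over the base field. A small GF(2) sketch follows; the example codes are our own illustrations.

```python
# Test the LCD property of a binary code via Massey's criterion: the code
# generated by G is LCD iff G * G^T is invertible over GF(2).

def gf2_invertible(M):
    """Gauss-Jordan elimination mod 2; True iff the square matrix is invertible."""
    M, n = [row[:] for row in M], len(M)
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col]), None)
        if piv is None:
            return False
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col]:
                M[r] = [a ^ b for a, b in zip(M[r], M[col])]
    return True

def is_lcd(G):
    k, n = len(G), len(G[0])
    GGt = [[sum(G[i][t] & G[j][t] for t in range(n)) % 2 for j in range(k)]
           for i in range(k)]
    return gf2_invertible(GGt)

hamming = [[1,0,0,0,0,1,1], [0,1,0,0,1,0,1], [0,0,1,0,1,1,0], [0,0,0,1,1,1,1]]
print(is_lcd(hamming))      # False: the [7,4] Hamming code contains its dual
print(is_lcd([[1,1,1]]))    # True: the odd-length repetition code is LCD
```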
Combined Side-Channel and Fault Analysis Attack on Protected Grain Family of Stream Ciphers
In this paper, we first demonstrate a new Differential Power Analysis (DPA) attack technique against the Grain family of stream ciphers (Grain v1 and Grain-128) by resynchronizing the cipher multiple times with the same value of the secret \emph{key} and randomly generated different initialization vectors (IVs). Subsequently, we develop a combined side-channel and fault analysis attack strategy targeting various fault attack countermeasures for the Grain cipher family.
We considered clock-glitch-induced faults, which occur in practice for hardware implementations of the cipher, to devise our novel attack technique. Our proposed combined attack strategy works well even if the \emph{useful} ciphertexts are not available to the adversary.
Further, power trace classifications of a Grain cipher implementation on a SASEBO G-II standard side-channel evaluation board are shown in order to validate our proposed attack against the cipher. The captured power traces were analyzed using Least Squares Support Vector Machine (LS-SVM) based multiclass classifiers to classify the power traces into the respective Hamming distance (HD) classes. To extract power samples with high information about the HD classes, the signal-to-noise ratio (SNR) metric was chosen for feature selection. The experimental results of power trace classifications on the test set showed a high success rate of $98\%$ when the five largest-SNR sample instants over a clock cycle were chosen as features. Our proposed attack strategy can also be extended to other stream cipher designs based on Fibonacci-configured shift registers.
A Secure Oblivious Transfer Protocol from Indistinguishability Obfuscation
We propose a new secure oblivious transfer protocol from indistinguishability obfuscation. Our main technical tools are the candidate indistinguishability obfuscation introduced in [1] and the dual-mode cryptosystem proposed in [2]. Following their steps, we present a new k-out-of-l oblivious transfer protocol and describe its realization from DDH, combining indistinguishability obfuscation with the dual-mode cryptosystem. The security of our scheme relies mainly on the indistinguishability of the obfuscated branches (corresponding to the two modes of the dual-mode model). Our paper explores a new way to apply indistinguishability obfuscation.
Predictive Models for Min-Entropy Estimation
Random numbers are essential for cryptography. In most real-world systems, these values come from a cryptographic pseudorandom number generator (PRNG), which in turn is seeded by an entropy source. The security of the entire cryptographic system then relies on the accuracy of the claimed amount of entropy provided by the source. If the entropy source provides less unpredictability than is expected, the security of the cryptographic mechanisms is undermined. For this reason, correctly estimating the amount of entropy available from a source is critical.
In this paper, we develop a set of tools for estimating entropy, based on mechanisms that attempt to predict the next sample in a sequence based on all previous samples. These mechanisms are called predictors. We develop a framework for using predictors to estimate entropy, and test them experimentally against both simulated and real noise sources. For comparison, we subject the entropy estimates defined in the August 2012 draft of NIST Special Publication 800-90B to the same tests, and compare their performance.
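To fix ideas, the toy sketch below uses a simple ``most common value so far'' predictor and turns its empirical success rate $p$ into a per-sample min-entropy estimate $-\log_2 p$; the predictors and confidence-interval machinery studied in the paper are considerably more refined.

```python
# Predictor-based entropy estimation, toy version: predict each sample from
# the history, then estimate min-entropy as -log2(observed success rate).
import math, random
from collections import Counter

def mcv_predictor_estimate(samples):
    counts, hits = Counter(), 0
    for s in samples:
        if counts and s == counts.most_common(1)[0][0]:
            hits += 1                    # guessed the most common value so far
        counts[s] += 1
    p = max(hits / len(samples), 1 / len(samples))
    return -math.log2(p)

random.seed(0)
biased = random.choices([0, 1], weights=[0.8, 0.2], k=10_000)
print(f"{mcv_predictor_estimate(biased):.2f} bits/sample")  # ~0.32 = -log2(0.8)
```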
The Chain Rule for HILL Pseudoentropy, Revisited
Computational notions of entropy (a.k.a. pseudoentropy) have found many applications, including leakage-resilient cryptography, deterministic encryption or memory delegation. The most important tools to argue about pseudoentropy are chain rules, which quantify by how much (in terms of quantity and quality) the pseudoentropy of a given random variable $X$ decreases when conditioned on some other variable $Z$ (think for example of $X$ as a secret key and $Z$ as information leaked by a side-channel). In this paper we give a very simple and modular proof of the chain rule for HILL pseudoentropy, improving the best known parameters. Our version allows for increasing the acceptable length of leakage in applications by up to a constant factor compared to the best previous bounds. As a contribution of independent interest, we provide a comprehensive study of all known versions of the chain rule, comparing their worst-case strength and limitations.
Combining Differential Privacy and Secure Multiparty Computation
We consider how to perform privacy-preserving analyses on private data from different data providers and containing personal information of many different individuals. We combine differential privacy and secret sharing in the same system to protect the privacy of both the data providers and the individuals. We have implemented a prototype of this combination and the overhead of adding differential privacy to secret sharing is small enough to be usable in practice.
Assessment of Hiding the Higher-Order Leakages in Hardware - what are the achievements versus overheads?
Higher-order side-channel attacks are attracting growing interest from both academia and industry. This interest is motivated by the development of countermeasures which can prevent leakage up to a certain order. As a concrete example, threshold implementation (TI), an efficient way to realize Boolean masking in hardware, avoids first-order leakage; attacks conducted at second and higher orders can still exploit the corresponding leakage and defeat the provided security. An extension of TI to higher orders was therefore anticipated and was presented at ASIACRYPT 2014. In its underlying univariate setting it provides security at higher orders, and its area and time overheads naturally increase with the desired security order.
In this work we look at the feasibility of higher-order attacks on first-order TI from another perspective. Instead of increasing the order of resistance by employing higher-order TI, we realize first-order TI designs following the principles of a power-equalization technique dedicated to FPGA platforms, which naturally hardens them against higher-order attacks. We show that although first-order TI designs additionally equipped with the power-equalization methodology have significant area overhead, they maintain the same throughput and, more importantly, prevent the higher-order leakages from being practically exploitable with up to 1 billion traces.
Zeroizing Without Low-Level Zeroes: New MMAP Attacks and Their Limitations
We extend the recent zeroizing attacks of Cheon, Han, Lee, Ryu and Stehlé (Eurocrypt'15) on multilinear maps to settings where no encodings of zero below the maximal level are available. Some of the new attacks apply to the CLT13 scheme (resulting in a total break) while others apply to (a variant of) the GGH13 scheme (resulting in a weak-DL attack). We also note the limits of these zeroizing attacks.
Differential Fault Intensity Analysis
Recent research has demonstrated that there is no sharp distinction between passive attacks based on side-channel leakage and active attacks based on fault injection. Fault behavior can be processed as side-channel information, offering all the benefits of Differential Power Analysis including noise averaging and hypothesis testing by correlation. This paper introduces Differential Fault Intensity Analysis, which combines the principles of Differential Power Analysis and fault injection. We observe that most faults are biased - such as single-bit, two-bit, or three-bit errors in a byte - and that this property can reveal the secret key through a hypothesis test. Unlike Differential Fault Analysis, we do not require precise analysis of the fault propagation. Unlike Fault Sensitivity Analysis, we do not require a fault sensitivity profile for the device under attack. We demonstrate our method on an FPGA implementation of AES with a fault injection model. We find that with an average of 7 fault injections, we can reconstruct a full 128-bit AES key.
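The bias exploited here is easy to demonstrate in simulation. The toy hypothesis test below uses a seeded random permutation as a stand-in for the AES S-box and a single-bit fault model; all parameters are illustrative, not the paper's experimental setup.

```python
# Toy DFIA-style test: low-weight faults injected before the final S-box
# single out the correct key byte, because only the right guess makes the
# recomputed fault differences look biased (Hamming weight 1 instead of ~4).
import random

rng = random.Random(1)
SBOX = list(range(256)); rng.shuffle(SBOX)        # stand-in for the AES S-box
INV = [0] * 256
for i, v in enumerate(SBOX):
    INV[v] = i

KEY = 0x3C
hw = lambda x: bin(x).count("1")

pairs = []
for _ in range(200):                              # 200 simulated fault injections
    s = rng.randrange(256)
    e = 1 << rng.randrange(8)                     # biased fault: single-bit flip
    pairs.append((SBOX[s] ^ KEY, SBOX[s ^ e] ^ KEY))

def score(kg):                                    # mean fault weight under guess kg
    return sum(hw(INV[c ^ kg] ^ INV[f ^ kg]) for c, f in pairs) / len(pairs)

best = min(range(256), key=score)
print(hex(best), score(best), score((KEY + 1) % 256))  # correct key scores 1.0
```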
Disk Encryption: Do We Need to Preserve Length?
Over the last decade and a half there has been a lot of activity towards the development of cryptographic techniques for disk encryption. It has been almost canonised that an encryption scheme suitable for disk encryption must be length preserving; this rules out schemes like authenticated encryption, where an authentication tag is produced as part of the ciphertext, resulting in ciphertexts longer than the corresponding plaintexts. The notion of a tweakable enciphering scheme (TES) has been formalised as the appropriate primitive for disk encryption, and it has been argued that TESs provide the maximum security possible for a tag-less scheme. On the other hand, TESs are less efficient than some existing authenticated encryption schemes; also, a TES cannot provide true authentication, as it has no authentication tag.
In this paper, we analyze the possibility of using encryption schemes that produce length expansion for the purpose of disk encryption. On the negative side, we argue that nonce-based authenticated encryption schemes are not appropriate for this application. On the positive side, we demonstrate that deterministic authenticated encryption (DAE) schemes may have more advantages than disadvantages compared to a TES when used for disk encryption. Finally, we propose a new deterministic authenticated encryption scheme called BCTR which is suitable for this purpose. We provide the full specification of BCTR, prove its security and also report an efficient implementation in reconfigurable hardware. Our experiments suggest that BCTR performs significantly better than existing TESs and existing DAE schemes.
A Physical Approach for Stochastic Modeling of TERO-based TRNG
Security in random number generation for cryptography is closely related to the entropy rate at the generator output. This rate has to be evaluated using an appropriate stochastic model. The stochastic model proposed in this paper is dedicated to the transition effect ring oscillator (TERO) based true random number generator (TRNG) proposed by Varchola and Drutarovsky in 2010. The advantage and originality of this model is that it is derived from a physical model based on a detailed study and on the precise electrical description of the noisy physical phenomena that contribute to the generation of random numbers. We compare the proposed electrical description with data generated in a 28 nm CMOS ASIC implementation. Our experimental results are in very good agreement with those obtained with both the physical model of TERO's noisy behavior and with the stochastic model of the TERO TRNG, which we also confirmed using the AIS 31 test suites.
Oblivion: Mitigating Privacy Leaks by Controlling the Discoverability of Online Information
Search engines are the prevalently used tools to collect information about individuals on the Internet. Search results typically comprise a variety of sources that contain personal information --- either intentionally released by the person herself, or unintentionally leaked or published by third parties without being noticed, often with detrimental effects on the individual's privacy. To grant individuals the ability to regain control over their disseminated personal information, the European Court of Justice recently ruled that EU citizens have a right to be forgotten in the sense that indexing systems, such as Google, must offer them technical means to request removal of links from search results that point to sources violating their data protection rights. As of now, these technical means consist of a web form that requires a user to manually identify all relevant links herself upfront and to insert them into the web form, followed by a manual evaluation by employees of the indexing system to assess if the request to remove those links is eligible and lawful.
In this work, we propose Oblivion, a universal framework to support the automation of the right to be forgotten in a scalable, provable and privacy-preserving manner. First, Oblivion enables a user to automatically find and tag her disseminated personal information using natural language processing (NLP) and image recognition techniques, and to file a request in a privacy-preserving manner. Second, Oblivion provides indexing systems with an automated and provable eligibility mechanism, asserting that the author of a request is indeed affected by an online resource. The automated eligibility proof ensures censorship-resistance, so that only legitimately affected individuals can request the removal of corresponding links from search results. We have conducted comprehensive evaluations of Oblivion, showing that the framework is capable of handling 278 removal requests per second on a standard notebook (2.5 GHz dual core), and is hence suitable for large-scale deployment.
How much randomness can be extracted from memoryless Shannon entropy sources?
We revisit the classical problem: given a memoryless source having a certain amount of Shannon Entropy, how many random bits can be extracted? This question appears in works studying random number generators built from physical entropy sources.
Some authors use a heuristic estimate obtained from the Asymptotic Equipartition Property, which yields roughly $n$ extractable bits, where $n$ is the total Shannon entropy amount. However, the best known precise form gives only $n - O(\sqrt{\log(1/\epsilon)\, n})$, where $\epsilon$ is the distance of the extracted bits from uniform. In this paper we show a matching $n - \Omega(\sqrt{\log(1/\epsilon)\, n})$ upper bound. Therefore, the loss of $\Theta(\sqrt{\log(1/\epsilon)\, n})$ bits is necessary. As we show, this theoretical bound is of practical relevance: applying the imprecise AEP heuristic to a mobile phone accelerometer, one might overestimate the extractable entropy by as much as $100\%$, no matter what the extractor is. Thus, the ``AEP extracting heuristic'' should not be used without taking the precise error into account.
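The size of this loss term is easy to get a feel for numerically; the sketch below ignores the constants hidden in the $\Theta(\cdot)$, so the figures are indicative only.

```python
# Rough size of the necessary loss Theta(sqrt(log(1/eps) * n)) versus the
# naive AEP estimate of n extractable bits (constants ignored).
import math

eps = 2 ** -64                        # target distance from uniform
for n in (1_000, 10_000, 100_000):
    loss = math.sqrt(math.log2(1 / eps) * n)
    print(f"n={n:>7}: ~{n - loss:,.0f} extractable bits "
          f"(loss ~{loss:,.0f}, {100 * loss / n:.1f}% of n)")
```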
TriviA: A Fast and Secure Authenticated Encryption Scheme
In this paper, we propose a new hardware-friendly authenticated encryption (AE) scheme TriviA based on (i) a stream cipher for generating keys for the ciphertext and the tag, and (ii) a pairwise independent hash to compute the tag. We have adopted one of the ISO-standardized stream ciphers for lightweight cryptography, namely Trivium, to obtain our underlying stream cipher. This new stream cipher has a state that is a little larger than the state of Trivium to accommodate a 128-bit secret key and IV. Our pairwise independent hash is also an adaptation of the EHC or ``Encode-Hash-Combine'' hash, which requires the optimum number of field multiplications and hence a small hardware footprint. We have implemented the design in synthesizable RTL. Pre-layout synthesis, using 65 nm standard cell technology under typical operating conditions, reveals that TriviA is able to achieve a high throughput of 91.2 Gbps for an area of 24.4 KGE. We prove that our construction has at least 128-bit security for privacy and 124-bit security for authenticity under the assumption that the underlying stream cipher produces a pseudorandom bit stream.
Generating S-Box Multivariate Quadratic Equation Systems And Estimating Algebraic Attack Resistance Aided By SageMath
We present methods to derive, with the aid of the computer mathematics software system SageMath, the multivariate quadratic (MQ) equation systems relating the input and output bit variables of a cryptographic S-box, starting from its algebraic expressions. This work was motivated by the results of recent articles, which we have verified and extended in an original way that, to our knowledge, has not yet been published elsewhere. At the same time, we present results contrary to the published ones, casting serious doubt on the suitability of previously presented formulas that are supposed to quantify the resistance of S-boxes against algebraic attacks.
An analysis of the $C$ class of bent functions
Two (so-called $C, D$) classes of permutation-based bent Boolean functions were introduced by Carlet two decades ago, but without explicit construction methods (apart from the subclass $D_0$). In this article, we look in more detail at the $C$ class, and derive some existence and nonexistence results concerning the bent functions in the $C$ class for many of the known classes of permutations over $\mathbb F_{2^n}$. Most importantly, the existence results induce generic methods of constructing bent functions in class $C$ which possibly do not belong to the completed Maiorana-McFarland class. The question whether the specific permutations and related subspaces we identify in this article indeed give bent functions outside the completed Maiorana-McFarland class remains open.
An Enhanced Biometric Based Remote User Authentication Scheme Using Smart Card
In a remote authentication scheme, a remote user can communicate with a server over an open network even when they are physically far apart. Before interacting, they need to establish a common session key by authenticating each other. Recently, in 2014, Kumari et al. proposed an efficient scheme for remote user authentication. In this paper, however, we show that Kumari et al.'s scheme is vulnerable to the insider attack, stolen-verifier attack, session key disclosure attack, password guessing attack, modification attack, user impersonation attack, replay attack, shoulder-surfing attack and denial-of-service attack. We then propose an improved remote user authentication scheme to deal with these and other attacks.
SCLPV: Secure Certificateless Public Verification for Cloud Storage in Cyber-physical-social System
Cyber-physical-social systems (CPSS) allow individuals to share personal information collected not only from cyberspace but also from physical space. This results in large volumes of data accumulating in a user's local storage. However, it is very expensive for users to store large data sets, and doing so also complicates data management. It is therefore attractive to outsource the data to cloud servers, which provide users an easy, cost-effective and flexible way to manage data. However, users lose control over their data once it is outsourced to cloud servers, which poses challenges for the integrity of the outsourced data. Many mechanisms have been proposed to allow a third-party auditor to verify data integrity using the public keys of users. Most of these mechanisms rest on a strong assumption: that the auditors are honest and reliable; they are therefore vulnerable to malicious auditors. Moreover, in most of these approaches, an auditor needs to manage users' certificates to choose the correct public keys for verification.
In this paper, we propose a secure certificateless public integrity verification scheme (SCLPV). SCLPV is the first scheme that simultaneously supports certificateless public verification and resistance against malicious auditors for verifying the integrity of outsourced data in CPSS. A formal and strict security proof establishes the correctness and security of our scheme. In addition, an elaborate performance analysis demonstrates that our scheme is efficient and practical. Compared with the best existing certificateless public verification scheme (CLPV), SCLPV provides stronger security guarantees by remedying the security vulnerability of the CLPV and resisting malicious auditors. At the same time, compared with the best integrity verification scheme achieving resistance against malicious auditors, the communication cost between the auditor and the cloud server in SCLPV is independent of the size of the processed data; meanwhile, the auditor in SCLPV does not need to manage certificates.
SIMON and SPECK: Block Ciphers for the Internet of Things
The U.S. National Security Agency (NSA) developed the SIMON and SPECK families of lightweight block ciphers as an aid for securing applications in very constrained environments where AES may not be suitable. This paper summarizes the algorithms and their design rationale, along with current cryptanalysis and implementation results.
How to Securely Prolong the Computational Bindingness of Pedersen Commitments
Pedersen commitments are important cryptographic primitives. They allow a prover to commit to a certain value without revealing any information about it and without the prover being able to change its mind later on. Since the first property holds unconditionally, this is an essential primitive for many schemes providing long-term confidentiality. However, the second property only holds computationally. Hence, in the long run bindingness is lost, making the primitive unsuitable for long-lived systems. Thus, in this paper we describe a protocol that, in a sense, prolongs the bindingness of a given Pedersen commitment. More precisely, we demonstrate how to prove in perfect zero-knowledge that a new Pedersen commitment - generated with a larger security parameter - and a corresponding old commitment both commit to the same value. We stress that this is a non-trivial procedure. Up until now the only known perfect zero-knowledge proof techniques for proving message equivalence of two commitments work when both commitments use isomorphic message spaces. However, as we show in this work, to prolong the security of Pedersen commitments we cannot tolerate this restriction. Our prolonging technique works for non-isomorphic message spaces, is efficient, can be repeated an arbitrary number of times, maintains unconditional confidentiality, and preserves the format of the Pedersen commitments. This makes the construction presented here an important contribution to long-lived systems. Finally, we illustrate this by discussing how commitments with prolongable bindingness allow for archiving solutions that provide not only integrity but also confidentiality in the long term.
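For concreteness, here is a minimal Pedersen commitment sketch with toy parameters; in a real deployment the group is large and no party may know $\log_g h$, which is exactly the discrete-log assumption whose erosion the protocol above addresses.

```python
# Pedersen commitment, toy parameters: commit(m) = g^m * h^r (mod p).
# Hiding is unconditional; binding rests on the hardness of log_g(h).
import random

p, q = 2039, 1019                 # toy safe-prime group of order q
g = 4                             # generator of the order-q subgroup
h = pow(g, 777, p)                # in practice h = g^t for a t nobody knows

def commit(m, r=None):
    r = random.randrange(q) if r is None else r
    return pow(g, m, p) * pow(h, r, p) % p, r

def verify(c, m, r):
    return c == pow(g, m, p) * pow(h, r, p) % p

c, r = commit(42)
print(verify(c, 42, r))           # True
print(verify(c, 43, r))           # False
# Anyone who learns t = log_g(h) can open c to any message of their choice,
# which is why bindingness erodes as the discrete log becomes feasible.
```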
Secure Key Generation from Biased PUFs
PUF-based key generators have been widely considered as a root-of-trust in digital systems. They typically require an error-correcting mechanism (e.g. based on the code-offset method) for dealing with bit errors between the enrollment and reconstruction of keys. When the used PUF does not have full entropy, entropy leakage between the helper data and the device-unique key material can occur. If the entropy level of the PUF becomes too low, the PUF-derived key can be attacked through the publicly available helper data. In this work we provide several solutions for preventing this entropy leakage for PUFs suffering from bias. The methods proposed in this work pose no limit on the amount of bias that can be tolerated, which solves an important open problem for PUF-based key generation. Additionally, the solutions are all evaluated based on reliability, efficiency, leakage and reusability showing that depending on requirements for the key generator different solutions are preferable.
How Secure and Quick is QUIC? Provable Security and Performance Analyses
QUIC is a secure transport protocol developed by Google and implemented in Chrome in 2013, currently representing one of the most promising solutions to decreasing latency while providing security properties similar to those of TLS. In this work we shed some light on QUIC's strengths and weaknesses in terms of its provable security and performance guarantees in the presence of attackers. We first introduce a security model for analyzing performance-driven protocols like QUIC and prove that QUIC satisfies our definition under reasonable assumptions on the protocol's building blocks. However, we find that QUIC does not satisfy the traditional notion of forward secrecy that is provided by some modes of TLS, e.g., TLS-DHE.
Our analyses also reveal that with simple bit-flipping and replay attacks on some public parameters exchanged during the handshake, an adversary could easily prevent QUIC from achieving minimal latency advantages, either by having it fall back to TCP or by causing the client and server to have an inconsistent view of their handshake, leading to a failure to complete the connection. We have implemented these attacks and demonstrated that they are practical. Our results suggest that QUIC's security weaknesses are introduced by the very mechanisms used to reduce latency, which highlights the seemingly inherent trade-off between minimizing latency and providing `good' security guarantees.
Universal Computational Extractors and the Superfluous Padding Assumption for Indistinguishability Obfuscation
Universal Computational Extractors (UCEs), introduced by Bellare, Hoang and Keelveedhi (CRYPTO 2013), are a framework of assumptions on hash functions that allows one to instantiate random oracles in a large variety of settings. Brzuska, Farshim and Mittelbach (CRYPTO 2014) showed that a large class of UCE assumptions with \emph{computationally} unpredictable sources cannot be achieved if indistinguishability obfuscation exists. In the process of circumventing obfuscation-based attacks, new UCE notions emerged, most notably UCEs with respect to \emph{statistically} unpredictable sources, which suffice for a large class of applications. However, the only standard model constructions of UCEs are for a small subclass considering only $q$-query sources which are \emph{strongly statistically} unpredictable (Brzuska, Mittelbach; Asiacrypt 2014).
The contributions of this paper are threefold:
1) We show a surprising equivalence between the notions of strong unpredictability and (plain) unpredictability, thereby lifting the construction of Brzuska and Mittelbach to achieve $q$-query UCEs for statistically unpredictable sources. This yields standard model instantiations for various ($q$-query) primitives including deterministic public-key encryption, message-locked encryption, multi-bit point obfuscation, CCA-secure encryption, and more. For some of these, our construction yields the first standard model candidate.
2) We study the blow-up that occurs in indistinguishability obfuscation proof techniques due to puncturing and state the \emph{Superfluous Padding Assumption} for indistinguishability obfuscation which allows us to lift the $q$-query restriction of our construction. We validate the assumption by showing that it holds for virtual black-box obfuscation.
3) Brzuska and Mittelbach require a strong form of point obfuscation secure in the presence of auxiliary input for their construction of UCEs. We show that this assumption is indeed necessary for the construction of injective UCEs.
Composable & Modular Anonymous Credentials: Definitions and Practical Constructions
It takes time for theoretical advances to get used in practical schemes. Anonymous credential schemes are no exception. For instance, existing schemes suited for real-world use lack formal, composable definitions, partly because they do not support straight-line extraction and rely on random oracles for their security arguments.
To address this gap, we propose unlinkable redactable signatures (URS), a new building block for privacy-enhancing protocols, which we use to construct the first efficient UC-secure anonymous credential system that supports multiple issuers, selective disclosure of attributes, and pseudonyms. Our scheme is one of the first such systems for which both the size of a credential and its presentation proof are independent of the number of attributes issued in a credential. Moreover, our new credential scheme does not rely on random oracles.
As an important intermediary step, we address the problem of building a functionality for a complex credential system that can cover many different features. Namely, we design a core building block for a single issuer that supports credential issuance and presentation with respect to pseudonyms and then show how to construct a full-fledged credential system with multiple issuers in a modular way. We expect this flexible definitional approach to be of independent interest.
A Simple Proof of a Distinguishing Bound of Iterated Uniform Random Permutation
Let $P$ be chosen uniformly from $\mathcal{P} := \mathrm{Perm}(S)$, the set of all permutations over a set $S$ of size $N$. At Crypto 2015, Minaud and Seurin proved that for any unbounded-time adversary $A$ making at most $q$ queries, the distinguishing advantage between $P^r$ (sample $P$, then compose it with itself $r$ times) and $P$, denoted $\Delta(P^r; P)$, is at most $(2r+1)q/N$. In this paper we provide an alternative simple proof of this result, for an upper bound of $2q(r+1)^2/N$, using the well-known H-coefficient technique.
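Plugging illustrative numbers into the two bounds shows the constant-factor price paid for the simpler proof (the parameters below are hypothetical):

```python
# Compare the two advantage bounds on Delta(P^r; P) for sample parameters.
N = 2 ** 64          # size of the underlying set S
q, r = 2 ** 20, 3    # adversary queries, number of compositions

minaud_seurin = (2 * r + 1) * q / N         # (2r+1)q/N = 7q/N here
h_coefficient = 2 * q * (r + 1) ** 2 / N    # 2q(r+1)^2/N = 32q/N here
print(f"(2r+1)q/N   = {minaud_seurin:.3e}")
print(f"2q(r+1)^2/N = {h_coefficient:.3e}")  # weaker, but simpler to prove
```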
Tampering with the Delivery of Blocks and Transactions in Bitcoin
Given the increasing adoption of Bitcoin, the number of transactions and the block sizes within the system are only expected to increase. To sustain its correct operation in spite of its ever-increasing use, Bitcoin implements a number of necessary optimizations and scalability measures. These measures limit the amount of information broadcast in the system to the minimum necessary.
In this paper, we show that the current scalability measures adopted by Bitcoin are at odds with the security of the system. More specifically, we show that an adversary can exploit these measures in order to effectively delay the propagation of transactions and blocks to specific nodes, without causing a network partitioning in the system. We show that this allows the adversary to easily mount Denial-of-Service attacks, considerably increase its mining advantage in the network, and double-spend transactions in spite of the current countermeasures adopted by Bitcoin. Based on our results, we propose a number of countermeasures in order to enhance the security of Bitcoin without deteriorating its scalability.
Twist Insecurity
Several authors suggest that the use of twist-secure elliptic curves automatically leads to secure implementations. We argue that even for twist-secure curves a point validation has to be performed. We illustrate this with examples where the security of EC algorithms is strongly degraded, even for twist-secure curves. We show that the usual blinding countermeasures against SCA are insufficient (actually, they introduce weaknesses) if no point validation is performed, or if an attacker has access to certain intermediate points. In this case the overall security of the system is reduced to the length of the blinding parameter. We emphasize that our methods work even in the case of a very high identification error rate during the SCA phase.
The Carnac protocol -- or how to read the contents of a sealed envelope
Johnny Carson, as long-time host of the Tonight Show, often appeared in the spoof role of Carnac the Magnificent, a mentalist who could magically read the contents of a sealed envelope. This is in fact a well-known stock-in-trade trick of the mentalist's craft, known as ``billet reading''. Here we propose a cryptographic solution to the problem of billet reading, apparently allowing a ciphertext to be decrypted without direct knowledge of the ciphertext, and present both a compelling use case and a practical implementation.
Known-key Distinguisher on Full PRESENT
In this article, we analyse the known-key security of the standardized PRESENT lightweight block cipher. Namely, we propose a known-key distinguisher on the full PRESENT, both 80- and 128-bit key versions. We first leverage the very latest advances in differential cryptanalysis on PRESENT, which are as strong as the best linear cryptanalysis in terms of number of attacked rounds. Differential properties are much easier to handle for a known-key distinguisher than linear properties, and we use a bias on the number of collisions on some predetermined input/output bits as distinguishing property. In order to reach the full PRESENT, we eventually introduce a new meet-in-the-middle layer to propagate the differential properties as far as possible. Our techniques have been implemented and verified on the small scale variant of PRESENT. While the known-key security model is very generous with the attacker, it makes sense in practice since PRESENT has been proposed as basic building block to design lightweight hash functions, where no secret is manipulated. Our distinguisher can for example apply to the compression function obtained by placing PRESENT in a Davies-Meyer mode. We emphasize that this is the very first attack that can reach the full number of rounds of the PRESENT block cipher.
Fair and Robust Multi-Party Computation using a Global Transaction Ledger
Classical results on secure multi-party computation (MPC) imply that fully secure computation, including fairness (either all parties get output or none) and robustness (output delivery is guaranteed), is impossible unless a majority of the parties is honest. Recently, cryptocurrencies like Bitcoin were utilized to mitigate the fairness loss in MPC against a dishonest majority. The idea is that when the protocol aborts in an unfair manner (i.e., after the adversary receives output), the honest parties get compensated by the adversarially controlled parties.
Our contribution is three-fold. First, we put forth a new formal model of secure MPC with compensation and we show how the introduction of suitable ledger and synchronization functionalities makes it possible to express such protocols completely using standard interactive Turing machines (ITM), circumventing the need for extra features that are outside the standard model, as in previous works. Second, our model is expressed in the universal composition setting with global setup and is equipped with a composition theorem that enables the design of protocols that compose safely with each other and within larger environments where other protocols with compensation take place; a composition theorem for MPC protocols with compensation was not known before. Third, we introduce the first robust MPC protocol with compensation, i.e., an MPC protocol where not only is fairness guaranteed (via compensation) but additionally the protocol is guaranteed to deliver output to the parties that get engaged, and therefore the adversary, after an initial round of deposits, is not even able to mount a denial-of-service attack without suffering a monetary penalty. Importantly, our robust MPC protocol requires only a {\em constant} number of (coin-transfer and communication) rounds.
Last fall degree, HFE, and Weil descent attacks on ECDLP
Weil descent methods have recently been applied to attack the Hidden Field Equation (HFE) public key systems and to solve the elliptic curve discrete logarithm problem (ECDLP) in small characteristic. However, the claims of quasi-polynomial time attacks on the HFE systems and of a subexponential time algorithm for the ECDLP depend on various heuristic assumptions.
In this paper we introduce the notion of the last fall degree of a polynomial system, which is independent of choice of a monomial order. We then develop complexity bounds on solving polynomial systems based on this last fall degree.
We prove that HFE systems have a small last fall degree, by showing that one can do division with remainder after Weil descent. This allows us to solve HFE systems unconditionally in polynomial time if the degree of the defining polynomial and the cardinality of the base field are fixed.
For the ECDLP over a finite field of characteristic 2, we provide computational evidence that raises doubt on the validity of the first fall degree assumption, which was widely adopted in earlier works and which promises sub-exponential algorithms for ECDLP. In addition, we construct a Weil descent system from a set of summation polynomials in which the first fall degree assumption is unlikely to hold. These examples suggest that greater care needs to be exercised when applying this heuristic assumption to arrive at complexity estimates.
These results taken together underscore the importance of rigorously bounding last fall degrees of Weil descent systems, which remains an interesting but challenging open problem.
On Public Key Encryption from Noisy Codewords
Several well-known public key encryption schemes, including those of Alekhnovich (FOCS 2003), Regev (STOC 2005), and Gentry, Peikert and Vaikuntanathan (STOC 2008), rely on the conjectured intractability of inverting noisy linear encodings. These schemes are limited in that they either require the underlying field to grow with the security parameter, or alternatively they can work over the binary field but have a low noise entropy that gives rise to sub-exponential attacks.
Motivated by the goal of efficient public key cryptography, we study the possibility of obtaining improved security over the binary field by using different noise distributions.
Inspired by an abstract encryption scheme of Micciancio (PKC 2010), we consider an abstract encryption scheme that unifies all the three schemes mentioned above and allows for arbitrary choices of the underlying field and noise distributions.
Our main result establishes an unexpected connection between the power of such encryption schemes and additive combinatorics. Concretely, we show that under the ``approximate duality conjecture'' from additive combinatorics (Ben-Sasson and Zewi, STOC 2011), every instance of the abstract encryption scheme over the binary field can be attacked in time $2^{O(\sqrt{n})}$, where $n$ is the maximum of the ciphertext size and the public key size (and where the latter excludes public randomness used for specifying the code).
On the flip side, counterexamples to the above conjecture (if false) may lead to candidate public key encryption schemes with improved security guarantees.
We also show, using a simple argument that relies on agnostic learning of parities (Kalai, Mansour and Verbin, STOC 2008), that any such encryption scheme can be {\em unconditionally} attacked in time $2^{O(n/\log n)}$, where $n$ is the ciphertext size.
Combining this attack with the security proof of Regev's cryptosystem, we immediately obtain an algorithm that solves the {\em learning parity with noise (LPN)} problem in time $2^{O(n/\log \log n)}$ using only $n^{1+\epsilon}$ samples, reproducing the result of Lyubashevsky (Random 2005) in a conceptually different way.
Finally, we study the possibility of instantiating the abstract encryption scheme over constant-size rings to yield encryption schemes with no decryption error. We show that over the binary field decryption errors are inherent. On the positive side, building on the construction of matching vector families
(Grolmusz, Combinatorica 2000; Efremenko, STOC 2009; Dvir, Gopalan and Yekhanin, FOCS 2010),
we suggest plausible candidates for secure instances of the framework over constant-size rings that can offer perfectly correct decryption.
Robust and One-Pass Parallel Computation of Correlation-Based Attacks at Arbitrary Order - Extended Version
The protection of cryptographic implementations against higher-order attacks has become an important topic in the side-channel community after the advent of enhanced measurement equipment that enables the capture of millions of power traces in a reasonably short time. However, the preprocessing of multi-million traces for such an attack is still challenging, in particular because in the case of (multivariate) higher-order attacks all traces need to be parsed at least twice. Even worse, partitioning the captured traces into smaller groups to parallelize computations is hardly possible with current techniques.
In this work we introduce procedures that allow iterative computation of correlation in a side-channel analysis attack at any arbitrary order, in both univariate and multivariate settings. The advantages of our proposed solutions are manifold: i) they provide stable results, i.e., high accuracy of the estimations is maintained as the number of used traces increases, ii) each trace needs to be processed only once, and at any time the result of the attack can be obtained (without requiring the whole trace pool to be reparsed when more traces are added), and iii) the computations can be efficiently parallelized, e.g., by splitting the trace pool into smaller subsets and processing each by a single thread on a multi-threading or cloud-computing platform. In short, our constructions allow higher-order side-channel analysis attacks (e.g., on hundreds of millions of traces) to be performed efficiently, which is of crucial importance when practical evaluations of masking schemes need to be performed.
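The flavour of such one-pass, mergeable estimators can be seen in the classical online update for the first two central moments; this is a sketch of the general idea only (the paper derives the corresponding recurrences for arbitrary orders and for correlation):

```python
# One-pass moments: each sample is processed once, and states built from
# disjoint trace subsets (e.g., by parallel threads) can be merged exactly.
class OnlineMoments:
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
    def update(self, x):                    # fold in one trace sample
        self.n += 1
        d = x - self.mean
        self.mean += d / self.n
        self.m2 += d * (x - self.mean)      # running sum of squared deviations
    def merge(self, other):                 # combine two independent states
        n = self.n + other.n
        d = other.mean - self.mean
        self.m2 += other.m2 + d * d * self.n * other.n / n
        self.mean += d * other.n / n
        self.n = n
    @property
    def variance(self):
        return self.m2 / self.n

a, b = OnlineMoments(), OnlineMoments()
for x in (1.0, 2.0, 3.0): a.update(x)
for x in (4.0, 5.0):      b.update(x)
a.merge(b)
print(a.mean, a.variance)                   # 3.0 2.0, as for the full data set
```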
Constant Communication ORAM with Small Blocksize
There have been several attempts recently at using homomorphic encryption to increase the efficiency of Oblivious RAM protocols. One of the most successful has been Onion ORAM, which achieves $O(1)$ communication overhead with polylogarithmic server computation. However, it has two drawbacks. It requires a large block size of $B = \Omega(\log^6 N)$ with large constants. Moreover, while it only needs polylogarithmic computation complexity, that computation consists mostly of expensive homomorphic multiplications. In this work, we address these problems and reduce the required block size to $\Omega(\log^4 N)$. We remove most of the homomorphic multiplications while maintaining $O(1)$ communication complexity. Our idea is to replace their homomorphic eviction routine with a new, much cheaper permute-and-merge eviction which eliminates homomorphic multiplications and maintains the same level of security. In turn, this removes the need for the layered encryption that Onion ORAM relies on and reduces both the minimum block size and server computation.
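To get a rough sense of the improvement, one can compare the two asymptotic block-size requirements numerically; constants are ignored, so the figures below are orders of magnitude only.

```python
# Asymptotic block-size requirement: Omega(log^6 N) vs Omega(log^4 N),
# constants ignored -- the gap is a factor of log^2 N.
import math

for N in (2 ** 20, 2 ** 30):
    lg = math.log2(N)
    print(f"N = 2^{int(lg)}: log^6 N = {lg ** 6:,.0f}, "
          f"log^4 N = {lg ** 4:,.0f}, gap = {lg ** 2:,.0f}x")
```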
Improved (Pseudo) Preimage Attacks on Reduced-Round GOST and Grøstl-256 and Studies on Several Truncation Patterns for AES-like Compression Functions (Full Version)
In this paper, we present improved preimage attacks on the reduced-round \texttt{GOST} hash function family, which serves as the new Russian hash standard, with the aid of techniques such as the rebound attack, the Meet-in-the-Middle preimage attack and multicollisions. Firstly, the preimage attack on 5-round \texttt{GOST-256} is proposed, which is the first preimage attack on \texttt{GOST-256} at the hash function level. Then we extend the (previous) attacks on 5-round \texttt{GOST-256} and 6-round \texttt{GOST-512} to 6.5 and 7.5 rounds respectively by exploiting the involution property of the \texttt{GOST} transposition operation.
Secondly, inspired by the preimage attack on \texttt{GOST-256}, we also study the impacts of four representative truncation patterns on the resistance of the Meet-in-the-Middle preimage attack against \texttt{AES}-like compression functions, and propose two stronger truncation patterns which make it more difficult to launch this type of attack. Based on our investigations, we are able to slightly improve the previous pseudo preimage attacks on reduced-round \texttt{Grøstl-256}.
Cryptanalysis of Reduced-Round Whirlwind (Full Version)
The \texttt{Whirlwind} hash function, which outputs a 512-bit digest, was designed by Barreto et al. and published in \textit{Designs, Codes and Cryptography} in 2010. In this paper, we provide a thorough cryptanalysis of \texttt{Whirlwind}. Firstly, we focus on security properties at the hash function level by presenting (second) preimage, collision and distinguishing attacks on reduced-round \texttt{Whirlwind}. In order to launch the preimage attack, we have to slightly tweak the original Meet-in-the-Middle preimage attack framework on \texttt{AES}-like compression functions by partially fixing the values of the state. Based on this slightly tweaked framework, we are able to construct several new and interesting preimage attacks on reduced-round \texttt{Whirlpool} and \texttt{AES} hashing modes as well. Secondly, we investigate security properties of the reduced-round components of \texttt{Whirlwind}, including semi-free-start and free-start (near) collision attacks on the compression function, and a limited-birthday distinguisher on the inner permutation. As far as we know, our results are currently the best cryptanalysis of \texttt{Whirlwind}.
Key-Recovery Attack on the ASASA Cryptosystem with Expanding S-boxes
We present a cryptanalysis of the ASASA public key cipher introduced at Asiacrypt 2014. This scheme alternates three layers of affine transformations A with two layers of quadratic substitutions S. We show that the partial derivatives of the public key polynomials contain information about the intermediate layer. This enables us to present a very simple distinguisher between an ASASA public key and random polynomials. We then expand upon the ideas of the distinguisher to achieve a full secret key recovery. This method uses only linear algebra and has a complexity dominated by the cost of computing the kernels of $2^{26}$ small matrices with entries in $\mathbb F_{16}$.
A Framework for Identity-Based Encryption with Almost Tight Security
We show a framework for constructing identity-based encryption (IBE) schemes that are (almost) tightly secure in the multi-challenge and multi-instance setting. In particular, we formalize a new notion called broadcast encoding, analogously to encoding notions by Attrapadung (Eurocrypt '14) and Wee (TCC '14). We then show that it can be converted into such an IBE. By instantiating the framework using several encoding schemes (new or known ones), we obtain the following:
- We obtain (almost) tightly secure IBE in the multi-challenge, multi-instance setting, both in composite and prime-order groups. The latter resolves the open problem posed by Hofheinz et al. (PKC '15).
- We obtain the first (almost) tightly secure IBE with sub-linear size public parameters (master public keys). In particular, we can set the size of the public parameters to constant at the cost of longer ciphertexts. This gives a partial solution to the open problem posed by Chen and Wee (Crypto '13).
By applying (a variant of) the Canetti-Halevi-Katz transformation to our schemes, we obtain several CCA-secure PKE schemes with tight security in the multi-challenge, multi-instance setting. One of our schemes achieves very small ciphertext overhead, consisting of less than 12 group elements. This significantly improves the state-of-the-art construction by Libert et al.~(in ePrint Archive) which requires 47 group elements. Furthermore, by modifying one of our IBE schemes obtained above, we can make it anonymous. This gives the first anonymous IBE whose security is almost tightly shown in the multi-challenge setting.
FourQ: four-dimensional decompositions on a Q-curve over the Mersenne prime
We introduce FourQ, a high-security, high-performance elliptic curve that targets the 128-bit security level. At the highest arithmetic level, cryptographic scalar multiplications on FourQ can use a four-dimensional Gallant-Lambert-Vanstone decomposition to minimize the total number of elliptic curve group operations. At the group arithmetic level, FourQ admits the use of extended twisted Edwards coordinates and can therefore exploit the fastest known elliptic curve addition formulas over large prime characteristic fields. Finally, at the finite field level, arithmetic is performed modulo the extremely fast Mersenne prime $p=2^{127}-1$. We show that this powerful combination facilitates scalar multiplications that are significantly faster than all prior works. On Intel's Broadwell, Haswell, Ivy Bridge and Sandy Bridge architectures, our software computes a variable-base scalar multiplication in 50,000, 56,000, 69,000 and 72,000 cycles, respectively; and, on the same platforms, our software computes a Diffie-Hellman shared secret in 80,000, 88,000, 104,000 and 112,000 cycles, respectively. These results show that, in practice, FourQ is around four to five times faster than the original NIST P-256 curve and between two and three times faster than curves that are currently under consideration as NIST alternatives, such as Curve25519.
Sanctum: Minimal Hardware Extensions for Strong Software Isolation
Sanctum offers the same promise as SGX, namely strong provable isolation of software modules running concurrently and sharing resources, but protects against an important class of additional software attacks that infer private information from a program's memory access patterns. We follow a principled approach to eliminating entire attack surfaces through isolation, rather than plugging attack-specific privacy leaks.
Sanctum demonstrates that strong software isolation is achievable with a surprisingly small set of minimally invasive hardware changes, and a very reasonable overhead. Sanctum does not change any major CPU building block. Instead, we add hardware at the interfaces between building blocks, without impacting cycle time.
Our prototype shows a 2% area increase in a Rocket RISC-V core. Over a set of benchmarks, Sanctum's worst observed overhead for isolated execution is 15.1% over an idealized insecure baseline, and 2.7% average overhead over a representative insecure baseline.
Privacy in the Genomic Era
Genome sequencing technology has advanced at a rapid pace and it is now possible to generate highly-detailed genotypes inexpensively. The collection and analysis of such data has the potential to support various applications, including personalized medical services. While the benefits of the genomics revolution are trumpeted by the biomedical community, the increased availability of such data has major implications for personal privacy; notably because the genome has certain essential features, which include (but are not limited to) (i) an association with traits and certain diseases, (ii) identification capability (e.g., forensics), and (iii) revelation of family relationships. Moreover, direct-to-consumer DNA testing increases the likelihood that genome data will be made available in less regulated environments, such as the Internet and for-profit companies. The problem of genome data privacy thus resides at the crossroads of computer science, medicine, and public policy. While computer scientists have addressed data privacy for various data types, less attention has been dedicated to genomic data. Thus, the goal of this paper is to provide a systematization of knowledge for the computer science community. In doing so, we address some of the (sometimes erroneous) beliefs of this field and we report on a survey we conducted about genome data privacy with biomedical specialists. Then, after characterizing the genome privacy problem, we review the state of the art regarding privacy attacks on genomic data and strategies for mitigating such attacks, as well as contextualizing these attacks from the perspective of medicine and public policy. This paper concludes with an enumeration of the challenges for genome data privacy and presents a framework to systematize the analysis of threats and the design of countermeasures as the field moves forward.
PUDA – Privacy and Unforgeability for Data Aggregation
Existing work on data collection and analysis for aggregation is mainly focused on confidentiality issues: the untrusted Aggregator learns only the aggregation result without divulging individual data inputs. In this paper we extend the existing models with stronger security requirements. Apart from the privacy requirements with respect to the individual inputs, we ask for unforgeability of the aggregate result. We first define the new security requirements of the model. We then instantiate a protocol for private and unforgeable aggregation for multiple independent users. That is, multiple unsynchronized users owning personal sensitive information contribute their values in a secure way without interacting with each other: the Aggregator learns the result of a function without learning individual values and, moreover, constructs a proof that is forwarded to a verifier to convince the latter of the correctness of the computation. Our protocol is provably secure in the random oracle model.
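To make the privacy half of this goal concrete, the toy Python sketch below (our illustration, not the PUDA construction) uses additive masking with a trusted dealer: each user blinds its input with a one-time mask, the masks sum to zero, and the Aggregator's sum therefore reveals only the total. PUDA-style schemes replace the dealer with per-user keys and attach a homomorphic tag so that the verifier can check correctness; that unforgeability machinery is omitted here.

```python
# Toy sketch of the privacy goal only (not the PUDA protocol): a dealer
# hands each user a one-time additive mask, with all masks summing to
# zero mod MOD, so the sum of ciphertexts reveals only the total.
import secrets

MOD = 2**61 - 1  # toy working modulus

def deal_masks(n_users: int):
    """Masks m_1..m_n with sum(m_i) = 0 (mod MOD)."""
    masks = [secrets.randbelow(MOD) for _ in range(n_users - 1)]
    masks.append(-sum(masks) % MOD)
    return masks

def encrypt(value: int, mask: int) -> int:
    return (value + mask) % MOD

def aggregate(ciphertexts) -> int:
    # The masks cancel in the sum; individual inputs stay hidden.
    return sum(ciphertexts) % MOD

values = [3, 1, 4, 1, 5]
cts = [encrypt(v, m) for v, m in zip(values, deal_masks(len(values)))]
assert aggregate(cts) == sum(values)
```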
SoC it to EM: electromagnetic side-channel attacks on a complex system-on-chip
Increased complexity in modern embedded systems has presented various important challenges with regard to side-channel attacks. In particular, it is common to deploy SoC-based target devices with high clock frequencies in security-critical scenarios; understanding how such features align with techniques more often deployed against simpler devices is vital from both destructive (i.e., attack) and constructive (i.e., evaluation and/or countermeasure) perspectives. In this paper, we investigate electromagnetic-based leakage from three different means of executing cryptographic workloads on the AM335x SoC: the general-purpose ARM core, an on-chip co-processor, and the NEON core. Our conclusion is that addressing challenges of the type above is feasible, and that key recovery attacks can be conducted with modest resources.
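For readers unfamiliar with the attack style, the sketch below shows a generic correlation-based distinguisher of the kind routinely mounted against EM (or power) traces; it is our simplified illustration on simulated data, not the paper's attack on the AM335x. Key-byte guesses are ranked by the Pearson correlation between a Hamming-weight model of a key-dependent intermediate value and the recorded samples.

```python
# Generic correlation-based key recovery (illustrative; not the paper's
# attack): rank key-byte guesses by the Pearson correlation between a
# Hamming-weight leakage model and the measured trace samples.
import numpy as np

def hw(x):
    """Hamming weight of each byte in an integer array."""
    return np.unpackbits(np.asarray(x, dtype=np.uint8)[:, None], axis=1).sum(axis=1)

def cpa_rank(traces, inputs, intermediate):
    """traces: (n, samples) floats; inputs: (n,) known bytes;
    intermediate(inputs, guess) -> targeted 8-bit value per trace.
    Returns key-byte guesses sorted from most to least likely."""
    t = (traces - traces.mean(axis=0)) / traces.std(axis=0)
    scores = np.empty(256)
    for k in range(256):
        h = hw(intermediate(inputs, k)).astype(float)
        h = (h - h.mean()) / h.std()
        scores[k] = np.abs(h @ t / len(h)).max()   # strongest peak over time
    return np.argsort(scores)[::-1]

# Simulated demo: leakage = HW(input XOR key) plus noise at one sample.
rng = np.random.default_rng(0)
key, n = 0x3C, 2000
inputs = rng.integers(0, 256, n)
traces = rng.normal(0.0, 1.0, (n, 50))
traces[:, 20] += hw(inputs ^ key)
assert cpa_rank(traces, inputs, lambda p, k: p ^ k)[0] == key
```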
Generic Construction of UC-Secure Oblivious Transfer
We show how to construct a completely generic UC-secure oblivious transfer scheme from a collision-resistant chameleon hash scheme (CH) and a CCA encryption scheme admitting a smooth projective hash function (SPHF). Our work builds on that of Abdalla et al. at Asiacrypt 2013, where the authors formalize the notion of SPHF-friendly commitments, i.e., commitments admitting an SPHF on the language of valid commitments (to allow implicit decommitment), and show how to construct a UC-secure oblivious transfer from them in a generic way. However, Abdalla et al. only gave a DDH-based construction of SPHF-friendly commitment schemes, one that moreover relies heavily on pairings. In this work, we show how to generically construct an SPHF-friendly commitment scheme from a collision-resistant CH scheme and an SPHF-friendly CCA encryption scheme. This allows us to propose an instantiation of our schemes based on DDH that is as efficient as that of Abdalla et al. but requires no pairings. Interestingly, our generic framework also allows us to propose an instantiation based on the learning with errors (LWE) assumption. Finally, we propose a further instantiation based on the decisional composite residuosity (DCR) assumption.
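As background on the central primitive, here is a minimal Python sketch of an SPHF over the classic Diffie-Hellman language (toy parameters, our illustration rather than the paper's construction): a word $(u_1,u_2)=(g_1^r,g_2^r)$ hashes to the same value whether one uses the secret hashing key or, knowing the witness $r$, only the public projection key; on words outside the language the hash stays statistically hidden, which is exactly what enables implicit decommitment.

```python
# Toy SPHF for the language {(g1^r, g2^r)} (illustration only; real
# instantiations use a prime-order group with secure parameters).
import secrets

p = 2**61 - 1        # toy prime modulus
g1, g2 = 3, 7        # two fixed group elements

def keygen():
    """Secret hashing key hk = (a, b); public projection key hp."""
    a, b = secrets.randbelow(p - 1), secrets.randbelow(p - 1)
    hp = pow(g1, a, p) * pow(g2, b, p) % p
    return (a, b), hp

def word(r):
    """A word in the language, with witness r."""
    return pow(g1, r, p), pow(g2, r, p)

def hash_with_hk(hk, w):
    (a, b), (u1, u2) = hk, w
    return pow(u1, a, p) * pow(u2, b, p) % p

def hash_with_witness(hp, r):
    return pow(hp, r, p)

hk, hp = keygen()
r = secrets.randbelow(p - 1)
# Both evaluations agree on words of the language: (g1^r)^a (g2^r)^b = hp^r.
assert hash_with_hk(hk, word(r)) == hash_with_witness(hp, r)
```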
Concurrent Secure Computation with Optimal Query Complexity
The multiple ideal query (MIQ) model [Goyal, Jain, and Ostrovsky, Crypto'10] offers a relaxed notion of security for concurrent secure computation, where the simulator is allowed to query the ideal functionality multiple times per session (as opposed to just once in the standard definition). The model provides a quantitative measure for the degradation in security under concurrent self-composition, where the degradation is measured by the number of ideal queries. However, to date, all known MIQ-secure protocols guarantee only an overall average bound on the number of queries per session throughout the execution, thus allowing the adversary to potentially fully compromise some sessions of its choice. Furthermore, [Goyal and Jain, Eurocrypt'13] rule out protocols where the simulator makes only an adversary-independent constant number of ideal queries per session.
We show the first MIQ-secure protocol with a worst-case per-session guarantee. Specifically, we give a protocol for any functionality that matches the [GJ13] bound: the simulator makes only a constant number of ideal queries in every session. The constant depends on the adversary but is independent of the security parameter.
As an immediate corollary of our main result, we obtain the first password authenticated key exchange (PAKE) protocol for the fully concurrent, multiple password setting in the standard model with no set-up assumptions.
Efficiency Evaluation of Cryptographic Protocols for Boardroom Voting
Efficiency is the bottleneck for the practical application of many cryptographic protocols in different contexts. This also holds true in the context of electronic voting, where cryptographic protocols are used to ensure a diversity of security requirements, e.g., secrecy and integrity of cast votes. A new and promising application area of electronic voting is boardroom voting, which in practice takes place very frequently and often concerns simple issues such as approving or refusing a budget. Hence, it is no surprise that a number of cryptographic protocols for boardroom voting have already been proposed.
In this work, we introduce a security model adequate for the boardroom voting context. Further, we evaluate the efficiency of four boardroom voting protocols, which, to the best of our knowledge, are the only boardroom voting protocols that satisfy our security model. Finally, we compare the performance of these protocols in different election settings.
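As a hint of the machinery such protocols are built from, the toy Python sketch below (our illustration; not one of the four evaluated protocols) shows homomorphic tallying with exponential ElGamal: encrypted yes/no votes are multiplied together and only the final tally, never an individual vote, is decrypted.

```python
# Toy homomorphic tally with exponential ElGamal (illustration only; in a
# real boardroom protocol the secret key would be shared among voters).
import secrets

p, g = 2**61 - 1, 3                 # toy group parameters

x = secrets.randbelow(p - 1)        # election secret key
h = pow(g, x, p)                    # election public key

def encrypt_vote(v: int):           # v in {0, 1}
    k = secrets.randbelow(p - 1)
    return pow(g, k, p), pow(g, v, p) * pow(h, k, p) % p

def tally(ciphertexts, n_voters: int) -> int:
    c1 = c2 = 1
    for a, b in ciphertexts:        # homomorphic combination of all votes
        c1, c2 = c1 * a % p, c2 * b % p
    gm = c2 * pow(c1, p - 1 - x, p) % p   # strip the randomness: gm = g^tally
    for total in range(n_voters + 1):     # tally is small, so brute-force it
        if pow(g, total, p) == gm:
            return total

votes = [1, 0, 1, 1, 0]
assert tally([encrypt_vote(v) for v in votes], len(votes)) == sum(votes)
```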