All papers in 2014 (Page 9 of 1029 results)

Last updated:  2014-07-07
Investigating the Feasibility of LEAP+ in ZigBee Specification
Mohammad Rezaeirad, Muhammad Aamir Iqbal, Dmitri Perkins, Magdy Bayoumi
The ZigBee specification is an emerging wireless technology designed to address the specific needs of low-cost, low-power wireless sensor networks, and is built upon the physical and medium access control layers defined in the IEEE 802.15.4 standard for wireless personal area networks (WPANs). A key component for the widespread success and applicability of ZigBee-based networking solutions will be its ability to provide enhanced security mechanisms that can scale to hundreds of nodes. Currently, however, an area of concern is the ZigBee key management scheme, which uses a centralized approach that introduces the well-known issues of limited scalability and a single point of vulnerability. Moreover, ZigBee key management uses a public key infrastructure. Due to these limitations, we suggest replacing ZigBee key management with a better candidate scheme that is decentralized, symmetric, and scalable while addressing the security requirements. In this work, we investigate the feasibility of implementing the Localized Encryption and Authentication Protocol (LEAP+), a distributed, symmetric key management scheme. LEAP+ is designed to support multiple types of keys based on the type of message being exchanged. In this paper, we first conduct a qualitative security analysis of LEAP+ and the current ZigBee key management scheme. Using the QualNet 5.0.2 simulator, we implement LEAP+ on the ZigBee platform for the very first time. Experimental results show that a distributed key management scheme such as LEAP+ provides improved security and offers good scalability.
Last updated:  2015-05-26
Cryptanalysis of SP Networks with Partial Non-Linear Layers
Achiya Bar-On, Itai Dinur, Orr Dunkelman, Virginie Lallemand, Nathan Keller, Boaz Tsaban
Design of SP networks in which the non-linear layer is applied to only a part of the state in each round was suggested by Gérard et al. at CHES 2013. Besides a performance advantage on certain platforms, such a design allows for more efficient masking techniques that can mitigate side-channel attacks with a small performance overhead. In this paper we present generic techniques for differential and linear cryptanalysis of SP networks with partial non-linear layers, including an automated characteristic search tool and dedicated key-recovery algorithms. Our techniques can be used both for cryptanalysis of such schemes and for proving their security with respect to basic differential and linear cryptanalysis, succeeding where previous automated analysis tools seem to fail. We first apply our techniques to the block cipher Zorro (designed by Gérard et al. following their methodology), obtaining practical attacks on the cipher which were fully simulated on a single desktop PC in a few days. Then, we propose a mild change to Zorro, and formally prove its security against basic differential and linear cryptanalysis. We conclude that there is no inherent flaw in the design strategy of Gérard et al., and it can be used in future designs, where our tools should prove useful.
Last updated:  2014-07-07
CKEF: A Cluster-based Key Establishment Framework for homogenous mobile and static wireless sensor networks
Mohammad Rezaeirad, Sahar Mazloom, Mahdi Orooji, Miao Jin, Magdy Bayoumi
Mission-critical applications on homogeneous mobile wireless sensor networks (HMWSNs) demand new sets of security mechanisms that are friendly to existing resource-constrained hardware platforms. To deliver adequate security, particularly in military deployments, such mechanisms have to build upon an efficient key management scheme that compensates for the constraints of HMWSNs. Cluster-based key establishment has been the prime focus of recent work on key establishment due to its significant improvements in network efficiency, security, scalability and flexibility. We therefore propose a cluster-based framework to support pre-distribution key establishment schemes for HMWSNs. The proposed framework is compatible with most pre-distribution schemes, and two instantiations are provided in this work to support our claim that the proposed framework improves the security and scalability of the adopted schemes. We develop analytical models and conduct extensive simulations to evaluate the security and performance of the proposed framework, and the network connectivity under different scenarios.
Last updated:  2014-04-09
Weak-Key Analysis of POET
Mohamed Ahmed Abdelraheem, Andrey Bogdanov, Elmar Tischhauser
We evaluate the security of the recently proposed authenticated encryption scheme POET with regard to weak keys when its universal hash functions are instantiated with finite field multiplications. We give explicit constructions for weak key classes not covered by POET's weak key testing strategy, and demonstrate how to leverage them to obtain universal forgeries.
Last updated:  2024-05-21
Adaptively Secure Functional Encryption for Finite Languages from DLIN Assumption
Tapas Pandit and Rana Barua
In this paper, we present Functional Encryption (FE) schemes for finite languages from a standard static assumption, viz., the \textit{Decisional Linear} (DLIN) assumption. These finite languages are described by deterministic finite automata. Our first scheme is ciphertext-policy functional encryption (CP-FE), where a key $\mathsf{sk}_w$ is labeled with a string $w$ over a fixed alphabet $\Sigma$ and a ciphertext $\mathsf{ct}_M$ is associated with a deterministic finite automaton (DFA) $M$ over the same alphabet $\Sigma$. The key $\mathsf{sk}_w$ can extract the message from the ciphertext $\mathsf{ct}_M$ if the DFA $M$ accepts the string $w$. This CP-FE scheme is constructed based on the attribute-based encryption (ABE) structure of Okamoto and Takashima (Asiacrypt 2012). To achieve adaptive security, we put bounds on the number of occurrences of any symbol in a string and in the set of transition tuples of a DFA. Due to this restriction, the key space (where keys are indexed by strings) becomes finite, and hence the functional scope of any DFA in our system can capture only a finite language. Similarly, we obtain our second adaptively secure FE scheme, in the key-policy flavor, from the DLIN assumption. Both schemes are shown to be secure in the standard model.
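As a side note on the functionality itself (not on the cryptographic construction): decryption with a key for a string $w$ succeeds exactly when the DFA attached to the ciphertext accepts $w$. A minimal sketch of that acceptance predicate, with a purely illustrative automaton, is below.

```python
# Sketch of the *functionality* only: a key labeled with string w opens a
# ciphertext labeled with DFA M iff M accepts w. (Illustrative, not the scheme.)

def dfa_accepts(start, accept, delta, w):
    """delta: dict mapping (state, symbol) -> state."""
    q = start
    for symbol in w:
        if (q, symbol) not in delta:
            return False        # no transition: reject
        q = delta[(q, symbol)]
    return q in accept

# Example DFA over {a, b} accepting strings with an even number of 'a's.
delta = {("even", "a"): "odd", ("odd", "a"): "even",
         ("even", "b"): "even", ("odd", "b"): "odd"}

print(dfa_accepts("even", {"even"}, delta, "abba"))  # True  (two 'a's)
print(dfa_accepts("even", {"even"}, delta, "ab"))    # False (one 'a')
```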
Last updated:  2014-11-19
Whitewash: Outsourcing Garbled Circuit Generation for Mobile Devices
Henry Carter, Charles Lever, Patrick Traynor
Garbled circuits offer a powerful primitive for computation on a user’s personal data while keeping that data private. Despite recent improvements, constructing and evaluating circuits of any useful size remains expensive on the limited hardware resources of a smartphone, the primary computational device available to most users around the world. In this work, we develop a new technique for securely outsourcing the generation of garbled circuits to a Cloud provider. By outsourcing the circuit generation, we are able to eliminate the most costly operations from the mobile device, including oblivious transfers. After proving the security of our techniques in the malicious model, we experimentally demonstrate that our new protocol, built on this role reversal, decreases execution time by 98% and reduces network costs by as much as 92% compared to previous outsourcing protocols. In so doing, we demonstrate that the use of garbled circuits on mobile devices can be made nearly as practical as it is becoming for server-class machines.
Last updated:  2014-03-28
Collision Spectrum, Entropy Loss, T-Sponges, and Cryptanalysis of GLUON-64
Léo Perrin, Dmitry Khovratovich
In this paper, we investigate the properties of iterative non-injective functions and the security of primitives where they are used. First, we introduce the Collision Probability Spectrum (CPS) parameter to quantify how far from a permutation a function is. In particular, we show that the output size decreases linearly with the number of iterations whereas the collision trees grow quadratically. Second, we investigate the T-sponge construction and show how certain CPS and rate values lead to an improved preimage attack on long messages. As an example, we find collisions for the GLUON-64 internal function, approximate its CPS, and show an attack that violates the security claims. For instance, if a message ends with a sequence of 1 Mb (respectively 1 Gb) of zeros, then our preimage search takes time $2^{115.3}$ (respectively $2^{105.3}$) instead of $2^{128}$.
Last updated:  2014-09-29
Optimizing Obfuscation: Avoiding Barrington's Theorem
Prabhanjan Ananth, Divya Gupta, Yuval Ishai, Amit Sahai
In this work, we seek to optimize the efficiency of secure general-purpose obfuscation schemes. We focus on the problem of optimizing the obfuscation of Boolean formulas and branching programs -- this corresponds to optimizing the "core obfuscator" from the work of Garg, Gentry, Halevi, Raykova, Sahai, and Waters (FOCS 2013), and all subsequent works constructing general-purpose obfuscators. This core obfuscator builds upon approximate multilinear maps, where efficiency in proposed instantiations is closely tied to the maximum number of "levels" of multilinearity required. The most efficient previous construction of a core obfuscator, due to Barak, Garg, Kalai, Paneth, and Sahai (Eurocrypt 2014), required the maximum number of levels of multilinearity to be $O(\ell s^{3.64})$, where $s$ is the size of the Boolean formula to be obfuscated, and $\ell$ is the number of input bits to the formula. In contrast, our construction only requires the maximum number of levels of multilinearity to be roughly $\ell s$, or only $s$ when considering a keyed family of formulas, namely a class of functions of the form $f_z(x)=\phi(z,x)$ where $\phi$ is a formula of size $s$. This results in significant improvements in both the total size of the obfuscation and the running time of evaluating an obfuscated formula. Our efficiency improvement is obtained by generalizing the class of branching programs that can be directly obfuscated. This generalization allows us to achieve a simple simulation of formulas by branching programs while avoiding the use of Barrington's theorem, on which all previous constructions relied. Furthermore, the ability to directly obfuscate general branching programs (without bootstrapping) allows us to efficiently apply our construction to natural function classes that are not known to have polynomial-size formulas.
Last updated:  2014-12-23
Hybrid Model of Fixed and Floating Point Numbers in Secure Multiparty Computations
Toomas Krips, Jan Willemson
This paper develops a new hybrid model of floating point numbers suitable for operations in secure multi-party computations. The basic idea is to consider the significand of the floating point number as a fixed point number and to implement elementary function applications separately on the significand. This gives the greatest performance gain for the power functions (e.g. inverse and square root), with computation speeds improving up to 18 times in certain configurations. Other functions (like the exponent and the Gaussian error function) also allow for the corresponding optimisation. We have proposed new polynomials for approximation, and implemented and benchmarked all our algorithms on the Sharemind secure multi-party computation framework.
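To picture the hybrid representation, here is a minimal plaintext sketch (ordinary Python, no secret sharing and nothing Sharemind-specific; all names and constants are illustrative): the significand is kept as a fixed-point integer, the exponent is handled separately, and the inverse only needs a fixed-point Newton iteration on the significand plus an exponent flip.

```python
# Plaintext sketch of the hybrid idea: significand as a fixed-point integer,
# exponent handled separately. Not the secure protocol, just the representation.

F = 32                       # fractional bits of the fixed-point significand
ONE = 1 << F

def to_hybrid(x):
    """Split x > 0 into (m, e) with x = m * 2**e and m in [1, 2) as fixed point."""
    e = 0
    while x >= 2.0:
        x /= 2.0
        e += 1
    while x < 1.0:
        x *= 2.0
        e -= 1
    return int(round(x * ONE)), e

def fxp_mul(a, b):
    return (a * b) >> F

def fxp_inverse(m, iters=6):
    """Newton iteration y <- y*(2 - m*y) for 1/m, with m in [1, 2) fixed point."""
    y = ONE // 2 + ONE // 4          # rough initial guess ~0.75, enough to converge
    for _ in range(iters):
        y = fxp_mul(y, 2 * ONE - fxp_mul(m, y))
    return y

def inverse(x):
    m, e = to_hybrid(x)
    inv_m = fxp_inverse(m)           # fixed-point work only touches the significand
    return (inv_m / ONE) * 2.0 ** (-e)

print(inverse(10.0))                 # ~0.1
```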
Last updated:  2016-08-11
Total Break of Zorro using Linear and Differential Attacks
Shahram Rasoolzadeh, Zahra Ahmadian, Mahmoud Salmasizadeh, Mohammad Reza Aref
An AES-like lightweight block cipher, namely Zorro, was proposed at CHES 2013. While it has a 16-byte state, it uses only 4 S-boxes per round. This weak nonlinearity was widely criticized, insofar as it has been directly exploited in all the attacks on Zorro reported so far, including the weak-key, reduced-round, and even full-round attacks. In this paper, using some properties discovered by Wang et al., we present new differential and linear attacks on Zorro, both of which recover the full secret key with practical complexities. These attacks are based on very efficient distinguishers that have only two active S-boxes per four rounds. The time complexities of our differential and linear attacks are $2^{56.76}$ and $2^{45.50}$, and the data complexities are $2^{56.73}$ chosen plaintexts and $2^{45.44}$ known plaintexts, respectively. The results clearly show that the block cipher Zorro does not have enough security against differential and linear attacks.
Last updated:  2014-03-27
Dynamic Searchable Encryption via Blind Storage
Muhammad Naveed, Manoj Prabhakaran, Carl A. Gunter
Dynamic Searchable Symmetric Encryption allows a client to store a dynamic collection of encrypted documents with a server, and later quickly carry out keyword searches on these encrypted documents, while revealing minimal information to the server. In this paper we present a new dynamic SSE scheme that is simpler and more efficient than existing schemes while revealing less information to the server than prior schemes, achieving fully adaptive security against honest-but-curious servers. We implemented a prototype of our scheme and demonstrated its efficiency on datasets from prior work. Apart from its concrete efficiency, our scheme is also simpler: in particular, it does not require the server to support any operation other than upload and download of data. Thus the server in our scheme can be based solely on a cloud storage service, rather than a cloud computation service as well, as in prior work. In building our dynamic SSE scheme, we introduce a new primitive called Blind Storage, which allows a client to store a set of files on a remote server in such a way that the server does not learn how many files are stored, or the lengths of the individual files; as each file is retrieved, the server learns about its existence (and can notice the same file being downloaded subsequently), but the file’s name and contents are not revealed. This is a primitive with several applications other than SSE, and is of independent interest.
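One way to picture the Blind Storage primitive, loosely following the abstract's description and with the concrete details assumed rather than taken from the paper: the client keeps a single large array of fixed-size blocks at the server and derives, from a secret key and the file name, a pseudorandom set of block positions where the (encrypted) file lives, so the server only ever sees accesses to apparently unrelated blocks. The toy sketch below shows just that index-derivation step, with HMAC standing in for the PRF; encryption, block formatting and collision handling are omitted.

```python
# Toy index derivation for a Blind-Storage-like layout: a PRF on the file name
# selects which blocks of one big array hold the file. Encryption, block
# formatting and collision handling from the actual scheme are omitted.
import hmac, hashlib

def prf(key: bytes, msg: bytes) -> int:
    return int.from_bytes(hmac.new(key, msg, hashlib.sha256).digest(), "big")

def block_indices(key: bytes, filename: str, count: int, array_size: int):
    """Derive `count` pseudorandom (not necessarily distinct) block positions."""
    return [prf(key, f"{filename}|{i}".encode()) % array_size for i in range(count)]

key = b"\x00" * 32                       # illustrative client secret
print(block_indices(key, "report.pdf", count=4, array_size=1 << 20))
```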
Last updated:  2014-03-24
A Practical Universal Forgery Attack against PAES-8
Yu Sasaki, Lei Wang
PAES is an authenticated encryption scheme designed by Ye et al. and submitted to the CAESAR competition. The designers claim that PAES-8, which is one of the designs in the PAES family, provides 128-bit security in the nonce misuse model. In this note, we show our forgery attack against PAES-8. Our attack works in the nonce misuse model. The attack exploits the slow propagation of message differences. The attack is very close to a universal forgery attack. As long as the target message is not too short, e.g. more than 10 blocks (160 bytes), a tag is forged with only $2^{11}$ encryption oracle calls, $2^{11}$ computational cost, and negligible memory.
Last updated:  2014-03-24
A Forgery Attack against PANDA-s
Yu Sasaki, Lei Wang
PANDA is an authenticated encryption scheme designed by Ye et al. and submitted to the CAESAR competition. The designers claim that PANDA-s, which is one of the designs in the PANDA family, provides 128-bit security in the nonce misuse model. In this note, we describe our forgery attack against PANDA-s. Our attack works in the nonce misuse model. It exploits the fact that the message processing function and the finalization function are identical, and thus a variant of the length-extension attack can be applied. We can find a tag for a pre-specified formatted message with 2 encryption oracle calls, $2^{64}$ computational cost, and negligible memory.
Last updated:  2015-08-26
Implementation and Improvement of the Partial Sum Attack on 6-round AES
Francesco Aldà, Riccardo Aragona, Lorenzo Nicolodi, Massimiliano Sala
The Partial Sum Attack is one of the most powerful attacks, independent of the key schedule, developed in the last 15 years against reduced-round versions of AES. In this paper, we introduce a slight improvement to the basic attack which lowers the number of chosen plaintexts needed to successfully mount it. Our version of the attack on 6-round AES can be carried out completely in practice, as we demonstrate providing a full implementation. We also detail the structure of our implementation, showing the performances we achieve.
Last updated:  2014-03-24
Attack On the Markov Problem
James L. Adams
In 2000, Ko proposed a potential hard problem called the Markov problem. We give an algorithm, for certain parameters, for solving the Markov problem. The Markov problem is related to the knot recognition problem; hence we also obtain a new algorithm for the knot recognition problem. This knot recognition algorithm may be applied to a previously proposed cryptosystem that uses knots.
Last updated:  2014-03-24
Squaring Algorithms with Delayed Carry Method and Efficient Parallelization
Vladislav Kovtun, Andrew Okhrimenko
Increasing amounts of information that need to be protected place specific requirements on information security systems. The main goal of this paper is to find ways to increase the performance of public-key cryptographic transformations by increasing the performance of integer squaring. The authors use the delayed carry mechanism and the approaches to effective parallelization of the Comba multiplication algorithm that they proposed previously. They use the idea of accumulating carries by adding the products of the relevant machine words in columns. As a result, it becomes possible to perform the addition of such products within each column independently of the other columns. However, the independent accumulation of products and carries requires a correction of the intermediate results to account for the accumulated carries. Due to the independence of the accumulation in the columns, it becomes possible to parallelize the accumulation of products, which allows several approaches to be formulated. This paper gives theoretical estimates of the computational complexity of the proposed squaring algorithms. Software implementations of the algorithms in C++ provide practical performance results that are consistent with the theoretical estimates. The authors are the first to propose applying the delayed carry method and these parallelization techniques, previously proposed for integer multiplication, to squaring algorithms.
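To make the column-wise accumulation concrete, the sketch below squares a multi-precision integer held as base-$2^{32}$ words: each column is accumulated independently (squares once, cross products doubled) and carries are propagated only in a final pass. This is a plain sequential illustration of the delayed-carry idea, not the authors' C++ code, and it ignores the accumulator-overflow correction that a fixed-width implementation needs.

```python
# Column-wise (Comba-style) squaring with delayed carries: every column is
# accumulated on its own, and carries are propagated once at the end.
W = 32
BASE = 1 << W

def square_delayed_carry(a):
    """a: little-endian list of base-2^32 words; returns the words of a*a."""
    n = len(a)
    cols = [0] * (2 * n)                      # wide per-column accumulators
    for k in range(2 * n - 1):
        for i in range(max(0, k - n + 1), min(k // 2 + 1, n)):
            j = k - i
            p = a[i] * a[j]
            cols[k] += p if i == j else 2 * p  # cross products appear twice
    out, carry = [], 0                         # single carry-propagation pass
    for k in range(2 * n):
        s = cols[k] + carry
        out.append(s % BASE)
        carry = s // BASE
    assert carry == 0
    return out

def words_to_int(ws):
    return sum(w << (W * i) for i, w in enumerate(ws))

a = [0xFFFFFFFF, 0x12345678, 0x9ABCDEF0]
assert words_to_int(square_delayed_carry(a)) == words_to_int(a) ** 2
```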
Last updated:  2015-05-31
Secret-Sharing for NP
Ilan Komargodski, Moni Naor, Eylon Yogev
A computational secret-sharing scheme is a method that enables a dealer, that has a secret, to distribute this secret among a set of parties such that a "qualified" subset of parties can efficiently reconstruct the secret while any "unqualified" subset of parties cannot efficiently learn anything about the secret. The collection of "qualified" subsets is defined by a Boolean function. It has been a major open problem to understand which (monotone) functions can be realized by a computational secret-sharing scheme. Yao suggested a method for secret-sharing for any function that has a polynomial-size monotone circuit (a class which is strictly smaller than the class of monotone functions in P). Around 1990, Rudich raised the possibility of obtaining secret-sharing for all monotone functions in NP: in order to reconstruct the secret a set of parties must be "qualified" and provide a witness attesting to this fact. Recently, Garg et al. (STOC 2013) put forward the concept of witness encryption, where the goal is to encrypt a message relative to a statement "x in L" for a language L in NP such that anyone holding a witness to the statement can decrypt the message; however, if x is not in L, then it is computationally hard to decrypt. Garg et al. showed how to construct several cryptographic primitives from witness encryption and gave a candidate construction. One can show that computational secret-sharing implies witness encryption for the same language. Our main result is the converse: we give a construction of a computational secret-sharing scheme for any monotone function in NP assuming witness encryption for NP and one-way functions. As a consequence we get a completeness theorem for secret-sharing: a computational secret-sharing scheme for any single monotone NP-complete function implies a computational secret-sharing scheme for every monotone function in NP.
Last updated:  2014-03-26
Remarks on the Pocklington and Padró-Sáez Cube Root Algorithm in $\mathbb F_q$
Geon Heo, Seokhwan Choi, Kwang Ho Lee, Namhun Koo, Soonhak Kwon
We clarify and generalize a cube root algorithm in $\mathbb F_q$ proposed by Pocklington, and later rediscovered by Padró and Sáez. We correct some mistakes in the result of Padró and Sáez and give a full generalization of their result. We also compare the implementation of our proposed algorithm with the two most popular cube root algorithms, namely the Adleman-Manders-Miller algorithm and the Cipolla-Lehmer algorithm. To the authors' knowledge, our comparison is the first that covers these three fundamental algorithms together.
Last updated:  2014-07-22
Some Randomness Experiments on TRIVIUM
Subhabrata Samajder, Palash Sarkar
The first output bit of TRIVIUM can be considered to be a Boolean function of 80 key and 80 IV variables. Choose $n$ ($n\leq 30$) of the key variables and set the other variables to constant values. This gives an $n$-variable Boolean function. In this work, we experimentally find examples of such Boolean functions which deviate from a uniform random $n$-variable Boolean function in a statistically significant manner. This improves upon the previously reported experimental 'non-randomness' result using the cube testing methodology by Aumasson et al. in 2009 for TRIVIUM restricted to 885 rounds. In contrast, we work with full TRIVIUM and, instead of using the cube methodology, we directly find the algebraic normal form of the restricted version of the first output bit of TRIVIUM. We note, however, that our work does not indicate any weakness of TRIVIUM. On the other hand, the kind of experiments that we conduct for TRIVIUM can also be conducted for other ciphers.
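A much-simplified version of such an experiment is sketched below: fix the IV and all but $n$ key bits, evaluate the first output bit of full TRIVIUM for every assignment of the chosen key bits, and check whether the resulting truth table is as balanced as that of a uniform random Boolean function. The balance z-score used here is deliberately cruder than the statistics in the paper, and the particular variable positions chosen are arbitrary.

```python
# Crude balance experiment on the first TRIVIUM output bit, restricted to n
# key variables (all other key bits and the IV fixed to zero). The paper's
# actual tests on the algebraic normal form are more refined than this.
from itertools import product

def trivium_first_bit(key_bits, iv_bits):
    """key_bits, iv_bits: lists of 80 bits. Returns the first keystream bit."""
    s = [0] * 289                          # 1-indexed state s[1..288]
    s[1:81] = key_bits
    s[94:174] = iv_bits
    s[286] = s[287] = s[288] = 1
    for rnd in range(4 * 288 + 1):         # 1152 blank rounds, then one output
        t1 = s[66] ^ s[93]
        t2 = s[162] ^ s[177]
        t3 = s[243] ^ s[288]
        if rnd == 4 * 288:
            return t1 ^ t2 ^ t3
        t1 ^= (s[91] & s[92]) ^ s[171]
        t2 ^= (s[175] & s[176]) ^ s[264]
        t3 ^= (s[286] & s[287]) ^ s[69]
        s[2:94], s[1] = s[1:93], t3        # (s1..s93)   <- (t3, s1..s92)
        s[95:178], s[94] = s[94:177], t1   # (s94..s177) <- (t1, s94..s176)
        s[179:289], s[178] = s[178:288], t2  # (s178..s288) <- (t2, s178..s287)

def balance_z_score(var_positions):
    """z-score of the ones-count over all assignments of the chosen key bits."""
    n = len(var_positions)
    ones = 0
    for bits in product((0, 1), repeat=n):
        key = [0] * 80
        for pos, b in zip(var_positions, bits):
            key[pos] = b
        ones += trivium_first_bit(key, [0] * 80)
    mean, std = 2 ** n / 2, (2 ** n) ** 0.5 / 2
    return (ones - mean) / std

print(balance_z_score([0, 7, 19, 23, 31, 42, 55, 60, 66, 77]))  # n = 10
```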
Last updated:  2014-03-22
Structural Cryptanalysis of McEliece Schemes with Compact Keys
Jean-Charles Faugère, Ayoub Otmani, Ludovic Perret, Frédéric de Portzamparc, Jean-Pierre Tillich
A very popular trend in code-based cryptography is to decrease the public-key size by focusing on subclasses of alternant/Goppa codes which admit a very compact public matrix, typically quasi-cyclic (QC), quasi-dyadic (QD), or quasi-monoidic (QM) matrices. We show that the very same reason which allows the construction of a compact public key makes the key-recovery problem intrinsically much easier. The gain in public-key size induces an important security drop, which is as large as the compression factor $p$ of the public key. The fundamental remark is that from the $k\times n$ public generator matrix of a compact McEliece, one can construct a $k/p \times n/p$ generator matrix which is -- from an attacker's point of view -- as good as the initial public key. We call this new smaller code the {\it folded code}. Any key-recovery attack can be deployed equivalently on this smaller generator matrix. To mount the key-recovery in practice, we also improve the algebraic technique of Faugère, Otmani, Perret and Tillich (FOPT). In particular, we introduce new algebraic equations allowing us to include codes defined over any prime field in the scope of our attack. We describe a so-called "structural elimination", a new algebraic manipulation which simplifies the key-recovery system. As a proof of concept, we report successful attacks on many cryptographic parameters available in the literature. All the parameters of CFS signatures based on QD/QM codes that have been proposed can be broken by this approach. In most cases, our attack takes a few seconds (the hardest case requires less than $2$ hours). In the encryption case, the algebraic systems are harder to solve in practice. Still, our attack succeeds against several cryptographic challenges proposed for QD and QM encryption schemes, but there are still some proposed parameters that are out of reach of the methods given here. However, regardless of the key-recovery attack used against the folded code, there is an inherent weakness arising from Goppa codes with QM or QD symmetries. It is possible to derive from the public key a much smaller public key corresponding to the folding of the original QM or QD code, where the reduction factor of the code length is precisely the order of the QM or QD group used for reducing the key size. To summarize, the security of such schemes relies not on the bigger compact public matrix but on the small folded code, which can be efficiently broken in practice with an algebraic attack for a large set of parameters.
Last updated:  2014-03-22
A Little Honesty Goes a Long Way: The Two-Tier Model for Secure Multiparty Computation
Juan A. Garay, Ran Gelles, David S. Johnson, Aggelos Kiayias, Moti Yung
Secure multiparty computation (MPC) as a service is becoming a tangible reality. In such a service, a population of clients wish to utilize a set of servers to delegate privately and reliably a given computation on their inputs. MPC protocols have a number of desired properties including tolerating active misbehavior by some of the servers and guaranteed output delivery. A fundamental result is that in order to achieve the above, an honest majority among servers is necessary. There are settings, however, where this condition might be overly restrictive, making it important to investigate models where this impossibility result can be circumvented, allowing secure computation to be performed even when the number of malicious participants outweighs the number of honest participants. To this end, we introduce the two-tier model for MPC, where a set of $m$ parties that are guaranteed to be honest (the first tier) remains "hidden" within a set of $n-m$ servers which are of dubious trustworthiness (the second tier), and where the objective is to perform MPC withstanding a number of active misbehaviors that is larger than $m/2$. Indeed, assuming $\alpha n$ of the second-tier servers are dishonest (where $\alpha\in (0,1)$), we present an MPC protocol that can withstand up to $(1-\epsilon)(1-\alpha)n/2$ additional faults, for any $\epsilon>0$ and $m = \omega(\log n)$. Somewhat surprisingly, this allows the total number of faulty parties to exceed $n/2$ across both tiers. We demonstrate that the two-tier model naturally arises in various settings, as in the case, for example, of a resource-constrained service provider wishing to utilize a pre-existing set of servers.
Last updated:  2015-11-25
Offline Dictionary Attack on Password Authentication Schemes using Smart Cards
Ding Wang, Ping Wang
The design of secure and efficient smart-card-based password authentication schemes remains a challenging problem today despite two decades of intensive research in the security community, and the current crux lies in how to achieve truly two-factor security even if the smart cards can be tampered with. In this paper, we analyze two recent proposals in this area, namely, Hsieh-Leu's scheme and Wang's PSCAV scheme. We demonstrate that, under their non-tamper-resistance assumption on the smart cards, both schemes are still prone to offline dictionary attack, in which an attacker can obtain the victim's password when getting temporary access to the victim's smart card. This indicates that compromising a single factor (i.e., the smart card) of these two schemes leads to the downfall of both factors (i.e., both the smart card and the password), thereby invalidating their claim of preserving two-factor security. Remarkably, our attack on the latter protocol, which is not captured in Wang's original protocol security model, reveals a new and realistic attacking scenario and gives rise to the strongest adversary model so far (note that Wang's PSCAV scheme is secure within its own, but weak, security model). In addition, we make the first attempt to explain why smart cards, instead of common cheap storage devices (e.g., USB sticks), are preferred in most two-factor authentication schemes for security-critical applications.
Last updated:  2014-08-29
Expressive Attribute-Based Encryption with Constant-Size Ciphertexts from the Decisional Linear Assumption
Katsuyuki Takashima
We propose a key-policy attribute-based encryption (KP-ABE) scheme with constant-size ciphertexts, whose semi-adaptive security is proven under the decisional linear (DLIN) assumption in the standard model. The access structure is expressive, that is, given by non-monotone span programs. It also has fast decryption, i.e., a decryption includes only a constant number of pairing operations. As an application of our KP-ABE construction, we also propose a fully secure attribute-based signature scheme with constant-size secret (signing) keys from the DLIN assumption. To achieve the above results, we extend the sparse matrix technique on dual pairing vector spaces. In particular, several algebraic properties of an elaborately chosen sparse matrix group are applied in the dual system security proofs.
Last updated:  2014-03-20
Reconsidering Generic Composition
Chanathip Namprempre, Phillip Rogaway, Thomas Shrimpton
In the context of authenticated encryption (AE), generic composition has referred to the construction of an AE scheme by gluing together a conventional (privacy-only) encryption scheme and a MAC. Since the work of Bellare and Namprempre (2000) and then Krawczyk (2001), the conventional wisdom has become that there are three forms of generic composition, with Encrypt-then-MAC the only one that generically works. However, many caveats to this understanding have surfaced over the years. Here we explore this issue further, showing how this understanding oversimplifies the situation because it ignores the results’ sensitivity to definitional choices. When encryption is formalized differently, making it either IV-based or nonce-based, rather than probabilistic, and when the AE goal is likewise changed to take in a nonce, qualitatively different results emerge. We explore these alternative versions of the generic-composition story. We also evidence the overreaching understanding of prior generic-composition results by pointing out that the Encrypt-then-MAC mechanism of ISO 19772 is completely wrong.
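For readers who want the composition in front of them, here is a minimal sketch of the textbook nonce-based Encrypt-then-MAC composition (standard library only, with HMAC-SHA-256 as the MAC and a toy SHA-256 counter keystream standing in for the privacy-only encryption scheme; this is generic background, not a construction from the paper, and the toy cipher is not for production use).

```python
# Nonce-based Encrypt-then-MAC: encrypt with a privacy-only scheme, then MAC
# the nonce and the ciphertext. Toy keystream cipher for illustration only.
import hmac, hashlib

def keystream_xor(key: bytes, nonce: bytes, data: bytes) -> bytes:
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        block = hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        out.extend(block)
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

def etm_encrypt(enc_key, mac_key, nonce, plaintext):
    c = keystream_xor(enc_key, nonce, plaintext)
    tag = hmac.new(mac_key, nonce + c, hashlib.sha256).digest()
    return c, tag

def etm_decrypt(enc_key, mac_key, nonce, c, tag):
    expected = hmac.new(mac_key, nonce + c, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    return keystream_xor(enc_key, nonce, c)

ek, mk, n = b"\x01" * 32, b"\x02" * 32, b"\x03" * 12
c, t = etm_encrypt(ek, mk, n, b"attack at dawn")
assert etm_decrypt(ek, mk, n, c, t) == b"attack at dawn"
```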
Last updated:  2014-06-04
Unified Oblivious-RAM: Improving Recursive ORAM with Locality and Pseudorandomness
Ling Ren, Christopher Fletcher, Xiangyao Yu, Albert Kwon, Marten van Dijk, Srinivas Devadas
Oblivious RAM (ORAM) is a cryptographic primitive that hides memory access patterns to untrusted storage. ORAM may be used in secure processors for encrypted computation and/or software protection. While recursive Path ORAM is currently the most practical ORAM for secure processors, it still incurs large performance and energy overhead and is the performance bottleneck of recently proposed secure processors. In this paper, we propose two optimizations to recursive Path ORAM. First, we identify a type of program locality in its operations to improve performance. Second, we use a pseudorandom function to compress the position map. But applying these two techniques in recursive Path ORAM breaks ORAM security. To securely take advantage of the two ideas, we propose unified ORAM. Unified ORAM improves performance both asymptotically and empirically. Empirically, our experiments show that unified ORAM reduces data movement from ORAM by half and improves benchmark performance by 61% as compared to recursive Path ORAM.
Last updated:  2014-05-02
ChipWhisperer: An Open-Source Platform for Hardware Embedded Security Research
Colin O'Flynn, Zhizhang (David) Chen
This paper introduces a complete side channel analysis toolbox, inclusive of the analog capture hardware, target device, capture software, and analysis software. The highly modular design allows use of the hardware and software with a variety of existing systems. The hardware uses a synchronous capture method which greatly reduces the required sample rate, while also reducing the data storage requirement, and improving synchronization of traces. The synchronous nature of the hardware lends itself to fault injection, and a module to generate glitches of programmable width is also provided. The entire design (hardware and software) is open-source, and maintained in a publicly available repository. Several long example capture traces are provided for researchers looking to evaluate standard cryptographic implementations.
Last updated:  2015-07-24
Privacy-Preserving Implicit Authentication
Nashad Ahmed Safa, Reihaneh Safavi-Naini, Siamak F. Shahandashti
In an implicit authentication system, a user profile is used as an additional factor to strengthen the authentication of mobile users. The profile consists of features that are constructed using the history of user actions on her mobile device over time. The profile is stored on the server and is used to authenticate an access request originated from the device at a later time. An access request will include a vector of recent measurements of the features on the device, that will be subsequently matched against the features stored at the server, to accept or reject the request. The features however include private information such as user location or web sites that have been visited. We propose a privacy-preserving implicit authentication system that achieves implicit authentication without revealing information about the usage profiles of the users to the server. We propose an architecture, give a formal security model and a construction with provable security in two settings where: (i) the device follows the protocol, and (ii) the device is captured and behaves maliciously.
Last updated:  2015-10-20
Efficiently Verifiable Computation on Encrypted Data
Dario Fiore, Rosario Gennaro, Valerio Pastro
We study the task of efficient verifiable delegation of computation on encrypted data. First, we improve previous definitions in order to tolerate adversaries that learn whether or not clients accept the result of a delegated computation. Then, in this strong model, we show a scheme for arbitrary computations, and we propose highly efficient schemes for delegation of various classes of functions, such as linear combinations, high-degree univariate polynomials, and multivariate quadratic polynomials. Notably, the latter class includes many useful statistics. Using our solution, a client can store a large encrypted dataset with a server, query statistics over this data, and receive encrypted results that can be efficiently verified and decrypted. As a key contribution for the efficiency of our schemes, we develop a novel homomorphic hashing technique that allows us to efficiently authenticate computations, at the same cost as if the data were in the clear, avoiding a $10^4$ overhead, which would occur with a naive approach. We confirm our theoretical analysis with extensive implementation tests that show the practical feasibility of our schemes.
Last updated:  2014-05-29
From Input Private to Universally Composable Secure Multiparty Computation Primitives
Dan Bogdanov, Peeter Laud, Sven Laur, Pille Pullonen
Secure multiparty computation systems are commonly built from a small set of primitive components. Composability of security notions has a central role in the analysis of such systems, since it allows us to deduce security properties of complex protocols from the properties of their components. We show that the standard notions of universally composable security are overly restrictive in this context and can lead to protocols with sub-optimal performance. As a remedy, we introduce a weaker notion of privacy that is satisfied by simpler protocols and is preserved by composition. After that, we fix a passive security model and show how to convert a private protocol into a universally composable protocol. As a result, we obtain modular security proofs without performance penalties.
Last updated:  2014-06-13
Automatic Protocol Selection in Secure Two-Party Computations
Florian Kerschbaum, Thomas Schneider, Axel Schröpfer
Performance of secure computation is still often an obstacle to its practical adoption. There are different protocols for secure computation that compete for the best performance. In this paper we propose automatic protocol selection, which selects a protocol for each operation, resulting in a mix with the best performance so far. Based on an elaborate performance model, we propose an optimization algorithm and an efficient heuristic for this selection problem. We show that our mixed protocols achieve the best performance on a set of use cases. Furthermore, our results underpin that the selection problem is so complicated and large in size that a programmer is unlikely to manually make the optimal selection. Our proposed algorithms nevertheless can be integrated into a compiler in order to yield the best (or near-optimal) performance.
Last updated:  2014-07-17
Doubly Spatial Encryption from DBDH
Jie Chen, Hoeteck Wee
Functional encryption is an emerging paradigm for public-key encryption which enables fine-grained control of access to encrypted data. Doubly-spatial encryption (DSE) captures all functionalities that we know how to realize via pairings-based assumptions, including (H)IBE, IPE, NIPE, CP-ABE and KP-ABE. In this paper, we propose a construction of DSE from the decisional bilinear Diffie-Hellman (DBDH) assumption. This also yields the first non-zero inner product encryption (NIPE) scheme based on DBDH. Quite surprisingly, we know how to realize NIPE and DSE from stronger assumptions in bilinear groups but not from the basic DBDH assumption. Along the way, we present a novel algebraic characterization of *NO* instances for the DSE functionality, which we use crucially in the proof of security.
Last updated:  2014-03-17
Fast GPGPU-Based Elliptic Curve Scalar Multiplication
Eric M. Mahé, Jean-Marie Chauvet
This paper presents a fast implementation to compute the scalar multiplication of elliptic curve points based on a ``General-Purpose computing on Graphics Processing Units'' (GPGPU) approach. A GPU implementation using Dan Bernstein's Curve25519, an elliptic curve over a 255-bit prime field complying with the new 128-bit security level, computes the scalar multiplication in less than a microsecond on AMD's R9 290X GPU. The presented methods and implementation considerations can be applied to any parallel architecture.
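As a functional reference, independent of any GPU specifics, the operation being accelerated is the x-only Montgomery ladder on Curve25519 in the style of RFC 7748; a compact, unoptimized and non-constant-time CPU sketch is below (it is not the paper's GPU implementation).

```python
# Plain-Python x-only Montgomery ladder on Curve25519 (RFC 7748 style), shown
# only as a functional reference; not constant-time, not GPU-optimized.
P = 2**255 - 19
A24 = 121665

def x25519(k_int, u):
    x1 = u % P
    x2, z2, x3, z3 = 1, 0, x1, 1
    swap = 0
    for t in reversed(range(255)):
        kt = (k_int >> t) & 1
        swap ^= kt
        if swap:                          # conditional swap (not constant-time here)
            x2, x3, z2, z3 = x3, x2, z3, z2
        swap = kt
        a, b = (x2 + z2) % P, (x2 - z2) % P
        aa, bb = a * a % P, b * b % P
        e = (aa - bb) % P
        c, d = (x3 + z3) % P, (x3 - z3) % P
        da, cb = d * a % P, c * b % P
        x3 = (da + cb) * (da + cb) % P
        z3 = x1 * pow(da - cb, 2, P) % P
        x2 = aa * bb % P
        z2 = e * (aa + A24 * e) % P
    if swap:
        x2, x3, z2, z3 = x3, x2, z3, z2
    return x2 * pow(z2, P - 2, P) % P     # affine x-coordinate

def clamp(k_bytes):
    k = bytearray(k_bytes)
    k[0] &= 248
    k[31] &= 127
    k[31] |= 64
    return int.from_bytes(k, "little")

# The Curve25519 base point has u = 9.
print(hex(x25519(clamp(bytes([1]) + bytes(31)), 9)))
```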
Last updated:  2014-03-16
Breaking POET Authentication with a Single Query
Jian Guo, Jérémy Jean, Thomas Peyrin, Wang Lei
In this short article, we describe a very practical and simple attack on the authentication part of the POET authenticated encryption mode proposed at FSE 2014. POET is a provably secure scheme that was designed to resist various attacks where the adversary is allowed to repeat the nonce, or even where the message is output before the validity of the tag is verified when querying the decryption oracle. However, we demonstrate that using only a single encryption query and a negligible amount of computation, even without any special misuse from the attacker, it is possible to generate many valid ciphertext/tag pairs for POET. Our work shows that one should not use POET for any application where the authentication property is required. Furthermore, we propose a possible patch to overcome this particular issue, yet without backing up this patch with a security proof.
Last updated:  2014-05-19
Crypto-Multimedia
Benlcouiri Younes, Azizi Abdelmalek, Moulay Chrif Ismaili
This paper is focused on securing the storage, transmission and traceability of digital images. It consists in the design of cryptographic algorithms appropriate to the case of still and moving images. In this sense, we introduce two approaches that differ in how they realize confusion and diffusion, using the principles of substitution and/or transposition to secure the JPEG and MPEG formats.
Last updated:  2014-06-28
Low Overhead Broadcast Encryption from Multilinear Maps
Dan Boneh, Brent Waters, Mark Zhandry
We use multilinear maps to provide a solution to the long-standing problem of public-key broadcast encryption where all parameters in the system are small. In our constructions, ciphertext overhead, private key size, and public key size are all poly-logarithmic in the total number of users. The systems are fully collusion-resistant against any number of colluders. All our systems are based on an O(log N)-way multilinear map to support a broadcast system for N users. We present three constructions based on different types of multilinear maps and providing different security guarantees. Our systems naturally give identity-based broadcast systems with short parameters.
Last updated:  2014-03-13
Cryptanalysis and Security Enhancement of Two Advanced Authentication Protocols
Sai Raghu Talluri, Swapnoneel Roy
In this work we consider two protocols for cryptanalysis and security enhancement. The first one, by Jiang et al., is a password-based authentication scheme which does not use smart cards. We note that this scheme is an improvement over Chen et al.'s scheme, shown vulnerable to the off-line dictionary attack by Jiang et al. We perform a cryptanalysis of Jiang et al.'s improved protocol and observe that it is prone to the clogging attack, a kind of denial of service (DoS) attack. We then suggest an improvement on the protocol to prevent the clogging attack. The other protocol we consider for analysis is by Wang et al. This is a smart-card-based authentication protocol. We again perform the clogging (DoS) attack on this protocol via replay. We observe that all smart-card-based authentication protocols which precede the one by Wang et al., and require the server to compute the computationally intensive modular exponentiation, are prone to the clogging attack. We suggest (another) improvement on the protocol to prevent the clogging attack, which also applies to the protocol by Jiang et al.
Last updated:  2015-01-25
JHAE: A Novel Permutation-Based Authenticated Encryption Mode Based on the Hash Mode JH
Javad Alizadeh, Mohammad Reza Aref, Nasour Bagheri
In this paper we present JHAE, an authenticated encryption (AE) mode based on the JH hash mode. JHAE is an on-line and single-pass dedicated AE mode based on a permutation that supports optional associated data (AD). We prove that this mode, based on an ideal permutation, achieves privacy and integrity up to $O(2^{n/2})$ queries, where the length of the used permutation is $2n$. To decrypt, JHAE does not require the inverse of its underlying permutation and therefore saves area. JHAE is used by Artemia, one of the CAESAR candidates.
Last updated:  2014-03-12
Two-sources Randomness Extractors for Elliptic Curves
Abdoul Aziz Ciss
This paper studies the task of two-source randomness extractors for elliptic curves defined over finite fields $K$, where $K$ can be a prime or a binary field. In fact, we introduce new constructions of functions over elliptic curves which take as input two random points from two different subgroups. In other words, for a given elliptic curve $E$ defined over a finite field $\mathbb{F}_q$ and two random points $P \in \mathcal{P}$ and $Q\in \mathcal{Q}$, where $\mathcal{P}$ and $\mathcal{Q}$ are two subgroups of $E(\mathbb{F}_q)$, our function extracts the least significant bits of the abscissa of the point $P\oplus Q$ when $q$ is a large prime, and the $k$ first $\mathbb{F}_p$ coefficients of the abscissa of the point $P\oplus Q$ when $q = p^n$, where $p$ is a prime greater than $5$. We show that the extracted bits are close to uniform. Our construction extends some interesting randomness extractors for elliptic curves, namely those defined in \cite{op} and \cite{ciss1,ciss2}, when $\mathcal{P} = \mathcal{Q}$. The proposed constructions can be used in any cryptographic scheme which requires the extraction of random bits from two sources over elliptic curves, namely in key exchange protocols, the design of strong pseudo-random number generators, etc.
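In the prime-field case the extractor itself is simple to state: output the $k$ least significant bits of the abscissa of $P \oplus Q$. The toy sketch below does exactly that over a small illustrative short-Weierstrass curve (the curve, the points and $k$ are made up for illustration and far too small to mean anything security-wise).

```python
# Toy illustration over a small prime field: add two points on a short
# Weierstrass curve and output the k least significant bits of x(P + Q).
q, a, b = 10007, 2, 3                     # toy curve y^2 = x^3 + 2x + 3 over F_q

def ec_add(P1, P2):
    if P1 is None:
        return P2
    if P2 is None:
        return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % q == 0:
        return None                        # point at infinity
    if P1 == P2:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, q) % q
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, q) % q
    x3 = (lam * lam - x1 - x2) % q
    y3 = (lam * (x1 - x3) - y1) % q
    return (x3, y3)

def extract_bits(P1, P2, k):
    x3, _ = ec_add(P1, P2)
    return x3 & ((1 << k) - 1)             # k least significant bits of x(P + Q)

P_pt = (3, 6)       # 6^2 = 36 = 3^3 + 2*3 + 3
Q_pt = (23, 47)     # 47^2 = 2209 = (23^3 + 2*23 + 3) mod q
print(extract_bits(P_pt, Q_pt, k=8))
```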
Last updated:  2014-09-22
Side-Channel Analysis on Blinded Regular Scalar Multiplications
Benoit Feix, Mylène Roussellet, Alexandre Venelli
We present a new side-channel attack path threatening state-of-the-art protected implementations of embedded elliptic curve scalar multiplication. Regular algorithms such as the double-and-add-always and the Montgomery ladder are commonly used to protect the scalar multiplication from simple side-channel analysis. Combining such algorithms with scalar and/or point blinding countermeasures leads to scalar multiplications protected from all known attacks. Scalar randomization, which consists in adding a random multiple of the group order to the scalar value, is a popular countermeasure due to its efficiency. Amongst the several curves defined for usage in elliptic curve products, the most used are those standardized by NIST. As observed in several previous publications, the moduli, hence the orders, of these curves are sparse, primarily for efficiency reasons. In this paper, we take advantage of this specificity to present new attack paths which combine vertical and horizontal side-channel attacks to recover the entire secret scalar in state-of-the-art protected elliptic curve implementations.
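For context, the scalar-randomization countermeasure targeted here replaces the secret scalar $k$, for a fresh random $r$ chosen at every execution, by
$$k' = k + r \cdot \#E(\mathbb{F}_p),$$
so that $k'P = kP$ while the scalar bits processed by the regular algorithm change from run to run. The observation exploited in the paper is that for the standardized NIST curves the group order, like the modulus, is sparse, which leaves exploitable structure in the randomized scalar $k'$.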
Last updated:  2014-03-27
The Temperature Side Channel and Heating Fault Attacks
Michael Hutter, Jörn-Marc Schmidt
In this paper, we present practical results of data leakages of CMOS devices via the temperature side channel---a side channel that has been widely cited in literature but not well characterized yet. We investigate the leakage of processed data by passively measuring the dissipated heat of the devices. The temperature leakage is thereby linearly correlated with the power leakage model but is limited by the physical properties of thermal conductivity and capacitance. We further present heating faults by operating the devices beyond their specified temperature ratings. The efficiency of this kind of attack is shown by a practical attack on an RSA implementation. Finally, we introduce data remanence attacks on AVR microcontrollers that exploit the Negative Bias Temperature Instability (NBTI) property of internal SRAM cells. We show how to recover parts of the internal memory and present first results on an ATmega162. The work encourages the awareness of temperature-based attacks that are known for years now but not well described in literature. It also serves as a starting point for further research investigations.
Last updated:  2014-03-12
Practical Receipt-Free Sealed-Bid Auction in the Coercive Environment
Jaydeep Howlader, Sanjit Kumar Roy, Ashis Kumar Mal
Sealed-bid auctions are an efficient and rational method to establish the price in an open market. However, sealed-bid auctions are subject to bid-rigging attacks. Receipt-free mechanisms were proposed to prevent bid-rigging. The prior receipt-free mechanisms are based on two assumptions: first, the existence of an untappable channel between the bidders and the auction authorities; second, that the authorities are honest (not colluding). Moreover, the bandwidth required to communicate the receipt-free bids is huge. This paper presents a sealed-bid auction mechanism to resist bid-rigging. The proposed method does not assume an untappable channel, nor does it consider the authorities to be necessarily honest. The proposed mechanism also manages the bandwidth efficiently, and improves the performance of the system.
Last updated:  2014-03-12
A Second Look at Fischlin's Transformation
Özgür Dagdelen, Daniele Venturi
Fischlin’s transformation is an alternative to the standard Fiat-Shamir transform to turn a certain class of public key identification schemes into digital signatures (in the random oracle model). We show that signatures obtained via Fischlin’s transformation are existentially unforgeable even in case the adversary is allowed to get arbitrary (yet bounded) information on the entire state of the signer (including the signing key and the random coins used to generate signatures). A similar fact was already known for the Fiat-Shamir transform, however, Fischlin’s transformation allows for a significantly higher leakage parameter than Fiat-Shamir. Moreover, in contrast to signatures obtained via Fiat-Shamir, signatures obtained via Fischlin enjoy a tight reduction to the underlying hard problem. We use this observation to show (via simulations) that Fischlin’s transformation, usually considered less efficient, outperforms the Fiat-Shamir transform in verification time for a reasonable choice of parameters. In terms of signing Fiat-Shamir is faster for equal signature sizes. Nonetheless, our experiments show that the signing time of Fischlin’s transformation becomes, e.g., 22% of the one via Fiat-Shamir if one allows the signature size to be doubled.
Last updated:  2014-03-11
FFT-Based Key Recovery for the Integral Attack
Yosuke Todo
The integral attack is one of the most powerful attacks against block ciphers. In this paper, we propose two new techniques for the integral attack: the FFT technique and the key concealment technique. The FFT technique is useful for integral attacks with an enormous number of chosen plaintexts. In previous work using the FFT, Collard et al. showed a technique which reduces the complexity of the linear attack. In this paper, we review the result of Collard et al. to estimate the complexity in detail, and we show that the complexity can be estimated from the number of integer additions. Moreover, we show that attacks using the FFT can be applied to the integral attack. As applications, we show integral attacks against AES and CLEFIA. For AES, we show that 6-round AES can be attacked with about $2^{51.7}$ additions. For CLEFIA, we show that 12-round CLEFIA can be attacked with about $2^{86.9}$ additions.
Last updated:  2014-06-03
AES-Based Authenticated Encryption Modes in Parallel High-Performance Software
Andrey Bogdanov, Martin M. Lauridsen, Elmar Tischhauser
Authenticated encryption (AE) has recently gained renewed interest due to the ongoing CAESAR competition. This paper deals with the performance of block cipher modes of operation for AE in parallel software. We consider the example of the AES on Intel's new Haswell microarchitecture, which has improved instructions for AES and finite field multiplication. As opposed to most previous high-performance software implementations of operation modes -- which have considered the encryption of single messages -- we propose to process multiple messages in parallel. We demonstrate that this message scheduling is of significant advantage for most modes. As a baseline for longer messages, the performance of AES-CBC encryption on a single core increases by a factor of 6.8 when adopting this approach. For the first time, we report optimized AES-NI implementations of the novel AE modes OTR, CLOC, COBRA, SILC, McOE-G, POET and Julius -- both with single and multiple messages. For almost all AE modes considered, we obtain a consistent speed-up when processing multiple messages in parallel. Notably, among the nonce-based modes, CCM, CLOC and SILC become faster by a factor of 3.7, achieving a performance comparable to GCM (the latter, however, possessing classes of weak keys), with OCB3 still performing at only 0.77 cpb. Among the nonce-misuse resistant modes, McOE-G receives a speed-up of more than a factor of 4 with a performance of about 1.62 cpb, with COPA consistently performing best at 1.45 cpb.
Last updated:  2015-01-24
Oblivious Data Structures
Xiao Shaun Wang, Kartik Nayak, Chang Liu, T-H. Hubert Chan, Elaine Shi, Emil Stefanov, Yan Huang
Oblivious RAMs (ORAMs) have traditionally been measured by their bandwidth overhead and client storage. We observe that when using ORAMs to build secure computation protocols for RAM programs, the size of the ORAM circuits is more relevant to the performance. We therefore embark on a study of the circuit-complexity of several recently proposed ORAM constructions. Our careful implementation and experiments show that asymptotic analysis is not indicative of the true performance of ORAM in secure computation protocols with practical data sizes. We then present SCORAM, a heuristic compact ORAM design optimized for secure computation protocols. Our new design is almost 10x smaller in circuit size and also faster than all other designs we have tested for realistic settings (i.e., memory sizes between 4MB and 2GB, constrained by 2^{-80} failure probability). SCORAM makes it feasible to perform secure computations on gigabyte-sized data sets.
Last updated:  2014-06-03
SETUP in Secret Sharing Schemes using Random Values
Ruxandra F. Olimid
Secret sharing schemes divide a secret among multiple participants so that only authorized subsets of parties can reconstruct it. We show that a SETUP (Secretly Embedded Trapdoor with Universal Protection) attack can be embedded in secret sharing schemes that employ enough randomness, giving the attacker an overwhelming advantage in accessing the secret. In the case of ideal schemes, a coalition of a few participants (of which at least one is the attacker) can mount the attack successfully, while in the case of non-ideal schemes the attacker's knowledge alone can be enough to reveal the secret. We exemplify the attack against Shamir's threshold scheme, which is the most well-known and widely used secret sharing scheme. Finally, we consider some prevention techniques against the proposed attack.
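Since the attack is exemplified on Shamir's scheme, it helps to recall how much dealer-chosen randomness that scheme consumes: sharing with threshold $t$ requires $t-1$ uniformly random polynomial coefficients, and it is precisely such randomness that a SETUP attack can subvert. Below is a sketch of an ordinary, honest Shamir sharing (the attack itself is not reproduced here).

```python
# Honest Shamir (t, n) secret sharing over a prime field, to highlight the
# dealer-chosen randomness (coeffs) that a SETUP attack could subvert.
import secrets

PRIME = 2**127 - 1        # a Mersenne prime, large enough for this sketch

def share(secret, t, n):
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    secret = 0
    for i, (xi, yi) in enumerate(shares):   # Lagrange interpolation at x = 0
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = share(123456789, t=3, n=5)
assert reconstruct(shares[:3]) == 123456789
```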
Last updated:  2014-03-09
Impact of ANSI X9.24-1:2009 Key Check Value on ISO/IEC 9797-1:2011 MACs
Tetsu Iwata, Lei Wang
ANSI X9.24-1:2009 specifies the key check value, which is used to verify the integrity of the blockcipher key. This value is defined as the most significant bits of the ciphertext of the zero block, and is assumed to be publicly known data for verification. ISO/IEC 9797-1:2011 illustrates a total of ten CBC MACs, where one of these MACs, the basic CBC MAC, is widely known to be insecure. In this paper, we consider the remaining nine CBC MACs and derive the quantitative security impact of using the key check value. We first show attacks against five MACs by taking advantage of the knowledge of the key check value. We then prove that the analysis is tight, in a concrete security paradigm. For the remaining four MACs, we prove that the standard birthday bound still holds even with the presence of the key check value. As a result, we obtain a complete characterization of the impact of using ANSI X9.24-1 key check value with the ISO/IEC 9797-1 MACs.
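For readers unfamiliar with the check value: for an $n$-bit blockcipher $E$ with key $K$, the ANSI X9.24-1 key check value is
$$\mathrm{KCV} = \mathrm{msb}_t\left(E_K(0^n)\right),$$
i.e., the $t$ most significant bits of the encryption of the all-zero block, and it is treated as publicly known. The question studied in the paper is how much this extra public relation on $K$ helps an attacker against each of the ISO/IEC 9797-1 CBC-MAC variants.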
Last updated:  2014-08-12
Proving the TLS Handshake Secure (as it is)
Karthikeyan Bhargavan, Cédric Fournet, Markulf Kohlweiss, Alfredo Pironti, Pierre-Yves Strub, Santiago Zanella-Béguelin
The TLS Internet Standard features a mixed bag of cryptographic algorithms and constructions, letting clients and servers negotiate their use for each run of the handshake. Although many ciphersuites are now well-understood in isolation, their composition remains problematic, and yet it is critical to obtain practical security guarantees for TLS. We experimentally confirm that all mainstream implementations of TLS share key materials between different algorithms, some of them of dubious strength. We outline attacks in their handling of resumption and renegotiation, stressing the need to model multiple related instances of the handshake. We study the provable security of the TLS handshake, as it is implemented and deployed. To capture the details of the standard and its main extensions, we rely on miTLS, a verified reference implementation of the protocol. miTLS inter-operates with mainstream browsers and servers for many protocol versions, configurations, and ciphersuites; and it provides application-level, provable security for some. We propose new agile security definitions and assumptions for the signatures, key encapsulation mechanisms (KEM), and key derivation algorithms used by the TLS handshake. By necessity, our definitions are stronger than those expected with simple modern protocols. To validate our model of key encapsulation, we prove that both RSA and Diffie-Hellman ciphersuites satisfy our definition for the KEM. In particular, we formalize the use of PKCS#1v1.5 encryption in TLS, including recommended countermeasures against Bleichenbacher attacks, and build a 3,000-line EasyCrypt proof of the security of the resulting master secret KEM against replayable chosen-ciphertext attacks under the assumption that ciphertexts are hard to re-randomize. Based on our new agile definitions, we construct a modular proof of security for the miTLS reference implementation of the handshake, including ciphersuite negotiation, key exchange, renegotiation, and resumption, treated as a detailed 3,600-line executable model. We present our main definitions, constructions, and proofs for an abstract model of the protocol, featuring series of related runs of the handshake with different ciphersuites. We also describe its refinement to account for the whole reference implementation, based on automated verification tools.
Last updated:  2014-03-07
A Framework and Compact Constructions for Non-monotonic Attribute-Based Encryption
Shota Yamada, Nuttapong Attrapadung, Goichiro Hanaoka, Noboru Kunihiro
In this paper, we propose new non-monotonic attribute-based encryption schemes with compact parameters. The first three schemes are key-policy attribute-based encryption (KP-ABE) schemes and the fourth is a ciphertext-policy attribute-based encryption (CP-ABE) scheme. (1) Our first scheme has very compact ciphertexts. The ciphertext overhead consists of only two group elements, which is the shortest in the literature. Compared to the scheme by Attrapadung et al. (PKC 2011), which is the best previous scheme in terms of ciphertext overhead, our scheme shortens the ciphertext overhead by $33\%$. The scheme also reduces the size of the master public key to about half. (2) Our second scheme is proven secure under the decisional bilinear Diffie-Hellman (DBDH) assumption, one of the most standard assumptions in bilinear groups. Compared to the non-monotonic KP-ABE scheme from the same assumption by Ostrovsky et al. (ACM-CCS '07), our scheme achieves more compact parameters: the master public key and the ciphertext are about half the size of theirs. (3) Our third scheme is the first non-monotonic KP-ABE scheme that can deal with attribute sets and access policies of unbounded size. That is, there is no restriction on the size of attribute sets or on the number of allowed repetitions of the same attribute in an access policy. The master public key of our scheme is very compact: it consists of only a constant number of group elements. (4) Our fourth scheme is the first non-monotonic CP-ABE scheme that can deal with attribute sets and access policies of unbounded size. The master public key of the scheme consists of only a constant number of group elements. We construct our KP-ABE schemes in a modular manner. We first introduce a special type of predicate encryption that we call two-mode identity-based broadcast encryption (TIBBE). Then, we show that any TIBBE scheme satisfying a certain condition can be generically converted into a non-monotonic KP-ABE scheme. Finally, we construct efficient TIBBE schemes and apply this conversion to obtain the new non-monotonic KP-ABE schemes above.
Last updated:  2019-01-21
Improving throughput of RC4 algorithm using multithreading techniques in multicore processors
T. D. B Weerasinghe
RC4 is the most widely used stream cipher around, so it is important that it runs cost-effectively, with minimum encryption time; in other words, it should give higher throughput. In this paper, a mechanism is proposed to improve the throughput of the RC4 algorithm on multicore processors using multithreading. The proposed mechanism does not parallelize RC4; instead, it introduces a way that multithreading can be used in encryption when the plaintext is in the form of a text file. In this research, the source code was written in Java (JDK version: 1.6.0_21) in Windows environments. Experiments to analyze the throughput were done separately on an Intel® P4 machine (O/S: Windows XP), a Core 2 Duo machine (O/S: Windows XP) and a Core i3 machine (O/S: Windows 7). Outcome of the research: higher throughput of the RC4 algorithm can be achieved on multicore processors when using the proposed mechanism. Effective use of multithreading in encryption can be achieved on multicore processors using this technique.
Last updated:  2014-03-06
Optimal constructions for ID-based one-way-function key predistribution schemes realizing specified communication graphs
Maura B. Paterson, Douglas R. Stinson
We study a method for key predistribution in a network of $n$ users where pairwise keys are computed by hashing users' IDs along with secret information that has been (pre)distributed to the network users by a trusted entity. A communication graph $G$ can be specified to indicate which pairs of users should be able to compute keys. We determine necessary and sufficient conditions for schemes of this type to be secure. We also consider the problem of minimizing the storage requirements of such a scheme; we are interested in the total storage as well as the maximum storage required by any user. Minimizing the total storage is NP-hard, whereas minimizing the maximum storage required by a user can be computed in polynomial time.
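As a concrete illustration of the kind of scheme studied here, the minimal Python sketch below (a simplified instantiation for illustration, not the paper's construction; the function name and the use of SHA-256 are assumptions) derives a pairwise key by hashing a predistributed secret together with the two users' IDs, so that both endpoints compute the same key.

```python
import hashlib

def pairwise_key(predistributed_secret: bytes, id_a: str, id_b: str) -> bytes:
    # Order the IDs so that both endpoints derive the same key.
    first, second = sorted([id_a, id_b])
    data = predistributed_secret + first.encode() + second.encode()
    return hashlib.sha256(data).digest()

# Two users holding the same predistributed secret compute identical keys.
assert pairwise_key(b"secret", "alice", "bob") == pairwise_key(b"secret", "bob", "alice")
```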
Last updated:  2014-03-06
Verifiable Delegated Set Intersection Operations on Outsourced Encrypted Data
Qingji Zheng, Shouhuai Xu
We initiate the study of the following problem: Suppose Alice and Bob would like to outsource their encrypted private data sets to the cloud, and they also want to conduct the set intersection operation on their plaintext data sets. The straightforward solution for them is to download their outsourced ciphertexts, decrypt the ciphertexts locally, and then execute a commodity two-party set intersection protocol. Unfortunately, this solution is not practical. We therefore motivate and introduce the novel notion of {\em Verifiable Delegated Set Intersection on outsourced encrypted data} (VDSI). The basic idea is to delegate the set intersection operation to the cloud, while (i) not giving the decryption capability to the cloud, and (ii) being able to hold the misbehaving cloud accountable. We formalize security properties of VDSI and present a construction. In our solution, the computational and communication costs on the users are linear to the size of the intersection set, meaning that the efficiency is optimal up to a constant factor.
Last updated:  2014-03-06
Pragmatism vs. Elegance: comparing two approaches to Simple Power Attacks on AES
Valentina Banciu, Elisabeth Oswald
Simple side-channel attacks trade off data complexity (i.e. the number of side-channel observations needed for a successful attack) with computational complexity (i.e. the number of operations applied to the side-channel traces). In the specific example of Simple Power Analysis (SPA) attacks on the Advanced Encryption Standard (AES), two approaches can be found in the literature: a pragmatic approach that involves basic techniques such as efficient enumeration of key candidates, and a seemingly more elegant one that uses algebraic techniques. These two techniques have been used in complementary settings: the pragmatic attacks were applied solely to the key schedule, whereas the more elegant methods were applied only to the encryption rounds. In this article, we investigate how these methods compare in what we consider to be a more practical setting, in which adversaries gain access to erroneous information about both the key schedule and the encryption rounds. We conclude that the pragmatic enumeration technique copes better with erroneous information, which makes it more interesting in practice.
Last updated:  2014-03-06
One-Round Witness Indistinguishability from Indistinguishability Obfuscation
Qihua Niu, Hongda Li, Bei Liang, Fei Tang
In this work, we explore the connection between witness indistinguishability (WI) and indistinguishability obfuscation (iO). We construct a one-round witness indistinguishable protocol for all of NP based on the existence of indistinguishability obfuscators (the first candidate construction of an indistinguishability obfuscator was recently put forward by Garg et al. in 2013). Based on our one-round WI, we also construct a two-round oblivious transfer (OT) protocol, and by a slight modification of our OT protocol, we get a non-interactive bit commitment scheme.
Last updated:  2014-03-05
Secrecy and Performance Analysis of Symmetric Key Encryption Algorithms
T. D. B Weerasinghe
The open literature lacks a focus on Shannon's secrecy of ciphers as a security measure for symmetric key encryption; hence, in this research, Shannon's theories on the secrecy of ciphers were used to calculate the average secrecy of each symmetric cipher studied. All secrecy and performance analyses were done using a newly created tool, and the analysis is based on the secrecy level and performance of each algorithm. This paper presents an analysis of some widely used symmetric key algorithms from the categories of block and stream ciphers, together with two combined algorithms [DES, TripleDES, AES, RC2, RC4, Hybrid1 (TripleDES+RC4) and Hybrid2 (AES+RC4) are used]. The analysis pivots on two measurement criteria under two circumstances, which are described later in this paper. All the algorithms are implemented in Core Java using classes available in the javax.crypto package; separate classes are written to calculate the secrecy of ciphers and the encryption time, and the tool is created using Core Java with the NetBeans IDE. As far as the outcome of the research is concerned, the performance of all stream ciphers is higher than that of block ciphers, and the combined algorithms have a performance level similar to block ciphers. Secrecy levels of block ciphers are comparatively higher than those of stream ciphers, as history suggests, and this is further supported by Shannon's theories in this research. The combined algorithms have more stable secrecy levels.
Last updated:  2014-03-05
Analysis of a Modified RC4 Algorithm
T. D. B Weerasinghe
In this paper, an analysis of a simply modified RC4 algorithm is presented. RC4 is the most widely used stream cipher, but it is not considered strong in terms of security. Many alternatives have been proposed to improve RC4 key generation and pseudo-random number generation; the idea behind this work is to try out a simple modification of RC4's PRGA, whose output rule becomes: Output = M XOR GeneratedKey XOR j. After the modification, the modified algorithm is tested for its secrecy and performance over variable key lengths and compared with the original RC4. The results show that the modified algorithm is better than the original RC4 in terms of both secrecy and performance.
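The modified output rule quoted above is simple enough to state in code; the Python sketch below renders it with the standard RC4 KSA and PRGA (an illustrative rendering of the stated rule, not code from the paper, which used a different environment).

```python
def modified_rc4_encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Standard RC4 key-scheduling algorithm (KSA).
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # PRGA with the modified output rule: Output = M XOR GeneratedKey XOR j.
    out = bytearray()
    i = j = 0
    for m in plaintext:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        generated_key = S[(S[i] + S[j]) % 256]
        out.append(m ^ generated_key ^ j)
    return bytes(out)
```

Since the same keystream bytes and the same sequence of j values are produced on both sides, applying the function twice with the same key recovers the plaintext.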
Last updated:  2014-03-04
Continuous Non-malleable Codes
Sebastian Faust, Pratyay Mukherjee, Jesper Buus Nielsen, Daniele Venturi
Non-malleable codes are a natural relaxation of error correcting/detecting codes that have useful applications in the context of tamper resilient cryptography. Informally, a code is non-malleable if an adversary trying to tamper with an encoding of a given message can only leave it unchanged or modify it to the encoding of a completely unrelated value. This paper introduces an extension of the standard non-malleability security notion – so-called continuous non-malleability – where we allow the adversary to tamper continuously with an encoding. This is in contrast to the standard notion of non-malleable codes, where the adversary is only allowed to tamper a single time with an encoding. We show how to construct continuous non-malleable codes in the common split-state model, where an encoding consists of two parts and the tampering can be arbitrary but has to be applied independently to the two parts. Our main contributions are outlined below. 1. We propose a new uniqueness requirement of split-state codes which states that it is computationally hard to find two codewords C = (X_0, X_1) and C' = (X_0, X_1') such that both codewords are valid but X_0 is the same in both C and C'. A simple attack shows that uniqueness is necessary to achieve continuous non-malleability in the split-state model. Moreover, we illustrate that none of the existing constructions satisfies our uniqueness property and hence none is secure in the continuous setting. 2. We construct a split-state code satisfying continuous non-malleability. Our scheme is based on the inner-product function, collision-resistant hashing and non-interactive zero-knowledge proofs of knowledge, and requires an untamperable common reference string. 3. We apply continuous non-malleable codes to protect arbitrary cryptographic primitives against tampering attacks. Previous applications of non-malleable codes in this setting required the entire memory to be perfectly erased after each execution and required the adversary to be restricted in memory. We show that continuous non-malleable codes avoid these restrictions.
Last updated:  2014-03-05
A novel PUF Scheme
Jeroen Delvaux
We present a novel PUF-based scheme.
Last updated:  2014-03-04
An Effective RC4 Stream Cipher
T. D. B Weerasinghe
RC4 is the most widely used stream cipher around, and many modifications of the RC4 cipher can be found in the open literature. Most of them enhance the secrecy of the cipher, and their security levels have been analyzed theoretically using mathematics. In this paper, a new effective RC4 cipher is proposed, and its security analysis is done using Shannon's secrecy theories, where numerical values are obtained to depict the secrecy. The proposed cipher is a combination of the Improved RC4 cipher proposed by Jian Xie et al. and the modified RC4 cipher proposed by T. D. B. Weerasinghe, both published prior to this work. The combination is done in such a way that the concept used in the modified RC4 algorithm is applied within the Improved RC4 cipher of Jian Xie et al. Importantly, an immense improvement in performance and secrecy is obtained by this combination. Hence this particular modification of the RC4 cipher can be used in software applications where there is a need to improve throughput as well as secrecy.
Last updated:  2014-03-04
Parallelized hashing via j-lanes and j-pointers tree modes, with applications to SHA-256
Shay Gueron
The j-lanes tree hashing is a tree mode that splits an input message into j slices, computes j independent digests (one per slice), and outputs the hash value of their concatenation. The j-pointers tree hashing is a similar tree mode that receives, as input, j pointers to j messages (or slices of a single message), computes their digests and outputs the hash value of their concatenation. Such modes add parallelization capabilities to a hashing process that is serial by nature. As a result, they have a performance advantage on modern processor architectures. This paper provides precise specifications for these hashing modes, proposes a setup for defining appropriate IVs, and demonstrates their performance on the latest processors. Our hope is that this will be useful for standardization of these modes.
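The Python sketch below conveys the j-lanes idea in its simplest form (contiguous slices and default SHA-256 IVs; this is an illustrative simplification, as the paper's specification fixes the slicing and IV definitions precisely).

```python
import hashlib

def j_lanes_hash(message: bytes, j: int = 4) -> bytes:
    # Split the message into j slices, hash each slice independently
    # (the j hashes can run in parallel), then hash the concatenation
    # of the j digests. Simplified sketch; not the paper's exact mode.
    slice_len = -(-len(message) // j)  # ceiling division
    slices = [message[k * slice_len:(k + 1) * slice_len] for k in range(j)]
    digests = b"".join(hashlib.sha256(s).digest() for s in slices)
    return hashlib.sha256(digests).digest()
```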
Last updated:  2014-03-04
Encryption Quality Analysis of the RCBC Block Cipher Compared with RC6 and RC5 Algorithms
Abdul Hamid M. Ragab, Osama S. Farag Alla, Amin Y. Noaman
In this paper, we investigate the encryption quality of the robust chaotic block cipher (RCBC) algorithm, which is based on a chaotic map. In addition to visual inspection in image encryption testing, five analytical metrics are developed for analyzing the encryption quality. These metrics evaluate several factors of the encrypted images, including maximum deviation, irregular deviation, information entropy, correlation coefficients, and avalanche effect. A comparison of the encryption quality of RCBC, RC6 and RC5 implementations applied to digital images is performed. In the experiments, we run our tests using the color images Lena, Cman, and Peppers, each of size 512x512 pixels, as the original images (plain-images). The results show the better quality of RCBC.
Last updated:  2014-03-03
Privacy Failures in Encrypted Messaging Services: Apple iMessage and Beyond
Scott Coull, Kevin Dyer
Instant messaging services are quickly becoming the most dominant form of communication among consumers around the world. Apple iMessage, for example, handles over 2 billion messages each day, while WhatsApp claims 16 billion messages from 400 million international users. To protect user privacy, these services typically implement end-to-end and transport layer encryption, which are meant to make eavesdropping infeasible even for the service providers themselves. In this paper, however, we show that it is possible for an eavesdropper to learn information about user actions, the language of messages, and even the length of those messages with greater than 96% accuracy despite the use of state-of-the-art encryption technologies, simply by observing the sizes of encrypted packets. While our evaluation focuses on Apple iMessage, the attacks are completely generic and we show how they can be applied to many popular messaging services, including WhatsApp, Viber, and Telegram.
Last updated:  2014-06-07
How to Eat Your Entropy and Have it Too -- Optimal Recovery Strategies for Compromised RNGs
Yevgeniy Dodis, Adi Shamir, Noah Stephens-Davidowitz, Daniel Wichs
Random number generators (RNGs) play a crucial role in many cryptographic schemes and protocols, but their security proof usually assumes that their internal state is initialized with truly random seeds and remains secret at all times. However, in many practical situations these are unrealistic assumptions: The seed is often gathered after a reset/reboot from low entropy external events such as the timing of manual key presses, and the state can be compromised at unknown points in time via side channels or penetration attacks. The usual remedy (used by all the major operating systems, including Windows, Linux, FreeBSD, MacOS, iOS, etc.) is to periodically replenish the internal state through an auxiliary input with additional randomness harvested from the environment. However, recovering from such attacks in a provably correct and computationally optimal way had remained an unsolved challenge so far. In this paper we formalize the problem of designing an efficient recovery mechanism from state compromise, by considering it as an online optimization problem. If we knew the timing of the last compromise and the amount of entropy gathered since then, we could stop producing any outputs until the state becomes truly random again. However, our challenge is to recover within a time proportional to this optimal solution even in the hardest (and most realistic) case in which (a) we know nothing about the timing of the last state compromise, and the amount of new entropy injected since then into the state, and (b) any premature production of outputs leads to the total loss of all the added entropy {\em used by the RNG}, since the attacker can use brute force to enumerate all the possible low-entropy states. In other words, the challenge is to develop recovery mechanisms which are guaranteed to save the day as quickly as possible after a compromise we are not even aware of. The dilemma that we face is that any entropy used prematurely will be lost, and any entropy which is kept unused will delay the recovery. After developing our formal definitional framework for RNGs with inputs, we show how to construct a nearly optimal RNG which is secure in our model. Our technique is inspired by the design of the Fortuna RNG (which is a heuristic RNG construction that is currently used by Windows and comes without any formal analysis), but we non-trivially adapt it to our much stronger adversarial setting. Along the way, our formal treatment of Fortuna enables us to improve its entropy efficiency by almost a factor of two, and to show that our improved construction is essentially tight, by proving a rigorous lower bound on the possible efficiency of any recovery mechanism in our very general model of the problem.
Last updated:  2014-03-03
Tuple decoders for traitor tracing schemes
Jan-Jaap Oosterwijk, Jeroen Doumen, Thijs Laarhoven
In the field of collusion-resistant traitor tracing, Oosterwijk et al. recently determined the optimal suspicion function for simple decoders. Earlier, Moulin also considered another type of decoder: the generic joint decoder that compares all possible coalitions, and showed that usually the generic joint decoder outperforms the simple decoder. Both Amiri and Tardos, and Meerwald and Furon described constructions that assign suspicion levels to $c$-tuples, where $c$ is the number of colluders. We investigate a novel idea: the tuple decoder, assigning a suspicion level to tuples of a fixed size. In contrast to earlier work, we use this in a novel accusation algorithm to decide for each distinct user whether or not to accuse him. We expect such a scheme to outperform simple decoders while not being as computationally intensive as the generic joint decoder. In this paper we generalize the optimal suspicion functions to tuples, and describe a family of accusation algorithms in this setting that accuses individual users using this tuple-based information.
Last updated:  2014-03-04
A NEW SCALAR POINT MULTIPLICATION SCHEME IN ECC BASED ON ZECKENDORF REPRESENTATION AND MULTIBASE CONCEPT
Kirti Chawla, Om Pal Yadav
With the fast development of cryptography research and computer technology, the RSA and Diffie-Hellman cryptosystems are becoming less and less safe, and elliptic curve cryptosystems are becoming the trend of public-key cryptography for the future. Scalar point multiplication is the most time-consuming operation in elliptic-curve-based cryptosystems. In this paper, Nicolas Meloni's algorithm (Springer, 2012) for the addition of points on an elliptic curve is used along with the multibase concept to improve the speed of scalar multiplication. A comparative analysis of the proposed approach and some previous approaches is also given at the end.
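Since the scheme builds on the Zeckendorf representation, the Python sketch below shows that ingredient in isolation (an illustrative helper, not the paper's scalar multiplication algorithm): the greedy decomposition of a scalar into non-consecutive Fibonacci numbers, which can then drive a Fibonacci/multibase point-multiplication chain.

```python
def zeckendorf(n):
    # Greedy Zeckendorf decomposition: write n as a sum of
    # non-consecutive Fibonacci numbers, taking the largest term first.
    fibs = [1, 2]
    while fibs[-1] <= n:
        fibs.append(fibs[-1] + fibs[-2])
    terms = []
    for f in reversed(fibs):
        if f <= n:
            terms.append(f)
            n -= f
    return terms

assert zeckendorf(100) == [89, 8, 3]  # 100 = 89 + 8 + 3
```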
Last updated:  2014-03-24
Generalized proper matrices and constructing of $m$-resilient Boolean functions with maximal nonlinearity for expanded range of parameters
Yuriy Tarannikov
Nonlinearity and resiliency are well known as two of the most important cryptographic parameters of Boolean functions, so the problem of constructing functions that have high nonlinearity and resiliency simultaneously is of real interest. In 2000, three groups of authors independently obtained the upper bound $2^{n-1}-2^{m+1}$ for the nonlinearity of an $m$-resilient function of $n$ variables. It was shown that if this bound is achieved then $(n-3)/2\le m\le n-2$. Also in 2000, Tarannikov constructed functions that achieve this bound for $(2n-7)/3\le m\le n-2$. In 2001 Tarannikov constructed such functions for $0.6n-1\le m$, introducing for this aim so-called proper matrices; later in 2001, Fedorova and Tarannikov constructed, by means of proper matrices, functions that achieve the bound $2^{n-1}-2^{m+1}$ for $m\ge cn(1+o(1))$ where $c=1/\log_2(\sqrt{5}+1)=0.5902...$, but proved at the same time that this result cannot be improved by means of proper matrices. Since 2001 there has been no further progress on the achievability of the bound $2^{n-1}-2^{m+1}$, although the problem is well known and topical, except for the construction in 2006--2007, by three groups of authors using a computer search, of concrete functions for $n=9$, $m=3$. In this paper we find a new approach that uses a generalization of the concept of proper matrices. We formulate combinatorial problems whose solutions allow us to construct generalized proper matrices with parameters impossible for the old proper matrices. As a result we obtain constructions of $m$-resilient functions of $n$ variables with maximal nonlinearity for $m\ge cn(1+o(1))$ where $c=0.5789...$, and we also demonstrate how further advances on the combinatorial problems yield an additional decrease of the constant $c$.
Last updated:  2014-03-04
Improved Secure Implementation of Code-Based Signature Schemes on Embedded Devices
Arnaud Dambra, Philippe Gaborit, Mylène Roussellet, Julien Schrek, Nicolas Tafforeau
Amongst areas of cryptographic research, there has recently been a widening interest in code-based cryptosystems and their implementations. Besides the {\it a priori} resistance to quantum computer attacks, they represent a real alternative to the currently used cryptographic schemes. In this paper we consider the implementation of the Stern authentication scheme and one recent variation of this scheme by Aguilar {\it et al.}. These two schemes allow public authentication and public signature with public and private keys of only a few hundred bits. The contributions of this paper are twofold: first, we describe how to implement a code-based signature in a constrained device through the Fiat-Shamir paradigm, in particular we show how to deal with long signatures. Second, we implement and explain new improvements for code-based zero-knowledge signature schemes. We describe implementations for these signature and authentication schemes, secured against side channel attacks, which drastically improve the previous implementation presented at Cardis 2008 by Cayrel {\it et al.}. We obtain a reduction of the running time by a factor of 3 and of the signature length by a factor of about 2. We also provide an extensive comparison with RSA signatures.
Last updated:  2014-03-03
TOWARD CERTIFICATELESS SIGNCRYPTION SCHEME WITHOUT RANDOM ORACLES
Hu Xiong
Signcryption is a useful paradigm which simultaneously offers both the functions of encryption and signature in a single logic step. It would be interesting to make signcryption certificateless to ease the heavy burden of certificate management in traditional public key cryptography (PKC) and solve the key escrow problem in Identity-based public key cryptography (ID-PKC). Most certificateless signcryption (CL-SC) schemes are constructed in the random oracle model instead of the standard model. By exploiting Bellare and Shoup's one-time signature, Hwang et al.'s certificateless encryption and Li et al.'s identity-based signcryption, this paper proposes a new CL-SC scheme secure in the standard model. It is proven that our CL-SC scheme satisfies semantic security and unforgeability against the outside adversary and malicious-but-passive key generation center (KGC) assuming the hardness of bilinear decision Diffie-Hellman (BDDH) and computational Diffie-Hellman (CDH) problems. Our security proofs do not depend on random oracles.
Last updated:  2014-06-02
``Ooh Aah... Just a Little Bit'' : A small amount of side channel can go a long way
Naomi Benger, Joop van de Pol, Nigel P. Smart, Yuval Yarom
We apply the Flush-Reload side-channel attack based on cache hits/misses to extract a small amount of data from OpenSSL ECDSA signature requests. We then apply a ``standard'' lattice technique to extract the private key, but unlike previous attacks we are able to make use of the side-channel information from almost all of the observed executions. This means we obtain private key recovery by observing a relatively small number of executions, and by expending a relatively small amount of post-processing via lattice reduction. We demonstrate our analysis via experiments using the curve secp256k1 used in the Bitcoin protocol. In particular we show that with as few as 200 signatures we are able to achieve a reasonable level of success in recovering the secret key for a 256-bit curve. This is significantly better than prior methods of applying lattice reduction techniques to similar side channel information.
Last updated:  2014-08-11
TrueSet: Faster Verifiable Set Computations
Ahmed E. Kosba, Dimitrios Papadopoulos, Charalampos Papamanthou, Mahmoud F. Sayed, Elaine Shi, Nikos Triandopoulos
Verifiable computation (VC) enables thin clients to efficiently verify the computational results produced by a powerful server. Although VC was initially considered to be mainly of theoretical interest, over the last two years, impressive progress has been made on implementing VC. Specifically, we now have open-source implementations of VC systems that can handle all classes of computations expressed either as circuits or in the RAM model. However, despite this very encouraging progress, new enhancements in the design and implementation of VC protocols are required in order to achieve truly practical VC for real-world applications. In this work, we show that for functionalities that can be expressed efficiently in terms of set operations (e.g., a subset of SQL queries) VC can be enhanced to become drastically more practical: we present the design and prototype implementation of a novel VC scheme that achieves orders of magnitude speed-up in comparison with the state of the art. Specifically, we build and evaluate TRUESET, a system that can verifiably compute any polynomial-time function expressed as a circuit consisting of ``set gates'' such as union, intersection, difference and set cardinality. Moreover, TRUESET supports hybrid circuits consisting of both set gates and traditional arithmetic gates. Therefore, it does not lose any of the expressiveness of the previous schemes; this also allows the user to choose the most efficient way to represent different parts of a computation. By expressing set computations as polynomial operations and introducing a novel Quadratic Polynomial Program technique, TRUESET achieves prover performance speed-up ranging from 30x to 150x and yields up to 97% evaluation key size reduction.
Last updated:  2014-03-04
Weak-Key Leakage Resilient Cryptography
Zuoxia Yu, Qiuliang Xu, Yongbin Zhou, Chengyu Hu, Rupeng Yang, Guangjun Fan
In traditional cryptography, the standard way of examining the security of a scheme is to analyze it in a black-box manner, capturing no side-channel attacks, which exploit various forms of unintended information leakage and do threaten the practical security of the scheme. One way to protect against the aforementioned attacks is to extend the traditional models so as to capture them. Early models rely on the assumption that only computation leaks information, and are incapable of capturing memory attacks such as cold boot attacks. Thus, Akavia et al. (TCC '09) formalized the general model of key-leakage attacks to cover them. However, most key-leakage attacks in reality tend to be weak key-leakage attacks, which can be viewed as a nonadaptive version of key-leakage attacks. Powerful as the adaptive model may be, the existing constructions of cryptographic schemes in the adaptive key-leakage attack model still have drawbacks: they are quite inefficient, or they can only tolerate a small amount of leakage. Therefore, we mainly consider models that cover weak key-leakage attacks and the corresponding constructions in them. We extend the transformation paradigm presented by Naor and Segev that transforms any chosen-plaintext secure public-key encryption (PKE) scheme into a chosen-plaintext weak key-leakage secure PKE scheme. Our extensions are two-fold. Firstly, we extend the paradigm to chosen-ciphertext attack scenarios and prove that its properties still hold in these scenarios; we also give an instantiation based on the DDH assumption in this setting. Additionally, we extend the paradigm to cover more side-channel attacks by considering different types of leakage functions. We further consider attacks under which the secret key still has enough min-entropy after the leakage, and prove that the original paradigm is still applicable in this case with chosen-ciphertext attacks. Attacks under which the secret key remains computationally infeasible to recover given the leakage information are taken into consideration as well. Finally, we formalize the informal discussion by Naor and Segev (CRYPTO '09) on how to adapt the original paradigm to these new models.
Last updated:  2014-03-03
Point compression for the trace zero subgroup over a small degree extension field
Elisa Gorla, Maike Massierer
Using Semaev's summation polynomials, we derive a new equation for the $\mathbb{F}_q$-rational points of the trace zero variety of an elliptic curve defined over $\mathbb{F}_q$. Using this equation, we produce an optimal-size representation for such points. Our representation is compatible with scalar multiplication. We give a point compression algorithm to compute the representation and a decompression algorithm to recover the original point (up to some small ambiguity). The algorithms are efficient for trace zero varieties coming from small degree extension fields. We give explicit equations and discuss in detail the practically relevant cases of cubic and quintic field extensions.
Last updated:  2014-03-01
CLOC: Authenticated Encryption for Short Input
Tetsu Iwata, Kazuhiko Minematsu, Jian Guo, Sumio Morioka
We define and analyze the security of a blockcipher mode of operation, CLOC, for provably secure authenticated encryption with associated data. The design of CLOC aims at optimizing previous schemes, CCM, EAX, and EAX-prime, in terms of the implementation overhead beyond the blockcipher, the precomputation complexity, and the memory requirement. With these features, CLOC is suitable for handling short input data, say 16 bytes, without needing precomputation or large memory. This property is especially beneficial to small microprocessors, where the word size is typically 8 bits or 16 bits, and there are significant restrictions in the size and the number of registers. CLOC uses a variant of CFB mode in its encryption part and a variant of CBC MAC in the authentication part. We introduce various design techniques in order to achieve the above-mentioned design goals. We prove CLOC secure, in a reduction-based provable security paradigm, under the assumption that the blockcipher is a pseudorandom permutation. We also present our preliminary implementation results.
Last updated:  2015-09-22
Non-Malleable Extractors with Shorter Seeds and Their Applications
Yanqing Yao, Zhoujun Li
Motivated by the problem of how to communicate over a public channel with an active adversary, Dodis and Wichs (STOC'09) introduced the notion of a non-malleable extractor. A non-malleable extractor nmExt : {0, 1}^n ×{0, 1}^d \rightarrow {0, 1}^m takes two inputs, a weakly random W and a uniformly random seed S, and outputs a string which is nearly uniform, given S as well as nmExt(W,A(S)), for an arbitrary function A with A(S) \neq S. In this paper, by developing the combination and permutation techniques, we improve the error estimation of the extractor of Raz (STOC'05), which plays an extremely important role in the constraints on the non-malleable extractor parameters, including the seed length. Then we present an improved explicit construction of non-malleable extractors. Though our construction is the same as that given by Cohen, Raz and Segev (CCC'12), the parameters are improved. More precisely, we construct an explicit (1016, 1/2)-non-malleable extractor nmExt : {0, 1}^n ×{0, 1}^d \rightarrow {0, 1} with n = 2^{10} and seed length d = 19, while Cohen et al. showed that the seed length is no less than 46/63 +66. Therefore, our method beats the condition “2.01 · log n \leq d \leq n” proposed by Cohen et al., since d is just 1.9 · log n in our construction. We also improve the parameters of the general explicit construction given by Cohen et al. Finally, we give their applications to privacy amplification.
Last updated:  2014-03-01
Honey Encryption: Security Beyond the Brute-Force Bound
Ari Juels, Thomas Ristenpart
We introduce {\em honey encryption} (HE), a simple, general approach to encrypting messages using low min-entropy keys such as passwords. HE is designed to produce a ciphertext which, when decrypted with any of a number of {\em incorrect} keys, yields plausible-looking but bogus plaintexts called {\em honey messages}. A key benefit of HE is that it provides security in cases where too little entropy is available to withstand brute-force attacks that try every key; in this sense, HE provides security beyond conventional brute-force bounds. HE can also provide a hedge against partial disclosure of high min-entropy keys. HE significantly improves security in a number of practical settings. To showcase this improvement, we build concrete HE schemes for password-based encryption of RSA secret keys and credit card numbers. The key challenges are development of appropriate instances of a new type of randomized message encoding scheme called a {\em distribution-transforming encoder} (DTE), and analyses of the expected maximum loading of bins in various kinds of balls-and-bins games.
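To illustrate the DTE idea on a toy scale, the Python sketch below uses a uniform four-message space and a one-byte seed (an illustrative construction of our own, far simpler than the paper's DTEs for skewed distributions such as credit card numbers): decryption under a wrong key decodes to some plausible message rather than obvious garbage.

```python
import hashlib, secrets

MESSAGES = ["1234", "0000", "1111", "2580"]  # toy 4-message space (hypothetical)

def dte_encode(msg: str) -> int:
    # Map a message to a uniformly random seed inside its bucket of 64 values.
    return MESSAGES.index(msg) * 64 + secrets.randbelow(64)

def dte_decode(seed: int) -> str:
    # Every possible one-byte seed decodes to some plausible message.
    return MESSAGES[seed // 64]

def he_encrypt(password: str, msg: str) -> int:
    pad = hashlib.sha256(password.encode()).digest()[0]
    return dte_encode(msg) ^ pad

def he_decrypt(password: str, ciphertext: int) -> str:
    pad = hashlib.sha256(password.encode()).digest()[0]
    return dte_decode(ciphertext ^ pad)

assert he_decrypt("right horse", he_encrypt("right horse", "1234")) == "1234"
# he_decrypt("wrong guess", ...) still returns one of MESSAGES, so a
# brute-force attacker cannot tell correct from incorrect guesses.
```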
Last updated:  2014-07-15
Non-Interactive Cryptography in the RAM Model of Computation
Daniel Apon, Xiong Fan, Jonathan Katz, Feng-Hao Liu, Elaine Shi, Hong-Sheng Zhou
Using recently developed techniques for program obfuscation, we show several constructions of non-interactive cryptosystems in the random-access machine (RAM) model of computation that are asymptotically more efficient than what would be obtained using generic RAM-to-circuit compilation. In particular, let $T$ denote the running time and $n$ the memory size of a RAM program. We show that using differing-inputs obfuscation, functional encryption for arbitrary RAM programs can be achieved with evaluation time $\tilde{O}(T+n)$. Additionally, we provide a number of RAM-model constructions assuming the stronger notion of virtual black-box (VBB) obfuscation. We view these as initial feasibility results and leave instantiating similar protocols from weaker assumptions for future work. Specifically, using VBB obfuscation we show how to construct RAM-model functional encryption with function privacy, fully homomorphic encryption, and stateful, privacy-preserving verifiable computation in the memory-delegation model.
Last updated:  2014-03-02
Verifiable Oblivious Storage
Daniel Apon, Jonathan Katz, Elaine Shi, Aishwarya Thiruvengadam
We formalize the notion of Verifiable Oblivious Storage (VOS), where a client outsources the storage of data to a server while ensuring data confidentiality, access pattern privacy, and integrity and freshness of data accesses. VOS generalizes the notion of Oblivious RAM (ORAM) in that it allows the server to perform computation, and also explicitly considers data integrity and freshness. We show that allowing server-side computation enables us to construct asymptotically more efficient VOS schemes whose bandwidth overhead cannot be matched by any ORAM scheme, due to a known lower bound by Goldreich and Ostrovsky. Specifically, for large block sizes we can construct a VOS scheme with constant bandwidth per query; further, answering queries requires only poly-logarithmic server computation. We describe applications of VOS to Dynamic Proofs of Retrievability, and RAM-model secure multi-party computation.
Last updated:  2014-03-01
A Statistics-based Fundamental Model for Side-channel Attack Analysis
Yunsi Fei, A. Adam Ding, Jian Lao, Liwei Zhang
Side-channel attacks (SCAs) exploit leakage from the physical implementation of cryptographic algorithms to recover the otherwise secret information. In the last decade, popular SCAs like differential power analysis (DPA) and correlation power analysis (CPA) have been invented and demonstrated to be realistic threats to many critical embedded systems. However, there is still no sound and provable theoretical model that illustrates precisely what the success of these attacks depends on and how. Based on the maximum likelihood estimation (MLE) theory, this paper proposes a general statistical model for side-channel attack analysis that takes characteristics of both the physical implementation and the cryptographic algorithm into consideration. The model establishes analytical relations between the success rate of attacks and the cryptographic system. For power analysis attacks, the side-channel characteristic of the physical implementation is modeled as the signal-to-noise ratio (SNR), which is the ratio between the single-bit unit power consumption and the standard deviation of the power distribution. The side-channel property of the cryptographic algorithm is extracted by a novel algorithmic confusion analysis. Experimental results of DPA and CPA on both DES and AES verify this model with high accuracy and demonstrate the effectiveness of the algorithmic confusion analysis and SNR extraction. We expect the model to be extendable to other SCAs, like timing attacks, and to provide valuable guidelines for truly SCA-resilient system design and implementation.
Last updated:  2014-03-01
Security Analysis of Key-Alternating Feistel Ciphers
Rodolphe Lampe, Yannick Seurin
We study the security of \emph{key-alternating Feistel} ciphers, a class of key-alternating ciphers with a Feistel structure. Alternatively, this may be viewed as the study of Feistel ciphers where the pseudorandom round functions are of the form $F_i(x\oplus k_i)$, where $k_i$ is the (secret) round key and $F_i$ is a \emph{public} random function that the adversary is allowed to query in a black-box way. Interestingly, our results can be seen as a generalization of traditional results \emph{à la} Luby-Rackoff in the sense that we can derive results for this model by simply letting the number of queries of the adversary to the public random functions $F_i$ be zero in our general bounds. We make an extensive use of the coupling technique. In particular (and as a result of independent interest), we improve the analysis of the coupling probability for balanced Feistel schemes previously carried out by Hoang and Rogaway (CRYPTO 2010).
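A minimal Python sketch of the structure being analyzed is given below (an illustrative toy rendering with 32-bit halves; a hash stands in for the public random functions $F_i$, which in the model are ideal and queried in a black-box way).

```python
import hashlib

def F(i: int, x: int) -> int:
    # Stand-in for the public random round function F_i (ideal in the model;
    # a hash is used here purely for illustration).
    digest = hashlib.sha256(b"F%d:%d" % (i, x)).digest()
    return int.from_bytes(digest[:4], "big")

def key_alternating_feistel(left: int, right: int, round_keys) -> tuple:
    # Each round applies the public function to (half XOR round key):
    # (L, R) -> (R, L XOR F_i(R XOR k_i)).
    for i, k in enumerate(round_keys):
        left, right = right, left ^ F(i, right ^ k)
    return left, right
```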
Last updated:  2014-09-15
On the Effective Prevention of TLS Man-In-The-Middle Attacks in Web Applications
Nikolaos Karapanos, Srdjan Capkun
In this paper we consider TLS Man-In-The-Middle (MITM) attacks in the context of web applications, where the attacker is able to successfully impersonate the legitimate server to the user, with the goal of impersonating the user to the server and thus compromising the user's online account and data. We describe in detail why the recently proposed client authentication protocols based on TLS Channel IDs, as well as client web authentication in general, cannot fully prevent such attacks. Nevertheless, we show that strong client authentication, such as Channel ID-based authentication, can be combined with the concept of server invariance, a weaker and easier to achieve property than server authentication, in order to protect against the considered attacks. We specifically leverage Channel ID-based authentication in combination with server invariance to create a novel mechanism that we call SISCA: Server Invariance with Strong Client Authentication. SISCA resists user impersonation via TLS MITM attacks, regardless of how the attacker is able to successfully achieve server impersonation. We analyze our proposal and show how it can be integrated in today's web infrastructure.
Last updated:  2014-03-24
Millions of Millionaires: Multiparty Computation in Large Networks
Mahdi Zamani, Mahnush Movahedi, Jared Saia
We describe a general Multi-Party Computation (MPC) protocol for arithmetic circuits that is secure against a static malicious adversary corrupting up to a 1/10 fraction of the parties. The protocol requires each party to send an average of soft-O(m/n) bits, and compute soft-O(m/n) operations in a network of size n, where m is the size of the circuit. This is achieved by increasing latency from constant to O(d), where d is the depth of the circuit. Our protocol has a setup phase that is independent of the circuit and relies on Threshold Fully Homomorphic Encryption (TFHE). The setup requires each party to send soft-O(k^2) messages and compute soft-O(k^2) operations, where k is the security parameter. We provide results from microbenchmarks conducted over a sorting network showing that our protocol may be practical for deployment in large networks. For example, we consider a network of size 2^25 (over 33 million), where each party has an input item of size 20 bytes. To securely sort the items, our protocol requires each party on average to send only 5 kilobytes per item sorted.
Last updated:  2014-08-25
Outsourcing Private RAM Computation
Craig Gentry, Shai Halevi, Mariana Raykova, Daniel Wichs
We construct the first schemes that allow a client to privately outsource arbitrary program executions to a remote server while ensuring that: (I) the client's work is small and essentially independent of the complexity of the computation being outsourced, and (II) the server's work is only proportional to the run-time of the computation on a random access machine (RAM), rather than its potentially much larger circuit size. Furthermore, our solutions are non-interactive and have the structure of reusable garbled RAM programs, addressing an open question of Lu and Ostrovsky (Eurocrypt 2013). We also construct schemes for an augmented variant of the above scenario, where the client can initially outsource a large private and persistent database to the server, and later outsource arbitrary program executions with read/write access to this database. Our solutions are built from non-reusable garbled RAM in conjunction with new types of reusable garbled circuits that are more efficient than prior solutions but only satisfy weaker security. For the basic setting without a persistent database, we can instantiate the required reusable garbled circuits using indistinguishability obfuscation. For the more complex setting with a persistent database we need stronger notions of obfuscation. Our basic solution also requires the client to perform a one-time preprocessing step to garble a program at the cost of its RAM run-time, and we can avoid this cost using stronger notions of obfuscation. It remains an open problem to instantiate these new types of reusable garbled circuits under weaker assumptions, possibly avoiding obfuscation altogether.
Last updated:  2015-10-19
The Multiple Number Field Sieve for Medium and High Characteristic Finite Fields
Razvan Barbulescu, Cécile Pierrot
In this paper, we study the discrete logarithm problem in medium and high characteristic finite fields. We propose a variant of the Number Field Sieve (NFS) based on numerous number fields. Our improved algorithm computes discrete logarithms in $\mathbb{F}_{p^n}$ for the whole range of applicability of NFS and lowers the asymptotic complexity from $L_{p^n}(1/3, (128/9)^{1/3})$ to $L_{p^n}(1/3, (2^{13}/3^6)^{1/3})$ in the medium characteristic case, and from $L_{p^n}(1/3, (64/9)^{1/3})$ to $L_{p^n}(1/3, ((92 + 26\sqrt{13})/27)^{1/3})$ in the high characteristic case. Version 2 contains an erratum.
Last updated:  2014-02-27
Untappable communication channels over optical fibers from quantum-optical noise
Geraldo A. Barbosa, Jeroen van de Graaf
Coherent light, as produced by lasers, gives rise to an intrinsic noise, known as quantum noise, optical noise or shot noise. AlphaEta is a protocol which exploits this physical phenomenon to obtain secure data encryption or key distribution over a fiber-optic channel in the presence of an eavesdropper. In this paper we focus on the cryptographic aspects of AlphaEta and its variants. Moreover, we propose a new protocol for which we can provide a rigorous proof that the eavesdropper obtains negligible information. In comparison to single-photon quantum cryptography, AlphaEta provides much higher throughput combined with well-known technology.
Last updated:  2014-07-20
Calculating Cryptographic Degree of an S-Box
Prasanna Raghaw Mishra
In this paper we propose an efficient technique to compute the algebraic degree of an S-box (the minimum of the algebraic degrees of its component functions). Using our technique we have calculated the algebraic degree of a $26\times 64$ S-box.
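For small S-boxes, the quantity in question can be computed directly via the Möbius (ANF) transform; the Python sketch below is a naive reference computation for illustration (not the paper's efficient technique), using the minimum over non-zero component functions as in the definition quoted above.

```python
def anf_degree(truth_table, n):
    # Moebius transform of a Boolean function given as a truth table of
    # length 2^n; the degree is the largest Hamming weight of a monomial
    # index with a non-zero ANF coefficient.
    anf = list(truth_table)
    for i in range(n):
        step = 1 << i
        for block in range(0, 1 << n, step << 1):
            for x in range(block, block + step):
                anf[x + step] ^= anf[x]
    return max((bin(u).count("1") for u, c in enumerate(anf) if c), default=0)

def sbox_degree(sbox, n, m):
    # Minimum ANF degree over all non-zero component functions v . S(x),
    # for an n-bit to m-bit S-box given as a lookup table.
    return min(
        anf_degree([bin(sbox[x] & v).count("1") & 1 for x in range(1 << n)], n)
        for v in range(1, 1 << m)
    )

assert sbox_degree([0, 0, 0, 1], 2, 1) == 2  # the 2-input AND has degree 2
```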
Last updated:  2020-04-01
How to Securely Release Unverified Plaintext in Authenticated Encryption
Elena Andreeva, Andrey Bogdanov, Atul Luykx, Bart Mennink, Nicky Mouha, Kan Yasuda
Scenarios in which authenticated encryption schemes output decrypted plaintext before successful verification raise many security issues. These situations are sometimes unavoidable in practice, such as when devices have insufficient memory to store an entire plaintext, or when a decrypted plaintext needs early processing due to real-time requirements. We introduce the first formalization of the releasing unverified plaintext (RUP) setting. To achieve privacy, we propose using plaintext awareness (PA) along with IND-CPA. An authenticated encryption scheme is PA if it has a plaintext extractor, which tries to fool adversaries by mimicking the decryption oracle without the secret key. Releasing unverified plaintext then becomes harmless as it is infeasible to distinguish the decryption oracle from the plaintext extractor. We introduce two notions of plaintext awareness in the symmetric-key setting, PA1 and PA2, and show that they expose a new layer of security between IND-CPA and IND-CCA. To achieve integrity of ciphertexts, INT-CTXT in the RUP setting is required, which we refer to as INT-RUP. These new security notions are used to make a classification of symmetric-key schemes in the RUP setting. Furthermore, we re-analyze existing authenticated encryption schemes, and provide solutions to fix insecure schemes.
Last updated:  2014-03-03
Statistical Concurrent Non-Malleable Zero Knowledge
Claudio Orlandi, Rafail Ostrovsky, Vanishree Rao, Amit Sahai, Ivan Visconti
The notion of Zero Knowledge introduced by Goldwasser, Micali and Rackoff in STOC 1985 is fundamental in Cryptography. Motivated by conceptual and practical reasons, this notion has been explored under stronger definitions. We will consider the following two main strengthened notions. -- Statistical Zero Knowledge: here the zero-knowledge property will last forever, even if in the future the adversary has unlimited power. -- Concurrent Non-Malleable Zero Knowledge: here the zero-knowledge property is combined with non-transferability, and the adversary fails in mounting a concurrent man-in-the-middle attack aiming at transferring zero-knowledge proofs/arguments. Despite the well-known importance of both notions, it is still unknown whether one can design a zero-knowledge protocol that satisfies both notions simultaneously. In this work we shed light on this question in a very strong sense. We show a {\em statistical concurrent non-malleable} zero-knowledge argument system for NP with a {\em black-box} simulator-extractor.
Last updated:  2014-02-27
FPGA-Based High Performance AES-GCM Using Efficient Karatsuba Ofman Algorithm
Karim M. Abdellatif, R. Chotin-Avot, H. Mehrez
AES-GCM has been utilized in various security applications. It consists of two components: an Advanced Encryption Standard (AES) engine and a Galois Hash (GHASH) core. The performance of the system is determined by the GHASH architecture because of the inherent computation feedback. This paper introduces a modification for the pipelined Karatsuba Ofman Algorithm (KOA)-based GHASH. In particular, the computation feedback is removed by analyzing the complexity of the computation process. The proposed GHASH core is evaluated with three different implementations of AES (BRAM-based SubBytes, composite field-based SubBytes, and LUT-based SubBytes). The presented AES-GCM architectures are implemented using Xilinx Virtex5 FPGAs. Our comparison to previous work reveals that our architectures are more performance-efficient (Thr./Slices).
Last updated:  2015-03-26
Unrestricted Identity-Based Aggregate Signcryption in the Standard Model from Multilinear Maps
Hao Wang
Signcryption is a public key cryptographic method that achieves unforgeability and confidentiality simultaneously with significantly smaller overhead than that required by "digital signature followed by public key encryption". It does this by signing and encrypting a message in a single step. An aggregate signcryption scheme allows individual signcryption ciphertexts intended for the same recipient to be aggregated into a single (shorter) combined ciphertext without losing any of the security guarantees. In this paper, we present an unrestricted aggregate signcryption scheme in the identity-based setting using multilinear maps. To the best of my knowledge, our new scheme is the first identity-based aggregate signcryption scheme that admits unrestricted aggregation.
Last updated:  2014-02-27
Recovering OpenSSL ECDSA Nonces Using the FLUSH+RELOAD Cache Side-channel Attack
Yuval Yarom, Naomi Benger
We illustrate a vulnerability introduced to elliptic curve cryptographic protocols when implemented using a function of the OpenSSL cryptographic library. For the given implementation using an elliptic curve E over a binary field with a point G \in E, our attack recovers the majority of the bits of a scalar k when kG is computed using the OpenSSL implementation of the Montgomery ladder. For the Elliptic Curve Digital Signature Algorithm (ECDSA) the scalar k is intended to remain secret. Our attack recovers the scalar k and thus the secret key of the signer and would therefore allow unlimited forgeries. This is possible from snooping on only one signing process and requires computation of less than one second on a quad core desktop when the scalar k (and secret key) is around 571 bits.
Last updated:  2014-02-25
On the Phase Space of Block-Hiding Strategies
Assaf Shomer
We calculate the probability of success of block-hiding mining strategies in bitcoin-like networks. These strategies involve building a secret branch of the block-tree and publishing it opportunistically, aiming to replace the top of the main branch and reap the reward associated with the secretly mined blocks. We identify two types of block-hiding strategies and chart the parameter space where they are more beneficial than the standard mining strategy described in Nakamoto's paper. Our analysis suggests a generalization of the notion of the relative hashing power as a measure for a miner's influence on the network. Block-hiding strategies are beneficial only when this measure of influence exceeds a certain threshold.
Last updated:  2014-03-12
Short Signatures from Diffie-Hellman, Revisited: Sublinear Public Key, CMA Security, and Tighter Reduction
Jae Hong Seo
Designing an efficient signature scheme based on a standard assumption such as the Computational Diffie-Hellman (CDH) assumption is important from both a practical and a theoretical point of view. Currently, there are only three standard-model CDH-based signature schemes with short signatures, due to Waters (EUROCRYPT 2005), and Seo and Böhl et al. (the merged paper in EUROCRYPT 2013). The Waters signature scheme achieves {\em Existential UnForgeability against Chosen Message Attack (EUF-CMA)} with a nearly optimal reduction. However, this scheme suffers from large public keys. To shorten the public key size, Seo and Böhl et al. proposed new approaches, respectively, but each approach has a weak point compared to the Waters signature scheme; Seo's approach could prove only a rather weak security notion, called bounded CMA security, and Böhl et al.'s approach inherently comes with a loose reduction. In this paper, we aim at stepping towards an efficient CDH-based EUF-CMA secure signature scheme with a tighter reduction. To this end, we revisit the Seo signature scheme and devise an alternative security proof. The resulting security proof leads to (i) {\em asymptotically} (almost) compact parameters: short signatures (two group elements and one exponent) and $\omega(1)$-size public keys (e.g., $\log\log\lambda$), where $\lambda$ is the security parameter, and (ii) the standard EUF-CMA security with a tighter reduction: an $O(\lambda q)$ reduction loss, when ignoring negligible factors, which is less than the $O(\sqrt{\lambda/\log\lambda}\,\lambda q)$ loss of the original security proof and almost the same as that of the Waters signature scheme.
Last updated:  2014-08-15
Efficient, Oblivious Data Structures for MPC
Marcel Keller, Peter Scholl
We present oblivious implementations of several data structures for secure multiparty computation (MPC) such as arrays, dictionaries, and priority queues. The resulting oblivious data structures have only polylogarithmic overhead compared with their classical counterparts. To achieve this, we give secure multiparty protocols for the ORAM of Shi et al. (Asiacrypt `11) and the Path ORAM scheme of van Dijk et al. (CCS `13), and we compare the resulting implementations. We subsequently use our oblivious priority queue for secure computation of Dijkstra's shortest path algorithm on general graphs, where the graph structure is secret. To the best of our knowledge, this is the first implementation of a non-trivial graph algorithm in multiparty computation with polylogarithmic overhead. We implemented and benchmarked all of our protocols using the SPDZ protocol of Damgaard et al. (Crypto `12), which works in the preprocessing model and ensures active security against an adversary corrupting all but one of the players. For two parties, the access time for an oblivious array of size 1 million is under 250 ms.
Last updated:  2014-02-24
Isolated Execution on Many-core Architectures
Ramya Jayaram Masti, Devendra Rai, Claudio Marforio, Srdjan Capkun
We explore how many-core platforms can be used to enhance the security of future systems and to support important security properties such as runtime isolation using a small Trusted Computing Base (TCB). We focus on the Intel Single-chip Cloud Computer (SCC) to show that such properties can be implemented in current systems. We design a system called \archname{}, which offers strong security properties while maintaining the high performance and flexibility enabled by a small centralized security kernel. We further implement and evaluate the feasibility of our design. Currently, our prototype security kernel is able to execute applications in isolation and accommodate dynamic resource requests from them. We show that, with minor modifications, many-core architectures can offer some unique security properties not supported by existing single- and multi-core architectures, such as application context awareness. Context awareness, a new security property that we define and explore in this work, allows each application to discover, without any interaction with the security kernel, which other parts of the system are allowed to interact with it and access its resources. We also discuss how an application can use context awareness to defend itself against an unlikely, yet possible, compromise of the security kernel.
Last updated:  2014-09-11
Anonymous Two-Factor Authentication in Distributed Systems: Certain Goals Are Beyond Attainment
Ding Wang, Debiao He, Ping Wang, Chao-Hsien Chu
Despite two decades of intensive research, it remains a challenge to design a practical anonymous two-factor authentication scheme, for the designers are confronted with an impressive list of security requirements (e.g., resistance to smart card loss attack) and desirable attributes (e.g., local password update). Numerous solutions have been proposed, yet most of them are soon found either unable to satisfy some critical security requirements or lacking a few important features. To overcome this unsatisfactory situation, researchers often work around it in the hope of a new proposal (but no one has succeeded so far), while paying little attention to the fundamental question: are there inherent limitations that prevent us from designing an ``ideal'' scheme that satisfies all the desirable goals? In this work, we aim to provide a definite answer to this question. We first revisit two foremost proposals, i.e., Tsai et al.'s scheme and Li's scheme, revealing some subtleties and challenges in designing such schemes. Then, we systematically explore the inherent conflicts and unavoidable trade-offs among the design criteria. Our results indicate that, under the current widely accepted adversarial model, certain goals are beyond attainment. This also suggests a negative answer to the open problem left by Huang et al. in 2014. To the best of our knowledge, the present study takes the first step towards understanding the underlying evaluation metric for anonymous two-factor authentication, which we believe will facilitate better design of anonymous two-factor protocols that offer acceptable trade-offs among usability, security, and privacy.
Last updated:  2014-10-28
Kummer strikes back: new DH speed records
Daniel J. Bernstein, Chitchanok Chuengsatiansup, Tanja Lange, Peter Schwabe
This paper sets new speed records for high-security constant-time variable-base-point Diffie--Hellman software: 305395 Cortex-A8-slow cycles; 273349 Cortex-A8-fast cycles; 88916 Sandy Bridge cycles; 88448 Ivy Bridge cycles; 54389 Haswell cycles. There are no higher speeds in the literature for any of these platforms. The new speeds rely on a synergy between (1) state-of-the-art formulas for genus-2 hyperelliptic curves and (2) a modern trend towards vectorization in CPUs. The paper introduces several new techniques for efficient vectorization of Kummer-surface computations.
Last updated:  2014-08-30
Efficient Secure and Verifiable Outsourcing of Matrix Multiplications
Yihua Zhang, Marina Blanton
With the emergence of cloud computing services, a resource-constrained client can outsource its computationally heavy tasks to cloud providers. Because such service providers might not be fully trusted by the client, the need to verify the integrity of the returned computation result arises. The ability to do so is called verifiable delegation or verifiable outsourcing. Furthermore, the data used in the computation may be sensitive, and it is often desired to protect it from the cloud throughout the computation. In this work, we put forward solutions for verifiable outsourcing of matrix multiplications that compare favorably with the state of the art. The cost of verifying the result of the computation consists of a single modular exponentiation and can be further reduced if the cloud is rational (or lazy). A lazy cloud tries to minimize its work by skipping the computation, but does not maliciously corrupt the data. Our solutions achieve several desired features such as data protection, public verifiability, and computation chaining.
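The paper's own verification costs a single modular exponentiation and also provides data protection and public verifiability; that scheme is not reproduced here. As a self-contained illustration of how cheaply a matrix product can be verified in general, the sketch below uses the classical Freivalds check, a different and well-known technique that detects an incorrect product with high probability using only matrix-vector products.

```python
import random

def freivalds_check(A, B, C, reps=3):
    """Classical Freivalds check that C == A*B for n x n integer matrices,
    shown only as an illustration of cheap result verification; it is NOT the
    outsourcing protocol proposed in the paper.  Each repetition costs three
    matrix-vector products (O(n^2) work) and accepts a wrong C with
    probability at most 2**-31, so the overall error is at most 2**(-31*reps)."""
    n = len(A)
    for _ in range(reps):
        r = [random.randrange(1 << 31) for _ in range(n)]
        Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
        ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
        Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
        if ABr != Cr:
            return False    # the returned product is definitely wrong
    return True             # correct except with small probability
```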
Last updated:  2016-09-30
Efficient Revocable Identity-Based Encryption via Subset Difference Methods
Kwangsu Lee, Dong Hoon Lee, Jong Hwan Park
Providing an efficient revocation mechanism for identity-based encryption (IBE) is very important since a user's credential (or private key) can expire or be revealed. Revocable IBE (RIBE) is an extension of IBE that provides an efficient revocation mechanism. Previous RIBE schemes essentially use the complete subtree (CS) scheme of Naor, Naor and Lotspiech (CRYPTO 2001) for key revocation. In this paper, we present a new technique for RIBE that uses the efficient subset difference (SD) scheme of Naor et al. instead of the CS scheme to reduce the size of update keys. Following our new technique, we first propose an efficient RIBE scheme in prime-order bilinear groups by combining the IBE scheme of Boneh and Boyen with the SD scheme, and we prove its selective security under a standard assumption. Our RIBE scheme is the first RIBE scheme in bilinear groups whose update keys consist of $O(r)$ group elements, where $r$ is the number of revoked users. Next, we also propose another RIBE scheme in composite-order bilinear groups and prove its full security under static assumptions. Our RIBE schemes can also be integrated with the layered subset difference (LSD) scheme of Halevy and Shamir (CRYPTO 2002) to reduce the size of a private key.
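For intuition about what an update key indexes, the sketch below computes the classical complete-subtree (CS) cover of Naor, Naor and Lotspiech: the roots of the maximal subtrees containing no revoked leaf, which amounts to O(r log(N/r)) nodes. The paper's contribution is to replace this with the subset-difference (SD) method so that update keys shrink to O(r) group elements; the SD cover and the pairing-based schemes themselves are not reproduced here, and the heap-style node numbering below is just a convention chosen for this sketch.

```python
def cs_cover(num_leaves: int, revoked: set) -> list:
    """Complete-subtree (CS) cover: roots of the maximal subtrees that contain
    no revoked leaf.  Nodes are numbered heap-style: root = 1, children of v
    are 2v and 2v+1, and leaf i (0 <= i < num_leaves) is node num_leaves + i.
    num_leaves is assumed to be a power of two."""
    if not revoked:
        return [1]                                   # one subtree: the whole tree
    marked = set()                                   # revoked leaves and all their ancestors
    for leaf in revoked:
        v = num_leaves + leaf
        while v >= 1:
            marked.add(v)
            v //= 2
    cover = []
    for v in marked:                                 # unmarked children of marked nodes
        for child in (2 * v, 2 * v + 1):
            if child < 2 * num_leaves and child not in marked:
                cover.append(child)
    return sorted(cover)

# Example: with 4 users and user 0 revoked, the cover is [3, 5]:
# the subtree rooted at node 3 (users 2 and 3) plus the single leaf of user 1.
print(cs_cover(4, {0}))
```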
Last updated:  2014-02-27
Modelling After-the-fact Leakage for Key Exchange
Janaka Alawatugoda, Douglas Stebila, Colin Boyd
Security models for two-party authenticated key exchange (AKE) protocols have developed over time to prove the security of AKE protocols even when the adversary learns certain secret values. In this work, we address more granular leakage: partial leakage of long-term secrets of protocol principals, even after the session key is established. We introduce a generic key exchange security model, which can be instantiated allowing bounded or continuous leakage, even when the adversary learns certain ephemeral secrets or session keys. Our model is the strongest known partial-leakage-based security model for key exchange protocols. We propose a generic construction of a two-pass leakage-resilient key exchange protocol that is secure in the proposed model, by introducing a new concept: the leakage-resilient NAXOS trick. We identify a special property for public-key cryptosystems: pair generation indistinguishability, and show how to obtain the leakage-resilient NAXOS trick from a pair generation indistinguishable leakage-resilient public-key cryptosystem.
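As background, the original NAXOS trick of LaMacchia, Lauter and Mityagin derives the actual ephemeral Diffie-Hellman exponent by hashing the stored ephemeral randomness together with the long-term secret key, so leaking the stored randomness alone reveals nothing about the exponent. The toy sketch below shows only this standard trick, with throwaway group parameters chosen for illustration; the paper's leakage-resilient NAXOS trick, built from a pair-generation-indistinguishable leakage-resilient public-key cryptosystem, is not reproduced here.

```python
import hashlib
import secrets

# Toy parameters for illustration only: a large prime used as the modulus of a
# multiplicative group, and an arbitrary base.  Not a secure parameter choice.
P = 2**255 - 19
G = 5

def naxos_ephemeral(long_term_sk: int):
    """Standard NAXOS trick: the DH exponent is H(esk, lsk), so an adversary
    who learns only the stored ephemeral value esk (or only lsk) cannot
    recompute the exponent.  long_term_sk is assumed to fit in 32 bytes."""
    esk = secrets.randbits(256)                          # stored ephemeral randomness
    digest = hashlib.sha256(esk.to_bytes(32, "big") +
                            long_term_sk.to_bytes(32, "big")).digest()
    x = int.from_bytes(digest, "big") % (P - 1)          # actual exponent, never stored
    return esk, pow(G, x, P)                             # (stored value, ephemeral public value)
```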
Last updated:  2015-04-24
Selecting Elliptic Curves for Cryptography: An Efficiency and Security Analysis
Joppe W. Bos, Craig Costello, Patrick Longa, Michael Naehrig
We select a set of elliptic curves for cryptography and analyze our selection from a performance and security perspective. This analysis complements recent curve proposals that suggest (twisted) Edwards curves by also considering the Weierstrass model. Working with both Montgomery-friendly and pseudo-Mersenne primes allows us to consider more possibilities which help to improve the overall efficiency of base field arithmetic. Our Weierstrass curves are backwards compatible with current implementations of prime order NIST curves, while providing improved efficiency and stronger security properties. We choose algorithms and explicit formulas to demonstrate that our curves support constant-time, exception-free scalar multiplications, thereby offering high practical security in cryptographic applications. Our implementation shows that variable-base scalar multiplication on the new Weierstrass curves at the 128-bit security level is about 1.4 times faster than the recent implementation record on the corresponding NIST curve. For practitioners who are willing to use a different curve model and sacrifice a few bits of security, we present a collection of twisted Edwards curves with particularly efficient arithmetic that are up to 1.42, 1.26 and 1.24 times faster than the new Weierstrass curves at the 128-, 192- and 256-bit security levels, respectively. Finally, we discuss how these curves behave in a real-world protocol by considering different scalar multiplication scenarios in the transport layer security (TLS) protocol. The proposed curves and the results of the analysis are intended to contribute to the recent efforts towards recommending new elliptic curves for Internet standards.
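The constant-time, exception-free scalar multiplications mentioned above rely on formulas and schedules specific to the selected Weierstrass and twisted Edwards curves, which are not reproduced here. As a generic illustration of the fixed, branch-free schedule that constant-time scalar multiplication follows, the sketch below runs a Montgomery ladder in the multiplicative group mod p as a stand-in for curve arithmetic; real implementations replace the Python conditional with an arithmetic conditional swap and use curve-specific group operations.

```python
def montgomery_ladder(k: int, g: int, p: int) -> int:
    """Montgomery ladder computing g**k mod p, used here as a stand-in for
    scalar multiplication on a curve.  Every bit of k triggers exactly one
    'add' (multiply) and one 'double' (square), so the operation schedule is
    independent of the scalar.  The Python conditional swap below is only
    illustrative: constant-time code swaps with arithmetic, not branches."""
    r0, r1 = 1, g
    for i in reversed(range(k.bit_length())):
        bit = (k >> i) & 1
        r0, r1 = (r1, r0) if bit else (r0, r1)   # conditional swap (illustrative)
        r1 = (r0 * r1) % p                       # "add"
        r0 = (r0 * r0) % p                       # "double"
        r0, r1 = (r1, r0) if bit else (r0, r1)   # swap back
    return r0

# Sanity check against Python's built-in modular exponentiation.
assert montgomery_ladder(123456789, 7, 2**61 - 1) == pow(7, 123456789, 2**61 - 1)
```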