All papers in 2009 (Page 7 of 638 results)

Last updated:  2009-01-25
On Algebraic Relations of Serpent S-Boxes
Bhupendra Singh, Lexy Alexander, Sanjay Burman
Serpent is a 128-bit block cipher designed by Ross Anderson, Eli Biham and Lars Knudsen as a candidate for the Advanced Encryption Standard (AES). It was a finalist in the AES competition. The winner, Rijndael, got 86 votes at the last AES conference while Serpent got 59 votes [1]. The designers of Serpent claim that Serpent is more secure than Rijndael. In this paper we observe that, contrary to the designers' claim, not all output bits of the Serpent S-boxes have nonlinear order 3.
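A claim like this can be checked mechanically: the algebraic degree (nonlinear order) of each output bit of a 4x4 S-box follows from its algebraic normal form, computed with a Möbius transform over GF(2). A minimal sketch, assuming an arbitrary sample 4-bit permutation of our own choosing (not one of Serpent's S-boxes):

```python
def anf(tt):
    """Mobius transform over GF(2): truth table -> ANF coefficients."""
    a = list(tt)
    n = len(a).bit_length() - 1
    for i in range(n):
        for x in range(len(a)):
            if x & (1 << i):
                a[x] ^= a[x ^ (1 << i)]
    return a

def algebraic_degree(tt):
    """Largest Hamming weight of a monomial present in the ANF."""
    coeffs = anf(tt)
    return max((bin(x).count("1") for x in range(len(coeffs)) if coeffs[x]),
               default=0)

# Sample 4-bit permutation for illustration only (NOT a Serpent S-box).
sbox = [7, 12, 1, 9, 14, 0, 5, 11, 3, 8, 15, 2, 10, 6, 13, 4]

# Degree of each of the four output bits, extracted as truth tables.
for bit in range(4):
    tt = [(sbox[x] >> bit) & 1 for x in range(16)]
    print(f"output bit {bit}: degree {algebraic_degree(tt)}")
```

Since each output bit of a 4-bit permutation is balanced, its degree is at most 3; the check above reveals which bits fall short of that bound.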
Last updated:  2009-01-25
Common Modulus Attacks on Small Private Exponent RSA and Some Fast Variants (in Practice)
M. Jason Hinek, Charles C. Y. Lam
In this work we re-examine two common modulus attacks on RSA. First, we show that Guo's continued fraction attack works much better in practice than previously expected. Given three instances of RSA with a common modulus $N$ and private exponents each smaller than $N^{0.33}$, the attack can factor the modulus about $93\%$ of the time in practice. The success rate of the attack can be increased up to almost $100\%$ by including a relatively small exhaustive search. Next, we consider Howgrave-Graham and Seifert's lattice-based attack and show that a second necessary condition for the attack exists that limits the bounds (beyond the original bounds) once $n \geq 7$ instances of RSA are used. In particular, by construction, the attack can only succeed when the private exponents are each smaller than $N^{0.5-\epsilon}$, given sufficiently many instances, instead of the original bound of $N^{1-\epsilon}$. In addition, we also consider the effectiveness of the attacks when mounted against multi-prime RSA and Takagi's variant of RSA. For multi-prime RSA, we show that three (or more) instances with a common modulus and private exponents smaller than $N^{1/3-\epsilon}$ are unsafe. For Takagi's variant, we show that three or more instances with a common modulus $N=p^rq$ are unsafe when all the private exponents are smaller than $N^{2/(3(r+1))-\epsilon}$. The results for both variants are obtained using Guo's method and are almost always successful with the inclusion of a small exhaustive search. When only two instances are available, Howgrave-Graham and Seifert's attack can be mounted on multi-prime RSA when the private exponents are smaller than $N^{(3+r)/7r-\epsilon}$, where there are $r$ primes in the modulus.
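Continued-fraction attacks of this family (Guo's, like Wiener's) walk the convergents of a suitable continued-fraction expansion until one convergent exposes a secret ratio. As a hedged illustration of just that shared building block (not the full attack; the 355/113 example is ours), the convergents can be computed as follows:

```python
from fractions import Fraction

def convergents(num, den):
    """Yield the continued-fraction convergents of num/den as Fractions."""
    # Partial quotients via the Euclidean algorithm.
    quotients = []
    a, b = num, den
    while b:
        q, r = divmod(a, b)
        quotients.append(q)
        a, b = b, r
    # Standard recurrence: h_k = a_k*h_{k-1} + h_{k-2}, same for k_k.
    h0, h1 = 1, 0   # numerators  h_{-1}, h_{-2}
    k0, k1 = 0, 1   # denominators k_{-1}, k_{-2}
    for q in quotients:
        h0, h1 = q * h0 + h1, h0
        k0, k1 = q * k0 + k1, k0
        yield Fraction(h0, k0)

# Example: 355/113 produces the classic pi approximations 3, 22/7, 355/113.
print(list(convergents(355, 113)))
```

In the attacks above, each convergent is tested as a candidate for the unknown ratio; the small exhaustive search mentioned in the abstract then fixes the remaining low-order bits.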
Last updated:  2009-03-08
Constructions of Truly Practical Secure Protocols using Standard Smartcards
Carmit Hazay, Yehuda Lindell
In this paper we show that using standard smartcards it is possible to construct truly practical secure protocols for a variety of tasks. Our protocols achieve full \emph{simulation-based security} in the presence of \emph{malicious adversaries}, and can be run on very large inputs. We present protocols for secure set intersection, oblivious database search and more. We have also implemented our set intersection protocol in order to show that it is truly practical: running on sets of 30,000 elements, it takes 20 seconds for one party and 30 minutes for the other. This demonstrates that in settings where physical smartcards can be sent between parties (as in the case of private data mining tasks between security and governmental agencies), it is possible to use secure protocols with proven simulation-based security.
Last updated:  2009-08-13
Key-Exposure Free Chameleon Hashing and Signatures Based on Discrete Logarithm Systems
Xiaofeng Chen, Fangguo Zhang, Haibo Tian, Baodian Wei, Kwangjo Kim
Chameleon signatures simultaneously provide the properties of non-repudiation and non-transferability for the signed message. However, the initial constructions of chameleon signatures suffer from the problem of key exposure. This creates a strong disincentive for the recipient to forge signatures, partially undermining the concept of non-transferability. Recently, some specific constructions of discrete logarithm based chameleon hashing and signatures without key exposure have been presented, but only in the setting of gap Diffie-Hellman groups with pairings. In this paper, we propose the first key-exposure free chameleon hash and signature scheme based on discrete logarithm systems, without using gap Diffie-Hellman groups. This provides more flexible constructions of efficient key-exposure free chameleon hash and signature schemes. Moreover, one distinguishing advantage of the resulting chameleon signature scheme is that the property of ``message hiding'' or ``message recovery'' can be achieved freely by the signer, i.e., the signer can efficiently prove which message was the original one if he desires.
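The key-exposure problem can be seen in the classic discrete-log chameleon hash (Krawczyk-Rabin style): whoever holds the trapdoor can find collisions, but publishing two colliding openings leaks the trapdoor itself. A toy sketch with deliberately tiny parameters of our choosing (illustration of the baseline primitive, not of the paper's key-exposure free construction):

```python
import secrets

# Toy safe-prime parameters: q prime, p = 2q + 1 prime, g of order q mod p.
q = 1019
p = 2039
g = 4                                # a quadratic residue, hence order q

x = secrets.randbelow(q - 1) + 1     # trapdoor
y = pow(g, x, p)                     # public hash key

def chameleon_hash(m: int, r: int) -> int:
    """H(m, r) = g^m * y^r mod p = g^(m + x*r) mod p."""
    return (pow(g, m, p) * pow(y, r, p)) % p

def collide(m, r, m2):
    """With trapdoor x, find r2 so that H(m2, r2) = H(m, r):
    solve m + x*r = m2 + x*r2 (mod q)."""
    return ((m - m2) * pow(x, -1, q) + r) % q

m, r = 123, 456
r2 = collide(m, r, 789)
assert chameleon_hash(m, r) == chameleon_hash(789, r2)
# Note: from (m, r) and (789, r2) anyone can solve for x -- the key
# exposure that the paper's schemes are designed to avoid.
```

The final comment is the point of the abstract: in the basic scheme a revealed collision exposes the trapdoor, which is what key-exposure free constructions eliminate.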
Last updated:  2009-01-17
On a Conditional Collision Attack on NaSHA-512
S. Markovski, A. Mileva, V. Dimitrova, D. Gligoroski
A collision attack on NaSHA-512 was proposed by L. Ji et al. The claimed complexity of the attack is $2^{192}$. The proposed attack is realized by using a suitable differential pattern. In this note we show that the correct result that can be inferred from their differential pattern is in fact a conditional one. It can be stated correctly as follows: A collision attack on NaSHA-512 of complexity $k=1,2,\dots,2^{320}$ can be performed with an unknown probability of success $p_k$, where $0\le p_1\le p_2\le \dots \le p_{2^{320}}\le 1$. Consequently, the attack proposed by L. Ji et al. can be considered only as an indication of how a possible collision attack on NaSHA-512 might be realized. The birthday attack remains the best possible attack on NaSHA-512.
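The birthday attack invoked here is generic: for a $b$-bit digest, roughly $2^{b/2}$ evaluations suffice to find a collision. A toy demonstration on a deliberately truncated stand-in hash (32-bit digest so it finishes instantly; against NaSHA-512 itself the same strategy would cost about $2^{256}$ work):

```python
import hashlib
from itertools import count

def toy_hash(msg: bytes, bits: int = 32) -> int:
    """Truncated SHA-256 standing in for an arbitrary b-bit hash."""
    digest = hashlib.sha256(msg).digest()
    return int.from_bytes(digest, "big") >> (256 - bits)

def birthday_collision(bits: int = 32):
    """Hash distinct messages until a digest repeats (~2^(bits/2) tries)."""
    seen = {}
    for i in count():
        msg = i.to_bytes(8, "big")
        h = toy_hash(msg, bits)
        if h in seen:
            return seen[h], msg
        seen[h] = msg

m1, m2 = birthday_collision()
assert m1 != m2 and toy_hash(m1) == toy_hash(m2)
print("collision:", m1.hex(), m2.hex())
```

The table-based variant above trades $2^{b/2}$ memory for a guaranteed stop at the first repeat; memoryless cycle-finding variants achieve the same time bound with constant space.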
Last updated:  2009-05-02
NESHA-256, NEw 256-bit Secure Hash Algorithm (Extended Abstract)
Yaser Esmaeili Salehani, Amir Tabatabaei, Mohammad Reza Sohizadeh Abyaneh, Mehdi Mohammad Hassanzadeh
In this paper, we introduce a new dedicated 256-bit hash function: NESHA-256. The recent NIST hash function competition motivated us to design a new hash function with a parallel structure. The advantages of parallel structures, together with ideas from the design of block-cipher-based hash functions, strengthen our proposed hash function in both security and efficiency. NESHA-256 is designed not only to have higher security but also to be faster than SHA-256: the performance of NESHA-256 is at least 38% better than that of SHA-256 in software. We give security proofs supporting our design against known cryptographic attacks on hash functions.
Last updated:  2009-01-17
A Fast Implementation of $\eta_T$ Pairing in Characteristic Three on Intel Core 2 Duo Processor
MITSUNARI Shigeo
We present an efficient implementation of the $\eta_T$ pairing on the Intel Core 2 Duo processor. The processing speed of our implementation achieves 92 $\mu$sec over ${\mathbb F}_{3^{97}}$ and 553 $\mu$sec over ${\mathbb F}_{3^{193}}$ on a 2.6 GHz processor.
Last updated:  2013-12-25
Adaptively Secure Two-Party Computation with Erasures
Yehuda Lindell
In the setting of multiparty computation a set of parties with private inputs wish to compute some joint function of their inputs, whilst preserving certain security properties (like privacy and correctness). An adaptively secure protocol is one in which the security properties are preserved even if an adversary can adaptively and dynamically corrupt parties during a computation. This provides a high level of security, that is arguably necessary in today's world of active computer break-ins. Until now, the work on adaptively secure multiparty computation has focused almost exclusively on the setting of an honest majority, and very few works have considered the honest minority and two-party cases. In addition, significant computational and communication costs are incurred by most protocols that achieve adaptive security. In this work, we consider the two-party setting and assume that honest parties may \emph{erase} data. We show that in this model it is possible to securely compute any two-party functionality in the presence of \emph{adaptive semi-honest adversaries}. Furthermore, our protocol remains secure under concurrent general composition (meaning that it remains secure irrespective of the other protocols running together with it). Our protocol is based on Yao's garbled-circuit construction and, importantly, is as efficient as the analogous protocol for static corruptions. We argue that the model of adaptive corruptions with erasures has been unjustifiably neglected and that it deserves much more attention.
Last updated:  2009-07-21
An efficient fuzzy extractor for limited noise
B. Skoric, P. Tuyls
A fuzzy extractor is a security primitive that allows for reproducible extraction of an almost uniform key from a noisy non-uniform source. We analyze a fuzzy extractor scheme that uses universal hash functions for both information reconciliation and privacy amplification. This is a useful scheme when the number of error patterns likely to occur is limited, regardless of the error probabilities. We derive a sharp bound on the uniformity of the extracted key, making use of the concatenation property of universal hash functions and a recent tight formulation of the leftover hash lemma.
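The privacy-amplification half of such a scheme can be sketched with any 2-universal family; a classic choice is $h_{a,b}(x) = ((ax + b) \bmod p) \bmod 2^m$. The sketch below compresses a longer reconciled secret into a short, nearly uniform key. Toy-sized parameters and the specific family are our assumptions for illustration, not necessarily the paper's choices:

```python
import secrets

# A Mersenne prime comfortably larger than the source values.
P = (1 << 61) - 1

def universal_hash(a: int, b: int, x: int, out_bits: int) -> int:
    """2-universal family: h_{a,b}(x) = ((a*x + b) mod P) mod 2^out_bits."""
    return ((a * x + b) % P) % (1 << out_bits)

def extract_key(reconciled_secret: int, out_bits: int = 16):
    """Privacy amplification: compress the secret with a freshly seeded hash.

    The seed (a, b) may be public; by the leftover hash lemma, only the
    min-entropy of the secret protects the extracted key."""
    a = secrets.randbelow(P - 1) + 1
    b = secrets.randbelow(P)
    return (a, b), universal_hash(a, b, reconciled_secret, out_bits)

seed, key = extract_key(0x1234_5678_9ABC)
print("public seed:", seed, "extracted key:", key)
```

In the full scheme the same universal-hash machinery also handles information reconciliation; the concatenation property mentioned in the abstract is what lets both steps compose into a single bound on the key's uniformity.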
Last updated:  2009-05-18
Nofish - A new stream cipher
Marius Oliver Gheorghita
The proposed algorithm is a synchronous stream cipher, more precisely a binary additive stream cipher, because it uses the XOR function to encrypt the plaintext. The design is based on the HENKOS stream cipher (http://eprint.iacr.org/2004/080.pdf): the functions used in the internal state are kept, while the initialization and key-mixing parts have been modified to address its revealed weaknesses. This stream cipher uses a 64-byte (512-bit) secret key and no initialization vector. Nofish is free to use for any non-commercial purposes, and the reference source code can be found in the appendix.
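A binary additive stream cipher in this sense simply XORs the plaintext with a keystream derived from the secret key, so decryption is the same operation as encryption. A minimal generic sketch; the keystream generator below is a stand-in PRG of our own (iterated SHA-256 in counter mode), not the Nofish or HENKOS state update:

```python
import hashlib

def keystream(key: bytes):
    """Toy keystream generator: SHA-256 in counter mode.
    A stand-in PRG, NOT the Nofish internal state machine."""
    counter = 0
    while True:
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        yield from block
        counter += 1

def xor_crypt(key: bytes, data: bytes) -> bytes:
    """Encryption and decryption are the same XOR with the keystream."""
    return bytes(b ^ k for b, k in zip(data, keystream(key)))

key = b"\x00" * 64          # Nofish uses a 64-byte (512-bit) secret key
ct = xor_crypt(key, b"attack at dawn")
assert xor_crypt(key, ct) == b"attack at dawn"
print(ct.hex())
```

The absence of an initialization vector, as in Nofish, means a key must never be reused across messages, since XORing two ciphertexts under the same keystream cancels it out.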
Last updated:  2009-06-14
Realizing Hash-and-Sign Signatures under Standard Assumptions
Susan Hohenberger, Brent Waters
Currently, there are relatively few instances of ``hash-and-sign'' signatures in the standard model. Moreover, most current instances rely on strong and less studied assumptions such as the Strong RSA and q-Strong Diffie-Hellman assumptions. In this paper, we present a new approach for realizing hash-and-sign signatures in the standard model. In our approach, a signer associates each signature with an index i that represents how many signatures that signer has issued up to that point. Then, to make use of this association, we create simple and efficient techniques that restrict an adversary which makes q signature requests to forge on an index no greater than 2q. Finally, we develop methods for dealing with this restricted adversary. Our approach requires that a signer maintains a small amount of state --- a counter of the number of signatures issued. We achieve two new realizations for hash-and-sign signatures respectively based on the RSA assumption and the Computational Diffie-Hellman assumption in bilinear groups.
Last updated:  2009-02-22
Security of Verifiably Encrypted Signatures
Markus Rückert, Dominique Schröder
In a verifiably encrypted signature scheme, signers encrypt their signature under the public key of a trusted third party and prove that they did so correctly. The security properties are unforgeability and opacity. Unforgeability states that a malicious signer should not be able to forge verifiably encrypted signatures, and opacity prevents extraction from an encrypted signature. This paper proposes two novel fundamental requirements for verifiably encrypted signatures, called \emph{extractability} and \emph{abuse-freeness}, and analyzes their effects on the security model of Boneh et al. Extractability ensures that the trusted third party is always able to extract a valid signature from a valid verifiably encrypted signature, and abuse-freeness guarantees that a malicious signer, who cooperates with the trusted party, is not able to forge a verifiably encrypted signature. We further show that both properties are not covered by the model of Boneh et al., introduced at Eurocrypt 2003.
Last updated:  2009-06-16
Collision Attacks on NaSHA-384/512
Zhimin Li, Licheng Wang, Daofeng Li, Yixian Yang
NaSHA is a family of hash functions submitted by Markovski and Mileva as a SHA-3 candidate. In this paper, we present a collision attack on the hash function NaSHA for the output sizes 384-bit and 512-bit. This attack is based on a weakness in the generation of the state words and the fact that the quasigroup operation used in the compression function is determined by only part of the state words. Its time complexity is about $2^{128}$ with negligible memory, and its success probability is more than $(1- \frac{2}{{2^{64} - 1}})^2$ ($\gg \frac{1}{2}$). This is currently by far the best known cryptanalysis result on this SHA-3 candidate.
Last updated:  2009-05-26
Short Redactable Signatures Using Random Trees
Ee-Chien Chang, Chee Liang Lim, Jia Xu
A redactable signature scheme for a string of objects supports verification even if multiple substrings are removed from the original string. It is important that the redacted string and its signature do not reveal anything about the content of the removed substrings. Existing schemes completely or partially leak a piece of information: the lengths of the removed substrings. Such length information could be crucial for many applications, especially when the removed substring has low entropy. We propose a scheme that can hide the length. Our scheme consists of two components. The first component $\mathcal{H}$, which is a ``collision resistant'' hash, maps a string to an unordered set, whereby existing schemes on unordered sets can then be applied. However, a sequence of random numbers has to be explicitly stored, producing a large signature of at least $mk$ bits, where $m$ is the number of objects and $k$ is the size of a key sufficiently large for cryptographic operations. The second component uses the RGGM tree, a variant of the GGM tree, to generate the pseudorandom numbers from a short seed, expected to be of size $O(k+ tk\log m)$ where $t$ is the number of removed substrings. Unlike the GGM tree, the structure of the proposed RGGM tree is random. By an intriguing statistical property of the random tree, the redacted tree does not reveal the lengths of the substrings removed. The hash function $\mathcal{H}$ and the RGGM tree may be of independent interest.
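The GGM construction that the RGGM tree varies expands a short seed into many pseudorandom leaves with a length-doubling PRG, so revealing an internal node reveals exactly the leaves in the subtree below it. A minimal sketch of plain GGM, using SHA-256 as a stand-in PRG (an assumption of ours; the randomized-structure RGGM variant is the paper's contribution and is not reproduced here):

```python
import hashlib

def prg(seed: bytes):
    """Length-doubling PRG stand-in: 32-byte seed -> two 32-byte children."""
    return (hashlib.sha256(seed + b"L").digest(),
            hashlib.sha256(seed + b"R").digest())

def ggm_leaf(seed: bytes, index: int, depth: int) -> bytes:
    """Walk from the root down to leaf `index` of a depth-`depth` GGM tree,
    choosing the left or right PRG output at each level."""
    node = seed
    for level in reversed(range(depth)):
        left, right = prg(node)
        node = right if (index >> level) & 1 else left
    return node

root = b"\x00" * 32
leaves = [ggm_leaf(root, i, depth=3) for i in range(8)]
assert len(set(leaves)) == 8        # all 8 leaves are distinct
```

Handing out the internal node for a path prefix lets the receiver recompute exactly that subtree's leaves, which is how the scheme avoids storing every random number explicitly.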
Last updated:  2009-06-10
On Second-Order Fault Analysis Resistance for CRT-RSA Implementations
Emmanuelle Dottax, Christophe Giraud, Matthieu Rivain, Yannick Sierra
Since their publication in 1996, Fault Attacks have been widely studied from both theoretical and practical points of view, and most cryptographic systems have been shown to be vulnerable to this kind of attack. Until recently, most of the theoretical fault attacks and countermeasures used a fault model which assumes that the attacker is able to disturb the execution of a cryptographic algorithm only once. However, this approach seems too restrictive since the publication in 2007 of the successful experiment of an attack based on the injection of two faults, namely a second-order fault attack. Amongst the few papers dealing with second-order fault analysis, three countermeasures were published at WISTP'07 and FDTC'07 to protect the RSA cryptosystem using the CRT mode. In this paper, we analyse the security of these countermeasures with respect to the second-order fault model considered by their authors. We show that these countermeasures are not intrinsically resistant and we propose a new method allowing us to implement a CRT-RSA that resists this kind of second-order fault attack.
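The baseline (first-order) threat these countermeasures extend to second order is the classic Bellcore attack: a single faulty CRT half lets the attacker factor $N$ with one gcd. A toy demonstration with deliberately tiny parameters of our own (illustrating the underlying attack, not the paper's countermeasures):

```python
from math import gcd

# Toy CRT-RSA key (insecurely small, for illustration only).
p, q = 1009, 1013
N = p * q
e = 65537
phi = (p - 1) * (q - 1)
d = pow(e, -1, phi)

def sign_crt(m: int, fault_in_p: bool = False) -> int:
    """CRT signature; optionally corrupt the mod-p half to model a fault."""
    sp = pow(m, d % (p - 1), p)
    sq = pow(m, d % (q - 1), q)
    if fault_in_p:
        sp ^= 1                       # a single injected fault
    # Garner recombination of the two halves.
    h = (pow(q, -1, p) * (sp - sq)) % p
    return sq + q * h

m = 42
s_faulty = sign_crt(m, fault_in_p=True)
# s_faulty^e = m still holds mod q but not mod p, so one gcd factors N.
recovered = gcd((pow(s_faulty, e, N) - m) % N, N)
assert recovered == q
print("recovered factor:", recovered)
```

First-order countermeasures verify the signature (or internal redundancy) before output; the paper's point is that a second, well-placed fault can disable exactly that check, which is what a second-order-resistant implementation must prevent.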
Last updated:  2009-01-13
Polynomial Runtime and Composability
Dennis Hofheinz, Dominique Unruh, Jörn Müller-Quade
To prove security of a multi-party cryptographic protocol, one often reduces attacks on the protocol to attacks on a suitable computational problem. Thus, if the computational problem is hard, then the protocol is secure. But to allow for a security reduction, the protocol itself and the attack on the protocol must be efficient, i.e., polynomial-time. Of course, the obvious way to enforce an overall polynomial runtime of the protocol is to require each individual protocol machine and adversarial entity to be polynomial-time. However, as the specific case of zero-knowledge protocols demonstrates, an a priori polynomial-time bound on all entities may not be an optimal choice because the running time of some machines needs to depend on that of others. As we want to be able to model arbitrary protocol tasks, we work in the Universal Composability framework (UC). This framework additionally provides strong composability guarantees. We will point out that in the UC setting, finding a useful notion of polynomial-time for the analysis of general protocols is a highly non-trivial task. Our goal in this work is to find a good and useful definition of polynomial-time for multi-party protocols in the UC setting that matches the intuition of what is feasible. A good definition should have the following properties: * Flexibility: All "intuitively feasible" protocols and protocol tasks should be considered polynomial-time. * Soundness: All "intuitively feasible" attacks (i.e., adversaries) should be considered polynomial-time. * Completeness: Only "intuitively feasible" attacks should be considered polynomial-time. In particular, this implies that the security of protocols can be reduced to computational hardness assumptions. * Composability: The induced security notion should support secure (universal) composition of protocols. 
* Simplicity: The notion should be easy to formulate, and for all practical cases, it should be easy to decide whether a protocol or attack runs in polynomial time. The problem of finding a good definition of polynomial time in the UC framework has been considered in a number of works, but no definition satisfying the five above criteria had been found so far. This seemingly simple problem is surprisingly elusive and it is hard to come up with a definition that does not involve many technical artifacts. In this contribution, we give a definition of polynomial time for cryptographic protocols in the UC model, called reactively polynomial, that satisfies all five properties. Our notion is simple and easy to verify. We argue for its flexibility, completeness and soundness with practical examples that are problematic with previous approaches. We give a very general composition theorem for reactively polynomial protocols. The theorem states that arbitrarily many instances of a secure protocol can be used in any larger protocol without sacrificing security. Our proof is technically different from and substantially more involved than proofs for previous protocol composition theorems (for previous definitions of polynomial runtime). We believe that it is precisely this additional proof complexity, which appears only once and for all in the proof of the composition theorem, that makes a useful definition as simple as ours possible.
Last updated:  2009-01-13
Correctness of Li Generalization of RSA Cryptosystem
Roman Popovych
For a given N=pq with p and q distinct odd primes and a natural number m, Li introduced a public-key cryptosystem. In the case m=1 the system is just the famous RSA system. We answer Li's question about the correctness of the system.
Last updated:  2009-01-13
Comparing With RSA
Julien Cathalo, David Naccache, Jean-Jacques Quisquater
A multi-set (MS) is a set where an element can occur more than once. MS hash functions (MSHFs) map MSs of arbitrary cardinality to fixed-length strings. This paper introduces a new RSA-based MSHF. The new function is efficient and produces small hashes. We prove that the proposed MSHF is collision-resistant under the assumption of unforgeability of deterministic RSA signatures. In many practical applications, programmers need to compare two (unordered) sets of integers. A trivial solution consists in sorting both sets ($\mathcal{O}(n \log n)$) and comparing them linearly. We show how MS hash functions can be turned into a linear-time, constant-space integer set equality test.
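The linear-time, constant-space equality test works because an MS hash combines per-element hashes with a commutative, associative operation, making the result independent of ordering. A toy sketch using multiplication modulo a prime as the combiner; this stand-in (our assumption) only illustrates the interface, not the paper's RSA-based, collision-resistant construction:

```python
import hashlib

P = (1 << 127) - 1                   # a Mersenne prime; combine mod P

def elem_hash(x: int) -> int:
    """Hash one element into Z_P^* (toy stand-in for the RSA-based map)."""
    d = hashlib.sha256(x.to_bytes(16, "big", signed=True)).digest()
    return (int.from_bytes(d, "big") % (P - 1)) + 1

def ms_hash(multiset) -> int:
    """Order-independent digest: product of element hashes mod P.
    One pass, O(1) extra space."""
    acc = 1
    for x in multiset:
        acc = (acc * elem_hash(x)) % P
    return acc

a = [3, 1, 4, 1, 5]
b = [5, 1, 4, 3, 1]                  # same multiset, different order
c = [3, 1, 4, 1, 1]                  # different multiset
assert ms_hash(a) == ms_hash(b)
assert ms_hash(a) != ms_hash(c)
```

Two streams of integers can thus be compared in a single pass each, versus the $\mathcal{O}(n \log n)$ sort-and-compare baseline mentioned above.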
Last updated:  2009-01-13
Applying Time-Memory-Data Trade-Off to Meet-in-the-Middle Attack
Jiali Choy, Khoongming Khoo, Chuan-Wen Loe
In this paper, we present several new attacks on multiple encryption block ciphers based on the meet-in-the-middle attack. In the first attack (GDD-MTM), we guess a certain number of secret key bits and apply the meet-in-the-middle attack on multiple ciphertexts. The second attack (TMTO-MTM) is derived from applying the time-memory trade-off attack to the meet-in-the-middle attack on a single ciphertext. We may also use rainbow chains in the table construction to get the Rainbow-MTM attack. The fourth attack (BS-MTM) is defined by combining the time-memory-data trade-off attack proposed by Biryukov and Shamir with the meet-in-the-middle attack on multiple ciphertexts. Lastly, for the final attack (TMD-MTM), we apply the TMTO-Data curve, which demonstrates the general methodology for multiple data trade-offs, to the meet-in-the-middle attack. GDD-MTM requires no pre-processing, but the attack complexity is high while the memory requirement is low. In the last four attacks, pre-processing is required but we can achieve lower (faster) online attack complexity at the expense of more memory in comparison with the GDD-MTM attack. To illustrate how the attacks may be used, we applied them in the cryptanalysis of triple DES. In particular, for the BS-MTM attack, we managed to achieve pre-computation and data complexity which are much lower while maintaining almost the same memory and online attack complexity, as compared to a time-memory-data trade-off attack by Biryukov et al. at SAC 2005. In all, our new methodologies offer viable alternatives and provide more flexibility in achieving time-memory-data trade-offs.
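The core meet-in-the-middle idea that all these variants build on: against double encryption $E_{k_2}(E_{k_1}(m))$, tabulate $E_{k_1}(m)$ for every $k_1$, then decrypt the ciphertext under every $k_2$ and look for a match, costing roughly $2^{k+1}$ cipher operations instead of $2^{2k}$. A toy sketch with an 8-bit keyed permutation of our own (a stand-in, not DES):

```python
def toy_encrypt(key: int, block: int) -> int:
    """Stand-in 8-bit keyed permutation (NOT a real cipher)."""
    x = block
    for _ in range(4):
        x = (x + key) & 0xFF
        x = ((x << 3) | (x >> 5)) & 0xFF    # rotate left by 3
        x ^= key
    return x

def toy_decrypt(key: int, block: int) -> int:
    """Inverse of toy_encrypt: undo each round in reverse order."""
    x = block
    for _ in range(4):
        x ^= key
        x = ((x >> 3) | (x << 5)) & 0xFF    # rotate right by 3
        x = (x - key) & 0xFF
    return x

def mitm(plain: int, cipher: int):
    """Meet-in-the-middle on double encryption: ~2*2^8 work, 2^8 memory."""
    forward = {}
    for k1 in range(256):
        forward.setdefault(toy_encrypt(k1, plain), []).append(k1)
    candidates = []
    for k2 in range(256):
        mid = toy_decrypt(k2, cipher)
        for k1 in forward.get(mid, []):
            candidates.append((k1, k2))
    return candidates

k1, k2 = 0x3A, 0xC5
pt = 0x7E
ct = toy_encrypt(k2, toy_encrypt(k1, pt))
assert (k1, k2) in mitm(pt, ct)
```

With a single known pair the candidate list contains false positives; additional plaintext/ciphertext pairs filter them, which is exactly the multiple-ciphertext setting the GDD-MTM and BS-MTM attacks exploit.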
Last updated:  2009-01-13
Communication-Efficient Private Protocols for Longest Common Subsequence
Matthew Franklin, Mark Gondree, Payman Mohassel
We design communication efficient two-party and multi-party protocols for the longest common subsequence (LCS) and related problems. Our protocols achieve privacy with respect to passive adversaries, under reasonable cryptographic assumptions. We benefit from the somewhat surprising interplay of an efficient block-retrieval PIR (Gentry-Ramzan, ICALP 2005) with the classic “four Russians” algorithmic design. This result is the first improvement to the communication complexity for this application over generic results (such as Yao’s garbled circuit protocol) and, as such, is interesting as a contribution to the theory of communication efficiency for secure two-party and multiparty applications.
Last updated:  2009-01-13
Huge 2ndpreimages and collisions of khichidi-1
prasanth Kumar Thandra, S. A. V. Satya Murty
Khichidi-1 is a SHA-3 contestant. In this paper we exploit a weakness in the design of the algorithm, which allows us to propose two kinds of attacks: 1) a second-preimage attack, and 2) a collision attack. Our attacks are applicable to all versions (224, 256, 384 and 512) and are potentially strong.
Last updated:  2009-01-13
Anonymous signature scheme
Chunbo Ma, Jun Ao
In order to hide the identity of a signer, an anonymous signature scheme is presented in this paper. In this scheme, a signer located in a specified group produces a signature on behalf of the group. The recipient can verify that the signature is valid and comes from the specified group, while tracing the signature to its source is impossible. The proposed anonymous signature is similar to a ring signature in some respects; for example, there is no manager and no revocation mechanism against the signer's anonymity. The main difference between these two kinds of signatures is that the group in a ring signature is adaptively constructed by the signer, while the group in our scheme is fixed.
Last updated:  2009-04-01
Fast elliptic-curve cryptography on the Cell Broadband Engine
Neil Costigan, Peter Schwabe
This paper is the first to investigate the power of the Cell Broadband Engine for state-of-the-art public-key cryptography. We present a high-speed implementation of elliptic-curve Diffie-Hellman (ECDH) key exchange for this processor, which needs 697080 cycles on one Synergistic Processor Unit for a scalar multiplication on a 255-bit elliptic curve, including the costs for key verification and key compression. This cycle count is independent of inputs, therefore protecting against timing attacks. This speed relies on a new representation of elements of the underlying finite field suited for the unconventional instruction set of this architecture. Furthermore we demonstrate that an implementation based on the multi-precision integer arithmetic functions provided by IBM's multi-precision math (MPM) library would take at least 2227040 cycles. Comparison with implementations of the same function for other architectures shows that the Cell Broadband Engine is competitive in terms of cost-performance ratio to other recent processors such as the Intel Core 2 for public-key cryptography. Specifically, the state-of-the-art Galbraith-Lin-Scott ECDH software performs 27370 scalar multiplications per second using all four cores of a 2.5GHz Intel Core 2 Quad Q9300 inside a $296 computer, while the new software reported in this paper performs 27474 scalar multiplications per second on a Playstation 3 that costs just $221. Both of these speed reports are for high-security 256-bit elliptic-curve cryptography.
Last updated:  2011-04-04
Cube Attacks on Trivium
S S Bedi, N Rajesh Pillai
This paper discusses the Cube attacks proposed by Dinur and Shamir applied to Trivium. Independent verification of the equations given in Dinur and Shamir's paper was carried out. Experimentation showed that the precomputed equations were not general. They are correct when applied to the class of IVs for which they were computed - where IV bits at locations other than those corresponding to the cube are fixed at 0. When these IV bits are fixed at some other values, the relations do not hold. The probable cause for this is given, and an extra step in the method for equation generation is suggested to take care of such cases.
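The effect reported here can be reproduced on a toy Boolean polynomial: summing the output over all assignments of the cube IV bits, with every other IV bit fixed, yields the superpoly, and the resulting key relation can depend on how the non-cube bits were fixed. A sketch (toy function of our own, not Trivium):

```python
from itertools import product

def f(key, iv):
    """Toy output bit: k0*v0*v1 XOR k1*v0*v1*v2 XOR k1 XOR v2."""
    k0, k1 = key
    v0, v1, v2 = iv
    return (k0 & v0 & v1) ^ (k1 & v0 & v1 & v2) ^ k1 ^ v2

def cube_sum(key, cube, fixed_iv):
    """XOR f over all assignments of the cube bits; other IV bits fixed."""
    total = 0
    for bits in product((0, 1), repeat=len(cube)):
        iv = list(fixed_iv)
        for pos, b in zip(cube, bits):
            iv[pos] = b
        total ^= f(key, tuple(iv))
    return total

# Cube {v0, v1}: the superpoly depends on how v2 is fixed.
# With v2 = 0 the sum is k0; with v2 = 1 it becomes k0 XOR k1.
for key in product((0, 1), repeat=2):
    s0 = cube_sum(key, cube=(0, 1), fixed_iv=[0, 0, 0])
    s1 = cube_sum(key, cube=(0, 1), fixed_iv=[0, 0, 1])
    print(key, "v2=0 ->", s0, " v2=1 ->", s1)
```

An equation precomputed with the non-cube bit at 0 (superpoly $k_0$) silently fails when that bit is 1, which mirrors the paper's observation about Dinur and Shamir's precomputed Trivium equations.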
Last updated:  2009-01-12
Key Predistribution Techniques for Grid-Based Wireless Sensor Networks
Simon R. Blackburn, Tuvi Etzion, Keith M. Martin, Maura B. Paterson
We consider symmetric key predistribution in grid-based wireless sensor networks. Networks consisting of wireless sensor nodes arranged in a grid pattern have many useful applications, including environmental monitoring and agribusiness. The structured physical distribution of nodes in such networks facilitates efficient distribution of keys to the nodes prior to deployment. It has been shown that combinatorial objects known as distinct-difference configurations (DDCs) can be used to construct effective key predistribution schemes (KPSs) for grid-based networks. In this paper we observe that the regular topology of a grid-based network enables an efficient trade-off between the connectivity, resilience and storage requirements of a KPS, and we discuss the balancing of these properties to suit application requirements. We then show how recent results on the construction of DDCs can be used to produce KPSs that achieve the desired balance, and we provide explicit algorithms for the instantiation of these schemes.
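The DDC idea can be made concrete: fix a set $D$ of dots in the grid with all pairwise differences distinct; the node at grid position $v$ stores the keys labelled by $\{v + d : d \in D\}$, so two nodes share a key exactly when their displacement is a difference of two dots. A toy sketch; the three-dot configuration is our own small example, not one from the paper:

```python
# Dots of a small distinct-difference configuration (DDC) in Z^2.
DOTS = [(0, 0), (1, 2), (3, 1)]

def is_ddc(dots):
    """Check that all differences between ordered pairs of distinct dots
    are themselves distinct."""
    diffs = [(a[0] - b[0], a[1] - b[1]) for a in dots for b in dots if a != b]
    return len(diffs) == len(set(diffs))

def key_ids(node):
    """Key labels stored by the node at grid position `node`."""
    return {(node[0] + dx, node[1] + dy) for dx, dy in DOTS}

def shared_keys(u, v):
    """Two nodes share a key iff v - u is a difference of two dots."""
    return key_ids(u) & key_ids(v)

assert is_ddc(DOTS)
u, v = (0, 0), (1, 2)        # displacement (1,2) = (1,2) - (0,0)
print("shared key labels:", shared_keys(u, v))
assert len(shared_keys(u, v)) == 1       # exactly one common key
assert not shared_keys((0, 0), (5, 5))   # (5,5) is not a dot difference
```

The DDC property guarantees at most one shared key per node pair, and scaling the number and placement of dots is precisely the connectivity/resilience/storage trade-off discussed above.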
Last updated:  2009-01-12
Comparison-Based Key Exchange and the Security of the Numeric Comparison Mode in Bluetooth v2.1
Yehuda Lindell
In this paper we study key exchange protocols in a model where the key exchange takes place between devices with limited displays that can be compared by a human user. If the devices display the same value then the human user is convinced that the key exchange terminated successfully and securely, and if they do not then the user knows that it came under attack. The main result of this paper is a rigorous proof that the numeric comparison mode for device pairing in Bluetooth version 2.1 is secure, under appropriate assumptions regarding the cryptographic functions used. Our proof is in the standard model and in particular does not model any of the functions as random oracles. In order to prove our main result, we present formal definitions for key exchange in this model and show our definition to be equivalent to a simpler definition. This is a useful result of independent interest that facilitates an easier security analysis of protocols in this model.
Last updated:  2009-01-15
Avoid Mask Re-use in Masked Galois Multipliers
D. Canright
This work examines a weakness in re-using masks for masked Galois inversion, specifically in the masked Galois multipliers. Here we show that the mask re-use scheme included in our work[1] cannot result in "perfect masking," regardless of the order in which the terms are added; explicit distributions are derived for each step. The same problem requires new masks in the subfield calculations, not included in [1]. Hence, for resistance to first-order differential attacks, the masked S-box must use distinct, independent masks for input and output bytes of the masked inverter, and new masks in the subfields, resulting in a larger size. Ref[1]: Canright, D., Batina, L.: A Very Compact "Perfectly Masked" S-Box for AES. In ACNS2008, LNCS 5037, Springer-Verlag (2008), 446-459
Last updated:  2009-01-15
A Very Compact "Perfectly Masked" S-Box for AES (corrected)
D. Canright, Lejla Batina
Implementations of the Advanced Encryption Standard (AES), including hardware applications with limited resources (e.g., smart cards), may be vulnerable to "side-channel attacks" such as differential power analysis. One countermeasure against such attacks is adding a random mask to the data; this randomizes the statistics of the calculation at the cost of computing "mask corrections." The single nonlinear step in each AES round is the "S-box" (involving a Galois inversion), which incurs the majority of the cost for mask corrections. Oswald et al. showed how the "tower field" representation allows maintaining an additive mask throughout the Galois inverse calculation. This work applies a similar masking strategy to the most compact (unmasked) S-box to date. The result is the most compact masked S-box so far, with "perfect masking" (by the definition of Blomer) giving suitable implementations immunity to first-order differential side-channel attacks.
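The mask-correction cost described here comes from the cross terms of a masked Galois multiplication: for additively masked shares $a \oplus m_a$ and $b \oplus m_b$, the product $(a \oplus m_a) \otimes (b \oplus m_b)$ equals $ab \oplus a m_b \oplus m_a b \oplus m_a m_b$, so three correction products are needed to produce a freshly masked result. A sketch in the AES field GF($2^8$); this is generic additive masking for illustration, not the paper's compact tower-field circuit:

```python
import secrets

def gmul(a: int, b: int) -> int:
    """Multiplication in GF(2^8) with the AES polynomial x^8+x^4+x^3+x+1."""
    r = 0
    for _ in range(8):
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0x100:
            a ^= 0x11B
    return r

def masked_gmul(am: int, bm: int, ma: int, mb: int, mc: int) -> int:
    """Given masked inputs am = a^ma, bm = b^mb, return (a*b)^mc.

    The corrections are folded in one at a time; the order of these
    XORs matters for keeping every intermediate value masked."""
    r = gmul(am, bm)          # = a*b ^ a*mb ^ ma*b ^ ma*mb
    r ^= mc                   # apply the fresh output mask first
    r ^= gmul(am, mb)         # cancels a*mb (and toggles ma*mb)
    r ^= gmul(ma, bm)         # cancels ma*b (and toggles ma*mb back)
    r ^= gmul(ma, mb)         # cancels the remaining ma*mb
    return r                  # = (a*b) ^ mc

a, b = 0x57, 0x83
ma, mb, mc = (secrets.randbelow(256) for _ in range(3))
out = masked_gmul(a ^ ma, b ^ mb, ma, mb, mc)
assert out ^ mc == gmul(a, b)
```

Note that `masked_gmul` deliberately uses three distinct, independent masks; re-using one mask across inputs is exactly the weakness analysed in the companion note above on mask re-use.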
Last updated:  2010-03-12
Optimal Multicast Group Communication
Zhibin Zhou, Dijiang Huang
Many IP multicast based applications, such as Pay-TV and multiplayer games, require controlling the group memberships of senders and receivers. One common solution is to encrypt the data with a session key shared with all authorized senders/receivers. To efficiently update the session key in the event of member removal, many rooted-tree based group key distribution schemes have been proposed. However, most of the existing rooted-tree based schemes are not optimal. In other words, given the O(log N) storage overhead, the communication overhead is not minimized. On the other hand, although the Flat Table scheme achieves optimality, it has largely been dismissed due to its vulnerability to collusion attacks. In this paper, we propose a key distribution scheme -- EGK -- that attains the same optimality as Flat Table without collusion vulnerability. EGK also supports dynamic subgroup communication initiated by any group member (imagine a virtual chat room in the multicast group). Additionally, EGK provides constant message size and requires O(log N) storage overhead at the group controller, which makes EGK suitable for applications containing a large number of multicasting group members. Moreover, adding members in EGK requires just one multicasting message. EGK is the first work with such features and outperforms all existing schemes.
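In a rooted-tree scheme of the kind discussed, each member holds the keys on its leaf-to-root path (O(log N) storage), and removing a member requires replacing only the keys on that path, each new key encrypted under the sibling subtree's key, for O(log N) rekey ciphertexts. A minimal sketch of the path bookkeeping (plain logical-key-hierarchy accounting; EGK itself is not reproduced here):

```python
def path_to_root(leaf: int, num_leaves: int):
    """Node ids on the path from a leaf to the root of a complete binary
    tree stored heap-style (root = 1, leaves occupy num_leaves..2*num_leaves-1)."""
    node = num_leaves + leaf
    path = []
    while node >= 1:
        path.append(node)
        node //= 2
    return path

def rekey_targets(removed_leaf: int, num_leaves: int):
    """After a removal, every key on the removed member's path must be
    replaced; each replacement is distributed encrypted under the sibling
    subtree's key, giving O(log N) ciphertexts."""
    path = path_to_root(removed_leaf, num_leaves)
    siblings = [n ^ 1 for n in path[:-1]]   # sibling of each non-root node
    return path, siblings

path, siblings = rekey_targets(5, num_leaves=8)
print("compromised keys:", path)         # leaf key up to the group key
print("encrypt new keys under:", siblings)
```

The optimality question above is exactly whether this O(log N) rekey message can be shrunk further for the same O(log N) storage, which Flat Table does and EGK matches without the collusion weakness.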
Last updated:  2011-03-02
Hybrid-Secure MPC: Trading Information-Theoretic Robustness for Computational Privacy
Christoph Lucas, Dominik Raub, Ueli Maurer
Most protocols for distributed, fault-tolerant computation, or multi-party computation (MPC), provide security guarantees in an all-or-nothing fashion: If the number of corrupted parties is below a certain threshold, these protocols provide all specified security guarantees. Beyond this threshold, they provide no security guarantees at all. Furthermore, most previous protocols achieve either information-theoretic (IT) security, in which case this threshold is low, or they achieve only computational security, in which case this threshold is high. In contrast, a hybrid-secure protocol provides different security guarantees depending on the set of corrupted parties and the computational power of the adversary, without being aware of the actual adversarial setting. Thus, hybrid-secure MPC protocols allow for graceful degradation of security. We present a hybrid-secure MPC protocol that provides an optimal trade-off between IT robustness and computational privacy: For any robustness parameter r<n/2, we obtain one MPC protocol that is simultaneously IT secure with robustness for up to t<=r actively corrupted parties, IT secure with fairness (no robustness) for up to t<n/2, and computationally secure with agreement on abort (privacy and correctness only) for up to t<n-r. Our construction is secure in the universal composability (UC) framework (based on a network of secure channels, a broadcast channel, and a common reference string). It achieves the bound on the trade-off between robustness and privacy shown by Ishai et al. [CRYPTO'06] and Katz [STOC'07], the bound on fairness shown by Cleve [STOC'86], and the bound on IT security shown by Kilian [STOC'00], and is the first protocol that achieves all these bounds simultaneously. For example, in the special case r=0 our protocol simultaneously achieves non-robust MPC for up to t<n corrupted parties in the computational setting (like Goldreich et al. [STOC'87]), while providing security with fairness in the IT setting for up to t<n/2 corrupted parties (like Rabin and Ben-Or [STOC'89], though without robustness).
Last updated:  2009-01-05
A note on Agrawal conjecture
Roman Popovych
We prove that Lenstra's proposition, suggesting the existence of many counterexamples to the Agrawal conjecture, holds in a more general case. At the same time, we obtain a strictly ascending chain of subgroups of the group (Zp[X]/(Cr(X)))* and state the modified conjecture that the set {X-1, X+2} generates a large enough subgroup of this group.
Last updated:  2009-01-04
Homomorphic Trapdoor Commitments to Group Elements
Jens Groth
We present a homomorphic trapdoor commitment to group elements. In contrast, previous homomorphic trapdoor commitment schemes only allow the messages to be exponents. Our commitment scheme is length-reducing: we can make a short commitment to many group elements at once. It is perfectly hiding and computationally binding. The construction is based on groups with a bilinear map, and the binding property follows from the simultaneous triple pairing assumption. While the simultaneous triple pairing assumption is new, we demonstrate that it is implied by the well-known decision linear assumption.
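To make the contrast concrete, here is the classic exponent-message baseline the abstract refers to: a Pedersen commitment, which is homomorphic and trapdoor but commits to exponents, not group elements. A toy sketch over the order-11 subgroup of Z_23^* (tiny parameters chosen by me purely for illustration):

```python
p, q = 23, 11          # p = 2q + 1; quadratic residues mod p form a group of order q
g, h = 4, 9            # two generators of that subgroup; log_g(h) is the trapdoor

def commit(m, r):
    """Commit to exponent m with randomness r: c = g^m * h^r mod p."""
    return (pow(g, m, p) * pow(h, r, p)) % p

# Homomorphic: the product of commitments commits to the sum of messages.
c1, c2 = commit(3, 5), commit(7, 2)
assert (c1 * c2) % p == commit((3 + 7) % q, (5 + 2) % q)

# Trapdoor: knowing t = log_g(h) lets the committer open c to any m'.
t = 8                                  # discrete log of h = 9 base g = 4, mod 23
m, r, m_new = 3, 5, 6
r_new = (r + (m - m_new) * pow(t, -1, q)) % q
assert commit(m, r) == commit(m_new, r_new)   # same commitment, different opening
```

Groth's scheme keeps these homomorphic and trapdoor properties while letting the committed messages themselves be group elements.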
Last updated:  2009-01-04
Huge Multicollisions and Multipreimages of Hash Functions BLENDER-n
Vlastimil Klima
In this paper we present a multicollision and multipreimage attack on the hash function Blender-n [1] for all output sizes n = 224, 256, 384 and 512. The complexity and memory requirements for finding 2^{2n} multipreimages (multicollisions) of Blender-n are roughly 10 times those of finding a collision for an n/2-bit random hash function. All previous attacks were based on the trick by Joux [2] using many messages. Our attacks are based on one message with several fixpoints. The state register has eight words. By properly choosing message words, we force half of the register to return to its original state. Then we find a collision in the rest with complexity 2^{n/4}. The collision creates a fixpoint in the sequence of states of the state register. We use 10 such fixpoints. Previously known attacks [4, 5] on Blender-n have complexity at least 2^{n/2}. Our 2^{2n}-multicollision and multipreimage attacks have complexity 10*2^{n/4}.
Last updated:  2009-01-04
Impossible Differential Cryptanalysis of Pelican, MT-MAC-AES and PC-MAC-AES
Wei Wang, Xiaoyun Wang, Guangwu Xu
In this paper, impossible differential cryptanalysis is extended to the MAC algorithms \textsc{Pelican}, MT-MAC and PC-MAC based on AES and 4-round AES. First, we collect message pairs that produce the inner near-collision with some specific differences by the birthday attack. Then the impossible differential attack on 4-round AES is implemented using a 3-round impossible differential property. For \textsc{Pelican}, our attack can recover the internal state, which is an equivalent subkey. For MT-MAC-AES, the attack turns out to be a subkey recovery attack directly. The data complexity of the two attacks is $2^{85.5}$ chosen messages, and the time complexity is about $2^{85.5}$ queries. For PC-MAC-AES, we can recover the 256-bit key with $2^{85.5}$ chosen messages and $2^{128}$ queries.
Last updated:  2009-01-26
On Stateless Schemes for Message Authentication Using Pseudorandom Functions
Palash Sarkar
We consider the construction and analysis of pseudorandom functions (PRF) for message authentication. Earlier work due to Bernstein and Vaudenay shows how to reduce the analysis of PRFs to some probability calculations. We revisit this result and use it to prove some general results on constructions which use a PRF with ``small'' domain to build a PRF with ``large'' domain. These results are then used to analyse several existing and new constructions. Important among them is a simplified proof of a bound on the PRF-property of the cipher block chaining (CBC) mode of operation of a block cipher for message authentication code (MAC). Several existing variants of CBC-MAC are analysed using our framework and new schemes are described. One of the new schemes improves upon the NIST standard CMAC scheme by reducing the number of block cipher invocations by one for messages which are longer than $n$ bits. Next, we consider parallelizable constructions. An improved version of the well-known PMAC scheme is described; the improvement consists of removing the requirement of a discrete log computation in the design stage of PMAC. An earlier parallel construction called the protected counter sum (PCS) had been proposed by Bernstein. PCS uses a keyed compressing function rather than a block cipher. We describe a variant of PMAC which works with a keyed compressing function and, compared to PCS, requires fewer invocations. All our constructions are in the stateless setting, i.e., a setting where the sender and the receiver do not share any state (apart from the common secret key). One aspect of our work is the simple and direct approach to the analysis of PRFs. In particular, we avoid the extensive and heavy machinery of the game-playing technique which is used in most papers on this topic.
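The basic CBC-MAC construction the abstract analyses chains a small-domain PRF over message blocks. A minimal Python sketch, with HMAC-SHA-256 truncated to one block standing in for the block-cipher call (my assumption, chosen so the sketch is self-contained), and without the length-handling fixes (such as CMAC's last-block masking) that the paper discusses:

```python
import hashlib
import hmac

BLOCK = 16  # block size in bytes

def prf(key, block):
    """Toy stand-in for one block-cipher call E_k(block)."""
    return hmac.new(key, block, hashlib.sha256).digest()[:BLOCK]

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_mac(key, msg):
    """Basic CBC-MAC: t_i = F_k(t_{i-1} XOR m_i), tag = t_L.

    Only secure for fixed-length (prefix-free) messages; variants such as
    CMAC patch this by masking the final block.
    """
    if len(msg) % BLOCK:
        raise ValueError("basic CBC-MAC handles full blocks only")
    state = bytes(BLOCK)  # zero IV
    for i in range(0, len(msg), BLOCK):
        state = prf(key, xor(state, msg[i:i + BLOCK]))
    return state

key = b"0123456789abcdef"
tag = cbc_mac(key, b"A" * 32)
assert tag == cbc_mac(key, b"A" * 32)              # deterministic
assert tag != cbc_mac(key, b"B" * 16 + b"A" * 16)  # every block affects the tag
```

The chaining makes this inherently sequential, which is what motivates the parallelizable alternatives (PMAC, PCS) discussed later in the abstract.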
Last updated:  2009-11-28
Separating two roles of hashing in one-way message authentication
Uncategorized
L. H. Nguyen, A. W. Roscoe
Show abstract
Uncategorized
We analyse two new and related families of one-way authentication protocols, where a party wants to authenticate its public information to another. In the first, the objective is to do without shared passwords or a PKI, making use of low-bandwidth empirical/authentic channels where messages cannot be faked or modified. The analysis of these leads to a new security principle, termed separation of security concerns, under which protocols should be designed to tackle one-shot attacks and combinatorial search separately. This also leads us to develop a new class of protocols for cases, such as PKI, where a relatively expensive signature mechanism exists. We demonstrate as part of this work that a popular protocol in the area, termed MANA I, neither optimises human effort nor offers as much security as had previously been believed. We offer a number of improved versions of MANA I that provide more security for half the empirical work, using a more general empirical channel.
Last updated:  2009-01-04
Thermocommunication
Julien Brouchier, Nora Dabbous, Tom Kean, Carol Marsh, David Naccache
Looking up -- you realize that one of the twelve light bulbs of your living room chandelier has to be replaced. You turn electricity off, move a table to the center of the room, put a chair on top of the table and, new bulb in hand, you climb up on top of the chair. As you reach the chandelier, you notice that... all bulbs look alike and that you have forgotten which bulb needed to be changed. Restart all over again? Luckily, an effortless creative solution exists. By just touching the light bulbs you can determine the cold one and replace it! Put differently, information about the device's malfunction leaked out via its temperature...
Last updated:  2009-01-04
A Hardware Analysis of Twisted Edwards Curves for an Elliptic Curve Cryptosystem
Brian Baldwin, Richard Moloney, Andrew Byrne, Gary McGuire, William P. Marnane
This paper presents implementation results of a reconfigurable elliptic curve processor defined over prime fields $GF(p)$. We use this processor to compare a new algorithm for point addition and point doubling operations on the twisted Edwards curves, against a current standard algorithm in use, namely the Double-and-Add. Secure power analysis versions of both algorithms are also examined and compared. The algorithms are implemented on an FPGA, and the speed, area and power performance of each are then evaluated for various modes of circuit operation using parallel processing. To the authors' knowledge, this work introduces the first documented FPGA implementation for computations on twisted Edwards curves over fields $GF(p)$.
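For readers unfamiliar with the curve form: the unified addition law on a twisted Edwards curve a*x^2 + y^2 = 1 + d*x^2*y^2 handles point doubling and point addition with the same formula, part of its appeal for side-channel-resistant hardware. A toy Python sketch over GF(13) (tiny field and curve chosen by me for illustration; these are the standard affine formulas, not the paper's FPGA datapath):

```python
p, a, d = 13, 1, 2     # curve: x^2 + y^2 = 1 + 2*x^2*y^2 over GF(13); d is a non-square

def on_curve(P):
    x, y = P
    return (a * x * x + y * y) % p == (1 + d * x * x * y * y) % p

def add(P, Q):
    """Unified twisted Edwards addition; (0, 1) is the identity.

    With d a non-square, the denominators 1 +/- d*x1*x2*y1*y2 never vanish,
    so the same formula covers doubling and generic addition.
    """
    x1, y1 = P
    x2, y2 = Q
    k = d * x1 * x2 * y1 * y2
    x3 = (x1 * y2 + y1 * x2) * pow(1 + k, -1, p) % p
    y3 = (y1 * y2 - a * x1 * x2) * pow(1 - k, -1, p) % p
    return (x3, y3)

P = (1, 0)                       # a point of order 4 on this curve
assert on_curve(P)
assert add(P, (0, 1)) == P       # identity works with the same formula
assert add(P, P) == (0, 12)      # and so does doubling: 2P = (0, -1)
assert on_curve(add((4, 4), P))  # sums of curve points stay on the curve
```

Having one formula for both operations removes the add/double distinction that simple power analysis exploits in Double-and-Add.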