Encryption Techniques for Secure Database Outsourcing
While the idea of database outsourcing is becoming increasingly popular, the associated security risks still prevent many potential users from deploying it. In particular, the need to give full access to one's data to a third party, the database service provider, remains a major obstacle. A seemingly obvious solution is to encrypt the data in such a way that the service provider retains the ability to perform relational operations on the encrypted database. In this paper we present a model and an encryption scheme that solve this problem at least partially. Our approach provides a provably secure solution to the database outsourcing problem that supports the operations exact select, Cartesian product, and projection, and that guarantees that the probability of erroneous answers is negligible. Our scheme is simple and practical, and it allows effective searches on encrypted tables: for a table consisting of n tuples, the scheme performs a search in O(n) steps.
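The exact-select functionality described above can be illustrated with a generic sketch (not the paper's actual scheme; the key and column values are placeholders of ours): if each cell is encrypted deterministically with a keyed PRF, the server can answer an exact select by scanning all n tuples and comparing ciphertexts, matching the O(n) search bound.

```python
import hmac, hashlib

KEY = b"\x00" * 32  # placeholder key; a real deployment would derive this securely

def enc_cell(value: str) -> bytes:
    # Deterministic "encryption" via a PRF so equality of plaintexts is
    # preserved on ciphertexts. A generic illustration only; it leaks
    # equality patterns and is not the scheme from the paper.
    return hmac.new(KEY, value.encode(), hashlib.sha256).digest()

table = [{"name": enc_cell(n)} for n in ["alice", "bob", "carol"]]

def exact_select(table, column, value):
    # The server scans all n tuples: O(n), matching the abstract's bound.
    token = enc_cell(value)
    return [row for row in table if row[column] == token]

assert len(exact_select(table, "name", "bob")) == 1
assert exact_select(table, "name", "dave") == []
```

The server never sees plaintexts, yet equality matching still works because identical plaintexts map to identical ciphertexts.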
New Constructions for UC Secure Computation using Tamper-proof Hardware
The Universal Composability framework was introduced by Canetti to study the security of protocols that are concurrently executed with other protocols in a network environment. Unfortunately, it was shown that in the so-called plain model, a large class of functionalities cannot be securely realized. These severe impossibility results motivated the study of other models involving some sort of setup assumption, where general positive results can be obtained. Until recently, all the setup assumptions which were proposed required some trusted third party (or parties).
Katz recently proposed using a \emph{physical setup} to avoid such trusted setup assumptions. In his model, the physical setup phase includes the parties exchanging tamper-proof hardware tokens implementing some functionality. The tamper-proof hardware is modeled so as to assume that the receiver of a token can do nothing more than observe its input/output characteristics. It is further assumed that the sender \emph{knows} the program code of the hardware token which it distributed. Based on the DDH assumption, Katz gave general positive results for universally composable multi-party computation tolerating any number of dishonest parties, making this model quite attractive.
In this paper, we present new constructions for UC secure computation using tamper-proof hardware (in a stronger model). Our results represent an improvement over the results of Katz in several directions, using substantially different techniques. Interestingly, our security proofs do not rely on being able to rewind the hardware tokens created by malicious parties. This means that we are able to relax the assumption that the parties \emph{know} the code of the hardware tokens which they distributed. This allows us to model real-life attacks where, for example, a party may simply pass on a token obtained from one party to another without actually knowing its functionality. Furthermore, our construction models the interaction with the tamper-resistant hardware as a simple request-reply protocol. Thus, we show that the hardware tokens used in our construction can be \emph{resettable}. In fact, it suffices to use tokens which are completely stateless (and thus cannot execute a multi-round protocol). Our protocol is also based on general assumptions (namely, enhanced trapdoor permutations).
Towards Key-Dependent Message Security in the Standard Model
Standard security notions for encryption schemes do not guarantee any security if the encrypted messages depend on the secret key. Yet it is exactly the stronger notion of security in the presence of key-dependent messages (KDM security) that is required in a number of applications: most prominently, KDM security plays an important role in analyzing cryptographic multi-party protocols in a formal calculus. But although often assumed, the mere existence of KDM secure schemes is an open problem. The only previously known construction was proven secure in the random oracle model.
We present symmetric encryption schemes that are KDM secure in the standard model (i.e., without random oracles). The price we pay is that we achieve only a relaxed (but still useful) notion of key-dependent message security. Our work answers (at least partially) an open problem posed by Black, Rogaway, and Shrimpton. More concretely, our contributions are as follows:
- We present a (stateless) symmetric encryption scheme that is information-theoretically secure in face of a bounded number and length of encryptions for which the messages depend in an arbitrary way on the secret key.
- We present a stateful symmetric encryption scheme that is computationally secure in face of an arbitrary number of encryptions for which the messages depend only on the respective current secret state/key of the scheme. The underlying computational assumption is minimal: we assume the existence of one-way functions.
- We give evidence that the only previously known KDM secure encryption scheme cannot be proven secure in the standard model (i.e., without random oracles).
Universally Composable Multiparty Computation with Partially Isolated Parties
It is well known that universally composable multiparty computation
cannot, in general, be achieved in the standard model without setup
assumptions when the adversary can corrupt an arbitrary number of
players. One way to get around this problem is by having a
\emph{trusted third party} generate some global setup such as a
\emph{common reference string (CRS)} or a \emph{public key
infrastructure (PKI)}. The recent work of Katz shows that we may
instead rely on physical assumptions, and in particular
\emph{tamper-proof hardware tokens}. In this paper, we consider a similar but \emph{strictly weaker}
physical assumption. We assume that a player (Alice) can
\emph{partially isolate} another player (Bob) for a brief portion of
the computation and prevent Bob from communicating more than some limited number of bits with the environment.
For example, isolation might be achieved by asking Bob to put his functionality on a tamper-proof hardware token and assuming
that Alice can prevent this token from communicating to the outside world.
Alternatively, Alice may interact with Bob directly but in a special office which she administers and where there are no high-bandwidth
communication channels to the outside world. We show that, under \emph{standard} cryptographic assumptions, such physical setup can
be used to UC-realize any two party and multiparty computation in
the presence of an active and \emph{adaptive} adversary corrupting
any number of players. We also consider an alternative scenario, in
which there are some trusted third parties but no single such party
is trusted by all of the players. This compromise allows us to
significantly limit the use of the physical set-up and hence might
be preferred in practice.
Isolated Proofs of Knowledge and Isolated Zero Knowledge
We introduce a new notion called $\ell$-isolated proofs of knowledge
($\ell$-IPoK). These are proofs of knowledge where a cheating prover
is allowed to exchange up to $\ell$ bits of communication with some
external adversarial environment during the run of the proof.
Without any additional setup assumptions, no witness hiding protocol can be an $\ell$-IPoK for \emph{unbounded} values of $\ell$.
However, for any \emph{pre-defined} threshold $\ell$ and any relation in NP, we construct
an $\ell$-IPoK protocol for that relation. The resulting protocols
are zero knowledge (ZK) in the standard sense, i.e.,
w.r.t. a verifier that communicates only with the prover during the proof.
The cost of supporting a large threshold $\ell$ is an increase in the communication complexity of the constructed protocol.
We analyze these costs and present a solution that is asymptotically optimal.
If a cheating verifier is allowed to communicate arbitrarily with an
external environment, it is not possible to construct an $\ell$-IPoK that
is also ZK with respect to such a verifier. As another new notion,
we define $\ell$-isolated zero knowledge ($\ell$-IZK) where the
verifier is $\ell$-isolated. For every relation in NP and
every $\ell$, we construct an $\ell$-IPoK protocol that is
also $\ell$-IZK.
We describe several applications of $\ell$-IPoK protocols
under the physical assumption that one can $\ell$-isolate a prover for
the duration of the proof phase. Firstly, we can use a witness
indistinguishable (WI) $\ell$-IPoK to prevent ``man-in-the-middle''
attacks on identification schemes. Prior results for this scenario
required all verifiers to register keys under a PKI, or the ability to fully
isolate the prover. Secondly, a partially isolated prover can register a public key and use a WI $\ell$-IPoK to prove knowledge of the corresponding secret key to another party acting as a verifier. This allows us to set up a PKI where the key registrant does not need to trust the Certificate Authority. The PKI is not perfect since the proof is only witness indistinguishable and not zero knowledge. In a companion paper, we show how to set up such a PKI and use it to implement arbitrary multiparty computation securely in the UC framework without relying on any trusted third parties.
Remote Power Analysis of {RFID} Tags
We describe the first power analysis attack on passive RFID tags. Compared to standard power analysis attacks, this attack is unique in that it requires no physical contact with the device under attack. The power analysis can be carried out even if both the tag and the attacker are passive and transmit no data, making the attack
very hard to detect.
As a proof of concept, we use power analysis to extract the kill passwords from Class 1 EPC tags operating in the UHF frequency range. Tags from several major vendors were successfully attacked. Our attack can be extended to HF tags and to remote fault analysis.
The main significance of our attack is not in the discovery of kill passwords but in its implications on future tag design -- any cryptographic functionality built into tags needs to be designed to be resistant to power analysis, and achieving this resistance is an undertaking which has an effect both on the price and on the
performance of tags.
(this is my Master's thesis, carried out under the supervision of Prof. Adi Shamir. It may be considered as the extended version of the article "Remote Password Extraction from RFID Tags", recently published in IEEE Transactions on Computers and indexed as http://dx.doi.org/10.1109/TC.2007.1050 or as http://ieeexplore.ieee.org/iel5/12/4288079/04288095.pdf)
A Tunable Broadcast Encryption Scheme
In this paper, we describe yet another broadcast encryption scheme
for stateless receivers. The main difference between our scheme and
the classical schemes derived from the complete subtree and its
subsequent improvements is that in our scheme the group management
is based upon a more adaptable data structure. In these classical
schemes, users must be spread on a tree structure where each
level of the tree is associated to some distinguishing property of
the users. The fact that the underlying data structure is a fixed
tree is a strong limitation for some applications where an operator
wants to select users very dynamically following criteria with
changing levels of priority. Our scheme may be thought of as a
complete subtree in which the different levels of the tree can be
exchanged, making it very efficient to revoke or
select a class of users. It is also very efficient in cases
where there exist very unbalanced groups of users.
This scheme allows one to select or revoke users by sending
ciphertexts of linear size with respect to the number of groups
which is in general far less than the number of users. Moreover, by
using a specific group repartition, it is possible to recover a tree
structure in order to apply the classical methods, which guarantees
that our scheme is in general as efficient as the usual ones.
We prove that our scheme is fully collusion secure in the generic
group model with pairings.
A Tight High-Order Entropic Quantum Uncertainty Relation With Applications
We derive a new entropic quantum uncertainty relation involving min-entropy. The relation is tight and can be applied in various quantum-cryptographic settings.
Protocols for quantum 1-out-of-2 Oblivious Transfer and quantum Bit Commitment are presented and the uncertainty relation is used to prove the security of these protocols in the bounded-quantum-storage model according to new strong security definitions.
As another application, we consider the realistic setting of Quantum Key Distribution (QKD) against quantum-memory-bounded eavesdroppers. The uncertainty relation allows us to prove the security of QKD protocols in this setting while tolerating considerably higher error rates than the standard model with unbounded adversaries. For instance, for the six-state protocol with one-way communication, a bit-flip error rate of up to 17% can be tolerated (compared to 13% in the standard model).
Our uncertainty relation also yields a lower bound on the min-entropy key uncertainty against known-plaintext attacks when quantum ciphers are composed. Previously, the key uncertainty of these ciphers was only known with respect to Shannon entropy.
Secure Identification and QKD in the Bounded-Quantum-Storage Model
We consider the problem of secure identification: user U proves to server S that he knows an agreed (possibly low-entropy) password w, while giving away as little information on w as possible, namely the adversary can exclude at most one possible password for each execution of the scheme. We propose a solution in the bounded-quantum-storage model, where U and S may exchange qubits, and a dishonest party is assumed to have limited quantum memory. No other restriction is posed upon the adversary.
An improved version of the proposed identification scheme is also secure against a man-in-the-middle attack, but requires U and S to additionally share a high-entropy key k. However, security is still guaranteed if one party loses k to the attacker but notices the loss. In both versions of the scheme, the honest participants need no quantum memory, and noise and imperfect quantum sources can be tolerated. The schemes compose sequentially, and w and k can securely be re-used.
A small modification to the identification scheme results in a quantum-key-distribution (QKD) scheme, secure in the bounded-quantum-storage model, with the same re-usability properties of the keys, and without assuming authenticated channels. This is in sharp contrast to known QKD schemes (with unbounded adversary) without authenticated channels, where authentication keys must be updated, and unsuccessful executions can cause the parties to run out of keys.
Efficient Password-based Authenticated Key Exchange without Public Information
Since the first password-based authenticated key exchange (PAKE) was
proposed, it has enjoyed a considerable amount of interest from the
cryptographic research community. To the best of our knowledge, most
proposed PAKEs based on Diffie-Hellman key exchange need some public
information, such as generators of a finite cyclic group. However,
in a client-server environment, not all servers use the same public
information, which requires clients to authenticate this public
information before beginning PAKE. This is cumbersome for users.
What is worse, it may introduce security problems into PAKE, such as
substitution attacks. To remove these problems, in this paper, we
present an efficient password-based authenticated key exchange
protocol without any public information. We also provide a
formal security analysis in the non-concurrent setting, including
basic security, mutual authentication, and forward secrecy, by using
the random oracle model.
Faster and Shorter Password-Authenticated Key Exchange
This paper presents an improved password-based authenticated key exchange protocol in the common reference string model. Its security proof requires no idealized assumptions (such as random oracles).
The protocol is based on the GL framework introduced by Gennaro and Lindell, which generalizes the KOY key exchange protocol of Katz et al.\ Both the KOY and the GL protocols use (one-time) signatures as a
non-malleability tool in order to prevent man-in-the-middle attacks against the protocol. The efficiency of the resulting protocol is negatively affected: regular signatures require a large amount of computation (almost as much as the rest of the protocol) and further computational assumptions, while one-time signatures substantially increase the bandwidth requirement.
Our improvement avoids using digital signatures altogether, replacing them with faster and shorter message authentication codes. The crucial idea is to leverage as much as possible the non-malleability of the encryption scheme used in the protocol, by including various values into the ciphertexts as {\em labels}. As in the case of the GL framework, our protocol can be efficiently instantiated using either the DDH, Quadratic Residuosity or N-Residuosity Assumptions.
For typical security parameters our solution saves as much as 12 Kbytes of bandwidth if one-time signatures are implemented in the GL framework with
fast symmetric primitives. If we use number-theoretic signatures in the GL framework, our solution saves several large exponentiations (almost a third of the exponentiations computed in the GL protocol). The end result is that we bring provable security in the realm of password-authenticated key exchange one step closer to practice.
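The bandwidth argument can be made concrete with a minimal sketch (all values below are hypothetical stand-ins of ours; this is not the GL or KOY protocol itself): a MAC tag over the protocol transcript occupies 32 bytes, whereas the one-time signatures it replaces can add kilobytes.

```python
import hmac, hashlib, os

# Hypothetical values: in a real protocol the MAC key would be derived
# within the key exchange, and the "label" would contain the
# protocol-specific ciphertext components mentioned in the abstract.
mac_key = os.urandom(32)
transcript = b"client_flow_1|server_flow_1|ciphertext_label"

tag = hmac.new(mac_key, transcript, hashlib.sha256).digest()

# A SHA-256 HMAC tag is 32 bytes; one-time signatures can cost
# kilobytes of bandwidth (the abstract cites savings of up to 12 KB).
assert len(tag) == 32
assert hmac.compare_digest(tag, hmac.new(mac_key, transcript, hashlib.sha256).digest())
```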
Towards provable security for route discovery protocols in mobile ad hoc networks
Mobile ad hoc networks (MANETs) are collections of wireless mobile
devices with restricted broadcast range and resources, and no fixed infrastructure. Communication is achieved by relaying data along appropriate routes that are dynamically discovered and maintained
through collaboration between the nodes. Discovery of such routes is a major task, both from an efficiency and from a security point of view.
Recently, a security model tailored to the specific requirements of
MANETs was introduced by Acs, Buttyán, and Vajda. Among the novel characteristics of this security model is that it promises
security guarantees under concurrent executions, a feature of crucial practical importance for this type of distributed computation. A novel route discovery algorithm called endairA was
also proposed, together with a claimed security proof within the
same model. In this paper we show that the security proof for the route discovery algorithm endairA is flawed, and that moreover this
algorithm is vulnerable to a {\em hidden channel} attack. We also
analyze the security framework that was used for route discovery,
and argue that composability is an essential feature for ubiquitous
applications. We conclude by discussing some of the major security
challenges for route discovery in MANETs.
Attribute-Based Encryption with Non-Monotonic Access Structures
We construct an Attribute-Based Encryption (ABE) scheme
that allows a user's private key to be expressed in terms
of any access formula over attributes. Previous ABE
schemes were limited to expressing only monotonic access structures. We provide a proof of security for our scheme based on the
Decisional Bilinear Diffie-Hellman (BDH) assumption.
Furthermore, the performance of our new scheme compares
favorably with existing, less-expressive schemes.
Identifying Ideal Lattices
Micciancio defined a generalization of cyclic lattices, called ideal lattices. These lattices can be used in cryptosystems to decrease the number of parameters necessary to describe a lattice by a square root, making them more efficient. He proves that the computational intractability of classic lattice problems for these lattices gives rise to provably secure one-way and collision-resistant hash functions. This provable security relies on the assumption that reducing bases of ideal lattices is similar to reducing bases of random lattices. We give an indication that lattice problems in ideal lattices do not represent the general case by providing a distinguisher, which decides in time $O(n^4)$ whether a given basis of rank $n$ spans an ideal lattice or not. Using this algorithm we perform a statistical analysis for several dimensions and show that randomly generated lattices are practically never ideal.
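A minimal sketch of the underlying membership test (restricted to the simplest ideal case, cyclic lattices over Z[x]/(x^n - 1); the paper's distinguisher is more general and runs in O(n^4)): a lattice with row basis B is cyclic only if the cyclic shift of every basis vector is again a lattice vector, which can be checked by solving x·B = v over the rationals and testing integrality.

```python
from fractions import Fraction

def in_lattice(B, v):
    # Decide whether v lies in the lattice spanned by the rows of B by
    # solving x * B = v exactly over the rationals (Gauss-Jordan) and
    # checking that the solution is integral.
    n = len(B)
    M = [[Fraction(B[j][i]) for j in range(n)] + [Fraction(v[i])]
         for i in range(n)]
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col] != 0), None)
        if piv is None:
            return False  # singular basis; not handled in this sketch
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return all((M[i][n] / M[i][i]).denominator == 1 for i in range(n))

def looks_cyclic(B):
    # Necessary condition for an ideal lattice over Z[x]/(x^n - 1):
    # the cyclic shift of each basis vector must stay in the lattice.
    return all(in_lattice(B, row[-1:] + row[:-1]) for row in B)

assert looks_cyclic([[2, 0], [0, 2]])        # 2Z^2 is cyclic
assert not looks_cyclic([[1, 0], [0, 3]])    # shift of (1,0) leaves the lattice
```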
Balanced Boolean Functions with Nonlinearity > 2^{n-1} - 2^{(n-1)/2}
Recently, balanced 15-variable Boolean functions with nonlinearity 16266 were obtained by suitably modifying unbalanced Patterson-Wiedemann (PW) functions, which possess nonlinearity 2^{n-1}-2^{(n-1)/2}+20 = 16276. In this short paper, we present an idempotent (interpreted as a rotation symmetric Boolean function) with nonlinearity 16268 having 15 zeros in the Walsh spectrum, within the neighborhood of PW functions. Clearly this function can be transformed into balanced functions keeping the nonlinearity and autocorrelation distribution unchanged. The nonlinearity value of 16268 is currently the best known for balanced 15-variable Boolean functions. Furthermore, we have obtained several balanced 13-variable Boolean functions with nonlinearity 4036, which improves upon the recent result of 4034.
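The quantities in this abstract can be computed directly from the Walsh spectrum: the nonlinearity of an n-variable function f is 2^{n-1} - max_a |W_f(a)| / 2, and f is balanced exactly when W_f(0) = 0, i.e., a zero in the spectrum at the origin. A naive sketch for small n (function and variable names are ours):

```python
def walsh_spectrum(tt, n):
    # Naive O(4^n) Walsh transform of a Boolean truth table;
    # tt[x] is f(x) for x = 0 .. 2^n - 1, a.x is the inner product mod 2.
    return [sum((-1) ** (tt[x] ^ (bin(a & x).count("1") % 2))
                for x in range(2 ** n))
            for a in range(2 ** n)]

def nonlinearity(tt, n):
    return 2 ** (n - 1) - max(abs(w) for w in walsh_spectrum(tt, n)) // 2

# The 2-variable AND function is bent: nonlinearity 2^{n-1} - 2^{n/2-1} = 1.
assert nonlinearity([0, 0, 0, 1], 2) == 1
# XOR is balanced, so its Walsh spectrum has a zero at the origin.
assert walsh_spectrum([0, 1, 1, 0], 2)[0] == 0
```

The same definitions apply at n = 15, though fast (butterfly) transforms are needed in practice.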
On the Big Gap Between $|p|$ and $|q|$ in DSA
We introduce a message attack against DSA and show that the security of DSA is indeed reduced to the following problem: find $\theta\in \mathbb{Z}_q^*$ such that\\
\centerline{$z=(\hat g^{\theta} \,\mbox{mod}\, p)\, \mbox{mod}\, q $}\\
where $\mbox{Ord}_p(\hat g)=q$ and
$z\in \mathbb{Z}_q^*$ is randomly chosen by the adversary.
Compared with the common key-only attack, i.e., find $x\in \mathbb{Z}_q^*$ such that\\
\centerline{$ y= g^x \,\mbox{mod}\, p$}\\ the message attack is more effective because of the big gap between
$|p|$ (1024 bits) and $|q|$ (160 bits).
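The problem stated above can be reproduced at toy scale (the parameters below are tiny stand-ins of ours, chosen only so that exhaustive search is feasible; real DSA uses |p| = 1024 bits and |q| = 160 bits):

```python
# Toy DSA-style parameters: q divides p - 1 and g has order q mod p.
p, q, g = 23, 11, 2
assert (p - 1) % q == 0 and pow(g, q, p) == 1

def solve(z):
    # The message-attack problem from the abstract: find theta with
    # z = (g^theta mod p) mod q.  Exhaustive search, toy-sized only.
    for theta in range(1, q):
        if pow(g, theta, p) % q == z:
            return theta
    return None

z = pow(g, 7, p) % q
theta = solve(z)
assert theta is not None and pow(g, theta, p) % q == z
```

The outer reduction mod q is what creates the gap the abstract exploits: many exponents can collide on the same z when |p| is much larger than |q|.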
A New Security Definition for Public Key Encryption Schemes and Its Applications
The strongest security definition for public key encryption (PKE)
schemes is indistinguishability against adaptive chosen ciphertext
attacks (IND-CCA). A practical IND-CCA secure PKE scheme in the
standard model is well known to be difficult to construct, as
evidenced by the fact that only a few such PKE schemes are
available. From another perspective, we observe that for a large
class of PKE-based applications, although IND-CCA security is
sufficient, it is not a necessary requirement. Examples are Key
Encapsulation Mechanism (KEM), MT-authenticator, providing
pseudorandomness with a-priori information, and so on. This
observation leads us to propose a slightly weaker version of
IND-CCA, which requires that the ciphertexts of two randomly
selected messages be indistinguishable under chosen ciphertext
attacks. Under this new security notion, we show that highly
efficient schemes proven secure in the standard model can be built
in a straightforward way. We also demonstrate that such a security
definition is already sufficient for the applications above.
On the complexity of side-channel attacks on AES-256 -- methodology and quantitative results on cache attacks
Larger key lengths translate into an exponential increase in the complexity of an exhaustive search. Side-channel attacks, however, use a divide-and-conquer approach, and hence it is generally assumed that increasing the key length cannot be used as a mitigation. Yet the internal round structure of AES-256 and its key scheduling seem to hinder a direct extension of the existing attacks on AES-128 and thus challenge this assumption. Indeed, two consecutive round keys are required to infer the secret key, and the MixColumns operation, not present in the last round, apparently increases the key search complexity from 2^8 to 2^32. Additionally, it is unclear what the impact of the different round structures is on the number of required measurements. In this paper, we explore this question and show how to attack AES-256 with a key search complexity of O(2^8). This work confirms with practical experiments that AES-256 only offers a marginal increase in resistance against the attacks, both in the required number of measurements and in the required processing time. As an example, we quantify this increase for the case of cache-based side-channel attacks: AES-256 only provides an increase in complexity of 6 to 7 compared to cache-based attacks on AES-128.
Improving Upon the TET Mode of Operation
Naor and Reingold proposed the construction of a strong pseudo-random
permutation (SPRP) by using a layer of ECB encryption between two layers of
invertible block-wise universal hash functions. At Crypto 2007, Halevi presented
constructions of invertible block-wise universal hash functions and a new mode
of operation (called TET) based on them. In this paper, we present a new mode
of operation
called {\heh} using the Naor-Reingold approach. This is built using a new
construction of invertible block-wise universal hash function. The new
construction improves over Halevi's construction by removing restrictions on
the hashing key. This, in turn, leads to {\heh} improving
over TET by allowing more efficient encryption and decryption of variable length
messages as well as supporting better key agility. For the important application
of disk encryption, we present a variant called {\hehfp} which has better
key agility than TET.
Security Proof for Shengbao Wang's Identity-Based Encryption Scheme
This paper analyzes the security of an IBE scheme proposed by Wang in 2007. It is shown that under the BDHP assumption (which is polynomial-time equivalent to BIDHP) the scheme is secure in the random oracle model.
Security under Key-Dependent Inputs
In this work we re-visit the question of building cryptographic primitives that remain secure even when queried on inputs that depend on the secret key. This was investigated by Black, Rogaway, and Shrimpton in the context of randomized encryption schemes and in the random oracle model. We extend the investigation to deterministic symmetric schemes (such as PRFs and block ciphers) and to the standard model. We term this notion "security against key-dependent-input attack", or KDI-security for short. Our motivation for studying KDI security is the existence of significant real-world implementations of deterministic encryption (in the context of storage encryption) that actually rely on their building blocks to be KDI secure.
We consider many natural constructions for PRFs, ciphers, tweakable ciphers and randomized encryption, and examine them with respect to their KDI security. We exhibit inherent limitations of this notion and show many natural constructions that fail to be KDI secure in the standard model, including some schemes that have been proven in the random oracle model. On the positive side, we demonstrate examples where some measure of KDI security can be provably achieved (in particular, we show such examples in the standard model).
Last updated: 2007-10-01
Formal Certification of Code-Based Cryptographic Proofs
As cryptographic proofs have become essentially unverifiable, cryptographers have argued in favor of systematically structuring proofs as sequences of games. Code-based techniques form an instance of this approach that takes a code-centric view of games and relies on programming language theory to justify the steps in the proof, i.e., the transitions between games. While these techniques help to increase confidence in the security of cryptographic systems, code-based proofs involve such a large palette of concepts from different fields that machine-verified proofs seem necessary to achieve the highest degree of confidence. Indeed, Halevi has convincingly argued that a tool assisting in the construction and verification of proofs is necessary to solve the crisis with cryptographic proofs. This article reports a first step towards the completion of Halevi's programme through the implementation of a fully formalized framework, CertiCrypt, for code-based proofs built on top of the Coq proof assistant. The framework has been used to yield machine-checked proofs of the PRP/PRF switching lemma and of the semantic security of the ElGamal and OAEP encryption schemes.
Perfect Forward Secure Identity-Based Authenticated Key Agreement Protocol in the Escrow Mode
There are several essential features in key agreement protocols, such as key escrow (essential when confidentiality, audit trails, and legal interception are required) and perfect forward secrecy (i.e., the security of a session key established between two or more entities is guaranteed even when the private keys of the entities are compromised). The majority of existing escrowable identity-based key agreement protocols, however, only provide partial forward secrecy. Such protocols are therefore unsuitable for real-world applications that require a stronger sense of forward secrecy, namely perfect forward secrecy. In this paper, we propose an efficient perfect forward secure identity-based key agreement protocol in the escrow mode. We prove the security of our protocol in the random oracle model, assuming the intractability of the Gap Bilinear Diffie-Hellman (GBDH) problem. Security proofs are invaluable tools for assuring protocol implementers of the security properties of protocols. We note, however, that many existing security proofs of previously published identity-based protocols entail lengthy and complicated mathematics. In this paper, our proof adopts a modular approach and is, hence, simpler to follow.
Secure Similarity Search
One of the most substantial ways to protect users' sensitive
information is encryption. This paper is about the keyword index
search system on encrypted documents. It has been thought that
searching with errors over encrypted data is impossible because a one-bit
difference in the plaintexts may result in an enormous difference
in the ciphertexts. We propose a novel idea to deal with searching
with errors over encrypted data. We develop two similarity search
schemes, implement the prototypes and provide substantial analysis.
We define security requirements for the similarity search over
encrypted data. The first scheme can achieve perfect privacy in
similarity search but the second scheme is more efficient.
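The avalanche behaviour that defeats naive similarity search can be observed directly with any strong primitive (here SHA-256 stands in for an encryption step, and the messages are our own examples): flipping a single plaintext bit changes roughly half of the output bits.

```python
import hashlib

def bitdiff(a: bytes, b: bytes) -> int:
    # Number of differing bits between two equal-length byte strings.
    return sum(bin(x ^ y).count("1") for x, y in zip(a, b))

m1 = b"similarity search"
m2 = b"simiLarity search"  # 'l' -> 'L' flips exactly one bit (0x20)
c1 = hashlib.sha256(m1).digest()
c2 = hashlib.sha256(m2).digest()

assert bitdiff(m1, m2) == 1
# Roughly half of the 256 output bits differ.
assert 64 < bitdiff(c1, c2) < 192
```

This is why similar plaintexts give no usable similarity signal on ciphertexts, and why dedicated schemes such as those in this paper are needed.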
A Refined Algorithm for the $\eta_T$ Pairing Calculation in Characteristic Three
We describe further improvements of the $\eta_T$ pairing algorithm in
characteristic three. Our approach combines the loop unrolling
technique introduced by Granger {\em et al.} for the Duursma-Lee
algorithm, and a novel algorithm for multiplication over
$\mathbb{F}_{3^{6m}}$ proposed by Gorla {\em et al.} at SAC 2007. For
$m=97$, the refined algorithm reduces the number of multiplications
over $\mathbb{F}_{3^m}$ from $815$ to $692$.
A Note on Point Multiplication on Supersingular Elliptic Curves over Ternary Fields
Recently, supersingular elliptic curves over ternary fields have been widely used in pairing-based crypto-applications, since they achieve the best possible ratio between security level and space requirement. We propose new algorithms for projective arithmetic on these curves, in which point tripling requires no field multiplications, and point addition and point doubling each require one field multiplication less than the best known algorithms. Combined with DBNS, these algorithms can apparently speed up scalar multiplications on the curves.
Balanced Boolean Function on 13-variables having Nonlinearity strictly greater than the Bent Concatenation Bound
Very recently, Kavut and Yucel identified 9-variable Boolean functions having nonlinearity 242, which is currently the best known. However, none of these functions contains any zero in the Walsh spectrum, which is why they cannot be made balanced. We use these functions to construct a 13-variable balanced Boolean function having nonlinearity
$2^{13-1} - 2^{\frac{13-1}{2}} + 2 = 4034$ which is strictly greater than the bent concatenation bound. This is the first demonstration of balanced Boolean functions on odd number of variables having nonlinearity strictly greater than the bent concatenation bound for number of input variables less than 15.
Generalized Rotation Symmetric and Dihedral Symmetric Boolean Functions - 9 variable Boolean Functions with Nonlinearity 242
Recently, 9-variable Boolean functions having nonlinearity 241, which is strictly greater than the bent concatenation bound of 240, have been discovered in the class of Rotation Symmetric Boolean Functions (RSBFs) by Kavut, Maitra and Yucel. In this paper, we present several 9-variable Boolean functions having nonlinearity of 242, which we obtain by suitably generalizing the classes of RSBFs and Dihedral Symmetric Boolean Functions (DSBFs).
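Nonlinearity figures like 240, 241 and 242 can be checked mechanically from a function's truth table via the fast Walsh-Hadamard transform (a function is balanced exactly when its Walsh spectrum contains a zero at the all-zero point). A minimal sketch, with a 2-variable toy example of our own rather than the 9-variable functions from the paper:

```python
# Nonlinearity via the fast Walsh-Hadamard transform:
# nl(f) = 2^(n-1) - (1/2) * max_a |W_f(a)|,  W_f(a) = sum_x (-1)^(f(x) + a.x)
def walsh_spectrum(truth_table):
    w = [1 - 2 * bit for bit in truth_table]   # the +/-1 signal of f
    h = 1
    while h < len(w):                          # in-place butterfly passes
        for i in range(0, len(w), 2 * h):
            for j in range(i, i + h):
                u, v = w[j], w[j + h]
                w[j], w[j + h] = u + v, u - v
        h *= 2
    return w

def nonlinearity(truth_table, n):
    return 2 ** (n - 1) - max(abs(c) for c in walsh_spectrum(truth_table)) // 2

# n = 2 example: f = x1 AND x2 is bent, so nl = 2^1 - 2^0 = 1, while the
# linear function f = x1 has nonlinearity 0.
and_tt = [(x & 1) & ((x >> 1) & 1) for x in range(4)]
lin_tt = [x & 1 for x in range(4)]
print(nonlinearity(and_tt, 2), nonlinearity(lin_tt, 2))  # -> 1 0
```

The same routine scales directly to 9 or 13 variables, where the truth table has 2^9 or 2^13 entries.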
Locally Invertible Boolean Mappings
The aim of this paper is to study a novel property of Boolean mappings called local invertibility. We focus on the local invertibility of Boolean mappings which model filtering generators, and study the case when the filtering function is linear in its last variable.
Novel Approaches for Improving the Power Consumption Models in Correlation Analysis
Differential Power Analysis (DPA) is a powerful technique for
revealing secret data of cryptographic algorithms such as DES, AES and RSA implemented on a specific platform. In recent years, Correlation Power Analysis (CPA) has allowed the
differential approaches of DPA to be better formalized through the use of a power model. We propose two methods for optimizing the power model for the targeted bits of the analysed algorithm, under the assumption that the targeted bits do not all contribute equally to the power consumption. Our first method consists
in finding the optimal ratio among the bits of a
specific device. The second method is based on a statistical analysis
of attack results obtained while applying different possible ratios
among the bits. Experimental electromagnetic radiation signals
intercepted from an ASIC during DES operations show
that the proposed methods significantly improve
attack performance.
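The effect of weighting the targeted bits can be illustrated with a simulated correlation attack. Everything below is hypothetical (the 4-bit S-box, the per-bit leakage weights and the noise level are invented for this sketch, not taken from the paper); it only shows why a per-bit ratio matched to the device raises the correlation peak relative to the plain Hamming-weight model:

```python
import random

# Hypothetical per-bit weighted power model for CPA. The "device" leaks a
# weighted sum of the S-box output bits plus Gaussian noise.
SBOX = [0xC, 5, 6, 0xB, 9, 0, 0xA, 0xD, 3, 0xE, 0xF, 8, 4, 7, 1, 2]
TRUE_WEIGHTS = [1.0, 0.4, 1.6, 0.8]      # assumed per-bit contributions

def leakage(x):
    return sum(w for i, w in enumerate(TRUE_WEIGHTS) if (x >> i) & 1)

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sx = sum((a - mx) ** 2 for a in xs) ** 0.5
    sy = sum((b - my) ** 2 for b in ys) ** 0.5
    return cov / (sx * sy)

random.seed(1)
key = 0x9
plaintexts = [random.randrange(16) for _ in range(2000)]
traces = [leakage(SBOX[p ^ key]) + random.gauss(0, 0.5) for p in plaintexts]

def model_correlation(weights):
    # Correlation of the hypothesized model (at the correct key) with traces.
    preds = [sum(w for i, w in enumerate(weights)
                 if (SBOX[p ^ key] >> i) & 1) for p in plaintexts]
    return pearson(preds, traces)

plain_hw = model_correlation([1, 1, 1, 1])   # unweighted Hamming weight
weighted = model_correlation(TRUE_WEIGHTS)   # ratios matching the device
print(weighted > plain_hw)                   # matched ratios correlate better
```

In a real attack the weights are of course unknown and must be estimated, which is exactly the problem the two proposed methods address.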
On Non-Randomness of the Permutation after RC4 Key Scheduling
Here we study a weakness of the RC4 Key Scheduling Algorithm (KSA) that has already been noted by Mantin and Mironov. Consider the RC4 permutation $S$ of $N$ (usually 256) bytes and denote it by $S_N$ after the KSA. Under reasonable assumptions we present a simple proof that each permutation byte after the KSA is significantly biased (either positively or negatively) towards many values in the range $0, \ldots, N-1$. These biases are independent of the secret key and thus provide evidence that the permutation after the KSA can be distinguished from a random permutation without any assumption on the secret key. We also present a detailed empirical study extending Mantin's work to cases where the theoretical formulae deviate significantly from experimental results
due to the repetition of short keys in RC4. Further, we explain how these results can be used to identify new distinguishers for RC4 keystream.
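The key-independent nature of these biases is easy to probe empirically. Below is a rough sketch: the KSA itself is standard, but the sample size, the aggregation over positions and the chi-square threshold are our own illustrative choices, not figures from the paper:

```python
import random

# A minimal RC4 Key Scheduling Algorithm (KSA), for reference:
def rc4_ksa(key):
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    return S

# Averaged over random keys, the bytes of the resulting permutation are
# measurably non-uniform. An aggregate chi-square statistic over all 256
# positions makes the bias visible with a modest number of samples:
random.seed(0)
trials = 5000
counts = [[0] * 256 for _ in range(256)]
for _ in range(trials):
    S = rc4_ksa([random.randrange(256) for _ in range(16)])
    for i, v in enumerate(S):
        counts[i][v] += 1
expected = trials / 256
chi2 = sum((c - expected) ** 2 / expected for row in counts for c in row)
# If every byte were uniformly distributed, chi2 would concentrate near
# 256 * 255 = 65280; a much larger value indicates systematic bias.
print(chi2 > 67000)
```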
A Bound on the Size of Separating Hash Families
The paper provides an upper bound on the size of a (generalised)
separating hash family, a notion introduced by Stinson, Wei and
Chen. The upper bound generalises and unifies several previously known
bounds which apply in special cases, namely bounds on perfect hash
families, frameproof codes, secure frameproof codes and separating
hash families of small type.
A Forward Secure Remote User Authentication Scheme
Remote user authentication schemes allow a valid user to log in to a
remote server. In 2000, Hwang and Li proposed a new remote user
authentication scheme with smart cards. In recent years, some researchers
have pointed out security weaknesses of Hwang and Li's scheme and have
proposed modified schemes to avoid these weaknesses. This
paper shows that Hwang and Li's scheme does
not satisfy some essential security requirements: neither Hwang and Li's scheme nor any of the modified
schemes support mutual authentication between the remote user
and the remote server,
and none of them includes a session key generation phase for secure communication. In
addition, in Hwang and Li's scheme, the remote user is not free to change his password. This
paper presents an ideal remote user authentication scheme
with smart cards that not only resolves all the security problems of Hwang and Li's scheme,
but also provides all the essential security requirements as well as forward secrecy for the remote server.
Compression Functions Suitable for the Multi-Property-Preserving Transform
Since Bellare and Ristenpart showed a multi-property preserving domain extension transform, the problem of the construction for multi-property hash functions has been reduced to that of the construction for multi-property compression functions. However, the Davies-Meyer compression function that is widely used for standard hash functions is not a multi-property compression function. That is, in the ideal cipher model, the Davies-Meyer compression function is collision resistant, but it is not indifferentiable from a random oracle. In this paper, we show that the compression function proposed by Lai and Massey is a multi-property compression function. In addition, we show that the simplified version of the Lai-Massey compression function is also a multi-property compression function. The use of these compression functions enables us to construct multi-property hash functions by the multi-property preserving domain extension transform.
On Asymptotic Behavior of the Ratio Between the Numbers of Binary Primitive and Irreducible Polynomials
In this paper we study the ratio $\theta(n) = \frac{\lambda_2(n)}{\psi_2(n)}$,
where ${\lambda_2(n)}$ is the number of primitive polynomials and
${\psi_2(n)}$ is the number of irreducible polynomials in
$GF(2)[x]$ of degree $n$.
Let $n=\prod_{i=1}^{\ell} p_i^{r_i}$ be the prime factorization of $n$, where the $p_i$ are odd primes.
We show that $\theta(n)$ tends to 1 and $\theta(2n)$ is asymptotically
not less than 2/3 when the $r_i$ are fixed and the $p_i$ tend to infinity. We also describe an infinite
series of values $n_{s}$ such that $\theta(n_{s})$ is strictly
less than $\frac{1}{2}$.
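Both counts are computable exactly for small $n$ from the standard formulas $\psi_2(n) = \frac{1}{n}\sum_{d \mid n} \mu(d)\, 2^{n/d}$ and $\lambda_2(n) = \varphi(2^n-1)/n$; a quick sketch for evaluating $\theta(n)$ numerically:

```python
# Exact counts of irreducible and primitive polynomials over GF(2):
# psi_2(n) = (1/n) * sum_{d | n} mu(d) * 2^(n/d),  lambda_2(n) = phi(2^n-1)/n
def factor(n):
    f, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def phi(n):                                  # Euler's totient
    r = n
    for p in factor(n):
        r = r // p * (p - 1)
    return r

def mobius(n):
    f = factor(n)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

def irreducible_count(n):                    # psi_2(n)
    return sum(mobius(d) * 2 ** (n // d)
               for d in range(1, n + 1) if n % d == 0) // n

def primitive_count(n):                      # lambda_2(n)
    return phi(2 ** n - 1) // n

for n in (2, 4, 6, 8):
    print(n, primitive_count(n), irreducible_count(n),
          primitive_count(n) / irreducible_count(n))
```

For example, $\theta(6) = 6/9 = 2/3$ and $\theta(8) = 16/30$.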
A Note on Automata-based Dynamic Convolutional Cryptosystems
In [1], the automata-based dynamic convolutional
cryptosystem is proposed and analyzed; the author claims that
``finding partial information about the cipher is quite easy, and
the main idea of such an attack, described in detail in Section
4.1, is based on Gaussian elimination.'' However, the deduction
supporting this claim in Section 4.1 of [1] does not
work. It thus appears that this cipher is not as weak as claimed.
Optimizing Multiprecision Multiplication for Public Key Cryptography
In this paper we recall the hybrid method of Gura et al. for multi-precision multiplication, which improves on the basic Comba method by exploiting the increased number of registers available on modern architectures in order to avoid duplicated loads from memory. We then show how to improve and generalise the method for application across a wide range of processor types, setting some new records in the process.
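The basic Comba (product-scanning) pattern that the hybrid method builds on can be sketched in a few lines; the limb size and operand sizes below are illustrative, and the register-reuse optimizations of the hybrid method are deliberately omitted:

```python
# Product-scanning (Comba) multiprecision multiplication: the double-length
# result is built one output column at a time, so each result limb is
# written exactly once and carries propagate through a single accumulator.
BITS = 16
MASK = (1 << BITS) - 1

def to_limbs(x, n):
    # Split x into n little-endian limbs of BITS bits each.
    return [(x >> (BITS * i)) & MASK for i in range(n)]

def comba_mul(a, b):
    n = len(a)                       # assumes len(a) == len(b)
    r = [0] * (2 * n)
    acc = 0                          # wide accumulator for the current column
    for k in range(2 * n - 1):
        for i in range(max(0, k - n + 1), min(k, n - 1) + 1):
            acc += a[i] * b[k - i]   # all partial products of column k
        r[k] = acc & MASK            # emit one result limb per column
        acc >>= BITS                 # carry into the next column
    r[2 * n - 1] = acc               # top limb (fits in BITS bits)
    return r

x, y = 0xDEADBEEFCAFEBABE, 0x0123456789ABCDEF
limbs = comba_mul(to_limbs(x, 4), to_limbs(y, 4))
z = sum(l << (BITS * i) for i, l in enumerate(limbs))
print(z == x * y)
```

On a real processor the inner loop is the part that the hybrid method restructures so that loaded operand limbs are reused across several adjacent columns.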
The Security of the Extended Codebook (XCB) Mode of Operation
The XCB mode of operation was outlined in 2004 as a contribution to the IEEE Security in Storage effort, but no security analysis was provided. In this paper, we provide a proof of security for XCB, and show that it is a secure tweakable (super) pseudorandom permutation. Our analysis makes several new contributions: it uses an algebraic property of XCB's internal universal hash function to simplify the proof, and it defines a nonce mode in which XCB can be securely used even when the plaintext is shorter than twice the width of the underlying block cipher. We also show minor modifications that improve the performance of XCB and make it easier to analyze. XCB is interesting because it is highly efficient in both hardware and software, it has no alignment restrictions on input lengths, it can be used in nonce mode, and it uses the internal functions of the Galois/Counter Mode (GCM) of operation, which facilitates design re-use and admits multi-purpose implementations.
Secret sharing on infinite graphs
We extend the notion of perfect secret sharing scheme for access structures with infinitely many participants. In particular we investigate cases when the participants are the vertices of an (infinite) graph, and the minimal qualified sets are the edges. The (worst case) {\it information ratio} of an access structure is the largest lower bound on the amount of information some participant must remember for each bit in the secret -- just the inverse of the information rate. We determine this value for several infinite graphs: infinite path, two-dimensional square and honeycomb lattices; and give upper and lower bounds on the ratio for the triangular lattice.
It is also shown that the information ratio is not necessarily {\em local}, i.e.~all finite spanned subgraphs have strictly smaller ratio than the whole graph. We conclude the paper by posing several open problems.
Construction of Efficient and Secure Pairing Algorithm and its Application
Uncategorized
Uncategorized
The randomized projective coordinate (RPC) method applied to a
pairing computation algorithm is a good solution that provides an
efficient countermeasure against side channel attacks. In this
study, we investigate measures for increasing the efficiency of
the RPC-based countermeasures and construct a method that provides
an efficient RPC-based countermeasure against side channel
attacks. We then apply our method to the well-known $\eta_T$
pairing algorithm over binary fields and obtain an RPC-based
countermeasure for the $\eta_T$ pairing; our method is more
efficient than the RPC method applied to the original $\eta_T$
pairing algorithm.
Linearization Attacks Against Syndrome Based Hashes
In MyCrypt 2005, Augot, Finiasz, and Sendrier proposed FSB, a family of cryptographic hash functions. The security claim of the FSB hashes is based on a coding theory problem with hard average-case complexity. In the ECRYPT 2007 Hash Function Workshop, new versions with essentially the same compression function but radically different security parameters and an additional final transformation were presented. We show that the hard average-case complexity of the underlying problem is irrelevant to collision search by presenting a linearization method that can be used to produce collisions in a matter of seconds on a desktop PC for the variant of FSB with claimed $2^{128}$ security.
Improved Privacy of the Tree-Based Hash protocols using Physically Unclonable Function
In 2004, Molnar and Wagner introduced a very appealing scheme dedicated to the identification of RFID tags. Their protocol relies on a binary tree of secrets which are shared -- for all nodes except the leaves -- amongst the tags. Hence the compromise of one tag also has implications for the other tags with whom it shares keys. We describe a new man-in-the-middle attack against this protocol which breaks privacy even without opening tags. Moreover, it can be applied to some other RFID protocols which use correlated keys, such as the one described recently by Damgard and Pedersen at CT-RSA 2008.
We introduce a modification of the initial scheme that allows us to thwart this attack and to strengthen RFID tags by implementing secrets with Physical Obfuscated Keys (POKs). In doing so, we improve the privacy of both the tags and the scheme, in particular their general resistance against physical threats.
Fully Resilient Traitor Tracing Scheme using Key Update
This paper proposes fully resilient traitor tracing schemes which
place no restriction on the number of traitors. By using the
concept of key update, the schemes can render pirate decoders
useless within some time period, which we call the life-time of
the decoder. There is a trade-off between the size of the ciphertext
and the life-time of pirate decoders.
Improved security analysis of OMAC
We present an improved security analysis of OMAC, a construction
widely used as a candidate MAC or pseudorandom function (PRF).
The first result in this direction was given in Crypto-05,
where an improved security analysis of CBC (for fixed-length or for
arbitrary-length prefix-free messages) was provided. Following
this work, improved bounds for XCBC, TMAC and PMAC were found. The
improved bounds are of the form $\mathrm{O}(\frac{Lq^2}{2^n})$ where
the original bounds are $\mathrm{O}(\frac{\sigma^2}{2^n})$ which is
roughly $\mathrm{O}(\frac{L^2q^2}{2^n})$. Here, a distinguisher can
make at most $q$ queries having at most $\sigma$ many blocks with
$L$ as the maximum block size. The original bound for OMAC was
roughly $\frac{5L^2q^2}{2^n}$ shown in FSE-03 and the next improved
bound was $\frac{4\sigma^2}{2^n}$ shown in Indocrypt-03. In this
paper we have provided an improved bound (a similar form as provided
for others) for OMAC and the bound we show is roughly
$\frac{4q\sigma}{2^n} = \mathrm{O}(\frac{Lq^2}{2^n})$.
Relations Among Notions of Plaintext Awareness
We introduce a new simplified notion of plaintext awareness, which we term PA2I, and show that this is equivalent to the standard definition of PA2 plaintext awareness for encryption schemes that satisfy certain weak security and randomness requirements. We also show that PA2 plaintext awareness is equivalent to PA2+ plaintext awareness under similar security and randomness requirements. This proves a conjecture of Dent that, for suitably random public-key encryption schemes, PA2 plaintext awareness implies PA1+ plaintext awareness.
Construction of Rotation Symmetric Boolean Functions with Maximum Algebraic Immunity on Odd Number of Variables
In this paper we present a theoretical construction of Rotation Symmetric Boolean Functions (RSBFs) on an odd number of variables with maximum possible \ai; further, these functions are not symmetric.
Our RSBFs are of better nonlinearity than the existing theoretical
constructions with maximum possible \ai. To get very good nonlinearity, which is important for practical cryptographic design, we generalize our construction to a construction cum search technique in the RSBF class. We find 7, 9, 11 variable RSBFs with maximum possible \ai having nonlinearities 56, 240, 984 respectively with very small amount of search after our basic construction.
Zero-Knowledge in the Applied Pi-calculus and Automated Verification of the Direct Anonymous Attestation Protocol
We devise an abstraction of zero-knowledge protocols that is
accessible to a fully mechanized analysis. The abstraction is
formalized within the applied pi-calculus using a novel equational
theory that abstractly characterizes the cryptographic semantics of
zero-knowledge proofs. We present an encoding from the equational
theory into a convergent rewriting system that is suitable for the
automated protocol verifier ProVerif. The encoding is sound and fully
automated. We successfully used ProVerif to obtain the first
mechanized analysis of the Direct Anonymous Attestation (DAA)
protocol. The analysis in particular required us to devise novel
abstractions of sophisticated cryptographic security definitions based
on interactive games.
Secure Hybrid Encryption from Weakened Key Encapsulation
We put forward a new paradigm for building hybrid encryption schemes
from constrained chosen-ciphertext secure (CCCA) key-encapsulation mechanisms (KEMs) plus authenticated symmetric encryption. Constrained chosen-ciphertext security is a new security notion for KEMs that we propose. CCCA has less demanding security requirements than standard chosen-ciphertext (CCA) security (since it requires the adversary to have a certain plaintext-knowledge when making a decapsulation query) yet we can prove that CCCA is sufficient for secure hybrid encryption.
Our notion is not only useful to express the Kurosawa-Desmedt public-key encryption scheme and its generalizations to hash-proof systems in an abstract KEM/DEM security framework. It also has a very constructive appeal, which we demonstrate with a new encryption scheme
whose security relies on a class of intractability assumptions that we show (in the generic group model) strictly weaker than the Decision Diffie-Hellman (DDH) assumption. This appears to be the first practical public-key encryption scheme in the literature from an
algebraic assumption strictly weaker than DDH.
The Effectiveness of Receipt-Based Attacks on ThreeBallot
The ThreeBallot voting system is an end-to-end (E2E) voter-verifiable voting system. Each voter fills out three ballots according to a few simple rules and takes a copy of one of them home as a receipt for verification purposes. All ballots are posted on a public bulletin board so that any voter may verify the result. In this paper we investigate the effectiveness of attacks using the voter's receipt and the bulletin board. Focusing on two-candidate races, we determine thresholds for when the voter's vote can be reconstructed from a receipt, and when a coercer can effectively verify if a voter followed instructions by looking for pre-specified patterns on the bulletin board. Combining these two results allows us to determine safe ballot sizes that resist known attacks. We also generalize a previous observation that an individual receipt can leak information about a voter's choices.
Faster addition and doubling on elliptic curves
Edwards recently introduced a new normal form for elliptic curves.
Every elliptic curve over a non-binary field
is birationally equivalent to a curve in Edwards form
over an extension of the field,
and in many cases over the original field.
This paper presents fast
explicit formulas (and register allocations)
for group operations on an Edwards curve.
The algorithm for doubling
uses only 3M+4S, i.e., 3 field multiplications and 4 field squarings.
If curve parameters are chosen to be small
then the algorithm for mixed addition uses only 9M+1S
and the algorithm for non-mixed addition uses only 10M+1S.
Arbitrary Edwards curves can be handled at the cost of just one extra
multiplication by a curve parameter.
For comparison,
the fastest algorithms known for the popular ``a_4=-3 Jacobian'' form
use 3M+5S for doubling;
use 7M+4S for mixed addition;
use 11M+5S for non-mixed addition;
and use 10M+4S for non-mixed addition when one input has been added
before.
The explicit formulas for non-mixed addition on an Edwards curve
can be used for doublings at no extra cost,
simplifying protection against side-channel attacks.
Even better, many elliptic curves
(approximately 1/4 of all isomorphism classes of elliptic curves over a non-binary finite field)
are birationally equivalent---over the original field---to Edwards curves where
this addition algorithm works for all pairs of curve points,
including inverses, the neutral element, etc.
This paper contains an
extensive comparison of different forms of elliptic curves and
different coordinate systems for the basic group operations
(doubling, mixed addition, non-mixed addition, and unified addition)
as well as higher-level operations such as multi-scalar multiplication.
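The unified behavior described above (the same addition formulas also performing doublings) can be seen directly in the affine Edwards addition law on $x^2 + y^2 = 1 + dx^2y^2$. The sketch below uses toy values for $p$ and $d$ chosen for illustration only; the operation counts quoted in the abstract refer to inversion-free projective formulas, not this affine form:

```python
# The unified Edwards addition law over F_p, in affine coordinates.
p = 1009          # toy prime, for illustration only
d = 11            # assumed curve parameter (hypothetical toy value)

def inv(a):
    return pow(a, p - 2, p)          # Fermat inversion in F_p

def on_curve(P):
    x, y = P
    return (x * x + y * y - 1 - d * x * x * y * y) % p == 0

def add(P, Q):
    # One formula for addition AND doubling (P == Q): that is the
    # side-channel-friendly "unified" property.
    x1, y1 = P
    x2, y2 = Q
    t = d * x1 * x2 * y1 * y2 % p
    return ((x1 * y2 + y1 * x2) * inv((1 + t) % p) % p,
            (y1 * y2 - x1 * x2) * inv((1 - t) % p) % p)

# Find a curve point by brute force, avoiding exceptional denominators.
P = next((x, y) for x in range(1, p) for y in range(p)
         if on_curve((x, y)) and d * x * x * y * y % p not in (1, p - 1))
O = (0, 1)                           # neutral element
negP = ((-P[0]) % p, P[1])           # inverse of P
print(add(P, O) == P)                # identity law holds
print(add(P, negP) == O)             # inverses cancel
print(on_curve(add(P, P)))           # doubling with the same formula
```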
Solving MRHS linear equations
A new method for solving algebraic equation systems common in
cryptanalysis is proposed. Our method differs from the others in
that the equations are not represented as multivariate polynomials,
but as a system of Multiple Right Hand Sides linear equations. The
method was tested on scaled versions of the AES. The results
significantly surpass what was previously achieved with Gröbner
basis related algorithms.
Provably Secure Framework for Information Aggregation in Sensor Networks
Information aggregation is an important operation in wireless sensor
networks, executed for the purpose of monitoring and reporting environmental data. Due to the performance constraints of sensor nodes, the in-network form of aggregation is especially attractive, since it saves expensive resources during frequent network queries. The easy accessibility of networks and nodes, and the near-total absence of physical protection against corruption, pose serious challenges
to the security of the aggregation process. In particular, protection against attacks aiming to falsify the aggregated result is considered to be of prime importance.
In this paper we propose a novel security model for the aggregation process based on well-established cryptographic techniques, focusing on the scenario with a single aggregator node. To show the soundness and feasibility of our definitions, we describe a generic practical approach that achieves provable security against node corruptions during the aggregation process, based solely on symmetric cryptographic primitives. To the best of our knowledge, this is the first paper that aims to combine the paradigm of provable security in the cryptographic sense with the task of information aggregation in WSNs.
Analysis of countermeasures against access driven cache attacks on AES
Cache attacks on implementations of cryptographic algorithms have turned out to be very powerful.
Progress in processor design, e.g., hyperthreading, requires adapting models for tampering or side-channel attacks to cover cache attacks as well.
Hence, in this paper we present a rather general model for cache attacks.
Our model is stronger than recently used ones.
We introduce the notions of information leakage and so-called resistance to analyze the security of several implementations of AES.
Furthermore, we analyze how to use random permutations to protect against cache attacks.
By providing a successful attack on an AES implementation protected by random permutations we show that random permutations used in a straightforward manner are not enough to protect against cache attacks.
Hence, to improve upon the security provided by random permutations, we describe the property a permutation must have in order to prevent the leakage of some key bits through cache attacks.
Using a permutation having this property forces an adversary to consider several rounds of the cipher.
This increases the complexity of any cache attack considerably.
We also describe how to implement our countermeasure efficiently.
The method to do so is of independent interest, since it alone can also be used to protect against cache attacks.
Moreover, combining both countermeasures allows for a trade-off between security and efficiency.
A Pollard-like pseudorandom number generator over EC
In this short paper we propose a pseudorandom number generator over an elliptic curve (EC) based on a Pollard-like method. In contrast to the well-known Elliptic Curve Random Number Generator (see, e.g., the ANSI and NIST draft standards), our generator is based on a random walk over the group of EC points, as in the original Pollard's rho algorithm, and only slightly resembles the linear congruential generator over elliptic curves. Compared to other approaches, the method decreases the cost of generating pseudorandom numbers. This generator could be used in resource-constrained devices like smart cards, which are already equipped with EC-based tools for other cryptographic purposes.
On solving sparse algebraic equations over finite fields II
A system of algebraic equations over a finite field is called sparse
if each equation depends on a small number of variables. Efficiently
finding solutions to such a system is an underlying hard problem in
the cryptanalysis of modern ciphers. In this paper the deterministic
Agreeing-Gluing algorithm, introduced earlier by Raddum and Semaev for
solving such equations, is studied. Its expected
running time on uniformly random instances of the problem is rigorously estimated. This estimate is at present the best theoretical
bound on the complexity of solving average instances of the above
problem; in particular, it significantly improves on our previous results. In characteristic 2 we observe a striking difference
from the worst-case complexity provided by SAT solving algorithms.
Lossy Trapdoor Functions and Their Applications
We propose a new general primitive called lossy trapdoor
functions (lossy TDFs), and realize it under a variety of different
number theoretic assumptions, including hardness of the decisional
Diffie-Hellman (DDH) problem and the worst-case hardness of
standard lattice problems.
Using lossy TDFs, we develop a new approach for constructing many
important cryptographic primitives, including standard trapdoor
functions, CCA-secure cryptosystems, collision-resistant hash
functions, and more. All of our constructions are simple, efficient,
and black-box.
Taken all together, these results resolve some long-standing open
problems in cryptography. They give the first known (injective)
trapdoor functions based on problems not directly related to integer
factorization, and provide the first known CCA-secure cryptosystem
based solely on worst-case lattice assumptions.
A Framework for Iterative Hash Functions - HAIFA
Since the seminal works of Merkle and Damgard on the iteration of compression functions, hash functions were built from compression functions using the Merkle-Damgard construction. Recently, several flaws in this construction were identified, allowing for pre-image attacks and second pre-image attacks on such hash functions even when the underlying compression functions are secure.
In this paper we propose the HAsh Iterative FrAmework (HAIFA). Our framework can fix many of the flaws while supporting several additional properties such as defining families of hash functions and supporting variable hash size. HAIFA allows for an online computation of the hash function in one pass with a fixed amount of memory independently of the size of the message.
Besides our proposal, the recent attacks have initiated research on the way compression functions are to be iterated. We show that most recent proposals, such as randomized hashing, the enveloped Merkle-Damgard construction, and the RMC and ROX modes, can all be instantiated as part of HAIFA.
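The HAIFA interface can be sketched with a toy instantiation. Here SHA-256 merely stands in for a compression function and the padding is simplified, so this illustrates only the counter-and-salt interface, not the actual HAIFA specification:

```python
import hashlib

# A toy HAIFA-style iteration. Compared with plain Merkle-Damgard, the
# compression function additionally absorbs (i) the number of message bits
# hashed so far and (ii) a salt; the counter is what blocks long-message
# second preimage and herding tricks, and the salt defines a hash family.
BLOCK = 64

def haifa_hash(msg: bytes, salt: bytes, iv: bytes = b"\x00" * 32) -> bytes:
    # Simplified padding: append 0x80, then zeros to a block boundary.
    padded = msg + b"\x80" + b"\x00" * (-(len(msg) + 1) % BLOCK)
    h = iv
    for i in range(0, len(padded), BLOCK):
        bits_so_far = min(len(msg), i + BLOCK) * 8   # the HAIFA counter input
        h = hashlib.sha256(h + padded[i:i + BLOCK]
                           + bits_so_far.to_bytes(8, "big") + salt).digest()
    return h

d1 = haifa_hash(b"hello world", b"salt-A")
d2 = haifa_hash(b"hello world", b"salt-B")
print(d1 != d2)   # different salts yield different family members
```

The one-pass, fixed-memory computation mentioned in the abstract is visible here: each block is consumed once and only the chaining value is retained.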
Cryptanalysis of a class of cryptographic hash functions
We apply new cryptanalytical techniques to perform the generic multi-block multicollision, second preimage and herding attacks on the Damgård-Merkle hash functions with linear-XOR/additive checksums. The computational work required to perform these attacks on the Damgård-Merkle hash functions with linear-XOR/additive checksum of message blocks (GOST), intermediate states (\textbf{3C}, MAELSTROM-0, F-Hash) or both is only a little more than what is required on the Damgård-Merkle hash functions. Our generic attacks on GOST answers the open question of Hoch and Shamir at FSE 2006 on the security of the iterated hash functions with the linear mixing of message blocks.
Prolific Codes with the Identifiable Parent Property
Let C be a code of length n over an alphabet of size q. A word
d is a descendant of a pair of codewords x,y if
d_i lies in \{x_i ,y_i \} for 1 <= i <= n. A code C
is an identifiable parent property (IPP) code if the following
property holds. Whenever we are given C and a descendant d of
a pair of codewords in C, it is possible to determine at least one
of these codewords.
The paper introduces the notion of a prolific IPP code. An IPP code is
prolific if all q^n words are descendants. It is shown that
linear prolific IPP codes fall into three infinite (`trivial')
families, together with a single sporadic example which is ternary of
length 4. There are no known examples of prolific IPP codes which
are not equivalent to a linear example: the paper shows that for most
parameters there are no prolific IPP codes, leaving a relatively small
number of parameters unsolved. In the process the paper obtains upper
bounds on the size of a (not necessarily prolific) IPP code which are
better than previously known bounds.
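The descendant and prolificacy definitions translate directly into a brute-force check for tiny parameters. A sketch (the length-2 binary repetition code below is our own toy example, not one of the families classified in the paper):

```python
from itertools import product

# d is a descendant of codewords x, y if every coordinate d_i lies in
# {x_i, y_i}. A code is prolific when ALL q^n words arise as descendants
# of some pair of codewords.
def is_descendant(d, x, y):
    return all(di in (xi, yi) for di, xi, yi in zip(d, x, y))

def is_prolific(code, q, n):
    descendants = set()
    for x in code:
        for y in code:
            # All coordinate-wise mixes of x and y are descendants.
            descendants.update(product(*[{xi, yi} for xi, yi in zip(x, y)]))
    return descendants == set(product(range(q), repeat=n))

# The binary repetition code of length 2 is prolific: the pair (00, 11)
# already yields every word of {0,1}^2.
rep = [(0, 0), (1, 1)]
print(is_prolific(rep, 2, 2))                  # True
print(is_descendant((0, 1), rep[0], rep[1]))   # True
```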
`Good' Pseudo-Random Binary Sequences from Elliptic Curves
Some families of binary sequences are constructed from elliptic curves. Such sequences are shown to have strong pseudorandom properties, with `small' well-distribution measure and `small' correlation measure of `small' order, both of which were introduced by Mauduit and Sárközy to analyze the pseudo-randomness
of binary sequences.
Group-based Proxy Re-encryption scheme
Recently, proxy re-encryption schemes have received much attention. In this paper, we propose a proxy re-encryption scheme used to divert ciphertexts from one group to another. The scheme is bidirectional, and any member can independently decrypt ciphertexts encrypted to its group. We discuss the security of the proposed scheme and show that it withstands chosen ciphertext attack in the standard model.
Two-Tier Signatures, Strongly Unforgeable Signatures, and Fiat-Shamir without Random Oracles
We show how the Fiat-Shamir transform can be used to convert three-move identification protocols into two-tier signature schemes (a primitive we define) with a proof of security that makes a standard assumption on the hash function rather than modeling it as a random oracle. The result requires security of the starting protocol against concurrent attacks. We can show that numerous protocols have the required properties and so obtain numerous efficient two-tier schemes. Our first application is an efficient transform of any unforgeable signature scheme into a strongly unforgeable one, which uses as a tool any two-tier scheme. (This extends work of Boneh, Shen and Waters whose transform only applies to a limited class of schemes.) The second application is new one-time signature schemes that, compared to one-way function based ones of the same computational cost, have smaller key and signature sizes.
Cryptanalysis of a Hash Function Proposed at ICISC 2006
A simple method for constructing collisions for Shpilrain’s polynomial-based hash function from ICISC 2006 is presented. The attack relies on elementary linear algebra and can be considered as practical: For the parameters suggested, we give a specific collision, computed by means of a computer algebra system.
Hash Functions in the Dedicated-Key Setting: Design Choices and MPP Transforms
In the dedicated-key setting, one starts with a compression function f:{0,1}^k x {0,1}^{n+d} -> {0,1}^n and builds a family of hash functions H^f:K x M -> {0,1}^n indexed by a key space K. This is different from the more traditional design approach used to build hash functions such as MD5 or SHA-1, in which compression functions and hash functions do not have dedicated key inputs. We explore the benefits and drawbacks of building hash functions in the dedicated-key setting (as compared to the more traditional approach), highlighting several unique features of the former. Should one choose to build hash functions in the dedicated-key setting, we suggest utilizing multi-property-preserving (MPP) domain extension transforms. We analyze seven existing dedicated-key transforms with regard to the MPP goal and propose two simple new MPP transforms.
Secret Ballot Elections with Unconditional Integrity
This paper describes in detail a voting scheme which allows voters to be sure that whatever they see in the booth will be included correctly in the outcome. It presents a rigorous and understandable model of requirements for election systems, states formally the properties of the system, and proves them. As a step towards understanding the full 2D voting system, it also presents a simpler 1D system.
Voting with Unconditional Privacy by Merging Prêt-à-Voter and PunchScan
We present a detailed comparison of
the Prêt-à-Voter and Punchscan
protocols for booth voting.
We also describe a simpler variation
that keeps the ballot layout of Prêt-à-Voter but borrows
the cryptography from Punchscan,
which is based on any commitment scheme.
By using unconditionally hiding commitments
we obtain a conceptually very simple voting protocol with unconditional privacy.
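The unconditional-privacy claim rests on the hiding side of the commitment scheme. A Pedersen commitment is the textbook example of an unconditionally hiding commitment; the sketch below uses toy-sized, hypothetical parameters (a small safe-prime group) purely for illustration and is not the protocol of the paper:

```python
import secrets

# Toy safe-prime group: p = 2q + 1 with q prime; commitments live in the
# order-q subgroup of squares mod p.  Real deployments use large groups.
P = 1019          # safe prime (hypothetical toy parameter)
Q = (P - 1) // 2  # 509, prime
G = 4             # 2^2, a generator of the order-q subgroup
H = 9             # 3^2, another subgroup element; log_G(H) must stay unknown

def commit(m: int, r: int) -> int:
    """Pedersen commitment C = G^m * H^r mod P (unconditionally hiding)."""
    return (pow(G, m % Q, P) * pow(H, r % Q, P)) % P

def verify(c: int, m: int, r: int) -> bool:
    """Check an opening (m, r) against commitment c."""
    return c == commit(m, r)

# Usage: commit to a ballot value, later open it.
r = secrets.randbelow(Q)
c = commit(1, r)           # commit to ballot value 1
assert verify(c, 1, r)     # opens correctly
assert not verify(c, 0, r) # cannot be opened to a different value
```

Hiding is unconditional because, for a fresh random r, the commitment is uniformly distributed in the subgroup regardless of m; binding, by contrast, holds only computationally.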
Affine Precomputation with Sole Inversion in Elliptic Curve Cryptography
This paper presents a new approach to precompute all odd points $[3]P, [5]P,\ldots, [2k-1]P$, $k \geq 2$ on an elliptic curve over $\mathbb{F}_p$. These points are required for the efficient evaluation of a scalar multiplication, the most important operation in elliptic curve cryptography. The proposed method precomputes the points in affine coordinates and needs only a single field inversion for the computation. The new method is superior to all known methods that also use one field inversion. Compared to methods that require several field inversions for the precomputation, the proposed method is faster for a broad range of ratios of field inversions to field multiplications. The proposed method benefits especially from large inversion-to-multiplication ratios such as those occurring on smart cards.
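Precomputing many affine points with only one inversion is typically achieved with Montgomery's simultaneous-inversion trick, which trades n inversions for one inversion plus about 3(n-1) multiplications. A minimal sketch over F_p (illustrative only, not the paper's full precomputation algorithm):

```python
def batch_invert(xs, p):
    """Invert every element of xs modulo prime p using a single inversion
    (Montgomery's simultaneous-inversion trick)."""
    n = len(xs)
    prefix = [1] * (n + 1)
    for i, x in enumerate(xs):          # prefix[i] = x_0 * ... * x_{i-1} mod p
        prefix[i + 1] = prefix[i] * x % p
    inv_all = pow(prefix[n], -1, p)     # the single field inversion
    invs = [0] * n
    for i in range(n - 1, -1, -1):      # peel off one factor at a time
        invs[i] = prefix[i] * inv_all % p
        inv_all = inv_all * xs[i] % p
    return invs

p = 10007                               # a small prime, for illustration
xs = [3, 5, 1234, 9999]
assert all(x * y % p == 1 for x, y in zip(xs, batch_invert(xs, p)))
```

In an elliptic-curve precomputation, the `xs` would be the denominators of the point-addition slopes, so all the affine results come out of one shared inversion.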
CRUST: Cryptographic Remote Untrusted Storage without Public Keys
This paper presents CRUST, a stackable file system layer designed to provide secure file sharing over remote untrusted storage systems. CRUST is intended to be layered over insecure network file systems without changing the existing systems.
In our approach, data at rest is kept encrypted, and data integrity and access control are provided by cryptographic means. Our design completely avoids public-key cryptography operations and uses more efficient symmetric-key alternatives to achieve improved performance. As a generic and self-contained system, CRUST includes its own in-band key distribution mechanism and does not rely on any special capabilities of the server or the clients.
We have implemented CRUST as a Linux file system and shown that it performs comparably with typical underlying file systems, while providing significantly stronger security.
Filling the Gap between Voters and Cryptography in e-Voting
Cryptography is an important tool in the design and implementation of electronic voting schemes, for it provides the property of verifiability, which traditional voting does not. But in real life, most voters can neither understand the deep theory behind cryptographic e-voting nor perform the complicated cryptographic computations it requires. An e-voting system is presented in this paper to bridge the gap between cryptographic theory and practice. It combines the advantages of Moran-Naor's voting scheme and of voting schemes based on homomorphic encryption. It makes use of cryptographic techniques, but hides the details of the cryptographic computation from voters. Voters can be convinced that their ballots are cast as intended, and the tally can be verified in public. Compared with Moran-Naor's voting scheme, the new system has three advantages: the ballots can be recovered when a voting machine breaks down, the costly cut-and-choose zero-knowledge proofs for shuffling votes made by the voting machine are avoided, and the partial tally result in each voting machine is kept secret.
Which Languages Have 4-Round Zero-Knowledge Proofs?
We show, unconditionally, that if a language $L$ has a 4-round, black-box, computational zero-knowledge proof system with negligible soundness error, then $\bar L \in MA$. Assuming the polynomial hierarchy does not collapse, this means, in particular, that $NP$-complete languages do not have 4-round zero-knowledge proofs (at least with respect to black-box simulation).
The Power of Proofs-of-Possession: Securing Multiparty Signatures against Rogue-Key Attacks
Multiparty signature protocols need protection against rogue-key attacks, made possible whenever an adversary can choose its public key(s) arbitrarily. For many schemes, provable security has only been established under the knowledge of secret key (KOSK) assumption where the adversary is required to reveal the secret keys it utilizes. In practice, certifying authorities rarely require the strong proofs of knowledge of secret keys required to substantiate the KOSK assumption. Instead, proofs of possession (POPs) are required and can be as simple as just a signature over the certificate request message. We propose a general registered key model, within which we can model both the KOSK assumption and in-use POP protocols. We show that simple POP protocols yield provable security of Boldyreva's multisignature scheme [11], the LOSSW multisignature scheme [28], and a 2-user ring signature scheme due to Bender, Katz, and Morselli [10]. Our results are the first to provide formal evidence that POPs can stop rogue-key attacks.
Last updated: 2007-09-18
Efficiency Improvement for NTRU
The NTRU encryption scheme is an interesting alternative to well-established encryption schemes such as RSA, ElGamal, and ECIES. The security of NTRU relies on the hardness of computing short lattice vectors and thus is a promising candidate for being quantum computer resistant. There has been extensive research on efficient implementation of the NTRU encryption scheme. In this paper, we present a new algorithm for enhancing the performance of NTRU. The proposed method is between $11$\% and $23$\% faster on average than the best previously known method. We also present a highly efficient implementation of NTRU within the Java Cryptography Architecture.
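The operation such implementations optimize is multiplication in the NTRU ring Z_q[x]/(x^N - 1), i.e. cyclic convolution of coefficient vectors. A naive O(N^2) sketch, with toy parameters (the paper's algorithm improves on exactly this step):

```python
def ring_mul(a, b, N, q):
    """Multiply polynomials a, b in Z_q[x]/(x^N - 1): cyclic convolution
    of their coefficient lists, coefficients reduced mod q."""
    c = [0] * N
    for i in range(N):
        for j in range(N):
            c[(i + j) % N] = (c[(i + j) % N] + a[i] * b[j]) % q
    return c

# x * x^(N-1) wraps around, since x^N = 1 in this ring:
assert ring_mul([0, 1, 0], [0, 0, 1], N=3, q=7) == [1, 0, 0]
```

Production NTRU code exploits the sparseness of the secret polynomials to replace the inner loop with a few shifted additions, which is where the reported speedups come from.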
Certificateless Public Key Encryption Secure against Malicious KGC Attacks in the Standard Model
Recently, Au et al. pointed out a seemingly neglected security concern for certificateless public key encryption (CL-PKE) schemes: a malicious key generation center (KGC) can compromise the confidentiality of messages by embedding extra trapdoors in the system parameters. Although some schemes are secure against such an attack, they require random oracles to prove their security. In this paper, we first show that two existing CL-PKE schemes without random oracles are not secure against a malicious KGC; we then propose the first CL-PKE scheme secure against malicious KGC attacks, with a proof in the standard model.
New Form of Permutation Bias and Secret Key Leakage in Keystream Bytes of RC4
Consider the permutation $S$ in RC4. Roos pointed out in 1995 that after the Key Scheduling Algorithm (KSA) of RC4, each of the initial bytes of the permutation, i.e., $S[y]$ for small values of $y$, is biased towards some linear combination of the secret key bytes. In this paper, for the first time we show that the bias can be observed in $S[S[y]]$ too. Based on this new form of permutation bias after the KSA and other related results, a complete framework is presented to show that many keystream output bytes of RC4 are significantly biased towards several linear combinations of the secret key bytes. The results do not assume any condition on the secret key. We find new biases in the initial as well as in the 256-th and 257-th keystream output bytes. For the first time biases at such later stages are discovered without any knowledge of the secret key bytes. We also identify that these biases propagate further, once the information for the index $j$ is revealed.
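For readers who want to reproduce such biases empirically, the permutation S and the keystream bytes in question come from RC4's Key Scheduling Algorithm (KSA) and Pseudo-Random Generation Algorithm (PRGA), which fit in a few lines:

```python
def rc4_ksa(key):
    """RC4 Key Scheduling Algorithm: produce the initial permutation S."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    return S

def rc4_keystream(key, n):
    """Run the PRGA for n steps and return the keystream bytes."""
    S = rc4_ksa(key)
    i = j = 0
    out = []
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return bytes(out)

# Well-known test vector: key "Key" yields keystream eb 9f 77 81 ...
assert rc4_keystream(b"Key", 4).hex() == "eb9f7781"
```

Measuring the bias of S[S[y]] amounts to running `rc4_ksa` over many random keys and tallying how often S[S[y]] hits the predicted linear combination of key bytes.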
An Efficient One-move Nominative Signature Scheme
A signer in a Nominative Signature (NS) scheme can arbitrarily choose a nominee, then jointly generate a signature in such a way that the signature can only be verified with the nominee's consent. NS is
particularly useful in user certification systems. Currently, the only secure NS scheme available requires multi-round communications between the nominator and the nominee during signature generation. This implies that an NS-based user certification system requires a certification issuer to interact with a user using a complicated multi-round protocol for certificate issuance. It remains an open problem to construct an efficient and non-interactive NS scheme. In this paper, we solve this problem by proposing the first efficient one-move (i.e. non-interactive) NS scheme. In addition, we propose an enhanced security requirement called Strong Invisibility, and prove that our scheme satisfies this strong security requirement.
Algebraic Immunity Hierarchy of Boolean Functions
Algebraic immunity of Boolean functions is a very important concept in the recently introduced algebraic attacks on stream ciphers. For an $n$-variable Boolean function $f$, the algebraic immunity $AI_n(f)$ takes values in $\{0,1,\ldots,\lceil\frac{n}{2}\rceil\}$. For every $k$ in this range, let $B_{n,k}$ denote the set of all $n$-variable Boolean functions with algebraic immunity $k$; it is known that $B_{n,k}$ is always non-empty. Algebraic immunity thus induces a hierarchy of Boolean functions. Trivially, $|B_{n,0}|=2$. In general, very few results are known about the integer sequence $|B_{n,k}|$, $k=0,1,\ldots,\lceil\frac{n}{2}\rceil$. In this paper, we give an explicit formula for $|B_{n,1}|$; that is, we obtain an exact formula for the number of Boolean functions with algebraic immunity one. This is the first exact formula for a term of the above sequence. We also give a tight upper bound on the nonlinearity of Boolean functions with algebraic immunity one.
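For small n, the algebraic immunity of a concrete function can be computed by brute force: AI(f) is the least d such that some nonzero g of degree at most d annihilates f or f+1, which reduces to a GF(2) rank computation over the monomials of degree at most d. A pure-Python sketch (illustrative only, exponential in n):

```python
from itertools import combinations, product

def has_annihilator(support, n, d):
    """Is there a nonzero g of degree <= d vanishing on every point in support?"""
    monos = [s for k in range(d + 1) for s in combinations(range(n), k)]
    # One GF(2) linear constraint per support point: g(x) = 0.
    rows = [[1 if all(x[i] for i in s) else 0 for s in monos] for x in support]
    # Gaussian elimination: a nonzero solution exists iff rank < #monomials.
    rank, m = 0, len(monos)
    for col in range(m):
        piv = next((r for r in range(rank, len(rows)) if rows[r][col]), None)
        if piv is None:
            continue
        rows[rank], rows[piv] = rows[piv], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank < m

def algebraic_immunity(f, n):
    """f maps a point of {0,1}^n (a tuple) to 0/1; returns AI_n(f)."""
    pts = list(product([0, 1], repeat=n))
    supp = [x for x in pts if f(x)]          # annihilators of f vanish here
    co_supp = [x for x in pts if not f(x)]   # annihilators of f+1 vanish here
    for d in range((n + 1) // 2 + 1):
        if has_annihilator(supp, n, d) or has_annihilator(co_supp, n, d):
            return d

maj3 = lambda x: int(sum(x) >= 2)            # majority on 3 variables
assert algebraic_immunity(maj3, 3) == 2      # optimal: ceil(3/2)
```

As a sanity check, any non-constant affine function has algebraic immunity exactly one, consistent with the $|B_{n,1}|$ counting discussed above.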
UICE: A High-Performance Cryptographic Module for SoC and RFID Applications
In order to overcome proprietary algorithms controlled by system manufacturers, a free cryptographic module, the Universal Immobilizer Crypto Engine (UICE), is proposed. The UICE algorithm is tailored to 8-bit microprocessor architectures and is therefore very fast in both software and hardware. The dedicated hardware implementation leads to a small gate count because the registers for input and output are shared. The important non-linear function, here an 8 x 8 S-Box, may be built as a gate array or a small ROM, with the advantage of flexibility. Several statistical and random-number tests have been performed in order to analyze the strength of the algorithm. So far, no weakness has been found after ten rounds of encryption.
Although this cryptographic module was intentionally developed for Radio-Frequency Identification (RFID) systems, it is a proper choice for all systems needing embedded cryptography such as SoC with bus encryption or firmware to be secured.
As RFID systems have become commonplace in access control and security applications, the usage and importance of cryptographic co-processors in RFID transponder devices have grown significantly. Improved vehicle security systems, also known as immobilizers, are required due to increased vehicle theft worldwide. Such devices provide high security at low cost and low power.
A Forward-Secure Signature with Backward-Secure Detection
This paper enhances the security of Abdalla and Reyzin's forward-secure signature scheme with backward-secure detection. In the proposed scheme, we embed a hash chain into the forward-secure signature scheme, achieving not only forward security but also backward security for the digital signature.
Aspects of Pairing Inversion
We discuss some applications of the pairing inversion problem and outline some potential approaches for solving it. Our analysis of these approaches gives further evidence that pairing inversion is a hard problem.
Last updated: 2007-06-29
Efficient Identity Based Signature in Standard Model
In this paper, we present an efficient signature scheme without random oracles using Waters' private key construction. Our scheme has a shorter public parameter size than the Paterson-Schuldt signature scheme. The signature space of our basic scheme consists of three group elements; we further show that it can be reduced to two group elements. The security of our signature scheme is proved in the standard model under the adaptive identity security notion.
Last updated: 2007-10-29
Fully Secure Proxy Re-Encryption without Random Oracles
In a proxy re-encryption scheme, a semi-trusted proxy, with some
additional information, can transform a ciphertext under Alice's
public key into a new ciphertext under Bob's public key on the same
message, but cannot learn any information about the messages
encrypted under the public key of either Alice or Bob. In this
paper, we propose two new unidirectional proxy re-encryption
schemes, where a proxy can transform a ciphertext for Alice into a
new ciphertext for Bob, but not vice versa. Note that,
unidirectional proxy re-encryption is more powerful than
bidirectional one, since a bidirectional scheme can always be
implemented by a unidirectional one. Furthermore, both schemes
can be proved, \emph{in the standard model}, to be chosen-ciphertext
secure under the Decisional Bilinear Inverse Diffie-Hellman assumption
and master-key secure under the Extended Discrete Logarithm assumption.
To the best of our knowledge, our proposals are the first fully secure
(CCA-secure and master-key secure) proxy re-encryption schemes in
the standard model.
Choosing the correct elliptic curve in the CM method
We give an elementary way to distinguish between the twists of
an ordinary elliptic curve $E$ over $\Fp$
in order to identify the one with $p+1-2U$ points,
when $p=U^2+\d V^2$ with $2U, 2V\in \Z$ and
$E$ is constructed using the CM method
for finding elliptic curves with a prescribed number of points.
Our algorithms consist in most cases of
reading off simple congruence conditions
on $U$ and $V$ modulo $4$.
A Verifiable Voting Protocol based on Farnel
Farnel is a voting system proposed in 2001 in which each voter signs a ballot. It uses two ballot boxes to avoid the association between a voter and a vote. In this paper we first point out a flaw in the ThreeBallot system proposed by Rivest that seems to have gone unnoticed so far: it reveals statistical information about who is winning the election. Then, trying to resolve this and other flaws, we present a new, voter-verifiable version of the Farnel voting system in which voters retain copies of ballot IDs as receipts.
A Cryptographic Model for Branching Time Security Properties -- the Case of Contract Signing Protocols
Some cryptographic tasks, such as contract signing and
other related tasks, need to ensure complex, branching
time security properties. When defining such properties
one needs to deal with subtle problems regarding the
scheduling of non-deterministic decisions, the delivery
of messages sent on resilient (non-adversarially
controlled) channels, fair executions (executions where
no party, whether honest or dishonest, is unreasonably
precluded from performing its actions), and defining
strategies of adversaries against all possible
non-deterministic choices of parties and arbitrary
delivery of messages via resilient channels. These
problems are typically not addressed in cryptographic
models and these models therefore do not suffice to
formalize branching time properties, such as those
required of contract signing protocols.
In this paper, we develop a cryptographic model that deals with
all of the above problems. One central feature of our model is a
general definition of fair scheduling which not only formalizes fair
scheduling of resilient channels but also fair scheduling of actions
of honest and dishonest principals. Based on this model and the
notion of fair scheduling, we provide a definition of a
prominent branching time property of contract signing
protocols, namely balance, and give the first
\emph{cryptographic} proof that the Asokan-Shoup-Waidner
two-party contract signing protocol is balanced.
Efficient and Provably-Secure Certificateless Short Signature Scheme from Bilinear Pairings
In this paper, we present a certificateless signature (CLS) scheme that is proved to be secure in the random oracle model under the hardness assumptions of k-CAA and Inv-CDHP. Our scheme upholds all desirable properties of previous CLS schemes, and requires general cryptographic hash functions instead of the MapToPoint hash function, which is inefficient and probabilistic. Furthermore, our scheme requires less computation and is significantly more efficient than all known CLS schemes, and the signatures generated by our scheme are approximately 160 bits, the shortest certificateless signatures so far. It can therefore be used widely, especially in low-bandwidth communication environments.
Randomness Extraction via Delta-Biased Masking in the Presence of a Quantum Attacker
Randomness extraction is of fundamental importance for information-theoretic cryptography. It allows one to transform a raw key, about which an attacker has some limited knowledge, into a fully secure random key on which the attacker has essentially no information.
We show a new randomness-extraction technique which works even when the attacker has quantum information on the raw key. Randomness extraction is done by XORing a so-called delta-biased mask into the raw key. To date, only very few techniques are known to work against a quantum attacker, in sharp contrast to the classical (non-quantum) setting, which is much better understood and for which a vast number of different randomness-extraction techniques are known.
We show two applications of the new result. We first show how to encrypt a long message with a short key, information-theoretically secure against a quantum attacker, provided that the attacker has enough quantum uncertainty on the message. This generalizes the concept of entropically-secure encryption to the case of a quantum attacker.
As a second application, we show how the new randomness-extraction technique allows error correction to be performed without leaking partial information to a quantum attacker. Such a technique is useful in settings where the raw key may contain errors, since standard error-correction techniques may provide the attacker with information on, say, a secret key that was used to obtain the raw key.
1. AES seems weak. 2. Linear time secure cryptography
We describe a new simple but more powerful form of linear cryptanalysis.
It appears to break AES (and undoubtedly other cryptosystems too, e.g. SKIPJACK).
The break is ``nonconstructive,'' i.e. we make it plausible
(e.g. prove it in certain approximate probabilistic models)
that a small algorithm for quickly determining AES-256 keys from plaintext-ciphertext pairs
exists -- but without constructing the algorithm. The attack's runtime is comparable
to performing $64^w$ encryptions where $w$ is the (unknown) minimum Hamming weight in certain
binary linear error-correcting codes (BLECCs) associated with AES-256. If $w < 43$ then
our attack is faster than exhaustive key search; probably $w < 10$.
(Also there should be ciphertext-only attacks if the plaintext is natural English.)
Even if this break breaks down because the underlying models inadequately approximate the real world,
we explain how AES could still contain ``trapdoors'' which would
make cryptanalysis unexpectedly easy for anybody who knew the trapdoor.
If AES's designers had inserted such a trapdoor, it could be very
easy for them to convince us of that.
But if none exist, then it is probably infeasibly difficult for them to convince us of \emph{that}.
We then discuss how to use the theory of BLECCs to
build cryptosystems provably
1. not containing trapdoors of this sort,
2. secure against our strengthened form of linear cryptanalysis,
3. secure against ``differential'' cryptanalysis,
4. secure against D.J.Bernstein's timing attack.
Using this technique we prove a fundamental theorem:
it is possible to thus-encrypt $n$ bits with security
$2^{cn}$, via a circuit $Q_n$ containing $\le c n$
two-input logic gates
and operating in $\le c \log n$ gate-delays, where the three $c$s denote
(possibly different) positive constants and $Q_n$ is constructible
in polynomial$(n)$ time.
At the end we give tables of useful binary codes.
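For small codes, the minimum Hamming weight $w$ that governs the claimed $64^w$ runtime can be found by exhausting all codewords. A sketch using the [7,4,3] Hamming code as a stand-in (the codes associated with AES-256 are far too large for this brute force, which is exactly why $w$ is unknown):

```python
from itertools import product

def min_weight(G):
    """Minimum Hamming weight over the nonzero codewords of the GF(2)
    linear code generated by the rows of G (brute force over 2^k - 1 words)."""
    k, n = len(G), len(G[0])
    best = n
    for msg in product([0, 1], repeat=k):
        if not any(msg):
            continue                       # skip the zero codeword
        cw = [0] * n
        for i, bit in enumerate(msg):
            if bit:
                cw = [a ^ b for a, b in zip(cw, G[i])]
        best = min(best, sum(cw))
    return best

# Generator matrix of the [7,4,3] Hamming code (systematic form):
G = [[1,0,0,0,0,1,1],
     [0,1,0,0,1,0,1],
     [0,0,1,0,1,1,0],
     [0,0,0,1,1,1,1]]
assert min_weight(G) == 3
```

For cryptographically sized codes one must fall back on probabilistic low-weight-codeword search, which is why the abstract can only conjecture bounds such as $w < 10$.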
A Note on the Ate Pairing
The Ate pairing has been suggested since it can be computed efficiently on ordinary elliptic curves with small values of the trace of Frobenius $t$. However, not all pairing-friendly elliptic curves have this property. In this paper, we generalize the Ate pairing and find a series of variations of it. We show that the shortest Miller loop of these variations can possibly be as small as $r^{1/\varphi(k)}$ on more of the pairing-friendly curves generated by the method of complex multiplication, and hence speed up the pairing computation significantly.
BEDA: Button-Enabled Device Pairing
Secure initial pairing of electronic gadgets is a challenging problem, especially considering the lack of any common security infrastructure. The main security issue is the threat of so-called Man-in-the-Middle (MiTM) attacks, whereby an attacker inserts itself into the pairing protocol by impersonating one of the legitimate parties. A number of interesting techniques have been proposed, all of which involve the user in the pairing process. However, they are inapplicable to many common scenarios where devices to be paired do not possess the required interfaces, such as displays, speakers, cameras or microphones. In this paper, we introduce BEDA (Button-Enabled Device Association), a protocol suite for securely pairing devices with minimal user interfaces. The most common and minimal interface available on a wide variety of devices is a single button. BEDA protocols can accommodate pairing scenarios where one (or even both) devices only have a single button as their "user interface". Our usability study demonstrates that BEDA protocols involve very little human burden and are quite suitable for ordinary users.
Incorporating Temporal Capabilities in Existing Key Management Schemes
The problem of key management in access hierarchies is how to assign keys to users and classes such that each user, after receiving her secret key(s), is able to {\em independently} compute access keys for (and thus obtain access to) the resources at her class and all descendant classes in the hierarchy. If user privileges are additionally time-based (as is likely in many applications), the key(s) a user receives should permit access to the resources only at the appropriate times. This paper presents a new, provably secure, and efficient solution that can be used to add time-based capabilities to existing hierarchical schemes. It achieves the following performance bounds: (i) to be able to obtain access to an arbitrary contiguous set of time intervals, a user is required to store at most 3 keys; (ii) the keys for a user can be computed by the system in constant time; (iii) key derivation by the user within the authorized time intervals involves a small constant number of inexpensive cryptographic operations; and (iv) if the total number of time intervals in the system is $n$, then the increase of the public storage space at the server due to our solution is only by a small asymptotic factor, e.g., $O(\log^* n \log\log n)$ with a small constant.
A Note on the Relay Attacks on e-passports: The Case of Czech e-passports
The threat of relay attacks on authentication protocols is often well recognized, especially for contactless applications like RFID chips. It is, therefore, a bit surprising to meet an implementation that actually encourages rather than eliminates these attacks. We present our experimental observations concerning Czech e-passports. These clearly show an inherent weakness rooted in the lower layers of ISO 14443. As the behavior is unavoidable, it raises the question of whether the e-passport should have used a different communication protocol or authentication scheme.
Last updated: 2007-11-08
PORs: Proofs of Retrievability for Large Files
In this paper, we define and explore the notion of a _proof of retrievability_ (POR). A POR enables an archive or back-up service (prover) to demonstrate to a user (verifier) that it has ``possession'' of a file F, that is, that the archive retains data sufficient for the user to retrieve F in its entirety.
A POR may be viewed as a kind of cryptographic proof of knowledge (POK), but one specially designed to handle a _large_ file (or bitstring) F. We explore POR protocols here in which the communication costs, number of memory accesses for the prover, and storage requirements of the user (verifier) are small parameters essentially independent of the length of $F$. In addition, in a POR, unlike a POK, neither the prover nor the verifier need actually have knowledge of F. PORs give rise to a new and unusual security definition.
We view PORs as an important tool for the management of semi-trusted online archives. Existing cryptographic tools help users ensure the privacy and integrity of their files once they are retrieved. It is also natural, however, for users to want to verify that archives do not delete or modify files while they are stored. The goal of a POR is to accomplish these checks {\em without users having to download the files themselves}. A POR can also provide quality-of-service guarantees, i.e., show that a file is retrievable within a certain time bound.
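One simple way to realize such a check, in the spirit of sentinel-based PORs (illustrative only, not necessarily the paper's exact protocol), is for the verifier to hide random "sentinel" blocks in the encrypted file before upload and later spot-check a few of their positions:

```python
import secrets

def prepare(file_blocks, num_sentinels):
    """Insert random sentinel blocks at random positions; the positions and
    values are the verifier's secret.  (Real schemes also encrypt/permute.)"""
    blocks = list(file_blocks)
    sentinels = {}
    for _ in range(num_sentinels):
        pos = secrets.randbelow(len(blocks) + 1)
        val = secrets.token_bytes(16)
        blocks.insert(pos, val)
        # shift previously recorded sentinel positions at or after pos
        sentinels = {(p + 1 if p >= pos else p): v for p, v in sentinels.items()}
        sentinels[pos] = val
    return blocks, sentinels

def audit(stored_blocks, sentinels, num_checks):
    """Challenge a few sentinel positions; large-scale deletion or
    modification is then caught with high probability."""
    positions = list(sentinels)[:num_checks]
    return all(stored_blocks[p] == sentinels[p] for p in positions)

blocks = [b"data%d" % i for i in range(10)]
stored, secret = prepare(blocks, 4)
assert audit(stored, secret, 4)                      # honest archive passes
assert not audit([b"\0" * 16] * len(stored), secret, 4)  # wiped archive fails
```

Because sentinels are indistinguishable from (encrypted) data blocks, an archive that discards a noticeable fraction of the file will, with high probability, discard a challenged sentinel.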
Time-Memory-Data Trade-off Attack on Stream Ciphers based on Maiorana-McFarland Functions
In this paper, we present the time-memory-data (TMD) trade-off attack on stream ciphers filtered by Maiorana-McFarland functions. This can be considered as a generalization of the time-memory-data trade-off attack of Mihaljevic and Imai on Toyocrypt. First, we substitute the filter function in Toyocrypt (which has the same size as the LFSR) with a general Maiorana-McFarland function. This allows us to apply the attack to a wider class of stream ciphers. Second, we highlight how the choice of different Maiorana-McFarland functions can affect the effectiveness of our attack. Third, we show that the attack can be modified to apply on filter functions which are smaller than the LFSR and on filter-combiner stream ciphers. This allows us to cryptanalyze other configurations commonly found in practice. Finally, filter functions with vector output are sometimes used in stream ciphers to improve the throughput. Therefore the case when the Maiorana-McFarland functions have vector output is investigated. We found that the extra speed comes at the price of additional weaknesses which make the attacks easier.
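A Maiorana-McFarland function splits its input as (x, y) and computes f(x, y) = <x, pi(y)> + g(y). The sketch below builds a small instance (n = 4) and confirms via the Walsh transform that choosing pi to be a permutation yields a bent function (all Walsh values equal to ±2^{n/2}); the particular pi and g are illustrative choices, not ones from the paper:

```python
from itertools import product

def mm_function(pi, g, m):
    """Truth table of f(x, y) = <x, pi(y)> XOR g(y) with x, y in {0,1}^m."""
    table = {}
    for x in product([0, 1], repeat=m):
        for y in product([0, 1], repeat=m):
            px = pi[y]
            table[x + y] = (sum(a & b for a, b in zip(x, px)) & 1) ^ g[y]
    return table

def walsh(f, n):
    """Walsh spectrum of a Boolean function given as a truth-table dict."""
    pts = list(product([0, 1], repeat=n))
    return [sum((-1) ** (f[x] ^ (sum(a & b for a, b in zip(w, x)) & 1))
                for x in pts) for w in pts]

m = 2
ys = list(product([0, 1], repeat=m))
pi = {y: ys[(i + 1) % len(ys)] for i, y in enumerate(ys)}  # a cyclic permutation
g = {y: 0 for y in ys}                                     # simplest choice of g
f = mm_function(pi, g, m)
assert all(abs(v) == 4 for v in walsh(f, 2 * m))           # bent: |W_f| = 2^{n/2}
```

Replacing pi with a non-injective map flattens the Walsh spectrum unevenly, which is one concrete way the "choice of different Maiorana-McFarland functions" affects an attack's effectiveness.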
Attribute Based Group Signature with Revocation
In real life, one often requires signatures from people who fulfill certain criteria, implying that they should possess specific attributes. For example, Alice might want a signature from an employee in Bob's company who is a member of the IT staff, a senior manager within the biometrics team, or at least a junior manager in the cryptography team. In such a case an Attribute Based Group Signature scheme (ABGS) could be applied. Group signature schemes are those where each member of a group can sign on behalf of the others. An ABGS scheme is a type of group signature scheme where the signing member has to have certain attributes. In [12], the authors introduced the first ABGS, but it lacked the ability to revoke. In this paper, we introduce a new scheme that enables us to remove a member from a group, or remove some of his attributes, when needed.
A Four-Component Framework for Designing and Analyzing Cryptographic Hash Algorithms
Cryptographic hash algorithms are important building blocks in cryptographic protocols, providing authentication and assurance of integrity. While many different hash algorithms are available, including MD5, Tiger, and HAVAL, it is difficult to compare them since they do not necessarily use the same techniques to achieve their security goals. This work informally describes a framework in four parts which allows different hash algorithms to be compared based on their strengths and weaknesses. By breaking down cryptographic hash algorithms into their preprocessing, postprocessing, compression function, and internal structure components, weaknesses in existing algorithms can be mitigated and new algorithms can take advantage of strong individual components.
Making Large Hash Functions From Small Compression Functions
We explore the idea of creating a hash function that produces an $s$-bit digest from a compression function with an $n$-bit output, where $s > n$. This is accomplished by truncating a hash function with a digest size of $\ell n$ bits. Our work answers the question of how large $\ell$ can be while creating an $s$-bit digest securely. We prove that our construction is secure with respect to preimage resistance and collision resistance for $s \le 2^{n/2}n$.
Long-lived digital integrity using short-lived hash functions
New collision-finding attacks on widely used cryptographic hash functions raise questions about systems that depend on certain properties of these functions for their security. Even after new and presumably better hash functions are deployed, users may have digital signatures and digital time-stamp certificates that were computed with recently deprecated hash functions. Is there any way to use a new and currently unassailable hash function to buttress the security of an old signature or time-stamp certificate?
The main purpose of this note is to remind the technical community of a simple solution to this problem that was published more than a decade ago.
Forward-secure Key Evolution in Wireless Sensor Networks
We consider a key distribution scheme for securing node-to-node communication in sensor networks. While most schemes in use are based on random predistribution, we consider a system of dynamic pairwise keys based on a design due to Ren, Tanmoy and Zhou. We design and analyze a variation of this scheme, in which capturing a node does not lead to security threats for past communication.
Instead of bit-flipping, we use a cryptographic one-way function. While this immediately guarantees forward security, it is not clear whether the pseudorandom transformation of the keys leads to subtle security risks due to a specific distribution of reachable keys, such as the existence of small attractor subspaces. (This problem does not occur for the design of Ren, Tanmoy and Zhou.) We show, in a rigorous mathematical way, that this is not the case: after a small number of steps, the probability distribution of the keys leaves no room for potential attacks.
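The one-way evolution step can be sketched directly: each epoch's pairwise key is hashed forward and the old key erased, so a node captured at epoch t holds nothing that reveals keys of earlier epochs. A minimal illustration (SHA-256 as the one-way function is our choice here, not necessarily the paper's):

```python
import hashlib

def evolve(key: bytes) -> bytes:
    """One key-evolution step: K_{t+1} = H(K_t).  After evolving, the node
    erases K_t, so a later compromise cannot expose past keys."""
    return hashlib.sha256(key).digest()

# Two nodes sharing K_0 stay in sync by counting evolution steps:
k0 = b"initial pairwise key"
k3 = evolve(evolve(evolve(k0)))     # node A at epoch 3
k1 = evolve(k0)                     # node B lagging at epoch 1
assert evolve(evolve(k1)) == k3     # B catches up and agrees with A
# Forward security: recovering k0 from k3 would require inverting SHA-256.
```

The paper's actual contribution is the distributional analysis, showing that iterating such a pseudorandom map does not funnel the keys into small attractor subspaces.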
Certificateless Ring Signatures
A ring signature scheme is a cryptographic construct that enables a signer to sign on behalf of a group of $n$ different people such that the verifier can only ensure that someone in the group signed, but not exactly whom. Ring signatures are utilized in many security applications.
It is tricky to deploy multi-user cryptographic constructs due to the complexity introduced by certificates. Specifically, ring signatures working under a traditional public key infrastructure require the transfer and verification of $n$ certificates, making the scheme inefficient in both space and time. On the other hand, the key-escrow problem of identity-based solutions calls the authenticity of the ring signature into question. This paper studies ring signatures in certificateless cryptography, which has neither certificates nor key escrow.
Designing a certificateless ring signature scheme is not entirely trivial. Many certificateless signatures require public key validity checking. In the context of ring signatures, this means both the signer and the verifier need to deal with the complexity of verifying $n$ public keys. We propose the first certificateless ring signature scheme, which requires no such public key validity checking.