All papers in 2016 (Page 8 of 1195 results)
Cross&Clean: Amortized Garbled Circuits with Constant Overhead
Garbled circuits (GC) are one of the main tools for secure two-party computation. One of the most promising techniques for efficiently achieving active security in the context of GCs is the so-called \emph{cut-and-choose} approach, which in the last few years has received many refinements in terms of the number of garbled circuits that need to be constructed, exchanged, and evaluated.
In this paper we ask a simple question, namely \emph{how many garbled circuits are needed to achieve active security?} and we propose a novel protocol which achieves active security while using only a constant number of garbled circuits per evaluation in the amortized setting.
AEP-M: Practical Anonymous E-Payment for Mobile Devices using ARM TrustZone and Divisible E-Cash (Full Version)
Electronic payment (e-payment) has been widely applied to electronic commerce and has attracted a large number of mobile users. However, current solutions often focus on protecting users' money without addressing the leakage of users' private information. In this paper, we propose AEP-M, a practical anonymous e-payment scheme specifically designed for mobile devices using TrustZone. On account of the limited resources of mobile devices and the time constraints of electronic transactions, we construct our scheme on top of an efficient divisible e-cash system. Precisely, AEP-M allows users to withdraw a large coin of value $2^{n}$ at once and then spend it several times by dividing it, without revealing their identities to others, including banks and merchants. Users' payments cannot be linked either. AEP-M utilizes a bit-decomposition technique and pre-computation to further increase the flexibility and efficiency of the spending phase for mobile users. As a consequence, the frequent online spending process needs at most $n$ exponentiations on an elliptic curve on the mobile device. Moreover, we carefully adapt AEP-M to the TrustZone architecture for the sake of protecting users' money and critical data. We present methods for key derivation and sensitive data management relying on a root of trust from an SRAM Physical Unclonable Function (PUF). We implement a prototype system and evaluate AEP-M using a Barreto-Naehrig (BN) curve at the 128-bit security level. The security analysis and experimental results indicate that our scheme meets the practical requirements of mobile users in terms of security and efficiency.
Partition-Based Trapdoor Ciphers
This paper deals with block ciphers embedding a trapdoor that consists in mapping a partition of the plaintext space to a partition of the ciphertext space. First, this problem is reduced to the study of the S-boxes of the cipher satisfying a few criteria. Then, the differential and linear properties of such S-boxes are assessed and an algorithm to build optimal S-boxes is provided. Finally, these primitives are used to design a small trapdoor cipher resistant to linear and differential cryptanalysis. The trapdoor allows the $\kappa$-bit master key to be recovered with only one plaintext/ciphertext pair and an effort of $2^{\frac{\kappa}{2}}$ encryptions.
MiMC: Efficient Encryption and Cryptographic Hashing with Minimal Multiplicative Complexity
Uncategorized
Uncategorized
We explore cryptographic primitives with low multiplicative complexity. This is motivated by recent progress in practical applications of secure multi-party computation (MPC), fully homomorphic encryption (FHE), and zero-knowledge proofs (ZK) where primitives from symmetric cryptography are needed and where linear computations are, compared to non-linear operations, essentially ``free''. Starting with the cipher design strategy ``LowMC'' from Eurocrypt 2015, a number of bit-oriented proposals have been put forward, focusing on applications where the multiplicative depth of the circuit describing the cipher is the most important optimization goal.
Surprisingly, although many MPC/FHE/ZK protocols natively support operations in \GF{p} for large $p$, very few primitives, even considering all of symmetric cryptography, natively work in such fields. To that end, our proposal for both block ciphers and cryptographic hash functions is to reconsider and simplify the round function of the Knudsen-Nyberg cipher from 1995. The mapping $F(x) := x^3$ is used as the main component there and is also the main component of our family of proposals called ``MiMC''. We study various attack vectors for this construction and give a new attack vector that outperforms the others in relevant settings.
Due to its very low number of multiplications, the design lends itself well to a large class of new applications, especially when the depth does not matter but the total number of multiplications in the circuit dominates all aspects of the implementation. With a number of rounds which we deem secure based on our security analysis, we report on significant performance improvements in a representative use-case involving SNARKs.
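For illustration, the following is a minimal Python sketch of the MiMC-style keyed permutation over a prime field, following the description above (round function $(x + k + c_i)^3$, final key addition); the prime, round constants, and round count below are toy values chosen for readability, not the paper's recommended instance. Cubing permutes $\GF{p}$ only when $\gcd(3, p-1) = 1$.

```python
# Minimal sketch of a MiMC-style keyed permutation over GF(p).
# Toy parameters: the paper fixes the round count via its security analysis
# (roughly n/log2(3) rounds for an n-bit field) and needs gcd(3, p-1) = 1.
import math

def mimc_encrypt(x, k, p, round_constants):
    for c in round_constants:
        x = pow((x + k + c) % p, 3, p)      # one round: cube(state + key + constant)
    return (x + k) % p                      # final key addition

def mimc_decrypt(y, k, p, round_constants):
    d = pow(3, -1, p - 1)                   # cube-root exponent; exists iff gcd(3, p-1) = 1
    y = (y - k) % p
    for c in reversed(round_constants):
        y = (pow(y, d, p) - k - c) % p
    return y

p = 1019                                    # toy prime with p % 3 == 2, so gcd(3, p-1) = 1
rounds = math.ceil(p.bit_length() / math.log2(3))            # rule-of-thumb round count
consts = [0] + [(i * i + 1) % p for i in range(1, rounds)]   # placeholder constants
assert mimc_decrypt(mimc_encrypt(123, 45, p, consts), 45, p, consts) == 123
```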
Characterisation and Estimation of the Key Rank Distribution in the Context of Side Channel Evaluations
Quantifying the side-channel security of implementations has been a significant research question for several years, in academia but also among real-world side-channel practitioners. As part of security evaluations, efficient key rank estimation algorithms were devised, which, in contrast to analyses based on subkey recovery, give a holistic picture of the security level after a side-channel attack. However, it has been observed that outcomes of rank estimations show a huge spread in precisely the range of key ranks where enumeration could lead to key recovery. These observations raise the question of whether this is because of insufficient rank estimation procedures, or an inherent property of the key rank. Furthermore, if it is inherent, how can key rank outcomes be translated into practically meaningful figures, suitable for analysing the risk that real-world side-channel attacks pose? This paper is a direct response to these questions. We experimentally identify the key rank distribution and show that it is independent of different distinguishers and signal-to-noise ratios. Then we offer a theoretical explanation for the observed key rank distribution and determine how many samples thereof are required for a robust estimation of some key parameters. We discuss how this can be naturally integrated into real-world side-channel evaluation practices. We conclude our research by connecting non-parametric order statistics, in particular percentiles, in a practically meaningful way with business goals.
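As a toy illustration of the statistic in question (not the paper's estimation algorithm), the key rank of a key split into small subkeys can be computed exhaustively from per-candidate distinguisher scores; repeating an attack with fresh noise yields samples from the key rank distribution studied here. The parameters and score model below are illustrative assumptions.

```python
# Toy exhaustive key rank: the number of full-key candidates an optimal
# enumerator would try before reaching the correct key. Feasible only for
# tiny keys; real evaluations rely on efficient rank *estimation* algorithms.
import itertools
import random

def key_rank(scores, correct):
    # scores: one list of (log-likelihood) scores per subkey
    # correct: index of the true candidate within each list
    target = sum(s[c] for s, c in zip(scores, correct))
    combos = itertools.product(*(range(len(s)) for s in scores))
    return sum(1 for combo in combos
               if sum(s[k] for s, k in zip(scores, combo)) > target)

random.seed(0)
true_key = (3, 11)                        # two 4-bit subkeys -> 256 full keys
scores = [[random.gauss(1.0 if i == c else 0.0, 0.8) for i in range(16)]
          for c in true_key]              # noisy scores favouring the true subkeys
print(key_rank(scores, true_key))         # one sample of the key rank
```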
Truncated, Impossible, and Improbable Differential Analysis of Ascon
Ascon is an authenticated encryption algorithm that recently qualified for the second round of the Competition for Authenticated Encryption: Security, Applicability, and Robustness (CAESAR). So far, successful differential, differential-linear, and cube-like attacks on reduced-round Ascon have been reported. In this work, we provide the inverse of Ascon's linear layer in terms of rotations, which can be used for constructing impossible differentials. We show that Ascon's S-box contains 35 undisturbed bits and we use them to construct 4- and 5-round truncated, impossible, and improbable differential distinguishers. Our results include practical 4-round truncated, impossible, and improbable differential attacks on Ascon. Our best attacks using these techniques break 5 out of 12 rounds. These are the first successful truncated, impossible, and improbable differential attacks on reduced-round Ascon.
Two Cents for Strong Anonymity: The Anonymous Post-office Protocol
We introduce the {\em Anonymous Post-office Protocol (AnonPoP)}, a practical strongly-anonymous messaging system. AnonPoP offers anonymity against globally eavesdropping adversaries that control a majority of AnonPoP's servers.
AnonPoP's design effectively combines known techniques, such as (synchronous) mix-cascades and a constant sending rate, with several new techniques, including {\em request-pool}, {\em bad-server isolation} and {\em per-epoch mailboxes}.
AnonPoP is {\em affordable}, with monthly costs of $2$\textcent\ per client, and {\em efficient} with respect to latency, communication, and energy, making it suitable for mobile clients. We developed an API that allows other applications to use AnonPoP for adding strong anonymity. We validated the system and its usability by experiments in cloud-based deployment and simulations, including a POC Android messaging application and a `double-blinded' usability study.
Efficient Homomorphic Integer Polynomial Evaluation based on GSW FHE
We introduce new methods to evaluate integer polynomials with GSW FHE, which have much slower noise growth and a per-integer-multiplication cost of $O((\log k/k)^{4.7454}/n)$ times that of the original GSW scheme, where $k$ is the input plaintext width and $n$ is the LWE dimension parameter. Basically, we reduce the integer multiplication noise by performing the evaluation between two kinds of ciphertexts, one in $\mathbb{Z}_q$ and the other in $\mathbb{F}_2^{\lceil \log q \rceil}$. The conversion between the two kinds of ciphertexts can be achieved by integer bootstrapping. We also propose to solve the ciphertext expansion problem by symmetric encryption with stream ciphers.
Systolic Hardware Architectures of Montgomery Modular Multiplication for Public Key Cryptosystems
Arithmetic in finite fields constitutes the core of public key cryptography such as RSA, ECC, and pairing-based cryptography. This paper discusses an efficient hardware implementation of the Coarsely Integrated Operand Scanning (CIOS) method of Montgomery modular multiplication combined with an effective systolic architecture designed with a two-dimensional array of processing elements. The systolic architecture increases the speed of calculation by combining pipelining and parallel processing. We propose the CIOS method for Montgomery multiplication using a systolic architecture; as far as we know, this is the first implementation of such a design. The proposed architectures are designed for Field Programmable Gate Array (FPGA) platforms and target reducing the number of clock cycles of the modular multiplication. The presented implementation results of the CIOS algorithm focus on different security levels useful in cryptography. The architecture has been designed to use the flexible DSP48 blocks on Xilinx FPGAs. Our architecture is scalable and depends only on the number and size of words. For instance, we provide implementation results for 8-, 16-, 32- and 64-bit words in 33, 66, 132 and 264 clock cycles. We highlight the fact that for a given number of words, the number of clock cycles is constant.
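For reference, here is a word-level software sketch of the CIOS algorithm that such hardware implements, following the Koc-Acar-Kaliski description; it is a reference model, not the systolic architecture itself, and the word size and modulus below are illustrative.

```python
# Software sketch of CIOS Montgomery multiplication (Koc, Acar, Kaliski):
# computes a*b*R^{-1} mod n with R = 2^(w*s) for s words of w bits,
# interleaving the multiplication and reduction steps word by word.

def cios_montmul(a, b, n, w=16):
    s = len(n)                           # operands as little-endian w-bit word lists
    W, mask = 1 << w, (1 << w) - 1
    n0_inv = pow(-n[0], -1, W)           # n'[0] = -n^{-1} mod 2^w (n must be odd)
    t = [0] * (s + 2)
    for i in range(s):
        carry = 0
        for j in range(s):               # multiplication step: t += a * b[i]
            acc = t[j] + a[j] * b[i] + carry
            t[j], carry = acc & mask, acc >> w
        acc = t[s] + carry
        t[s], t[s + 1] = acc & mask, acc >> w
        m = (t[0] * n0_inv) & mask       # reduction step: make t divisible by 2^w
        acc = t[0] + m * n[0]
        carry = acc >> w
        for j in range(1, s):
            acc = t[j] + m * n[j] + carry
            t[j - 1], carry = acc & mask, acc >> w
        acc = t[s] + carry
        t[s - 1], t[s] = acc & mask, t[s + 1] + (acc >> w)
        t[s + 1] = 0
    r = sum(word << (w * k) for k, word in enumerate(t[:s + 1]))
    nn = sum(word << (w * k) for k, word in enumerate(n))
    r = r - nn if r >= nn else r         # final conditional subtraction
    return [(r >> (w * k)) & mask for k in range(s)]

# Quick check with an illustrative odd 64-bit modulus (4 words of 16 bits):
def to_words(x, s, w=16):
    return [(x >> (w * k)) & ((1 << w) - 1) for k in range(s)]

n_int = (1 << 64) - 59
a_int, b_int = 0x0123456789ABCDEF, 0xFEDCBA9876543210
res = cios_montmul(to_words(a_int, 4), to_words(b_int, 4), to_words(n_int, 4))
R_inv = pow(1 << 64, -1, n_int)
assert sum(v << (16 * k) for k, v in enumerate(res)) == a_int * b_int * R_inv % n_int
```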
Domain-Oriented Masking: Compact Masked Hardware Implementations with Arbitrary Protection Order
Passive physical attacks, like power analysis, pose a serious threat to the security of embedded systems, and corresponding countermeasures need to be implemented. In this work, we demonstrate how the cost of protecting digital circuits against passive physical attacks can be lowered significantly. We introduce a novel masking approach called domain-oriented masking (DOM). Our approach provides the same level of security as threshold implementations (TI), while requiring less chip area and less randomness. DOM can also be scaled easily to arbitrary protection orders for any circuit.

To demonstrate the flexibility of our scheme, we apply DOM to a hardware design of the Advanced Encryption Standard (AES). The presented AES implementation is built in such a way that it can be synthesized for any protection order. Although the design is scalable, it yields the smallest (7.1 kGE), fastest, and least randomness-demanding (18 bits) first-order secure AES implementation. The gap between DOM and TI increases with the protection order. Our second-order secure AES S-box implementation, for example, has a hardware footprint half the size of the smallest existing second-order TI of the S-box. This paper includes synthesis results of our AES implementation up to the 15th protection order.
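A minimal software sketch of our reading of a first-order domain-oriented AND gate is given below: inner-domain products stay in their domain, and the two cross-domain products are remasked with one fresh random bit before they cross (the hardware version additionally inserts a register stage there, which plain Python cannot model). This is an illustration of the shared-multiplication idea, not the paper's full scheme.

```python
# First-order DOM-style AND gate on two-share Boolean values (sketch).
# One fresh random bit z per multiplication; in hardware, a register must
# separate the cross-domain products from the final XOR.
import secrets

def dom_and(a0, a1, b0, b1):
    z = secrets.randbits(1)
    c0 = (a0 & b0) ^ ((a0 & b1) ^ z)    # domain 0: inner term + remasked cross term
    c1 = (a1 & b1) ^ ((a1 & b0) ^ z)    # domain 1: inner term + remasked cross term
    return c0, c1

# Correctness: the shares of the output always recombine to a AND b.
for a in (0, 1):
    for b in (0, 1):
        a0 = secrets.randbits(1); a1 = a0 ^ a
        b0 = secrets.randbits(1); b1 = b0 ^ b
        c0, c1 = dom_and(a0, a1, b0, b1)
        assert c0 ^ c1 == a & b
```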
A General Polynomial Selection Method and New Asymptotic Complexities for the Tower Number Field Sieve Algorithm
In a recent work, Kim and Barbulescu extended the tower number field sieve algorithm to obtain improved asymptotic complexities in the medium prime case for the discrete logarithm problem on $\mathbb{F}_{p^n}$ where $n$ is not a prime power. Their method does not work when $n$ is a composite prime power. For this case, we obtain new asymptotic complexities, e.g., $L_{p^n}(1/3,(64/9)^{1/3})$ (resp. $L_{p^n}(1/3,1.88)$ for the multiple number field variation) when $n$ is composite and a power of 2; the previously best known complexity for this case is $L_{p^n}(1/3,(96/9)^{1/3})$ (resp. $L_{p^n}(1/3,2.12)$). These complexities may have consequences for the selection of key sizes in pairing-based cryptography. The new complexities are achieved through a general polynomial selection method. This method, which we call Algorithm-$\mathcal{C}$, extends a previous polynomial selection method proposed at Eurocrypt 2016 to the tower number field case. As special cases, it is possible to obtain the generalised Joux-Lercier and the Conjugation methods of polynomial selection proposed at Eurocrypt 2015 and the extension of these methods to the tower number field scenario by Kim and Barbulescu. A thorough analysis of the new algorithm is carried out in both concrete and asymptotic terms.
Ghostshell: Secure Biometric Authentication using Integrity-based Homomorphic Evaluations
Biometric authentication methods are gaining popularity due to their convenience. For authentication without relying on trusted hardware, biometrics or their hashed values must be stored on the server. Storing biometrics in the clear or in an encrypted form, however, raises a grave concern about biometric theft through hacking or man-in-the-middle attacks. Unlike IDs and passwords, biometrics cannot practically be replaced once lost. Encryption can be a tool for protecting them from theft, but conventionally encrypted biometrics must be decrypted for comparison. In this work, we propose a secure biometric authentication scheme, named Ghostshell, in which an encrypted template is stored on the server and then compared with an encrypted attempt \emph{without} decryption. The decryption key is stored only on the user's device, so the biometrics can be kept secret even against a compromised server. Our solution relies on somewhat homomorphic encryption (SHE) and a message authentication code (MAC). Because known techniques for SHE are computationally expensive, we develop a more practical scheme by devising a significantly more efficient matching function exploiting SIMD operations and a one-time MAC chosen for efficient homomorphic evaluation (of multiplication depth 2). When applied to Hamming distance matching on 2400-bit irises, our implementation shows that the computation time is approximately 0.47 and 0.1 seconds for the server and the user, respectively.
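As a point of reference, the plaintext functionality that such a scheme evaluates under encryption is simply thresholded Hamming distance; the sketch below shows it in the clear, with an illustrative threshold (Ghostshell performs this computation on SHE ciphertexts with the bits packed into SIMD slots).

```python
# Plaintext reference for encrypted iris matching: Hamming distance between
# 2400-bit iris codes, compared against a threshold. The 0.32 threshold is
# an illustrative value, not taken from the paper.
def iris_match(template_bits, attempt_bits, threshold=0.32):
    assert len(template_bits) == len(attempt_bits) == 2400
    hd = sum(t ^ a for t, a in zip(template_bits, attempt_bits))
    return hd <= threshold * len(template_bits)
```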
Proofs of Knowledge on Monotone Predicates and its Application to Attribute-Based Identifications and Signatures
We propose a concrete procedure for the $\Sigma$-protocol introduced by Cramer, Damgård and Schoenmakers at CRYPTO '94, which proves knowledge that a set of witnesses satisfies a monotone predicate in a witness-indistinguishable way; that is, hiding the assignment of truth in the predicate. We provide a detailed procedure by extending the so-called OR-proof.
Functional Encryption: Deterministic to Randomized Functions from Simple Assumptions
Functional encryption (FE) enables fine-grained control of sensitive data by allowing users to only compute certain functions for which they have a key. The vast majority of work in FE has focused on deterministic functions, but for several applications such as privacy-aware auditing, differentially-private data release, proxy re-encryption, and more, the functionality of interest is more naturally captured by a randomized function. Recently, Goyal et al. (TCC 2015) initiated a formal study of FE for randomized functionalities with security against malicious encrypters, and gave a selectively secure construction from indistinguishability obfuscation. To date, this is the only construction of FE for randomized functionalities in the public-key setting. This stands in stark contrast to FE for deterministic functions which has been realized from a variety of assumptions.
Our key contribution in this work is a generic transformation that converts any general-purpose, public-key FE scheme for deterministic functionalities into one that supports randomized functionalities. Our transformation uses the underlying FE scheme in a black-box way and can be instantiated using very standard number-theoretic assumptions (for instance, the DDH and RSA assumptions suffice). When applied to existing FE constructions, we obtain several adaptively-secure, public-key functional encryption schemes for randomized functionalities with security against malicious encrypters from many different assumptions such as concrete assumptions on multilinear maps, indistinguishability obfuscation, and in the bounded-collusion setting, the existence of public-key encryption, together with standard number-theoretic assumptions.
Additionally, we introduce a new, stronger definition for malicious security as the existing one falls short of capturing an important class of correlation attacks. In realizing this definition, our compiler combines ideas from disparate domains like related-key security for pseudorandom functions and deterministic encryption in a novel way. We believe that our techniques could be useful in expanding the scope of new variants of functional encryption (e.g., multi-input, hierarchical, and others) to support randomized functionalities.
Side-Channel Analysis Protection and Low-Latency in Action - case study of PRINCE and Midori
In recent years, the industry sector has shown particular interest in solutions which allow data to be encrypted and decrypted within one clock cycle. Known as low-latency cryptography, such ciphers are desirable for pervasive applications with real-time security requirements. On the other hand, pervasive applications are very likely in control of the end user, and may operate in a hostile environment. Hence, in such scenarios it is necessary to provide security against side-channel analysis (SCA) attacks while still keeping the low-latency feature.
Since the single-clock-cycle concept requires an implementation in a fully-unrolled fashion, the application of masking schemes - as the most widely studied countermeasure - is not straightforward. The contribution of this work is to present and discuss the difficulties and challenges that hardware engineers face when integrating SCA countermeasures into low-latency constructions. In addition to several design architectures, practical evaluations, and discussions of the problems and potential solutions for the case study PRINCE (also compared with Midori), the final message of this paper is a couple of suggestions for future low-latency designs to - hopefully - ease the integration of SCA countermeasures.
Achieving Better Privacy for the 3GPP AKA Protocol
Proposed by the 3rd Generation Partnership Project (3GPP) as a standard for 3G and 4G mobile-network communications, the AKA protocol is meant to provide a mutually authenticated key exchange between clients and associated network servers. As a result, AKA must guarantee the indistinguishability from random of the session keys (key-indistinguishability), as well as client- and server-impersonation resistance. A paramount requirement is also that of client privacy, which 3GPP defines in terms of: user identity confidentiality, service untraceability, and location untraceability. Moreover, since servers are sometimes untrusted (in the case of roaming), the AKA protocol must also protect clients with respect to these third parties. Following the description of client-tracking attacks, e.g. by using error messages or IMSI catchers, van den Broek et al. and Arapinis et al. each proposed a new variant of AKA addressing such problems. In this paper we use the approach of provable security to show that these variants still fail to guarantee the privacy of mobile clients. We propose an improvement of AKA which retains most of its structure and respects practical necessities such as key management, but which provably attains security with respect to servers and Man-in-the-Middle (MiM) adversaries. Moreover, it is impossible to link client sessions in the absence of client corruptions. Finally, we prove that any variant of AKA retaining its mutual authentication specificities cannot achieve client unlinkability in the presence of corruptions. In this sense, our proposed variant is optimal.
Survey of Microarchitectural Side and Covert Channels, Attacks, and Defenses
Over the last two decades, side- and covert-channel research has shown a variety of ways of exfiltrating information from a computer system. Processor microarchitectural side- and covert-channel attacks have emerged as some of the most clever attacks, and ones which are difficult to deal with without impacting system performance. Unlike electromagnetic or power-based channels, microarchitectural side- and covert-channel attacks do not require physical proximity to the target device. Instead, only malicious or cooperating spy applications need to be co-located on the same machine as the victim. In some attacks, even co-location is not needed: the timing of the victim's execution, as measured by a remote attacker over the network, can form a side channel for information leaks. This survey extracts the key features of the processor's microarchitectural functional units which make the channels possible, presents an analysis and categorization of the variety of microarchitectural side and covert channels others have presented in the literature, and surveys existing defense proposals. With the advent of cloud computing and the ability to launch microarchitectural side- and covert-channel attacks even across virtual machines, understanding these channels is critical.
Cryptographic Solutions for Credibility and Liability Issues of Genomic Data
In this work, we consider a scenario in which an individual shares his genomic data (or results obtained from his genomic data) with a service provider. In this scenario, (i) the service provider wants to make sure that the received genomic data (or results) in fact belongs to the corresponding individual (and is computed correctly), (ii) the individual wants to provide a digital consent along with his data specifying whether the service provider is allowed to further share his data, and (iii) if his data is shared without his consent, the individual wants to determine the service provider responsible for the leakage. We propose two schemes, based on homomorphic signatures and aggregate signatures, that link the information about the legitimacy of the data to the consent and the phenotype of the individual. Thus, to verify the data, each party also needs to use the correct consent and phenotype of the individual who owns the data.
Shortening the Libert-Peters-Yung Revocable Group Signature Scheme by Using the Random Oracle Methodology
At EUROCRYPT 2012, Libert, Peters and Yung (LPY) proposed the first scalable revocable group signature (R-GS) scheme in the standard model, which achieves constant signing/verification costs while keeping the other costs regarding signers at most logarithmic in N, where N is the maximum number of group members. However, although the LPY R-GS scheme is asymptotically quite efficient, it is not sufficiently efficient in practice. For example, the signature size of the LPY scheme is roughly 10 times larger than that of an RSA signature (at 160-bit security). In this paper, we propose a compact R-GS scheme secure in the random oracle model that is efficient not only in the asymptotic sense but also in practical parameter settings. We achieve the same asymptotic efficiency as the LPY scheme, with a signature size nearly equal to that of an RSA signature (at 160-bit security). It is particularly worth noting that our R-GS scheme has the smallest signature size among previous R-GS schemes that enable constant signing/verification costs.
Our technique, which we call parallel Boneh–Boyen–Shacham group signature technique, helps to construct an R-GS scheme without following the technique used in LPY, i.e., we directly apply the Naor–Naor–Lotspiech framework without using any identity-based encryption.
Groth-Sahai Proofs Revisited Again: A Bug in ``Optimized'' Randomization
The Groth-Sahai proof system (EUROCRYPT 2008, SIAM Journal of Computing 41(5)) provides efficient non-interactive witness-indistinguishable (NIWI) and zero-knowledge (NIZK) proof systems for languages over bilinear groups and is a widely-used versatile tool to design efficient cryptographic schemes and protocols.
We revisit randomization of the prover in the GS proof system. We find an unnoticed bug in the ``optimized'' randomization in the symmetric bilinear setting under several assumptions, e.g., the DLIN assumption or the matrix-DH assumption. This bug leads to security issues for the GS NIWI proof system with ``optimized'' randomization for multi-scalar multiplication equations, and for the GS NIZK proof system with ``optimized'' randomization for certain cases of pairing product equations and multi-scalar multiplication equations.
Nonce-Disrespecting Adversaries: Practical Forgery Attacks on GCM in TLS
We investigate nonce reuse issues with the GCM block cipher mode as used in TLS and focus in particular on AES-GCM, the most widely deployed variant. With an Internet-wide scan we identified 184 HTTPS servers repeating nonces, which fully breaks the authenticity of the connections. Affected servers include large corporations, financial institutions, and a credit card company. We present a proof of concept of our attack that violates the authenticity of affected HTTPS connections, which in turn can be utilized to inject seemingly valid content into encrypted sessions. Furthermore, we discovered over 70,000 HTTPS servers using random nonces, which puts them at risk of nonce reuse if a large amount of data is sent over the same connection.
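The confidentiality symptom of a repeated nonce is easy to reproduce with a few lines of Python using the pyca/cryptography package: two encryptions under the same key and nonce reuse the CTR keystream, so the XOR of the ciphertexts equals the XOR of the plaintexts. The forgery attack studied in the paper goes further, recovering the GHASH authentication key from such a collision; that step is not shown here.

```python
# Demonstrates the keystream reuse caused by a repeated AES-GCM nonce.
# Requires the pyca/cryptography package.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
nonce = os.urandom(12)                     # reused below: this is the bug
p1 = b"transfer $100 to Alice.."
p2 = b"transfer $900 to Mallory"
aes = AESGCM(key)
c1 = aes.encrypt(nonce, p1, None)          # returns ciphertext || 16-byte tag
c2 = aes.encrypt(nonce, p2, None)
xor_ct = bytes(x ^ y for x, y in zip(c1[:-16], c2[:-16]))
xor_pt = bytes(x ^ y for x, y in zip(p1, p2))
assert xor_ct == xor_pt                    # the keystream cancelled out
```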
T-Proof: Secure Communication via Non-Algorithmic Randomization
Shared random strings are either communicated or recreated algorithmically in ``pseudo'' mode, thereby exhibiting innate vulnerability. We propose a secure protocol based on unshared randomized data, which can therefore be based on `white noise' or other real-world, non-algorithmic randomization. Prospective uses of this T-Proof protocol include proving possession of data to a party in possession of the same data. The principle: Alice wishes to prove to Bob that she is in possession of secret data $s$, known also to Bob. They agree on a parsing algorithm, dependent on the contents of $s$, that breaks $s$ into $t$ distinct, consecutive substrings (letters). Alice then uses an unshared randomization procedure to effect a perfectly random transposition of the $t$ substrings, thereby generating a transposed string $s'$. She communicates $s'$ to Bob. Bob verifies that $s'$ is a permutation of $s$ based on his parsing of $s$ into the same $t$ substrings, and he is then persuaded that Alice is in possession of $s$. Because $s'$ was generated via a perfectly randomized transposition of $s$, a cryptanalyst in possession of $s'$ faces $t!$ candidates for $s$, each with a probability of $1/t!$ (what's more, the value of $t$ and the identity of the $t$ substrings are unknown to the cryptanalyst). Brute-force cryptanalysis is the fastest theoretical strategy. T-Proof can be played over $s$ mixed with some agreed-upon nonce to defend against replay attacks. Unlike the competing solution of hashing, T-Proof does not run the risk of an algorithmic shortcut, and its intractability can be credibly appraised.
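A minimal sketch of this flow under stated assumptions follows: the content-dependent parsing rule (cut points derived from a hash of $s$) is a hypothetical stand-in for the agreed parsing algorithm, and the OS entropy source stands in for the protocol's non-algorithmic randomization.

```python
# Sketch of the T-Proof flow. parse() is a hypothetical stand-in for the
# agreed content-dependent parsing algorithm (it may yield slightly fewer
# than t parts after deduplication of cut points); SystemRandom stands in
# for true 'white noise' randomness.
import hashlib
import secrets

def parse(s: bytes, t: int):
    h = hashlib.sha256(s).digest()                      # cut points depend on s
    cuts = sorted({1 + h[i] % (len(s) - 1) for i in range(t - 1)})
    return [s[i:j] for i, j in zip([0] + cuts, cuts + [len(s)])]

s = b"the secret string that Alice and Bob both hold"
letters = parse(s, 8)
s_prime = list(letters)
secrets.SystemRandom().shuffle(s_prime)                 # Alice: random transposition
# Bob parses his own copy of s and checks s' is a permutation of the letters.
assert sorted(s_prime) == sorted(parse(s, 8))
```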
Exploiting the Physical Disparity: Side-Channel Attacks on Memory Encryption
Memory and disk encryption is a common measure to protect sensitive information in memory from adversaries with physical access. However, physical access also comes with the risk of physical attacks. As these may pose a threat to memory confidentiality, this paper investigates contemporary memory and disk encryption schemes and their implementations with respect to Differential Power Analysis (DPA) and Differential Fault Analysis (DFA). It shows that DPA and DFA recover the keys of all the investigated schemes, including the tweakable block ciphers XEX and XTS. This paper also verifies the feasibility of such attacks in practice. Using the EM side channel, a DPA on the disk encryption employed within the ext4 file system is shown to reveal the used master key on a Zynq Z-7010 system on chip. The results suggest that memory and disk encryption secure against physical attackers is at least four times more expensive.
Adequate Elliptic Curve for Computing the Product of n Pairings
Many pairing-based protocols require the computation of the product and/or of a quotient of n pairings, where n > 1 is a natural integer. Zhang et al. [1] recently showed that the Kachisa-Schaefer-Scott family of elliptic curves with embedding degree 16, denoted KSS16, is suitable for such protocols at the 192-bit security level compared with the Barreto-Lynn-Scott family of elliptic curves of embedding degree 12 (BLS12). In this work, we provide important corrections and improvements to their work based on the computation of the optimal Ate pairing. We focus on the computation of the final exponentiation, which represents an important part of the overall computation of this pairing. Our results improve the computations of Zhang et al. [1] by 864 multiplications in $\mathbb{F}_p$. We prove that for computing the product or the quotient of 2 pairings, BLS12 curves are the best solution. In other cases, especially when n > 2 as mentioned in [1], KSS16 curves are recommended for computing the product of n pairings. Furthermore, we prove that the curve presented by Zhang et al. [1] is not resistant against small subgroup attacks. We provide an example of a KSS16 curve protected against such attacks.
NTRU Modular Lattice Signature Scheme on CUDA GPUs
In this work we show how to use Graphics Processing Units (GPUs) with the Compute Unified Device Architecture (CUDA) to accelerate a lattice-based signature scheme, namely, the NTRU modular lattice signature (NTRU-MLS) scheme. Lattice-based schemes require operations on large vectors that are perfect candidates for GPU implementations. In addition, like most lattice-based signature schemes, NTRU-MLS provides transcript security with a rejection sampling technique. With a GPU implementation, we are able to generate many candidates simultaneously, and hence mitigate the performance slowdown from rejection sampling. Our implementation results show that for the original NTRU-MLS parameter sets, we obtain a 2x improvement in signing speed; for the revised parameter sets, where the acceptance rate of rejection sampling is down to around 1%, our implementation can be as much as 47x faster than a CPU implementation.
Better Security for Queries on Encrypted Databases
Private database query (PDQ) processing has received much attention from the fields of both cryptography and databases. While previous approaches to designing PDQ protocols exploit several cryptographic tools concurrently, the recent advent of fully homomorphic encryption (FHE) schemes enables us to design PDQ protocols without the aid of additional tools. However, to the best of our knowledge, all currently existing FHE-based PDQ protocols focus on protecting only the constants in query statements, together with the client's data stored in the database server.
In this paper, we provide an FHE-based PDQ protocol achieving better security, protecting query types as well as constants in query statements for conjunctive, disjunctive, and threshold queries with equality comparison. Our contributions are three-fold. First, we present a new security definition that reflects our enhanced security model, which additionally protects query types in query statements. Second, we provide a new design for PDQ protocols using FHE schemes. To do this, we come up with a method to homomorphically evaluate our encrypted target queries on the encrypted database; we then apply it to construct a protocol and show its security under our enhanced security definition in the semi-honest model. Finally, we provide proof-of-concept implementation results of our PDQ protocol. According to our rudimentary experiments, it takes 40 seconds to perform a query on 2352 elements consisting of 11 attributes of 40 bits each, using Brakerski-Gentry-Vaikuntanathan's leveled FHE with SIMD techniques at 80-bit security, yielding an amortized rate of just 0.12 seconds per element.
Identity Chains
In this short technical summary, the authors describe how the mathematical primitives of Ring Confidential Transactions may be used to provide anonymous identity authentication services in a similar manner to "Anonrep" but in a trustless (or permissioned), distributed manner, and with the additional security and resilience provided by a blockchain. The use of the mathematics in the RingCT paper additionally, and importantly, allows for combining different types of authentication in a seamless manner, in essence if the Bitcoin or Monero blockchain is a single "thread," then the protocol here allows one to "weave" such threads together. The resulting protocol is dubbed an "Identity Chain" and provides anonymous authentication working in a lightweight and interoperable manner between any number of different service providers running different identity chains.
Chaos Machine: Different Approach to the Application and Significance of Numbers
In this paper we describe a theoretical model of a \underline{chaos machine}, which combines the benefits of a hash function and a pseudo-random function, forming a flexible \textit{one-way} \underline{push-pull interface}. It presents the idea of creating a universal tool (design pattern) with a modular design and customizable parameters, that can be applied wherever \textit{randomness} and \textit{sensitivity} are needed (random oracle), and where the appropriate construction determines the case of application, while the selection of parameters provides the preferred properties and security level. The machine can be used to implement many cryptographic primitives, including cryptographic hashes, message authentication codes and pseudo-random number generators.
Additionally, the document includes a sample implementation of a chaos machine named Naive Czyzewski Generator, abbreviated NCG, that passes all of the Dieharder, NIST and TestU01 test sets. The algorithm was designed and evaluated to be cryptographically strong, inasmuch as it is indistinguishable from a uniform random function. The generator was developed to work as a cryptographically secure pseudo-random number generator, a collision-resistant hash function, or a cryptographic module. One can target a given period length by choosing the appropriate space parameter: for a given parameter $m$, the algorithm is claimed to have a period between $2^{8m}$ and $2^{16m}$.
Speeding up R-LWE post-quantum key exchange
Post-quantum cryptography has attracted increased attention in the last couple of years, due to the threat of quantum computers breaking current cryptosystems. In particular, the key size and performance of post-quantum algorithms have become a significant target for optimization. In this spirit, Alkim et al. have recently proposed a significant optimization for a key exchange scheme that is based on the R-LWE problem.
In this paper, we build on the implementation of Alkim et al., and focus on improving the algorithm for generating a uniformly random polynomial. We optimize in three independent directions: efficient pseudorandom byte generation, decreasing the rejection rate during sampling, and vectorizing the sampling step. When measured on the latest Intel processor architecture (codename Skylake), our new optimizations improve over Alkim et al. by up to 1.59x on the server side, and by up to 1.54x on the client side.
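The rejection-rate direction can be sketched in a few lines: instead of accepting only 16-bit values below $q = 12289$, accept anything below the largest multiple of $q$ that fits (here $5q = 61445$) and reduce mod $q$, which keeps the output uniform while raising the acceptance rate from about 19% to about 94%. The counter-indexed SHAKE-128 byte generation below is a simplified stand-in for the vectorized pseudorandom generation, not the paper's actual code.

```python
# Sketch of uniform R-LWE polynomial sampling with a reduced rejection rate:
# accept v < 5q (61445 <= 2^16) and output v mod q, which stays uniform
# because 61445 is an exact multiple of q.
import hashlib

def sample_uniform_poly(seed: bytes, n=1024, q=12289):
    bound = ((1 << 16) // q) * q          # 61445: largest multiple of q below 2^16
    coeffs, ctr = [], 0
    while len(coeffs) < n:
        block = hashlib.shake_128(seed + ctr.to_bytes(4, "little")).digest(168)
        for i in range(0, len(block), 2):
            v = int.from_bytes(block[i:i + 2], "little")
            if v < bound and len(coeffs) < n:   # accept ~94% instead of ~19%
                coeffs.append(v % q)
        ctr += 1
    return coeffs

poly = sample_uniform_poly(b"public seed")
assert len(poly) == 1024 and all(0 <= c < 12289 for c in poly)
```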
AnNotify: A Private Notification Service
AnNotify is a scalable service for private, timely and low-cost online notifications, based on anonymous communication, sharding, dummy queries, and Bloom filters. We present the design and analysis of AnNotify, as well as an evaluation of its costs. We outline the design of AnNotify and calculate the concrete advantage of an adversary observing multiple queries. We present a number of extensions, such as generic presence and broadcast notifications, and applications, including notifications for incoming messages in anonymous communications, updates to private cached web and Domain Name Service (DNS) queries.
Can Large Deviation Theory be Used for Estimating Data Complexity?
Statistical analysis of attacks on block ciphers has mostly used normal approximations. A few recent works have proposed doing away with normal approximations and instead using Chernoff and Hoeffding bounds to obtain rigorous bounds on the data complexities of several attacks. This opens up the question of whether even better general bounds can be obtained using the statistical theory of large deviations. In this note we examine this question. Our conclusion is that while in theory this is indeed possible, in general obtaining meaningful expressions for data complexity presents several difficulties. This leaves open the question of whether this can be done for specific attacks.
Beaver: A Decentralized Anonymous Marketplace with Secure Reputation
Amid growing concerns of government surveillance and corporate data sharing, web users increasingly demand tools for preserving their privacy without placing trust in a third party. Unfortunately, existing centralized reputation systems need to be trusted for either privacy, correctness, or both. Existing decentralized approaches, on the other hand, are either vulnerable to Sybil attacks, present inconsistent views of the network, or leak critical information about the actions of their users.
In this paper, we present Beaver, a decentralized anonymous marketplace that is resistant against Sybil attacks on vendor reputation, while preserving user anonymity. Beaver allows its participants to enjoy open enrollment, and provides every user with the same global view of the reputation of other users through public-ledger-based consensus. Various cryptographic primitives allow Beaver to offer high levels of usability and practicality, along with strong anonymity guarantees. Operations such as creating a listing, purchasing an item, and leaving feedback take just milliseconds to compute and require generating just a few kilobytes of state while often constructing convenient anonymity sets of hundreds of transactions.
Authenticated Encryption with Variable Stretch
In conventional authenticated-encryption (AE) schemes, the ciphertext expansion, a.k.a. stretch or tag length, is a constant or a parameter of the scheme that must be fixed per key. However, using variable-length tags per key can be desirable in practice or may occur as a result of a misuse. The RAE definition by Hoang, Krovetz, and Rogaway (Eurocrypt 2015), aiming at the "best-possible" AE security, supports variable stretch among other strong features, but achieving the RAE goal incurs a particular inefficiency: neither encryption nor decryption can be online. The problem of enhancing the well-established nonce-based AE (nAE) model and the standard schemes thereof to support variable tag lengths per key, without sacrificing any desirable functional and efficiency properties such as online encryption, has recently regained interest as evidenced by extensive discussion threads on the CFRG forum and the CAESAR competition. Yet there is a lack of a formal definition for this goal. First, we show that several recently proposed heuristic measures trying to augment the known schemes by inserting the tag length into the nonce and/or associated data fail to deliver any meaningful security in this setting. Second, we provide a formal definition for the notion of nonce-based variable-stretch AE (nvAE) as a natural extension to the traditional nAE model. Then, we proceed by showing a second modular approach to formalizing the goal by combining the nAE notion and a new property we call key-equivalent separation by stretch (kess). It is proved that (after a mild adjustment to the syntax) any nAE scheme which additionally fulfills the kess property will achieve the nvAE goal. Finally, we show that the nvAE goal is efficiently and provably achievable; for instance, by simple tweaks to off-the-shelf schemes such as OCB.
Fully Homomorphic Encryption with Isotropic Elements
In previous work I proposed a fully homomorphic encryption scheme without bootstrapping which has a weak point in the enciphering function. In this paper I propose a fully homomorphic encryption scheme based on non-zero isotropic octonions. I improve the previous scheme by adopting non-zero isotropic octonions so that the ``m and -m attack'' is no longer useful, because in the proposed scheme there exist many ciphertexts whose plaintext m is non-zero and whose norm is zero. The improved scheme is based on multivariate algebraic equations of high degree or with many variables, while almost all multivariate cryptosystems proposed until now are based on quadratic equations so as to avoid an explosion of the coefficients. The improved scheme is resistant to the Gröbner basis attack.
NTRU Prime: reducing attack surface at low cost
Several ideal-lattice-based cryptosystems have been broken by recent attacks that exploit special structures of the rings used in those cryptosystems. The same structures are also used in the leading proposals for post-quantum lattice-based cryptography, including the classic NTRU cryptosystem and typical Ring-LWE-based cryptosystems.
This paper (1) proposes NTRU Prime, which tweaks NTRU to use rings without these structures; (2) proposes Streamlined NTRU Prime, a public-key cryptosystem optimized from an implementation perspective, subject to the standard design goal of IND-CCA2 security; (3) finds high-security post-quantum parameters for Streamlined NTRU Prime; and (4) optimizes a constant-time implementation of those parameters. The resulting sizes and speeds show that reducing the attack surface has very low cost.
Revocable Hierarchical Identity-Based Encryption with Shorter Private Keys and Update Keys
Revocable hierarchical identity-based encryption (RHIBE) is an extension of HIBE that supports the revocation of users' private keys to manage the dynamic credentials of users in a system. Many different RHIBE schemes have been proposed, but they are not efficient in terms of private key size and update key size, since the depth of a hierarchical identity is included as a multiplicative factor. In this paper, we propose efficient RHIBE schemes with shorter private keys, shorter update keys, and small public parameters by removing this multiplicative factor. To achieve our goals, we first present a new HIBE scheme with a different method of private key generation, such that a private key can be simply derived from a short intermediate private key. Next, we show that two efficient RHIBE schemes can be built by combining our HIBE scheme, an IBE scheme, and a tree-based broadcast encryption scheme in a modular way.
Non-Interactive RAM and Batch NP Delegation from any PIR
We present an adaptive and non-interactive protocol for verifying arbitrary efficient computations in fixed polynomial time. Our protocol is computationally sound and can be based on any computational PIR scheme, which in turn can be based on standard polynomial-time cryptographic assumptions (e.g., the worst-case hardness of polynomial-factor approximation of short-vector lattice problems). In our protocol, the prover and the verifier do not need to interact at all: the verifier sets up a public key ahead of time, and this key can be used by any prover to prove arbitrary statements in a completely adaptive manner. Verification is done using a secret verification key, and soundness relies on this key not being known to the prover. Our protocol further makes it possible to prove statements about computations of arbitrary RAM machines.
Previous works either relied on knowledge assumptions, or could only offer non-adaptive two-message protocols (where the first message could not be re-used), and required either obfuscation-based assumptions or super-polynomial hardness assumptions.
We show that our techniques can also be applied to construct a new type of (non-adaptive) 2-message delegation protocols for batch NP statements. Specifically, we can simultaneously prove the membership of multiple instances in a given NP language, with communication complexity proportional to the length of a single witness.
CompGC: Efficient Offline/Online Semi-honest Two-party Computation
We introduce a new technique, component-based garbled circuits, for increasing the efficiency of secure two-party computation in the offline/online semi-honest setting. We observe that real-world functions are generally constructed in a modular way, comprising many standard components such as arithmetic operations and other common tasks. Our technique allows circuits for these common tasks to be garbled and shared during an offline phase; once the function to compute is specified, these pre-shared components can be chained together to create a larger garbled circuit. We stress that we do not assume that the function is known during the offline phase --- only that it uses some common, predictable components.
We give an implementation, CompGC, of this technique and measure the efficiency gains for various examples. We find that our technique results in roughly an order of magnitude performance improvement over standard garbled circuit-based secure two-party computation.
Secure Protocol Transformations
In the rich literature of secure multi-party computation (MPC), several important results rely on “protocol transformations,” whereby protocols from one model of MPC are transformed to protocols from another model. Motivated by the goal of simplifying and unifying results in the area of MPC, we formalize a general notion of black-box protocol transformations that captures previous transformations from the literature as special cases, and present several new transformations. We motivate our study of protocol transformations by presenting the following applications.
• Simplifying feasibility results:
– Easily rederive a result in Goldreich’s book (2004), on MPC with full security in the presence of an honest majority, from an earlier result in the book, on MPC that offers “security with abort.”
– Rederive the classical result of Rabin and Ben-Or (1989) by applying a transformation to the simpler protocols of Ben-Or et al. or Chaum et al. (1988).
• Efficiency improvements:
– The first “constant-rate” MPC protocol for a constant number of parties that offers full information-theoretic security with an optimal threshold, improving over the protocol of Rabin and Ben-Or;
– A fully secure MPC protocol with optimal threshold that improves over a previous protocol of Ben-Sasson et al. (2012) in the case of “deep and narrow” computations;
– A fully secure MPC protocol with near-optimal threshold that improves over a previous protocol of Damgård et al. (2010) by improving the dependence on the security parameter from linear to polylogarithmic;
– An efficient new transformation from passive-secure two-party computation in the OT-hybrid and OLE-hybrid model to zero-knowledge proofs, improving over a recent similar transformation of Hazay and Venkitasubramaniam (2016).
Finally, we prove the impossibility of two simple types of black-box protocol transformations, including an unconditional variant of a previous negative result of Rosulek (2012) that relied on the existence of one-way functions.
Extracting the RC4 secret key of the Open Smart Grid Protocol
The Open Smart Grid Protocol (OSGP) is a widely used industry standard for exchanging sensitive data between devices inside smart grids. For message confidentiality, OSGP implements a customised form of the RC4 stream cipher. In this work, we show how already-known weaknesses of RC4 can be exploited to successfully attack the OSGP implementation as well. The modified attack is able to effectively derive the secret OSGP encryption and decryption key, given that an attacker can accumulate the cipher streams of approximately 90,000 messages. Possession of this key allows the attacker to decrypt all data intercepted on the OSGP smart grid and thereby obtain privacy-critical information about its participants.
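For reference, OSGP's customised cipher builds on standard RC4, whose keystream biases are what such attacks accumulate over many cipher streams; a minimal implementation of plain RC4 (the textbook cipher, not the OSGP variant) looks like this.

```python
# Plain RC4 (KSA + PRGA). The known keystream biases exploited by this class
# of attacks stem from exactly this structure.
def rc4_keystream(key: bytes, n: int) -> bytes:
    S = list(range(256))
    j = 0
    for i in range(256):                     # key-scheduling algorithm (KSA)
        j = (j + S[i] + key[i % len(key)]) & 0xFF
        S[i], S[j] = S[j], S[i]
    i = j = 0
    out = bytearray()
    for _ in range(n):                       # pseudo-random generation (PRGA)
        i = (i + 1) & 0xFF
        j = (j + S[i]) & 0xFF
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) & 0xFF])
    return bytes(out)

ks = rc4_keystream(b"\x01\x02\x03\x04\x05", 16)
ct = bytes(p ^ k for p, k in zip(b"meter reading 42", ks))   # stream encryption
```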
Analysis of the Blockchain Protocol in Asynchronous Networks
Nakamoto's famous blockchain protocol enables achieving consensus in a so-called \emph{permissionless setting}---anyone can join (or leave) the protocol execution, and the protocol instructions do not depend on the identities of the players.
His ingenious protocol prevents ``sybil attacks'' (where an adversary spawns any number of new players) by relying on computational puzzles (a.k.a. ``moderately hard functions'') introduced by Dwork and Naor (Crypto'92).
The analysis of the blockchain consensus protocol (a.k.a. Nakamoto consensus) has been a notoriously difficult task. Prior works that analyze it either make the simplifying assumption that network channels are fully synchronous (i.e. messages are instantly delivered without delays) (Garay et al, Eurocrypt'15) or only consider specific attacks (Nakamoto'08; Sompolinsky and Zohar, FinancialCrypt'15); additionally, as far as we know, none of them deal with players joining or leaving the protocol.
In this work we prove that the blockchain consensus mechanism satisfies strong forms of *consistency* and *liveness* in an asynchronous network with adversarial delays that are a-priori bounded, within a formal model allowing for adaptive corruption and spawning of new players, assuming that the computational puzzle is modeled as a random oracle. (We complement this result by showing a simple attack against the blockchain protocol in a fully asynchronous setting, showing that the “puzzle-hardness” needs to be appropriately set as a function of the maximum network delay; this attack applies even for static corruption.)
As an independent contribution, we define an abstract notion of a blockchain protocol and identify appropriate security properties of such protocols; we prove that Nakamoto's blockchain protocol satisfies them and that these properties are sufficient for typical applications; we hope that this abstraction may simplify further applications of blockchains.
SQL on Structurally-Encrypted Databases
We show how to encrypt a relational database in such a way that it can efficiently support a large class of SQL queries. Our construction is based solely on structured encryption (STE) and does not make use of any property-preserving encryption (PPE) schemes such as deterministic and order-preserving encryption. As such, our approach leaks considerably less than PPE-based solutions, which have recently been shown to reveal a lot of information in certain settings (Naveed et al., CCS '15). Our construction is efficient and---under some conditions on the database and queries---can achieve asymptotically optimal query complexity. We also show how to extend our solution to be dynamic while maintaining the scheme's optimal query complexity.
Secure Logging Schemes and Certificate Transparency
Since hundreds of certificate authorities (CAs) can issue browser-trusted certificates, it can be difficult for domain owners to detect certificates that have been fraudulently issued for their domain. Certificate Transparency (CT) is a recent standard by the Internet Engineering Task Force (IETF) that aims to construct public logs of all certificates issued by CAs, making it easier for domain owners to monitor for fraudulently issued certificates. To avoid relying on trusted log servers, CT includes mechanisms by which monitors and auditors can check whether logs are behaving honestly or not; these mechanisms are primarily based on Merkle tree hashing and authentication proofs. Given that CT is now being deployed, it is important to verify that it achieves its security goals.
In this work, we define four security properties of logging schemes such as CT that can be assured via cryptographic means, and show that CT does achieve these security properties. We consider two classes of security goals: those involving security against a malicious logger attempting to present different views of the log to different parties or at different points in time, and those involving security against malicious monitors who attempt to frame an honest log for failing to include a certificate in the log. We show that Certificate Transparency satisfies these security properties under various assumptions on Merkle trees all of which reduce to collision resistance of the underlying hash function (and in one case with the additional assumption of unforgeable signatures).
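The Merkle tree hash underlying these mechanisms is simple to state; the sketch below follows the RFC 6962 definition (domain-separated leaf and node hashes, left subtree sized to the largest power of two smaller than n), the construction whose collision resistance the paper's reductions bottom out in. The log contents are illustrative.

```python
# Merkle tree hash as defined in RFC 6962, Section 2.1: 0x00-prefixed leaf
# hashes, 0x01-prefixed interior hashes. Inclusion and consistency proofs
# are audit paths through this tree.
import hashlib

def mth(entries):
    n = len(entries)
    if n == 0:
        return hashlib.sha256(b"").digest()
    if n == 1:
        return hashlib.sha256(b"\x00" + entries[0]).digest()
    k = 1 << ((n - 1).bit_length() - 1)          # largest power of two < n
    return hashlib.sha256(
        b"\x01" + mth(entries[:k]) + mth(entries[k:])).digest()

log = [b"cert-A", b"cert-B", b"cert-C"]
old_root = mth(log)
new_root = mth(log + [b"cert-D"])                # appending changes the root
assert old_root != new_root
```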
Efficient Zero-Knowledge Contingent Payments in Cryptocurrencies Without Scripts
One of the most promising innovations offered by cryptographic currencies (like Bitcoin) is the so-called \emph{smart contract}, which can be viewed as a financial agreement between mutually distrusting participants. Its execution is enforced by the mechanics of the currency and typically has monetary consequences for the parties. The rules of these contracts are written in the form of so-called ``scripts'', which are pieces of code in some ``scripting language''. Although smart contracts are believed to have huge potential, for the moment they are not widely used in practice. In particular, most Bitcoin miners allow only standard transactions (i.e., those without non-trivial scripts) to be posted on the blockchain. As a result, it is currently very hard to create non-trivial smart contracts in Bitcoin.
Motivated by this, we address the following question: ``is it possible to create non-trivial efficient smart contracts using only standard transactions?'' We answer this question affirmatively, by constructing an efficient Zero-Knowledge Contingent Payment protocol for a large class of NP-relations. This includes the relations for which efficient sigma protocols exist. In particular, our protocol can be used to sell a factorization $(p,q)$ of an RSA modulus $n=pq$, an example that we implemented and whose efficiency we tested in practice.
As another example of the ``smart contract without scripts'' we show how our techniques can be used to implement the contract called ``trading across chains''.
A Provably Secure Code-based Concurrent Signature Scheme
Concurrent signatures allow two entities to generate two signatures in such a way that both signatures are ambiguous until some information is revealed by one of the parties. This kind of signature is useful in auction protocols and in a wide range of scenarios in which the participants are mutually distrustful.
In this paper, to obtain quantum-attack-resistant concurrent signatures as recommended by the National Institute of Standards and Technology (NISTIR 8105), the first concurrent signature scheme based on coding theory is proposed.
Its security is proved under the Goppa Parameterized Bounded Decoding and the Goppa Code Distinguishing assumptions in the random oracle model. We should highlight that our proposal can be a post-quantum candidate for the fair exchange of signatures without a trusted third party in an efficient way (without a high degree of interaction).
Loop-Abort Faults on Lattice-Based Fiat–Shamir and Hash-and-Sign Signatures
As the advent of general-purpose quantum computers appears to be drawing closer, agencies and advisory bodies have started recommending that we prepare the transition away from factoring- and discrete-logarithm-based cryptography, and towards post-quantum secure constructions, such as lattice-based schemes.
Almost all primitives of classical cryptography (and more!) can be realized with lattices, and the efficiency of primitives like encryption and signatures has gradually improved to the point that key sizes are competitive with RSA at similar security levels, and fast performance can be achieved both in software and hardware. However, little research has been conducted on physical attacks targeting concrete implementations of post-quantum cryptography in general and lattice-based schemes in particular, and such research is essential if lattices are going to replace RSA and elliptic curves in our devices and smart cards.
In this paper, we look in particular at fault attacks against implementations of lattice-based signature schemes, looking both at Fiat–Shamir type constructions (particularly BLISS, but also GLP, PASSSign and Ring-TESLA) and at hash-and-sign schemes (particularly the GPV-based scheme of Ducas–Prest–Lyubashevsky). These schemes include essentially all practical lattice-based signatures, and achieve the best efficiency to date in both software and hardware. We present several fault attacks against those schemes yielding a full key recovery with only a few or even a single faulty signature, and discuss possible countermeasures to protect against these attacks.
A Note on ``Outsourcing Large Matrix Inversion Computation to a Public Cloud"
We remark that the scheme of Lei et al. [IEEE Transactions on Cloud Computing, 1 (1), 78-87, 2013] fails because the verifying equation does not hold over the infinite field $\mathbb{R}$. Over $\mathbb{R}$, computational errors must be taken seriously.
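The point is easy to reproduce. The following sketch (using numpy, with hypothetical dimensions) shows that an exact verifying equation of the form $AA^{-1}=I$ essentially never holds over floating-point reals, so any verification over $\mathbb{R}$ must budget for rounding error:

```python
# Why exact verification fails over floating-point reals:
# A @ inv(A) is only approximately the identity matrix.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 100))
I_approx = A @ np.linalg.inv(A)

print(np.array_equal(I_approx, np.eye(100)))   # almost always False
print(np.abs(I_approx - np.eye(100)).max())    # tiny but nonzero error
```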
Theoretical Attacks on E2E Voting Systems
We give a survey of existing attacks against end-to-end verifiable voting systems in the academic literature. We discuss attacks on the integrity of the election, attacks on the privacy of voters, and attacks aiming at coercion of voters. For each attack, we give a brief overview of the voting system and a short description of the attack and its consequences.
Quantum key distribution with combined conjugate coding and information overloading
Most quantum key distribution schemes depend either on a form of conjugate coding or on the principle of putting more information into a quantum state than can possibly be read out with a measurement. We construct a scheme that combines both approaches. It is built from Round-Robin Differential Phase Shift (RRDPS) and Unclonable Encryption. Compared to RRDPS and BB84-like protocols, our scheme has the advantage that it thwarts simple attacks and forces the adversary to use entanglement. We provide a security analysis in the case of attacks without entanglement. Part of this analysis also applies to the unmodified RRDPS.
SecureMed: Secure Medical Computation using GPU-Accelerated Homomorphic Encryption Scheme
Sharing the medical records of individuals among healthcare providers and researchers around the world can accelerate advances in medical research. While the idea seems increasingly practical due to cloud data services, maintaining patient privacy is of paramount importance. Standard encryption algorithms help protect sensitive data from outside attackers, but they cannot be used to compute on this sensitive data while it is encrypted. Homomorphic Encryption (HE) presents a very useful tool that can compute on encrypted data without the need to decrypt it. In this work, we describe an optimized NTRU-based implementation of the GSW homomorphic encryption scheme. Our results show a factor of 58x improvement in CPU performance compared to other recent work on encrypted medical data under the same security settings. Our system is built to be easily portable to GPUs, resulting in an additional speedup of up to a factor of 104x (and 410x), for an overall speedup of 6085x (and 24011x) using a single GPU (or four GPUs), respectively.
The QARMA Block Cipher Family -- Almost MDS Matrices Over Rings With Zero Divisors, Nearly Symmetric Even-Mansour Constructions With Non-Involutory Central Rounds, and Search Heuristics for Low-Latency S-Boxes
This paper introduces QARMA, a new family of lightweight tweakable block ciphers targeted at applications such as memory encryption, the generation of very short tags for hardware-assisted prevention of software exploitation, and the construction of keyed hash functions.
QARMA is inspired by reflection ciphers such as PRINCE, to which it adds a tweaking input, and MANTIS. However, QARMA differs from previous reflector constructions in that it is a three-round Even-Mansour scheme instead of an FX construction, and its middle permutation is non-involutory and keyed. We introduce and analyse a family of Almost MDS matrices defined over a ring with zero divisors that allows us to encode rotations in its operation while maintaining the minimal latency associated with $\{0,1\}$-matrices. The purpose of all these design choices is to harden the cipher against various classes of attacks.
We also describe new S-Box search heuristics aimed at minimising the critical path.
QARMA exists in 64- and 128-bit block sizes, where block and tweak size are equal, and keys are twice as long as the blocks.
We argue that QARMA provides sufficient security margins within the constraints determined by the mentioned applications, while still achieving best-in-class latency.
Implementation results on a state-of-the-art manufacturing process are reported.
We also introduce a technique to extend the length of the tweak by using, for instance, a universal hash function, which, additionally, can be used to strengthen the security of QARMA.
Thrifty Zero-Knowledge - When Linear Programming Meets Cryptography
We introduce “thrifty” zero-knowledge protocols, or TZK.
These protocols are constructed by introducing a bias in the challenges sent to the prover. This bias is chosen so as to maximize the security-versus-effort trade-off. We illustrate the benefits of this approach on several well-known zero-knowledge protocols.
Blind Password Registration for Verifier-based PAKE
We propose Blind Password Registration (BPR), a new class of cryptographic protocols that is instrumental for the secure registration of client passwords at remote servers, with additional protection against unwitting password disclosures on the server side that may occur due to the lack of state-of-the-art password protection mechanisms implemented by the server or due to common server-compromise attacks. The dictionary attack resistance property of BPR protocols guarantees that the only information available to the server during and after the execution of the protocol cannot be used to reveal the client password without performing an offline dictionary attack on a password verifier (e.g. a salted hash value) that is stored by the server at the end of the protocol. In particular, at no point in time is the server supposed to work with plain passwords. Our BPR model allows servers to enforce password policies, and the requirement that the client obey them during the execution of the BPR protocol is covered by the policy compliance property.
We construct an efficient BPR protocol in the standard model for ASCII-based password policies using some techniques underlying the recently introduced Zero-Knowledge Password Policy Checks (ZKPPC). However, we do not rely on the full power of costly ZKPPC proofs, and in fact show that BPR protocols can be modelled and realised more simply and significantly faster (as supported by our implementation) without using them as a building block. Our BPR protocol can directly replace the ZKPPC-based registration procedure for existing VPAKE protocols.
Fault Tolerant Implementations of Delay-based Physically Unclonable Functions on FPGA
Recent literature has demonstrated that the security of Physically Unclonable Function (PUF) circuits might be adversely affected by the introduction of faults. In this paper, we propose novel and efficient architectures for a variety of widely used delay-based PUFs which are robust against high precision laser fault attacks proposed by Tajik et al. in FDTC-2015. The proposed architectures can be used to detect run-time modifications in the PUF design due to fault injection. In addition, we propose fault recovery techniques based on either logical reconfiguration or dynamic partial reconfiguration of the PUF design. We validate the robustness of our proposed fault tolerant delay-based PUF designs on Xilinx Artix-7 FPGA platform.
Function-Hiding Inner Product Encryption is Practical
In a functional encryption scheme, secret keys are associated with functions and ciphertexts are associated with messages. Given a secret key for a function f and a ciphertext for a message x, a decryptor learns f(x) and nothing else about x. Inner product encryption is a special case of functional encryption where both secret keys and ciphertexts are associated with vectors. The combination of a secret key for a vector x and a ciphertext for a vector y reveals $\langle x, y \rangle$ and nothing more about y. An inner product encryption scheme is function-hiding if the keys and ciphertexts reveal no additional information about both x and y beyond their inner product.
In the last few years, there has been a flurry of works on the construction of function-hiding inner product encryption, starting with the work of Bishop, Jain, and Kowalczyk (Asiacrypt 2015) up to the more recent work of Tomida, Abe, and Okamoto (ISC 2016). In this work, we focus on the practical applications of this primitive. First, we show that the parameter sizes and the run-time complexity of the state-of-the-art construction can be further reduced by another factor of 2, though we compromise by proving security in the generic group model. We then show that function privacy enables a number of applications in biometric authentication, nearest-neighbor search on encrypted data, and single-key two-input functional encryption for functions over small message spaces. Finally, we evaluate the practicality of our function-hiding inner product encryption scheme by implementing it. Using our construction, encryption and decryption operations for vectors of length 50 complete in a tenth of a second in a standard desktop environment.
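As a concrete instance of the biometric application, the Hamming distance between binary feature vectors reduces to inner products, so a function-hiding scheme can reveal the distance and nothing beyond it. This is the standard identity (not specific to the paper's construction): for $x, y \in \{0,1\}^n$,
\[
\mathrm{HD}(x, y) \;=\; \sum_{i=1}^{n} (x_i \oplus y_i) \;=\; \langle x, x \rangle + \langle y, y \rangle - 2\,\langle x, y \rangle ,
\]
using $x_i \oplus y_i = x_i + y_i - 2 x_i y_i$ for bits and $x_i^2 = x_i$.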
A Measure Version of Gaussian Heuristic
Most lattice reduction algorithms used in practice are BKZ (Block-Korkine-Zolotarev) type algorithms, the blockwise generalizations of the LLL (Lenstra-Lenstra-Lovász) algorithm. The original version was proposed by Schnorr and Euchner in 1991. The quality of a reduced lattice basis is measured by the Hermite factor $\frac{\|\mathbf{b}_1\|}{\mathrm{vol}(\mathbf{L})^{1/d}}$; the $d$-th root of this factor is called the root Hermite factor. In their Asiacrypt 2011 paper, Y. Chen and P. Q. Nguyen used BKZ with an extreme-pruning enumeration subroutine to handle lattice reduction with large block sizes, with the expectation of better root Hermite factors. This BKZ 2.0 algorithm has served as a cornerstone for the security evaluation of recent lattice-based cryptosystems such as fully homomorphic encryption and cryptographic multilinear mappings. In this paper we propose a measure version of the Gaussian heuristic. This is a rigorously proven mathematical theorem. It can be used to give a strict mathematical proof for conjectured or simulated root Hermite factors in BKZ 2.0 type algorithms and in BKZ or slide reduction with large block sizes. A theoretical analysis of the heuristic assumptions in the simulator of BKZ 2.0 type algorithms is also given.
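For reference, the two quantities at play are standard (the definitions below are folklore, not specific to this paper): the root Hermite factor $\delta$ of a basis $\{\mathbf{b}_1,\dots,\mathbf{b}_d\}$ of a lattice $L$, and the Gaussian heuristic's prediction for the length of a shortest nonzero vector of a random $d$-dimensional lattice:
\[
\delta \;=\; \left( \frac{\|\mathbf{b}_1\|}{\mathrm{vol}(L)^{1/d}} \right)^{1/d},
\qquad
\mathrm{GH}(L) \;=\; \sqrt{\frac{d}{2\pi e}}\;\mathrm{vol}(L)^{1/d}.
\]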
sElect: A Lightweight Verifiable Remote Voting System
Modern remote electronic voting systems, such as the prominent Helios system, are designed to provide vote privacy and verifiability, where, roughly speaking, the latter means that voters can make sure that their votes were actually counted. In this paper, we propose a new practical voting system called sElect (secure/simple elections). This system, which we implemented as a platform independent web-based application, is meant for low-risk elections and is designed to be particularly simple and lightweight in terms of its structure, the cryptography it uses, and the user experience. One of the unique features of sElect is that it supports fully automated verification, which does not require any user interaction and is triggered as soon as a voter looks at the election result. Despite its simplicity, we prove that this system provides a good level of privacy, verifiability, and accountability for low-risk elections.
Observations on the LPN Solving Algorithm from Eurocrypt'16
In this note we re-evaluate the Eurocrypt'16 paper by Zhang et al. on LPN solving algorithms. We recount the history of LPN solving algorithms and give a general description of their algorithm. While this new algorithm claims to improve on all previous results, we have discovered issues in its analysis: we review inconsistencies in the complexity estimates and a misconception in one of the new reduction algorithms.
What we show is that the results of Eurocrypt'16 do not provide better performance than the results from Asiacrypt'14.
Cryptanalysis of Reduced NORX
NORX is a second-round candidate of the ongoing CAESAR competition for authenticated encryption. It is a nonce-based authenticated encryption scheme built on the sponge construction. Its two variants, denoted NORX32 and NORX64, provide security levels of 128 and 256 bits, respectively. In this paper, we present a state/key recovery attack for both variants with the number of rounds of the core permutation reduced to 2 (out of 4). The time complexity of the attack for NORX32 and NORX64 is $2^{119}$ and $2^{234}$ respectively, while the data complexity is negligible. Furthermore, we show a state recovery attack against NORX in the parallel mode using an internal differential attack for 2 rounds of the permutation. The data, time and memory complexities of this attack are $2^{7.3}$, $2^{124.3}$ and $2^{115}$ respectively for NORX32, and $2^{6.2}$, $2^{232.8}$ and $2^{225}$ respectively for NORX64. Finally, we present a practical distinguisher for the keystream of NORX64 based on two rounds of the permutation in the parallel mode, using an internal differential-linear attack. To the best of our knowledge, our results are the best known results for NORX in the nonce-respecting setting.
The Whole is Less than the Sum of its Parts: Constructing More Efficient Lattice-Based AKEs
Authenticated Key Exchange (AKE) is the backbone of internet security protocols such as TLS and IKE. A recent announcement by standardization bodies calling for a shift to quantum-resilient crypto has resulted in several AKE proposals from the research community. Because AKE can be generically constructed by combining a digital signature scheme with public key encryption (or a KEM), most of these proposals focused on optimizing the known KEMs and left the authentication part to the generic combination with digital signatures.\\
In this paper, we show that by simultaneously considering the secrecy and authenticity requirements of an AKE, we can construct a scheme that is more secure and with smaller communication complexity than a scheme created by a generic combination of a KEM with a signature scheme. Our improvement uses particular properties of lattice-based encryption and signature schemes and consists of two parts -- the first part increases security, whereas the second reduces communication complexity. \\
We first observe that parameters for lattice-based encryption schemes are always set so as to avoid decryption errors, since an adversary who observes many such failures can usually recover the secret key. But since one of the requirements of an AKE is that it be forward-secure, the public key must change every time. The intuition is therefore that one can set the parameters of the scheme without worrying about decryption errors and everything should still remain secure. We show that this naive solution is not quite correct, but the intuition can be made to work by a small change in the scheme. Our new AKE, which now remains secure in case of decryption errors, fails to create a shared key with probability around $2^{-30}$, but adds enough security that we are able to instantiate a KEM based on the NTRU assumption with rings of smaller dimension. \\
Our second improvement is showing that certain hash-and-sign lattice signatures can be used in ``message-recovery'' mode. In this mode, the signature size is doubled, but the longer signature is enough to recover an even longer message -- thus the signature is longer but the message does not need to be sent. This is advantageous when signing relatively long messages, such as the public keys and ciphertexts generated by a lattice-based KEM. We show how this technique reduces the communication complexity of the generic construction of our AKE by around $20\%$. Using a lattice-based signature in message-recovery mode is quite generic (i.e., it does not depend on the structure of the message), and so it may be used in AKE constructions that use a different KEM, or even simply as a way to reduce the transmission length of a message and its digital signature.
A Tale of Two Shares: Why Two-Share Threshold Implementation Seems Worthwhile -- and Why it is Not
In this work, we explore the possibilities for practical Threshold Implementation (TI) with only two shares, in order to obtain a smaller design that needs less randomness but is still first-order leakage resistant.
We present the first two-share Threshold Implementations of two lightweight block ciphers -- Simon and Present. The implementation results show that two-share TI gains in compactness but loses in throughput compared with three-share schemes. Moreover, the leakage analyses show that two-share TI retains perfect first-order resistance but is shadowed by a strong second-order leakage, making it less worthwhile.
Analysis of Key Wrapping APIs: Generic Policies, Computational Security
We present an analysis of key wrapping APIs with generic policies. We prove that certain minimal conditions on policies are sufficient for keys to be indistinguishable from random in any execution of an API.
Our result captures a large class of API policies, including both the hierarchies on keys that are common in the scientific literature and the non-linear dependencies on keys used in PKCS#11. Indeed, we use our result to propose a secure refinement of PKCS#11, assuming that the attributes of keys are transmitted as authenticated associated data when wrapping and that there is an enforced separation between keys used for wrapping and keys used for other cryptographic purposes.
We use the Computationally Complete Symbolic Attacker developed by Bana and Comon. This model enables us to obtain computational guarantees using a simple proof with a high degree of modularity.
Two-Input Functional Encryption for Inner Products from Bilinear Maps
Functional encryption is a new paradigm of public-key encryption that allows a user to compute $f(x)$ on encrypted data $CT(x)$ with a private key $SK_f$, so as to finely control the revealed information. Multi-input functional encryption is an important extension of (single-input) functional encryption that allows the computation $f(x_1, \ldots, x_n)$ on multiple ciphertexts $CT(x_1), \ldots, CT(x_n)$ with a private key $SK_f$. Although multi-input functional encryption has many interesting applications, such as running SQL queries on an encrypted database and computation on encrypted streams, current candidates are not yet practical since many of them are built on indistinguishability obfuscation. To remedy this unsatisfactory situation, we show that practical two-input functional encryption schemes for inner products can be built from bilinear maps. In this paper, we first propose a two-input functional encryption scheme for inner products in composite-order bilinear groups and prove its selective IND-security under simple assumptions. Next, we propose a two-client functional encryption scheme for inner products where each ciphertext can be associated with a time period, and prove its selective IND-security. Furthermore, we show that our two-input functional encryption schemes in composite-order bilinear groups can be converted into schemes in prime-order asymmetric bilinear groups by using the asymmetric property of such groups.
Security Proofs for Participation Privacy, Receipt-Freeness, Ballot Privacy, and Verifiability Against Malicious Bulletin Board for the Helios Voting Scheme
The Helios voting scheme is well studied, including formal proofs of verifiability and ballot privacy. However, depending on its version, the scheme provides either participation privacy (hiding who participated in the election) or verifiability against a malicious bulletin board (preventing election manipulation by ballot stuffing), but not both at the same time. It also does not provide receipt-freeness, thus enabling vote buying by letting voters construct receipts proving how they voted. Recently, an extension to Helios, further referred to as KTV-Helios, has been proposed that claims to provide these additional security properties. However, the authors of KTV-Helios did not prove their claims. Our first contribution is to provide formal definitions of participation privacy and receipt-freeness that can be applied to KTV-Helios; we use these definitions to prove the corresponding claims of KTV-Helios. Our second contribution is to use the existing definitions of ballot privacy and verifiability against a malicious bulletin board, as applied to Helios, in order to prove that both security properties also hold for KTV-Helios.
Partially homomorphic encryption schemes over finite fields
A homomorphic encryption scheme enables computation in the encrypted domain, which is of great importance because of its wide and growing range of applications. The main issue with the known fully (or partially) homomorphic encryption schemes is the high computational complexity and large communication cost required for their execution. In this work, we study symmetric partially homomorphic encryption schemes over finite fields, establishing relationships between homomorphisms over finite fields and $q$-ary functions. Our proposed partially homomorphic encryption schemes have perfect secrecy and resist ciphertext-only attacks to some extent.
Information-Theoretical Analysis of Two Shannon's Ciphers
We describe generalized running-key ciphers and apply them to the analysis of two of Shannon's methods. In particular, we give estimates of the cipher equivocation and of the probability of correct deciphering without the key.
An Efficient and Scalable Modeling Attack on Lightweight Secure Physically Unclonable Function
The Lightweight Secure Physically Unclonable Function (LSPUF) was proposed as a secure composition of Arbiter PUFs with additional XOR-based input and output networks. Later, however, researchers proposed a Machine Learning (ML) based modeling attack on the $x$-XOR LSPUF, and also showed empirically that pure ML-based modeling is not computationally scalable if the parameter $x$ of the $x$-XOR LSPUF is larger than nine. Besides this pure computational attack using only challenge-response pairs (CRPs), there are other proposals for modeling attacks on LSPUF that use timing and power side-channel information, reliability information, or photonic side-channel information of an LSPUF instance.
In this paper, we propose another pure computational attack (i.e. one without any side-channel information) on multibit-output LSPUF variants, using cryptanalysis and ML techniques together. We first cryptanalyze the output network of the LSPUF to reduce the computational effort required by the previously proposed pure ML-based modeling of an $x$-XOR LSPUF. Specifically, we model an LSPUF instance whose output bit is defined as an $x$-XOR PUF using the ML modeling of a $y$-XOR PUF, where $y<x$. From the computational complexity viewpoint, our proposed modeling attack is more efficient and scalable than previously proposed pure ML-based modeling of LSPUFs with respect to both data and time complexities. We demonstrate the effectiveness of our attack using Matlab-based simulations of LSPUFs as well as LSPUFs implemented on Xilinx Artix-7 Field Programmable Gate Arrays (FPGAs).
Privacy Preserving Network Analysis of Distributed Social Networks
Social network analysis as a technique has been applied to a diverse set of fields, including organizational behavior, sociology, economics and biology. However, for sensitive networks such as hate networks, trust networks and sexual networks, these techniques have been sparsely used. This is largely attributed to the unavailability of network data. Anonymization is the most commonly used technique for performing privacy-preserving network analysis. The process involves a trusted third party, who is aware of the complete network and releases a sanitized version of it. In this paper, we propose an alternative in which the desired analysis can be performed by the parties who distributedly hold the network, such that: (a) no central third party is required; and (b) the topology of the underlying network is kept hidden. We design multiparty protocols for securely performing several of the commonly studied social network analysis algorithms. The current paper addresses secure implementations of the most commonly used network analysis measures, which include degree distribution, closeness centrality, the PageRank algorithm and the K-shell decomposition algorithm. The designed protocols are proven to be secure in the presence of an arithmetic black-box extended with operations like comparison and modulo.
A Practical Framework for Executing Complex Queries over Encrypted Multimedia Data
Over the last few years, data storage in cloud-based services has become very popular, due to easy management and the monetary advantages of cloud computing. Recent developments have shown that such data can be leaked by various attacks. To address some of these attacks, encrypting sensitive data before sending it to the cloud has emerged as an important protection mechanism. If the data is encrypted with traditional techniques, however, selective retrieval of the encrypted data becomes challenging. To address this challenge, efficient searchable encryption schemes have been developed over the years. Almost all existing searchable encryption schemes are designed for keyword search and require running some code on the cloud servers. However, many existing cloud storage services (e.g., Dropbox, Box, Google Drive, etc.) only allow simple data object retrieval and do not provide the computational support needed to realize most searchable encryption schemes.
In this paper, we address the problem of efficiently executing complex search queries over a wide range of encrypted data types (e.g., image files) without requiring customized computational support from the cloud servers. To this end, we provide an extensible framework for supporting complex search queries over encrypted multimedia data. Before any data is uploaded to the cloud, important features are extracted to support different query types (e.g., facial features to support face recognition queries), and complex queries are converted into series of object retrieval tasks for the cloud service. Our results show that this framework can support a wide range of image retrieval queries on encrypted data with little overhead and without any change to the underlying data storage services.
Multi-Input Inner-Product Functional Encryption from Pairings
We present a multi-input functional encryption scheme (MIFE) for the inner product functionality based on the k-Lin assumption in prime-order bilinear groups. Our construction works for any polynomial number of encryption slots and achieves adaptive security against unbounded collusion, while relying on standard polynomial hardness assumptions. Prior to this work, we did not even have a candidate for 3-slot MIFE for inner products in the generic bilinear group model. Our work is also the first MIFE scheme for a non-trivial functionality based on standard cryptographic assumptions, as well as the first to achieve polynomial security loss for a super-constant number of slots under falsifiable assumptions. Prior works required stronger non-standard assumptions such as indistinguishability obfuscation or multilinear maps.
Computational Security of Quantum Encryption
Quantum-mechanical devices have the potential to transform cryptography. Most research in this area has focused either on the information-theoretic advantages of quantum protocols or on the security of classical cryptographic schemes against quantum attacks. In this work, we initiate the study of another relevant topic: the encryption of quantum data in the computational setting.
In this direction, we establish quantum versions of several fundamental classical results. First, we develop natural definitions for private-key and public-key encryption schemes for quantum data. We then define notions of semantic security and indistinguishability, and, in analogy with the classical work of Goldwasser and Micali, show that these notions are equivalent. Finally, we construct secure quantum encryption schemes from basic primitives. In particular, we show that quantum-secure one-way functions imply IND-CCA1-secure symmetric-key quantum encryption, and that quantum-secure trapdoor one-way permutations imply semantically-secure public-key quantum encryption.
Modeling Random Oracles under Unpredictable Queries
In recent work, Bellare, Hoang, and Keelveedhi (CRYPTO 2013) introduced a new abstraction called Universal Computational Extractors (UCEs), and showed how they can replace random oracles (ROs) across a wide range of cryptosystems. We formulate a new framework, called Interactive Computational Extractors (ICEs), that extends UCEs by viewing them as models of ROs under unpredictable (i.e., high-entropy) queries. We overcome a number of limitations of UCEs in the new framework, and in particular prove the adaptive RKA and semi-adaptive KDM security of a highly efficient symmetric encryption scheme using ICEs under key offsets.
We show both negative and positive feasibility results for ICEs. On the negative side, we demonstrate ICE attacks on the HMAC and NMAC constructions. On the positive side we show that: 1) ROs are indeed ICE secure, thereby confirming the structural soundness of our definition and enabling a finer layered approach to protocol design in the RO model; and 2) a modified version of Liskov's Zipper Hash is ICE secure with respect to an underlying fixed-input-length RO, for appropriately restricted classes of adversaries. This brings the first result closer to practice by moving away from variable-input-length ROs. Our security proofs employ techniques from indifferentiability in multi-stage settings.
A deeper understanding of the XOR count distribution in the context of lightweight cryptography
In this paper, we study the behavior of XOR count distributions under different bases of a finite field. The XOR count of a field element is a simplified metric for estimating the hardware cost of implementing multiplication by that element. It is an important criterion in the design of lightweight cryptographic primitives, typically used to estimate the efficiency of the diffusion layer in a block cipher. Although several works have been done to find lightweight MDS diffusion matrices, to the best of our knowledge, none has considered finding lightweight diffusion matrices under bases of the finite field other than the conventional polynomial basis. The main challenge in considering different bases for a lightweight diffusion matrix is that the number of bases grows exponentially with the dimension of the finite field, making it infeasible to check all possible bases. By analyzing the XOR count distributions and the relationship between the XOR count distributions under different bases, we find that when all possible bases for a finite field are considered, the collection of XOR count distributions is invariant to the choice of the irreducible polynomial of the same degree. In addition, we can partition the set of bases into equivalence classes on which the XOR count distribution is invariant; thus, when changing bases within an equivalence class, the XOR count of a diffusion matrix remains the same. This significantly reduces the number of bases to check, as we only need to check one representative from each equivalence class when searching for lightweight diffusion matrices. The empirical evidence from our investigation suggests that the bases in the equivalence class of the polynomial basis are the recommended choices for constructing lightweight MDS diffusion matrices.
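To make the metric concrete, the following Python sketch computes the XOR count of multiplication by a fixed element of $GF(2^m)$ under the polynomial basis, charging (row weight $-$ 1) XOR gates per output bit of the $m \times m$ multiplication matrix over $\mathbb{F}_2$. This follows a common convention in the lightweight-MDS literature; exact conventions vary between papers.

```python
# Hedged sketch: XOR count of multiplication by a fixed element of
# GF(2^m) under the polynomial basis.  Each output bit that is the
# sum of w input bits costs w - 1 XOR gates (a common convention).

def gf2m_mul(a: int, b: int, m: int, poly: int) -> int:
    """Carry-less multiply a*b, reduced modulo the irreducible poly."""
    r = 0
    for i in range(m):
        if (b >> i) & 1:
            r ^= a << i
    for i in range(2 * m - 2, m - 1, -1):   # polynomial reduction
        if (r >> i) & 1:
            r ^= poly << (i - m)
    return r

def xor_count(alpha: int, m: int, poly: int) -> int:
    # column j of the multiplication matrix is alpha * x^j
    cols = [gf2m_mul(alpha, 1 << j, m, poly) for j in range(m)]
    total = 0
    for i in range(m):
        row_weight = sum((col >> i) & 1 for col in cols)
        total += max(row_weight - 1, 0)
    return total

# Example: in GF(2^4) with x^4 + x + 1, multiplying by x costs 1 XOR.
print(xor_count(0b0010, 4, 0b10011))
```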
Homomorphic Encryption for Arithmetic of Approximate Numbers
We suggest a method to construct a homomorphic encryption scheme for approximate arithmetic. It supports approximate addition and multiplication of encrypted messages, together with a new rescaling procedure for managing the magnitude of the plaintext. This procedure truncates a ciphertext into a smaller modulus, which leads to a rounding of the plaintext. The main idea is to add noise following the significant figures that contain the main message. This noise is originally added to the plaintext for security, but is considered part of the error occurring during approximate computations, and is reduced along with the plaintext by rescaling. As a result, our decryption structure outputs an approximate value of the plaintext with a predetermined precision.
We also propose a new batching technique for RLWE-based constructions. A plaintext polynomial is an element of a cyclotomic ring of characteristic zero, and it is mapped to a message vector of complex numbers via the complex canonical embedding map, which is an isometric ring homomorphism. This transformation does not blow up the size of errors, and therefore enables us to preserve the precision of the plaintext after encoding.
In our construction, the bit size of the ciphertext modulus grows linearly with the depth of the circuit being evaluated, due to the rescaling procedure, whereas all previous works either require an exponentially large modulus or expensive computations such as bootstrapping or bit extraction. One important feature of our method is that the precision loss during evaluation is bounded by the depth of the circuit and exceeds that of unencrypted approximate arithmetic, such as floating-point operations, by at most one bit. In addition to the basic approximate circuits, we show that our scheme can be applied to the efficient evaluation of transcendental functions such as the multiplicative inverse, the exponential function, the logistic function and the discrete Fourier transform.
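The rescaling step can be written compactly. Up to notational details, a ciphertext $c$ at modulus level $q_\ell$ is mapped to the next level $q_{\ell-1}$ by a rounded division, which shrinks the encrypted plaintext and its error by the same factor:
\[
\mathrm{RS}_{\ell \to \ell-1}(c) \;=\; \left\lfloor \frac{q_{\ell-1}}{q_\ell}\, c \right\rceil \pmod{q_{\ell-1}}.
\]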
A note on the security of threshold implementations with $d+1$ input shares
Recently, threshold implementations (TI) with $d + 1$ input shares have been proposed at Crypto 2015. This optimization aims for more lightweight TI designs while keeping the glitch-resistance of the original concept. In this note, we consider such an approach and provide preliminary simulation-based evidence, backed by empirical results, of the existence of $d^{\text{th}}$-order leakages. We conclude that, while for first-order TI designs this solution can be overkill due to the extra randomness requirements, higher-order TIs can still benefit from it.
Walsh-Hadamard Transform and Cryptographic Applications in Bias Computing
The Walsh-Hadamard transform is used in a wide variety of scientific and engineering applications, including bent functions and cryptanalytic optimization techniques in cryptography. In linear cryptanalysis, a key question is to find a good linear approximation, which holds with probability $(1+d)/2$ where the bias $d$ is large in absolute value. Lu and Desmedt (2011) took a step toward answering this key question in a more generalized setting and initiated work on the generalized bias problem with linearly dependent inputs. In this paper, we give fully extended results. Deep insights into the assumptions behind the problem are given. We take an information-theoretic approach to show that our bias problem assumes the setting of maximum input entropy subject to the input constraint. By means of the Walsh transform, the bias can be expressed in a simple form. It incorporates the piling-up lemma as a special case. Secondly, as an application, we answer a long-standing open problem in correlation attacks on combiners with memory. We give a closed-form exact solution for the correlation involving the multiple polynomial of any weight \emph{for the first time}. We also give a Walsh analysis for numerical approximation. An interesting bias phenomenon is uncovered: for even and odd weights of the polynomial, the correlation behaves differently. Thirdly, we introduce the notion of a weakly biased distribution, and study bias approximation for a more general case by Walsh analysis. We show that for weakly biased distributions, the piling-up lemma is still valid. Our work shows that Walsh analysis is useful and effective for a broad class of cryptanalysis problems.
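For orientation, the special case mentioned above reads as follows in the abstract's normalization (where an approximation holds with probability $(1+d)/2$): by Matsui's piling-up lemma, the XOR of $k$ independent approximations with biases $d_1, \dots, d_k$ holds with probability $(1+d)/2$ for
\[
d \;=\; \prod_{i=1}^{k} d_i .
\]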
Shorter Circuit Obfuscation in Challenging Security Models
The study of program obfuscation is seeing great progress in recent years, which is crucially attributed to the introduction of graded encoding schemes by Garg, Gentry and Halevi (Eurocrypt 2013). In such schemes, elements of a ring can be encoded such that the content of the encoding is hidden, but restricted algebraic manipulations, followed by zero-testing, can be performed publicly. This primitive currently underlies all known constructions of general-purpose obfuscators.
However, the security properties of the current candidate graded encoding schemes are not well understood, and new attacks are frequently introduced. It is therefore important to assume as little as possible about the security of the graded encoding scheme, and to use security models that are as conservative as possible. This often comes at the cost of reducing the efficiency or the functionality of the obfuscator.
In this work, we present a candidate obfuscator, based on composite-order graded encoding schemes, which obfuscates circuits directly à la Zimmerman (Eurocrypt 2015) and Applebaum-Brakerski (TCC 2015). Our construction requires a graded encoding scheme with only $3$ ``plaintext slots'' (= sub-rings of the underlying ring), which is directly related to the size and complexity of the obfuscated program. We prove that our obfuscator is superior to previous works in two different security models.
1. We prove that our obfuscator is indistinguishability-secure (iO) in the \emph{Unique Representation Generic Graded Encoding} model. Previous works either required a composite-order scheme with polynomially many slots, or were provable in a milder security model. This immediately translates to a polynomial improvement in efficiency, and shows that improved security does not come at the cost of efficiency in this case.
2. Following Badrinarayanan et al.\ (Eurocrypt 2016), we consider a model where finding any ``non-trivial'' encoding of zero breaks the security of the encoding scheme. We show that, perhaps surprisingly, secure obfuscation is possible in this model even for some classes of \emph{non-evasive functions} (for example, any class of conjunctions). We define the property required of the function class, formulate an appropriate (generic) security model, and prove that our aforementioned obfuscator is virtual-black-box (VBB) secure in this model.
New Tools for Multi-Party Computation
In this work we extend the electronic voting scheme introduced by R. Cramer, R. Gennaro and B. Schoenmakers in [CGS97]. In the original paper the privacy of votes is based on the decisional Diffie-Hellman or, respectively, the higher residuosity assumption. Since both problems can be solved efficiently by quantum computers, a desirable goal is to implement the voting scheme with privacy based on different assumptions. We present the framework and a concrete instantiation for an efficient solution with privacy based on learning with errors over rings. Additionally, we show how to achieve privacy assuming hardness of worst-case lattice problems, which are well analyzed and conjectured to be secure against quantum computers.
A Decentralized Anonymity-Preserving Reputation System with Constant-time Score Retrieval
Reputation systems are a major feature of every modern e-commerce website, helping buyers carefully choose their service providers and products. However, most websites use centralized reputation systems, where the security of the system rests entirely upon a single Trusted Third Party. Moreover, they often disclose the identities of the raters, which may discourage honest users from posting frank reviews for fear of retaliation from the ratees. We present a reputation system that is decentralized yet secure and efficient, and could therefore be applied in a practical context. In fact, users are able to retrieve the reputation score of a service provider directly from it in constant time, with assurance regarding the correctness of the information obtained. Additionally, the reputation system is anonymity-preserving, which ensures that users can submit feedback without their identities being associated with it. Despite this anonymity, the system still offers robustness against attacks such as ballot-stuffing and Sybil attacks.
Lattice-Based Signature Schemes and their Sensitivity to Fault Attacks
Due to their high efficiency and their strong security properties, lattice-based cryptographic schemes seem to be a very promising post-quantum replacement for currently used public-key cryptography. The security of lattice-based schemes has been deeply analyzed mathematically, whereas little effort has been spent on analyzing their resistance against implementation attacks.
In this paper, we start with the fault analysis of one of the most important cryptographic primitives: signature schemes. We investigate the vulnerability and resistance of the currently most efficient lattice-based signature schemes, BLISS (CRYPTO 2013), ring-TESLA (AfricaCrypt 2016), and the GLP scheme (CHES 2012), and their implementations. We consider different kinds of (first-order) randomizing, zeroing, and skipping faults. For each of the signature schemes, we found at least six effective attacks. To increase the security of lattice-based signature schemes, we propose countermeasures for each of the respective attacks.
Automatic Search for Key-Bridging Technique: Applications to LBlock and TWINE (Full Version)
Key schedules in block ciphers are often highly simplified, which causes weaknesses that can be exploited in many attacks. At ASIACRYPT 2011, Dunkelman et al. proposed a technique exploiting the weakness in the key schedule of AES, called the key-bridging technique, to improve the overall complexity. The advantage of the key-bridging technique is that it allows the adversary to deduce some sub-key bits from some other sub-key bits, even though they are separated by many key mixing steps. Although the relations between successive rounds may be easy to see, the relations between two rounds separated by several mixing steps are very hard to find. In this paper, we describe a versatile and powerful algorithm for searching for key-bridging relations in word-oriented and bit-oriented block ciphers. To demonstrate the usefulness of our approach, we apply our tool to the impossible differential and multidimensional zero-correlation linear attacks on 23-round LBlock, 23-round TWINE-80 and 25-round TWINE-128. To the best of our knowledge, these results are the currently best results on LBlock and TWINE in the single-key setting.
Efficient algorithms for supersingular isogeny Diffie-Hellman
We propose a new suite of algorithms that significantly improve the performance of supersingular isogeny Diffie-Hellman (SIDH) key exchange. Subsequently, we present a full-fledged implementation of SIDH that is geared towards the 128-bit quantum and 192-bit classical security levels. Our library is the first constant-time SIDH implementation and is up to 2.9 times faster than the previous best (non-constant-time) SIDH software. The high speeds in this paper are driven by compact, inversion-free point and isogeny arithmetic and fast SIDH-tailored field arithmetic: on an Intel Haswell processor, generating ephemeral public keys takes 46 million cycles for Alice and 54 million cycles for Bob, while computing the shared secret takes 44 million and 52 million cycles, respectively. The size of public keys is only 564 bytes, which is significantly smaller than most of the popular post-quantum key exchange alternatives. Ultimately, the size and speed of our software illustrate the strong potential of SIDH as a post-quantum key exchange candidate, and we hope that these results encourage a wider cryptanalytic effort.
Solving Quadratic Equations with XL on Parallel Architectures - extended version
Solving a system of multivariate quadratic equations (MQ) is an NP-complete problem whose complexity estimates are relevant to many cryptographic scenarios. In some cases it is required in the best known attack; sometimes it is a generic attack (such as for the multivariate PKCs), and sometimes it determines a provable level of security (such as for the QUAD stream ciphers).
Under reasonable assumptions, the best way to solve generic MQ systems is the XL algorithm implemented with a sparse matrix solver such as Wiedemann's algorithm. Knowing how much time an implementation of this attack requires gives us a good idea of how future cryptosystems related to MQ can be broken, similar to how implementations of the General Number Field Sieve that factor smaller RSA numbers give us more insight into the security of actual RSA-based cryptosystems.
This paper describes such an implementation of XL using the block Wiedemann algorithm. In 5 days we are able to solve a system with 32 variables and 64 equations over $\mathbb{F}_{16}$ (a computation of about $2^{60.3}$ bit operations) on a small cluster of 8 nodes, with 8 CPU cores and 36 GB of RAM in each node. We do not expect system solvers of the F$_4$/F$_5$ family to accomplish this due to their much higher memory demand. Our software also offers implementations for $\mathbb{F}_{2}$ and $\mathbb{F}_{31}$ and can easily be adapted to other small fields. More importantly, it scales nicely for small clusters, NUMA machines, and a combination of both.
Polymorphic Encryption and Pseudonymisation for Personalised Healthcare
Polymorphic encryption and Pseudonymisation, abbreviated as PEP, form a novel approach for the management of sensitive personal data, especially in health care. Traditional encryption is rather rigid: once encrypted, only one key can be used to decrypt the data. This rigidity is becoming an ever greater problem in the context of big data analytics, where different parties who wish to investigate part of an encrypted data set all need the one key for decryption.
Polymorphic encryption is a new cryptographic technique that solves these problems. Together with the associated technique of polymorphic pseudonymisation, new security and privacy guarantees can be given which are essential in areas such as (personalised) healthcare, medical data collection via self-measurement apps, and more generally in privacy-friendly identity management and data analytics.
The key ideas of polymorphic encryption are:
1. Directly after generation, data can be encrypted in a `polymorphic' manner and stored at a (cloud) storage facility in such a way that the storage provider cannot get access. Crucially, there is no need to fix a priori who gets to see the data, so the data can be protected immediately. For instance, a PEP-enabled self-measurement device will store all its measurement data in polymorphically encrypted form in a back-end database.
2. Later on it can be decided who can decrypt the data. This decision is made on the basis of a policy, in which the data subject should play a key role. The user of the PEP-enabled device can, for instance, decide that doctors $X,Y,Z$ may at some stage decrypt the data to use it in their diagnosis, or that medical researcher groups $A, B, C$ may use it for their investigations, or that third parties $U,V,W$ may use it for additional services, etc.
3. This `tweaking' of the encrypted data to make it decryptable by a specific party can be done in a blind manner. It has to be done by a trusted party who knows how to tweak the ciphertext for whom.
This PEP technology can provide the necessary security and privacy infrastructure for big data analytics. People can entrust their data in polymorphically encrypted form, and decide each time later on to make (parts of) it available (decryptable) to specific parties, for specific analysis purposes. In this way users remain in control, and can monitor which of their data is used where, by whom, and for which purposes.
The polymorphic encryption infrastructure can be supplemented with a pseudonymisation infrastructure which is also polymorphic and which guarantees that each individual automatically has different pseudonyms at different parties and can only be de-pseudonymised by participants (like medical doctors) who know the original identity.
This white paper provides an introduction to Polymorphic Encryption and Pseudonymisation (PEP) at different levels of abstraction, focusing on health care as the application area. It contains a general description of PEP, explaining the basic functionality for laymen, supplemented by a clarification of the legal framework provided by the upcoming General Data Protection Regulation (GDPR) of the European Union. The paper also contains a more advanced, mathematically oriented description of PEP, including the underlying cryptographic primitives, key and pseudonym management, interaction protocols, etc. This second part is aimed at readers with a background in computer security and cryptography. The cryptographic basis for PEP is ElGamal public key encryption, which has been well known since the mid 1980s. It is the way in which this encryption is used --- with re-randomisation, re-keying and re-shuffling --- that is new.
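The three operations can be sketched over textbook multiplicative ElGamal (this is the standard form of these transformations; the white paper's exact elliptic-curve encoding may differ). A ciphertext under public key $y = g^{x}$ is $(b, c) = (g^{r}, m\,y^{r})$, and
\[
\begin{aligned}
\text{re-randomise (with } s\text{):}\quad & (b, c) \mapsto (b\,g^{s},\, c\,y^{s}) && \text{same plaintext, fresh randomness,}\\
\text{re-key (with } k\text{):}\quad & (b, c) \mapsto (b^{1/k},\, c) && \text{now decryptable under } y' = y^{k},\\
\text{re-shuffle (with } n\text{):}\quad & (b, c) \mapsto (b^{n},\, c^{n}) && \text{encrypts } m^{n} \text{ (pseudonymisation).}
\end{aligned}
\]
Correctness of re-keying: the new secret key is $kx$, and $c / (b^{1/k})^{kx} = c / b^{x} = m$.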
The PEP framework is currently being elaborated into an open design and open-source (prototype) implementation at Radboud University in Nijmegen, The Netherlands. The technology will be used and tested in a real-life medical research project at the Radboud University Medical Center.
Efficient Quantum-Resistant Trust Infrastructure based on HIMMO
Secure Internet communications face conflicting demands: advances in (quantum) computers require stronger, quantum-resistant cryptographic algorithms, while the Internet of Things demands better-performing protocols. Furthermore, communication links usually depend on a single root of trust, e.g., a certification authority, which forms a single point of failure that poses too big a risk for future systems. This paper addresses these problems by proposing a hybrid infrastructure that combines the quantum-resistant HIMMO key pre-distribution scheme, based on multiple Trusted Third Parties, with public-key cryptography. During operation, any pair of devices can use private HIMMO key material and public keys to establish a secure and authenticated link, where their public keys are certified beforehand by multiple TTPs acting as roots of trust. Our solution is resilient to the capture of individual roots of trust without affecting performance, while public-key cryptography provides features such as forward secrecy. Combining HIMMO identities with public keys enables secure certification of public keys and distribution of HIMMO key material from multiple TTPs, without requiring an out-of-band channel. The infrastructure can be tuned to fit Internet of Things use cases benefiting from an efficient, non-interactive and authenticated key exchange, or to fit use cases where the use of multiple TTPs provides privacy safeguards when lawful interception is required. Our TLS proof-of-concept shows the feasibility of our proposal by integrating the above security features with minimal changes in the TLS protocol. Our TLS implementation provides classic and post-quantum confidentiality and authentication, while adding a computation overhead of only 2.8% and a communication overhead of approximately 50 bytes compared to a pre-quantum Elliptic Curve Diffie-Hellman ciphersuite.
Automatic Search for the Best Trails in ARX: Application to Block Cipher \textsc{Speck}
We propose the first adaptation of Matsui's algorithm for finding the best differential and linear trails to the class of ARX ciphers. It is based on a branch-and-bound search strategy, does not use any heuristics, and returns optimal results. The practical application of the new algorithm is demonstrated on reduced-round variants of block ciphers from the \textsc{Speck} family. More specifically, we report the probabilities of the best differential trails for up to 10, 9, 8, 7, and 7 rounds of \textsc{Speck32}, \textsc{Speck48}, \textsc{Speck64}, \textsc{Speck96} and \textsc{Speck128} respectively, together with the exact number of differential trails that have the best probability. The new results are used to compute bounds, under the Markov assumption, on the security of \textsc{Speck} against single-trail differential cryptanalysis. Finally, we propose two new ARX primitives with provable bounds against single-trail differential and linear cryptanalysis -- a long-standing open problem in the area of ARX design.
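The local computation at the heart of such a search is the differential probability of addition modulo $2^n$. By the well-known Lipmaa--Moriai formula (standard background, not specific to this paper), with $\mathrm{eq}(a,b,c) := \neg(a \oplus b) \wedge \neg(a \oplus c)$ computed bitwise, a differential $(\alpha, \beta \to \gamma)$ through modular addition is valid iff $\mathrm{eq}(\alpha \ll 1, \beta \ll 1, \gamma \ll 1) \wedge (\alpha \oplus \beta \oplus \gamma \oplus (\beta \ll 1)) = 0$, in which case
\[
\mathrm{xdp}^{+}(\alpha, \beta \to \gamma) \;=\; 2^{-\mathrm{wt}\left(\neg\,\mathrm{eq}(\alpha, \beta, \gamma)\,\wedge\,(2^{n-1}-1)\right)},
\]
i.e., every non-most-significant bit position where $\alpha, \beta, \gamma$ do not all agree halves the probability.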
Towards Bitcoin Payment Networks
Bitcoin as deployed today does not scale. Scalability research has focused on two directions: 1) redesigning the Blockchain protocol, and 2) facilitating `off-chain transactions' and only consulting the Blockchain if an adjudicator is required.
In this paper we focus on the latter and provide an overview of Bitcoin payment networks.
These consist of two components: payment channels, which facilitate off-chain transactions between two parties, and the capability to fairly exchange bitcoins across multiple channels.
We compare Duplex Micropayment Channels and Lightning Channels, before discussing Hashed Time Locked Contracts which enable Bitcoin-based payment networks.
Finally, we highlight challenges for route discovery in these networks.
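As a rough illustration of the hashed time-locked contracts just mentioned, the following Python pseudocode sketches the spending condition of an HTLC (this is not actual Bitcoin Script, and the names are hypothetical): the payee can claim the coins by revealing the preimage of a hash before a timeout, after which the payer can reclaim them.

```python
# Pseudocode sketch of an HTLC spending condition (not real Script).
import hashlib

def htlc_can_spend(spender, payer, payee, hash_lock, preimage, now, timeout):
    if spender == payee and preimage is not None:
        # claim path: reveal the preimage of the hash lock
        return hashlib.sha256(preimage).digest() == hash_lock
    if spender == payer:
        # refund path: only after the timeout has passed
        return now >= timeout
    return False

secret = b"payment-route-secret"
lock = hashlib.sha256(secret).digest()
print(htlc_can_spend("bob", "alice", "bob", lock, secret, 100, 500))   # True
print(htlc_can_spend("alice", "alice", "bob", lock, None, 100, 500))   # False
```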
MILP-Based Automatic Search Algorithms for Differential and Linear Trails for Speck
In recent years, Mixed Integer Linear Programming (MILP) has been successfully applied to searching for differential characteristics and linear approximations in block ciphers, and has produced significant results for ciphers such as SIMON (a family of lightweight and hardware-optimized block ciphers designed by the NSA).
However, in the literature, MILP-based automatic search for differential characteristics and linear approximations had remained infeasible for ARX block ciphers.
In this paper, we propose an MILP-based method for the automatic search for differential characteristics and linear approximations in ARX ciphers.
By studying the properties of differential characteristics and linear approximations of modular addition in ARX ciphers, we present a method to describe them with linear inequalities, under the assumptions of independent inputs to the modular addition and independent rounds.
We use this representation as input to the publicly available MILP optimizer Gurobi to search for differential characteristics and linear approximations of ARX ciphers.
As an illustration, we apply our method to Speck, a family of lightweight and software-optimized block ciphers designed by the NSA, which yields improved differential characteristics and linear approximations compared with the existing ones.
Moreover, we provide improved differential attacks on Speck48, Speck64, Speck96 and Speck128, which are the best attacks on them in terms of the number of rounds.
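To give a flavour of the encoding, the following hedged gurobipy sketch shows the standard linearization of a single XOR $a \oplus b = c$ into four inequalities over binary variables, the elementary building block of such models. Modelling the modular additions of an ARX round requires additional constraints; this toy model is illustrative only.

```python
# Toy gurobipy sketch: a ^ b = c as four linear inequalities over
# binary variables.  ARX modular additions need further constraints;
# this only illustrates the style of the MILP encoding.
import gurobipy as gp
from gurobipy import GRB

model = gp.Model("xor-toy")
a = model.addVar(vtype=GRB.BINARY, name="a")
b = model.addVar(vtype=GRB.BINARY, name="b")
c = model.addVar(vtype=GRB.BINARY, name="c")

# exactly the assignments with a ^ b = c survive these four cuts
model.addConstr(a + b - c >= 0)
model.addConstr(a - b + c >= 0)
model.addConstr(-a + b + c >= 0)
model.addConstr(a + b + c <= 2)

model.setObjective(a + b + c, GRB.MAXIMIZE)   # toy objective
model.optimize()
print(int(a.X), int(b.X), int(c.X))           # e.g. 1 1 0
```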
On the Construction of Lightweight Circulant Involutory MDS Matrices
In the present paper, we investigate the problem of constructing MDS matrices with as few bit XOR operations as possible. The key contribution of the present paper is constructing MDS matrices with entries in the set of $m\times m$ non-singular matrices over $\mathbb{F}_2$ directly, and the linear transformations we used to construct MDS matrices are not assumed pairwise commutative. With this method, it is shown that circulant involutory MDS matrices, which have been proved do not exist over the finite field $\mathbb{F}_{2^m}$, can be constructed by using non-commutative entries.
Some constructions of $4\times4$ and $5\times5$ circulant involutory MDS matrices are given for $m=4,8$. To the best of our knowledge, this is the first time that circulant involutory MDS matrices have been constructed. Furthermore, some lower bounds on the number of XORs required to evaluate one row of circulant and Hadamard MDS matrices of order 4 are given for $m=4,8$. Constructions achieving these bounds are also given, which require fewer XORs than previous constructions.
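For intuition, the commonly used direct XOR-count metric behind such comparisons can be sketched in a few lines of Python (a simplified illustration, not necessarily the paper's exact cost model): each output bit of a binary matrix costs one fewer XOR than its row weight.

    def direct_xor_count(rows):
        # rows: integers encoding the rows of a matrix over F2; a row of
        # Hamming weight w contributes w - 1 XORs to evaluate its output bit.
        return sum(bin(r).count("1") - 1 for r in rows if r)

    # The 4x4 identity costs no XORs; a single dense weight-4 row costs three.
    assert direct_xor_count([0b1000, 0b0100, 0b0010, 0b0001]) == 0
    assert direct_xor_count([0b1111]) == 3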
Multiple Differential Cryptanalysis: A Rigorous Analysis
Statistical analyses of multiple differential attacks are considered in this paper. Following the work of Blondeau and Gérard, we study the most general setting of multiple differential attacks, where there are no restrictions on the set of differentials. We obtain closed-form bounds on the data complexity in terms of the success probability and the advantage of an attack. This is done under two scenarios: one where an independence assumption used by Blondeau and Gérard is assumed to hold, and a second where no such assumption is made. The first case employs the Chernoff bounds while the second case uses the Hoeffding bounds from the theory of concentration inequalities. In both cases, we do not make use of any approximations in our analysis. As a consequence, the results are more generally applicable compared to previous works. The analysis without the independence assumption is the first of its kind in the literature. We believe that the current work places the statistical analysis of multiple differential attacks on a more rigorous foundation than what was previously known.
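For reference, the standard form of Hoeffding's inequality invoked here (the paper's actual bounds are more refined) reads: for independent $X_1,\dots,X_N \in [0,1]$ with common mean $\mu$,
$$\Pr\!\left[\frac{1}{N}\sum_{i=1}^{N} X_i - \mu \ge t\right] \le e^{-2Nt^2},$$
so demanding that this tail be at most $\beta$ already forces a data complexity of the shape $N \ge \ln(1/\beta)/(2t^2)$.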
A New Test Statistic for Key Recovery Attacks Using Multiple Linear Approximations
The log-likelihood ratio (LLR) and chi-squared-distribution-based test statistics have been proposed in the literature for performing statistical analysis of key recovery attacks on block ciphers. A limitation of the LLR test statistic is that its application requires full knowledge of the corresponding distribution. Previous work using the chi-squared approach required {\em approximating} the distribution of the relevant test statistic by chi-squared and normal distributions. Problematic issues regarding such approximations have been reported in the literature.
Perhaps more importantly, both the LLR-based and the chi-squared-based methods are applicable only if the success probability $P_S$ is greater than $0.5$. On the other hand, an attack with success probability less than $0.5$ is also of considerable interest.
This work proposes a new test statistic for key recovery attacks with the following features: its application does not require full knowledge of the underlying distribution; the analysis can be carried out without using any approximations; and the method applies for all values of the success probability.
The statistical analysis of the new test statistic follows the hypothesis testing framework and uses Hoeffding's inequalities to
bound the probabilities of Type-I and Type-II errors.
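A minimal numeric sketch of how such Hoeffding-type error bounds translate into a sample-size requirement (the gap t between the right-key and wrong-key statistics is a hypothetical input, and this simple midpoint threshold is an illustration, not the paper's exact analysis):

    import math

    def sample_size(t: float, alpha: float, beta: float) -> int:
        # Threshold placed midway between the two means, so each error event
        # requires a deviation of t/2; Hoeffding gives exp(-2*N*(t/2)**2).
        n_alpha = math.log(1 / alpha) / (2 * (t / 2) ** 2)  # Type-I error <= alpha
        n_beta = math.log(1 / beta) / (2 * (t / 2) ** 2)    # Type-II error <= beta
        return math.ceil(max(n_alpha, n_beta))

    print(sample_size(t=0.01, alpha=0.05, beta=0.05))  # -> 59915 samples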
On Instantiating Pairing-Based Protocols with Elliptic Curves of Embedding Degree One
Since the discovery of identity-based encryption schemes in 2000, bilinear pairings have been used in the design of hundreds of cryptographic protocols. The most commonly used pairings are constructed from elliptic curves over finite fields with small embedding degree. These pairings can have different security, performance, and functionality characteristics, and were therefore classified into Types 1, 2, 3 and 4. In this paper, we observe that this conventional classification is not applicable to pairings
from elliptic curves with embedding degree one. It is important to understand the security, efficiency, and functionality of these pairings in light of recent attacks on certain pairings constructed from elliptic curves with embedding degree greater than one. We define three kinds of pairings from elliptic curves with embedding degree one, discuss some subtleties with using them to implement pairing-based protocols, and provide an estimated cost of implementing them on modern processors.
Fully Homomorphic Encryption for Point Numbers
In this paper, based on the FV scheme, we construct the first fully homomorphic encryption scheme FHE4FX that can homomorphically compute addition and/or multiplication of encrypted fixed-point numbers without knowing the secret key. Then, we show that in the FHE4FX scheme one can efficiently and homomorphically compare the magnitudes of two encrypted numbers. That is, one can compute an encryption of the greater-than bit that represents whether or not $x > x'$, given two ciphertexts $c$ and $c'$ (of $x$ and $x'$, respectively), without knowing the secret key. Finally, we show that these properties of the FHE4FX scheme enable us to construct a fully homomorphic encryption scheme FHE4FL that can homomorphically compute addition and/or multiplication of encrypted floating-point numbers.
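As a plaintext reference for the semantics computed homomorphically above (a toy sketch; the scale parameter and function names are illustrative, and FHE4FX of course derives the bit from ciphertexts alone): fixed-point values are scaled integers, and the greater-than bit is the sign of their difference.

    def encode_fixed(x: float, scale: int = 2**16) -> int:
        # Fixed-point encoding: represent x as the nearest scaled integer.
        return round(x * scale)

    def greater_than_bit(ex: int, ey: int) -> int:
        # 1 iff x > x': the sign of the (fixed-point) difference.
        return 1 if ex - ey > 0 else 0

    assert greater_than_bit(encode_fixed(1.5), encode_fixed(0.75)) == 1
    assert greater_than_bit(encode_fixed(0.5), encode_fixed(0.75)) == 0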
Tower Number Field Sieve Variant of a Recent Polynomial Selection Method
At Asiacrypt 2015, Barbulescu et al. performed a thorough analysis of the tower number field sieve (TNFS) variant of the number
field sieve algorithm. More recently, Kim and Barbulescu combined the TNFS variant with several polynomial selection methods,
including the Generalised Joux-Lercier (GJL) method and the Conjugation method proposed by Barbulescu et al. at Eurocrypt 2015.
Sarkar and Singh (Eurocrypt 2016) proposed
a polynomial selection method which subsumes both the GJL and the Conjugation methods. This study was done in the context of
the NFS and the multiple NFS (MNFS). The purpose of the present note is to show that the polynomial selection method of Sarkar
and Singh subsumes the GJL and the Conjugation methods also in the context of the TNFS and the multiple TNFS variants. This was not
clear from the recent work by Kim and Barbulescu. Applying the new polynomial selection method to the TNFS variants results in
new asymptotic complexities for certain ranges of primes.
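For context, such complexities are stated in the standard $L$-notation (a standard definition, not specific to this note): for $Q$ the cardinality of the target field,
$$L_Q(\alpha, c) = \exp\!\big( (c + o(1)) (\ln Q)^{\alpha} (\ln \ln Q)^{1-\alpha} \big),$$
and the (M)TNFS variants discussed here achieve complexity $L_Q(1/3, c)$, with the constant $c$ depending on the variant and the range of primes considered.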
Towards Secure Quadratic Voting
We provide an overview of some of the security issues involved in securely implementing Lalley and Weyl's ``Quadratic Voting'', and
suggest some possible implementation architectures. Our proposals blend ``end-to-end verifiable'' voting methods with anonymous payments. We also consider new refund rules for quadratic voting, such as a ``lottery'' method.
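For readers new to the mechanism, the pricing rule that gives Quadratic Voting its name is easy to state (a minimal sketch; the credits and refund rules above are where the scheme's subtleties live):

    def vote_cost(votes: int) -> int:
        # Casting v votes on an issue costs v**2 credits, so each additional
        # vote is marginally more expensive than the last.
        return votes ** 2

    assert vote_cost(1) == 1  # one vote costs 1 credit
    assert vote_cost(3) == 9  # three votes cost 9 credits, not 3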
Slow Motion Zero Knowledge Identifying With Colliding Commitments
Discrete-logarithm authentication protocols are known to present two interesting features: The first is that the prover's commitment, $x=g^r$, claims most of the prover's computational effort. The second is that $x$ does not depend on the challenge and can hence be computed in advance. Provers exploit this feature by pre-loading (or pre-computing) ready-to-use commitment pairs $(r_i, x_i)$. The $r_i$ can be derived from a common seed, but storing each $x_i$ still requires 160 to 256 bits when implementing DSA or Schnorr.
This paper proposes a new concept called slow motion zero-knowledge. SM-ZK allows the prover to slash commitment size (by a factor of 4 to 6) by combining classical zero-knowledge and a timing side-channel. We pay the conceptual price of requiring the ability to measure time but, in exchange, obtain communication-efficient protocols.
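A minimal sketch of the pre-computation being compressed, assuming a toy Schnorr-style setting (the modulus, generator and derivation below are illustrative placeholders, not the paper's parameters):

    import hashlib

    P = 2**127 - 1  # toy prime modulus -- NOT a secure group
    G = 3           # toy generator

    def derive_r(seed: bytes, i: int) -> int:
        # All r_i come from one short seed, so they cost essentially no storage.
        h = hashlib.sha256(seed + i.to_bytes(4, "big")).digest()
        return int.from_bytes(h, "big") % P

    def precompute(seed: bytes, count: int):
        # Classically each x_i = g^{r_i} must be stored in full (160-256 bits);
        # SM-ZK's contribution is shrinking exactly this per-commitment storage.
        return [pow(G, derive_r(seed, i), P) for i in range(count)]

    commitments = precompute(b"common seed", 8)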
Algebraic Insights into the Secret Feistel Network (Full version)
We introduce the high-degree indicator matrix (HDIM), an object closely related to both the linear approximation table and the algebraic normal form (ANF) of a permutation. We show that the HDIM of a Feistel Network contains very specific patterns depending on the degree of the Feistel functions, the number of rounds and whether the Feistel functions are 1-to-1 or not. We exploit these patterns to distinguish Feistel Networks, even if the Feistel Network is whitened using unknown affine layers.
We also present a new type of structural attack exploiting monomials that cannot be present at round $r-1$ in order to recover the ANF of the last Feistel function of an $r$-round Feistel Network. Finally, we discuss the relations between our findings, integral attacks, cube attacks, Todo's division property and the congruence modulo 4 of the Linear Approximation Table.
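For concreteness, the structure whose ANF the HDIM probes can be sketched in a few lines (a generic Feistel round in Python; the secret round functions below are arbitrary placeholders):

    def feistel_encrypt(left: int, right: int, round_funcs):
        # One round maps (L, R) to (R, L XOR f_i(R)); iterating r secret
        # round functions f_i yields an r-round Feistel Network.
        for f in round_funcs:
            left, right = right, left ^ f(right)
        return left, right

    # Toy instance: 3 rounds over 8-bit halves with placeholder functions.
    fs = [lambda x: (x * 3 + 7) & 0xFF,
          lambda x: x ^ 0x5A,
          lambda x: ((x << 1) | (x >> 7)) & 0xFF]
    print(feistel_encrypt(0x12, 0x34, fs))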
Linear-Time Non-Malleable Codes in the Bit-Wise Independent Tampering Model
Non-malleable codes were introduced by Dziembowski et al. (ICS 2010) as coding schemes that protect a message against tampering attacks. Roughly speaking, a code is non-malleable if decoding an adversarially tampered encoding of a message m produces the original message m or a value m' (possibly an abort) completely unrelated to m. It is known that non-malleability is possible only for restricted classes of tampering functions. Since their introduction, a long line of works has established feasibility results of non-malleable codes against different families of tampering functions. However, for many interesting families the challenge of finding "good" non-malleable codes remains open. In particular, we would like to have explicit constructions of non-malleable codes with high rate and efficient encoding/decoding algorithms (i.e. low computational complexity). In this work we present two explicit constructions: the first one is a natural generalization of the work of Dziembowski et al. and gives rise to the first constant-rate non-malleable code with linear-time complexity (in a model including bit-wise independent tampering). The second construction is inspired by the recent works about non-malleable codes of Agrawal et al. (TCC 2015) and of Cheraghchi and Guruswami (TCC 2014) and improves the previous result in the bit-wise tampering model: it builds the first non-malleable codes with linear-time complexity and optimal rate (i.e. rate 1 - o(1)).
Cryptanalysis of Haraka
In this note, we describe attacks on the recently proposed Haraka hash
functions. First, for the two hash functions Haraka-256/256 and
Haraka-512/256 in the family, we show how two colliding messages can
be constructed in about $2^{16}$ function evaluations. Second, we
invalidate the preimage security claim for Haraka-512/256 with an
attack finding one preimage in about $2^{192}$ function
evaluations. These attacks are possible thanks to symmetries in the
internal state that are preserved over several rounds.