All papers in 2019
Breaking the Lightweight Secure PUF: Understanding the Relation of Input Transformations and Machine Learning Resistance
Physical Unclonable Functions (PUFs) and, in particular, XOR Arbiter PUFs have gained much research interest as an authentication mechanism for embedded systems.
One of the biggest problems of (strong) PUFs is their vulnerability to so-called machine learning attacks.
In this paper we take a closer look at one aspect of machine learning attacks that has not yet received the attention it deserves:
the generation of the sub-challenges fed to the individual Arbiter PUFs within an XOR Arbiter PUF.
Specifically, we look at one of the most popular ways to generate sub-challenges based on a combination of permutations and XORs as it has been described for the "Lightweight Secure PUF".
Previous research suggested that using such a sub-challenge generation increases the machine learning resistance significantly.
Our contribution in the field of sub-challenge generation is three-fold: First, drastically improving attack results by Rührmair et al., we describe a novel attack that can break the Lightweight Secure PUF in time roughly equivalent to an XOR Arbiter PUF without transformation of the challenge input. Second, we give a mathematical model that gives insight into the weakness of the Lightweight Secure PUF and provides a way to study generation of sub-challenges in general. Third, we propose a new, efficient, and cost-effective way for sub-challenge generation that mitigates the attack strategy we used and outperforms the Lightweight Secure PUF in both machine learning resistance and resource overhead.
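To make the attack target concrete, here is a minimal Python sketch of the standard additive-delay model of a $k$-XOR Arbiter PUF, together with an illustrative permutation/XOR input layer in the spirit of the Lightweight Secure PUF; the real design's mapping differs, and `lightweight_transform` below is our simplification, not the paper's construction.

```python
import numpy as np

def arbiter_response(weights, challenge):
    """Additive delay model: the response is the sign of a weighted sum of
    'feature' bits derived from the challenge (the standard LTF model)."""
    phi = np.cumprod((1 - 2 * challenge)[::-1])[::-1]  # phi_i = prod_{j>=i} (-1)^c_j
    phi = np.append(phi, 1)                            # constant feature
    return int(np.sign(np.dot(weights, phi)))

def lightweight_transform(challenge, k):
    """Illustrative stand-in for the Lightweight Secure PUF input layer:
    sub-challenge i is a rotation of the challenge followed by an XOR of
    neighbouring bits (the real mapping differs in detail)."""
    subs = []
    for i in range(k):
        rot = np.roll(challenge, i)
        subs.append(rot ^ np.roll(rot, 1))
    return subs

def xor_arbiter_puf(weightss, challenge, transform=None):
    k = len(weightss)
    subs = transform(challenge, k) if transform else [challenge] * k
    r = 1
    for w, c in zip(weightss, subs):   # XOR of +/-1 responses = product
        r *= arbiter_response(w, c)
    return r

n, k = 64, 4
rng = np.random.default_rng(1)
weightss = rng.normal(size=(k, n + 1))   # per-arbiter delay parameters
c = rng.integers(0, 2, size=n)
print(xor_arbiter_puf(weightss, c, lightweight_transform))
```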
A Generic Construction for Revocable Identity-Based Encryption with Subset Difference Methods
To deal with users' dynamically changing credentials in identity-based encryption (IBE), providing an efficient key revocation method is an important issue. Recently, Ma and Lin proposed a generic method for designing a revocable IBE (RIBE) scheme that uses the complete subtree (CS) method by combining IBE and hierarchical IBE (HIBE) schemes. In this paper, we propose a new generic method for designing an RIBE scheme that uses the subset difference (SD) method instead of the CS method. In order to use the SD method, we generically design an RIBE scheme by combining two-level HIBE and single revocation encryption (SRE) schemes. If the underlying HIBE and SRE schemes are adaptively (or selectively) secure, then our RIBE scheme is also adaptively (or selectively) secure. In addition, we show that the layered SD (LSD) method can be applied to our RIBE scheme and that a chosen-ciphertext secure RIBE scheme can also be designed generically.
Don't forget your roots: constant-time root finding over $\mathbb{F}_{2^m}$
In the last few years, post-quantum cryptography has received much attention. NIST is running a competition to select post-quantum schemes for standardization. As a consequence, implementations of post-quantum schemes have become important, and with them side-channel attacks. In this paper, we show a timing attack on a code-based scheme submitted to the NIST competition. This timing attack recovers secret information because of a timing variance in finding the roots of a polynomial. We present four algorithms for finding roots that are protected against remote timing exploitation.
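As background on what constant-time root finding means here, the following sketch (ours, not one of the paper's four algorithms) evaluates a polynomial at every element of GF(2^8) with a fixed operation count, using a branch-free field multiplication; it is the fixed operation count that removes the timing variance a remote attacker exploits.

```python
# Branch-free exhaustive root finding over GF(2^8) with the AES polynomial
# x^8 + x^4 + x^3 + x + 1 (0x11B). Horner evaluation at all 256 field
# elements does fixed work regardless of where (or whether) roots occur.

def gf_mul(a, b):
    """Branch-free multiplication in GF(2^8)."""
    r = 0
    for _ in range(8):
        r ^= -(b & 1) & a          # add a when low bit of b is set (mask, no if)
        hi = -((a >> 7) & 1)       # mask for conditional modular reduction
        a = ((a << 1) & 0xFF) ^ (hi & 0x1B)
        b >>= 1
    return r

def roots(poly):
    """poly[i] is the coefficient of x^i; scan all 256 field elements."""
    rs = []
    for x in range(256):
        acc = 0
        for c in reversed(poly):   # Horner: fixed work per element
            acc = gf_mul(acc, x) ^ c
        if acc == 0:               # bookkeeping only; the work above is fixed
            rs.append(x)
    return rs

# (x - 3)(x - 5) over GF(2^8): x^2 + (3 XOR 5) x + 3*5 (addition is XOR)
print(roots([gf_mul(3, 5), 3 ^ 5, 1]))   # -> [3, 5]
```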
The End of Logic Locking? A Critical View on the Security of Logic Locking
With continuously shrinking feature sizes of integrated circuits, the vast majority of semiconductor companies have become fabless, i.e., chip manufacturing has been outsourced to foundries across the globe.
However, by outsourcing critical stages of IC fabrication, the design house puts trust in entities which may have malicious intents. This exposes the design industry to a number of threats, including piracy via unauthorized overproduction and subsequent reselling on the black market.
One alleged solution for this problem is logic locking, also known as logic encryption, where the genuine functionality of a chip is “locked” using a key only known to the designer. If a correct key is provided, the design works as intended but with an incorrect key, the circuit produces faulty outputs. Unlocking is handled by the designer only after production, hence an adversarial foundry should not be able to unlock overproduced chips.
In this work, we highlight major shortcomings of proposed logic locking schemes.
They exist primarily due to the absence of a well-defined and realistic attacker model in the current literature. We characterize the physical capabilities of adversaries, especially with respect to invasive attacks and a malicious foundry.
This allows us to derive an attacker model that matches reality, yielding attacks against the foundations of locking schemes beyond the usually employed SAT-based attacks. Our analysis, which is accompanied by two case studies, shows that none of the previously proposed logic locking schemes is able to achieve the intended protection goals against piracy in real-world scenarios.
As an important conclusion, we argue that there are strong indications that logic locking will most likely never be secure against a determined malicious foundry.
More Practical Single-Trace Attacks on the Number Theoretic Transform
Single-trace side-channel attacks are a considerable threat to implementations of classic public-key schemes. For lattice-based cryptography, however, this class of attacks is much less understood, and only a small number of previous works show attacks. Primas et al., for instance, present a single-trace attack on the Number Theoretic Transform (NTT), which is at the heart of many efficient lattice-based schemes.
They, however, attack a variable-time implementation and also require a rather powerful side-channel adversary capable of creating close to a million multivariate templates. Thus, it was an open question whether such an attack can be made practical while also targeting state-of-the-art constant-time implementations.
In this paper, we answer this question positively. First, we introduce several improvements to the usage of belief propagation, which underlies the attack. Second, we change the target to encryption instead of decryption; this limits attacks to the recovery of the transmitted symmetric key but, in turn, increases attack performance. All this then allows successful attacks even when switching to univariate Hamming-weight templates. We evaluate the performance and noise resistance of our attack using simulations, but also target a real device. Concretely, we successfully attack an assembly-optimized constant-time Kyber implementation running on an ARM Cortex-M4 microcontroller while requiring the construction of only $2^{13}$ templates.
Efficient Cryptography on the RISC-V Architecture
RISC-V is a promising free and open-source instruction set architecture. Most of the instruction set has been standardized and several hardware implementations are commercially available. In this paper we highlight features of RISC-V that are interesting for optimizing implementations of cryptographic primitives. We provide the first optimized assembly implementations of table-based AES, bitsliced AES, ChaCha, and the Keccak-$f$[1600] permutation for the RV32I instruction set. With respect to public-key cryptography, we study the performance of arbitrary-precision integer arithmetic without a carry flag. We then estimate the improvement that can be gained by several RISC-V extensions. These performance studies also serve to aid design choices for future RISC-V extensions and implementations.
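As a hedged illustration of the missing-carry-flag issue (our example, not code from the paper): on RV32I the carry out of a 32-bit addition is typically recovered with an unsigned comparison (the `sltu` idiom), which the Python model below mimics for multi-limb addition.

```python
# Multi-precision addition from 32-bit limbs without a carry flag:
# after s = (a + b) mod 2^32, the carry is exactly (s < a), which is the
# sltu idiom used on RISC-V. Pure-Python model of the limb arithmetic.
MASK = (1 << 32) - 1

def mp_add(a_limbs, b_limbs):
    out, carry = [], 0
    for a, b in zip(a_limbs, b_limbs):
        s = (a + b) & MASK
        c1 = 1 if s < a else 0      # carry out of a + b      (sltu c1, s, a)
        s = (s + carry) & MASK
        c2 = 1 if s < carry else 0  # carry out of adding the incoming carry
        out.append(s)
        carry = c1 | c2             # at most one of c1, c2 can be set
    return out, carry

x = [0xFFFFFFFF, 0x00000001]        # little-endian limbs
y = [0x00000001, 0x00000002]
print(mp_add(x, y))                 # -> ([0, 4], 0)
```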
On equivalence between known families of quadratic APN functions
We study the question of whether the currently known families of quadratic APN polynomials are pairwise different up to CCZ-equivalence. We reduce the list of these families to those that are CCZ-inequivalent to each other. In particular, we prove that the families of APN trinomials (constructed by Budaghyan and Carlet in 2008) and multinomials (constructed by Bracken et al. in 2008) are CCZ-equivalent to the APN hexanomial family introduced by Budaghyan and Carlet in 2008. We also prove that a generalization of these trinomial and multinomial families given by Duan et al. (2014) is CCZ-equivalent to the family of hexanomials as well.
TICK: Tiny Client for Blockchains
In Bitcoin-like systems, when a payee chooses to accept zero-confirmation transactions, it needs to verify the validity of the transaction. In particular, one of the steps is to verify that each referenced output of the transaction has not previously been spent. The conventional lightweight client design can only support such an operation with complexity $O(N_T)$, where $N_T$ is the total number of transactions in the system. This is impractical for lightweight clients. The latest proposals suggest summarizing all the unspent outputs in an ordered Merkle tree, so that a lightweight client can request a proof of presence and/or absence of an element in the tree to establish whether a referenced output has been spent, with complexity $O(\log N_U)$, where $N_U$ is the total number of unspent outputs in the system. However, updating such an ordered Merkle tree is slow, making the system impractical: by our evaluation, when a new block is generated in Bitcoin, it takes more than one minute to update the ordered Merkle tree.
We propose a practical client, TICK, to solve this problem. TICK uses an AVL hash tree to store all the unspent outputs. The AVL hash tree can be updated in $O(M \log N_U)$ time, where $M$ is the number of elements that need to be inserted into or removed from the tree. By our evaluation, when a new block is generated, the AVL hash tree can be updated within $1$ second. Similarly, a proof can be generated in $O(\log N_U)$ time. Therefore, TICK incurs negligible run-time overhead, and is thus practical. Thanks to the AVL hash tree, a storage-limited device can efficiently and cryptographically verify transactions. In addition, rather than requiring new miners to download the entire blockchain before mining, TICK allows new miners to download only a small portion of data to start mining.
We implement TICK for Bitcoin and provide an experimental evaluation of its performance using the current Bitcoin blockchain data. Our results show that the proof for verifying whether an output of a transaction is spent is only several KB. Verification is very fast: generating a proof generally takes less than $1$ millisecond, and verifying a proof takes even less time. In addition, to start mining, new miners only need to download several GB of data, rather than over 230 GB.
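As a rough illustration of the data structure (a sketch, not TICK's implementation): an AVL tree in which every node additionally stores a hash of its subtree supports authenticated updates that touch only the $O(\log N_U)$ hashes along the rebalanced search path.

```python
import hashlib

def H(*parts):
    return hashlib.sha256(b"|".join(parts)).digest()

class Node:
    def __init__(self, key):
        self.key, self.left, self.right, self.height = key, None, None, 1
        rehash(self)

def rehash(n):
    lh = n.left.hash if n.left else b""
    rh = n.right.hash if n.right else b""
    n.hash = H(str(n.key).encode(), lh, rh)   # authenticates the whole subtree

def height(n):
    return n.height if n else 0

def fix(n):                                    # recompute height and hash
    n.height = 1 + max(height(n.left), height(n.right))
    rehash(n)
    return n

def rot_right(y):
    x = y.left
    y.left, x.right = x.right, y
    fix(y)
    return fix(x)

def rot_left(x):
    y = x.right
    x.right, y.left = y.left, x
    fix(x)
    return fix(y)

def insert(n, key):
    if n is None:
        return Node(key)
    if key < n.key:
        n.left = insert(n.left, key)
    else:
        n.right = insert(n.right, key)
    fix(n)
    bal = height(n.left) - height(n.right)
    if bal > 1 and key < n.left.key:
        return rot_right(n)
    if bal < -1 and key >= n.right.key:
        return rot_left(n)
    if bal > 1:
        n.left = rot_left(n.left)
        return rot_right(n)
    if bal < -1:
        n.right = rot_right(n.right)
        return rot_left(n)
    return n

# Only O(log N_U) hashes change per insertion; a proof of (non-)membership
# is the path of sibling hashes from the root, as in a Merkle tree.
root = None
for utxo in [5, 2, 8, 1, 9, 7]:
    root = insert(root, utxo)
print(root.hash.hex()[:16])   # authenticated digest of the whole UTXO set
```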
Sublattice Attacks on LWE over Arbitrary Number Field Lattices
Learning with errors over algebraic integer rings (Ring-LWE) was introduced by Lyubashevsky, Peikert and Regev at Eurocrypt 2010 and has served as the fundamental hard problem for lattice cryptography. In recent years, variants of algebraically structured learning with errors, such as order-LWE, module-LWE and LWE over number field lattices, have been introduced. In this paper we prove that for LWE over a number field lattice $L$ in an arbitrary number field of degree $n$, when the width is smaller than $O(\sqrt{\tfrac{\log n}{n}}\,\lambda_1(L_1^\vee))$ for some sublattice $L_1 \subset L^\vee$ of polynomially bounded index $|L^\vee/L_1|$ with non-negligible $O_{L_1}$, then LWE over $L$ can be solved by a polynomial-time algorithm for some modulus parameters. This leads to new sublattice bounds on the widths of solvable Ring-LWE instances. From our sublattice attack on Ring-LWE it is natural to ask whether there exist sublattices $L \subset R_K$, for some number field $K$, with very small $\lambda_1(L^\vee)$ and non-negligible $O_L$. In practice the sublattice attack is highly relevant for Ring-LWE based lattice cryptography. Secondly, we prove that for LWE over an arbitrary number field lattice there are infinitely many modulus parameters such that the problem can be transformed to distinguishing the discretization of a one-dimensional continuous Gaussian distribution from the uniform distribution. Hence, for these modulus parameters, LWE over arbitrary number field lattices can be solved in polynomial time for a suitably large width (though still narrower than the range covered by the hardness reduction results), while for plain LWE there are no such modulus parameters.
Simple and Efficient Approach for Achieving End-to-End Anonymous Communication
Anonymous communication that is secure end-to-end and unlinkable plays a critical role in protecting user privacy, by preventing service providers from using message metadata to discover communication links between any two users. Techniques such as Mix-net, DC-net, time delay, cover traffic, Secure Multiparty Computation and Private Information Retrieval can be used to achieve anonymous communication. However, the existing solutions are very complex and difficult to implement in practice. More importantly, they do not offer security against malicious adversaries who can arbitrarily deviate from normal protocol execution, e.g., by refusing to participate or modifying messages exchanged in the protocol. In this paper, we propose a simple and novel approach to establishing anonymous communication, easily implementable with servers having only communication- and storage-related capabilities. Our approach offers a stronger security guarantee against malicious adversaries without incurring a great deal of extra computation and communication costs. We formally prove the security guarantee of the proposed solution and analyze its pros and cons compared to existing work.
Relation between o-equivalence and EA-equivalence for Niho bent functions
Boolean functions, and bent functions in particular, are considered up to so-called EA-equivalence, which is the most general known equivalence relation preserving bentness of functions.
However, for a special type of bent functions, so-called Niho bent functions, there is a more general equivalence relation, called o-equivalence, which is induced from the equivalence of o-polynomials.
In the present work we study, for a given o-polynomial, a general construction which provides all possible o-equivalent Niho bent functions, and we considerably simplify it to a form which excludes EA-equivalent cases.
That is, we identify all cases which can potentially lead to pairwise EA-inequivalent Niho bent functions derived from o-equivalence of any given Niho bent function. Furthermore, we determine all pairwise EA-inequivalent Niho bent functions arising from all known o-polynomials via o-equivalence.
The Impact of Time on DNS Security
Time is an important component of the Domain Name System (DNS) and the DNS Security Extensions (DNSSEC). DNS caches rely on an absolute notion of time (e.g., "August 8, 2018 at 11:59pm") to determine how long DNS records can be cached (i.e., their Time To Live (TTL)) and to determine the validity interval of DNSSEC signatures. This is especially interesting for two reasons. First, absolute time is set from external sources, and is thus vulnerable to a variety of network attacks that maliciously alter time. Meanwhile, relative time (e.g., "2 hours from the time the DNS query was sent") can be set using sources internal to the operating system, and is thus not vulnerable to network attacks. Second, the DNS on-the-wire protocol only uses relative time; relative time is then translated into absolute time as part of DNS caching, which introduces vulnerabilities.
We leverage these two observations to show how to pivot from network attacks on absolute time to attacks on DNS caching. Specifically, we present and discuss the implications of attacks that (1) expire the cache earlier than intended and (2) make the cached responses stick in the cache longer than intended. We use network measurements to identify a significant attack surface for these DNS cache attacks, focusing specifically on pivots from Network Time Protocol (NTP) attacks by both on-path and off-path attackers. We therefore recommend that DNS resolvers stop using absolute time for caching, and instead start using relative time. We have implemented our recommendations as part of the popular Unbound open source resolver, and our implementation will be part of Unbound's upcoming release.
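To illustrate the pivot (a toy sketch, not Unbound's code): a cache with absolute wall-clock expiry inherits any attack on the system clock, while a cache keyed to a monotonic clock exposes only relative time.

```python
import time

class AbsoluteTTLCache:
    """Expiry tied to the wall clock: shifting the system clock (e.g.,
    via an NTP attack) can expire entries early or make them stick."""
    def __init__(self): self.store = {}
    def put(self, name, rr, ttl):
        self.store[name] = (rr, time.time() + ttl)
    def get(self, name):
        rr, expires_at = self.store[name]
        return rr if time.time() < expires_at else None

class RelativeTTLCache:
    """Expiry measured on a monotonic clock internal to the OS, which a
    network attacker cannot shift."""
    def __init__(self): self.store = {}
    def put(self, name, rr, ttl):
        self.store[name] = (rr, time.monotonic() + ttl)
    def get(self, name):
        rr, deadline = self.store[name]
        return rr if time.monotonic() < deadline else None

c = RelativeTTLCache()
c.put("example.com", "93.184.216.34", ttl=300)
print(c.get("example.com"))
```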
Optimized implementation of the NIST PQC submission ROLLO on microcontroller
We present in this paper an efficient implementation of the code-based cryptosystem ROLLO, a candidate in the NIST PQC project, on a device available on the market. This implementation benefits from existing hardware by using a crypto co-processor contained in an already deployed microcontroller to speed up operations in $\mathbb{F}_{2^m}$. Optimizations are then made on operations in $\mathbb{F}_{2^m}^n$. Finally, the cryptosystem outperforms the public key exchange protocol ECDH at a security level of 192 bits, demonstrating the possibility of integrating this new cryptosystem into current chips.
According to our implementation, the ROLLO-I-128 submission takes 173.6 ms for key generation, 12 ms for encapsulation and 79.4 ms for decapsulation on a microcontroller featuring an $\text{ARM}^{\text{\textregistered}}$ $\text{SecurCore}^{\text{\textregistered}}$ SC300\texttrademark{} core running at 50 MHz.
P6V2G: A Privacy-Preserving V2G Scheme for Two-Way Payments and Reputation
The number of electric vehicles (EVs) is steadily growing. This provides a promising opportunity for balancing the smart grid of the future, because vehicle-to-grid (V2G) systems can utilize the batteries of plugged-in EVs as much-needed distributed energy storage: in times of high production and low demand the excess energy in the grid is stored in the EVs’ batteries, while peaks in demand are mitigated by EVs feeding electricity back to the grid. But the data needed for managing individual V2G charging sessions, as well as for billing and rewards, is of a highly personal and therefore sensitive nature. This causes V2G systems to pose a significant threat to the privacy of their users. Existing cryptographic protocols for this scenario either do not offer adequate privacy protection or fail to provide key features necessary for a practical system.
Based on the recent cryptographic toll collection framework P4TC, this work introduces a privacy-preserving yet efficient V2G payment and reward system called P6V2G. Our system facilitates two-way transactions in a semi-online, post-payment setting. It provides double-spending detection, an integrated reputation system, contingency traceability and blacklisting features, and is portable between EVs. The aforementioned properties are holistically captured within an established cryptographic security framework. In contrast to existing protocols, this formal model of a V2G payment and reward system allows us to assert all properties through a comprehensive formal proof.
A publicly verifiable quantum blind signature scheme without entanglement based on asymmetric cryptography
In recent years, several cryptographic scholars have proposed quantum blind signature schemes. However, their methods require the signatories and the inspectors to share common keys in advance, which makes them not only conceptually complicated but also subject to repudiation. Moreover, because not everyone can verify the blind signature, a designated verifier is needed. Laurent et al. have argued that, other than the assumption that its pre-image is collision-free, the one-way hash function is an attractive cryptographic component for designing cryptosystems in the post-quantum era. Inspired by this, we propose a publicly verifiable quantum blind signature scheme based on the hash function. Our security analyses confirm that our quantum blind signature is not only secure but also has the needed properties: anonymity, unforgeability, non-repudiation, blindness, public verifiability, and traceability. Hence, we conclude that this approach improves on the state of the art and is therefore more suitable for real-life applications such as mobile payments, quantum voting, or quantum government.
Towards a Hybrid Public Key Infrastructure (PKI): A Review
Traditional certificate-based public key infrastructure (PKI) suffers from certificate overhead: storage, verification, revocation, etc. To overcome these problems, the idea of certificate-free identity-based public key cryptography (ID-PKC) was proposed by Shamir. This is suitable only for closed, trusted groups. Moreover, this concept has some inherent problems, such as the key escrow problem, the secure key channel problem, and identity management overhead. Later, several works tried to combine both cryptographic techniques so that the resulting hybrid PKI framework is built upon the best features of both. It has been shown that this approach solves many problems associated with each individual cryptosystem.
In this paper, we review and compare such hybrid schemes, which combine certificate-based PKC and ID-based PKC. A summary of the comparison, based on various features, is presented in a table.
Dissecting the CHES 2018 AES Challenge
One challenge of the CHES 2018 side-channel contest was to break a masked AES implementation. It was impressively won by Gohr et al., who applied ridge regression to obtain guesses for the Hamming weights of the (unmasked) AES key schedule, and then used a SAT solver to brute-force the remaining key space. Template attacks are one of the most common approaches used to assess the leakage of a device in a security evaluation. Hence, this raises the question of whether ridge regression is a more suitable choice for security evaluation, especially w.r.t. portability. We investigate the feasibility of template attacks to break the presented AES implementation, analyze the leakage of the device, and based on this mount a template attack on the Hamming weights of the key expansion. We then use classical key-search algorithms to recover the AES key. By analyzing the leakage and applying dimension-reduction techniques, we are able to compress each trace from 650,000 points to only 30 points that are then used to create the templates. Our experimental results indicate that such classical templates achieve results similar to ridge regression, in several cases even slightly outperforming it. According to the organizers, the CTF was aimed at evaluating the concepts of deep learning and classic profiling. Our final conclusion is that the challenge traces are not optimal for settling the intended question, as the leakage is very strong and local. Therefore it is very suitable for classical machine learning techniques such as template attacks or ridge regression, and the difficulty in recovering the key is linked more to the resulting key-search problem than to the actual attack.
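For readers unfamiliar with the technique, here is a minimal Gaussian-template sketch on simulated Hamming-weight leakage (illustrative only; the evaluation in the paper profiles real traces with multivariate points of interest).

```python
import numpy as np

rng = np.random.default_rng(0)
HW = np.array([bin(v).count("1") for v in range(256)])
SIGMA = 1.0                      # noise level of the simulated device

def leak(values):
    """One leakage sample per processed byte: Hamming weight + noise."""
    return HW[values] + rng.normal(0, SIGMA, size=len(values))

# Profiling phase: estimate mean/variance of the leakage per HW class.
prof_vals = rng.integers(0, 256, size=20000)
prof_leak = leak(prof_vals)
mu  = np.array([prof_leak[HW[prof_vals] == w].mean() for w in range(9)])
var = np.array([prof_leak[HW[prof_vals] == w].var()  for w in range(9)])

# Attack phase: classify the HW of fresh samples by maximum likelihood.
secret = rng.integers(0, 256, size=1000)
traces = leak(secret)
ll = -(traces[:, None] - mu[None, :])**2 / (2 * var) - 0.5 * np.log(var)
guess = ll.argmax(axis=1)
print("HW classification accuracy:", (guess == HW[secret]).mean())
```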
Algorithmic aspects of elliptic bases in finite field discrete logarithm algorithms
Elliptic bases, introduced by Couveignes and Lercier in 2009, give an elegant way of representing finite field extensions. A natural question which seems to have been considered independently by several groups is to use this representation as a starting point for small characteristic finite field discrete logarithm algorithms. This idea has been recently proposed by two groups working on it, in order to achieve provable quasi-polynomial time for discrete logarithms in small characteristic finite fields.
In this paper, we don't try to achieve a provable algorithm but instead investigate the practicality of heuristic algorithms based on elliptic bases. Our key idea is to use a different model of the elliptic curve underlying the elliptic basis, which allows for a relatively simple adaptation of the techniques used with previous Frobenius representation algorithms. We have not performed any record computation with this new method, but our experiments with the field GF(3^1345) indicate that switching to elliptic representations might be possible with performance comparable to the current best practical methods.
BBQ: Using AES in Picnic Signatures
This work studies the use of the AES block cipher for Picnic-style signatures, which work in the multiparty-computation-in-the-head model. It applies advancements in arithmetic circuits for computing the AES S-box over multiparty computation in the preprocessing model, obtaining an average 40\% improvement in signature size compared to using binary circuits for AES-128, AES-192 and AES-256 in combination with previous techniques. This work also discusses other methods for computing the S-box and provides insights into the reach and limits of the multiparty-computation-in-the-head paradigm.
Statistical ZAP Arguments
Dwork and Naor (FOCS’00) first introduced and constructed two-message public-coin witness-indistinguishable proofs (ZAPs) for NP based on trapdoor permutations. Since then, ZAPs have also been obtained based on the decisional linear assumption on bilinear maps and on indistinguishability obfuscation, and have proven extremely useful in the design of several cryptographic primitives.
However, all known constructions of two-message public-coin (or even publicly verifiable) proof systems only guarantee witness indistinguishability against computationally bounded verifiers. In this paper, we construct the first public-coin two-message witness-indistinguishable (WI) arguments for NP with statistical privacy, assuming that the learning with errors (LWE) assumption holds with an explicit, efficiently computable upper bound on the adversary’s advantage. Prior to this, there were no known constructions of two-message publicly verifiable WI protocols under lattice assumptions, even satisfying the weaker notion of computational witness indistinguishability.
Seems Legit: Automated Analysis of Subtle Attacks on Protocols that Use Signatures
The standard definition of security for digital signatures---existential unforgeability---does not ensure certain properties that protocol designers might expect. For example, in many modern signature schemes, one signature may verify against multiple distinct public keys. It is left to protocol designers to ensure that the absence of these properties does not lead to attacks.
Modern automated protocol analysis tools are able to provably exclude large classes of attacks on complex real-world protocols such as TLS 1.3 and 5G. However, their abstraction of signatures (implicitly) assumes much more than existential unforgeability, thereby missing several classes of practical attacks.
We give a hierarchy of new formal models for signature schemes that captures these subtleties, and thereby allows us to analyse (often unexpected) behaviours of real-world protocols that were previously out of reach of symbolic analysis. We implement our models in the Tamarin Prover, yielding the first way to perform these analyses automatically, and validate them on several case studies. In the process, we find new attacks on DRKey and SOAP's WS-Security, both protocols previously proven secure in traditional symbolic models.
A Composable Security Treatment of the Lightning Network
The high latency and low throughput of blockchain protocols constitute one of the fundamental barriers to their wider adoption. Overlay protocols, notably the lightning network, have been touted as the most viable direction for rectifying this in practice. In this work we present, for the first time, a full formalisation and security analysis of the lightning network in the (global) universal composition setting, taking into account a global ledger functionality whose realisability by the Bitcoin blockchain protocol was demonstrated in previous work [Badertscher et al., Crypto’17]. As a result, our treatment delineates exactly how the security guarantees of the protocol depend on the properties of the underlying ledger. Moreover, we provide a complete and modular description of the core of the lightning protocol that highlights precisely its dependency on underlying basic cryptographic primitives such as digital signatures, pseudorandom functions, identity-based signatures and a less common two-party primitive, which we term a combined digital signature, all of which were originally hidden within the lightning protocol’s implementation.
A Reduction-Based Proof for Authentication and Session Key Security in 3-Party Kerberos
Kerberos is one of the earliest network security protocols, providing authentication between clients and servers with the assistance of trusted servers. It remains widely used, notably as the default authentication protocol in Microsoft Active Directory (thus shipped with every major operating system), and is the ancestor of modern single sign-on protocols like OAuth and OpenID Connect.
There have been many analyses of Kerberos in the symbolic (Dolev--Yao) model, which is more amenable to computer-aided verification tools than the computational model, but also idealizes messages and cryptographic primitives more. Reduction-based proofs in the computational model can provide assurance against a richer class of adversaries, and proofs with concrete probability analyses help in picking security parameters, but Kerberos has had no such analyses to date.
We give a reduction-based security proof of Kerberos authentication and key establishment, focusing on the mandatory 3-party mode. We show that it is a secure authentication protocol under standard assumptions on its encryption scheme; our results can be lifted to apply to quantum adversaries as well.
As has been the case for other real-world authenticated key exchange (AKE) protocols, the standard AKE security notion of session key indistinguishability cannot be proven for Kerberos since the session key is used in the protocol itself, breaking indistinguishability. We provide two positive results despite this: we show that the standardized but optional sub-session mode of Kerberos does yield secure session keys, and that the hash of the main session key is also a secure session key under Krawczyk's generalization of the authenticated and confidential channel establishment (ACCE) model.
Scalable Private Set Union from Symmetric-Key Techniques
We present a new efficient protocol for computing private set union (PSU). Here two semi-honest parties, each holding a dataset of known size (or of a known upper bound), wish to compute the union of their sets without revealing anything else to either party. Our protocol is in the OT hybrid model. Beyond OT extension, it is fully based on symmetric-key primitives. We motivate the PSU primitive by its direct application to network security and other areas.
At the technical core of our PSU construction is a reverse private membership test (RPMT) protocol. In RPMT, a sender with input $x^*$ interacts with a receiver holding a set $X$. As a result, the receiver learns (only) a bit indicating whether $x^* \in X$, while the sender learns nothing about the set $X$. (Previous similar protocols provide output to the opposite party, hence the term "reverse" private membership.) We believe our RPMT abstraction and constructions may serve as a building block in other applications as well.
We demonstrate the practicality of our proposed protocol with an implementation. For input sets of size $2^{20}$ and using a single thread, our protocol requires 238 seconds to securely compute the set union, regardless of the bit length of the items. Our protocol is amenable to parallelization. Increasing the number of threads from 1 to 32, our protocol requires only 13.1 seconds, a factor of $18.25 \times$ improvement.
To the best of our knowledge, ours is the first protocol that reports on large-size experiments, makes code available, and avoids extensive use of computationally expensive public-key operations. (No PSU code is publicly available for prior work, and the only prior symmetric-key-based work reports on small experiments and focuses on the simpler 3-party, 1-corruption setting.) Our work improves the reported PSU state of the art by a factor of up to $7,600\times$ for large instances.
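A plaintext mock-up of how RPMT composes into PSU (illustrative Python only; in the real protocol both the membership bit and the item transfer are computed obliviously, so neither party sees the other's raw inputs):

```python
def rpmt(x_star, X):
    """Reverse private membership test, functionality only: the receiver
    (holder of X) learns one bit; the sender learns nothing about X."""
    return x_star in X          # in the real protocol this check is oblivious

def psu(sender_set, receiver_set):
    """The receiver ends up with the union and nothing else: for every
    sender item not in X, an oblivious transfer delivers that item."""
    out = set(receiver_set)
    for x in sender_set:
        if not rpmt(x, receiver_set):   # bit goes to the receiver
            out.add(x)                  # item delivered via OT when bit = 0
    return out

print(sorted(psu({1, 2, 3}, {3, 4})))   # -> [1, 2, 3, 4]
```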
Pay To Win: Cheap, Crowdfundable, Cross-chain Algorithmic Incentive Manipulation Attacks on PoW Cryptocurrencies
In this paper we extend the attack landscape of bribing attacks on cryptocurrencies by presenting a new method, which we call Pay-To-Win (P2W). To the best of our knowledge, it is the first approach capable of facilitating double-spend collusion across different blockchains. Moreover, our technique can also be used to specifically incentivize transaction exclusion or (re)ordering. For our construction we rely on smart contracts to render the payment and receipt of bribes trustless for the briber as well as the bribee. Attacks using our approach are operated and financed out-of-band, i.e., on a funding cryptocurrency, while the consequences are induced in a different target cryptocurrency. The main requirement is that smart contracts on the funding cryptocurrency are able to verify the consensus rules of the target. For a concrete instantiation of our P2W method, we choose Bitcoin as the target and Ethereum as the funding cryptocurrency. Our P2W method is designed to reimburse collaborators even in the case of an unsuccessful attack. Interestingly, this actually renders our approach approximately one order of magnitude cheaper than comparable bribing techniques (e.g., the whale attack). We demonstrate the technical feasibility of P2W attacks by publishing all relevant artifacts of this paper, ranging from calculations of success probabilities to a fully functional proof-of-concept implementation, consisting of an Ethereum smart contract and a Python client.
Estimating Gaps in Martingales and Applications to Coin-Tossing: Constructions and Hardness
Consider the representative task of designing a distributed coin-tossing protocol for $n$ processors such that the probability of heads is $X_0\in[0,1]$.
This protocol should be robust to an adversary who can reset one processor to change the distribution of the final outcome.
For $X_0=1/2$, in the information-theoretic setting, no adversary can shift the probability of the outcome of Blum's well-known ``majority protocol'' by more than $\frac1{\sqrt{2\pi n}}$, i.e., it is $\frac1{\sqrt{2\pi n}}$-insecure.
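That bound is easy to check numerically (a sketch under the standard strategy in which the adversary resets its own processor's coin after observing the other $n-1$): the achievable deviation is half the probability that a single coin is pivotal.

```python
from math import comb, sqrt, pi

# Blum's majority protocol with n (odd) processors: the adversary's reset
# changes the outcome only when the other n-1 honest coins split evenly,
# and the honest value it overrides is "wrong" half the time.
for n in [11, 101, 1001, 10001]:
    pivotal = comb(n - 1, (n - 1) // 2) / 2 ** (n - 1)
    print(n, pivotal / 2, 1 / sqrt(2 * pi * n))   # deviation vs. bound
```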
In this paper, we study discrete-time martingales $(X_0,X_1,\dotsc,X_n)$ such that $X_i\in[0,1]$, for all $i\in\{0,\dotsc,n\}$, and $X_n\in\{0,1\}$.
These martingales are commonplace in modeling stochastic processes like coin-tossing protocols in the information-theoretic setting mentioned above.
In particular, for any $X_0\in[0,1]$, we construct martingales that yield $\frac12\sqrt{\frac{X_0(1-X_0)}{n}}$-insecure coin-tossing protocols.
For $X_0=1/2$, our protocol requires only 40\% of the processors to achieve the same security as the majority protocol.
The technical heart of our paper is a new inductive technique that uses geometric transformations to precisely account for the large gaps in these martingales.
For any $X_0\in[0,1]$, we show that there exists a stopping time $\tau$ such that
$$\mathbb{E}[\left\vert X_\tau-X_{\tau-1} \right\vert] \geq \frac2{\sqrt{2n-1}}\cdot X_0(1-X_0)$$
The inductive technique simultaneously constructs martingales that demonstrate the optimality of our bound, i.e., a martingale where the gap corresponding to any stopping time is small.
In particular, we construct optimal martingales such that \textit{ any} stopping time $\tau$ has
$$\mathbb{E}[\left\vert X_\tau-X_{\tau-1} \right\vert] \leq \frac1{\sqrt{n}}\cdot \sqrt{X_0(1-X_0)}$$
Our lower bound holds for all $X_0\in[0,1]$, while the previous bound of Cleve and Impagliazzo (1993) holds only for positive constant $X_0$.
Conceptually, our approach only employs elementary techniques to analyze these martingales and entirely circumvents the complex probabilistic tools inherent to the approaches of Cleve and Impagliazzo (1993) and Beimel, Haitner, Makriyannis, and Omri (2018).
By appropriately restricting the set of possible stopping-times, we present representative applications to constructing distributed coin-tossing/dice-rolling protocols, discrete control processes, fail-stop attacking coin-tossing/dice-rolling protocols, and black-box separations.
Efficient Secure Ridge Regression from Randomized Gaussian Elimination
In this paper we present a practical protocol for secure ridge regression. We develop the necessary secure linear algebra tools, using only basic arithmetic over prime fields. In particular, we will show how to solve linear systems of equations and compute matrix inverses efficiently, using appropriate secure random self-reductions of these problems. The distinguishing feature of our approach is that the use of secure fixed-point arithmetic is avoided entirely, while circumventing the need for rational reconstruction at any stage as well.
We demonstrate the potential of our protocol in a standard setting for information-theoretically secure multiparty computation, tolerating a dishonest minority of passively corrupt parties. Using the MPyC framework, which is based on threshold secret sharing over finite fields, we show how to handle large datasets efficiently, achieving practically the same root-mean-square errors as Scikit-learn. Moreover, we do not assume that any (part) of the datasets is held privately by any of the parties, which makes our protocol much more versatile than existing solutions.
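The flavour of such a random self-reduction, shown in the clear (a sketch of the classic blinding idea with the secret-sharing layer omitted; assumes SymPy): to invert a secret matrix $A$ over a prime field, mask it with a random invertible $R$, publicly open and invert $AR$, and recover $A^{-1} = R\,(AR)^{-1}$.

```python
import random
from sympy import Matrix

p = 2**31 - 1                     # prime field modulus

def random_invertible(k):
    while True:
        R = Matrix(k, k, lambda i, j: random.randrange(p))
        if R.det() % p != 0:      # invertible mod p with high probability
            return R

A = Matrix([[3, 1], [5, 2]])      # stands in for a secret-shared matrix
R = random_invertible(2)          # jointly generated random mask
AR = (A * R).applyfunc(lambda x: x % p)     # this product is opened publicly
A_inv = (R * AR.inv_mod(p)).applyfunc(lambda x: x % p)
assert (A * A_inv).applyfunc(lambda x: x % p) == Matrix.eye(2)
print(A_inv)
```

Opening $AR$ reveals nothing about $A$ beyond invertibility, since $R$ is uniform among invertible matrices; this is what lets the expensive elimination run on public data.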
Mixture Integral Attacks on Reduced-Round AES with a Known/Secret S-Box
In this work, we present new low-data secret-key distinguishers and key-recovery attacks on reduced-round AES.
The starting point of our work is “Mixture Differential Cryptanalysis”, recently introduced at FSE/ToSC 2019, a way to turn the “multiple-of-8” 5-round AES secret-key distinguisher presented at Eurocrypt 2017 into a simpler and more convenient one (though on a smaller number of rounds). By reconsidering this result on a smaller number of rounds, we present as our main contribution a new secret-key distinguisher on 3-round AES with the smallest data complexity in the literature among attacks that do not require adaptive chosen plaintexts/ciphertexts: approximately half the data necessary to set up a 3-round truncated differential distinguisher, which is currently the distinguisher in the literature with the lowest data complexity. For example, for a success probability of 95%, our distinguisher requires just 10 chosen plaintexts, versus the 20 chosen plaintexts necessary to set up the truncated differential one.
Besides that, we present new competitive low-data key-recovery attacks on 3- and 4-round AES, both in the case in which the S-Box is known and in the case in which it is secret.
DDH-based Multisignatures with Public Key Aggregation
A multisignature scheme allows a group of signers to produce a joint signature on a common message, which is more compact than a collection of distinct signatures from all signers. Given this signature and the list of signers' public keys, a verifier is able to check whether every signer in the group participated in signing. Recently, multisignature schemes with public key aggregation have drawn a lot of attention due to their applications in blockchain technology. Such multisignatures provide not only a compact signature but also a compact aggregated public key; that is, both the signature size and the size of the public key used to verify the signature are independent of the number of signers. This is useful for a blockchain because of its duplication over a distributed network, which requires it to be as compact as possible. In this paper, we introduce a new multisignature scheme with such a feature. Our scheme is proven secure under the Decisional Diffie-Hellman assumption. In addition, in the presence of rogue-key attacks, the security of our scheme is proven in the plain public key model.
Practical Attacks on Reduced-Round AES
In this paper we investigate the security of 5-round AES against two different attacks in an adaptive setting. We present a practical key-recovery attack on 5-round AES with a secret S-box that requires $2^{32}$ adaptively chosen ciphertexts, which is, as far as we know, a new record. In addition, we present a new and practical key-independent distinguisher for 5-round AES which requires $2^{27.2}$ adaptively chosen ciphertexts. While the data complexity of this distinguisher is in the same range as that of the current best 5-round distinguisher, it exploits new structural properties of 5-round AES.
Exploiting Determinism in Lattice-based Signatures - Practical Fault Attacks on pqm4 Implementations of NIST candidates
In this paper, we analyze the implementation-level fault vulnerabilities of deterministic lattice-based signature schemes. In particular, we extend the practicality of skip-addition fault attacks by exploiting determinism in certain variants of Dilithium (the deterministic variant) and the qTESLA signature scheme (the originally submitted deterministic version), two leading candidates in the NIST standardization of post-quantum cryptography. We show that single targeted faults injected in the signing procedure allow the attacker to recover an important portion of the secret key. Though faults injected in the signing procedure do not recover all the secret key elements, we propose a novel forgery algorithm that allows the attacker to sign any given message with only the extracted portion of the secret key. We perform experimental validation of our attack using electromagnetic fault injection on reference implementations taken from the pqm4 library, a benchmarking and testing framework for post-quantum cryptographic implementations on the ARM Cortex-M4 microcontroller. We also show that our attacks break two well-known countermeasures known to protect against skip-addition fault attacks. We further propose an efficient mitigation strategy against our attack that exponentially increases the attacker's complexity at almost zero increase in computational complexity.
Distributing any Elliptic Curve Based Protocol
We show how to perform a full-threshold $n$-party actively secure MPC protocol over a subgroup of order $p$ of an elliptic curve group $E(K)$. This is done by utilizing a full-threshold $n$-party actively secure MPC protocol over $\mathbb{F}_p$ in the pre-processing model (such as SPDZ), and then locally mapping the Beaver triples from this protocol into equivalent triples for the elliptic curve. This allows us to transform essentially any one-party protocol over an elliptic curve into an $n$-party one. As an example we show how to transform the shuffle protocol of Abe into an $n$-party protocol. This application requires us to also give an MPC protocol to derive the switches in a Waksman network from a generic permutation, which may be of independent interest.
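A toy version of the local lifting (our illustration, using the order-$p$ subgroup of $\mathbb{Z}_q^*$ as a stand-in for the elliptic curve group, since scalar multiplication $aG$ corresponds to exponentiation $g^a$):

```python
import random

# Toy parameters: g = 2 generates the order-11 subgroup of Z_23^*.
p, q, g = 11, 23, 2

a, b = random.randrange(p), random.randrange(p)
c = a * b % p                                  # Beaver triple: c = ab

def share(x):                                  # additive 2-out-of-2 sharing
    r = random.randrange(p)
    return [r, (x - r) % p]

sa, sb, sc = share(a), share(b), share(c)
# Each party locally lifts its shares of b and c into the group:
gb = [pow(g, s, q) for s in sb]
gc = [pow(g, s, q) for s in sc]
# Reconstruction in the group multiplies the lifted shares:
assert gb[0] * gb[1] % q == pow(g, b, q)       # shares of "bG"
assert gc[0] * gc[1] % q == pow(g, c, q)       # shares of "(ab)G"
print("lifted triple:", pow(g, b, q), pow(g, c, q))
```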
On cryptographic parameters of permutation polynomials of the form $x^rh(x^{(q-1)/d})$
The differential uniformity, the boomerang uniformity, and the extended Walsh spectrum are important parameters for evaluating the security of an S-box (substitution box). In this paper, we introduce efficient formulas to compute these cryptographic parameters for permutation polynomials of the form $x^rh(x^{(q-1)/d})$ over a finite field of $q=2^n$ elements, where $r$ is a positive integer and $d$ is a positive divisor of $q-1$. The computational cost of these formulas is proportional to $d$. We investigate differentially 4-uniform permutation polynomials of the form $x^rh(x^{(q-1)/3})$ and compute their boomerang spectrum and extended Walsh spectrum using the suggested formulas for even $4\le n\le 10$, where $d=3$ is the smallest nontrivial $d$ for even $n$. We also investigate the differential uniformity of some permutation polynomials introduced in recent papers for the case $d=2^{n/2}+1$.
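For reference, the differential uniformity can be computed straight from its definition, $\max_{a\neq 0,\,b}\#\{x : S(x\oplus a)\oplus S(x)=b\}$; the sketch below checks it for the 4-bit PRESENT S-box (an unrelated illustrative example, not one of the paper's polynomials).

```python
def differential_uniformity(S):
    n = len(S)
    best = 0
    for a in range(1, n):                 # nonzero input difference
        counts = [0] * n
        for x in range(n):
            counts[S[x ^ a] ^ S[x]] += 1  # output difference histogram
        best = max(best, max(counts))
    return best

# PRESENT cipher S-box, a standard 4-bit example
S = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
     0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
print(differential_uniformity(S))   # 4, the best achievable for 4-bit permutations
```

The paper's contribution is avoiding exactly this exhaustive $O(q^2)$ scan for the structured class $x^rh(x^{(q-1)/d})$, reducing the cost to be proportional to $d$.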
Complexity of Estimating Renyi Entropy of Markov Chains
Estimating the entropy of random processes is one of the fundamental problems of machine learning and property testing. It has numerous applications, from DNA testing and the predictability of human behaviour to modeling neural activity and cryptography. We investigate the problem of Renyi entropy estimation for sources that form Markov chains.
Kamath and Verdú (ISIT’16) showed that good mixing properties are essential for this task. We show that even with very good mixing time, estimating min-entropy requires $\Omega(K^2)$ samples, while collision entropy requires $\Omega(K^{3/2})$ samples, where $K$ is the size of the alphabet. Our results hold in both the asymptotic and non-asymptotic regimes.
We achieve these results by applying Le Cam’s method to two Markov chains which differ by an appropriately chosen sparse perturbation; the discrepancy between these chains is estimated with the help of perturbation theory. Our techniques might be of independent interest.
SPQCop: Side-channel protected Post-Quantum Cryptoprocessor
The past few decades have seen significant progress in practically realizable quantum technologies. It has been well known since the work of Peter Shor that large-scale quantum computers will threaten the security of most currently used public-key cryptographic algorithms. This has spurred the cryptography community to design algorithms which will remain safe even with the emergence of large-scale quantum computing systems. An effort in this direction is the currently ongoing post-quantum cryptography (PQC) competition, which has led to the design and analysis of many concrete cryptographic constructions. Among these, lattice-based algorithms have emerged as promising candidates. Therefore, we focus on the efficient implementation of Ring-LWE based quantum-safe key-exchange algorithms. Further, the deployment of hardware implementing such algorithms in critical applications requires security against implementation attacks.
In this work, we design a side-channel-resistant post-quantum cryptoprocessor which supports the NewHope-NIST, NewHope-USENIX and HILA5 key-exchange schemes. The implemented cryptoprocessor is highly optimized, with minimal overhead due to the countermeasures. It requires about 13,500 LUTs and 8,100 FFs. Due to a significantly pipelined architecture, an operating speed of 406 MHz could be achieved on the latest 16 nm FPGAs, resulting in key-exchange times of only 158 µs, 157 µs and 148 µs for the above-mentioned designs, respectively. We also present detailed area and performance metrics for the different modules required for all the designs. To the best of our knowledge, this work presents the first side-channel-leakage-resistant post-quantum accelerator. Furthermore, this is also the fastest hardware implementation of NewHope-NIST.
The Adversarial Robustness of Sampling
Random sampling is a fundamental primitive in modern algorithms, statistics, and machine learning, used as a generic method to obtain a small yet ``representative'' subset of the data. In this work, we investigate the robustness of sampling against adaptive adversarial attacks in a streaming setting: An adversary sends a stream of elements from a universe $U$ to a sampling algorithm (e.g., Bernoulli sampling or reservoir sampling), with the goal of making the sample ``very unrepresentative'' of the underlying data stream. The adversary is fully adaptive in the sense that it knows the exact content of the sample at any given point along the stream, and can choose which element to send next accordingly, in an online manner.
Well-known results in the static setting indicate that if the full stream is chosen in advance (non-adaptively), then a random sample of size $\Omega(d / \varepsilon^2)$ is an $\varepsilon$-approximation of the full data with good probability, where $d$ is the VC-dimension of the underlying set system $(U, \mathcal{R})$.
Does this sample size suffice for robustness against an adaptive adversary?
The simplistic answer is \emph{negative}: We demonstrate a set system where a constant sample size (corresponding to a VC-dimension of $1$) suffices in the static setting, yet an adaptive adversary can make the sample very unrepresentative, as long as the sample size is (strongly) sublinear in the stream length, using a simple and easy-to-implement attack.
However, this attack is ``theoretical only'', requiring the set system size to (essentially) be exponential in the stream length. This is not a coincidence: We show that in order to make the sampling algorithm robust against adaptive adversaries, the modification required is solely to replace the VC-dimension term $d$ in the sample size with the cardinality term $\log |\mathcal{R}|$.
That is, the Bernoulli and reservoir sampling algorithms with sample size $\Omega(\log |\mathcal{R}|/\varepsilon^2)$ output a representative sample of the stream with good probability, even in the presence of an adaptive adversary. This nearly matches the bound imposed by the attack.
Fast Correlation Attacks on Grain-like Small State Stream Ciphers and Cryptanalysis of Plantlet, Fruit-v2 and Fruit-80
The fast correlation attack (FCA) is one of the most important cryptanalytic techniques against LFSR-based stream ciphers. At CRYPTO 2018, Todo et al. found a new property of the FCA and proposed a novel algorithm which was successfully applied to the Grain family of stream ciphers. Nevertheless, these techniques cannot be directly applied to Grain-like small-state stream ciphers with keyed update, such as Plantlet, Fruit-v2, and Fruit-80. In this paper, we study the security of Grain-like small-state stream ciphers against the fast correlation attack. We first observe that the number of required parity-check equations can be reduced when there are multiple different parity-check equations. Exploiting the Skellam distribution, we introduce a sufficient condition for identifying the correct LFSR initial state and derive a new relationship between the number and the bias of the required parity-check equations. We then present a modified algorithm based on this new relationship, which can recover the LFSR initial state no matter what the round key bits are. Under the condition that the LFSR initial state is known, we give an algorithm against the degraded system that recovers the NFSR state at some time instant, along with the round key bits.
As case studies, we apply our cryptanalytic techniques to Plantlet, Fruit-v2 and Fruit-80. As a result, for Plantlet our attack takes $2^{73.75}$ time and $2^{73.06}$ keystream bits to recover the full 80-bit key. Regarding Fruit-v2, $2^{55.34}$ time and $2^{55.62}$ keystream bits are taken to determine the secret key. As for Fruit-80, $2^{64.47}$ time and $2^{62.82}$ keystream bits are required to recover the secret key. More flexible attacks can be obtained with lower data complexity at the cost of increased attack time; in particular, for Fruit-v2 a key-recovery attack can be launched with data complexity $2^{42.38}$ and time complexity $2^{72.63}$. Moreover, we have implemented our attack methods on a toy version of Fruit-v2. The attack matches the complexities predicted by our theoretical analysis quite well, which confirms the validity of our cryptanalytic techniques.
Verifiable Computing for Approximate Computation
Verifiable computing (VC) is a complexity-theoretic method to secure the integrity of computations. The need is increasing as more computations are outsourced to untrusted parties, e.g., cloud platforms. Existing techniques, however, only deal with exact computations, without the capability of rounding (e.g., "$1.11 \times 2.22 = 2.4642$" is verifiable, but "$1.11 \times 2.22 \simeq 2.46$" is not). Hence, in a long sequence of calculations (e.g., multiplications), the number of digits of the result keeps increasing and will quickly exceed the precision limit of the underlying system. Because of this limitation, VC is currently missing the opportunity in the whole AI space, where approximate computations are unavoidable.
In pursuit of the vision of verifiable AI computing, a solution that supports the rounding operation is necessary. In this paper, we present an efficient verifiable computing scheme to achieve this. The main idea is to reduce the rounding operation to an efficient arithmetic circuit representation, and to reuse the machinery of the Goldwasser, Kalai, and Rothblum protocol (also known as the GKR protocol), the state-of-the-art interactive proof protocol. Specifically, we shift the algebraic structure from a field to a ring to better deal with the notion of "digits", and generalize the original GKR protocol over a ring. Then, we reduce the rounding operation to a low-degree polynomial over a ring, and present a novel, optimal circuit construction of an arbitrary polynomial to transform the rounding polynomial into an optimal arithmetic circuit representation. Moreover, we further optimize the proof generation cost for rounding by employing a Galois ring. We provide experimental results that show the efficiency of our scheme for approximate computations. For example, our implementation performed two orders of magnitude better than the existing GKR protocol for a nested $128\times 128$ matrix multiplication of depth 12 on 16-bit fixed-point arithmetic.
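The digit-growth problem motivating the paper is easy to see concretely (plain Python, unrelated to the scheme itself): without rounding, each exact multiplication roughly doubles the number of digits that must be carried along.

```python
from fractions import Fraction

x = Fraction(111, 100)          # 1.11
y = Fraction(222, 100)          # 2.22
acc = Fraction(1)
for i in range(1, 5):
    acc *= x * y                # exact arithmetic, as in existing VC schemes
    print(i, acc, "denominator digits:", len(str(acc.denominator)))
# Rounding acc to two decimals after every step would keep the size
# constant -- but that rounding is exactly what prior VC could not verify.
```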
Athena: A verifiable, coercion-resistant voting system with linear complexity
Seminal work by Juels, Catalano & Jakobsson delivered a verifiable, coercion-resistant voting system with quadratic complexity. This manuscript attempts to advance the state of the art by delivering a voting system with equivalent security and linear complexity.
Differential Fault Analysis of NORX
In the recent literature, there has been particular interest in studying nonce-based AE schemes in the light of fault-based attacks, as they seem to offer automatic protection against Differential Fault Attacks (DFA). In this work, we present the first DFA on the nonce-based CAESAR scheme NORX. We demonstrate a scenario in which faults introduced into NORX in parallel mode can be used to collide the internal state to produce an \emph{all-zero} state.
We then show how this can be used to replay NORX despite its being instantiated with different nonces and messages. Once replayed, we show how the key of NORX can be recovered using secondary faults and the faulty tags. We use different fault models to showcase the versatility of the attack strategy. A detailed theoretical analysis of the expected number of faults required under the various models is also furnished. Under the random bit-flip model, around 1384 faults must be induced to reduce the key space from $2^{128}$ to $2^{32}$, while the random byte-flip model requires 136 faults to uniquely identify the key. To the best of our knowledge, this is the first fault attack that uses \emph{both internal} and \emph{classical differentials} to mount a DFA on a nonce-based authenticated cipher otherwise believed to be immune to DFA.
Code Constructions for Physical Unclonable Functions and Biometric Secrecy Systems
The two-terminal key agreement problem with biometric or physical identifiers is considered. Two linear code constructions based on Wyner-Ziv coding are developed. The first construction uses random linear codes and achieves all points of the key-leakage-storage regions of the generated-secret and chosen-secret models. The second construction uses nested polar codes for vector quantization during enrollment and error correction during reconstruction. Simulations show that the nested polar codes achieve privacy-leakage and storage rates that improve on existing code designs. One proposed code achieves a rate tuple that cannot be achieved by existing methods.
Genus 2 Supersingular Isogeny Oblivious Transfer
We present an oblivious transfer scheme that extends the proposal made by Barreto, Oliveira and Benits, based on supersingular isogenies, to the setting of principally polarized supersingular abelian surfaces.
EverCrypt: A Fast, Verified, Cross-Platform Cryptographic Provider
We present EverCrypt: a comprehensive collection of verified, high-performance cryptographic functionalities available via a carefully designed API. The API provably supports agility (choosing between multiple algorithms for the same functionality) and multiplexing (choosing between multiple implementations of the same algorithm). Through abstraction and zero-cost generic programming, we show how agility can simplify verification without sacrificing performance, and we demonstrate how C and assembly can be composed and verified against shared specifications. We substantiate the effectiveness of these techniques with new verified implementations (including hashes, Curve25519, and AES-GCM) whose performance matches or exceeds the best unverified implementations. We validate the API design with two high-performance verified case studies built atop EverCrypt, resulting in line-rate performance for a secure network protocol and a Merkle-tree library, used in a production blockchain, that supports 2.7 million insertions/sec. Altogether, EverCrypt consists of over 124K verified lines of specs, code, and proofs, and it produces over 29K lines of C and 14K lines of assembly code.
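To give a feel for agility and multiplexing (a hypothetical Python analogue; EverCrypt's real API is verified C and assembly generated from F*, and all names below are invented):

```python
# Agility: one "hash" functionality, many algorithms behind it.
# Multiplexing: one algorithm, many implementations, picked by CPU features.
import hashlib

IMPLEMENTATIONS = {
    # algorithm -> list of (predicate on CPU features, implementation);
    # both entries call hashlib here, but in a real provider the first
    # would be vectorized assembly and the second portable C.
    "sha256": [
        (lambda cpu: "shaext" in cpu, lambda m: hashlib.sha256(m).digest()),
        (lambda cpu: True,            lambda m: hashlib.sha256(m).digest()),
    ],
}

def hash_agile(alg, msg, cpu_features=frozenset()):
    """Dispatch on the algorithm (agility), then on the best available
    implementation for this CPU (multiplexing)."""
    for supported, impl in IMPLEMENTATIONS[alg]:
        if supported(cpu_features):
            return impl(msg)
    raise ValueError(f"no implementation of {alg} for this CPU")

print(hash_agile("sha256", b"hello").hex())
```

The verification story in the paper is that both dispatch layers are proven to agree with a single shared specification, so callers get one abstract contract regardless of which branch runs.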
SKIVA: Flexible and Modular Side-channel and Fault Countermeasures
We describe SKIVA, a customized 32-bit processor enabling the design of software countermeasures for a broad range of implementation attacks covering fault injection and side-channel analysis of timing-based and power-based leakage. We design the countermeasures as variants of bitslice programming. Our protection scheme is flexible and modular, allowing us to combine higher-order masking -- fending off side-channel analysis -- with complementary spatial and temporal redundancy -- protecting against fault injection. Multiple configurations of side-channel and fault protection enable the programmer to select the desired number of shares and the desired redundancy level for each slice. Recurring and security-sensitive operations are supported in hardware through a custom instruction set extension. The new instructions support bitslicing, secret-share generation, redundant logic computation, and fault detection. We demonstrate and analyze multiple versions of AES from a side-channel analysis and a fault-injection perspective, in addition to providing a detailed performance evaluation of the protected designs.
Generic Attacks on Hash Combiners
Hash combiners are a practical way to make cryptographic hash functions more tolerant to future attacks and compatible with existing infrastructure. A combiner combines two or more hash functions in a way that is hopefully more secure than each of the underlying hash functions, or at least remains secure as long as one of them is secure. Two classical hash combiners are the exclusive-or (XOR) combiner $H_1(M) \oplus H_2(M)$ and the concatenation combiner $H_1(M) \parallel H_2(M)$. Both of them process the same message using the two underlying hash functions in parallel. Apart from parallel combiners, there are also cascade constructions sequentially calling the underlying hash functions to process the message repeatedly, such as Hash-Twice $H_2(H_1(IV,M),M)$ and the Zipper hash $H_2(H_1(IV,M),\overleftarrow{M})$, where $\overleftarrow{M}$ is the reverse of the message $M$.
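As a rough illustration of these four constructions, here is a minimal Python sketch (ours, not from the paper), using hashlib's SHA-256 and BLAKE2s as stand-ins for $H_1$ and $H_2$; note that the real Hash-Twice and Zipper replace the IV of the second hash and reverse message blocks rather than bytes, which off-the-shelf library hashes cannot express:

import hashlib

def h1(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def h2(data: bytes) -> bytes:
    return hashlib.blake2s(data).digest()   # stand-in for a second, independent hash

def xor_combiner(msg: bytes) -> bytes:
    # H1(M) XOR H2(M): both hashes process the same message in parallel.
    return bytes(x ^ y for x, y in zip(h1(msg), h2(msg)))

def concat_combiner(msg: bytes) -> bytes:
    # H1(M) || H2(M): the classical 2n-bit concatenation combiner.
    return h1(msg) + h2(msg)

def hash_twice(msg: bytes) -> bytes:
    # H2(H1(IV, M), M): the first digest should become the second hash's IV;
    # library hashes cannot set the IV, so we approximate by prefixing it.
    return h2(h1(msg) + msg)

def zipper(msg: bytes) -> bytes:
    # H2(H1(IV, M), reverse(M)): the real Zipper reverses message *blocks*;
    # byte reversal is used here purely for illustration.
    return h2(h1(msg) + msg[::-1])

m = b"attack at dawn"
for f in (xor_combiner, concat_combiner, hash_twice, zipper):
    print(f.__name__, f(m).hex())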
In this work, we study the security of these hash combiners by devising the best-known generic attacks. The results show that the security of most of the combiners is not as high as commonly believed. We summarize our attacks and their computational complexities (ignoring the polynomial factors) as follows:
1. Several generic preimage attacks on the XOR combiner:
-- A first attack with a best-case complexity of $2^{5n/6}$, obtained for messages of length $2^{n/3}$. It relies on a novel technical tool named the Interchange Structure. It is applicable to combiners whose underlying hash functions follow the Merkle-Damgård construction or the HAIFA framework.
-- A second attack with a best-case complexity of $2^{2n/3}$, obtained for messages of length $2^{n/2}$. It exploits properties of functional graphs of random mappings. It achieves a significant improvement over the first attack but is only applicable when the underlying hash functions use the Merkle-Damgård construction.
-- An improvement upon the second attack with a best-case complexity of $2^{5n/8}$, obtained for messages of length $2^{5n/8}$. It further exploits properties of functional graphs of random mappings and uses longer messages.
These attacks show a rather surprising result: regarding preimage resistance, the sum of two $n$-bit narrow-pipe hash functions following the considered constructions can never provide $ n $-bit security.
2. A generic second-preimage attack on the concatenation combiner of two Merkle-Damgård hash functions. This attack finds second preimages faster than $2^n$ for challenges longer than $2^{2n/7}$ and has a best-case complexity of $2^{3n/4}$, obtained for challenges of length $2^{3n/4}$. It also exploits properties of functional graphs of random mappings.
3. The first generic second-preimage attack on the Zipper hash with underlying hash functions following the Merkle-Damgård construction. The best-case complexity is $2^{3n/5}$, obtained for challenge messages of length $2^{2n/5}$.
4. An improved generic second-preimage attack on Hash-Twice with underlying hash functions following the Merkle-Damgård construction. The best-case complexity is $2^{13n/22}$, obtained for challenge messages of length $2^{13n/22}$.
The last three attacks show that regarding second-preimage resistance, the concatenation and cascade of two $n$-bit narrow-pipe Merkle-Damgård hash functions do not provide much more security than can be provided by a single $n$-bit hash function.
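To make these exponents concrete, a small worked illustration (our arithmetic, not from the paper) for $n = 128$, where ideal security would be $2^{128}$: the three XOR-combiner preimage attacks cost about $2^{5n/6} \approx 2^{106.7}$, $2^{2n/3} \approx 2^{85.3}$ and $2^{5n/8} = 2^{80}$, while the second-preimage attacks cost $2^{3n/4} = 2^{96}$ (concatenation), $2^{3n/5} \approx 2^{76.8}$ (Zipper) and $2^{13n/22} \approx 2^{75.6}$ (Hash-Twice).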
Our main technical contributions include the following:
1. The interchange structure, which enables simultaneously controlling the behaviours of two hash computations sharing the same input.
2. The simultaneous expandable message, a set of messages whose lengths cover a whole appropriate range and which form a multi-collision for both of the underlying hash functions.
3. New ways to exploit the properties of functional graphs of random mappings generated by fixing the message block input to the underlying compression functions.
Is it Easier to Prove Theorems that are Guaranteed to be True?
Consider the following two fundamental open problems in complexity theory: (a) Does a hard-on-average language in NP imply the existence of one-way functions? (b) Does a hard-on-average language in NP imply a hard-on-average problem in TFNP (i.e., the class of total NP search problems)? Our main result is that the answer to (at least) one of these questions is yes.
Both one-way functions and problems in TFNP can be interpreted as promise-true distributional NP search problems---namely, distributional search problems where the sampler only samples true statements. As a direct corollary of the above result, we thus get that the existence of a hard-on-average distributional NP search problem implies a hard-on-average promise-true distributional NP search problem. In other words, "it is no easier to find witnesses (a.k.a. proofs) for efficiently-sampled statements (theorems) that are guaranteed to be true."
This result follows from a more general study of interactive puzzles---a generalization of average-case hardness in NP---and in particular, a novel round-collapse theorem for computationally-sound protocols, analogous to Babai-Moran's celebrated round-collapse theorem for information-theoretically sound protocols. As another consequence of this treatment, we show that the existence of $O(1)$-round public-coin non-trivial arguments (i.e., argument systems that are not proofs) implies the existence of a hard-on-average problem in NP/poly.
Design of Anonymous Endorsement System in Hyperledger Fabric
Permissioned blockchains have become quite popular with enterprises forming consortia, since they prioritize trust over privacy. One of the popular platforms for distributed ledger solutions, Hyperledger Fabric, requires a transaction to be endorsed or approved by a group of special members known as endorsers before undergoing validation. To endorse a transaction, an endorser attaches its identity along with the signature so that it can be verified later. However, for certain transactions, differences in opinion may exist among endorsers. Disclosing the identity of an endorser may lead to conflict within the consortium. In such cases, an endorsement policy which not only allows an endorser to support a transaction discreetly, but at the same time takes into account the decision of the majority, is preferred. We therefore propose an Anonymous Endorsement System which uses a threshold endorsement policy to address this issue. To realize a t-out-of-n endorsement policy, using any of the existing threshold ring signature schemes for our endorsement system would violate the privacy of the endorsers, since either the identity or the secret key of each endorser is revealed to the party that recombines the signature after collecting the signature shares. All these factors motivated us to design a new ring signature scheme, called Fabric's Constant-Sized Linkable Ring Signature (FCsLRS) with transaction-oriented linkability, for hiding the identity of the endorsers. We have implemented the signature scheme in Golang and analyzed its security and performance by varying the RSA (Rivest-Shamir-Adleman) modulus size. Feasibility of implementation is supported by experimental analysis. Signature and tag generation time is quite fast and remains constant irrespective of changes in message length or endorsement set size for a given RSA modulus value, assuming all the endorsers generate their signatures in parallel. Each verifier is required to count and check each individual valid ring signature; if the aggregate is above the threshold value stated by the endorsement policy, the transaction is confirmed as valid. This increases the verification time depending on the threshold value, but has very little effect on scalability, since generally $t \ll n$. Lastly, we also discuss the integration of the scheme with Hyperledger Fabric v1.2.
Fact and Fiction: Challenging the Honest Majority Assumption of Permissionless Blockchains
Honest majority is the key security assumption of Proof-of-Work (PoW) based blockchains. However, the recent 51% attacks render this assumption unrealistic in practice. In this paper, we challenge this assumption with respect to rational miners in real-world PoW-based blockchains. In particular, we show that the current incentive mechanism may encourage rational miners to launch 51% attacks in two cases. In the first case, a miner of a stronger blockchain launches 51% attacks on a weaker blockchain that shares the same mining algorithm. In the second case, a miner rents mining power from cloud mining services to launch 51% attacks. As 51% attacks enable double-spending, the miner can profit from both attacks. If such double-spending is more profitable than mining, miners are more inclined to launch 51% attacks than to mine honestly.
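As a back-of-the-envelope illustration (our simplification; the Markov Decision Process modelled below captures far more of the dynamics), a rented 51% attack is rational roughly when $V + R_{\mathrm{blocks}} > C_{\mathrm{rent}}$, where $V$ is the double-spent value, $C_{\mathrm{rent}}$ the cost of renting majority mining power for the attack window, and $R_{\mathrm{blocks}}$ the block rewards collected on the attacker's chain.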
We formally model such behaviours as a series of actions through a Markov Decision Process. Our results show that, for most mainstream PoW-based blockchains, 51% attacks are feasible and profitable, so profit-driven miners are incentivised to launch 51% attacks to gain extra profit. In addition, we leverage our model to investigate the recent 51% attack on Ethereum Classic (on 07/01/2019), which is suspected to be an incident of 51% attacks. We provide insights on the attacker strategy and expected revenue, and show that the attacker’s strategy is near-optimal.
Discrete logarithms in quasi-polynomial time in finite fields of fixed characteristic
We prove that the discrete logarithm problem can be solved in quasi-polynomial expected time in the multiplicative group of finite fields of fixed characteristic. More generally, we prove that it can be solved in the field of cardinality $p^n$ in expected time $(pn)^{2\log_2(n) + O(1)}$.
A Short Note on a Weight Probability Distribution Related to SPNs
We report on a simple technique that supports some recent developments on AES by Grassi and Rechberger and by Bao, Guo and List. We construct a weight transition probability matrix related to AES that characterises fixed configurations of active bytes in differences of ciphertexts when plaintext differences are fixed to some (possibly other) configuration of active bytes. The construction is very simple and requires only a little linear algebra. The derived probabilities are essentially identical to recent results on 5- and 6-round AES derived through more sophisticated means, indicating that the technique might be worth further investigation.
The privacy of the TLS 1.3 protocol
TLS (Transport Layer Security) is a widely deployed protocol that plays a vital role in securing Internet traffic. Given the numerous known attacks on TLS 1.2, it was imperative to change and even redesign the protocol in order to address them. In August 2018, a new version of the protocol, TLS 1.3, was standardized by the IETF (Internet Engineering Task Force). TLS 1.3 not only benefits from stronger security guarantees, but aims to protect the identities of the server and client by encrypting messages as soon as possible during the authentication. In this paper, we model the privacy guarantees of TLS 1.3 when parties execute a full handshake or use session resumption, covering all the handshake modes of TLS. We build our privacy models on top of the one defined by Hermans et al. for RFIDs (Radio Frequency Identification Devices), which mostly targets authentication protocols. The enhanced models share similarities with the Bellare-Rogaway AKE (Authenticated Key Exchange) security model and consider adversaries that can compromise both types of participants in the protocol. Modeling session resumption is particularly non-trivial, given that session resumption tickets are essentially state transmitted from one session to another, and such a link reveals information about the parties. On the positive side, we prove that TLS 1.3, unlike TLS 1.2, protects the privacy of its users against passive adversaries and even against more powerful ones.
Temporary Censorship Attacks in the Presence of Rational Miners
Smart contracts allow for the exchange of coins according to programmatic rules. While it is well known that so-called bribery contracts can influence the incentive mechanism of a Nakamoto-style consensus, we present a more fine-grained bribery attack incentivizing temporary censorship of a specific account. To this end, we introduce three different bribery contracts on the blockchain, each of which uniquely manipulates the rewards that a rational miner would receive. Additionally, we formalize the established bribery mechanisms as a Markov game and show, for each game, the existence of equilibria leading to successful censorship. Finally, we compare the bribery mechanisms with respect to the scalability of the attack costs and strategic dominance. Our work is motivated by off-chain protocols, including payment and state channels, which require transactions to be published within a certain amount of time. In such off-chain protocols, a temporary censorship attack can result in significant financial damage.
Efficient Lattice-Based Zero-Knowledge Arguments with Standard Soundness: Construction and Applications
We provide new zero-knowledge argument of knowledge systems that work directly for a wide class of languages, namely, ones involving the satisfiability of matrix-vector relations and integer relations commonly found in constructions of lattice-based cryptography. Prior to this work, practical arguments for lattice-based relations either had a constant soundness error (2/3), or considered a weaker form of soundness, namely, extraction only guarantees that the prover is in possession of a witness that “approximates” the actual witness. Our systems do not suffer from these limitations.
The core of our new argument systems is an efficient zero-knowledge argument of knowledge of a solution to a system of linear equations, where the variables of this solution satisfy a set of quadratic constraints. This argument enjoys standard soundness, a small soundness error (1/poly), and a complexity linear in the size of the solution. Using our core argument system, we construct highly efficient argument systems for a variety of statements relevant to lattices, including linear equations with short solutions and matrix-vector relations with hidden matrices.
Based on our argument systems, we present several new constructions of common privacy-preserving primitives in the standard lattice setting, including a group signature, a ring signature, an electronic cash system, and a range proof protocol. Our new constructions are one to three orders of magnitude more efficient than the state of the art (in the standard lattice setting). This illustrates the efficiency and expressiveness of our argument system.
Public-Key Function-Private Hidden Vector Encryption (and More)
We construct public-key function-private predicate encryption for the ``small superset functionality,'' recently introduced by Beullens and Wee (PKC 2019). This functionality captures several important classes of predicates:
- Point functions. For point function predicates, our construction is equivalent to public-key function-private anonymous identity-based encryption.
- Conjunctions. If the predicate computes a conjunction, our construction is a public-key function-private hidden vector encryption scheme. This addresses an open problem posed by Boneh, Raghunathan, and Segev (ASIACRYPT 2013).
- $d$-CNFs and read-once conjunctions of $d$-disjunctions for constant-size $d$.
Our construction extends the group-based obfuscation schemes of Bishop et al. (CRYPTO 2018), Beullens and Wee (PKC 2019), and Bartusek et al. (EUROCRYPT 2019) to the setting of public-key function-private predicate encryption. We achieve an average-case notion of function privacy, which guarantees that a decryption key $sk_f$ reveals nothing about $f$ as long as $f$ is drawn from a distribution with sufficient entropy. We formalize this security notion as a generalization of the (enhanced) real-or-random function privacy definition of Boneh, Raghunathan, and Segev (CRYPTO 2013). Our construction relies on bilinear groups, and we prove security in the generic bilinear group model.
Efficient Perfectly Sound One-message Zero-Knowledge Proofs via Oracle-aided Simulation
In this paper we put forth new efficient one-message proof systems for several practical applications, such as proving that an El Gamal ciphertext (over a multiplicative group) decrypts to a given value and proving the correctness of a shuffle. Our proof systems are built from multiplicative groups of hidden order, are not based on any setup/trust assumption such as the RO or the common reference string model, and are perfectly sound; that is, they are written proofs in the sense of mathematics.
Our proof systems satisfy a generalization of zero-knowledge (ZK) that we call harmless zero-knowledge (HZK).
The simulator of an $O$-HZK proof for a relation over a language $L$ is given the additional capability of invoking an oracle $O$ relative to which $L$ is hard to decide. That is, the proof does not leak any knowledge that an adversary might not compute by itself interacting with an oracle $O$ that does not help to decide the language.
Unlike for ZK, non-interactivity and perfect soundness do not contradict HZK, and HZK can replace ZK in any application in which, essentially, the computational assumptions used in the application hold even against adversaries with access to $O$. An $O$-HZK proof is witness hiding (WH) for distributions hard against adversaries with access to $O$, and strong-WI when quantifying over distributions that are indistinguishable by adversaries with access to $O$. Moreover, an $O$-HZK proof is witness indistinguishable (and this property does not depend on the oracle).
We provide a specific oracle DHInvO that is powerful enough to make our main proof systems DHInvO-HZK and yet not trivial: indeed, we show concrete and practical cryptographic protocols that can be proven secure by employing a DHInvO-HZK proof in the reduction, and that are instead not achievable using traditional ZK (unless one resorts to the CRS/RO models).
Efficient one-message proof systems with perfect soundness were only known for relations over bilinear groups and were proven only witness indistinguishable.
As a byproduct, we also obtain a perfectly sound non-interactive ZAP, WH and HZK proof system for $NP$ relations from number-theoretic assumptions over multiplicative groups of hidden order. No non-interactive WH proof system for $NP$ (nor for simpler non-trivial relations) was previously known.
Privacy-Preserving Classification of Personal Text Messages with Secure Multi-Party Computation: An Application to Hate-Speech Detection
Classification of personal text messages has many useful applications in surveillance, e-commerce, and mental health care, to name a few. Giving applications access to personal texts can easily lead to (un)intentional privacy violations. We propose the first privacy-preserving solution for text classification that is provably secure. Our method, which is based on Secure Multiparty Computation (SMC), encompasses both feature extraction from texts, and subsequent classification with logistic regression and tree ensembles. We prove that when using our secure text classification method, the application does not learn anything about the text, and the author of the text does not learn anything about the text classification model used by the application beyond what is given by the classification result itself. We perform end-to-end experiments with an application for detecting hate speech against women and immigrants, demonstrating excellent runtime results without loss of accuracy.
Lattice-Based Remote User Authentication from Reusable Fuzzy Signature
In this paper, we introduce a new construction of lattice-based reusable fuzzy signature for remote user authentication that is secure against quantum computers. We define formal security models for the proposed construction, and we prove that it can achieve user authenticity, biometrics reusability and user privacy. In particular, the proposed new construction ensures that: 1) biometrics reusability is achieved such that fuzzy signatures remain secure even when the same biometrics is reused multiple times; 2) a third party having access to the communication channel between an authorized user and the authentication server cannot identify the authorized user.
Vulnerability Analysis of a Soft Core Processor through Fine-grain Power Profiling
Embedded microprocessors are an important component of reconfigurable architectures. Fine-grain (e.g., cycle-accurate) power analysis of such processors has been used to improve power and energy efficiency, and detect implementation vulnerabilities, in embedded applications. However, such analysis is difficult to conduct; it requires either specialized and often expensive equipment, or construction of test architectures using disparate acquisition and analysis tools. In this research, we expand the Flexible Open-source workBench fOr Side-channel analysis (FOBOS) to facilitate exact time-domain correlation of clock cycle and device state to power measurements, and to perform power analysis on a soft core processor. We first validate the fine-grain power analysis capabilities of FOBOS through cycle-accurate analysis of power consumption of AES encryption running on a soft core processor in the Spartan-6 FPGA. We then analyze the results in the context of Simple Power Analysis side-channel attacks, and confirm power correlation of certain instructions with Hamming Weight or Hamming Distance of secret key bytes. Finally, we show that an assumption of a pure Hamming Distance power model for load-to-register instructions is not sufficient for this embedded processor architecture, and that power models using both Hamming Distance and Hamming Weight should be considered for Differential Power Analysis.
Comprehensive Security Analysis of CRAFT
CRAFT is a lightweight block cipher, designed to provide efficient protection against differential fault attacks. It is a tweakable cipher that includes 32 rounds to produce a ciphertext from a 64-bit plaintext using a 128-bit key and 64-bit public tweak.
In this paper, compared to the designers' analysis, we provide a more detailed analysis of CRAFT against differential and zero-correlation cryptanalysis, aiming to provide better distinguishers for reduced rounds of the cipher. Our distinguishers for reduced-round CRAFT cover a higher number of rounds compared to the designers' analysis. In our analysis, we observed that, for any number of rounds, the differential effect of CRAFT has a much higher probability than any single differential trail. As an example, while the best trail for 11 rounds of the cipher has a probability of at least $2^{-80}$, we present an 11-round differential with probability $2^{-49.79}$, containing $2^{29.66}$ optimal trails, all with the same optimum probability of $2^{-80}$. Next, we use a partitioning technique based on optimal expandable truncated trails to provide a better estimation of the differential effect on CRAFT. Thanks to this technique, we are able to find differential distinguishers for 9, 10, 11, 12, 13, and 14 rounds of the cipher in the single-tweak model with probabilities of at least $2^{-40.20}$, $2^{-45.12}$, $2^{-49.79}$, $2^{-54.49}$, $2^{-59.13}$, and $2^{-63.80}$, respectively. These probabilities should be compared with the best distinguishers provided by the designers in the same model for 9 and 10 rounds of the cipher, with probabilities of at least $2^{-54.67}$ and $2^{-62.61}$, respectively.
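As a quick sanity check on the reported differential effect (our arithmetic), summing only the optimal trails already accounts for most of the probability: $2^{29.66} \cdot 2^{-80} = 2^{-50.34} \le 2^{-49.79}$, so the remaining factor of $2^{0.55}$ is contributed by the sub-optimal trails contained in the same differential.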
In addition, we consider the security of CRAFT against the new concept of related-tweak zero-correlation (ZC) linear cryptanalysis and present a new distinguisher covering 14 rounds of the cipher, while the best previous ZC distinguisher covered 13 rounds. Thanks to the related-tweak ZC distinguisher for 14 rounds, we also present 14-round integral distinguishers in the related-tweak mode of the cipher. Although the provided analysis does not compromise the cipher, we believe it provides better insight into the design of CRAFT.
A Secure Publish/Subscribe Protocol for Internet of Things
The basic concept behind the emergence of the Internet of Things (IoT) is to connect as many objects to the Internet as possible in an attempt to make our lives better in some way. However, connecting everyday objects like your car or house to the Internet can open up major security concerns. In this paper, we present a novel security framework for the Message Queuing Telemetry Transport (MQTT) protocol, based on publish/subscribe messages, in order to enhance secure and privacy-friendly Internet of Things services. MQTT has burst onto the IoT scene in recent years due to its lightweight design and the ease of implementation necessary for IoT. Our proposed solution provides three security levels. The first security level suits lightweight data exchanges of non-tampered messages. The second security level enhances the privacy protection of data sources and data receivers. The third security level offers robust long-term security with mutual authentication for all parties. The security framework is based on lightweight cryptographic schemes in order to be suitable for the constrained and small devices that are widely used in various IoT use cases. Moreover, our solution is tailored to MQTT without adding security overhead.
A Survey on Authenticated Encryption -- ASIC Designer's Perspective
Authenticated encryption (AE) has been a vital operation in cryptography due to its ability to provide confidentiality, integrity, and authenticity at the same time. Its use has soared in parallel with the widespread use of the Internet and has led to several new schemes. There have been studies investigating the software performance of various schemes. However, the same is yet to be done for hardware. We present a comprehensive survey of the hardware (specifically ASIC) performance of the most commonly used AE schemes in the literature. These schemes include the encrypt-then-MAC combination, block cipher based AE modes, and the recently-introduced permutation-based AE scheme. For completeness, we implemented each scheme with various standardized block ciphers and/or hash algorithms, and their lightweight versions. Our evaluation targets minimizing the time-area product while maximizing the throughput on an ASIC platform. We used the 45 nm NANGATE Open Cell Library for synthesis. We present area, speed, time-area product, throughput, and power figures for both standard and lightweight versions of each scheme. We also provide an unbiased discussion of the impact of the structure and complexity of each scheme on hardware implementation. Our results reveal a 13-30% performance boost in permutation-based AE compared to conventional schemes, and they can be used as a benchmark in the ongoing AE competition CAESAR.
Scrutinizing the Tower Field Implementation of the $\mathbb{F}_{2^8}$ Inverter -- with Applications to AES, Camellia, and SM4
The tower field implementation of the $\mathbb{F}_{2^8}$ inverter is not only the key technique for compact implementations of the S-boxes of several internationally standardized block ciphers such as AES, Camellia, and SM4, but also the underlying structure many side-channel attack resistant AES implementations rely on. In this work, we conduct an exhaustive study of the tower field representations of the $\mathbb{F}_{2^8}$ inverter with normal bases by applying several state-of-the-art combinatorial logic minimization techniques. As a result, we achieve improved implementations of the AES, Camellia and SM4 S-boxes in terms of area footprint. Surprisingly, we are still able to improve the currently known most compact implementation of the AES S-box from CHES 2018 by 5.5 GE, beating the record again (Excluding this work, the latest improvement was proposed at CHES 2018, which achieves 11.75 GE improvement over the optimal implementation at the time). For Camellia and SM4, the improvements are even more significant. The Verilog codes of our implementations of the AES, Camellia and SM4 S-boxes are openly available.
Highly Efficient Key Exchange Protocols with Optimal Tightness -- Enabling real-world deployments with theoretically sound parameters
In this paper we give nearly tight reductions for modern implicitly authenticated Diffie-Hellman protocols in the style of the Signal and Noise protocols, which are extremely simple and efficient. Unlike previous approaches, the combination of nearly tight proofs and efficient protocols enables the first real-world instantiations for which the parameters can be chosen in a theoretically sound manner, i.e., according to the bounds of the reductions. Specifically, our reductions have a security loss which is only linear in the number of users $\mu$ and constant in the number of sessions per user $\ell$. This is much better than most other key exchange proofs, which are typically quadratic in the product $\mu \ell$. Combined with the simplicity of our protocols, this implies that our protocols are more efficient than the state of the art when soundly instantiated.
We also prove that our security proofs are optimal: a linear loss in the number of users is unavoidable for our protocols for a large and natural class of reductions.
Onion Ring ORAM: Efficient Constant Bandwidth Oblivious RAM from (Leveled) TFHE
Oblivious RAM (ORAM) is a cryptographic primitive that allows a client to hide its access pattern to data encrypted and stored at a remote server. Traditionally, ORAM algorithms assume the server acts purely as a storage device. Under this assumption, ORAM has at least $\log(N)$ bandwidth blowup for $N$ data entries. After three decades of improvements, ORAM algorithms have reached the optimal logarithmic bandwidth blowup. Nonetheless, in many practical use-cases, a constant bandwidth overhead is desirable. To this purpose, Devadas et al. (TCC 2016) formalized the server computation model for ORAM and proposed Onion ORAM, which relies on homomorphic computation to achieve constant worst-case bandwidth blowup. This line of work is generally believed to be purely theoretical, due to the large overheads of homomorphic computation. In this paper, we present Onion Ring ORAM, the first efficient constant-bandwidth ORAM scheme in the single-server model, based on the Onion ORAM construction and the leveled version of the TFHE scheme by Chillotti et al. We propose a series of improvements, most notably including a more efficient homomorphic permutation protocol. We implement Onion Ring ORAM and show that it can outperform state-of-the-art logarithmic-bandwidth ORAMs like Path ORAM and Ring ORAM when network throughput is limited. Under one setting, our construction reduces the monetary cost per access by 40\% and end-to-end latency by 35\% over Ring ORAM.
SoK of Used Cryptography in Blockchain
The underlying fundaments of blockchain are cryptography and cryptographic concepts that provide reliable and secure decentralized solutions. Although many recent papers study the use-cases of blockchain in different industrial areas, such as finance, health care, legal relations, IoT, information security, and consensus building systems, only a few studies scrutinize the cryptographic concepts used in blockchain. To the best of our knowledge, there is no Systematization of Knowledge (SoK) that gives a complete picture of the existing cryptographic concepts which have been deployed or have the potential to be deployed in blockchain. In this paper, we thoroughly review and systematize all cryptographic concepts which are already used in blockchain. Additionally, we give a list of cryptographic concepts which have not yet been applied but have great potential to improve current blockchain solutions. We also include possible instantiations of these cryptographic concepts in the blockchain domain. Last but not least, we explicitly postulate 21 challenging problems that cryptographers interested in blockchain can work on.
From Usability to Secure Computing and Back Again
Secure multi-party computation (MPC) allows multiple parties to jointly compute the output of a function while preserving the privacy of any individual party's inputs to that function. As MPC protocols transition from research prototypes to real-world applications, the usability of MPC-enabled applications is increasingly critical to their successful deployment and wide adoption.
Our Web-MPC platform, designed with a focus on usability, has been deployed for privacy-preserving data aggregation initiatives with the City of Boston and the Greater Boston Chamber of Commerce. After building and deploying an initial version of this platform, we conducted a heuristic evaluation to identify additional usability improvements and implemented corresponding application enhancements. However, it is difficult to gauge the effectiveness of these changes within the context of real-world deployments using traditional web analytics tools without compromising the security guarantees of the platform. This work consists of two contributions that address this challenge: (1) the Web-MPC platform has been extended with the capability to collect web analytics using existing MPC protocols, and (2) this capability has been leveraged to conduct a usability study comparing the two versions of Web-MPC (before and after the heuristic evaluation and associated improvements).
While many efforts have focused on ways to enhance the usability of privacy-preserving technologies, this study can serve as a model for using a privacy-preserving data-driven approach in evaluating or enhancing the usability of privacy-preserving websites and applications deployed in real-world scenarios. The data collected in this study yields insights about the interplay between usability and security that can help inform future implementations of applications that employ MPC.
Compressible FHE with Applications to PIR
Homomorphic encryption (HE) is often viewed as impractical, both in communication and computation. Here we provide an additively homomorphic encryption scheme based on (ring) LWE with nearly optimal rate ($1-\epsilon$ for any $\epsilon>0$). Moreover, we describe how to compress many FHE ciphertexts that may have come from a homomorphic evaluation (e.g., of the Gentry-Sahai-Waters (GSW) scheme), into fewer high-rate ciphertexts.
Using our high-rate HE scheme, we are able for the first time to describe a single-server private information retrieval (PIR) scheme with sufficiently low computational overhead so as to be practical for large databases. Single-server PIR inherently requires the server to perform at least one bit operation per database bit, and we describe a rate-(4/9) scheme whose computation is not much worse than this inherent lower bound. In fact it is probably faster than whole-database AES encryption -- specifically, under 1.8 mod-$q$ multiplications per database byte, where $q$ is about 50 to 60 bits.
Asymptotically, the computational overhead of our PIR scheme is $\tilde{O}(\log \log \lambda + \log \log \log N)$, where $\lambda$ is the security parameter and $N$ is the number of database files, which are assumed to be sufficiently large.
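For scale, a worked figure (our arithmetic, from the stated per-byte cost): answering one query over a 1 GB database, i.e. $2^{30}$ bytes, takes under $1.8 \times 2^{30} \approx 2^{30.85}$ mod-$q$ multiplications, each on operands of roughly 50 to 60 bits.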
Fully Homomorphic NIZK and NIWI Proofs
In this work, we define and construct fully homomorphic non-interactive zero knowledge (FH-NIZK) and non-interactive witness-indistinguishable (FH-NIWI) proof systems.
We focus on the NP complete language $L$, where, for a boolean circuit $C$ and a bit $b$, the pair $(C,b) \in L$ if there exists an input $w$ such that $C(w)=b$. For this language, we call a non-interactive proof system 'fully homomorphic' if, given instances $(C_i,b_i) \in L$ along with their proofs $\Pi_i$, for $i \in \{1,\ldots,k\}$, and given any circuit $D:\{0,1\}^k \rightarrow \{0,1\}$, one can efficiently compute a proof $\Pi$ for $(C^*,b) \in L$, where $C^*(w^{(1)}, \ldots,w^{(k)})=D(C_1(w^{(1)}),\ldots,C_k(w^{(k)}))$ and $D(b_1,\ldots,b_k)=b$. The key security property is 'unlinkability': the resulting proof $\Pi$ is indistinguishable from a fresh proof of the same statement.
Our first result, under the Decision Linear Assumption (DLIN), is an FH-NIZK proof system for L in the common random string model. Our more surprising second result (under a new decisional assumption on groups with bilinear maps) is an FH-NIWI proof system that requires no setup.
On the Complexity of ``Superdetermined'' Minrank Instances
The Minrank (MR) problem is a computational problem closely related to attacks on code- and multivariate-based schemes.
In this paper we revisit the so-called Kipnis-Shamir (KS) approach to this problem. We extend previous complexity analysis by exposing non-trivial syzygies through the analysis of the Jacobian of the resulting system, with respect to a group of variables. We focus on a particular set of instances that yield a very overdetermined system which we refer to as ``superdetermined''. We provide a tighter complexity estimate for such instances and discuss its implications for the key recovery attack on some multivariate schemes. For example, in HFE the speedup is roughly a square root.
PQDH: A Quantum-Safe Replacement for Diffie-Hellman based on SIDH
We present a post-quantum key agreement scheme that does not require distinguishing between the initiator and the responder. This scheme is based on elliptic curve isogenies and can be viewed as a variant of the well-known SIDH protocol. Then, we present an efficient countermeasure against a side-channel attack that applies to both the static and ephemeral versions of SIDH and of our scheme. Finally, we show how to obtain an isogeny-based password-authenticated key exchange protocol from our scheme by applying a construction based on SIDH. Summaries of security and computational complexity are also presented.
Linear Complexity of a Family of Binary $pq^2$-Periodic Sequences from Euler Quotients
We first introduce a family of binary $pq^2$-periodic sequences based on the Euler quotients modulo $pq$, where $p$ and $q$ are two distinct odd primes and $p$ divides $q - 1$. The minimal polynomials and linear complexities are determined for the proposed sequences provided that $2^{q-1} \not\equiv 1 \pmod{q^2}$. The results show that the proposed sequences have high linear complexities.
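For concreteness, here is a toy computation of Euler quotients modulo $pq$ in Python (our sketch; the paper's binary sequences are defined over the full period $pq^2$ and its exact binarization is not reproduced here):

from math import gcd

def euler_quotient(u: int, p: int, q: int) -> int:
    # Q(u) = (u^phi(n) - 1)/n mod n for n = p*q and gcd(u, n) = 1.
    n = p * q
    assert gcd(u, n) == 1
    phi = (p - 1) * (q - 1)
    # Euler's theorem gives u^phi = 1 (mod n), so n divides u^phi - 1;
    # reducing mod n^2 preserves the quotient modulo n.
    return (pow(u, phi, n * n) - 1) // n % n

p, q = 3, 7                      # tiny primes with p | q - 1, as required
n = p * q
quotients = [euler_quotient(u, p, q) for u in range(1, n) if gcd(u, n) == 1]
print(quotients)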
Verifying Solutions to LWE with Implications for Concrete Security
A key step in Regev's (2009) reduction of the Discrete Gaussian Sampling (DGS) problem to that of solving the Learning With Errors (LWE) problem is a statistical test required for verifying possible solutions to the LWE problem. In this work, we work out a concrete lower bound on the success probability and its effect in determining an upper bound on the tightness gap of the reduction. The success probability is determined by the value of the rejection threshold $t$ of the statistical test. Using a particular value of $t$, Regev showed that asymptotically, the success probability of the test is exponentially close to one for all values of the LWE error $\alpha\in(0,1)$. From the concrete analysis point of view, the value of the rejection threshold used by Regev is sub-optimal: it requires the lattice dimension to be as high as 400000 to obtain a somewhat meaningful tightness gap. We show that using a different value of the rejection threshold and taking $\alpha$ to be at most $1/\sqrt{n}$ results in the success probability going to 1 for small values of the lattice dimension. Consequently, our work shows that it may be necessary to modify the parameter values used in an asymptotic analysis to obtain much improved and meaningful concrete security.
Iterative Differential Characteristic of TRIFLE-BC
TRIFLE is a Round 1 candidate of the NIST Lightweight Cryptography Standardization process. In this paper, we present an interesting 1-round iterative differential characteristic of the underlying block cipher TRIFLE-BC used in TRIFLE, which holds with probability $2^{-3}$. Consequently, it allows mounting a distinguishing attack on TRIFLE-BC for up to 43 (out of 50) rounds with data complexity $2^{124}$ and time complexity $2^{124}$. Most importantly, with such an iterative differential characteristic, the forgery attack on TRIFLE can reach up to 21 (out of 50) rounds with data complexity $2^{63}$ and time complexity $2^{63}$. Finally, to achieve a key recovery attack on reduced TRIFLE, we construct a differential characteristic covering three blocks by carefully choosing the positions of the iterative differential characteristic. As a result, we can mount a key-recovery attack on TRIFLE for up to 11 rounds with data complexity $2^{63}$ and time complexity $2^{104}$. Although the results in this paper cannot threaten the security margin of TRIFLE, we hope they can help further understand the security of TRIFLE.
A Framework for Universally Composable Oblivious Transfer from One-Round Key-Exchange
Oblivious transfer is one of the main pillars of modern cryptography and plays a major role as a building block for other more complex cryptographic primitives.
In this work, we present an efficient and versatile framework for oblivious transfer (OT) using one-round key-exchange (ORKE), a special class of key exchange (KE) where only one message is sent from each party to the other. Our contributions can be summarized as follows:
i) We carefully analyze ORKE schemes and introduce new security definitions. Namely, we introduce a new class of ORKE schemes, called Alice-Bob one-round key-exchange (A-B ORKE), and the definitions of message and key indistinguishability.
ii) We show that OT can be obtained from A-B ORKE schemes fulfilling message and key indistinguishability. We accomplish this by designing a new efficient, versatile and universally composable framework for OT in the Random Oracle Model (ROM). The efficiency of the framework presented depends almost exclusively on the efficiency of the A-B ORKE scheme used since all other operations are linear in the security parameter.
Universally composable OT schemes in the ROM based on new hardness assumptions can be obtained from instantiating our framework.
Examples are presented using the classical Diffie-Hellman KE, RLWE-based KE and Supersingular Isogeny Diffie-Hellman KE.
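As a rough illustration of how a DH-type key exchange can yield OT in the ROM, here is a Python sketch in the spirit of the "simplest OT" of Chou and Orlandi, not the paper's framework, over a toy multiplicative group (illustration only; a real instantiation would use a proper prime-order group and add the checks the UC proof requires):

import hashlib
import secrets

P = 2**255 - 19            # a known prime, used here only as a toy modulus
G = 2

def H(x: int) -> bytes:
    return hashlib.sha256(x.to_bytes(32, "big")).digest()

# Sender's single key-exchange message: A = g^a.
a = secrets.randbelow(P - 2) + 1
A = pow(G, a, P)

# Receiver with choice bit c replies with B = g^b * A^c.
c = 1
b = secrets.randbelow(P - 2) + 1
B = (pow(G, b, P) * pow(A, c, P)) % P

# Sender derives both keys k_i = H((B / A^i)^a); receiver gets only k_c = H(A^b).
A_inv = pow(A, P - 2, P)               # A^{-1} mod P by Fermat's little theorem
k0 = H(pow(B, a, P))                   # k_0 = H(B^a)
k1 = H(pow(B * A_inv % P, a, P))       # k_1 = H((B/A)^a)
k_receiver = H(pow(A, b, P))

assert k_receiver == (k0, k1)[c]       # receiver learns exactly one of the keys
print("receiver obtained k%d" % c)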
He Gives C-Sieves on the CSIDH
Recently, Castryck, Lange, Martindale, Panny, and Renes proposed CSIDH (pronounced "sea-side") as a candidate post-quantum "commutative group action." It has attracted much attention and interest, in part because it enables noninteractive Diffie--Hellman-like key exchange with quite small communication. Subsequently, CSIDH has also been used as a foundation for digital signatures.
In 2003--04, Kuperberg and then Regev gave asymptotically subexponential quantum algorithms for "hidden shift" problems, which can be used to recover the CSIDH secret key from a public key. In late 2011, Kuperberg gave a follow-up quantum algorithm called the collimation sieve ("c-sieve" for short), which improves upon the prior ones, in particular by using exponentially less quantum memory and offering more parameter tradeoffs. While recent works have analyzed the concrete cost of the original algorithms (and variants) against CSIDH, nothing of this nature was previously available for the c-sieve.
This work fills that gap. Specifically, we generalize Kuperberg's collimation sieve to work for arbitrary finite cyclic groups, provide some practical efficiency improvements, give a classical (i.e., non-quantum) simulator, run experiments for a wide range of parameters up to the actual CSIDH-512 group order, and concretely quantify the complexity of the c-sieve against CSIDH.
Our main conclusion is that the proposed CSIDH parameters provide relatively little quantum security beyond what is given by the cost of quantumly evaluating the CSIDH group action itself (on a uniform superposition). For example, the cost of CSIDH-512 key recovery is only about $2^{16}$ quantum evaluations using $2^{40}$ bits of quantumly accessible classical memory (plus relatively small other resources). This improves upon a prior estimate of $2^{32.5}$ evaluations and $2^{31}$ qubits of quantum memory, for a variant of Kuperberg's original sieve.
Under the plausible assumption that quantum evaluation does not cost much more than what is given by a recent "best case" analysis, CSIDH-512 can therefore be broken using significantly less than $2^{64}$ quantum T-gates. This strongly invalidates its claimed NIST level 1 quantum security, especially when accounting for the MAXDEPTH restriction. Moreover, under analogous assumptions for CSIDH-1024 and -1792, which target higher NIST security levels, even these instantiations fall short of level 1, except near the high end of the MAXDEPTH range.
Breaking Tweakable Enciphering Schemes using Simon's Algorithm
We show the applicability of Simon's period-finding quantum algorithm to the cryptanalysis of several tweakable enciphering schemes (TESs), namely CMC, EME, XCB, TET and FAST. For all five TESs, we show distinguishing attacks, while for XCB, TET and FAST, the attacks reveal portions of the secret keys.
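For reference, the promise underlying Simon's algorithm (our summary of the standard tool, not of the specific TES distinguishers): given quantum oracle access to $f:\{0,1\}^n \to \{0,1\}^n$ satisfying $f(x) = f(y)$ if and only if $y \in \{x, x \oplus s\}$, the period $s$ can be recovered with $O(n)$ quantum queries; the attacks construct such an $f$ from the enciphering oracle so that the recovered period yields distinguishing or key material.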
On Deploying Secure Computing: Private Intersection-Sum-with-Cardinality
In this work, we discuss our successful efforts for industry deployment of a cryptographic secure computation protocol. The problem we consider is privately computing aggregate conversion rate of advertising campaigns. This underlying functionality can be abstracted as Private Intersection-Sum (PI-Sum) with Cardinality. In this setting two parties hold datasets containing user identifiers, and one of the parties additionally has an integer value associated with each of its user identifiers. The parties want to learn the number of identifiers they have in common and the sum of the integer values associated with these users without revealing any more information about their private inputs.
We identify the major properties and enabling factors which make the deployment of a cryptographic protocol possible, practical, and uniquely positioned as a solution for the task at hand. We describe our deployment setting and the most relevant efficiency measure, which in our setting is communication overhead rather than computation. We also present a monetary cost model that can be used as a unifying cost measure, and a computation model which reflects our use-case: low-priority batch computing.
We present three PI-Sum with cardinality protocols: our currently deployed protocol, which relies on Diffie-Hellman style double masking, and two new protocols which leverage more recent techniques for private set intersection (PSI) that use random oblivious transfer and encrypted Bloom filters. We compare the latter two protocols with our original solution when instantiated with different additively homomorphic encryption schemes. We implement our constructions and compare their costs. We also compare with recent generic approaches for computing on the intersection of two datasets and show that our best protocol has a monetary cost that is 20× less than the best known generic approach.
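As a rough Python sketch of the Diffie-Hellman style double masking (ours, for the intersection cardinality only; the deployed protocol additionally attaches additively homomorphically encrypted values to obtain the intersection-sum, and all parameters here are toy):

import hashlib
import secrets

P = 2**255 - 19                      # toy prime modulus (illustration only)

def hash_to_group(ident: str) -> int:
    d = int.from_bytes(hashlib.sha256(ident.encode()).digest(), "big")
    return pow(d % (P - 2) + 2, 2, P)        # crude hash into the group

def mask(elems, key):
    return {pow(x, key, P) for x in elems}

alice = {"u1", "u2", "u3", "u7"}
bob = {"u2", "u3", "u9"}
ka = secrets.randbelow(P - 2) + 1
kb = secrets.randbelow(P - 2) + 1

# Round 1: each party masks its own hashed identifiers with its secret key.
A1 = mask({hash_to_group(u) for u in alice}, ka)
B1 = mask({hash_to_group(u) for u in bob}, kb)

# Round 2: each party re-masks the other's set; H(u)^(ka*kb) coincides
# exactly on the common identifiers, so only the cardinality is revealed.
print("intersection size:", len(mask(B1, ka) & mask(A1, kb)))   # -> 2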
Neural Network Model Assessment for Side-Channel Analysis
Leakage assessment of cryptographic implementations with side-channel analysis relies on two important assumptions: the leakage model and the number of side-channel traces. In the context of profiled side-channel attacks, having these assumptions correctly defined is a sufficient first step to evaluate the security of a crypto implementation with template attacks. This method assumes that the features (leakages or points of interest) follow a univariate or multivariate Gaussian distribution for the estimation of the probability density function. When trained machine learning or neural network models are employed as classifiers for profiled attacks, a third assumption must be taken into account, namely the correctness of the trained model or learning parameters. It has already been shown that convolutional neural networks have advantages for side-channel analysis, such as bypassing trace misalignments and defeating first-order masking countermeasures in software implementations. However, if the trained model is incorrect and the test classification accuracy is close to random guessing, the correctness of the first two assumptions (number of traces and leakage model) will be insufficient and the security of the target under evaluation can be overestimated. This could lead to wrong conclusions in leakage certifications. One solution to verify whether the trained model is acceptable relies on identifying the input features that the neural network considers as points of interest. In this paper, we implement the assessment of neural network models by using the proposed backward propagation path method. Our method is employed during the profiling phase as a tool to verify what the neural network is learning from side-channel traces and to support the optimization of hyper-parameters. The method is tested against a masked AES implementation. One of the main results highlights the importance of L2 regularization for the automated selection of points of interest from a neural network.
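A minimal profiling-model sketch (ours; the trace length, class count and penalty are hypothetical) showing where L2 regularization enters when building such a model in Keras:

import tensorflow as tf

TRACE_LEN = 700      # hypothetical samples per side-channel trace
NUM_CLASSES = 256    # e.g., one class per value of an S-box output byte
L2 = 1e-4            # the regularization strength under assessment

# L2 penalties push irrelevant weights toward zero, so the input features
# the network keeps relying on are candidate points of interest.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(TRACE_LEN,)),
    tf.keras.layers.Dense(200, activation="relu",
                          kernel_regularizer=tf.keras.regularizers.l2(L2)),
    tf.keras.layers.Dense(200, activation="relu",
                          kernel_regularizer=tf.keras.regularizers.l2(L2)),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()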
Optimized SIKE Round 2 on 64-bit ARM
In this work, we present the first highly-optimized implementation of Supersingular Isogeny Key Encapsulation (SIKE), submitted to NIST's second round of the post-quantum standardization process, on 64-bit ARMv8 processors. To the best of our knowledge, this work is the first optimized implementation of SIKE round 2 on 64-bit ARM over SIKEp434 and SIKEp610. The proposed library is explicitly optimized for these two security levels and provides a constant-time implementation of the SIKE mechanism on ARMv8-powered embedded devices. We adapt different optimization techniques to reduce the total number of underlying arithmetic operations at the field level. In particular, the benchmark results on embedded processors equipped with an ARM Cortex-A53 @ 1.536 GHz show that the entire SIKE round 2 key encapsulation mechanism takes only 84 ms at NIST's security level 1. Considering SIKE's extremely small key size in comparison to other candidates, our result implies that SIKE is one of the promising candidates for key encapsulation mechanisms on embedded devices in the quantum era.
Leveraging Linear Decryption: Rate-1 Fully-Homomorphic Encryption and Time-Lock Puzzles
We show how to combine a fully-homomorphic encryption scheme with linear decryption and a linearly-homomorphic encryption scheme to obtain constructions with new properties. Specifically, we present the following new results.
(1) Rate-1 Fully-Homomorphic Encryption: We construct the first scheme with message-to-ciphertext length ratio (i.e., rate) $1-\sigma$ for $\sigma = o(1)$. Our scheme is based on the hardness of the Learning with Errors (LWE) problem and $\sigma$ is proportional to the noise-to-modulus ratio of the assumption. Our building block is a construction of a new high-rate linearly-homomorphic encryption.
One application of this result is the first general-purpose secure function evaluation protocol in the preprocessing model where the communication complexity is within an additive factor of the optimal insecure protocol.
(2) Fully-Homomorphic Time-Lock Puzzles: We construct the first time-lock puzzle where one can evaluate any function over a set of puzzles without solving them, from standard assumptions. Prior work required the existence of sub-exponentially hard indistinguishability obfuscation.
The Key is Left under the Mat: On the Inappropriate Security Assumption of Logic Locking Schemes
Logic locking has been proposed as an obfuscation technique to protect outsourced IC designs from Intellectual Property (IP) piracy by untrusted entities in the design and fabrication process. It obfuscates the netlist by adding extra key-gates to mislead an adversary whose aim is to reverse engineer the netlist. The correct functionality is obtained only if a correct key is applied to the key-gates. The key is written into a non-volatile memory (NVM) by the IP owner after fabrication.
In the past several years, the focus of the research community has been mostly on Oracle-guided attacks on logic locking, such as SAT attacks, and on proposing proper countermeasures against such attacks. However, none of the reported research in the literature has ever challenged a more fundamental assumption of logic locking: the security of the key itself. In other words, if an adversary can read out the correct key after insertion, the security of the entire scheme is broken, no matter how robust the logic-locking scheme is. In this work, we first review possible adversaries for locked circuits and their capabilities. Afterward, we demonstrate that even under the assumption of a tamper- and read-proof memory for the key storage, which is not vulnerable to any physical attacks, the key transfer between the memory and the key-gates through registers and buffers makes key extraction by an adversary possible. To support our claim, we implemented a proof-of-concept locked circuit as well as one of the common logic-locking benchmarks on a 28 nm flash-based FPGA and extracted their keys using optical probing. Finally, we discuss the feasibility of the proposed attack in different scenarios and propose potential countermeasures.
Improved Building Blocks for Secure Multi-Party Computation based on Secret Sharing with Honest Majority
Secure multi-party computation permits the evaluation of any desired functionality on private data without disclosing the data to the participants. It is gaining popularity due to the increasing collection of user, customer, or patient data and the need to analyze data sets distributed across different organizations without disclosing them. Because the adoption of secure computation techniques depends on their performance in practice, it is important to continue improving that performance. In this work, we focus on common non-trivial operations used by many types of programs, where any advance in their performance would impact the runtime of programs that rely on them. In particular, we treat the operation of reading or writing an element of an array at a private location, and integer multiplication. The focus of this work is the secret-sharing setting with honest majority in the semi-honest security model. We demonstrate the improvement of the proposed techniques over prior constructions via analytical and empirical evaluation.
Homomorphism learning problems and its applications to public-key cryptography
We present a framework for the study of a learning problem over abstract groups, and introduce a new technique which allows for public-key encryption using generic groups. We prove, however, that commutative groups cannot be used to instantiate this protocol if the resulting encryption scheme is to be quantum resistant.
On the Quantum Complexity of the Continuous Hidden Subgroup Problem
The Hidden Subgroup Problem (HSP) aims at capturing all problems that can plausibly be solved in quantum polynomial time following the blueprint of Shor's celebrated algorithm. Successful solutions to this problem over various commutative groups allow one to efficiently perform number-theoretic tasks such as factoring or finding discrete logarithms.
The latest successful generalization (Eisentraeger et al. STOC 2014) considers the problem of finding a full-rank lattice as the hidden subgroup of the continuous vector space $\mathbb{R}^m$, even for large dimensions $m$. It unlocked new cryptanalytic algorithms (Biasse-Song SODA 2016, Cramer et al. EUROCRYPT 2016 and 2017), in particular to find mildly short vectors in ideal lattices.
The cryptanalytic relevance of such a problem raises the question of a more refined and quantitative complexity analysis. In light of the increasing physical difficulty of maintaining a large entanglement of qubits, the degree of concern differs considerably depending on whether the above algorithm requires only linearly many qubits or a much larger polynomial number of qubits.
This is the question we start addressing with this work. We propose a detailed analysis of (a variation of) the aforementioned HSP algorithm, and conclude on its complexity as a function of all the relevant parameters. Our modular analysis is tailored to support the optimization of future specializations to cases of cryptanalytic interest. We suggest a few ideas in this direction.
On the Security of Lattice-based Fiat-Shamir Signatures in the Presence of Randomness Leakage
Leakage during the signing process, including partial key exposure and partial (or complete) randomness exposure, may be devastating for the security of digital signatures. In this work, we investigate the security of lattice-based Fiat-Shamir signatures in the presence of randomness leakage. To this end, we present a generic key recovery attack that relies on minimal leakage of randomness, and then theoretically connect it to a variant of the Integer-LWE (ILWE) problem. The ILWE problem, introduced by Bootle et al. at Asiacrypt 2018, is to recover the secret vector ${\bf s}$ given polynomially many samples of the form $({\bf a}, \langle {\bf a}, {\bf s} \rangle + e) \in \mathbb{Z}^{n+1}$, and it is solvable if the error $e \in \mathbb{Z}$ is not superpolynomially larger than the inner product $\langle {\bf a}, {\bf s} \rangle$. However, in our variant, which we call the FS-ILWE problem, ${\bf a} \in \mathbb{Z}^{n}$ is a sparse vector whose coefficients are no longer independent, and $e$ is related to both ${\bf a}$ and ${\bf s}$. We prove that the FS-ILWE problem can be solved in polynomial time, and present an efficient algorithm to solve it.
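To make the ILWE shape concrete, here is a hedged toy sketch (a generic least-squares recovery in the spirit of the Bootle et al. approach, not the paper's FS-ILWE algorithm; all parameters are invented for illustration): with enough samples $(\mathbf{a}, \langle\mathbf{a},\mathbf{s}\rangle + e)$ and moderate noise, rounding the least-squares solution recovers $\mathbf{s}$ exactly.

```python
# Toy ILWE instance solved by least squares + rounding.
# Parameters are illustrative; not the paper's FS-ILWE attack.
import numpy as np

rng = np.random.default_rng(1)
n, m = 16, 4096                  # secret dimension, number of samples
s = rng.integers(-2, 3, n)       # small secret vector in {-2..2}

A = rng.integers(-5, 6, (m, n))  # sample vectors a_i as rows of A
e = rng.integers(-3, 4, m)       # bounded noise
b = A @ s + e                    # b_i = <a_i, s> + e_i

# Least-squares estimate; rounding succeeds because the noise is not
# superpolynomially larger than the inner products.
s_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
recovered = np.rint(s_hat).astype(int)

assert np.array_equal(recovered, s)
print("recovered secret:", recovered)
```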
Our generic key recovery method directly implies that many lattice-based Fiat-Shamir signatures can be totally broken with one (deterministic or probabilistic) bit of randomness leakage per signature. Our attack has been validated by experiments on two NIST PQC signatures, Dilithium and qTESLA. For example, for Dilithium-III with $125$-bit quantum security, the secret key can be recovered within $10$ seconds on an ordinary desktop PC, given about one million signatures. Similarly, key recovery attacks on Dilithium under other parameter sets and on qTESLA complete within $20$ seconds and $31$ minutes, respectively.
In addition, we present a non-profiled attack showing how to obtain the required randomness bit in practice through power analysis of a proof-of-concept implementation of polynomial addition. The experimental results confirm the practical feasibility of our method.
Generalized Related-Key Rectangle Attacks on Block Ciphers with Linear Key Schedule: Applications to SKINNY and GIFT
This paper gives a new generalized key-recovery model of related-key rectangle attacks on block ciphers with linear key schedules. The model is highly optimized and applies to a wide range of such ciphers. As a proof of concept, we apply the new model to two important block ciphers, SKINNY and GIFT, which are basic modules of many candidates in the Lightweight Cryptography (LWC) standardization project run by NIST.
For SKINNY, we reduce the complexity of the best previous 27-round related-tweakey rectangle attack on SKINNY-128-384 from $2^{331}$ to $2^{294}$. In addition, we give the first 28-round related-tweakey rectangle attack on SKINNY-128-384, which extends the previous best by one round. For GIFT-64, we give the first 24-round related-key rectangle attack with a time complexity of $2^{91.58}$, whereas the best previous attack on GIFT-64 reaches only 23 rounds.
Public Ledger for Sensitive Data
Satoshi Nakamoto's blockchain allows one to build publicly verifiable and almost immutable ledgers, but sometimes privacy has to be factored in.
In this work we present an original protocol that allows sensitive data to be stored on a ledger such that its integrity is publicly verifiable while its privacy is preserved, and owners can tightly manage the sharing of their information with efficient revocation.
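As hedged background on the basic tension this abstract describes (the standard salted-commitment building block, not the paper's protocol): publishing only a salted hash of the data lets anyone later verify integrity against a revealed copy, while the ledger entry itself discloses nothing about the content.

```python
# Standard salted hash commitment: a generic building block, not the
# protocol proposed in the paper.
import hashlib, os

def commit(data: bytes) -> tuple:
    """Return (commitment, opening). Only the commitment goes on-ledger."""
    salt = os.urandom(32)                          # hides low-entropy data
    digest = hashlib.sha256(salt + data).digest()
    return digest, salt

def verify(commitment: bytes, data: bytes, salt: bytes) -> bool:
    """Anyone can check integrity once the owner shares (data, salt)."""
    return hashlib.sha256(salt + data).digest() == commitment

record = b"patient: ...; diagnosis: ..."
on_ledger, opening = commit(record)

# Later, the owner shares the record and opening with a chosen party,
# who verifies it against the public ledger entry.
assert verify(on_ledger, record, opening)
assert not verify(on_ledger, b"tampered record", opening)
print("integrity publicly verifiable, content private until shared")
```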
SimpleENC and SimpleENCsmall -- an Authenticated Encryption Mode for the Lightweight Setting
Block cipher modes of operation provide a way to encrypt securely using a block cipher, and different modes achieve different tradeoffs of security, performance, and simplicity. In this paper, we present a new authenticated encryption scheme designed for the lightweight cryptography setting, but usable in standard settings as well. Our mode of encryption is extremely simple, requiring only a single block cipher primitive (in the forward direction) and minimal padding, and it supports streaming (online encryption). In addition, our mode achieves very strong security bounds, and can even provide good security when the block size is just 64 bits. As such, it is highly suitable for lightweight settings, where the lifetime of the key and/or the overall amount encrypted may be high. Our new scheme can be seen as an improved version of CCM that supports streaming and provides much better bounds.
SIKE'd Up: Fast and Secure Hardware Architectures for Supersingular Isogeny Key Encapsulation
In this work, we present a fast parallel architecture to perform supersingular isogeny key encapsulation (SIKE). We propose and implement an isogeny accelerator architecture that uses fast, parallelized isogeny formulas. On top of our isogeny accelerator, we build a novel architecture for the SIKE primitive, which provides both quantum and IND-CCA security. Since SIKE can support static keys, we also propose and implement differential power analysis countermeasures. We synthesized this architecture on the Xilinx Artix-7, Virtex-7, and Kintex UltraScale+ FPGA families. On Virtex-7 FPGAs, our constant-time implementations are roughly 14% faster than the state of the art, with a better area-time product. At NIST security level 5 on a Kintex UltraScale+ FPGA, we can execute the entire SIKE protocol in 15.3 ms. This work continues to improve the speed of isogeny-based computations and features all parameter sets of the SIKE round 2 specification, with results applicable to NIST's post-quantum standardization process.
A Comprehensive Formal Security Analysis and Revision of the Two-phase Key Exchange Primitive of TPM 2.0
The Trusted Platform Module (TPM) version 2.0, which has been demonstrated to be a key element of Industry 4.0, presents a two-phase key exchange primitive for secure communications between Industry 4.0 components. The key exchange primitive of TPM 2.0 can be used to implement three widely standardized authenticated key exchange protocols: the Full Unified Model, the Full MQV, and the SM2 key exchange protocols. However, vulnerabilities have been found in all of these protocols. Fortunately, it seems that the protections offered by TPM chips can mitigate these vulnerabilities. In this paper, we present a security model which captures the TPM's protection of keys and of the protocols' computation environments, and in which multiple protocols can be analyzed in a unified way. Based on this unified security model, we give the first formal security analysis of the key exchange primitive of TPM 2.0. The analysis shows that, with the help of the hardware protections of TPM chips, the primitive indeed satisfies the well-defined security property of our model, but unfortunately only under some impractical limiting conditions that would prevent its application in real-world networks. To make TPM 2.0 applicable to real-world networks, we present a revision of the key exchange primitive that remains secure without the limiting conditions. We give a rigorous analysis of our revision, and the results show that it achieves not only the basic security property of modern AKE security models but also some further security properties.
Secure Computation for Cloud Data Storage
One of the main goals of securing data transmission is the security of cloud data storage. In this paper, we describe several cryptographic techniques which can be used to address the relevant threats and security goals when analyzing cloud computing security. We discuss how cryptographic techniques allow researchers to design private clouds on top of semi-trusted infrastructure. Finally, we elaborate on semi-trusted clouds as they relate to real-world deployments of cloud resources, and on how optimizing the cryptographic protocols would make such deployments, and thus practical ways of securing this type of data, viable.
Upper Bounds on the Multiplicative Complexity of Symmetric Boolean Functions
A metric of special interest for Boolean functions is multiplicative complexity (MC): the minimum number of AND gates sufficient to implement a function with a Boolean circuit over the basis {XOR, AND, NOT}. In this paper we study the MC of symmetric Boolean functions, whose output is invariant under reordering of the input variables. Based on the Hamming weight method of Muller and Preparata (1975), we introduce new techniques that yield circuits with fewer AND gates than the upper bounds of Boyar et al. (2000) and Boyar and Peralta (2008). We generate circuits for all such functions with up to 25 variables. As a special focus, we report concrete upper bounds for the MC of elementary symmetric functions $\Sigma^n_k$ and counting functions $E^n_k$ with up to $n = 25$ input variables. In particular, this allows us to answer two questions posed in 2008: both the elementary symmetric $\Sigma^8_4$ and the counting $E^8_4$ functions have MC 6. Furthermore, we show upper bounds for the maximum MC in the class of $n$-variable symmetric Boolean functions, for each $n$ up to 132.
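As a hedged illustration of the Hamming-weight idea the abstract builds on (a generic textbook rendition, not the paper's optimized circuits): any symmetric function depends only on the input's Hamming weight, so one can first compute the weight's binary representation with adders over {XOR, AND} and count the AND gates used; the standard full adder costs 2 ANDs and the half adder costs 1.

```python
# Hamming-weight method sketch: reduce n input bits to the binary
# representation of their sum via column compression, counting AND
# gates. Generic rendition, not the paper's optimized constructions.

AND_GATES = 0

def AND(a, b):
    global AND_GATES
    AND_GATES += 1
    return a & b

def full_adder(a, b, c):
    """sum = a XOR b XOR c; carry = MAJ(a, b, c), costing 2 ANDs."""
    t = a ^ b
    return t ^ c, AND(a, b) ^ AND(c, t)

def half_adder(a, b):
    """sum = a XOR b; carry = a AND b, costing 1 AND."""
    return a ^ b, AND(a, b)

def hamming_weight_bits(bits):
    """Compress the inputs into binary weight bits, LSB first."""
    columns = [list(bits)]     # columns[i] holds bits of weight 2^i
    i = 0
    while i < len(columns):
        col = columns[i]
        while len(col) >= 2:
            if len(col) >= 3:
                s, c = full_adder(col.pop(), col.pop(), col.pop())
            else:
                s, c = half_adder(col.pop(), col.pop())
            col.append(s)
            if i + 1 == len(columns):
                columns.append([])
            columns[i + 1].append(c)
        i += 1
    return [col[0] if col else 0 for col in columns]

x = [1, 0, 1, 1, 0, 0, 1, 0]       # an 8-bit input of weight 4
w = hamming_weight_bits(x)
assert sum(x) == sum(b << i for i, b in enumerate(w))
# A symmetric function now needs only the weight bits; e.g. E^8_4(x)
# just tests whether w encodes the constant 4.
print("weight bits (LSB first):", w, "| AND gates used:", AND_GATES)
```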
Post-Quantum UC-Secure Oblivious Transfer in the Standard Model with Adaptive Corruptions
Since the seminal result of Kilian, Oblivious Transfer has proven to be a fundamental primitive in cryptography. In such a scheme, a user is able to gain access to an element owned by a server, without learning more than this single element, and without the server learning which element the user has accessed. This primitive has received a lot of study in the literature, yet very few schemes are based on lattices.
The recent NIST call for post-quantum encryption and signature schemes has revived interest in cryptographic protocols based on post-quantum assumptions, and created the need for a secure post-quantum oblivious transfer scheme.
In this paper, we show how to construct an oblivious transfer scheme based on lattices, from a collision-resistant chameleon hash scheme (CH) and a CCA encryption scheme accepting a smooth projective hash function (SPHF). Note that our scheme does not rely on random oracles and provides UC security against adaptive corruptions assuming reliable erasures.
Endemic Oblivious Transfer
Oblivious Transfer has played a crucial role in the design of secure multi-party computation. Nevertheless, there are not many practical solutions that achieve simulation-based security and are at the same time instantiable from different assumptions.
In this work, we consider a simulation-based security notion that we call endemic security. We show how to construct highly efficient oblivious transfer in the random oracle model that achieves endemic security under a wide range of assumptions, among them DDH, CDH, LWE, and coding-based assumptions. We construct a secure oblivious transfer based on DDH that takes only a single communication round, which allows significant performance gains. We also instantiate our oblivious transfer with the CRYSTALS-Kyber key agreement. Our implementation shows that both instantiations can be computed in under one millisecond.
Further, we revisit, correct, and improve existing oblivious transfer extension techniques. We provide an implementation of an oblivious transfer extension protocol in the ideal cipher model that is actively secure, processing up to 23 million OTs per second, up to 10 times faster than previous secure implementations. We also show that our framework can compute endemically secure OT extension and the base OTs in just two rounds.
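For intuition about DDH-based oblivious transfer, here is a hedged sketch of the classic Bellare-Micali-style construction, not the paper's endemic-security protocol; the tiny group parameters are utterly insecure and chosen only for readability. The receiver publishes one public key whose secret it knows, constrained so that it cannot know the secrets of both.

```python
# Classic DDH-based 1-out-of-2 OT sketch (Bellare-Micali style), in a
# toy group. Not the paper's endemic-security protocol.
import hashlib, secrets

p = 23            # toy safe prime; real deployments use ~256-bit groups
q = (p - 1) // 2  # order of the subgroup of squares
g = 4             # generator of that subgroup

def mask(x: int) -> bytes:
    return hashlib.sha256(x.to_bytes(4, "big")).digest()[:16]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(u ^ v for u, v in zip(a, b))

# Common reference: a group element whose discrete log nobody knows.
C = pow(g, 1 + secrets.randbelow(q - 1), p)

# --- Receiver (choice bit b): knows the secret key of PK_b only ------
b = 1
k = secrets.randbelow(q)
PK_b = pow(g, k, p)
PK0 = PK_b if b == 0 else C * pow(PK_b, p - 2, p) % p  # force PK0*PK1 = C

# --- Sender: reconstructs PK1 and encrypts both messages -------------
m0, m1 = b"message zero....", b"message one....."
PK1 = C * pow(PK0, p - 2, p) % p
r0, r1 = secrets.randbelow(q), secrets.randbelow(q)
ct0 = (pow(g, r0, p), xor(m0, mask(pow(PK0, r0, p))))
ct1 = (pow(g, r1, p), xor(m1, mask(pow(PK1, r1, p))))

# --- Receiver: can decrypt only the ciphertext for its choice bit ----
R, masked = (ct0, ct1)[b]
assert xor(masked, mask(pow(R, k, p))) == (m0, m1)[b]
print("receiver learned:", xor(masked, mask(pow(R, k, p))))
```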
Commodity-Based 2PC for Arithmetic Circuits
We revisit the framework of Commodity-based Cryptography presented by Beaver (STOC'97) with a focus on updating the framework to fit with modern multiparty computation (MPC) protocols. We study the possibility of replacing the well-known preprocessing model with a commodity-based setting, where a set of independent servers (some of which may be corrupt) provide clients with correlated randomness. From this, the clients then distill correct and secure correlated randomness that they can use during the online phase of the MPC protocol.
Beaver showed how to do OT with semi-honest security in the commodity setting. We improve on Beaver's result as follows: in a model where one of the two clients and a constant fraction of the servers may be maliciously corrupted, we obtain unconditionally secure multiplication triples and oblivious linear evaluations (OLEs) such that the amortized communication cost of one triple/OLE is a constant number of field elements (when the field is sufficiently large). We also report on results from an implementation of the OLE protocol. Finally, we suggest an approach to the practical realization of a commodity-based system where servers need no memory and can be accessed asynchronously by clients, but where a maliciously corrupt client still cannot obtain data it should not have access to.
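As hedged background on the multiplication triples mentioned above (the standard Beaver technique; the paper's commodity-server distillation step is not modeled): given additive shares of a random triple (a, b, c = a*b), two parties can multiply secret-shared x and y by opening only the masked values x - a and y - b.

```python
# Standard Beaver-triple multiplication over a prime field; the paper's
# commodity-server distillation of the triple is not modeled here.
import random

P = 2**61 - 1

def share(x):
    r = random.randrange(P)
    return [r, (x - r) % P]

def reconstruct(sh):
    return sum(sh) % P

# Correlated randomness (in the commodity model, distilled from the
# servers' outputs): additive shares of a, b, and c = a*b.
a, b = random.randrange(P), random.randrange(P)
a_sh, b_sh, c_sh = share(a), share(b), share((a * b) % P)

# Inputs to be multiplied, secret-shared between the two clients.
x, y = 1234, 5678
x_sh, y_sh = share(x), share(y)

# Step 1: the parties open d = x - a and e = y - b (each party would
# broadcast its local difference; reconstructed here for brevity).
# Both values are safe to open: a and b perfectly mask x and y.
d = (reconstruct(x_sh) - a) % P
e = (reconstruct(y_sh) - b) % P

# Step 2: local computation of shares of x*y, using
# x*y = d*e + e*a + d*b + c; the constant d*e is added by party 0 only.
z_sh = [
    (d * e + e * a_sh[0] + d * b_sh[0] + c_sh[0]) % P,
    (e * a_sh[1] + d * b_sh[1] + c_sh[1]) % P,
]

assert reconstruct(z_sh) == (x * y) % P
print("shared product:", reconstruct(z_sh))
```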
Arcula: A Secure Hierarchical Deterministic Wallet for Multi-asset Blockchains
This work presents Arcula, a new design for hierarchical deterministic wallets that brings identity-based addresses to the blockchain. Arcula is built on top of provably secure cryptographic primitives. It generates all its cryptographic secrets from a user-provided seed and enables the derivation of new public keys based on the identities of users, without requiring any secret information. Unlike other wallets, it achieves all these properties while being secure against privilege escalation. We formalize the security model of hierarchical deterministic wallets and prove that an attacker compromising an arbitrary number of users within an Arcula wallet cannot escalate his privileges and compromise users higher in the access hierarchy. Our design works out-of-the-box with any blockchain that enables the verification of signatures on arbitrary messages. We evaluate its usage in a real-world scenario on the Bitcoin Cash network.
A Cautionary Note Regarding the Usage of Leakage Detection Tests in Security Evaluation
An established ingredient in the security evaluation of cryptographic devices is leakage detection, whereby physically observable characteristics such as the power consumption are measured during operation and statistically analysed in search of sensitive data dependencies. However, depending on its precise execution, this approach potentially suffers several drawbacks: a risk of false positives, a difficulty interpreting negative outcomes, and the infeasibility of covering every possible eventuality. Moreover, efforts to mitigate these drawbacks can be costly with respect to the data complexity of the testing procedures. In this work, we clarify the (varying) goals of leakage detection and assess how well geared current practice is towards meeting each of those goals. We introduce some innovations on existing methodologies and make recommendations for best practice. Ultimately, though, we find that many of the obstacles cannot be fully overcome by existing statistical procedures, so that it remains necessary to be highly cautious and to clearly state the limitations of the methods used when reporting outcomes.
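As a hedged illustration of the kind of test this abstract scrutinizes (the widely used fixed-vs-random Welch t-test from the TVLA methodology, shown generically; the trace data and injected leak are simulated): a |t| value exceeding the conventional threshold of 4.5 at some sample point flags a potential sensitive-data dependency, which is exactly where the false-positive risk discussed above arises.

```python
# Fixed-vs-random (TVLA-style) Welch t-test on simulated power traces.
# Threshold and simulated leakage are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n_traces, n_samples = 2000, 100

# Simulated measurement noise for the two acquisition groups.
fixed = rng.normal(0.0, 1.0, (n_traces, n_samples))
random_grp = rng.normal(0.0, 1.0, (n_traces, n_samples))
fixed[:, 42] += 0.3   # inject a small data dependency at sample 42

def welch_t(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Pointwise Welch t-statistic between two trace sets."""
    nx, ny = len(x), len(y)
    num = x.mean(axis=0) - y.mean(axis=0)
    den = np.sqrt(x.var(axis=0, ddof=1) / nx + y.var(axis=0, ddof=1) / ny)
    return num / den

t = welch_t(fixed, random_grp)
leaky = np.flatnonzero(np.abs(t) > 4.5)    # conventional TVLA threshold
print("samples flagged as leaky:", leaky)  # expected: [42]
```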
Cryptanalysis of Plantlet
Plantlet is a lightweight stream cipher designed by Mikhalev, Armknecht and Müller in \texttt{IACR ToSC} 2017. It has a Grain-like structure with two state registers of size $40$ and $61$ bits. In spite of this, the cipher does not appear to lose security against generic Time-Memory-Data Tradeoff attacks, due to the novelty of its design. The cipher uses an 80-bit secret key and a 90-bit IV.
In this paper, we first present a key recovery attack on Plantlet that requires around $2^{76.26}$ Plantlet encryptions. The attack leverages the fact that two internal states of Plantlet that differ only in the 43rd LFSR location are guaranteed, with probability 1, to produce keystream bits that are either equal or unequal at 45 specific locations. Thus, when two keystream segments exhibit the 45-bit difference pattern just mentioned, an attacker can guess with some probability that they were produced by two internal states differing only in the 43rd LFSR location. Thereafter, by solving a system of polynomial equations representing the keystream bits, the attacker finds the secret key if the guess was correct, or reaches a contradiction if it was not. In the latter event, the procedure is repeated for other keystream blocks with the given difference. We show that this process, repeated a finite number of times, does indeed yield the value of the secret key.
In the second part of the paper, we observe that the previous attack was limited to internal state differences occurring at time instances congruent to $0 \bmod 80$. By generalizing the attack to internal state differences in all equivalence classes modulo 80, we lower the total number of keystream bits required and reduce the attack complexity to $2^{69.98}$ Plantlet encryptions.
Decentralized Multi-authority Anonymous Authentication for Global Identities with Non-interactive Proofs
A decentralized multi-authority anonymous authentication scheme that is suitable for IoT and blockchains is proposed, in which the prover and the verifier are non-interactive. The proposed scheme can handle dynamically increasing/decreasing independent attribute authorities. When an entity wants the authorities to issue attribute credentials, the authorities only have to generate digital signatures on her global identity. Two security definitions are given: resistance against eavesdrop-and-collude attacks that cause misauthentication, and anonymity for privacy protection. A construction of our scheme is then described under a ``commit-to-ID'' principle to attain resistance against such collusion attacks. There are two building blocks: a structure-preserving signature scheme and the Groth-Sahai non-interactive proof system, both of which are set in bilinear groups. The proposed scheme is proved secure in the standard model.
SAEB: A Lightweight Blockcipher-Based AEAD Mode of Operation
Lightweight cryptography for computationally constrained devices is actively studied. In contrast to the advances in lightweight blockciphers over the last decade, lightweight modes of operation seem less mature, yet they have a large impact on performance. There is therefore a great demand for lightweight modes of operation, especially for authenticated encryption with associated data (AEAD). Among the many known properties of conventional modes of operation, the following four are essential for constrained devices:
1. Minimum State Size: the state size equals the block size of the blockcipher.
2. Inverse Free: no need for a blockcipher decryption.
3. XOR Only: only XOR is needed in addition to a blockcipher encryption.
4. Online: a data block is processed only once.
Properties 1 and 4 contribute to small memory usage, and properties 2 and 3 contribute to a small program/circuit footprint. On top of the above, a fifth property regarding associated data (AD) is also important for performance:
5. Efficient Handling of Static AD: static AD can be precomputed.
We design a lightweight blockcipher-based AEAD mode of operation called SAEB: to the best of our knowledge, the first mode of operation that satisfies all five properties. The performance of SAEB is evaluated on various software and hardware platforms. The evaluation results show that SAEB outperforms conventional blockcipher-based AEAD modes of operation in various performance metrics for lightweight cryptography.
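To make properties 2 and 3 concrete, here is a hedged, generic sketch of an inverse-free, XOR-only encryption loop in the style of CFB; this is not the SAEB mode itself, authentication and AD handling are omitted, and a keyed hash stands in for the blockcipher. Both encryption and decryption call the "blockcipher" only in the forward direction and combine data solely with XOR.

```python
# Generic illustration of the "inverse free" and "XOR only" properties
# (properties 2 and 3): CFB-style encryption where decryption also uses
# only the forward blockcipher call. NOT the SAEB mode.
import hashlib

BLOCK = 16

def E(key: bytes, block: bytes) -> bytes:
    """Toy stand-in for a blockcipher's forward direction."""
    return hashlib.sha256(key + block).digest()[:BLOCK]

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))  # truncates to shorter input

def encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
    ct, prev = b"", iv
    for i in range(0, len(plaintext), BLOCK):
        block = xor(plaintext[i:i + BLOCK], E(key, prev))  # XOR only
        ct, prev = ct + block, block.ljust(BLOCK, b"\0")
    return ct  # a short final block needs no padding (online, minimal padding)

def decrypt(key: bytes, iv: bytes, ciphertext: bytes) -> bytes:
    pt, prev = b"", iv
    for i in range(0, len(ciphertext), BLOCK):
        block = ciphertext[i:i + BLOCK]
        pt, prev = pt + xor(block, E(key, prev)), block.ljust(BLOCK, b"\0")
    return pt  # decryption also uses E in the forward direction only

key, iv = b"k" * 16, b"\x01" * BLOCK
msg = b"online, inverse-free, XOR-only encryption"
assert decrypt(key, iv, encrypt(key, iv, msg)) == msg
```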