Papers updated in last 183 days (Page 3 of 1677 results)
The Black-Box Simulation Barrier Persists in a Fully Quantum World
Zero-Knowledge (ZK) protocols have been a subject of intensive study due to their fundamental importance and versatility in modern cryptography. However, the inherently different nature of quantum information significantly alters the landscape, necessitating a re-examination of ZK designs.
A crucial aspect of ZK protocols is their round complexity, intricately linked to $\textit{simulation}$, which forms the foundation of their formal definition and security proofs. In the $\textit{post-quantum}$ setting, where honest parties and their communication channels are all classical but the adversaries could be quantum, Chia, Chung, Liu, and Yamakawa [FOCS'21] demonstrated the non-existence of constant-round $\textit{black-box-simulatable}$ ZK arguments (BBZK) for $\mathbf{NP}$ unless $\mathbf{NP} \subseteq \mathbf{BQP}$. However, this problem remains widely open in the full-fledged quantum future that will eventually arrive, where all parties (including the honest ones) and their communication are naturally quantum.
Indeed, this problem is of interest to the broader theory of quantum computing. It has been an important theme to investigate how quantum power fundamentally alters traditional computational tasks, such as the $\textit{unconditional}$ security of Quantum Key Distribution and the incorporation of Oblivious Transfers in MiniQCrypt. Moreover, quantum communication has led to round compression for commitments and interactive arguments. Along this line, the above problem is of great significance in understanding whether quantum computing could also change the nature of ZK protocols in some fundamental manner.
We resolve this problem by proving that only languages in $\mathbf{BQP}$ admit constant-round $\textit{fully-quantum}$ BBZK. This result holds significant implications. Firstly, it illuminates the nature of quantum zero-knowledge and provides valuable insights for designing future protocols in the quantum realm. Secondly, it relates ZK round complexity with the intriguing problem of $\mathbf{BQP}$ vs $\mathbf{QMA}$, which is out of the reach of previous analogous impossibility results in the classical or post-quantum setting. Lastly, it justifies the need for the $\textit{non-black-box}$ simulation techniques or the relaxed security notions employed in existing constant-round fully-quantum ZK protocols.
Proving CPU Executions in Small Space
zkVMs are SNARKs for verifying CPU execution. They allow an untrusted prover to show that it correctly ran a specified program on a witness, where the program is given as bytecode conforming to an instruction set architecture like RISC-V. Existing zkVMs still struggle with high prover resource costs, notably large runtime and memory usage. We show how to implement Jolt—an advanced, sum-check-based zkVM—with a significantly reduced memory footprint, without relying on SNARK recursion, and with only modest runtime overhead (potentially well below a factor of two). We discuss benefits of this approach compared to prevailing recursive techniques.
Silent Circuit Relinearisation: Sublinear-Size (Boolean and Arithmetic) Garbled Circuits from DCR
We introduce a general template for building garbled circuits with low communication, under the decisional composite residuosity (DCR) assumption. For the case of layered Boolean circuits, we can garble a circuit of size $s$ with communication proportional to $O(s/\log\log s)$ bits, plus an additive factor that is polynomial in the security parameter. For layered arithmetic circuits with $B$-bounded integer computation, we obtain a similar result: the garbled arithmetic circuit has size $O(s/\log\log s) \cdot (\lambda + \log B)$ bits, where $\lambda$ is the security parameter. These are the first constructions of general-purpose, garbled circuits with sublinear size, without relying on heavy tools like indistinguishability obfuscation or attribute-based and fully homomorphic encryption.
To achieve these results, our main technical tool is a new construction of a form of homomorphic secret sharing where some of the inputs are semi-private, that is, known to one of the evaluating parties. Through a new relinearisation technique that allows performing arbitrary additions and multiplications on semi-private shares, we build such an HSS scheme that supports evaluating any function of the form $C(x) \cdot C'(y)$, where $C$ is any polynomially-sized circuit applied to the semi-private input $y$, and $C'$ is a restricted-multiplication (or, NC1) circuit applied to the private input $x$. This significantly broadens the expressiveness of prior known HSS constructions.
Clubcards for the WebPKI: smaller certificate revocation tests in theory and practice
CRLite is a low-bandwidth, low-latency, privacy-preserving mechanism for distributing certificate revocation data. A CRLite aggregator periodically encodes revocation data into a compact static hash set, or membership test, which can be downloaded by clients and queried privately. We present a novel data structure for membership tests, which we call a clubcard, and we evaluate the encoding efficiency of clubcards using data from Mozilla's CRLite infrastructure.
As of November 2024, the WebPKI contains over 900 million valid certificates and over 8 million revoked certificates. We describe an instantiation of CRLite that encodes the revocation status of these certificates in a 6.7 MB package. This is $54\%$ smaller than the original instantiation of CRLite presented at the 2017 IEEE Symposium on Security and Privacy, and it is $21\%$ smaller than the lower bound claimed in that work.
A sequence of clubcards can encode a dynamic dataset like the WebPKI revocation set. Using data from late 2024 again, we find that clubcards encoding 6 hour delta updates to the WebPKI can be compressed to 26.8 kB on average---a size that makes CRLite truly practical.
We have extended Mozilla's CRLite infrastructure so that it can generate clubcards, and we have added client-side support for this system to Firefox. We report on some performance aspects of our implementation, which is currently the default revocation checking mechanism in Firefox Nightly, and we propose strategies for further reducing the bandwidth requirements of CRLite.
Random Oracle Combiners: Merkle-Damgård Style
A Random Oracle Combiner (ROC), introduced by Dodis et al. (CRYPTO ’22), takes two hash functions $h_1, h_2$ from $m$ bits to $n$ bits and outputs a new hash function $C$ from $m'$ to $n'$ bits. This function $C$ is guaranteed to be indifferentiable from a fresh random oracle as long as one of $h_1$ and $h_2$ (say, $h_1$) is a random oracle, while the other $h_2$ can “arbitrarily depend” on $h_1$.
The work of Dodis et al. also built the first length-preserving ROC, where $n' = n$. Unfortunately, despite this feasibility result, this construction has several deficiencies. From the practical perspective, it could not be directly applied to existing Merkle-Damgård-based hash functions, such as SHA2 or SHA3. From the theoretical perspective, it required $h_1$ and $h_2$ to have input length $m > 3\lambda$, where $\lambda$ is the security parameter.
To overcome these limitations, Dodis et al. conjectured — and left as the main open question — that the following (salted) construction is a length-preserving ROC:
$C^{h_1,h_2}_{\mathcal{Z}_1,\mathcal{Z}_2}(M) = h_1^*(M, \mathcal{Z}_1) \oplus h_2^*(M, \mathcal{Z}_2),$
where $\mathcal{Z}_1, \mathcal{Z}_2$ are random salts of appropriate length, and $f^*$ denotes the Merkle-Damgård extension of a given compression function $f$. As our main result, we resolve this conjecture in the affirmative. For practical use, this makes the resulting combiner applicable to existing Merkle-Damgård-based hash functions. On the theory side, it shows the existence of ROCs only requiring optimal input length $m = \lambda + O(1)$.
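For concreteness, here is a minimal Python sketch of the salted combiner above, under stated assumptions: the two compression functions are toy instantiations built from hashlib (the paper treats them abstractly), the padding is simplified zero-padding rather than MD strengthening, and appending the salt as the final block is one plausible reading of $f^*(M, \mathcal{Z})$.

```python
import hashlib

N = 32  # chaining-value / output length in bytes (illustrative)

def compress(name, state, block):
    # Toy compression function built from a library hash; the paper
    # treats h1, h2 as abstract compression functions.
    return hashlib.new(name, state + block).digest()[:N]

def md_star(name, message, salt, block_size=64):
    # Merkle-Damgard extension f*(M, Z): absorb M block by block, with
    # the salt fed in at the end (placement is an assumption here).
    data = message + salt
    data += b"\x00" * (-len(data) % block_size)  # toy zero-padding
    state = b"\x00" * N  # fixed IV
    for i in range(0, len(data), block_size):
        state = compress(name, state, data[i:i + block_size])
    return state

def combiner(message, z1, z2):
    # C^{h1,h2}_{Z1,Z2}(M) = h1*(M, Z1) XOR h2*(M, Z2)
    a = md_star("sha256", message, z1)
    b = md_star("sha512", message, z2)
    return bytes(x ^ y for x, y in zip(a, b))

print(combiner(b"message", b"\x01" * 32, b"\x02" * 32).hex())
```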
On some non-linear recurrences over finite fields linked to isogeny graphs
This paper presents new results that establish connections between isogeny graphs and nonlinear recurrences over finite fields. Specifically, we prove several theorems that link these two areas, offering deeper insights into the structure of isogeny graphs and their relationship with nonlinear recurrence sequences. We further provide two related conjectures which may be worthy of further research. These findings contribute to a better understanding of the endomorphism ring of a curve, advancing progress toward the resolution of the Endomorphism Ring Problem, which aims to provide a computational characterization of the endomorphism ring of a supersingular elliptic curve.
Analytic and Simulation Results of a Gaussian Physically Unclonable Constant Based on Resistance Dispersion
Physically Unclonable Constants (PUCs) are a special type of Physically Unclonable Function (PUF) that can be used to embed secret bit-strings in chips. Most PUCs are an array of cells, where each cell is a digital circuit that evolves spontaneously toward one of two states, the chosen state being a function of random manufacturing process variations. In this paper we propose an Analog Physically Unclonable Constant (APUC) whose output is an analog value to be transformed into a digital one by a digitizer circuit. The rationale behind this proposal is that an APUC cell has the potential of providing more than one bit, reducing the required footprint. Preliminary theoretical analysis and simulation results are presented. The proposed APUC has interesting performance (e.g., it can provide up to 5 bits per cell) that warrants further investigation.
TockOwl: Asynchronous Consensus with Fault and Network Adaptability
BFT protocols usually suffer a waterfall-like performance degradation in the face of crash faults. Some BFT protocols avoid this sudden degradation, but they achieve this at the expense of increased communication and round complexity in fault-free scenarios. In a nutshell, existing protocols lack the adaptability needed to perform optimally under varying conditions.
We propose TockOwl, the first asynchronous consensus protocol with fault adaptability. TockOwl features quadratic communication and constant round complexity, allowing it to remain efficient in fault-free scenarios. TockOwl also possesses crash robustness, enabling it to maintain stable performance when facing crash faults. These properties collectively ensure the fault adaptability of TockOwl.
Furthermore, we propose TockOwl+ that has network adaptability. TockOwl+ incorporates both fast and slow tracks and employs hedging delays, allowing it to achieve low latency comparable to partially synchronous protocols without waiting for timeouts in asynchronous environments. Compared to the latest dual-track protocols, the slow track of TockOwl+ is simpler, implying shorter latency in fully asynchronous environments.
An attack on ML-DSA using an implicit hint
The security of ML-DSA, like most signature schemes, is partially based on the fact that the nonce used to generate the signature is unknown to any attacker. In this work, we exhibit a lattice-based attack that is possible if the nonces share implicit or explicit information. From a collection of signatures whose nonces share certain coefficients, it is indeed possible to build a collection of non-full-rank lattices. Intersecting them, we show how to create a low-rank lattice that contains one of the polynomials of the secret key, which in turn can be recovered using lattice reduction techniques.
There are several interpretations of this result: firstly, it can be seen as a generalization of a fault-based attack on BLISS presented at SAC'16 by Thomas Espitau et al. Alternatively, it can be understood as a side-channel attack on ML-DSA, in the case where an attacker is able to recover only one of the coefficients of the nonce used during the generation of the signature. For ML-DSA-II, we show that $4 \times 160$ signatures and a few hours of computation are sufficient to recover the secret key on a desktop computer. Lastly, our result shows that simple countermeasures, such as permuting the generation of the nonce coefficients, are not sufficient.
Honorific Security: Efficient Two-Party Computation with Offloaded Arbitration and Public Verifiability
Secure two-party computation (2PC) allows two distrustful parties to jointly compute some functions while keeping their private secrets unrevealed. Adversaries are often categorized as semi-honest or malicious, depending on whether they follow the protocol specification. While a semi-honest secure protocol is fast, it strongly assumes that all participants follow the protocol; a maliciously secure protocol often requires heavy verification steps and a preprocessing phase to prevent any cheating. Covert security, first introduced by Aumann and Lindell [10], explores the "middle ground" between semi-honest and malicious security, such that active adversaries who cheat will be caught with a predefined probability. However, it remains an open question how to properly determine this probability before protocol execution, and in current constructions misbehavior detection must be performed by the other honest participants via cut-and-choose. To achieve public verifiability and meanwhile outsource the verification steps, [12] presented publicly auditable security, which enables an external auditor to verify the correctness of the result. Essentially, this requires the additional assumption of a bulletin-board functionality to keep track of the broadcast messages for the auditor. Moreover, the auditor cannot identify the cheater, but only point out the incorrect result. The (robust) accountability family [40, 62, 76] achieves both output delivery guarantee and public verifiability, but relies on heavy offline and online constructions with zero-knowledge proofs.
In this paper, we propose a new security notion called honorific security, where an external arbiter can identify the cheater with overwhelming probability under malicious corruption. With honorific security, we do not prevent a corrupted party from cheating during the online stage, but enable the honest party to detect and publicly punish the cheater later on. We show that a maliciously secure garbled circuit (GC) [83] protocol can thus be constructed with only slightly more overhead than a passively secure one. Our construction performs up to 2.37 times and 13.30 times as fast as the state-of-the-art protocols with covert and malicious security, respectively.
Laconic Cryptography with Preprocessing
Laconic cryptography focuses on designing two-message protocols that allow secure computation on large datasets while minimizing communication costs. While laconic cryptography protocols achieve asymptotically optimal communication complexity for many tasks, their concrete efficiency is prohibitively expensive due to the heavy use of public-key techniques or the non-black-box use of cryptographic primitives.
In this work, we initiate the study of "laconic cryptography with preprocessing", introducing a model that includes an offline phase to generate database-dependent correlations, which are then used in a lightweight online phase. These correlations are conceptually simple, relying on linear-algebraic techniques. This enables us to develop a protocol for private laconic vector oblivious linear evaluation (plvOLE). In such a protocol, the receiver holds a large database $\mathsf{DB}$, and the sender has two messages $v$ and $w$, along with an index $i$. The receiver learns the value $v \cdot \mathsf{DB}_i + w$ without revealing other information.
Our protocol, which draws from ideas developed in the context of private information retrieval with preprocessing, serves as the backbone for two applications of interest: laconic private set intersection (lPSI) for large universes and laconic function evaluation for RAM-programs (RAM-LFE). Based on our plvOLE protocol, we provide efficient instantiations of these two primitives in the preprocessing model.
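To make the plvOLE functionality concrete, the following sketch shows only what the protocol computes, not the preprocessing-based secure realization; the field modulus and the toy database are illustrative assumptions.

```python
Q = 2**61 - 1  # illustrative prime field (assumption)

def plvole_ideal(db, i, v, w):
    # Ideal plvOLE: the receiver holds DB, the sender holds (v, w, i);
    # the receiver learns v * DB[i] + w mod Q and nothing else (in
    # particular, neither i nor v, w individually).
    return (v * db[i] + w) % Q

# toy run: receiver's database vs. sender's secret affine function
db = [3, 1, 4, 1, 5, 9, 2, 6]
print(plvole_ideal(db, i=4, v=7, w=11))  # 7*5 + 11 = 46
```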
A Method for Securely Comparing Integers using Binary Trees
In this paper, we propose a new protocol for secure integer comparison in which each of two parties holds a private integer. The goal of the computation is to compare both integers securely and reveal to the parties a single bit indicating which integer is larger; nothing more should be revealed. To achieve low communication overhead, this can be done using homomorphic encryption (HE). Our protocol relies on binary decision trees, a special case of branching programs, and can be implemented using HE. We assume a client-server setting where each party holds one of the integers; the client also holds the private key of a homomorphic encryption scheme, and the evaluation is done by the server. In this setting, our protocol outperforms the original DGK protocol of Damgård et al. and reduces the running time by at least 45%. In the case where both inputs are encrypted, our scheme reduces the running time of a variant of DGK by 63%.
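As an illustration of the decision-tree view (in the clear, without the HE layer; whether this matches the paper's exact tree shape is an assumption), integer comparison can be expressed as a shallow branching program over the bits of the inputs:

```python
def compare_via_tree(x_bits, y_bits):
    # Walk a binary decision tree over the bits (MSB first): the first
    # differing bit decides the comparison, and the leaf reached encodes
    # the single output bit [x > y]. In the actual protocol the tree is
    # evaluated under homomorphic encryption, so the evaluating server
    # learns neither the path taken nor the result.
    for xb, yb in zip(x_bits, y_bits):
        if xb != yb:
            return xb  # 1 iff x > y
    return 0           # equal, hence not greater

def to_bits(v, n):
    return [(v >> (n - 1 - i)) & 1 for i in range(n)]

n = 8
print(compare_via_tree(to_bits(200, n), to_bits(57, n)))  # 1: 200 > 57
```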
Deny Whatever You Want: Dual-Deniable Public-Key Encryption
We introduce an enhanced requirement of deniable public key encryption that we call dual-deniability. It asks that a sender who is coerced should be able to produce fake randomness, which can explain the target ciphertext as the encryption of any alternative message under any valid key she/he desires to deny. Compared with the original notion of deniability (Canetti et al. in CRYPTO ’97, hereafter named message-deniability), this notion further provides a shield for the anonymity of the receiver against coercion attacks.
We first give a formal definition of dual-deniability, along with its weak-mode variant. For conceptual understanding, we then show that dual-deniability implies semantic security and anonymity against CPA, separates full robustness, and even contradicts key-less or mixed robustness, while it is (constructively) implied by key-deniability and full robustness, with a minor assumption, for bit encryption. As for the availability of dual-deniability, our main scheme is a generic approach from ciphertext-simulatable PKE, where we devise a subtle multi-encryption schema to hide the true message within random masking ciphertexts in the plan-ahead setting. Further, we leverage the weak model to present a more efficient scheme having negligible detection probability and constant ciphertext size. Besides, we revisit the notable scheme of Sahai and Waters (STOC ’14) and show it is inherently dual-deniable. Finally, we extend the Boneh-Katz transform to capture CCA security, deriving dual-deniable and CCA-secure PKE from any selectively dual-deniable IBE in the multi-TA setting. Overall, our work establishes the feasibility of anonymous messaging against coercion attacks.
Pseudorandom Multi-Input Functional Encryption and Applications
We construct the first multi-input functional encryption (MIFE) and indistinguishability obfuscation (iO) schemes for pseudorandom functionalities, where the output of the functionality is pseudorandom for every input seen by the adversary. Our MIFE scheme relies on LWE and evasive LWE (Wee, Eurocrypt 2022 and Tsabary, Crypto 2022) for constant arity functions, and a strengthening of evasive LWE for polynomial arity. Thus, we obtain the first MIFE and iO schemes for a nontrivial functionality from conjectured post-quantum assumptions.
Along the way, we identify subtle issues in the proof of witness encryption from evasive LWE by prior work and believe that a similar strengthening of evasive LWE should also be required for their proof, for the same reasons as ours. We demonstrate the power of our new tools via the following applications:
1. Multi Input Predicate Encryption for Constant Arity. Assuming evasive LWE and LWE, we construct a multi-input predicate encryption scheme (MIPE) for P, supporting constant arity. The only prior work to support MIPE for P with constant arity, by Agrawal et al. (Crypto 2023), relies on a strengthening of Tensor LWE in addition to LWE and evasive LWE.
2. Multi Input Predicate Encryption for Polynomial Arity. Assuming a stronger variant of evasive LWE and LWE, we construct MIPE for P for polynomial arity. MIPE for polynomial arity supporting P was not known before, to the best of our knowledge.
3. Two Party ID Based Key Exchange. Assuming a stronger variant of evasive LWE and LWE, along with Decision Bilinear Diffie-Hellman, we provide the first two-party ID based Non-Interactive Key Exchange (ID-NIKE) scheme in the standard model. This leads to the first ID-NIKE in the standard model without using multilinear maps or indistinguishability obfuscation.
4. Instantiating the Random Oracle. We use our pseudorandom iO to instantiate the random oracle in several applications that previously used iO (Hohenberger, Sahai and Waters, Eurocrypt 2014) such as full-domain hash signature based on trapdoor permutations and more.
Our tools of MIFE and iO for pseudorandom functionalities appear quite powerful and yield extremely simple constructions when used in applications. We believe they provide a new pathway for basing “extreme” cryptography, which has so far required full fledged iO, on the presumably weaker evasive LWE in the post quantum regime.
Compact Pseudorandom Functional Encryption from Evasive LWE
We provide the first construction of compact Functional Encryption (FE) for pseudorandom functionalities from the evasive LWE and LWE assumptions. Intuitively, a pseudorandom functionality means that the output of the circuit is indistinguishable from uniform for every input seen by the adversary. This yields the first compact FE for a nontrivial class of functions which does not rely on pairings.
We demonstrate the power of our new tool by using it to achieve optimal parameters for both key-policy and ciphertext-policy Attribute Based Encryption (ABE) schemes for circuits of unbounded depth, from just the LWE and evasive LWE assumptions. This improves prior work along the twin axes of assumptions and performance. In more detail, this allows us to: (i) replace the assumption of circular evasive LWE used in the work of Hsieh, Lin and Luo (FOCS 2023) by plain evasive LWE, (ii) remove the need for the circular tensor LWE assumption in the work of Agrawal, Kumari and Yamada (CRYPTO 2024), (iii) improve parameters obtained by both aforementioned works to achieve asymptotic optimality.
Previously, optimal parameters for ABE schemes were only achieved using compact FE for P (Jain, Lin and Luo, Eurocrypt 2023) – we show that compact FE for a much weaker class (albeit with incomparable security) suffices. Thus we obtain the first optimal ABE schemes for unbounded depth circuits which can be conjectured post-quantum secure. Along the way, we define and construct a new primitive which we term laconic pseudorandom obfuscation from the same assumptions – this may be of independent interest.
On the success rate of simple side-channel attacks against masking with unlimited attack traces
Side-channel attacks following a classical differential power analysis (DPA) style are well understood, along with the effect the masking countermeasure has on them. However, simple attacks (SPA), where the target variable does not vary thanks to a known value such as the plaintext, are less studied. In this paper, we investigate how the masking countermeasure affects the success rate of simple attacks. To this end, we provide theoretical, simulated, and practical experiments. Interestingly, we will see that masking can allow us to asymptotically recover more information on the secret than in the case of an unprotected implementation, depending on the masking type. We will see that this is true for masking encodings that add non-linearity with respect to the leakages, such as arithmetic masking, while it is not for Boolean masking. We believe this context provides interesting results, as the average information of arithmetic encoding is proven less informative than the Boolean one.
Mobile Byzantine Agreement in a Trusted World
In this paper, we address the Byzantine Agreement problem in synchronous systems where Byzantine agents can move from process to process, corrupting their host.
We focus on three representative models: \emph{Garay's}, \emph{Bonnet's} and \emph{Buhrman's} models.
In \emph{Garay's model}, when a process has been left by the Byzantine agent, it is in the \emph{cured} state; it is aware of its condition and thus can remain silent for a round to prevent the dissemination of wrong information.
In \emph{Bonnet's model}, a cured process may send messages (based on a state corrupted by the malicious agent); however, it behaves correctly in the way it sends those messages, i.e., it sends messages according to the algorithm.
In \emph{Buhrman's model} Byzantine agents move together with the message.
It has been shown that in order to solve Byzantine Agreement in \emph{Garay's model} at least $4t+1$ processors are needed, in \emph{Bonnet's model} at least $5t+1$ processors are needed, while in \emph{Buhrman's model} at least $3t+1$ processors are needed.
In this paper we aim to increase the tolerance to mobile Byzantines by integrating a trusted counter abstraction into the above models. This abstraction prevents nodes from equivocating. In the new models we prove that at least $3t+1$, respectively $4t+1$, and $2t+1$ processors are needed to tolerate $t$ mobile Byzantine agents. Furthermore, we propose novel Mobile Byzantine Agreement algorithms that match these new lower bounds for \emph{Garay's}, \emph{Bonnet's} and \emph{Buhrman's} models.
Lattice-Based Sanitizable Signature Schemes: Chameleon Hash Functions and More
Sanitizable Signature Schemes (SSS) enable a designated party, the sanitizer, to modify predefined parts of a signed message without invalidating the signature, making them useful for applications like pseudonymization and redaction. Since their introduction by Ateniese et al. (ESORICS'05), several classical SSS constructions have been proposed, but none have been instantiated from quantum-resistant assumptions. In this work, we develop the first quantum-secure sanitizable signature schemes based on lattice assumptions. Our primary focus is on SSS constructions that rely on chameleon hash functions (CHFs), a key component for enabling the controlled modification of messages. While lattice-based CHFs exist, they do not meet the required security guarantees for SSS, becoming insecure under adversarial access to an adapt oracle. To address this, we construct a novel lattice-based CHF that achieves collision resistance even in such settings, called full collision resistance. However, our CHF lacks the uniqueness property, a limitation we show to be inherent in lattice-based CHFs. As a result, our SSS constructions initially fall short of achieving the critical security property of accountability. To overcome this, we apply a transformation based on verifiable ring signatures (VRS), for which we present the first lattice-based instantiation. Additionally, we provide a comprehensive analysis of existing classical SSS constructions, explore their potential for post-quantum instantiations, and present new attacks on previously assumed secure SSS schemes. Our work closes the gap in constructing quantum-secure SSS and lays the groundwork for further research into advanced cryptographic primitives based on lattice assumptions.
Voting with coercion resistance and everlasting privacy using linkable ring signatures
We propose an e-voting protocol based on a novel linkable ring signature scheme with unconditional anonymity. In our system, all voters create private credentials and register their public counterparts. To vote, they create a ring (anonymity set) consisting of public credentials together with a proof of knowledge of their secret credential via our signature. Its unconditional anonymity prevents an attacker, no matter how powerful, from deducing the identity of the voter, thus attaining everlasting privacy. Additionally, our protocol provides coercion resistance in the JCJ framework; when an adversary tries to coerce a voter, the attack can be evaded by creating a signature with a fake but indistinguishable credential. During a moment of privacy, they will cast their real vote. Our scheme also provides verifiability and ballot secrecy.
Relations among new CCA security notions for approximate FHE
In a recent Eurocrypt'24 paper, Manulis and Nguyen proposed a new CCA security notion, vCCA, and associated construction blueprints to leverage both CPA-secure and correct FHE beyond the CCA1 security barrier. However, because their approach is only valid under the correctness assumption, it leaves a large part of the FHE spectrum uncovered, as many FHE schemes used in practice turn out to be approximate and, as such, do not satisfy the correctness assumption. In this paper, we improve their work by defining and investigating a variant of their security notion which is suitable for the more general case where approximate FHE schemes are included. As the passive security of approximate FHE schemes is more appropriately captured by CPAD rather than CPA security, we start from the former notion to define our new vCCAD security notion. Although we show that vCCA and vCCAD are equivalent when the correctness assumption holds, we establish that vCCAD security is strictly stronger than vCCA security in the general case. In doing so, we interestingly establish several new separation results between variants of CPAD security of increasing strength. This allows us to clarify the relationship between vCCA security and CPAD security, and to reveal that the security-notion landscape is much simpler for correct FHE than when approximate ones are included --- in which case, for example, we establish that multiple-challenge security notions are strictly stronger than single-challenge ones for both CPAD and vCCAD security. Lastly, we give concrete construction blueprints, showing how to leverage some of the blueprints proposed by Manulis and Nguyen to achieve vCCAD security. As a result, vCCAD security is the strongest CCA security notion so far known to be achievable by both correct and approximate FHE schemes.
PHOENIX: Crypto-Agile Hardware Sharing for ML-KEM and HQC
The transition to quantum-safe public-key cryptography has begun: for key agreement, NIST has standardized ML-KEM and selected HQC for future standardization. The relative immaturity of these schemes encourages crypto-agile implementations, to facilitate easy transitions between them. Intelligent crypto-agility requires efficient sharing strategies to compute operations from different cryptosystems using the same resources. This is particularly challenging for cryptosystems with distinct mathematical foundations, like lattice-based ML-KEM and code-based HQC.
We introduce PHOENIX, the first crypto-agile hardware coprocessor for lattice- and code-based cryptosystems--specifically, ML-KEM and HQC, at all three NIST security levels--with an effective agile sharing strategy.
PHOENIX accelerates polynomial multiplication, which is the main operation in both cryptosystems, and the current bottleneck of HQC. To maximise sharing, we replace HQC's Karatsuba-based polynomial multiplication with the Frobenius Additive FFT (FAFFT), which is similar on an abstract level to ML-KEM's Number Theoretic Transform (NTT).
We show that the FAFFT already brings substantial performance improvements in software. In hardware, our sharing strategy for the FAFFT and NTT is based on a new SuperButterfly unit that seamlessly switches between these two FFT variants over completely different rings. This is, to our knowledge, the first FAFFT hardware accelerator of any kind. We have integrated PHOENIX in a real System-on-Chip FPGA scenario, where our performance measurements show that efficient crypto-agility for lattice- and code-based KEMs can be achieved with low overhead.
Improved Round-by-round Soundness IOPs via Reed-Muller Codes
We give an IOPP (interactive oracle proof of proximity) for trivariate Reed-Muller codes that achieves the best known query complexity in some range of security parameters. Specifically, for degree $d$ and security parameter $\lambda\leq \frac{\log^2 d}{\log\log d}$ , our IOPP has $2^{-\lambda}$ round-by-round soundness, $O(\lambda)$ queries, $O(\log\log d)$ rounds and $O(d)$ length. This improves upon the FRI [Ben-Sasson, Bentov, Horesh, Riabzev, ICALP 2018] and the STIR [Arnon, Chiesa, Fenzi, Yogev, Crypto 2024] IOPPs for Reed-Solomon codes, that have larger query and round complexity standing at $O(\lambda \log d)$ and $O(\log d+\lambda\log\log d)$ respectively. We use our IOPP to give an IOP for the NP-complete language Rank-1-Constraint-Satisfaction with the same parameters.
Our construction is based on the line versus point test in the low-soundness regime. Compared to the axis-parallel test (which is used in all prior works), the general affine lines test has better soundness, which is the main source of our improvement.
Using this test involves several complications, most significantly that projection to affine lines does not preserve individual degrees, and we show how to overcome these difficulties. En route, we extend some existing machinery to more general settings. Specifically, we give proximity generators for Reed-Muller codes, show a more systematic way of handling "side conditions" in IOP constructions, and generalize the compiling procedure of [Arnon, Chiesa, Fenzi, Yogev, Crypto 2024] to general codes.
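The following sketch illustrates, under illustrative parameters (a small prime field and a toy degree-3 polynomial, both assumptions), the core fact the line-versus-point test relies on: the restriction of a low-degree multivariate polynomial to an affine line is a low-degree univariate polynomial, so a few samples on the line determine it everywhere and can be cross-checked against a point query.

```python
import random

Q = 97  # small prime field, for illustration only

def f(x, y, z):  # total-degree-3 example polynomial
    return (3*x*x*y + 5*y*z + 7) % Q

def interp_eval(xs, ys, t):
    # Evaluate the Lagrange interpolant through (xs, ys) at t, mod Q.
    total = 0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num = den = 1
        for j, xj in enumerate(xs):
            if i != j:
                num = num * (t - xj) % Q
                den = den * (xi - xj) % Q
        total = (total + yi * num * pow(den, -1, Q)) % Q
    return total

def line_vs_point_test(deg=3):
    # The 'lines oracle' answers with f restricted to a random affine
    # line (univariate of degree <= deg), determined by deg+1 samples;
    # the verifier cross-checks it against one fresh 'points oracle' query.
    a = [random.randrange(Q) for _ in range(3)]  # line base point
    b = [random.randrange(Q) for _ in range(3)]  # line direction
    on_line = lambda t: [(ai + t*bi) % Q for ai, bi in zip(a, b)]
    xs = list(range(deg + 1))
    ys = [f(*on_line(t)) for t in xs]            # honest line answer
    t = random.randrange(Q)
    return interp_eval(xs, ys, t) == f(*on_line(t))

print(line_vs_point_test())  # True for an honest oracle
```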
Insecurity of One Decentralized Attribute-based Signature Scheme for Social Co-governance
We show that the attribute-based signature scheme [Information Sciences, 654(2024), 119839] is insecure, because an adversary can generate valid signatures for any message even though he cannot access the signer's secret key. The four components of the signature $\{\delta_1, \delta_2, \delta_3, \delta_4\}$ are not tightly bound to the target message $M$ or to the signer's public key. The dependency between the signer's public key and secret key is not properly used to construct any intractable problem. As a result of this inherent flaw, the adversary can find an efficient signing algorithm functionally equivalent to the valid signing algorithm.
Nominal State-Separating Proofs
State-separating proofs are a powerful tool for structuring cryptographic arguments so that they are amenable to mechanization, as has been shown through implementations such as SSProve. However, the treatment of separation for heaps has never been satisfactorily addressed. In this work, we present the first comprehensive treatment of nominal state separation in state-separating proofs using nominal sets. We provide a Coq library, called Nominal-SSProve, that builds on nominal state separation, supporting mechanized proofs that appear more concise and arguably more elegant.
Endomorphisms for Faster Cryptography on Elliptic Curves of Moderate CM Discriminants
This article generalizes the widely-used GLV decomposition for (multi-)scalar multiplication to a much broader range of elliptic curves with moderate CM discriminant \( D < 0 \). Previously, it was commonly believed that this technique could only be applied efficiently for small values of \( D \) (e.g., up to \( 100 \)). In practice, curves with \( j \)-invariant \( 0 \) are most frequently employed, as they have the smallest possible \( D = -3 \). However, $j = 0$ curves are either too suspicious for conservative government regulators (e.g., for Russian ones, which prefer $D = -619$) or unavailable under imposed extra restrictions in a series of cryptographic settings. The article thus participates in the decade-long development of numerous curves with moderate \( D \) in the context of zk-SNARKs. Such curves are typically derived from others, which limits the ability to generate them while controlling the magnitude of \( D \).
Highway to Hull: An Algorithm for Solving the General Matrix Code Equivalence Problem
The matrix code equivalence problem consists, given two matrix spaces $\mathcal{C},\mathcal{D}\subset \mathbb{F}_q^{m\times n}$ of dimension $k$, in finding invertible matrices $P\in\textrm{GL}_m(\mathbb{F}_q)$ and $Q\in\textrm{GL}_n(\mathbb{F}_q)$ such that $\mathcal{D} =P\mathcal{C} Q^{-1}$. Recent signature schemes such as MEDS and ALTEQ relate their security to the hardness of this problem. Narayanan et al. recently published an algorithm solving this problem in the case $k = n =m$ in $\widetilde{\mathcal{O}}(q^{\frac k 2})$ operations. We present a different algorithm which solves the problem in the general case. Our approach consists in reducing the problem to the matrix code conjugacy problem, i.e. the case $P=Q$. For the latter problem, similarly to the permutation code equivalence problem in the Hamming metric, a natural invariant based on the \emph{Hull} of the code can be used. Next, the equivalence of codes can be deduced using a standard list-collision argument. For $k=m=n$, our algorithm achieves the same complexity as in the aforementioned reference. However, it extends to a much broader range of parameters.
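As a sketch of the hull invariant in its classical Hamming-metric guise (an analogue only: the paper's hull is defined for matrix codes under a suitable bilinear form), one can compute $\mathrm{Hull}(C) = C \cap C^\perp$ over GF(2) with plain linear algebra:

```python
def nullspace_gf2(rows, n):
    # Kernel of an n-column GF(2) matrix; rows are int bitmasks
    # (bit j of a row = the entry in column j).
    piv = {}  # pivot column -> reduced row
    for r in rows:
        for c in sorted(piv, reverse=True):
            if (r >> c) & 1:
                r ^= piv[c]
        if r:
            piv[r.bit_length() - 1] = r
    for c in list(piv):          # full RREF: clear pivot columns elsewhere
        for d in piv:
            if d != c and (piv[d] >> c) & 1:
                piv[d] ^= piv[c]
    kernel = []
    for f in range(n):           # one kernel vector per free column
        if f in piv:
            continue
        v = 1 << f
        for c, row in piv.items():
            if (row >> f) & 1:
                v |= 1 << c
        kernel.append(v)
    return kernel

def hull_basis(gen, n):
    # Hull(C) = C intersect C^perp: v is in C iff orthogonal to C^perp
    # (the kernel of the generator matrix), and in C^perp iff orthogonal
    # to C, so the hull is the kernel of both parity-check sets stacked.
    dual = nullspace_gf2(gen, n)
    return nullspace_gf2(dual + gen, n)

# toy self-dual example: C = span{0011, 1100} satisfies Hull(C) = C
print([format(v, "04b") for v in hull_basis([0b0011, 0b1100], 4)])
```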
Partial Key Exposure Attacks on UOV and Its Variants
In CRYPTO 2022, Esser et al. proposed a partial key exposure attack on several post-quantum cryptographic schemes including Rainbow which is a variant of UOV. The task of the attack is to recover a full secret key from its partial information such as a secret key with symmetric/asymmetric bit errors. One of the techniques Esser et al. developed is a partial enumeration that combines the standard algorithms to solve the MQ problem with enumeration.
Although an efficient attack on Rainbow was proposed, UOV and its variants have still received much attention, since UOV and three of its variants, i.e., MAYO, QR-UOV and SNOVA, were selected as Round 2 candidates of the additional call for digital signature scheme proposals by NIST.
In this paper, we analyze partial key exposure attacks on UOV, MAYO, and QR-UOV. Although our proposed attacks use the partial enumeration, we refine its enumeration strategy. We employ two enumeration strategies and analyze the complexity of the proposed attacks. Then, we identify a structural difference between UOV and its variants with respect to resisting partial enumeration. Specifically, the partial enumeration is effective if the number of vinegar variables is smaller than the number of equations and the order of the finite field is small.
As a result, the proposed attack is the most effective on MAYO. While our attacks on UOV and QR-UOV are effective only when the symmetric error probabilities are 0.11 and 0.05, respectively, that on MAYO is effective even when the probability is close to 0.5.
Bayesian Leakage Analysis: A Framework for Analyzing Leakage in Cryptography
We introduce a framework based on Bayesian statistical inference for analyzing leakage in cryptography and its vulnerability to inference attacks. Our framework naturally integrates auxiliary information, defines a notion of adversarial advantage, and provides information-theoretic measures that capture the security of leakage patterns against both full and functional recovery attacks.
We present two main theorems that bound the advantage of powerful inference techniques: the maximum a posteriori (MAP) estimate, the maximum likelihood estimate (MLE), and the MAP test. Specifically, we show that the advantage of these methods is exponentially bounded by new entropy measures that capture the susceptibility of leakage patterns to inference.
To demonstrate the applicability of our framework, we design and implement an automated leakage attack engine, Bayle, which leverages a novel inference algorithm that efficiently computes MAP estimates for a large class of i.i.d. leakage models. These models include query equality leakage, the combination of query equality and volume leakage, and leakage patterns arising from naive conjunctions.
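A minimal sketch of MAP inference against query-equality leakage under an i.i.d. model (the prior and token names are illustrative assumptions, and this is not Bayle itself): by the rearrangement inequality, matching the frequency ranks of leaked tokens to the prior probabilities maximizes the posterior over injective assignments.

```python
from collections import Counter

# Auxiliary information: an assumed keyword-frequency prior.
prior = {"heart": 0.5, "flu": 0.3, "rare": 0.2}

def map_estimate(leaked, prior):
    # Under an i.i.d. query model the MAP assignment maximizes
    # sum_t count(t) * log prior[kw(t)] over injective assignments;
    # matching sorted frequency ranks achieves this maximum
    # (rearrangement inequality on count * log p).
    tokens = [t for t, _ in Counter(leaked).most_common()]
    keywords = sorted(prior, key=prior.get, reverse=True)
    return dict(zip(tokens, keywords))

# Query-equality leakage: the server sees equal-query tokens, not keywords.
leaked = ["t1", "t2", "t1", "t1", "t3", "t2"]
print(map_estimate(leaked, prior))
# {'t1': 'heart', 't2': 'flu', 't3': 'rare'}
```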
Oblivious Immutable Memory
An oblivious RAM (ORAM) compiler is a cryptographic tool that transforms a program $P$ running in time $n$ into an equivalent program $\tilde P$, with the property that the sequence of memory addresses read from/written to by $\tilde P$ reveal nothing about $\tilde P$'s data (Goldreich and Ostrovsky, JACM'96). An efficient ORAM compiler $C$ should achieve some combination of the following:
- Low bandwidth blow-up: $\tilde P$ should read/write a similar amount of data as does $P$.
- Low latency: $\tilde P$ should incur a similar number of roundtrips to the memory as does $P$.
- Low space complexity: $\tilde P$ should run in as few words of local memory as possible.
It is well known that for a generic compiler (i.e. one that works for any RAM program $P$), certain combinations of efficiencies are impossible. Any generic ORAM compiler must incur $\Omega(\log n)$ bandwidth blow-up, and any ORAM compiler with no latency blow-up must incur $\Omega(\sqrt n)$ bandwidth blow-up and/or local space. Moreover, while an $O(\log n)$ bandwidth blow-up compiler is known, it requires the assumption that one-way functions exist and incurs enormous constant factors.
To circumvent the above problems and improve efficiency of particular ORAM programs, we develop a compiler for a specific class of programs. Let $P$ be a program that interacts with an immutable memory. Namely, $P$ may write values to memory, then read them back, but it cannot change values that were already written. Using only information-theoretic techniques, we compile any such $P$ into an oblivious form $\tilde P$ with a combination of efficiencies that no generic ORAM compiler can achieve:
- $\tilde P$ incurs $\Theta(\log n)$ amortized bandwidth blow-up.
- $\tilde P$ incurs $O(1)$ amortized latency blow-up.
- $\tilde P$ runs in $O(\lambda)$ words of local space ($\tilde P$ incurs an error with probability $2^{-\Omega(\lambda)}$).
We show that this, for instance, implies that any pure functional program can be compiled with the same asymptotics.
Our work builds on and is compatible with prior work (Appan et al., CCS'24) that showed similar results for pointer machine programs that manipulate objects with constant in-degree (i.e., the program may only maintain a constant number of pointers to any one memory cell; our immutable memory approach does not have this limitation). By combining techniques, we can consider programs that interact with a mixed memory that allows each memory cell to be updated until it is frozen, after which it becomes immutable, allowing further reads to be compiled with the above asymptotics, even when in-degree is high. Many useful algorithms/data structures can be naturally implemented as mixed memory programs, including suffix trees (powerful data structures used in computational biology) and deterministic finite automata (DFAs).
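A toy model of the mixed-memory interface described above (interface only; the paper's contribution is compiling such programs obliviously, which this sketch does not attempt):

```python
class MixedMemory:
    """Toy mixed memory: a cell may be updated until it is frozen, after
    which it is immutable and any further write is an error. (Interface
    only; the oblivious compilation itself is omitted here.)"""

    def __init__(self):
        self._cells = {}
        self._frozen = set()

    def write(self, addr, value):
        if addr in self._frozen:
            raise ValueError(f"cell {addr} is frozen (immutable)")
        self._cells[addr] = value

    def freeze(self, addr):
        self._frozen.add(addr)  # from now on, addr is read-only

    def read(self, addr):
        return self._cells[addr]

mem = MixedMemory()
mem.write(0, "left-child")   # mutable phase: updates allowed
mem.write(0, "right-child")  # ...until the cell is frozen
mem.freeze(0)
print(mem.read(0))           # reads of frozen cells are the case compiled
                             # with O(log n) bandwidth and O(1) latency
```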
The Quantum Decoherence Model: Everlasting Composable Secure Computation and More
Quantum cryptography allows one to achieve security goals which are unobtainable using classical cryptography alone: it offers the promise of everlasting privacy. That is, an adversary trying to attack a protocol must succeed during the run of the protocol; after the protocol has terminated, security holds unconditionally.
In this work, we initiate the study of a new model which we call the quantum decoherence model (QDM). In a nutshell, this model captures adversaries that are computationally bounded during the run of a protocol (and some time after), but become computationally unbounded long after the protocol terminates. Importantly, once the adversary becomes computationally unbounded, he can only remember a bounded number of qubits from before the computational bound was lifted.
We provide a variant of the Universal Composability framework which captures the new notion of quantum decoherence and augment it with quantum random oracles. As our main contribution, we construct a non-interactive commitment scheme achieving unconditional and statistical security against malicious senders and everlasting security against malicious receivers under our new security notion. Such commitments imply general secure multiparty computation with everlasting security.
Finally, we show that our core technique can be applied to a broader spectrum of problems: it gives rise to everlasting public-key encryption and OT in the QDM. We also consider the weaker notion of incompressible encryption in the setting of quantum decoherence, and show that post-quantum IND-CPA-secure public-key encryption is sufficient to realize this notion without resorting to random oracles.
At the technical core of our constructions is a new, conceptually simple yet powerful reverse entropic uncertainty relation.
DSM: Decentralized State Machine - The Missing Trust Layer of the Internet
The modern internet relies heavily on centralized trust systems controlled by corporations, governments, and intermediaries to manage authentication, identity, and value transfer. These models introduce fundamental vulnerabilities, including censorship, fraud, and systemic insecurity. The Decentralized State Machine (DSM) addresses these issues by introducing a mathematically enforced trust layer that eliminates the need for consensus mechanisms, third-party validators, and centralized infrastructure. DSM enables quantum-resistant, deterministic state transitions for digital identity and value exchange—offering immediate finality, offline capability, and tamper-proof forward-only state progression.
DSM replaces traditional blockchain execution models with deterministic, pre-committed state transitions, enabling secure, multi-path workflows without requiring Turing-completeness or global consensus. The protocol architecture is based on a straight hash chain with sparse indexing and Sparse Merkle Trees (SMTs), ensuring efficient verification, scalability, and privacy. A bilateral isolation model supports asynchronous, offline operation with built-in consistency guarantees. DSM introduces a sustainable, gas-free economic model based on cryptographic subscription commitments.
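A minimal sketch of a forward-only hash chain, which is one reading of the "straight hash chain" above (the sparse indexing, SMT anchoring, and pre-committed transitions are omitted; the operation encoding is purely illustrative):

```python
import hashlib

def transition(state, operation):
    # Each new state commits to the entire history: the chain can only
    # be extended forward, never rewritten (tamper-evident progression).
    return hashlib.sha256(state + operation).digest()

genesis = hashlib.sha256(b"genesis").digest()
s1 = transition(genesis, b"transfer:alice->bob:5")
s2 = transition(s1, b"transfer:bob->carol:2")
# A verifier replays the operations from genesis (or from a sparse
# checkpoint) and checks that the recomputed head equals s2.
```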
This paper outlines the architecture, cryptographic foundations, and security guarantees of DSM, and demonstrates how it achieves verifiable, trustless interaction between peers—both online and offline. By decoupling security from consensus and enabling self-validating state transitions, DSM offers a practical and scalable alternative to conventional internet trust models.
Structured Encryption for Indirect Addressing
The Structured Encryption (StE) framework can be used to capture the encryption and querying of complex data structures on an honest-but-curious server. In this work, we introduce a new type of StE called indirectly addressed multimap encryption (IA-MME). We propose two IA-MME schemes: the "layered multimaps approach", which extends and generalizes the existing "multimap chaining" approach, and a novel technique called the "single multimap approach", which has comparable efficiency and strictly better security. We demonstrate that our formalisms simplify and modularize StE solutions for real-world use cases in searchable encryption and SQL databases, and provide simulations demonstrating that our IA-MME constructions lead to tangible efficiency and security gains on realistic data. As a part of our techniques, we identify and correct a technical error in prior constructions — providing greater insight into issues that can arise when composing StE schemes.
Counter Galois Onion (CGO) for Tor: Fast Non-Malleable Onion Encryption
In 2012, the Tor project expressed the need to upgrade Tor's onion encryption scheme to protect against tagging attacks and thereby strengthen its end-to-end integrity protection. Tor proposal 261, where each encryption layer is processed by a strongly secure, yet relatively expensive tweakable wide-block cipher, is the only concrete candidate replacement to be backed by formal, yet partial, security proofs (Degabriele and Stam, EUROCRYPT 2018, and Rogaway and Zhang, PoPETS 2018).
We propose an alternative onion encryption scheme, called Counter Galois Onion (CGO), that follows a minimalistic, modular design and includes several improvements over proposal 261. CGO's underlying primitive is an updatable tweakable split-domain cipher accompanied by a new security notion, which augments the recently introduced rugged pseudorandom permutation (Degabriele and Karadžić, CRYPTO 2022). Thus, we relax the security compared to a tweakable wide-block cipher, allowing for more efficient designs. We suggest a concrete instantiation for the updatable tweakable split-domain cipher and report on our experiments comparing the performance of CGO with Tor's existing onion encryption scheme.
On the Adaptive Security of Free-XOR-based Garbling Schemes in the Plain Model
A Garbling Scheme is a fundamental cryptographic primitive, with numerous theoretical and practical applications. Since its inception by Yao (FOCS'82, '86), optimizing the communication and computation complexities of securely garbling circuits has been an area of active research. One such optimization, and perhaps the most fundamental, is the `Free-XOR' technique (Kolesnikov and Schneider, ICALP'08) which allows XOR gates in a function garbling to not require representation, and therefore communication.
Since then, several works have designed and analysed the security of schemes that adopt the Free-XOR optimisation. In particular: (1) Applebaum (JoC'16) proved that this can be securely instantiated assuming symmetric-key encryption satisfying a notion called RK-KDM security; and (2) Zahur, Rosulek and Evans (Eurocrypt'15) proposed the so-called `Half Gates' scheme, and proved that it can be instantiated assuming hash functions satisfying a notion called CCR security. Although both schemes have been proven selectively secure, prior work leaves it open to analyze whether they satisfy a stronger security notion -- adaptive security -- in the plain model.
In this work, we formally show that the selective security of these two schemes cannot be lifted to adaptive security under the same assumptions. To establish these barriers, we adopt techniques from the work of Kamath et al (Crypto'21), who proved similar negative results for Yao's garbling. We use that as a starting point and introduce new techniques tailored towards addressing Free-XOR-based schemes.
Accelerating Isogeny Walks for VDF Evaluation
VDFs are characterized by sequential function evaluation but an immediate output verification. In order to ensure secure use of VDFs in real-world applications, it is important to determine the fastest implementation. Considering the point of view of an attacker (say with unbounded resources), this paper aims to accelerate the isogeny-based VDF proposed by De Feo-Mason-Petit-Sanso in 2019. It is the first work that implements a hardware accelerator for the evaluation step of an isogeny VDF. To meet our goal, we use redundant representations of integers and introduce a new lookup table-based algorithm for modular reduction. We also provide both a survey of elliptic curve arithmetic to arrive at the most cost-effective curve computations and an in-depth cost analysis of the different base-degree isogenies and methods for isogeny evaluation. The evaluation step of a VDF is defined to be sequential, which means that there is limited scope for parallelism. Nevertheless, taking this constraint into account, our proposed design targets the highest level of parallelism possible at the architectural level of an isogeny VDF implementation.
We provide a technology-independent metric to model the delay of isogeny evaluation, which a VDF developer can use to derive secure parameters. ASIC synthesis results in 28nm are used as a baseline to estimate VDF parameters.
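The abstract does not spell out the table layout, so the following is a generic illustration of lookup-table-based modular reduction with a redundant (lazily reduced) accumulator; the modulus, split point, and table sizes are illustrative assumptions, not the paper's parameters.

```python
P = (1 << 61) - 1   # illustrative Mersenne prime modulus (assumption)
K = 61              # split point: low K bits are kept verbatim

# Precompute T[j][b] = (b << (K + 8*j)) mod P for each high-byte position,
# so reduction needs only table lookups and additions, no multiplications.
TABLES = [[(b << (K + 8 * j)) % P for b in range(256)] for j in range(16)]

def reduce_lookup(x):
    # Keep the low part in redundant (unreduced) form and fold each
    # high byte through its precomputed table.
    acc = x & ((1 << K) - 1)
    hi = x >> K
    j = 0
    while hi:
        acc += TABLES[j][hi & 0xFF]
        hi >>= 8
        j += 1
    while acc >= P:   # final corrective subtractions
        acc -= P
    return acc

x = 123456789123456789123456789
assert reduce_lookup(x) == x % P
```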
Violating Clauser-Horne-Shimony-Holt Inequality Represents Nothing
The Clauser-Horne-Shimony-Holt (CHSH) inequality is of great importance to quantum entanglement and quantum computers. We present a purely mathematical argument for the famous inequality. The essential assumptions in the new argument are totally independent of any physical interpretation, including the hidden-variable interpretation of the EPR thought experiment and the Copenhagen interpretation of quantum mechanics. The findings help in comprehending the inequality from a new point of view.
Pre-Constructed Publicly Verifiable Secret Sharing and Applications
Conventional Publicly Verifiable Secret Sharing (PVSS) protocols allow a dealer to share a secret among $n$ parties without interaction, ensuring that any $t + 1$ parties (where $t+1 \le n$) can recover the secret, while anyone can publicly verify the validity of both the individual shares and the reconstructed secret. PVSS schemes are shown to be a key tool in a wide range of practical applications. In this paper, we introduce Pre-constructed PVSS (PPVSS), an extension of standard PVSS schemes, highlighting its enhanced utility and efficiency in various protocols. Unlike standard PVSS, PPVSS requires the dealer to publish a commitment or encryption of the main secret and incorporates a novel secret reconstruction method. We show that these refinements make PPVSS more practical and versatile than conventional PVSS schemes.
To build a PPVSS scheme, we first point out that the well-known PVSS scheme by Schoenmakers (CRYPTO'99) and its pairing-based variant presented by Heidarvand and Villar (SAC'08) can be seen as special cases of PPVSS, where the dealer also publishes a commitment to the main secret. However, these protocols are not practical for many applications due to efficiency limitations and are less flexible compared to a standard PPVSS scheme. To address this, we propose a general strategy for transforming a Shamir-based PVSS scheme into a PPVSS scheme. Using this strategy, we construct two practical PPVSS schemes in both the Random Oracle (RO) and plain models, grounded in state-of-the-art PVSS designs. Leveraging the new RO-based PPVSS scheme, we revisit some applications and present more efficient variants. Notably, we propose a new universally verifiable e-voting protocol that improves on the alternative scheme by Schoenmakers (CRYPTO'99), reducing the verification complexity with $m$ voters from $O(n^2m)$ to $O(nm)$ exponentiations--a previously unattainable goal with standard PVSS schemes. Our implementation results demonstrate that both our proposed PPVSS schemes and the new universally verifiable e-voting protocol significantly outperform existing alternatives in terms of efficiency.
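For orientation, here is a toy sketch of the Shamir layer together with a published commitment to the main secret, which is the feature distinguishing PPVSS from plain PVSS (the hash commitment and the field are illustrative assumptions; real schemes use encryptions or group commitments with public validity proofs, which this sketch omits).

```python
import hashlib, random

P = 2**127 - 1  # illustrative prime field (assumption)

def share(secret, t, n):
    # Shamir sharing of degree t: any t+1 of the n shares reconstruct.
    # A PPVSS dealer additionally publishes a commitment to the secret
    # itself (a toy hash commitment here, with no validity proof).
    coeffs = [secret] + [random.randrange(P) for _ in range(t)]
    shares = [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
              for i in range(1, n + 1)]
    commitment = hashlib.sha256(str(secret).encode()).hexdigest()
    return shares, commitment

def reconstruct(points):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

shares, com = share(42, t=2, n=5)
assert reconstruct(shares[:3]) == 42  # any 3 of the 5 shares suffice
```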
Defeating AutoLock: From Simulation to Real-World Cache-Timing Exploits against TrustZone
In this article, we present for the first time a cross-core Prime+Probe attack on ARM TrustZone which bypasses the AutoLock mechanism. We introduce our simulation-driven methodology based on gem5 for vulnerability analysis. We demonstrate its utility in reverse engineering a SoC platform in order to study its microarchitectural behavior (caches, etc.) inside a simulator, in spite of hardware protection. We present a novel vulnerability analysis technique which takes into account the cache set occupancy of the targeted victim executable. This proves to be essential in identifying information leakage in the presence of AutoLock. The above tool also identifies the cache lines leaking the maximum amount of information. A cross-core Prime+Probe attack is then mounted on these max-leakage cache lines, both in simulation for fine-tuning and on real hardware. We validate our analysis and attack method on OP-TEE, an open-source trusted execution environment running on a RockPi4 board based on the RK3399 SoC. More specifically, we target the RSA subroutine in the MbedTLS library used inside OP-TEE. Despite the presence of AutoLock, multiplier obfuscation, and assuming a cross-core attack, we are able to retrieve 30% of the key bits, which can later be used in branch-and-prune methods to recover the full key.
DFS: Delegation-friendly zkSNARK and Private Delegation of Provers
Zero-Knowledge Succinct Non-interactive Arguments of Knowledge (zkSNARKs) lead to proofs that can be succinctly verified but require huge computational resources to generate. Prior systems outsource proof generation either through public delegation, which reveals the witness to the third party, or, more preferably, private delegation that keeps the witness hidden using multiparty computation (MPC). However, current private delegation schemes struggle with scalability and efficiency due to MPC inefficiencies, poor resource utilization, and suboptimal design of zkSNARK protocols.
In this paper, we introduce DFS, a new zkSNARK that is delegation-friendly for both public and private scenarios. Prior work focused on optimizing the MPC protocols for existing zkSNARKs, while DFS uses co-design between MPC and zkSNARK so that the protocol is efficient for both distributed computing and MPC. In particular, DFS achieves linear prover time and logarithmic verification cost in the non-delegated setting. For private delegation, DFS introduces a scheme with zero communication overhead in MPC and achieves malicious security for free, which results in logarithmic overall communication; while prior work required linear communication. Our evaluation shows that DFS is as efficient as state-of-the-art zkSNARKs in public delegation; when used for private delegation, it scales better than previous work. In particular, for $2^{24}$ constraints, the total communication of DFS is less than $500$KB, while prior work incurs $300$GB, which is linear in the circuit size. Additionally, we identify and address a security flaw in prior work, EOS (USENIX'23).
Quantum Attacks on Sum of Even-Mansour Construction Utilizing Online Classical Queries
The Sum of Even-Mansour (SoEM) construction, proposed by Chen et al. at Crypto 2019, has become the basis for designing some symmetric schemes, such as the nonce-based MAC scheme $\text{nEHtM}_{p}$ and the nonce-based encryption scheme $\text{CENCPP}^{\ast}$. In this paper, we make the first attempt to study the quantum security of SoEM under the Q1 model, where the targeted encryption oracle can only respond to classical queries rather than quantum ones.
Firstly, we propose a quantum key recovery attack on SoEM21 with a time complexity of $\tilde{O}(2^{n/3})$ along with $O(2^{n/3})$ online classical queries. Compared with the current best classical result, which requires $O(2^{2n/3})$ time, our method offers a quadratic time speedup while maintaining the same number of queries. The time complexity of our attack is smaller than that of quantum exhaustive search by a factor of $2^{n/6}$. We further propose classical and quantum key recovery attacks on the generalized SoEMs1 construction (consisting of $s\geq 2$ independent public permutations), revealing that quantum algorithms can provide a quadratic acceleration over purely classical methods. Our results also imply that the quantum security of SoEM21 cannot be strengthened merely by increasing the number of permutations.
A Place for Everyone vs Everyone in its Place: Measuring and Attacking the Ethereum Global Network
The Ethereum Global Network (EGN) is the peer-to-peer (P2P) network underlying Ethereum and thousands of subsequent blockchain services. Deviating from traditional single-service P2P networks, EGN's multi-service architecture has gained widespread acceptance for supposedly improving node discovery efficiency and security. This paper challenges this belief by critically examining EGN's design and its purported benefits. Our analysis reveals significant shortcomings in EGN's node discovery process. EGN nodes struggle to connect with peers offering the desired service: over three-quarters of connection attempts reach nodes of other services. In an extreme case, one node spent an average of $45\,908$ connection attempts to find each neighbor. Moreover, this blended architecture compromises EGN's security. The network demonstrates high susceptibility to DHT pollution and partition attacks. Even with only $300$ malicious nodes in EGN, an attacker can isolate thousands of nodes, significantly hindering recovery. In contrast, such a small number of malicious nodes has minimal impact on any single-service P2P network. We propose solutions to improve EGN's node discovery efficiency and strengthen its resilience against attacks.
Lifeboats on the Titanic Cryptography
The Titanic was the ship that "could not sink," yet fortunately its designers installed lifeboats (not enough of them) despite having no logical grounding for this waste of space and material. It was out of respect for unforeseen surprises. The NIST post-quantum ciphers represent the best and the brightest in world crypto intelligence. They are certified as good for their purpose, and likely so; alas, not surely so. If we could find a crypto equivalent of the Titanic's lifeboats, should we not load them up for our journey? Indeed, pattern-devoid cryptography (PDC) is the crypto equivalent of the lifeboats that mitigated the Titanic disaster. PDC may be deemed inelegant, inconvenient, and bloated, but it will hold up against quantum computers more powerful than expected and, more importantly, against adversarial mathematical talent greater than expected. That is why we should put up with its negatives and install it, just in case the Titanic story repeats itself in cyberspace. This article elaborates on this proposition.
Heuristic Algorithm for Solving Restricted SVP and its Applications
In lattice-based cryptography, many attacks are performed by finding a short enough vector on a specific lattice. However, length is not always the only restriction on the vector to be found. A typical example is SVP with the infinity norm: since most SVP solving algorithms only aim to find a short vector under the Euclidean norm, the infinity norm is in fact an additional restriction on the vector. In the literature, such problems are usually solved by exhaustive search over a list of short vectors generated by lattice sieving. However, the sieving list might be either too large, or too small to contain a vector passing the additional restriction, which makes the solving algorithm inefficient in some cases.
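The baseline described above, exhaustive filtering of a sieving list, is easy to state in code. A minimal NumPy sketch (the sieving step is abstracted away as an input list of lattice vectors; the bounds are the caller's choice):

    import numpy as np

    def filter_sieve_list(vectors, euclid_bound, inf_bound):
        # Standard approach: sieve for short vectors under the Euclidean
        # norm, then exhaustively keep those that also satisfy the extra
        # restriction, here an infinity-norm bound.
        return [v for v in vectors
                if np.linalg.norm(v) <= euclid_bound
                and np.max(np.abs(v)) <= inf_bound]

If no vector in the list survives the filter, the sieve must be rerun with different parameters, which is the kind of inefficiency the paper's restricted-SVP algorithm targets.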
Our contribution in this work is as follows: (1) We formally define a new lattice hard problem called restricted SVP, and show that it generalizes many lattice hard problems, including SVP with non-Euclidean norms and Kannan's embedding for approximate CVP. (2) We extend the dimension-for-free technique and the enumerate-then-slice technique to approximate SVP, where the goal is to output a list of short vectors of a certain size. (3) We give a heuristic algorithm for solving restricted SVP and design a hardness estimator for it, which can be used to estimate the concrete hardness of signature forgery in Dilithium and other lattice-based signatures. Using this estimator, we present the first concrete security analysis of Dilithium against signature forgery under the gate-count model. Our estimate matches the security estimation from the core-SVP model in the Dilithium documentation well, and we also combine our estimator with the rescaling technique to obtain a tighter estimate.
Protecting Computations Against Continuous Bounded-Communication Leakage
We consider the question of protecting a general computation device, modeled by a stateful Boolean circuit, against leakage of partial information about its internal wires. Goyal et al. (FOCS 2016) obtained a solution for the case of bounded-communication leakage, where the wires are partitioned into two parts and the leakage can be any function computed using $t$ bits of communication between the parts. However, this solution suffers from two major limitations: (1) it only applies to a one-shot (stateless) computation, mapping an encoded input to an encoded output, and (2) the leakage-resilient circuit consumes fresh random bits, whose number scales linearly with the circuit complexity of the computed function.
In this work, we eliminate the first limitation and make progress on the second. Concretely:
- We present the first construction of stateful circuits that offer information-theoretic protection against continuous bounded-communication leakage. As an application, we extend a two-party ``malware-resilient'' protocol of Goyal et al. to the continuous-leakage case.
- For simple types of bounded-communication leakage, which leak $t$ parities or $t$ disjunctions of circuit wires or their negations, we obtain a deterministic variant that does not require any fresh randomness beyond the randomness in the initial state. Here we get computational security based on a subexponentially secure one-way function. This is the first deterministic leakage-resilient circuit construction for any nontrivial class of global leakage.
Adaptive Distributional Security: A Framework for Input-Adaptive Cryptography
It is often desirable to break cryptographic primitives into two components: an input-independent offline component, and a cheap online component used when inputs arrive. Security of such online/offline primitives must be proved in the input-adaptive setting: the adversary chooses its input adaptively, based on what it sees in the offline phase. Proving security in the input-adaptive setting can be difficult, particularly when one wishes to achieve simulation security and avoid idealized objects like a random oracle (RO).
This work proposes a framework for reasoning about input-adaptive primitives: adaptive distributional security (ADS). Roughly, an ADS primitive provides security when it is used with inputs drawn from one of two distributions that are themselves hard to distinguish. ADS is useful as a framework for the following reasons:
- An ADS definition can often circumvent impossibility results imposed on the corresponding simulation-based definition. This allows us to decrease the online cost of primitives, albeit under a weaker notion of security.
- With care, one can typically upgrade an ADS-secure object into a simulation-secure object (by increasing the cost of the online phase).
- ADS is robust, in the sense that (1) it enables a form of composition and (2) interesting ADS primitives are highly interconnected in terms of which objects imply which other objects.
- Many useful ADS-secure objects are plausibly secure from straightforward symmetric-key cryptography.
We start by defining the notion of an ADS encryption (ADE) scheme.
A notion of input-adaptive encryption can easily be achieved from an RO, and the ADE definition can be understood as capturing the concrete property provided by the RO that suffices for input-adaptivity. From there, we use ADE to achieve ADS variants of garbled circuits and oblivious transfer, to achieve simulation-secure garbled circuits, oblivious transfer, and two-party computation, and to prove the interconnectedness of these primitives. In sum, this results in a family of objects with extremely cheap online cost.
Adaptively-Secure Big-Key Identity-Based Encryption
Key-exfiltration attacks on cryptographic keys are a significant threat to computer security. One proposed defense against such attacks is big-key cryptography, which seeks to make cryptographic secrets so large that it is infeasible for an adversary to exfiltrate the key (without being detected). However, this also introduces an inconvenience to the user, who must now store the large key on all of their different devices. The work of Döttling, Garg, Sekar and Wang (TCC 2022) introduces an elegant solution to this problem in the form of big-key identity-based encryption (IBE). Here, there is a large master secret key but very short identity keys. The user can now store the large master secret key as her long-term key and provision each of her devices with short ephemeral identity keys (say, corresponding to the current date). In this way, the long-term secret key is protected by conventional big-key cryptography, while the user only needs to distribute short ephemeral keys to her different devices. Döttling et al. introduce and construct big-key IBE from standard pairing-based assumptions. However, their scheme only satisfies selective security, where the adversary has to declare its challenge set of identities at the beginning of the security game. The more natural notion is adaptive security, where the adversary can adaptively choose which identities it wants to challenge after seeing the public parameters (and part of the master secret key).
In this work, we give the first adaptively-secure construction of big-key IBE from standard cryptographic assumptions. Our first construction relies on indistinguishability obfuscation (and one-way functions), while our second construction relies on witness encryption for NP together with standard pairing-based assumptions (i.e., the SXDH assumption). To prove adaptive security, we show how to implement the classic dual-system methodology with indistinguishability obfuscation as well as witness encryption.
The Singularity Random Number Generator: Bridging Determinism and Unpredictability to Redefine Randomness, Secure Systems, and Adaptive Intelligence
The Singularity Random Number Generator (SRNG) represents a groundbreaking advancement in the generation of random numbers by integrating two key properties, computational irreducibility and seed independence, into a deterministic algorithm. Unlike conventional pseudorandom number generators (PRNGs), whose randomness is intrinsically linked to seed quality or chaotic sensitivity, SRNG transforms even low-entropy seeds into complex, unpredictable outputs, demonstrating that high-quality randomness can emerge independently of seed entropy or size. This paper explores how SRNG not only challenges classical paradigms of predictability in deterministic systems but also offers transformative applications in cryptography, artificial intelligence (AI), and interdisciplinary research. Furthermore, by drawing parallels with cognitive variability research, such as insights from the Forbes article “Why A ‘Productively Distracted’ Brain Is A Superpower”, we discuss how the emergent unpredictability of SRNG may contribute to enhanced adaptive learning and decision-making in AI systems. Ultimately, SRNG is presented as a model that bridges the realms of science and mystery, inviting a new understanding of randomness and the limits of scientific inquiry, thereby expanding the frontiers of interdisciplinary research.
One Tree to Rule Them All: Optimizing GGM Trees and OWFs for Post-Quantum Signatures
The use of MPC-in-the-Head (MPCitH)-based zero-knowledge proofs of knowledge (ZKPoK) to prove knowledge of a preimage of a one-way function (OWF) is a popular approach towards constructing efficient post-quantum digital signatures. Starting with the Picnic signature scheme, many optimized MPCitH signatures using a variety of (candidate) OWFs have been proposed. Recently, Baum et al. (CRYPTO 2023) showed a fundamental improvement to MPCitH, called VOLE-in-the-Head (VOLEitH), which can generically reduce the signature size by at least a factor of two without decreasing computational performance or introducing new assumptions. Based on this, they designed the FAEST signature which uses AES as the underlying OWF. However, in comparison to MPCitH, the behavior of VOLEitH when using other OWFs is still unexplored.
In this work, we improve a crucial building block of the VOLEitH and MPCitH approaches, the so-called all-but-one vector commitment, thus decreasing the signature size of VOLEitH and MPCitH signature schemes. Moreover, by introducing a small Proof of Work into the signing procedure, we can improve the parameters of VOLEitH (further decreasing signature size) without compromising the computational performance of the scheme.
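To illustrate the building block being optimized, the following toy Python sketch shows a GGM-tree style all-but-one opening of the kind at the heart of these vector commitments: all $2^d$ leaf seeds derive from one root via a length-doubling PRG, and revealing the $d$ co-path seeds lets a verifier regenerate every leaf except one hidden index. SHA-256 stands in for the PRG, and the leaf commitments/hashes a real VOLEitH or MPCitH scheme adds are omitted.

    import hashlib

    def prg(seed: bytes):
        # Length-doubling PRG (SHA-256 as an illustrative stand-in).
        return (hashlib.sha256(seed + b"\x00").digest(),
                hashlib.sha256(seed + b"\x01").digest())

    def ggm_leaves(seed: bytes, depth: int):
        level = [seed]
        for _ in range(depth):
            level = [half for s in level for half in prg(s)]
        return level

    def open_all_but_one(root: bytes, depth: int, hidden: int):
        # Walk the path to leaf `hidden`, recording the sibling seed at
        # each level; these co-path seeds are the whole opening.
        copath, node = [], root
        for level in range(depth):
            left, right = prg(node)
            bit = (hidden >> (depth - 1 - level)) & 1
            copath.append(right if bit == 0 else left)
            node = left if bit == 0 else right
        return copath

    def reconstruct(copath, depth: int, hidden: int):
        # Regenerate all leaves except leaves[hidden], which stays None.
        leaves = [None] * (1 << depth)
        for level, seed in enumerate(copath):
            bit = (hidden >> (depth - 1 - level)) & 1
            prefix = (hidden >> (depth - level)) << (depth - level)
            base = prefix | ((bit ^ 1) << (depth - 1 - level))
            for i, leaf in enumerate(ggm_leaves(seed, depth - 1 - level)):
                leaves[base + i] = leaf
        return leaves

    root = b"\x00" * 32
    got = reconstruct(open_all_but_one(root, 4, 5), 4, 5)
    full = ggm_leaves(root, 4)
    assert got[5] is None and all(got[i] == full[i] for i in range(16) if i != 5)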
Based on these optimizations, we propose three VOLEitH signature schemes FAESTER, KuMQuat, and MandaRain based on AES, MQ, and Rain, respectively. We carefully explore the parameter space for these schemes and implement each, showcasing their performance with benchmarks. Our experiments show that these three signature schemes outperform MPCitH-based competitors that use comparable OWFs, in terms of both signature size and signing/verification time.
Release the Power of Rejected Signatures: An Efficient Side-Channel Attack on Dilithium
The Module-Lattice-Based Digital Signature Standard (ML-DSA), formerly known as CRYSTALS-Dilithium, is a lattice-based post-quantum cryptographic scheme. In August 2024, the National Institute of Standards and Technology (NIST) officially standardized ML-DSA under FIPS 204. Dilithium generates one valid signature and multiple rejected signatures during the signing process. Most side-channel attacks targeting Dilithium have focused solely on the valid signature, neglecting the hints contained in rejected signatures. In this paper, we propose a method for recovering the private key by simultaneously leveraging side-channel leakages from both valid and rejected signatures. This approach minimizes the number of signing attempts required for full key recovery. We construct a factor graph incorporating all relevant side-channel leakages and apply the Belief Propagation (BP) algorithm for private key recovery.
We conducted a proof-of-concept experiment on a Cortex-M4 core chip, where the results demonstrate that utilizing rejected signatures reduces the number of traces required for full key recovery by at least $42\%$. In the best case, a single trace suffices to recover the private key with a success rate of $30\%$. Our findings highlight that protecting rejected signatures is crucial, as their leakage provides valuable side-channel information. We strongly recommend implementing countermeasures for rejected signatures during the signing process to mitigate potential threats.
Efficient Revocable Identity-Based Encryption from Middle-Product LWE
The Middle-Product Learning with Errors (MPLWE) assumption is a variant of the Learning with Errors (LWE) assumption. MPLWE reduces the key sizes of corresponding LWE-based schemes by setting keys to be sets of polynomials. Moreover, MPLWE offers more robust security than other LWE variants such as Ring-LWE and Module-LWE. Lombardi et al. proposed an identity-based encryption (IBE) scheme (LVV-IBE) based on the MPLWE assumption in the random oracle model (ROM), following Gentry et al.'s LWE-based IBE scheme (GPV-IBE). Thanks to MPLWE, LVV-IBE has a shorter master public key and secret key than GPV-IBE without changing the ciphertext size. However, Lombardi et al.'s proof is not tight in the ROM, whereas Katsumata et al. proved that GPV-IBE achieves tight adaptive anonymity in the quantum ROM (QROM). Revocable IBE (RIBE) is a variant of IBE supporting a key revocation mechanism to remove malicious users from the system. Takayasu proposed the most efficient LWE-based RIBE scheme (Takayasu-RIBE), achieving tight adaptive anonymity in the QROM. Although no concrete RIBE scheme based on MPLWE has been proposed, one can be obtained (LVV-based RIBE) by applying Ma and Lin's generic transformation to LVV-IBE. Thanks to MPLWE, LVV-based RIBE has an asymptotically shorter master public key and secret key than Takayasu-RIBE, although the former has a larger ciphertext than the latter; moreover, its security proof, inherited from those of Ma-Lin and Lombardi et al., is neither tight nor anonymous and holds only in the ROM. In this paper, we propose a concrete RIBE scheme based on MPLWE. Compared with the above RIBE schemes, the proposed scheme is asymptotically the most efficient: the sizes of its master public key and secret key (resp. ciphertext) match those of the LVV-based RIBE scheme (resp. Takayasu-RIBE). Moreover, we prove tight adaptive anonymity of the proposed RIBE scheme in the QROM. To this end, we also prove tight adaptive anonymity of LVV-IBE in the QROM.
REGKYC: Supporting Privacy and Compliance Enforcement for KYC in Blockchains
Know Your Customer (KYC) is a core component of the Anti-Money Laundering (AML) framework, designed to prevent illicit activities within financial systems. However, enforcing KYC and AML on blockchains remains challenging due to difficulties in establishing accountability and preserving user privacy. This study proposes REGKYC, a privacy-preserving Attribute-Based Access Control (ABAC) framework that balances user privacy with externally mandated KYC and AML requirements. REGKYC leverages a structured ABAC model to support the flexible verification of KYC attributes and the enforcement of compliance policies, providing benefits to multiple stakeholders. First, it enables legitimate users to meet compliance requirements while preserving the privacy of their on-chain activities. Second, it empowers Crypto-asset Service Providers (CASPs) to tailor compliance policies to operational needs, ensuring adaptability to evolving regulations. Finally, it enhances regulatory accountability by enabling authorized deanonymization of malicious actors. We hope this work inspires future research to harmonize user privacy and regulatory compliance in blockchain systems.
Efficient Garbled Pseudorandom Functions and Lookup Tables from Minimal Assumption
Yao's garbled circuits have received enormous attention in both theory and practice. While garbled circuits can be constructed from minimal assumptions (i.e., the existence of pseudorandom functions or one-way functions), the state-of-the-art constructions (e.g., Rosulek-Roy, Crypto 2021) are based on stronger assumptions. In particular, the ``Free-XOR'' technique (Kolesnikov-Schneider, ICALP 2008) is essential in these constructions, and their security can only be proven in the random oracle model or under the ``circular-correlation robust hash'' assumption.
In this paper, we aim to develop new techniques for constructing efficient garbling schemes from minimal assumptions. Instead of generically replacing the Free-XOR technique, we focus on garbling schemes for specific functionalities. We successfully eliminate the need for Free-XOR in several state-of-the-art schemes, including one-hot garbling (Heath and Kolesnikov, CCS 2021), garbled pseudorandom functions, and garbled lookup tables (Heath, Kolesnikov, and Ng, Eurocrypt 2024). Our schemes rely only on minimal assumptions, i.e., standard pseudorandom functions (PRFs); in particular, we remove the need for circular security. Our schemes are nearly as efficient as the best prior results, up to a small constant factor: for any lookup table $\{0,1\}^n \to \{0,1\}^m$, our scheme takes $n + (5n+9)m\lambda + 2^n \cdot m$ bits of communication, where $\lambda$ is the security parameter of the PRF.
Making GCM Great Again: Toward Full Security and Longer Nonces
The GCM authenticated encryption (AE) scheme is one of the most widely used AE schemes in the world, yet it suffers from a risk of nonce misuse, a short maximum message length per encryption, and an insufficient level of security. The goal of this paper is to design new AE schemes that achieve stronger provable security in the standard model and accept longer nonces (or provide nonce misuse resistance), while retaining the design rationale behind GCM.
As a result, we propose two enhanced variants of GCM and GCM-SIV, dubbed eGCM and eGCM-SIV, respectively. eGCM and eGCM-SIV are built on top of a new CENC-type encryption mode, dubbed eCTR: using 2n-bit counters, eCTR enjoys beyond-birthday-bound security without significant loss of efficiency. eCTR is combined with an almost uniform and almost universal hash function, yielding a variable input-length, variable output-length pseudorandom function, dubbed HteC. eGCM and eGCM-SIV are constructed using eCTR and HteC as building blocks.
eGCM and eGCM-SIV accept nonces of arbitrary length and provide almost full security (namely, n-bit security when they are based on an n-bit block cipher) for a constant maximum input length, under the assumption that the underlying block cipher is a pseudorandom permutation (PRP). Their efficiency is also comparable to GCM in terms of rate and overall speed.
Linear Proximity Gap for Linear Codes within the 1.5 Johnson Bound
We establish a linear proximity gap for linear codes within the one-and-a-half Johnson bound. Specifically, we investigate the proximity gap for linear codes, revealing that any affine subspace is either entirely $\delta$-close to a linear code or nearly all its members are $\delta$-far from it. When $\delta$ is within the one-and-a-half Johnson bound, we prove an upper bound on the number of members (in the affine subspace) that are $\delta$-close to the linear code for the latter case. Our bound is linear in the length of codewords.
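In symbols, the dichotomy and the counting bound read roughly as follows (our paraphrase, with $C \subseteq \mathbb{F}^n$ the linear code, $U$ the affine subspace, $\Delta$ the relative Hamming distance, and the constant behind the $O(n)$ the one established in the paper):

\[ \text{either} \quad \forall u \in U:\ \Delta(u, C) \le \delta, \qquad \text{or} \qquad \bigl|\{\, u \in U \,:\, \Delta(u, C) \le \delta \,\}\bigr| \le O(n). \]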
In comparison, Ben-Sasson, Carmon, Ishai, Kopparty, and Saraf [FOCS 2020] study Reed-Solomon codes and prove a linear bound when $\delta$ is within the unique decoding bound and a quadratic bound when $\delta$ is within the Johnson bound. Note that when the minimum relative distance of the linear code is bigger than 0.77, the one-and-a-half Johnson bound is better than the unique decoding bound.
Proximity gaps for linear codes have implications for various code-based protocols. In many cases, a property stronger than individual distance, known as correlated agreement, is required: functions in the affine subspace are not only $\delta$-close to the linear code but also agree on the same evaluation domain. Our results support this stronger property, and they also support mutual correlated agreement, a further strengthening.
HRA-Secure Homomorphic Lattice-Based Proxy Re-Encryption with Tight Security
We construct an efficient proxy re-encryption (PRE) scheme secure against honest re-encryption attacks (HRA-secure) with precise concrete security estimates. To obtain these estimates, we introduce the tight, fine-grained noise-flooding techniques of Li et al. (CRYPTO'22) to RLWE-based (homomorphic) PRE schemes, and we bring a mixed statistical-computational security definition to the HRA security analysis. Our solution also supports homomorphic operations on the ciphertexts. Such homomorphism allows for advanced applications, e.g., encrypted computation of network statistics across networks and, in the case of full homomorphism (i.e., when bootstrapping is available), an unlimited number of hops.
We implement our PRE scheme in the OpenFHE software library and apply it to a problem of secure multi-hop data distribution in the context of 5G virtual network slices. We also experimentally evaluate the performance of our scheme, demonstrating that the implementation is practical.
Moreover, we compare our PRE method with other lattice-based PRE schemes and approaches targeting HRA security. These achieve HRA security, but not in a tight, practical scheme such as ours. Further, we present an attack on the PRE scheme proposed by Davidson et al. (ACISP'19), which was claimed to achieve HRA security without noise flooding, i.e., without adding large noise.
Wagner's Algorithm Provably Runs in Subexponential Time for SIS$^\infty$
At CRYPTO 2015, Kirchner and Fouque claimed that a carefully tuned variant of the Blum-Kalai-Wasserman (BKW) algorithm (JACM 2003) should solve the Learning with Errors problem (LWE) in slightly subexponential time for modulus $q=\mathrm{poly}(n)$ and narrow error distribution, when given enough LWE samples. Taking a modular view, one may regard BKW as a combination of Wagner's algorithm (CRYPTO 2002), run over the corresponding dual problem, and the Aharonov-Regev distinguisher (JACM 2005). Hence the subexponential Wagner step alone should be of interest for solving this dual problem - namely, the Short Integer Solution problem (SIS) - but this appears to be undocumented so far.
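For intuition on the Wagner step itself, here is a toy 4-list k-tree collision search in Python over the XOR group; the paper's actual setting, the additive group modulo $q$ with projected lattices and Gaussian randomized rounding, is not attempted here.

    import os
    from collections import defaultdict

    def merge(A, B, mask):
        # Keep XOR-combinations of one element from each list that vanish
        # on the bit positions selected by `mask`.
        buckets = defaultdict(list)
        for ia, xa in A:
            buckets[xa & mask].append((ia, xa))
        return [(ia + ib, xa ^ xb)
                for ib, xb in B
                for ia, xa in buckets[xb & mask]]

    def wagner4(n=24):
        # Find x1 ^ x2 ^ x3 ^ x4 = 0 among four random n-bit lists, using
        # lists of size 2^(n/3) instead of the 2^(n/2) a birthday attack needs.
        size = 1 << (n // 3)
        rnd = lambda: int.from_bytes(os.urandom(4), "big") >> (32 - n)
        lists = [[([i], rnd()) for i in range(size)] for _ in range(4)]
        low = (1 << (n // 3)) - 1
        L12 = merge(lists[0], lists[1], low)   # cancel the low n/3 bits first
        L34 = merge(lists[2], lists[3], low)
        return [idx for idx, x in merge(L12, L34, (1 << n) - 1) if x == 0]

Each merge keeps list sizes near $2^{n/3}$ in expectation, which is the source of the characteristic subexponential trade-off once the number of lists grows with the dimension.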
We re-interpret this Wagner step as walking backward through a chain of projected lattices, zigzagging through some auxiliary superlattices. We further randomize the bucketing step using Gaussian randomized rounding to exploit the powerful discrete Gaussian machinery. This approach avoids sample amplification and turns Wagner's algorithm into an approximate discrete Gaussian sampler for $q$-ary lattices.
For an SIS lattice with $n$ equations modulo $q$, this algorithm runs in subexponential time $\exp(O(n/\log \log n))$ to reach a Gaussian width parameter $s = q/\mathrm{polylog}(n)$, requiring only $m = n + \omega(n/\log \log n)$ SIS variables. This directly provides a provable algorithm for solving the Short Integer Solution problem in the infinity norm ($\mathrm{SIS}^\infty$) for norm bounds $\beta = q/\mathrm{polylog}(n)$. This variant of SIS underlies the security of the NIST post-quantum cryptography standard Dilithium. Despite its subexponential complexity, Wagner's algorithm does not appear to threaten Dilithium's concrete security.
OpenPubkey: Augmenting OpenID Connect with User held Signing Keys
OpenPubkey makes a client-side modification to OpenID Connect so that an ID Token issued by an OpenID Provider commits to a user-held public key. This transforms an ID Token into a certificate that cryptographically binds an OpenID Connect identity to a public key. We call such an ID Token a PK Token. The user can then sign messages with their signing key, and these signatures can be authenticated and attributed to the user's OpenID Connect identity. This allows OpenPubkey to upgrade OpenID Connect from Bearer Authentication to Proof-of-Possession, eliminating trust assumptions in OpenID Connect and defeating entire categories of attacks on it. OpenPubkey was designed to satisfy a decade-long need for this functionality. Prior to OpenPubkey, OpenID Connect did not have a secure way for users to sign statements under their OpenID identities.
OpenPubkey is transparent to users and OpenID Providers. An OpenID Provider cannot even determine that OpenPubkey is being used. This makes OpenPubkey fully compatible with existing OpenID Providers. In fact, a variant of OpenPubkey is currently deployed and used to authenticate signed messages and identities for users with accounts on Google, Microsoft, Okta, and Onelogin. OpenPubkey does not add new trusted parties to OpenID Connect and reduces preexisting trust assumptions. If used in tandem with our MFA-cosigner, OpenPubkey can maintain security even against a malicious OpenID Provider (the most trusted party in OpenID Connect).
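The core commitment trick is small enough to sketch. A hedged Python fragment of the idea, with the OIDC nonce claim and SHA-256 as illustrative choices (the deployed protocol's token handling and signature verification are more involved):

    import hashlib

    def request_claims(user_pubkey: bytes, claims: dict) -> dict:
        # Client side: place a commitment to the user-held public key into a
        # claim the OpenID Provider signs (here: the OIDC nonce). The signed
        # ID Token then doubles as a certificate binding identity to the key,
        # i.e., a PK Token.
        out = dict(claims)
        out["nonce"] = hashlib.sha256(user_pubkey).hexdigest()
        return out

    def check_pk_token(claims: dict, user_pubkey: bytes) -> bool:
        # Verifier side: after validating the OP's signature on the ID Token
        # (omitted), check that the token commits to the presented public
        # key; signatures made under that key are then attributable to the
        # token's OpenID Connect identity.
        return claims.get("nonce") == hashlib.sha256(user_pubkey).hexdigest()

Because the commitment rides in a standard claim, the OP signs it without modification, which is why the scheme can stay transparent to OpenID Providers.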
Buffalo: A Practical Secure Aggregation Protocol for Asynchronous Federated Learning
Federated Learning (FL) has become a crucial framework for collaboratively training Machine Learning (ML) models while ensuring data privacy. Traditional synchronous FL approaches, however, suffer from delays caused by slower clients (called stragglers), which hinder the overall training process.
Specifically, in a synchronous setting, model aggregation happens once all the intended clients have submitted their local updates to the server. To address these inefficiencies, Buffered Asynchronous FL (BAsyncFL) was introduced, allowing clients to update the global model as soon as they complete local training. In such a setting, the new global model is obtained once the buffer is full, thus removing synchronization bottlenecks. Despite these advantages, existing Secure Aggregation (SA) techniques—designed to protect client updates from inference attacks—rely on synchronized rounds, making them unsuitable for asynchronous settings.
In this paper, we present Buffalo, the first practical SA protocol tailored for BAsyncFL. Buffalo leverages lattice-based encryption to handle scalability challenges in large ML models and introduces a new role, the assistant, to support the server in securely aggregating client updates. To protect against an actively corrupted server, we enable clients to verify that their local updates have been correctly integrated into the global model. Our comprehensive evaluation—incorporating theoretical analysis and real-world experiments on benchmark datasets—demonstrates that Buffalo is an efficient and scalable privacy-preserving solution in BAsyncFL environments.
Forking Lemma in EasyCrypt
Formal methods are becoming an important tool for ensuring correctness and security of cryptographic constructions. However, the support for certain advanced proof techniques, namely rewinding, is scarce among existing verification frameworks, which hinders their application to complex schemes such as multi-party signatures and zero-knowledge proofs.
We expand the support for rewinding in EasyCrypt by implementing a version of the general forking lemma by Bellare and Neven. We demonstrate its usability by proving EUF-CMA security of Schnorr signatures.
Zinnia: An Expressive and Efficient Tensor-Oriented Zero-Knowledge Programming Framework
Zero-knowledge proofs (ZKPs) are cryptographic protocols that enable a prover to convince a verifier of a statement's truth without revealing any details beyond its validity. Typically, the statement is encoded as an arithmetic circuit, and the proof demonstrates that the circuit evaluates to true without revealing its inputs. Despite their potential to enhance privacy and security, ZKPs are difficult to write and optimize, limiting their adoption in machine learning and data science. To address these challenges, we introduce Zinnia, a zero-knowledge programming framework with high utility, expressiveness, and efficiency for tensor-oriented computation. Zinnia provides a high-level programming language that enables developers to easily write ZKP programs, and it employs a novel symbolic-execution-inspired approach to extract semantics from these programs and generate arithmetic circuits. Zinnia supports tensor-oriented computations and provides a rich set of programming constructs, optimizations, and a powerful static type system for expressing and optimizing complex logic. We evaluate Zinnia across 25 real-world programming tasks and a user study, comparing it to existing solutions, including DSLs and zkVMs (Halo2, SP1, and RISC0). Our results demonstrate that Zinnia outperforms these baselines in utility, expressiveness, and efficiency, with a statistically significant reduction in development time, $2-3\times$ shorter code, 19.3% smaller circuit size, and up to $245\times$ faster proving time compared to zkVMs, paving the way for practical ZKP applications in various domains.
The Complexity of Algebraic Algorithms for LWE
Arora & Ge introduced a noise-free polynomial system to compute the secret of a Learning With Errors (LWE) instance via linearization. Albrecht et al. later utilized the Arora-Ge polynomial model to study the complexity of Gröbner basis computations on LWE polynomial systems under the assumption of semi-regularity. In this paper we revisit the Arora-Ge polynomial and prove that it satisfies a genericity condition recently introduced by Caminata & Gorla, called being in generic coordinates. For polynomial systems in generic coordinates one can always estimate the complexity of DRL Gröbner basis computations in terms of the Castelnuovo-Mumford regularity and henceforth also via the Macaulay bound.
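For reference, the Arora-Ge model the paper revisits turns each LWE sample $(\vec{a}, b)$ with error from a small set $E$ into one noise-free polynomial $\prod_{e \in E}\bigl(b - \langle \vec{a}, \vec{s} \rangle - e\bigr)$ that vanishes at the secret. A minimal SymPy sketch (reduction modulo $q$ omitted for brevity):

    from sympy import symbols, prod, expand

    n = 2
    s = symbols(f"s0:{n}")          # secret coordinates s0, s1

    def arora_ge_poly(a, b, errors=(-1, 0, 1)):
        # One Arora-Ge polynomial per sample: it vanishes on the secret
        # whenever b = <a, s> + e for some error e in `errors`.
        ip = sum(ai * si for ai, si in zip(a, s))
        return expand(prod([b - ip - e for e in errors]))

    # The sample a = (1, 2), b = 5 yields one cubic equation in s0 and s1:
    print(arora_ge_poly((1, 2), 5))

Collecting such polynomials over many samples produces the system whose DRL Gröbner basis cost the paper bounds via the Castelnuovo-Mumford regularity.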
Moreover, we generalize the Gröbner basis algorithm of Semaev & Tenti to arbitrary polynomial systems with a finite degree of regularity. In particular, the existence of this algorithm yields another approach to estimating the complexity of DRL Gröbner basis computations in terms of the degree of regularity. In practice, the degree of regularity of LWE polynomial systems is not known, though one can always estimate the lowest achievable degree of regularity. Consequently, from a designer's worst-case perspective, this approach yields sub-exponential complexity estimates for general, binary-secret, and binary-error LWE.
In recent works by Dachman-Soled et al., the hardness of LWE in the presence of side information was analyzed. Utilizing their framework, we discuss how hints can be incorporated into LWE polynomial systems and how they affect the complexity of Gröbner basis computations.
Universally Composable Relaxed Asymmetric Password-Authenticated Key Exchange
Password-Authenticated Key Exchange (PAKE) establishes a secure channel between two parties who share a password. Asymmetric PAKE is a variant of PAKE in which one party stores a hash of the password, so that security is preserved even if that party is compromised. The security of PAKE and asymmetric PAKE is often analyzed in the framework of universal composability (UC).
Abdalla et al. (CRYPTO '20) relaxed the UC security of PAKE and showed that the relaxed security still guarantees reasonable properties. This relaxation makes it possible to prove the security in the UC framework for several PAKE protocols.
In this paper, we propose a relaxed functionality of asymmetric PAKE by following the approach of Abdalla et al. We prove that the SPAKE2+ protocol UC-realizes this functionality. We also define a more relaxed functionality and prove that a variant of the AuCPace protocol UC-realizes it.
Advancements in Distributed RSA Key Generation: Enhanced Biprimality Tests
RSA is widely used in modern cryptographic practice, and certain RSA-based protocols rely on the secrecy of $p$ and $q$. A common approach is to use secure multiparty computation to address the privacy concerns surrounding $p$ and $q$.
In distributed RSA modulus generation protocols, the most commonly used biprimality test is that of Boneh and Franklin (2001) for Blum integers $N=pq$, where $p\equiv q\equiv 3 \pmod 4$ are primes. For over 20 years, the worst-case acceptance probability of this test for non-RSA moduli has been assumed to be $1/2$, under the condition $\gcd(pq,p+q-1)=1$.
This paper demonstrates that the worst-case acceptance probability of the Boneh-Franklin test is at most $1/4$, rather than $1/2$, except in the specific case $p = q = 3$. Notably, $1/4$ is shown to be the tightest upper bound. This result substantially improves the practical effectiveness of the Boneh-Franklin test: achieving the same level of soundness for RSA moduli now requires only half the iterations previously considered necessary.
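For reference, one iteration of the Boneh-Franklin test in centralized form is sketched below; in the distributed protocol, the exponent $(N-p-q+1)/4 = \varphi(N)/4$ is computed from shares of $p$ and $q$ rather than from the primes themselves.

    import random
    from math import gcd

    def jacobi(a, n):
        # Jacobi symbol (a/n) for odd n > 0.
        a %= n
        result = 1
        while a:
            while a % 2 == 0:
                a //= 2
                if n % 8 in (3, 5):
                    result = -result
            a, n = n, a
            if a % 4 == 3 and n % 4 == 3:
                result = -result
            a %= n
        return result if n == 1 else 0

    def boneh_franklin_round(N, p, q):
        # One test iteration for a Blum-integer candidate N = p*q with
        # p = q = 3 (mod 4) and gcd(N, p+q-1) = 1. Per this paper, a
        # non-RSA modulus is accepted with probability at most 1/4 per
        # round, previously assumed to be 1/2.
        g = random.randrange(2, N)
        if gcd(g, N) != 1 or jacobi(g, N) != 1:
            return None   # resample g
        return pow(g, (N - p - q + 1) // 4, N) in (1, N - 1)

Doubling the per-round error exponent from one bit to two is exactly what halves the number of iterations needed for a target soundness level.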
Furthermore, we propose a generalized biprimality test based on Lucas sequences. In the worst case, the acceptance probability of the proposed test is at most $1/4 + 1.25/(p_{\min} -3)$, where $p_{\min}$ is the smallest prime factor of $N$. To validate our approach, we implemented a Miller-Rabin test variant, the Boneh-Franklin test, and our proposed test, and performed pairwise comparisons of their effectiveness. Simulations indicate that the proposed test is generally more efficient than the Boneh-Franklin test at detecting cases where $N$ is not an RSA modulus. Additionally, the test applies to generating RSA moduli with arbitrary odd primes $p,q$.
A corresponding protocol is developed for this test, validated to be resilient against semi-honest adversaries, and shown to be applicable to most known distributed RSA modulus generation protocols. A thorough analysis and comparison with well-known protocols for Blum integers, including Burkhardt et al.'s protocol (CCS 2023) and the Boneh-Franklin protocol, shows that our protocol is competitive for generating distributed RSA moduli.
FSSiBNN: FSS-based Secure Binarized Neural Network Inference with Free Bitwidth Conversion
Neural network inference as a service enables a cloud server to provide inference services to clients. To ensure the privacy of both the cloud server's model and the client's data, secure neural network inference is essential. Binarized neural networks (BNNs), which use binary weights and activations, are often employed to accelerate inference. However, achieving secure BNN inference with secure multi-party computation (MPC) is challenging because MPC protocols cannot directly operate on values of different bitwidths and require bitwidth conversion. Existing bitwidth conversion schemes expand the bitwidths of weights and activations, leading to significant communication overhead.
To address these challenges, we propose FSSiBNN, a secure BNN inference framework featuring free bitwidth conversion based on function secret sharing (FSS). By leveraging FSS, which supports arbitrary input and output bitwidths, we introduce a bitwidth-reduced parameter encoding scheme. This scheme seamlessly integrates bitwidth conversion into FSS-based secure binary activation and max pooling protocols, thereby eliminating the additional communication overhead. Additionally, we enhance communication efficiency by combining and converting multiple BNN layers into fewer matrix multiplication and comparison operations. We precompute matrix multiplication tuples for matrix multiplication and FSS keys for comparison during the offline phase, enabling constant-round online inference.
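The offline/online split above follows the classic pattern of precomputed multiplication material. A scalar Beaver-triple sketch in Python shows the shape of it (the paper's matrix-multiplication tuples and FSS keys generalize this; a trusted dealer stands in for the offline phase):

    import random

    M = 2 ** 16                     # toy ring Z_M

    def share(x):
        # Additive 2-party sharing of x in Z_M.
        r = random.randrange(M)
        return r, (x - r) % M

    def beaver_mul(x0, x1, y0, y1):
        # Offline: dealer shares a random triple (a, b, c) with c = a*b.
        a, b = random.randrange(M), random.randrange(M)
        a0, a1 = share(a); b0, b1 = share(b); c0, c1 = share(a * b % M)
        # Online (constant rounds): open d = x - a and e = y - b, then
        # each party locally assembles its share of x*y.
        d = (x0 - a0 + x1 - a1) % M
        e = (y0 - b0 + y1 - b1) % M
        z0 = (c0 + d * b0 + e * a0 + d * e) % M
        z1 = (c1 + d * b1 + e * a1) % M
        return z0, z1

    x0, x1 = share(3); y0, y1 = share(5)
    z0, z1 = beaver_mul(x0, x1, y0, y1)
    assert (z0 + z1) % M == 15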
In our experiments, we evaluated various datasets and models, comparing our results with state-of-the-art frameworks. Compared with the two-party framework XONN (USENIX Security '19), FSSiBNN achieves approximately 7$\times$ faster inference times and reduces communication overhead by about 577$\times$. Compared with the three-party frameworks SecureBiNN (ESORICS '22) and FLEXBNN (TIFS '23), FSSiBNN is approximately 2.5$\times$ faster in inference time and reduces communication overhead by 1.3$\times$ to 16.4$\times$.
FssNN: Communication-Efficient Secure Neural Network Training via Function Secret Sharing
Privacy-preserving neural networks based on secure multi-party computation (MPC) enable multiple parties to jointly train neural network models without revealing sensitive data. In privacy-preserving neural networks, the high communication cost of securely computing non-linear functions is the primary performance bottleneck. For commonly used non-linear functions, such as ReLU, existing work adopts an offline-online computation paradigm and utilizes the distributed comparison function (DCF) to reduce communication costs. Specifically, these works prepare DCF keys in the offline phase and perform secure ReLU using these DCF keys in the online phase. However, the practicality of existing work is significantly limited due to the substantial size of DCF keys and the heavy reliance on a trusted third party in the offline phase.
In this work, we introduce a communication-efficient secure two-party neural network framework called FssNN, which proposes a key-reduced DCF scheme without a trusted third party to enable practical secure training and inference. First, by analyzing the correlations between DCF keys to eliminate redundant parameters, we propose a key-reduced DCF scheme with a compact additive construction, which decreases the size of DCF keys by about $17.9\%$ and the offline communication costs by approximately $28.0\%$. Secondly, by leveraging an MPC-friendly pseudorandom number generator, we propose a secure two-party distributed key generation protocol for our key-reduced DCF, thereby eliminating the reliance on the trusted third party. Finally, we utilize the key-reduced DCF and additive secret sharing to compute non-linear and linear functions, respectively, and design secure computation protocols with constant online communication rounds for neural network operators, reducing the online communication costs by $28.9\% \sim 43.4\%$.
We provide formal security proofs and evaluate the performance of FssNN on various models and datasets. Experimental results show that compared to the state-of-the-art framework AriaNN, our framework reduces the total communication costs of secure training and inference by approximately $25.4\%$ and $26.4\%$ respectively.
New results in Share Conversion, with applications to evolving access structures
We say there is a share conversion from a secret-sharing scheme $\Pi$ to another scheme $\Pi'$ implementing the same access structure if each party can locally apply a deterministic function to their share to transform any valid secret-sharing under $\Pi$ to a valid (but not necessarily random) secret-sharing under $\Pi'$ of the same secret. If such a conversion exists, we say that $\Pi\ge\Pi'$.
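In symbols (our rendering of the definition just given): $\Pi \ge \Pi'$ iff there exist local maps $\phi_1, \dots, \phi_n$ such that

\[ (sh_1, \dots, sh_n) \in \mathrm{Valid}_{\Pi}(s) \;\Longrightarrow\; \bigl(\phi_1(sh_1), \dots, \phi_n(sh_n)\bigr) \in \mathrm{Valid}_{\Pi'}(s) \quad \text{for every secret } s, \]

where $\mathrm{Valid}_{\Pi}(s)$ denotes the set of share vectors $\Pi$ can produce for $s$; the converted sharing must be valid but need not be distributed like a fresh one.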
This notion was introduced by Cramer et al. (TCC'05), who in particular proved that, for any access structure, any linear secret-sharing scheme over a given field $\mathbb{F}$ has a conversion from the CNF scheme and is convertible to the DNF scheme.
In this work, we initiate a systematic study of convertibility between secret-sharing schemes, and present a number of results with implications for the understanding of the convertibility landscape.
- In the context of linear schemes, we present two key theorems providing necessary conditions for convertibility, proved using linear-algebraic tools. They have several implications, such as the fact that the Shamir secret-sharing scheme is neither maximal nor minimal. Another implication is that a scheme can be minimal only if its share complexity is at least as high as that of DNF.
- Our second key result is a necessary condition for convertibility to CNF from a broad class of (not necessarily linear) schemes. This result is proved via information-theoretic techniques and implies non-maximality for schemes with share complexity smaller than that of CNF.
We also provide a condition which is both necessary and sufficient for the existence of a share conversion to some linear scheme. The condition is stated as a system of linear equations, such that a conversion exists if and only if a solution to the linear system exists. We note that the impossibility results for linear schemes may be viewed as identifying a subset of contradicting equations in the system.
Another contribution of our paper is in defining and studying share conversion for evolving secret-sharing schemes. In such schemes, recently introduced by Komargodski et al. (IEEE ToIT'18), the number of parties is not bounded a priori, and every party receives a share as it arrives, which never changes in the sequel. Our impossibility results carry over to the evolving setting as well. Interestingly, unlike in the standard setting, there is no maximum or minimum in a broad class of evolving schemes, even without any restriction on the share size.
Finally, we show that, in general, there is no conversion between additive schemes over different fields, even from CNF to DNF! However, by relaxing from perfect to statistical security, conversion may become possible; we exemplify this for $(n,n)$-threshold access structures.
VeriSSO: A Privacy-Preserving Legacy-Compatible Single Sign-On Protocol Using Verifiable Credentials
Single Sign-On (SSO) is a popular authentication mechanism enabling users to access multiple web services with a single set of credentials. Despite its convenience, SSO faces outstanding privacy challenges. The Identity Provider (IdP) represents a single point of failure and can track users across different Relying Parties (RPs). Multiple colluding RPs may track users through common identity attributes. In response, anonymous credential-based SSO solutions have emerged to offer privacy-preserving authentication without revealing unnecessary user information. However, these solutions face two key challenges: supporting RP authentication without compromising user unlinkability and maintaining compatibility with the predominant Authorization Code Flow (ACF).
This paper introduces VeriSSO, a novel SSO protocol based on verifiable credentials (VC) that supports RP authentication while preserving privacy and avoiding single points of failure. VeriSSO employs an independent authentication server committee to manage RP and user authentication, binding RP authentication with credential-based anonymous user authentication. This approach ensures user unlinkability while supporting RP authentication and allows RPs to continue using their existing verification routines with identity tokens as in the ACF workflow. VeriSSO's design also supports lawful de-anonymization, ensuring user accountability for misbehavior during anonymity. Experimental evaluations of VeriSSO demonstrate its efficiency and practicality, with authentication processes completed within 100 milliseconds.
Distributional Private Information Retrieval
A private-information-retrieval (PIR) scheme lets a client fetch a record from a remote database without revealing which record it fetched. Classic PIR schemes treat all database records the same but, in practice, some database records are much more popular (i.e., commonly fetched) than others. We introduce distributional PIR, a new type of PIR that can run faster than classic PIR---both asymptotically and concretely---when the popularity distribution is skewed. Distributional PIR provides exactly the same cryptographic privacy as classic PIR. The speedup comes from a relaxed form of correctness: distributional PIR guarantees that in-distribution queries succeed with good probability, while out-of-distribution queries succeed with lower probability. Because of its relaxed correctness, distributional PIR is best suited for applications where "best-effort" retrieval is acceptable. Moreover, for security, a client's decision to query the server must be independent of whether its past queries were successful.
We construct a distributional-PIR scheme that makes black-box use of classic PIR protocols, and prove a lower bound on the server runtime of a natural class of distributional-PIR schemes. On two real-world popularity distributions, our construction reduces compute costs by $5$-$77\times$ compared to existing techniques. Finally, we build CrowdSurf, an end-to-end system for privately fetching tweets, and show that distributional-PIR reduces the end-to-end server cost by $8\times$.
Partial Key Overwrite Attacks in Microcontrollers: a Survey
Embedded devices can be exposed to a wide range of attacks. Some classes of attacks can be mitigated using security features or dedicated countermeasures; examples include Trusted Execution Environments and masking countermeasures against physical side-channel attacks. However, a system that incorporates such secure components is not automatically a secure system. Partial key overwrite attacks are one class of attacks that specifically target the interface between different components of the security system. These attacks may allow an adversary to extract otherwise protected cryptographic keys through careful manipulation of memory-mapped registers. So far, this powerful class of attacks has received little attention in the academic literature. In this work, we provide an overview of known partial key overwrite vulnerabilities and how they were used in real-world attacks. Additionally, we evaluated 31 common microcontrollers and embedded microprocessors from eleven distinct vendors and detail our findings. Based on a first high-level evaluation, we selected 15 SoCs for an in-depth evaluation, which revealed that at least eight of them are vulnerable to partial key overwrite attacks.
Solving Data Availability Limitations in Client-Side Validation with UTxO Binding
Issuing tokens on Bitcoin remains a highly sought-after goal, driven by its market dominance and robust security. However, Bitcoin's limited on-chain storage and functionality pose significant challenges. Among the various approaches to token issuance on Bitcoin, client-side validation (CSV) has emerged as a prominent solution. CSV delegates data storage and functionalities beyond Bitcoin’s native capabilities to off-chain clients, while leveraging the blockchain to validate tokens and prevent double-spending. Nevertheless, these protocols require participants to maintain token ownership and transactional data, rendering them vulnerable to data loss and malicious data withholding. In this paper, we propose UTxO binding, a novel framework that achieves both robust data availability and enhanced functionality compared to existing CSV designs. This approach securely binds a Bitcoin UTxO, which prevents double-spending, to a UTxO on an auxiliary blockchain, providing data storage and programmability. We formally prove its security and implement our design using Nervos CKB as the auxiliary blockchain.
An in-depth security evaluation of the Nintendo DSi gaming console
The Nintendo DSi is a handheld gaming console released by Nintendo in 2008. In Nintendo's line-up, the DSi served as a successor to the DS and was later succeeded by the 3DS. The security systems of both the DS and 3DS have been fully analysed and defeated. However, for over 14 years the security systems of the Nintendo DSi remained standing and had not been fully analysed. To that end, this work builds on existing research and demonstrates the use of a second-order fault injection attack to extract the ROM bootloaders stored in the custom system-on-chip used by the DSi. We analyse the effect of the induced fault and compare it to theoretical fault models. Additionally, we present a security analysis of the extracted ROM bootloaders and develop a modchip using cheap off-the-shelf components. The modchip allows jailbreaking the console and, more importantly, resurrecting consoles previously assumed irreparable.
Private Eyes: Zero-Leakage Iris Searchable Encryption
This work introduces Private Eyes, the first zero-leakage biometric database. The system's only leakage is unavoidable: 1) the logarithm of the dataset size and 2) the fact that a query occurred. Private Eyes is built from oblivious symmetric searchable encryption. Approximate proximity queries are used: given a noisy reading of a biometric, the goal is to retrieve all stored records that are close enough according to a distance metric.
Private Eyes combines locality-sensitive hashes (LSHs) (Indyk and Motwani, STOC 1998) with oblivious maps, which map keywords to values. One computes many LSHs of each record in the database and uses these hashes as keywords in the oblivious map, with the matching biometric readings concatenated as the value. At search time, given a noisy reading, one computes its LSHs and retrieves the disjunction of the resulting values from the map. The underlying oblivious map needs to answer disjunction queries efficiently.
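A toy bit-sampling LSH for Hamming distance makes the indexing step concrete (the parameters and hash construction here are illustrative, not the paper's tuned iris pipeline):

    import hashlib, random

    def lsh_keys(bits: str, n_hashes=16, bits_per_hash=8, seed=0):
        # Each hash projects the biometric bit-string onto a fixed random
        # subset of positions; nearby readings collide on many projections.
        # The fixed seed ensures indexing and querying agree on the subsets.
        rng = random.Random(seed)
        keys = []
        for h in range(n_hashes):
            positions = rng.sample(range(len(bits)), bits_per_hash)
            proj = "".join(bits[i] for i in positions)
            keys.append(hashlib.sha256(f"{h}|{proj}".encode()).hexdigest())
        return keys

    # Indexing: store each record under all of its LSH keys in the oblivious
    # map. Querying: compute the keys of the noisy reading and fetch the
    # disjunction of the matching buckets.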
We focus on the iris biometric, which requires a large number of LSHs, approximately $1000$. Boldyreva and Tang's (PoPETS 2021) design yields a suitable map for a small number of LSHs (their application was zero-leakage $k$-nearest-neighbor search).
Our solution is a zero-leakage disjunctive map designed for the setting when most clauses do not match any records. For the iris, on average at most $6\%$ of LSHs match any stored value.
We evaluate using the ND-0405 dataset, which contains $356$ irises suitable for testing. To scale our evaluation, we use a generative adversarial network to produce synthetic irises. Accurate statistics on sizes beyond available datasets are crucial to optimizing the cryptographic primitives, so this tool may be of independent interest.
For the largest tested parameters of a $5000$ synthetic iris database, a search requires $18$ rounds of communication and $25$ms of parallel computation.
Our scheme is implemented and open-sourced.
Computing Isomorphisms between Products of Supersingular Elliptic Curves
The Deligne-Ogus-Shioda theorem guarantees the existence of isomorphisms between products of supersingular elliptic curves over finite fields. In this paper, we present methods for explicitly computing these isomorphisms in polynomial time, given the endomorphism rings of the curves involved. Our approach leverages the Deuring correspondence, enabling us to reformulate computational isogeny problems into algebraic problems in quaternions. Specifically, we reduce the computation of isomorphisms to solving systems of quadratic and linear equations over the integers derived from norm equations. We develop $\ell$-adic techniques for solving these equations when we have access to a low discriminant subring. Combining these results leads to the description of an efficient probabilistic Las Vegas algorithm for computing the desired isomorphisms. Under GRH, it is proved to run in expected polynomial time.
Starfish: A high throughput BFT protocol on uncertified DAG with linear amortized communication complexity
Current DAG-based BFT protocols face a critical trade-off: certified DAGs provide strong security guarantees but require additional rounds of communication to progress the DAG construction, while uncertified DAGs achieve lower latency at the cost of either reduced resistance to adversarial behaviour or higher communication costs.
This paper presents Starfish, a partially synchronous DAG-based BFT protocol that achieves the security properties of certified DAGs, the efficiency of uncertified approaches, and linear amortized communication complexity. The key innovation is Encoded Cordial Dissemination, a push-based dissemination strategy that combines Reed-Solomon erasure coding with Data Availability Certificates (DACs). Each of the $n=3f+1$ validators disseminates complete transaction data for its own blocks while distributing encoded shards for others' blocks, enabling efficient data reconstruction from just $f+1$ shards. Starfish extends the commit rule of previous uncertified DAG BFT to efficiently verify data availability, with committed leader blocks serving as DACs. For large enough transaction data, this design allows Starfish to achieve $O(n)$ amortized communication complexity per committed transaction byte. The average and worst-case end-to-end latencies of Starfish are rigorously proven to be bounded by $7.5\delta$ and $11\delta$ in the steady state, where $\delta$ denotes the actual network delay.
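A minimal sketch of the reconstruction step Encoded Cordial Dissemination relies on: interpreting a block as a degree-$f$ polynomial over a prime field, any $k = f+1$ of the $n = 3f+1$ shards determine it by Lagrange interpolation (toy field; the commitments tying shards to blocks are omitted):

    import random

    def interpolate_at(x, points, q):
        # Evaluate at x the unique degree-(k-1) polynomial through the k
        # given points (xi, yi), working over GF(q).
        total = 0
        for i, (xi, yi) in enumerate(points):
            num = den = 1
            for j, (xj, _) in enumerate(points):
                if i != j:
                    num = num * (x - xj) % q
                    den = den * (xi - xj) % q
            total = (total + yi * num * pow(den, -1, q)) % q
        return total

    def rs_encode(data, n, q):
        # Systematic Reed-Solomon: data symbols are P(0), ..., P(k-1);
        # shard x is (x, P(x)) for x = 0, ..., n-1.
        base = list(enumerate(data))
        return [(x, interpolate_at(x, base, q)) for x in range(n)]

    def rs_decode(shards, k, q):
        # Any k shards pin down P, hence the data symbols.
        pts = shards[:k]
        return [interpolate_at(x, pts, q) for x in range(k)]

    q = 2**31 - 1                                   # toy prime field
    shards = rs_encode([10, 20, 30], n=7, q=q)      # f = 2: k = 3, n = 3f+1
    assert rs_decode(random.sample(shards, 3), 3, q) == [10, 20, 30]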
Experimental evaluation against state-of-the-art DAG BFT protocols demonstrates Starfish's robust performance under steady-state and Byzantine scenarios.
Our results show that strong Byzantine fault tolerance, high performance, and low communication complexity can coexist in DAG BFT protocols, making Starfish particularly suitable for large-scale distributed ledger deployments.
Cryptanalysis of Fruit-F: Exploiting Key-Derivation Weaknesses and Initialization Vulnerabilities
Fruit-F is a lightweight short-state stream cipher designed by Ghafari et al. The authors designed this version after earlier versions of the cipher, viz. Fruit-80/v2, succumbed to correlation attacks; preventing such attacks appears to have been the primary motivation behind the design. Fruit-F has a Grain-like structure with two state registers of size 50 bits each. In addition, the cipher uses an 80-bit secret key and an 80-bit IV. The authors use a complex key-derivation function to update the non-linear register, which prevents the same key-bit alignment across a fixed-length window of keystream bits; this is essentially what stops the correlation attacks.
In this paper, we present two attacks against Fruit-F. The first stems from the fact that the key-derivation function can be rewritten as the Boolean XOR of two key-dependent terms, one of which is the Boolean OR of two key bits. Using this, we show that the cipher does not offer 80-bit security: the effective key space of Fruit-F is slightly less than $2^{80}$, i.e., even a simple brute-force attack costs only around $2^{80}-2^{49}$ time. The second is a differential attack exploiting the cipher's complex initialization process. We show that under certain conditions it is possible to find two initial vectors $V_1$ and $V_2$ that produce identical keystream vectors under any given key. Using this as a distinguisher, it is possible to collect enough linear and quadratic equations in the secret key to recover it in practical time with very few keystream bits.
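The key-space collapse caused by an OR term can be seen at toy scale (our illustration, not Fruit-F's actual derivation): because OR is unbalanced, distinct keys can induce identical derived sequences.

from itertools import product

KEYLEN, STREAM = 8, 16

def derived_stream(k):
    # toy derivation: a key bit XORed with the OR of two other key bits,
    # mimicking the structure "key-dependent term XOR (key bit OR key bit)"
    return tuple(k[t % KEYLEN] ^ (k[(t + 3) % KEYLEN] | k[(t + 5) % KEYLEN])
                 for t in range(STREAM))

streams = {derived_stream(k) for k in product((0, 1), repeat=KEYLEN)}
print(f"{2**KEYLEN} keys -> {len(streams)} distinct derived streams")
# e.g. the all-zero and all-one keys already collide on the all-zero stream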
Evasive LWE: Attacks, Variants & Obfustopia
Evasive LWE (Wee, Eurocrypt 2022 and Tsabary, Crypto 2022) is a recently introduced, popular lattice assumption which has been used to tackle long-standing problems in lattice-based cryptography. In this work, we develop new counter-examples against Evasive LWE in both the private- and public-coin regimes, propose counter-measures that define safety zones, and finally explore modifications to construct full compact FE/iO.
Attacks: Our attacks are summarized as follows.
- The recent work by Hsieh, Lin and Luo [HLL23] constructed the first ABE for unbounded-depth circuits by relying on the (public-coin) ''circular'' evasive LWE assumption, which incorporates circularity into the Evasive LWE assumption. We provide a new attack against this assumption by exhibiting a sampler such that the pre-condition holds but the post-condition is false.
- We demonstrate a counter-example against public-coin evasive LWE which exploits the freedom to choose the error distributions in the pre- and post-conditions. Our attack crucially relies on the error in the pre-condition being larger than the error in the post-condition.
- The recent work by Agrawal, Kumari and Yamada [AKY24a] constructed the first functional encryption scheme for pseudorandom functionalities ($\mathsf{prFE}$) and extended this to obfuscation for pseudorandom functionalities ($\mathsf{prIO}$) [AKY24c] by relying on private-coin evasive LWE. We provide a new attack against the stated assumption.
- The recent work by Branco et al. [BDJ+24] (concurrently to [AKY24c]) provides a construction of obfuscation for pseudorandom functionalities by relying on private-coin evasive LWE. By adapting the counter-example against [AKY24a], we provide an attack against this assumption.
- Branco et al. [BDJ+24] showed that there exist contrived, somewhat ''self-referential'' classes of pseudorandom functionalities for which pseudorandom obfuscation cannot exist. We extend this impossibility result to the setting of pseudorandom functional encryption.
While Evasive LWE was developed specifically to avoid zeroizing attacks, the attacks above show that in some (contrived) settings, the adversary may nevertheless obtain terms in the zeroizing regime.
Counter-measures: Guided by the lessons distilled from the above attacks, we develop counter-measures to protect against them. Our interpretation of the attacks is that Evasive LWE, as defined, is too general -- we suggest restrictions that identify safe zones for the assumption, using which the broken applications can be recovered.
Variants to give full FE and iO: Finally, we show that certain modifications of Evasive LWE, which respect the counter-measures developed above, yield full compact FE in the standard model. We caution that these candidates are presented primarily as targets for cryptanalysis, to further our understanding of this regime of assumptions.
Fast Secure Two-Party ECDSA Signing
ECDSA is a standard digital signature scheme that is widely used in TLS, Bitcoin and elsewhere. Unlike other schemes such as RSA and Schnorr signatures, it is particularly hard to construct efficient threshold signing protocols for ECDSA (and DSA). As a result, the best-known protocols today for secure distributed ECDSA require running heavy zero-knowledge proofs and computing many large-modulus exponentiations for every signing operation. In this paper, we consider the specific case of two parties (and thus no honest majority) and construct a protocol that is approximately two orders of magnitude faster than the previous best. Concretely, a single signing operation for curve P-256 takes approximately 37ms between two standard machine types in Azure (utilizing a single core only). Our protocol is proven secure for sequential composition under standard assumptions using a game-based definition. In addition, we prove security by simulation under a plausible yet non-standard assumption regarding Paillier encryption. We show that partial concurrency (where if one execution aborts then all need to abort) can also be achieved.
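For context, recall the standard ECDSA signing equation over a group of order $q$ with generator $G$ and nonce $k$:
\[
  R = kG, \qquad r = R_x \bmod q, \qquad
  s = k^{-1}\bigl(H(m) + r \cdot \mathit{sk}\bigr) \bmod q .
\]
The nonce inverse $k^{-1}$, absent from Schnorr-style schemes, couples $k$ and $\mathit{sk}$ multiplicatively; this is precisely what makes signing on shared secrets expensive and why two-party protocols, including this one, resort to tools such as Paillier encryption.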
Attacking soundness for an optimization of the Gemini Polynomial Commitment Scheme
We demonstrate an attack on the soundness of a widely known optimization of the Gemini multilinear Polynomial Commitment Scheme (PCS). The attack allows a malicious prover to falsely claim that a multilinear polynomial takes a value of their choice, for any input point. We stress that the original Gemini multilinear PCS and HyperKZG, an adaptation of Gemini, are not affected by the attack.
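As a simplified recap (our notation, which may differ from the papers'), Gemini binds each fold $f^{(j+1)}(X) = f_e(X) + x_{j+1} f_o(X)$ of $f^{(j)}(X) = f_e(X^2) + X f_o(X^2)$ to its parent by checking, at a random point $\beta$,
\[
  f^{(j+1)}(\beta^2)
  = \frac{f^{(j)}(\beta) + f^{(j)}(-\beta)}{2}
  + x_{j+1}\,\frac{f^{(j)}(\beta) - f^{(j)}(-\beta)}{2\beta},
\]
using the opened values $f^{(j)}(\beta)$, $f^{(j)}(-\beta)$, $f^{(j+1)}(\beta^2)$. Soundness hinges on every fold being bound by such openings, which is exactly the kind of step that an over-aggressive optimization of the opening phase can break.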
MULTISS: a protocol for long-term secure distributed storage over multiple remote QKD networks
We introduce MULTISS, a new distributed storage protocol over multiple remote Quantum Key Distribution (QKD) networks that ensures long-term data confidentiality. Our protocol extends LINCOS, a secure storage protocol that uses Shamir secret sharing to distribute data in a single QKD network. Instead, MULTISS uses a hierarchical secret-sharing scheme that makes certain shares mandatory for the reconstruction of the original secret. We prove that MULTISS keeps the stored data secure even if an eavesdropper (1) gets full access to all storage servers of some of the QKD networks or (2) stores all the classical communication between the QKD networks and breaks it later. We demonstrate that this is strictly more secure than LINCOS, which is broken as soon as one QKD network is compromised.
Our protocol, like LINCOS, has a procedure to update the shares stored in each QKD network without reconstructing the original data. In addition, we provide a procedure to recover from a full compromise of one of the QKD networks. In particular, we introduce a version of the protocol that can only be implemented over a restricted class of network topologies, but minimizes the communication required in the recovery procedure.
In practice, the MULTISS protocol is designed for the case of several QKD networks at the metropolitan scale connected to each other through channels secured by classical cryptography. Hence, MULTISS offers a secure distributed storage solution in a scenario that is compatible with the current deployment of quantum networks.
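A minimal sketch of the mandatory-share idea (ours, over a toy field; not the MULTISS protocol itself): split the secret as $s = m + r$, keep $m$ as the mandatory share, and Shamir-share $r$ $k$-of-$n$ among the remaining parties, so that no coalition lacking $m$ learns anything about $s$.

import random

P = 2**127 - 1  # toy prime field (a Mersenne prime)

def shamir_share(secret, k, n):
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, e, P) for e, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def shamir_recover(points):
    # Lagrange interpolation at x = 0
    secret = 0
    for j, (xj, yj) in enumerate(points):
        num = den = 1
        for m, (xm, _) in enumerate(points):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P
    return secret

def hierarchical_share(secret, k, n):
    mandatory = random.randrange(P)          # the share that can never be skipped
    rest = (secret - mandatory) % P
    return mandatory, shamir_share(rest, k, n)

secret = 123456789
mandatory, shares = hierarchical_share(secret, k=3, n=5)
assert (mandatory + shamir_recover(shares[:3])) % P == secret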
Combined Masking and Shuffling for Side-Channel Secure Ascon on RISC-V
Both masking and shuffling are common software countermeasures against side-channel attacks. However, possible combinations of the two countermeasures for increasing and fine-tuning side-channel resilience are less investigated. With this work, we aim to bridge that gap by concretising the security guarantees of several masking and shuffling combinations presented in earlier work and additionally investigating their randomness cost. We subsequently implement these approaches to also analyse their performance. In this context, we present five protected implementations of the new standard for lightweight cryptography, Ascon, on a 32-bit RISC-V architecture: a 3rd-order masked, unshuffled implementation and three combined 3rd-order masked and shuffled implementations. Additionally, we present a levelled implementation where only the particularly vulnerable keyed initialisation and finalisation of the permutation are masked and shuffled, while the rest is only shuffled. To further improve the security and performance of our implementations, we make use of the Probe Isolating Non-Interference (PINI) masked AND gadget, coupled with techniques like bit-slicing and bit-interleaving. Utilising benchmarking and an MI-shortcut security analysis, we pinpoint the masking-shuffling combinations that maximise security at reasonable overheads.
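To illustrate how the two countermeasures compose (a first-order, ISW-style sketch of ours, not the PINI gadget nor the paper's 3rd-order implementations): a masked AND over 2 Boolean shares, with independent lanes processed in a shuffled order. Randomness quality and actual leakage behaviour are out of scope here.

import secrets

MASK32 = 0xFFFFFFFF

def share(x):
    r = secrets.randbits(32)
    return (x ^ r) & MASK32, r              # share0 ^ share1 == x

def masked_and(a, b):
    # first-order ISW AND: c0 ^ c1 == (a0 ^ a1) & (b0 ^ b1)
    (a0, a1), (b0, b1) = a, b
    r = secrets.randbits(32)
    c0 = (a0 & b0) ^ r
    c1 = (a1 & b1) ^ r ^ (a0 & b1) ^ (a1 & b0)
    return c0 & MASK32, c1 & MASK32

def shuffled_pairwise_and(xs, ys):
    # process independent lanes in a random order (the shuffling countermeasure)
    order = list(range(len(xs)))
    for i in range(len(order) - 1, 0, -1):  # Fisher-Yates with secure randomness
        j = secrets.randbelow(i + 1)
        order[i], order[j] = order[j], order[i]
    out = [None] * len(xs)
    for i in order:
        out[i] = masked_and(xs[i], ys[i])
    return out

xs = [share(0xDEADBEEF), share(0x01234567)]
ys = [share(0xCAFEBABE), share(0x89ABCDEF)]
for (x0, x1), (y0, y1), (z0, z1) in zip(xs, ys, shuffled_pairwise_and(xs, ys)):
    assert z0 ^ z1 == (x0 ^ x1) & (y0 ^ y1)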
Further Improvements in AES Execution over TFHE: Towards Breaking the 1 sec Barrier
Making the most of advanced TFHE capabilities such as programmable or circuit bootstrapping, and their generalizations for manipulating data larger than the native plaintext domain of the scheme, is a very active line of research. In this context, AES is a particularly interesting benchmark: it is an example of a nontrivial algorithm that has eluded "practical" FHE execution performance for years, and it will most likely be selected by NIST as a flagship reference in its upcoming call on threshold (homomorphic) cryptography. Since 2023, the algorithm has thus been the subject of renewed attention from the FHE community and has served as a playground to test advanced operators following the LUT-based, $p$-encodings or several circuit-bootstrapping variants, each time leading to further timing improvements. Still, AES is also interesting as a benchmark because of the tension between boolean- and byte-oriented operations within the algorithm. In this paper, we resolve this tension by proposing a new approach, coined "Hippogryph", which consistently combines the (byte-oriented) LUT-based approach with a generalization of the (boolean-oriented) $p$-encodings one to get the best of both worlds. In doing so, we obtain the best timings so far, bringing a single-core execution of the algorithm over TFHE from 46 down to 32 seconds and approaching the 1-second barrier with only a mild amount of parallelism. We stress that all the timings reported in this paper are consistently obtained on the same machine, which is often not the case in previous studies. Lastly, we emphasize that the techniques we develop are applicable beyond AES, since the boolean-byte tension is a recurrent issue when running algorithms over TFHE.
X2X: Low-Randomness and High-Throughput A2B and B2A Conversions for $d+1$ shares in Hardware
The conversion between arithmetic and Boolean masking representations (A2B \& B2A) is a crucial component for side-channel resistant implementations of lattice-based (post-quantum) cryptography. In this paper, we first propose novel $d$-order algorithms for the secure addition (SecADDChain$_q$) and B2A (B2X2A). Our secure adder is well-suited for repeated ('chained') executions, achieved through an improved method for repeated masked modular reduction. The optimized B2X2A gadget removes a full secure addition compared to state-of-the-art B2A approaches, by relying on the X2B operation. This component directly converts a compositely shared variable, consisting of a mix of arithmetic and Boolean sharing, to $d+1$ Boolean shares. This approach reduces the required amount of SecADDs to $2d$, of which $2\cdot\lceil\log_2(d)\rceil$ are max-order.
Secondly, we develop both a first- and high-order masked, unified hardware implementation that can compute both A2B \& B2A conversions for power-of-two ($p$) and prime ($q$) moduli. Compared to state-of-the-art (high-throughput) hardware implementations that only support A2B$_k$, we reduce area utilization for a second-order implementation by 45% to 60% and fresh randomness by up to 62%, while supporting all four types of additive mask conversions. Our first-order design only requires 1,133/2,170 [LUT/FF] on Kintex-7 FPGAs.
Our proposed algorithms are proven secure in the robust probing model and their implementations are validated via practical lab analysis using the TVLA methodology. We experimentally show that our masked implementation is hardened against first- and second-order univariate and multivariate power-based side-channel attacks using 100 million traces, for each mode of operation.
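For reference, what A2B and B2A compute for $d+1=2$ shares modulo $2^k$ can be stated in a few lines (a functional sketch of ours with no side-channel protection whatsoever; the paper's gadgets compute the same maps without ever recombining the secret):

import secrets

K = 16
MOD = 1 << K

def a2b_reference(a0, a1):
    # arithmetic shares (a0 + a1) mod 2^k  ->  Boolean shares b0 ^ b1
    x = (a0 + a1) % MOD        # insecure: recombines the secret
    b1 = secrets.randbits(K)
    return x ^ b1, b1

def b2a_reference(b0, b1):
    # Boolean shares b0 ^ b1  ->  arithmetic shares (a0 + a1) mod 2^k
    x = b0 ^ b1                # insecure: recombines the secret
    a1 = secrets.randbits(K)
    return (x - a1) % MOD, a1

x = 0xBEEF
a1 = secrets.randbits(K); a0 = (x - a1) % MOD
b0, b1 = a2b_reference(a0, a1)
assert b0 ^ b1 == x
c0, c1 = b2a_reference(b0, b1)
assert (c0 + c1) % MOD == x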
PLONK: Permutations over Lagrange-bases for Oecumenical Noninteractive arguments of Knowledge
zk-SNARK constructions that utilize an updatable universal structured reference string remove one of the main obstacles in deploying zk-SNARKs [GKMMM, Crypto 2018]. The important work of Maller et al. [MBKM, CCS 2019] presented $\mathsf{Sonic}$ - the first potentially practical zk-SNARK with fully succinct verification for general arithmetic circuits with such an SRS.
However, the version of $\mathsf{Sonic}$ enabling fully succinct verification still requires relatively high proof construction overheads. We present a universal SNARK construction with fully succinct verification and significantly lower prover running time (roughly 7.5-20$\times$ fewer group exponentiations than [MBKM] in the fully succinct verifier mode, depending on circuit structure).
Similarly to [MBKM], we rely on a permutation argument based on Bayer and Groth [Eurocrypt 2012]. However, we focus on ``evaluations on a subgroup rather than coefficients of monomials'', which enables simplifying both the permutation argument and the arithmetization step.
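In simplified single-column form, the evaluation-based permutation check works as follows: for wire values $w_i$ on a subgroup, a copy permutation $\sigma$, and verifier challenges $\beta, \gamma$, the prover establishes
\[
  \prod_{i} \frac{w_i + \beta\, i + \gamma}{\,w_i + \beta\,\sigma(i) + \gamma\,} \;=\; 1
\]
via a running-product polynomial constrained on the subgroup; with high probability over $\beta, \gamma$, this holds iff wires connected by $\sigma$ carry equal values.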
Differential Fault Attack on HE-Friendly Stream Ciphers: Masta, Pasta and Elisabeth
In this paper, we propose differential fault attacks (DFA) on three Homomorphic Encryption (HE) friendly stream ciphers \textsf{Masta}, \textsf{Pasta}, and \textsf{Elisabeth}. Both \textsf{Masta} and \textsf{Pasta} are \textsf{Rasta}-like ciphers with publicly derived and pseudorandom affine layers. The design of \textsf{Elisabeth} is an extension of \textsf{FLIP} and \textsf{FiLIP}, following the group filter permutator paradigm. All three ciphers operate on elements over $\mathbb{Z}_p$ or $\mathbb{Z}_{2^n}$, rather than $\mathbb{Z}_2$. We can recover the secret keys of all the targeted ciphers through DFA. In particular, for \textsf{Elisabeth}, we present a new method to determine the filtering path, which is vital to make the attack practical. Our attacks on various instances of \textsf{Masta} are practical and require only one block of keystream and a single word-based fault. By injecting three word-based faults, we can theoretically mount DFA on two instances of \textsf{Pasta}, \textsf{Pasta}-3 and \textsf{Pasta}-4. For \textsf{Elisabeth}-4, the only instance of the \textsf{Elisabeth} family, we present two DFAs in which we inject four bit-based faults or a single word-based fault. With 15000 normal and faulty keystream words, the DFA on \textsf{Elisabeth}-4 can be completed in just a few minutes.
Analysis of One Certificateless Authentication and Key Agreement Scheme for Wireless Body Area Network
We show that the certificateless authentication scheme [Mob. Networks Appl. 2022, 27, 346-356] fails to preserve anonymity, contrary to what is claimed. The scheme neglects the basic properties of the bit-wise XOR operator and attempts to encrypt data with it. This negligence results in some trivial equalities, from which the adversary can retrieve the user's identity from a single string captured on the open channel.
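The flaw class is easy to demonstrate generically (our toy numbers, not the scheme's exact messages): XOR with reused material is not encryption, since XORing two captured strings cancels the mask.

id_alice = 0x416C696365     # "Alice" as an integer
nonce    = 0x0000000042
h        = 0x1F2E3D4C5B     # "mask" reused across messages

msg1 = id_alice ^ h         # captured on the open channel
msg2 = nonce ^ h            # captured later
print(hex(msg1 ^ msg2))     # == id_alice ^ nonce: the mask h is gone
assert msg1 ^ msg2 == id_alice ^ nonce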
ThreatLens: LLM-guided Threat Modeling and Test Plan Generation for Hardware Security Verification
Current hardware security verification processes predominantly rely on manual threat modeling and test plan generation, which are labor-intensive, error-prone, and struggle to scale with increasing design complexity and evolving attack methodologies. To address these challenges, we propose ThreatLens, an LLM-driven multi-agent framework that automates security threat modeling and test plan generation for hardware security verification. ThreatLens integrates retrieval-augmented generation (RAG) to extract relevant security knowledge, LLM-powered reasoning for threat assessment, and interactive user feedback to ensure the generation of practical test plans. By automating these processes, the framework reduces the manual verification effort, enhances coverage, and ensures a structured, adaptable approach to security verification. We evaluated our framework on the NEORV32 SoC, demonstrating its capability to automate security verification through structured test plans and validating its effectiveness in real-world scenarios.
Exact Formula for RX-Differential Probability through Modular Addition for All Rotations
This work presents an exact and compact formula for the probability of rotation-xor differentials (RX-differentials) through modular addition, for arbitrary rotation amounts, which has been a long-standing open problem. The formula comes with a rigorous proof and is also verified by extensive experiments.
Our formula uncovers an error in a recent work from 2022 that proposed a formula for rotation amounts bigger than 1. Surprisingly, it also affects the correctness of the more studied and used formula for the rotation amount equal to 1 (from TOSC 2016). Specifically, it uncovers rare cases where the assumptions of that formula do not hold. A correct formula for arbitrary rotations now opens up a larger search space where one can often find better trails.
For applications, we propose automated mixed integer linear programming (MILP) modeling techniques for searching optimal RX-trails based on our exact formula. They are consequently applied to several ARX designs, including Salsa, Alzette and a small-key variant of Speck, and yield many new RX-differential distinguishers, some of them based on provably optimal trails. In order to showcase the relevance of the RX-differential analysis, we also design Malzette, a 12-round Alzette-based permutation with maliciously chosen constants, which has a practical RX-differential distinguisher, while standard differential/linear security arguments suggest sufficient security.
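By definition, any closed-form expression for this probability can be sanity-checked empirically on small word sizes; a Monte-Carlo sketch (ours, with illustrative parameters):

import random

W, MOD = 8, 1 << 8

def rotl(x, r):
    return ((x << r) | (x >> (W - r))) & (MOD - 1)

def rx_diff_prob(r, dx, dy, dz, trials=1 << 18):
    # Pr[ rotl(x+y, r) ^ ((rotl(x, r)^dx) + (rotl(y, r)^dy)) == dz ] over random x, y
    hits = 0
    for _ in range(trials):
        x, y = random.randrange(MOD), random.randrange(MOD)
        xp, yp = rotl(x, r) ^ dx, rotl(y, r) ^ dy
        if rotl((x + y) % MOD, r) ^ ((xp + yp) % MOD) == dz:
            hits += 1
    return hits / trials

print(rx_diff_prob(r=2, dx=0, dy=0, dz=0))  # probability of the trivial RX-differential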
Jump, It Is Easy: JumpReLU Activation Function in Deep Learning-based Side-channel Analysis
Deep learning-based side-channel analysis has become a popular and powerful option for side-channel attacks in recent years. One of the main directions that the side-channel community explores is how to design efficient architectures that can break targets with as few attack traces as possible, but also how to build such architectures consistently.
In this work, we explore the usage of the JumpReLU activation function, which was designed to improve the robustness of neural networks. Intuitively speaking, improving the robustness seems a natural requirement for side-channel analysis, as hiding countermeasures could be considered adversarial attacks.
In our experiments, we explore three strategies: 1) exchanging the activation functions with JumpReLU at the inference phase, 2) training common side-channel architectures with JumpReLU, and 3) conducting hyperparameter search with JumpReLU as the activation function. While the first two options do not yield improvements in results (but also do not show worse performance), the third option brings advantages, especially considering the number of neural networks that break the target. As such, we conclude that using JumpReLU is a good option for improving the stability of attack results.
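For reference, JumpReLU is the identity above a jump threshold $\kappa$ and zero below it (ReLU is the special case $\kappa = 0$); a minimal NumPy version (the exact boundary convention is a detail):

import numpy as np

def jump_relu(x, kappa=1.0):
    # zero until x exceeds the jump threshold, then the identity
    return x * (x > kappa)

x = np.linspace(-2, 3, 11)
print(jump_relu(x))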
Is Your Bluetooth Chip Leaking Secrets via RF Signals?
In this paper, we present a side-channel attack on the hardware AES accelerator of a Bluetooth chip used in millions of devices worldwide, ranging from wearables and smart home products to industrial IoT. The attack leverages information about AES computations unintentionally transmitted by the chip together with RF signals to recover the encryption key. Unlike traditional side-channel attacks that rely on power or near-field electromagnetic emissions as sources of information, RF-based attacks leave no evidence of tampering, as they do not require package removal, chip decapsulation, or additional soldered components. However, side-channel emissions extracted from RF signals are considerably weaker and noisier, necessitating more traces for key recovery. The presented profiled machine learning-assisted attack can recover the full encryption key from 90,000 traces captured at a one-meter distance from the target device, with each trace being an average of 10,000 samples per encryption. This is a twofold improvement over the correlation analysis-based attack on the same AES accelerator.
Breaking and Fixing Content-Defined Chunking
Content-defined chunking (CDC) algorithms split streams of data into smaller blocks, called chunks, in a way that preserves chunk boundaries when the data is partially changed. CDC is ubiquitous in applications that deduplicate data such as backup solutions, software patching systems, and file hosting platforms. Much like compression, CDC can introduce leakage when combined with encryption: fingerprinting attacks can exploit chunk length patterns to infer information about the data.
To address these risks, many systems—mainly in the cloud backup setting—have developed bespoke mitigations by mixing a cryptographic key into the chunking process. We study these keyed CDC (KCDC) schemes “in the wild”, presenting efficient key recovery attacks against five different KCDC schemes, deployed in the backup solutions Borg, Bupstash, Duplicacy, Restic, and Tarsnap. Our attacks are in a realistic threat model that relies only on weak known or chosen-plaintext capabilities. This shows, in particular, that they fail to protect against fingerprinting attacks. To demonstrate practical exploitability, we also present “end-to-end” attacks on three complete encrypted backup applications, namely Borg, Restic and Tarsnap. These build on our attacks on the underlying KCDC schemes.
In an effort to tackle these problems, we introduce the first formal treatment for KCDC schemes and propose a provably secure construction that fulfills a strong notion of security. We benchmark our construction against existing (broken) approaches, showing that it has competitive performance. In doing so, we take a step towards making real-world systems that rely on KCDC more resilient to attacks.
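To fix ideas, here is a sketch of the kind of keyed chunker studied (an illustrative Gear-style variant of ours; the paper's attacks show that such bespoke keying fails, and its provably secure construction differs): a rolling hash with a key-derived table declares a boundary whenever the low bits of the hash vanish.

import hashlib

MASK = (1 << 13) - 1            # average chunk size around 8 KiB

def keyed_gear_table(key: bytes):
    # derive a 256-entry 64-bit table from the key (illustrative KDF)
    return [int.from_bytes(hashlib.sha256(key + bytes([b])).digest()[:8], "big")
            for b in range(256)]

def chunk(data: bytes, table):
    chunks, start, h = [], 0, 0
    for i, byte in enumerate(data):
        h = ((h << 1) + table[byte]) & 0xFFFFFFFFFFFFFFFF
        if h & MASK == 0:       # content-defined boundary
            chunks.append(data[start:i + 1])
            start, h = i + 1, 0
    chunks.append(data[start:])
    return chunks

table = keyed_gear_table(b"secret key")
data = hashlib.sha256(b"seed").digest() * 4096      # ~128 KiB of varied bytes
print([len(c) for c in chunk(data, table)][:8])     # these lengths are what leaks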
Soloist: Distributed SNARKs for Rank-One Constraint System
Distributed SNARKs enable multiple provers to collaboratively generate proofs, enhancing the efficiency and scalability of large-scale computations. The state-of-the-art distributed SNARK for Plonk, Pianist (S\&P '24), achieves constant proof size, constant amortized communication complexity, and constant verifier complexity. However, when proving the Rank-One Constraint System (R1CS), a widely used intermediate representation for SNARKs, Pianist must perform the transformation from R1CS into Plonk before proving, which can introduce a start-up cost of $10\times$ due to the expansion of the statement size. Meanwhile, existing distributed SNARKs for R1CS, e.g., DIZK (USENIX Sec. '18) and Hekaton (CCS '24), fail to match the superior asymptotic complexities of Pianist.
We propose $\textsf{Soloist}$, an optimized distributed SNARK for R1CS. $\textsf{Soloist}$ achieves constant proof size, constant amortized communication complexity, and constant verifier complexity, relative to the R1CS size $n$. Utilized with $\ell$ sub-provers, its prover complexity is $O(n/\ell \cdot \log(n/\ell))$. The concrete prover is $\ell\times$ as fast as the R1CS-targeted Marlin (Eurocrypt '20). For zkRollups, $\textsf{Soloist}$ can prove more transactions, with $2.5\times$ smaller memory costs, $2.8\times$ faster preprocessing, and $1.8\times$ faster proving than Pianist.
$\textsf{Soloist}$ leverages an improved inner product argument and a new batch bivariate polynomial commitment variant of KZG (Asiacrypt '10). To achieve constant verification, we propose a new preprocessing method with a lookup argument for unprescribed tables, which are usually assumed pre-committed in prior works. Notably, all these schemes are equipped with scalable distributed mechanisms.
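Recall the statement format: an R1CS instance is given by matrices $A, B, C \in \mathbb{F}^{m \times n}$, and a vector $z$ (public inputs plus witness) satisfies it iff
\[
  (Az) \circ (Bz) \;=\; Cz ,
\]
where $\circ$ denotes the entrywise (Hadamard) product. Proving this form natively is what lets $\textsf{Soloist}$ avoid the up-to-$10\times$ statement expansion incurred by an R1CS-to-Plonk translation.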
A new stand-alone MAC construct called SMAC
In this paper, we present a new efficient stand-alone MAC construct named SMAC, whose processing is based on the Finite State Machine (FSM) part of the SNOW family of stream ciphers, which in turn uses the AES round function. It offers a combination of very high speed in software and hardware with a truncatable tag. Three concrete base versions of SMAC are proposed, each offering a different security level. SMAC can also be directly integrated with an external ciphering engine in an AEAD mode. Every design decision is thoroughly justified and supported by the results of our cryptanalysis and simulations. Additionally, we introduce an aggregated mode version, SMAC-1$\times n$, in which software performance reaches up to 925 Gbps (around 0.038 cycles per byte) for long messages in a single thread. To the best of our knowledge, SMAC achieves record-breaking software performance compared to all known MAC engines.
Identity-Based Matchmaking Encryption, Revisited: Efficient Constructions with Strong Security
Identity-based matchmaking encryption (IB-ME) [Ateniese et al., Crypto 2019] allows users to communicate privately, anonymously, and authentically. After the seminal paper by Ateniese et al., much work has been done on the security and construction of IB-ME. In this work, we revisit the security definitions of IB-ME and provide improved constructions. First, we classify the existing security notions of IB-ME, systematically categorizing privacy into three categories (CPA, CCA, and privacy in the case of mismatch) and authenticity into four categories (NMA and CMA, both against insiders and outsiders). In particular, we reconsider privacy when the sender's identity is mismatched during decryption and provide a new simple security game called mismatch security, capturing its essence. Second, we propose efficient and strongly secure IB-ME schemes from the bilinear Diffie-Hellman assumption in the random oracle model and from anonymous identity-based encryption and identity-based signature in the quantum random oracle model. The first scheme is based on Boneh-Franklin IBE, similar to the Ateniese et al. scheme, but ours achieves a more compact decryption key and ciphertext and stronger CCA-privacy, CMA-authenticity, and mismatch security. The second scheme is an improved generic construction, which achieves not only stronger security but also the shortest ciphertext among existing generic constructions. This generic construction provides a practical scheme from lattices in the quantum random oracle model.
Crescent: Stronger Privacy for Existing Credentials
We describe Crescent, a construction and implementation of privacy-preserving credentials. The system works by upgrading the privacy features of existing credentials, such as JSON Web Tokens (JWTs) and Mobile Driver's License (mDL) and as such does not require a new party to issue credentials. By using zero-knowledge proofs of possession of these credentials, we can add privacy features such as selective disclosure and unlinkability, without help from credential issuers. The system has practical performance, offering fast proof generation and verification times (tens of milliseconds) after a once-per-credential setup phase. We give demos for two practical scenarios, proof of employment for benefits eligibility (based on an employer-issued JWT), and online age verification (based on an mDL). We provide an open-source implementation to enable further research and experimentation.
This paper is a draft description of our work, aiming to include enough material to convey the functionality and some details of the internals of our new library, available at https://github.com/microsoft/crescent-credentials.
Strong Federated Authentication With Password-based Credential Against Identity Server Corruption
We initiate the study of strong federated authentication with password-based credential against identity server corruption (SaPBC). We provide a refined formal security model, which captures all the necessary security properties in registration, authentication, and session key establishment between a user and an application server. The new model with fine-grained information leakage separates the leakage of password-related files and long-term secrets (including passwords and credentials). Moreover, we present two SaPBC protocols constructed from efficient cryptographic primitives for these corruption scenarios. In addition to rigorous security proofs, we also conduct a comprehensive performance evaluation of the two protocols.
Concretely Efficient Correlated Oblivious Permutation
Oblivious permutation (OP) enables two parties, a sender with a private data vector $x$ and a receiver with a private permutation $\pi$, to securely obtain the shares of $\pi(x)$. OP has been used to construct many important MPC primitives and applications such as secret shuffle, oblivious sorting, private set operations, secure database analysis, and privacy-preserving machine learning. Due to its high complexity, OP has become a performance bottleneck in several practical applications, and many efforts have been devoted to enhancing its concrete efficiency. Chase et al. (Asiacrypt'20) proposed an offline-online OP paradigm leveraging a pre-computable resource termed Share Translation. While this paradigm significantly reduces online costs, the substantial offline cost of generating Share Translation remains an area for further investigation.
In this work, we redefine the pre-computable resource as a cryptographic primitive known as Correlated Oblivious Permutation (COP) and conduct in-depth analyses and optimizations of the two COP generation solutions: network-based solution and matrix-based solution. The optimizations for the network-based solution halve the communication/computation cost of constructing a switch (the basic unit of the permutation network) and reduce the number of switches in the permutation network. The optimizations for the matrix-based solution halve the communication cost of small-size COP generation and reduce the cost of large-size COP generation with in-outside permutation decomposition.
We implement our two COP generation protocols and conduct comprehensive evaluations. Taking commonly used 128-bit input data as an example, our network-based and matrix-based solutions are up to $1.7\times$ and $1.6\times$ faster than baseline protocols, respectively.
We further enhance state-of-the-art (SOTA) PSU protocols with our optimized COP, achieving over 25% reduction in communication cost and 35% decrease in execution time. This shows that our COP optimizations bring significant improvements for real-world MPC primitives.
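For intuition, the offline-online paradigm that COP enables can be sketched in a few lines (a simplified sketch of ours; the paper's contribution lies in generating this correlation efficiently). Offline, the receiver holds a random permutation $\rho$ and the parties hold additive shares of $\rho(a)$ for a sender-known random vector $a$; online, one permutation and one masked vector derandomize $(\rho, a)$ to $(\pi, x)$.

import random

P = 2**61 - 1

def apply(p, w):
    return [w[p[i]] for i in range(len(p))]

def offline_cop(n):
    rho = list(range(n)); random.shuffle(rho)       # receiver's random permutation
    a = [random.randrange(P) for _ in range(n)]     # sender's random vector
    v = [random.randrange(P) for _ in range(n)]
    u = [(ra - vi) % P for ra, vi in zip(apply(rho, a), v)]   # u + v = rho(a)
    return (a, u), (rho, v)

def online(x, pi, sender, receiver):
    (a, u), (rho, v) = sender, receiver
    inv_rho = [0] * len(rho)
    for i, r in enumerate(rho):
        inv_rho[r] = i
    tau = [inv_rho[pi[i]] for i in range(len(pi))]  # receiver -> sender; uniform, hides pi
    m = [(xi - ai) % P for xi, ai in zip(x, a)]     # sender -> receiver; a masks x
    y_s = apply(tau, u)                             # sender's output share
    y_r = [(pm + tv) % P for pm, tv in zip(apply(pi, m), apply(tau, v))]
    return y_s, y_r                                 # y_s + y_r = pi(x)

n = 5
x = [random.randrange(P) for _ in range(n)]
pi = list(range(n)); random.shuffle(pi)
y_s, y_r = online(x, pi, *offline_cop(n))
assert [(s + r) % P for s, r in zip(y_s, y_r)] == apply(pi, x)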
Improved Framework of Related-key Differential Neural Distinguisher and Applications to the Standard Ciphers
In recent years, the integration of deep learning with differential cryptanalysis has led to differential neural cryptanalysis, enabling efficient data-driven security evaluation of modern cryptographic algorithms. Compared to traditional differential cryptanalysis, differential neural cryptanalysis enhances the efficiency and automation of the analysis by training neural networks to automatically extract statistical features from ciphertext pairs. As research advances, neural distinguisher construction faces challenges due to the absence of a unified framework capable of cross-algorithm generalization and feature optimization. There is no systematic way to build a framework spanning data formats and network architectures, which limits scalability across diverse ciphers and suitability for combining different cryptanalysis methods. While neural network training is data-driven, we lack interpretable explanations for the quality of differentially generated datasets. Therefore, there is an urgent need to combine cryptographic theory with data-analysis methods to systematically evaluate dataset quality.
This paper proposes a novel framework for constructing related-key neural differential distinguishers that integrates three core innovations: (1) multi-ciphertext multi-difference formats to enhance dataset diversity and feature coverage, (2) structural filtering for prioritizing high-probability differential paths aligned with cryptographic architectures, and (3) a Deep Residual Shrinkage Network (DRSN) with adaptive thresholding to suppress noise and amplify critical differential features. By applying this framework to two standardized algorithms, DES and PRESENT, our results demonstrate significant advancements. For DES, the framework achieves an 8-round related-key neural distinguisher and improves 6/7-round distinguisher accuracy by over 40%. For PRESENT, we construct the first 9-round related-key neural distinguisher, which outperforms existing neural distinguishers in both round coverage and accuracy. Additionally, we employ kernel principal component analysis (KPCA) and K-means clustering to evaluate the quality of differential datasets for DES and PRESENT, revealing that clustering compactness strongly correlates with distinguisher performance. Furthermore, we propose a validation algorithm to verify differential combinations with cryptographic advantages from a machine learning perspective, identifying ‘good’ plaintext-key differential combinations. We apply this approach to the SIMECK algorithm, demonstrating its broad applicability. These findings validate the framework’s effectiveness in bridging cryptographic analysis with data-driven feature extraction and offer new insights for automated security evaluation of block ciphers.
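To make the data format concrete, here is a sketch of dataset generation for a related-key distinguisher (our toy 16-bit ARX cipher and differences, purely illustrative, not DES or PRESENT): label-1 samples group several ciphertext pairs under fixed plaintext/key differences (the multi-ciphertext format), label-0 samples use random ciphertexts.

import random

M = (1 << 16) - 1

def rotl(x, r):
    return ((x << r) | (x >> (16 - r))) & M

def toy_encrypt(p, keys):
    # toy Speck-like ARX rounds with directly supplied round keys
    l, r = p >> 16, p & M
    for k in keys:
        l = (((rotl(l, 9) + r) & M) ^ k)
        r = rotl(r, 2) ^ l
    return (l << 16) | r

def sample(delta_p, delta_k, rounds=5, pairs=4, real=True):
    ks = [random.randrange(1 << 16) for _ in range(rounds)]
    ks2 = [k ^ d for k, d in zip(ks, delta_k)]      # related key
    out = []
    for _ in range(pairs):
        p = random.randrange(1 << 32)
        c0 = toy_encrypt(p, ks)
        c1 = (toy_encrypt(p ^ delta_p, ks2) if real
              else random.randrange(1 << 32))
        out += [c0, c1]
    return out, int(real)

delta_k = [0x0040] + [0] * 4        # illustrative related-key difference
X, y = zip(*[sample(0x00400000, delta_k, real=random.random() < 0.5)
             for _ in range(1000)])  # features and labels for training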
On the Sample Complexity of Linear Code Equivalence for all Code Rates
In parallel with the standardization of lattice-based cryptosystems, the research community in post-quantum cryptography has focused on non-lattice-based hard problems for constructing public-key cryptographic primitives. The Linear Code Equivalence (LCE) Problem has gained attention regarding its practical applications and cryptanalysis.
Recent advancements, including the LESS signature scheme and its candidacy in the NIST standardization for additional signatures, supported LCE as a foundation for post-quantum cryptographic primitives. However, recent cryptanalytic results have revealed vulnerabilities in LCE-based constructions when multiple related public keys are available for one specific code rate. In this work, we generalize these attacks to cover all code rates. We show that the complexity of recovering the private key from multiple public keys is significantly reduced for any code rate. Thus, we advise against constructing specific cryptographic primitives using LCE.
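For reference, LCE in its standard formulation: given generator matrices $G, G' \in \mathbb{F}_q^{k \times n}$ of two $[n,k]$ codes of rate $R = k/n$, find $S \in \mathrm{GL}_k(\mathbb{F}_q)$ and a monomial matrix $Q \in \mathrm{Mono}_n(\mathbb{F}_q)$ (a permutation with nonzero column scalings) such that
\[
  G' = S\,G\,Q .
\]
The attacks generalized here recover this secret pair from multiple related public keys, now for every rate $R$.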
Analyzing Group Chat Encryption in MLS, Session, Signal, and Matrix
We analyze the composition of symmetric encryption and digital signatures in secure group messaging protocols where group members share a symmetric encryption key. In particular, we analyze the chat encryption algorithms underlying MLS, Session, Signal, and Matrix using the formalism of symmetric signcryption introduced by Jaeger, Kumar, and Stepanovs (Eurocrypt 2024). We identify theoretical attacks against each of the constructions we analyze that result from the insufficient binding between the symmetric encryption scheme and the digital signature scheme. In the case of MLS and Session, these translate into practically exploitable replay and reordering attacks by a group-insider. For Signal this leads to a forgery attack by a group-outsider with access to a user’s signing key, an attack previously discovered by Balbás, Collins, and Gajland (Asiacrypt 2023). In Matrix there are mitigations in the broader ecosystem that prevent exploitation. We provide formal security theorems showing that each of the four constructions is secure up to these attacks. Additionally, in Session we identified two attacks outside the symmetric signcryption model. The first allows a group-outsider with access to an exposed signing key to forge arbitrary messages, and the second allows outsiders to replay ciphertexts.
HIPR: Hardware IP Protection through Low-Overhead Fine-Grain Redaction
Hardware IP blocks have been subjected to various forms of confidentiality and integrity attacks in recent years due to the globalization of the semiconductor industry. System-on-chip (SoC) designers are now considering a zero-trust model for security, where an IP can be attacked at any stage of the manufacturing process for piracy, cloning, overproduction, or malicious alterations. Hardware redaction has emerged as a promising countermeasure to thwart confidentiality and integrity attacks by untrusted entities in the globally distributed supply chain. However, existing redaction techniques provide this security at high overhead costs, making them unsuitable for real-world implementation. In this paper, we propose HIPR, a fine-grain redaction methodology that is robust, scalable, and incurs significantly lower overhead compared to existing redaction techniques. HIPR redacts security-critical Boolean and sequential logic from the hardware design, performs interconnect randomization, and employs multiple overhead optimization steps to reduce overhead costs. We evaluate HIPR on open-source benchmarks and reduce area overheads by 1 to 2 orders of magnitude compared to state-of-the-art redaction techniques without compromising security. We also demonstrate that the redaction performed by HIPR is resilient against conventional functional and structural attacks on hardware IPs. The redacted test IPs used to evaluate HIPR are available at: https://github.com/UF-Nelms-IoT-Git-Projects/HIPR.