Paper 2024/840

Batching-Efficient RAM using Updatable Lookup Arguments

Moumita Dutta, Indian Institute of Science Bangalore
Chaya Ganesh, Indian Institute of Science Bangalore
Sikhar Patranabis, IBM Research India
Shubh Prakash, Indian Institute of Science Bangalore
Nitin Singh, IBM Research India
Abstract

RAM (random access memory) is an important primitive in verifiable computation. In this paper, we focus on realizing RAM with an efficient batching property, i.e., proving a batch of $m$ updates on a RAM of size $N$ while incurring a cost that is sublinear in $N$. Classical approaches based on Merkle trees or address-ordered transcripts to model RAM correctness are either concretely inefficient, or incur linear overhead in the size of the RAM. Recent works explore cryptographic accumulators based on unknown-order groups (RSA, class groups) to model the RAM state. While recent RSA-accumulator-based approaches offer significant improvement over classical methods, they incur linear overhead in the size of the accumulated set to compute witnesses, as well as prohibitive constant overheads. We realize a batching-efficient RAM with superior asymptotic and concrete costs as compared to existing approaches. Towards this: (i) we build on recent constructions of lookup arguments to allow efficient lookups even in the presence of table updates, and (ii) we realize a variant of the sub-vector relation addressed in prior works, which we call committed index lookup. We combine the two building blocks to realize a batching-efficient RAM with sublinear dependence on the size of the RAM. Our construction incurs an amortized proving cost of $\tilde{O}(m\log m + \sqrt{mN})$ for a batch of $m$ updates on a RAM of size $N$. Our results also benefit the recent arguments for the sub-vector relation, by enabling them to be efficient in the presence of updates to the table. We believe that this is a contribution of independent interest. We implement our solution to evaluate its concrete efficiency. Our experiments show that it offers significant improvement over existing works on batching-efficient accumulators/RAMs, with a substantially reduced resource barrier.
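As a rough back-of-the-envelope illustration (not taken from the paper, and with all hidden constants and polylogarithmic factors omitted), the following sketch compares the stated amortized proving cost $\tilde{O}(m\log m + \sqrt{mN})$ against a baseline that is linear in the RAM size $N$:

```python
import math

def batched_cost(m, n):
    """Illustrative cost model: m*log2(m) + sqrt(m*n) abstract 'units'.

    Mirrors the shape of the amortized bound O~(m log m + sqrt(mN))
    for a batch of m updates on a RAM of size N; constants and
    polylog factors are deliberately ignored.
    """
    return m * math.log2(m) + math.sqrt(m * n)

def linear_cost(n):
    """Baseline approaches incur cost linear in the RAM size N."""
    return n

# Example: a batch of m = 2^10 updates on a RAM of size N = 2^20.
m, N = 1 << 10, 1 << 20
print(batched_cost(m, N))  # 43008.0 units: 10240 + 32768
print(linear_cost(N))      # 1048576 units
```

For these (hypothetical) parameters the batched cost is roughly 4% of the linear baseline, which conveys why sublinear dependence on $N$ matters once the RAM is much larger than the batch.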

Note: This is the full version of a paper to appear at ACM CCS 2024, with detailed proofs and discussions.

Metadata
Available format(s)
PDF
Category
Cryptographic protocols
Publication info
Published elsewhere. Major revision. ACM CCS 2024
DOI
10.1145/3658644.3670356
Keywords
Succinct Arguments, Efficient RAM, Indexed lookups, Lookup Arguments, Rollups
Contact author(s)
moumitadutta @ iisc ac in
chaya @ iisc ac in
sikhar patranabis @ ibm com
shubhprakash @ iisc ac in
nitisin1 @ in ibm com
History
2024-06-29: last of 3 revisions
2024-05-29: received
Short URL
https://ia.cr/2024/840
License
Creative Commons Attribution
CC BY

BibTeX

@misc{cryptoeprint:2024/840,
      author = {Moumita Dutta and Chaya Ganesh and Sikhar Patranabis and Shubh Prakash and Nitin Singh},
      title = {Batching-Efficient {RAM} using Updatable Lookup Arguments},
      howpublished = {Cryptology ePrint Archive, Paper 2024/840},
      year = {2024},
      doi = {10.1145/3658644.3670356},
      note = {\url{https://eprint.iacr.org/2024/840}},
      url = {https://eprint.iacr.org/2024/840}
}