02:10:30 Hi all. Long time lurker here. I wanted to share an Arcturus implementation I am working on. I hope to demonstrate some competitive performance benchmarks to maybe stir some more interest. I already have numbers outperforming Lelantus on a modest laptop. Anyways, I figured I would share if anyone finds it useful. 02:10:30 Disclaimer: This library is just a toy, so not recommended for serious use. 02:10:30 https://github.com/cargodog/arcturus 02:12:07 Thanks, sounds interesting 03:15:15 Nice! But Arcturus is not a ring signature scheme :) 03:36:15 Is there a policy against job posting on this chat? 03:44:16 Ah, indeed, calling it a "ring signature" is inaccurate! Stayed up too late hacking code the last few days, and didn't put enough time into the docs 🥴 03:57:32 Welcome, cargodog[m] 👋 04:27:04 Thanks Isthmus 👋 10:37:34 cargodog[m]: Does the code support batch verification with common input sets? 10:37:43 mikerah[m]: job posting? 11:59:53 I'm hiring and I know the expertise in this group is what I'm looking for 12:06:30 I guess I don't have a problem with it being brought up in this channel, but I don't know how other people feel about that 12:06:46 Could certainly mention in #monero-research-lounge to be safe 12:07:57 IMHO if it's just a one liner with a link to something that really is about related research, it's fine. It's when people repeatedly spam the same thing several times it gets irritating. 12:08:51 I agree 12:22:35 Thanks sarang and moneromooo 12:23:26 Here it is: https://iacr.org/jobs/item/2294 . I'm hiring for a Research Engineer (more emphasis on the Engineer part). 
If interested, you can send an email to careers⊙hc with the subject line "Research Engineer" along with your resume and something interesting you've built or published 12:41:59 Another very minor update to the Arcturus preprint: https://eprint.iacr.org/2020/312 12:42:11 and the corresponding diff: https://github.com/SarangNoether/skunkworks/commit/c2e05776743fbc6acdbf4217c56f3b2f0d7bed7d 12:42:30 and the diff for the previous, more substantial, update: https://github.com/SarangNoether/skunkworks/commit/12f4579511417bd6700f93b1d32963aee7a2e93a 12:42:55 I'd make the same update to Triptych too, but this would push it over the page limit for ESORICS 12:43:10 because apparently page limits are still a necessary thing for some reason :/ 12:43:42 No change to security or anything, just better explanation of the completeness proof 13:26:39 sarang: Indeed it does support batch verification. The default verification method (`ArcturusGens::verify()`) allows you to provide a batch of proofs that share the same anonymity set. I'm currently working on improving the docs to make this more understandable. 13:26:39 I realize comparing Arcturus to Lelantus is not "apples to apples", but using a similar config (anonymity set of 65535, 3 inputs per TX, 3 outputs per TX), I was able to outperform the numbers published in the Lelantus paper on my comparable laptop. I'm not confident in my benchmarks yet, so I don't want to share numbers and make bold claims yet, but initial results are very promising. Arcturus really is 13:26:39 beautifully efficient :D 13:27:18 cargodog[m]: did you use a similar Lelantus implementation, with the same underlying curve library? 13:27:25 sarang: oops, forgot to tag you in my response 13:29:15 No I did not, hence all my disclaimers and reluctance to share concrete numbers :) 13:29:15 I am only basing this off of the numbers published in the Lelantus paper.
My goal now is to get Lelantus building and provide more meaningful comparison 13:29:23 Ah ok 13:29:43 " (ArcturusGens::verify()) allows you to provide a batch of proofs that share the same anonymity set" 13:29:44 Does this mean that if I wanted to create a tx, I could use every previous output as a decoy? 13:29:49 Yeah, I didn't put much weight on the comparison numbers in the Lelantus preprint for that reason 13:30:01 Granted that everyone else does it too 13:30:18 I always prefer op counts for this reason 13:30:34 Unless there are implementations with common libraries and (ideally) optimization methods 13:30:56 So essentially everyone in the block uses the same anonymity set, and the nodes batch verify every transaction 13:32:13 kenshamir[m]: one option is to use multiple fixed input sets, so proofs sharing even some of the fixed sets could get some benefit from partial batching 13:32:33 kenshamir: Technically yes, but while Arcturus is light years ahead of anything else, very large anonymity sets still take a long time to prove/verify. I was able to get proof/verification on the order of milliseconds with an anonymity set of ~65535 outputs, but that seems to be a practical upper limit (IMO) that could be used in real time transaction processing 13:32:36 And depending on the input selection method, proofs in nearby blocks/transactions are more likely to share such inputs 13:32:56 Indeed. And looking at op counts alone, Arcturus is a clear winner :D 13:33:12 :D 13:33:30 I hope the preprint and/or Python/C++ implementations were clear and useful! 13:33:39 I'd welcome any comments or suggestions you have on them 13:33:51 esp. since Arcturus is not currently under consideration for publication 13:33:57 This has definitely got me interested in Articus :D 13:33:58 (that damn hardness assumption...) 13:33:58 Sorry still can't spell 13:34:12 kenshamir[m]: I am Articus! 13:34:13 *Arcturus 13:34:18 I worked mostly off the preprint.
It was like reading a paint by numbers guide. Very easy to follow :D 13:34:18 No, I am Articus 13:34:34 Nice! I have yet to look into the code, but will try to do so today 13:34:39 No... 13:34:47 I am Articus! 13:34:58 I'm especially interested to see how you implemented batch weighting etc. 13:35:17 My Python proof-of-concept implementation shows one way to do it that reduces to a single multiscalar multiplication 13:35:21 Hardness assumption is actually my primary motivation here. I hope if I can put forth some impressive numbers, that may attract more eyes to the protocol, and hopefully some of those eyes can give us confidence (or debunk) the hardness assumption 13:35:22 with maximal generator reuse 13:35:27 I assume you do this as well for efficiency 13:35:33 cargodog[m]: awesome! 13:36:11 I was really disappointed that the initial Arcturus reviewer didn't do a better job with their supposed counterexample, which was incomplete and frankly really shoddy work 13:36:39 Yes, maximal generator reuse 13:36:39 In Arcturus, what is the bottleneck for large anonymity sets? 13:37:02 Verification is still linear in input set size 13:37:08 technically O(n/lg(n)) 13:37:09 the `ArcturusGens` provides a context of generators that can be largely reused 13:37:24 cargodog[m]: I meant between verification equations 13:37:36 e.g. all the commitment stuff contains reusable generators 13:37:47 so you can verify Eqs. 1 and 2 in half the time 13:37:51 (half-ish) 13:38:04 Again, I should just read the damn code =p 13:38:18 The practical issue I find is coming up with a construct that allows users to build TXs with a shared set, without revealing information about their members within the set.... I'm not sure there's an easy solution to this 13:38:40 Lelantus' method is to use a windowed common input set 13:38:48 They have a blog post about it somewhere 13:39:09 I don't recall the exact link 13:39:19 sarang: yes, maximal reuse between proofs.
The bit commitments (eq1&2) cannot share gens IIRC, but much of the rest of the proofs share common generators 13:39:46 Sure they can 13:39:59 The tensor commitments all use fixed generators 13:40:14 Those can be weighted together within a proof, and between proofs 13:40:46 And of course you have a few other global generators in Eqs. 3-5 13:40:52 but those are minimal 13:41:09 Lelantus' windowed approach seems untenable to me (no hate). They claim a window of ~65k UTXOs will last 12 months, but they expect 16K minimum before you can spend out of the set... so wait 3 months to spend an output? This also breaks down as you consider a chain that might scale to much larger volumes 13:41:37 Ah, I misspoke. I share generators for the bit commitments, but I verify them individually 13:41:39 eq3-5 are verified in aggregate 13:41:57 Why not combine 1-2 together, and then between proofs? 13:42:09 That's a pretty easy way to shave off some time 13:42:27 It's small compared to the number of total curve points for large sets, but it's not nothing :D 13:43:09 I do combine 1-2 together, but I wasn't sure if I could aggregate across proofs and still have the same guarantee that each proof was only committing to a single input at a time (instead of some malformed commitment to many bits) 13:43:24 I would love to hear I am wrong in this assumption :D 13:43:36 You apply random weights, just like you do between equations 13:43:50 ah duh! 13:43:52 ya 13:44:00 great point 13:44:13 The weighting method is identical across all combinations :) 13:44:17 Thanks, I will add that to my list of improvements! 13:44:33 and then you just sit down and write out how to combine the common gens 13:44:56 cargodog: Which backend did you use for the benchmarks?
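The random-weighting idea sarang describes above — fold many verification equations (within a proof and across proofs) into one check with fresh random weights — can be sketched in a few lines. Scalars mod a prime stand in for curve points here; the equations are toy placeholders, not the actual Arcturus relations:

```python
import secrets

# Toy "group": scalars mod a prime stand in for curve points, so
# a*G is just multiplication mod p. Illustrative only.
p = 2**127 - 1

def verify_batch(equations):
    """Each equation is (lhs, rhs), expected to satisfy lhs == rhs.
    Instead of checking each equality separately, fold them all into
    one accumulator with independent nonzero random weights: a single
    bad equation survives only with negligible probability."""
    acc = 0
    for lhs, rhs in equations:
        w = secrets.randbelow(p - 1) + 1  # fresh random weight in [1, p-1]
        acc = (acc + w * (lhs - rhs)) % p
    return acc == 0

# Two honest "proofs", each contributing two verification equations.
eqs = [(3 * 7 % p, 21), (5 * 11 % p, 55), (2 * 9 % p, 18), (4 * 6 % p, 24)]
assert verify_batch(eqs)

# One forged equation makes the folded check fail.
assert not verify_batch(eqs + [(10, 11)])
```

In the real setting the weighted terms are curve points, so all weighted equations collapse into a single multiscalar multiplication with each generator appearing exactly once, which is where the batching savings come from.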
13:45:08 So yeah, you only need to use any generator exactly once in the multiscalar multiplication 13:45:24 I'm hesitant to look over the code, as I would like to make an implementation when I'm free to see where our ideas diverged 13:46:06 I am using a SIMD backend for the underlying curve library (Ristretto group on Ed25519) and ASM_SIMD backend for the Blake2 implementation, which I use for the indexed hash function 13:46:37 Not Blake3 yet? 13:46:38 =p 13:47:02 (I don't know if there is a good standard implementation of that yet) 13:47:11 Not yet :D 13:47:20 The speed benefits for Blake3 look insane 13:47:38 I could get into a long discussion about selecting hash methods, but probably not fruitful here :p 13:47:47 lol, how so 13:47:59 For speed, or security in particular use cases? 13:48:00 Most Blake3 improvement comes from multi-threaded processing of large sets 13:48:13 I wonder if that was used in the numbers I saw :/ 13:48:19 I did know about the parallelization benefits 13:48:23 but assumed this wasn't part of comparisons 13:48:27 Maybe that was not the case 13:48:29 but on small sets, it's practically similar to Blake2b 13:48:39 Blake2 it is! 13:48:40 heh 13:48:54 At any rate, surely that's peanuts compared to the bulk of the curve operations 13:48:59 cargodog: was there anything you found difficult while implementing Arcturus? Speaking in general. Always nice to hear problems when implementing cryptography or maybe some general "gotchas" that you did not expect 13:49:31 My understanding was their published numbers leveraged parallelization, and their point was that wasn't possible with other hash functions. Which is great, no doubt! But not an apples to apples comparison, and not perfect for all situations 13:49:38 boo 13:49:53 Peanuts indeed :D 13:49:54 But I suppose it's still really helpful for large data 13:51:00 It's so cool to see another implementation in place! 13:51:14 kenshamir: Hard to say.
ZK proofs are generally not "easy", but I found the Arcturus paper quite easy to follow compared to other proof systems 13:51:18 My Python implementation was just an easy proof-of-concept for the algebra, and to demonstrate it 13:51:32 The C++ implementation is useful for timing, but uses the Monero libraries 13:51:48 I suppose the batching was perhaps more tricky, since it wasn't specifically written out? 13:52:02 I thought about it, but figured reviewers would just request to have that removed anyway 13:52:19 I actually never saw your Python implementation. I'd love to take a look. Have a link? 13:52:35 https://github.com/SarangNoether/skunkworks/tree/arcturus 13:52:36 Batching was the most difficult piece to wrap my head around 13:52:50 Huh, looks like I didn't implement cross-proof batching after all... thought I had 13:53:01 but I only do intra-proof combinations 13:53:05 I should do cross-proof too 13:53:25 Yeah, batching ends up becoming an irritating game of keeping track of weights and combinations 13:53:49 Note that the Python one uses a custom curve library and isn't intended for security or speed, just as a demo 13:53:56 I have to step out for now o/ 13:53:57 I will check back later if you have any questions. Also, feel free to drop an issue on github (or even submit a patch :D). My contact email can also be found in the project Cargo.toml 13:54:11 I also show in that code how to embed secret data in proofs for later extraction, using a common PRNG seed 13:54:17 Thanks cargodog[m] 13:54:35 I look forward to that! 13:55:07 I might update the preprint to include that embedding stuff 13:57:16 Gah, I never pushed the weighting stuff in the Python at all...
I'll do that 13:57:24 It's in the C++ timing code though 13:57:29 How silly 13:57:40 https://github.com/SarangNoether/monero/blob/arcturus/src/ringct/arcturus.cc 13:59:19 If you're interested cargodog[m], feel free to stop by next Wednesday at 17:00 UTC to the research meeting here, where you'd be welcome to share your code to more people 14:03:08 I wonder if anyone else has been working on Lelantus implementations in Rust, which could be useful for comparison if there are common-ish benchmarks 14:04:28 But hot dang, it'd be neat if Arcturus was more efficient in practice :D 14:18:32 i thought arcturus used some moon math assumptions and thats why its sorta been a maybe 14:19:40 It does use a new and untested cryptographic hardness assumption 14:20:14 But as cargodog[m] said, having some solid numbers for comparison could help to get additional eyes on it 14:20:31 indeed. 14:20:47 FWIW I consider the assumption to be pretty reasonably 14:20:57 It just doesn't cleanly reduce to a more standard tested assumption 14:21:02 s/reasonably/reasonable 14:21:02 sarang meant to say: FWIW I consider the assumption to be pretty reasonable 14:21:03 Could such an assumption get audited? 
14:21:04 goot bot 14:21:13 good researcher 14:21:15 Ehhhh not really 14:21:18 :o 14:21:55 Such assumptions acquire confidence with time and a lot of people poking them with sticks 14:22:14 A thorough audit from qualified cryptographers could certainly be a step in that direction 14:23:29 Fortunately both Triptych and Arcturus have essentially identical verification complexity 14:23:39 it's the proof size that shows a difference 14:28:01 Submitting the preprint for additional review will also help 14:28:32 I should add some additional narrative around the assumption, to avoid a repeat of the first reviewer, who I really don't think read it completely (and certainly didn't actually work out their counterexample in full) 14:34:13 Oh, cargodog[m]: another note that may be of interest to you 14:34:47 You can actually do a single hash execution for the `mu` terms instead of one each, and substitute a field multiplication instead 14:35:01 So instead of `mu_k = hash(stuff,k)` for each `k` 14:35:24 You set `mu = hash(stuff)` and then define `mu_k = mu^k` 14:35:31 and you can define the former iteratively 14:35:51 So if your scalar ops are sufficiently faster than your hash function, you can shave some time off that 14:36:15 I demo this in the C++ code that I linked 14:37:00 * sarang updates the preprint too 14:37:33 Note that there's nothing wrong with the original method, except the speed difference 14:37:49 s/former/latter 14:37:49 sarang meant to say: and you can define the latter iteratively 14:37:52 good bot 14:37:59 Not my day for typos :( 17:13:51 it's the proof size that shows a difference <---- Which begs the question can we increase the penalty free block weight if needed and go ahead with Triptych? 17:14:41 I think he is referring to the proof size difference between Triptych and Arcturus 17:14:59 Yes this is my point 17:16:48 We can compensate for this if needed by increasing the penalty free block weight above 300000 bytes 17:19:13 I suppose...
but I meant that all other things being equal, it's obviously better to choose a smaller proof size... 17:19:32 Also note that proof size and verification do not scale the same 17:19:39 it's similar to how bulletproofs scale 17:19:42 log size, linear verify 17:20:47 When the confidence in the assumption becomes more mainstream we can then move from Triptych to Arcturus 17:21:11 Heh, we haven't moved to Triptych 17:22:20 Triptych is still way better than the current situation 17:23:22 What concerns me is a form of paralysis here 17:32:55 For example, tune Triptych to be close to the current verification time, and then determine a ring size and proof size. We can then make a reasonable judgment as to whether the change is worthwhile 17:35:28 By tune I mean set the Triptych parameters 17:39:49 I have yet to hear anything about practical multisig concerns, which I have asked about many times 17:41:15 You mean Triptych breaks or may break multisig? 17:59:06 ArticMine, i agree. I feel like optimizations will be found for triptych, or moore's law will come along 17:59:56 but im just a guy with opinions 18:01:06 It requires more complex crypto that implies different library functionality, as I have discussed many times 18:06:50 Yes but, if Triptych is not on the radar then support for multisig is not likely to be addressed 18:08:25 sarang: I think I managed to miss all your questions about "practical multisig concerns" and "different library functionality". Is this about handling multisig which might be more complicated still, or about math and crypto? 18:10:26 Even more complicated handling would not worry me too much, I am of the opinion that for any sensible multisig you need some good tools anyway 18:12:32 AFAIK it's possible, but requires using Paillier (or similar name) crypto, which is fairly new IIRC (or at least the scheme using this crypto is). So a chunk of new crypto code.
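Returning briefly to sarang's `mu_k` note from earlier: the substitution is one transcript hash followed by iterated field multiplications. A minimal sketch — the hash-to-scalar construction and transcript encoding here are illustrative stand-ins, not the library's actual derivation:

```python
import hashlib

# Ed25519/Ristretto scalar field order.
l = 2**252 + 27742317777372353535851937790883648493

def hash_to_scalar(*parts: bytes) -> int:
    # Illustrative encoding; real implementations fix a precise transcript format.
    h = hashlib.blake2b(b"".join(parts), digest_size=64).digest()
    return int.from_bytes(h, "little") % l

transcript = b"proof transcript"

# Original method: one hash invocation per index k.
mus_hashed = [hash_to_scalar(transcript, k.to_bytes(8, "little"))
              for k in range(1, 5)]

# Optimization: hash once, then derive mu_k = mu^k iteratively,
# trading hash calls for (cheaper) field multiplications.
mu = hash_to_scalar(transcript)
mus_powers, acc = [], 1
for _ in range(4):
    acc = acc * mu % l
    mus_powers.append(acc)

assert mus_powers == [pow(mu, k, l) for k in range(1, 5)]
```

The two derivations produce different (but equally valid) challenge sequences; the win only matters when scalar multiplication is meaningfully faster than the hash, as sarang notes.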
18:13:20 Paillier is well understood but assumes math on arbitrary RSA groups 18:14:04 This functionality is required for safely computing the necessary proof components for multisig key images 18:14:41 Do I correctly read that as "a lot of work to implement" and "possibly needs audits"? 18:14:44 So it's not just additional crypto primitives, it's entirely different library support for non-ed25519 stuff 18:14:49 a _lot_ of work 18:14:57 and yes, would _absolutely_ need external review 18:15:04 I see 18:16:00 Well, as of now multisig is a somewhat unloved step-child, but who knows, maybe one day it becomes all the rage, with Monero's importance slowly but steadily ramping up 18:16:06 but FWIW current multisig doesn't use the multi-round work that suraeNoether and I wrote in our threshold multisig paper anyway... 18:16:19 but at least that stuff is all 25519 18:17:11 Note that Omniring, Arcturus, Triptych, RCT3 all require a similar approach 18:17:42 One version of Omniring maintains the current key image structure, but still requires some nonstandard operations in the proofs 18:18:51 And all this in hardcore C++ ... 18:19:45 So yeah, any discussion of a real implementation of this stuff, if multisig is desired, requires a heck of a lot of additional longer-term planning and work 18:19:53 If you don't need multisig, it's much easier 18:20:27 Yes, was thinking about this, you can't say "for the next half year or so multisig takes a break". Either it's there from the start, or it goes overboard, I would say 18:21:39 I think we have to have multisig if it'd be possible to send to a current multisig address and that multisig address would not otherwise be able to spend the monero. 18:21:54 If that's not the case, there's a choice. 18:22:09 So yeah, any discussion of a real implementation of this stuff, if multisig is desired, requires a heck of a lot of additional longer-term planning and work <---- How transferable would this work be to Arcturus for example? 
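For readers unfamiliar with why Paillier keeps coming up: its additive homomorphism lets parties compute on encrypted shares, which is the property the multisig key-image protocols need. A tiny textbook sketch with deliberately insecure toy parameters (this is not the multisig protocol itself, just the primitive):

```python
import math
import secrets

# Textbook Paillier with toy primes -- illustration only, never secure.
p, q = 293, 433                 # real deployments use ~1024-bit primes each
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = math.lcm(p - 1, q - 1)

def L(x: int) -> int:
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)

def enc(m: int) -> int:
    r = secrets.randbelow(n - 2) + 2
    while math.gcd(r, n) != 1:   # r must be a unit mod n
        r = secrets.randbelow(n - 2) + 2
    return pow(g, m, n2) * pow(r, n, n2) % n2

def dec(c: int) -> int:
    return L(pow(c, lam, n2)) * mu % n

# Additive homomorphism: multiplying ciphertexts adds plaintexts.
c = enc(41) * enc(1) % n2
assert dec(c) == 42
```

This "multiply ciphertexts to add plaintexts" property is what allows joint computations on secret shares without revealing them, at the cost of dragging an entire RSA-group arithmetic stack into an otherwise all-ed25519 codebase, which is exactly the library burden sarang describes.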
18:22:46 The math used to compute the key image is identical between Triptych, Arcturus, and RCT3 18:23:55 Just for curiosity: Would it help if we give up tx uniformity, i.e. multisig tx have some different structure in the blockchain? Or is this an altogether stupid question? 18:24:50 atm, a node can't tell if a spender is multisig or not. So you'd have to allow CLSAG and triptych. 18:25:38 Plus there's a joint computation of the output secret key, which means a joint computation of the key image 18:25:57 Is that bad, terrible, or a catastrophe? Such a "double protocol solution" 18:25:58 So then it can make sense to do the multisig work for Triptych and then move to Arcturus for example 18:26:10 Avoiding Paillier means the players all learn the full secret key, and can arbitrarily spend without other players' consent in a race condition 18:26:23 That's definitely bad :) 18:26:42 The tricky part is that the current key image format is linear in the secret key 18:26:50 These new constructions are not linear in the secret key 18:27:06 This is much more efficient, but means you can't get away with simple linear combinations anymore 18:27:12 You need a particular inversion trick 18:27:28 You also need some additional zkps 18:27:33 It gets complicated 18:27:59 It's already what I would call complicated today ...
18:28:10 Well, it would become much more so 18:28:17 Splendid 18:32:03 I can easily imagine a scenario where we try to build our "loose consensus", and that will then result in giving up multisig, with a subsequent shit storm from people in the shadows so far, and also regret later on 18:32:13 I believe we need a more defined course of action, even if this comes down to the slow increase in ring size via the sequence of primes 18:32:37 I do not believe giving up multisig is the way to go 18:33:21 Yeah, but who knows which way opinion would sway, with fast Triptych introduction without multisig in front of the eyes 18:33:46 It's mightily tempting 18:34:00 That's why I worry about that political stuff 18:35:35 The sequence of primes can take care of the political stuff and put pressure on the adversaries. The reason I like it is because technology is not static either 18:36:06 "Sequence of primes" simplifies the crypto and the math? 18:37:02 No. 18:37:05 What I mean is increasing the ring size under the current implementation at each HF to the next prime number 18:37:23 Ah, alright 18:37:40 The prime number thing is merely a coincidence, or a whimsical design choice 18:37:43 but nothing to do with the math 18:37:51 Of course 18:37:55 So no new protocol for possibly a long time then 18:38:21 Cointelegraph: "Usage of primes in Monero is merely a coincidence" 18:38:27 And maybe some breakthrough on the way? 18:38:34 Depends if someone really wants to implement Paillier and the associated multisig protocol 18:38:37 Lol 18:39:28 Really wants, and really does :) 18:39:36 Basic Triptych and Arcturus prove/verify functionality is already implemented in the codebase 18:40:23 Intra-transaction batching for Triptych, too 18:40:45 Depends if someone really wants to implement Paillier and the associated multisig protocol <--- So this is then the holdup for Triptych? 18:40:54 The primary one 18:41:08 What else?
18:41:14 There are other design choices around input anonymity set size, input binning 18:41:34 Output set migration 18:42:19 could the 2 protocol solution have legs? Standard transaction -> triptych. multisig -> CLSAG. 18:42:38 i mean, i have no data, but i can claim no one uses multisig 18:42:38 This would effectively separate the output set 18:42:56 The key image types are different 18:43:14 ... and complicate migration 18:43:32 Migration from where to where? 18:44:06 You wouldn't be able to select old pre-Triptych outputs for rings 18:44:09 I mean, if you are basically free to use whichever protocol suits your fancy today 18:44:16 Since you can't test old and new key images together 18:48:13 This all does not sound very funny. No radically different ideas to the rescue? Some completely different approach with the same end result: Things only move if m/n people give green light, *somehow*? 18:49:02 I would love a method that didn't require a cooperative inversion to compute a joint key image 18:49:17 but it doesn't work 18:49:43 I mean, Bitcoin multisig looks so simple, and so elegant, but I am sure Sarang comes up with an argument why that does not work for Monero in almost zero time :) 18:50:01 Because Bitcoin authorizes transactions using script conditions 18:50:14 Monero authorizes transactions using only signatures 18:50:35 So we introduce scripts!!! (ducks and covers) 18:50:53 Changing that would still require the key image stuff, unless the protocol were radically changed in a way that's almost certainly hugely unsafe 18:51:03 No, seriously, I start to wonder what is the least bad of all solutions 18:52:22 Are signatures, quite in general, safer than scripts? Or does that comparison make no sense? 18:53:03 It enables the signer ambiguity 18:54:46 not that it matters, but the rationale put forth in the great whitepaper is "The scripting system in Bitcoin is a heavy and complex feature.
It potentially allows one to create sophisticated transactions [12], but some of its features are disabled due to security concerns and some have never even been used [13]" 18:55:47 Hmm, wouldn't we need only a small part of it for supporting multisig? 18:56:22 Keep in mind that with the way outputs exist today, you need to be able to compute key images to avoid double-spend attempts 18:56:30 Basically "This tx must have x valid signatures". And then the signatures themselves. 18:56:50 and the key image part is what makes newer constructions complicated 18:56:57 You'd need to be able to tell those sigs aren't from the same keys. 18:57:01 Too bad I never was really able to grok key images 18:57:44 it's sorta just like hashing a file. it's a fingerprint of a given key. so you can detect if there are duplicates. 18:57:45 Everyone has an additive share of the output secret key, and you need to compute a particular group element using the inverse of the sum of those shares 18:57:57 that's the new key image format 18:59:14 You'd need to be able to tell those sigs aren't from the same keys. <--- Is that solved in Bitcoin, or one of said potential problems? 18:59:29 Keys are public in Bitcoin. 18:59:48 Ah. 19:00:06 Makes sense. 19:00:27 well i put up a super sophisticated reddit poll to see if multisig is worth holding up ringsize a bajillion 19:00:49 Don't you dare :) 19:01:13 wow even more sophisticated than I expected 19:01:26 lol 19:03:35 Hopefully people don't answer for themselves... 19:03:48 "Yes, and I keep my key in a box at this bank..." 19:12:20 https://old.reddit.com/r/Monero/comments/ivbz0x/how_many_people_use_monero_multisig/ 19:15:28 some paillier crypto example in python: https://github.com/h4sh3d/poc-threshold-ecdsa-secp256k1/blob/master/paillier.py 19:23:25 well that's a conversation and a half in here. has been a while since we had that sort of natter.
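The "inverse of the sum of the shares" point above is the crux of why joint key images are hard, and it shows up already in plain scalar arithmetic: inversion is not linear, so additive shares of a key do not give you shares of its inverse. A sketch in the Ed25519 scalar field (the key-image form and the two-party split are illustrative):

```python
# New-style key image ~ (1/x)*U, where the output secret key x = x1 + x2
# is additively shared between two signers. Values here are arbitrary.
l = 2**252 + 27742317777372353535851937790883648493

x1, x2 = 123456789, 987654321
x = (x1 + x2) % l

correct = pow(x, -1, l)                        # requires the full secret x
naive = (pow(x1, -1, l) + pow(x2, -1, l)) % l  # each party inverts its own share

# Inversion is not linear: shares of x do NOT yield shares of 1/x,
# so the parties cannot just combine locally computed pieces.
assert naive != correct
assert x * correct % l == 1
```

This is why a multi-party inversion protocol (e.g. the Paillier-assisted approach discussed above) is needed: it lets the signers jointly compute the inverse without any single party learning the full secret `x`.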
19:24:38 Yep, and I have a working example in Python of the key image construction too 19:27:08 https://www.github.com/SarangNoether/skunkworks/tree/inverse-mpc/ 19:55:25 @sarang is it not possible to delegate some parts to a zk proving system like bulletproofs? 19:56:19 I'm also guessing that this would preferably be in C++? 19:56:42 Delegate in what way? 19:56:58 The players need to compute the key image 20:18:24 Not entirely sure, I haven't looked over the protocol yet, was thinking there were some parts which could be put into a zero knowledge proof 20:19:58 A trivial example would be to make the key image set a sparse merkle tree and prove in zero knowledge that the key that was just computed is not in the tree 20:22:54 A different example would be to offload the rangeproof portion to a zkproof, but I think bulletproofs may have the best verification time for rangeproofs with a trustless setup 20:25:05 Oh I see what you mean 21:13:44 sarang: Thanks for the good ideas. I'll try to drop by the next dev meeting and share there. 21:13:57 * cargodog frantically takes notes 22:08:38 sarang: Here is another moderate performance optimization that may be useful to anyone implementing. Not sure if it's relevant enough to describe it in the paper, but perhaps it's of interest. 22:08:38 It applies to any proof based on the "One out of Many" construction by Groth & Kohlweiss. IIRC, in a previous work, I observed ~5% performance improvement, but that work was not an apples to apples comparison with Arcturus. 22:08:38 https://github.com/cargodog/arcturus/issues/19 22:09:39 The point is to iteratively compute set coefficients over Gray coded indices, instead of the usual binary number sequence.... it takes a small tweak to generalize it to `n-ary` proofs, but it's worth the effort 22:49:50 Ooh that's interesting... 22:50:04 Thanks cargodog[m] !! 22:50:11 I'll take a look 22:50:32 Any previous work I should cite on this? 23:29:34 Nothing formal.
Just a proof of concept implementation in another project. 23:31:18 It saves roughly `m/2` scalar multiplications per inclusion proof... which can add up quickly for large set sizes 23:57:04 I'm surprised that would lead to 5% improvement... was that in verification or proving?
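The Gray-code trick described above can be sketched concisely for binary digits (n = 2): each set coefficient is a product of `m` per-digit scalars, and walking indices in Gray-code order changes exactly one digit per step, so each coefficient is one multiplication (plus one inverse, which a real implementation would precompute) away from the previous one instead of `m` multiplications. The scalars `f[j][d]` here are made-up toy values, not actual proof data:

```python
# Set coefficient for index i: product over digits j of f[j][digit_j(i)].
l = 2**252 + 27742317777372353535851937790883648493
m = 4                                    # number of digits (n = 2: binary)
f = [[3 + j, 5 + j] for j in range(m)]   # toy per-digit scalars, all nonzero

def gray(i: int) -> int:
    return i ^ (i >> 1)                  # binary-reflected Gray code

def coeff(idx: int) -> int:
    """Naive computation: m field multiplications."""
    out = 1
    for j in range(m):
        out = out * f[j][(idx >> j) & 1] % l
    return out

prev, cur = gray(0), coeff(gray(0))
for i in range(1, 2**m):
    g = gray(i)
    j = (prev ^ g).bit_length() - 1      # exactly one digit flipped per step
    # Divide out the old digit's scalar, multiply in the new digit's scalar.
    cur = cur * pow(f[j][(prev >> j) & 1], -1, l) * f[j][(g >> j) & 1] % l
    assert cur == coeff(g)               # matches the naive m-mul product
    prev = g
```

Generalizing to `n`-ary digits means stepping through a mixed-radix Gray sequence so that consecutive indices still differ in a single digit, which is the "small tweak" mentioned above.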