01:45:21 @sarang - agree, seems fine to update with a citation, link, and the file checksum. 01:47:40 Maybe sign it for good measure 01:50:45 I would be very surprised if someone decided to compromise the archive! 03:03:02 Surprise targets are the best targets 03:06:58 I wouldn't be surprised if an unscrupulous totalitarian government compromised some low-security academic servers to ensnare or dox citizens bypassing firewalls and information control 07:19:40 is there any documentation regarding how bulletproofs were implemented in Monero? I see some code references to the original bulletproofs paper, but aside from that my understanding is there were various implementation choices made. 07:53:13 btw sarang I did take advantage of your ZtM commit regarding updated encrypted amount format, thanks! 07:54:04 ZtM2 is coming along nicely, just finished a narrative rework that I'm very happy with 07:55:00 most of what remains is just bulletproofs and multisig (admittedly leaving the hardest parts for last) 07:56:32 ah and the dastardly new weight/fee system 12:20:30 What bulletproof questions do you have koe? 12:20:49 nothing specific right now, just want to collect a list of documents 12:22:42 Ok 12:27:03 How much detail are you intending? 12:27:20 not sure yet 12:27:24 Getting into the weeds in the math for bulletproofs is probably not useful for the reader 12:27:34 hopefully on a similar level to mlsag etc 12:27:55 but I did get half-way through that Alex document, and it's VERY TEDIOUS 12:28:11 Which? 12:28:29 Adam sry https://github.com/AdamISZ/from0k2bp 12:28:55 I don't think that would really be needed IMO 12:29:55 I gave up on the condensed vector knowledge proof lol. In any case it would be nice to, at minimum, write down the algorithm used.
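The checksum idea mentioned at the top can be sketched with the standard library; the "sign it" step would be a separate detached signature (e.g. via GPG) and is not shown here. This is an illustrative workflow, not anything the archive itself uses:

```python
import hashlib

def file_checksum(data: bytes) -> str:
    """SHA-256 hex digest of a file's bytes, suitable for citing
    alongside a link so readers can verify the archived copy."""
    return hashlib.sha256(data).hexdigest()

# Known SHA-256 test vector for the string "abc"
print(file_checksum(b"abc"))
# ba7816bf8f01cfea414140de5dae2223b00361a396177a9cb410ff61f20015ad
```

In practice you would hash the downloaded PDF and compare the digest against the published one before trusting the copy.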
12:30:26 maybe some comments about how it works 12:31:37 maybe translate Adam's document into a supplement or appendix 12:42:42 The bulletproofs paper does a good job of detailing it 12:43:46 Hooray, Triptych is already on the IACR archive: https://eprint.iacr.org/2020/018 12:43:57 congrats! 13:03:31 sarang: Would you mind posting that on Reddit with a little description / explainer? 13:03:55 Sure, will do it when not on mobile 13:04:20 I didn't expect it to be posted so quickly 13:04:25 All right, thanks 13:04:27 Sometimes takes a couple of days 13:04:29 almost 2020/020 13:04:32 Ikr 13:04:36 I was hoping... 13:16:51 btw one thing to keep in mind for these logarithmic size systems is that in Monero there is still a linear component, namely the ring member offsets, which are at least 1 byte each. Probably not significant on the order of 100-member rings, but with 1000-member rings it becomes a lot. 13:18:24 probably average 2-3 bytes each based on varint representation 13:32:21 I have a hard time imagining the utility of 10^3+ order rings anyway.. 13:36:22 With small ring sizes, you can still have proofs that my incoming monero can't have originated from this particular output. With large ring sizes, this becomes less possible faster. 13:36:42 s/my/your/ 13:36:54 koe: all the analysis that I've posted has noted that size estimates do not include anonymity set representations 13:37:23 However, the use of fixed sets of outputs can reduce this, as well as take advantage of security benefits from binning (see e.g.
the Miller paper) 13:44:43 dEBRUYNE: https://www.reddit.com/r/Monero/comments/elboby/triptych_logarithmicsized_linkable_ring/ 13:46:08 that's good to hear, it's easy to overlook 13:47:05 Omniring in particular mentioned the use of "squeeze-a-crowd"-style efficient representation, but this is tricky to do when you have a more complex output selection distribution (as we do) 13:47:46 But as an example, to have a 100-ring, you might instead use 10 separate 10-element fixed sets, and your representation is now approximately the same size as it is now 13:48:04 interesting 13:48:11 There are a few other subtle points to this, such as verifiably randomizing the sets to avoid malicious packing 13:48:26 but that's straightforward to do 13:58:30 koe: one part of Triptych that I like (and which is shared by RCT3 and one version of Omniring) is that there's no longer a need to hash all the outputs in the anonymity set 13:59:00 For a large anon set, it'd be a significant time sink or require caching of the hashes in advance 13:59:08 yeah I was wondering about that 13:59:26 The new image format uses a particular PRF introduced by (I think) Dodis 13:59:50 This is what complicates multisig, since you have to invert the signing key to generate the linking tag 14:00:07 Even in the form of Omniring that uses the current image format, there's still an inversion in the proof :( 14:02:34 where there's a will there's a way! 14:02:55 Yeah, there's a method to do the inversion that's been worked out 14:03:08 it's not as straightforward as you'd like 14:03:39 Requires some homomorphic public-key stuff 14:06:39 I need to escape before y'all do something crazy!
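The 2-3 byte estimate above comes from the varint (LEB128-style, 7 payload bits per byte) encoding Monero uses for ring member offsets, which are stored as deltas from the previous index. A rough sketch of the size arithmetic (index values here are purely illustrative):

```python
def varint_len(n: int) -> int:
    """Bytes needed for a LEB128-style varint: 7 payload bits per byte."""
    length = 1
    while n >= 128:
        n >>= 7
        length += 1
    return length

def offsets_size(absolute_indices) -> int:
    """Total bytes to encode ring members as deltas from the previous
    index (only the first entry is an absolute index, so deltas stay small)."""
    prev, total = 0, 0
    for idx in sorted(absolute_indices):
        total += varint_len(idx - prev)
        prev = idx
    return total

# 1 byte for deltas < 128, 2 bytes up to ~16k, 3 bytes up to ~2M
assert varint_len(127) == 1 and varint_len(128) == 2 and varint_len(2**14) == 3
```

Under the fixed-set idea, a 100-ring would instead reference roughly 10 set indices, shrinking this linear term by about the bin size.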
14:08:18 Some initial work on inversion-based multisig: https://github.com/SarangNoether/skunkworks/tree/inverse-mpc 14:08:22 (should not be considered secure or production-ready) 14:08:49 It's also markdown-based math, which is not great to read =p 14:09:01 sarang: Can you also add a comment on Reddit what Triptych could mean in the future? :) 14:09:21 "Maybe something; maybe nothing" 14:09:25 but I guess that’s not scientific 14:09:26 lol 14:13:07 done 14:14:05 ty 14:14:43 also added links to other relevant preprints 14:27:42 Link to TeX source for the paper, in case it's useful to anyone: https://github.com/SarangNoether/skunkworks/blob/triptych/paper/iacr.tex 15:30:13 did you make a similar graph for transaction verification? 15:33:36 I did not, since verification complexity was only listed for multiscalar-type operations 15:33:45 Didn't think it would be a totally fair comparison 15:34:16 The table provides that complexity 15:34:52 sgp_ just train your mind to dream in logarithms 15:36:48 sarang koe my mind is trying to turn the table into a graph :) 15:37:21 good work, brainmail it to me when you're done! 15:37:46 Keep in mind as well that the table is only for the signatures/proofs 15:38:00 It's not a full representation of what's needed for something akin to an RCT transaction structure 15:38:22 I'm looking for an informal, dumbed-down, non-scientific, realistic graph of verification time 15:38:48 "this is drunk, outspoken sarang making a graph on a board" quality 15:38:54 You'd need to include a lot of other stuff for a full comparison, like balance computation and other auxiliary information like range proofs 15:39:01 Heh 15:41:12 The material in my sublinear branch includes this stuff 15:42:21 brb making a graph :p 15:43:22 From the sublinear data?
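The inversion discussed above is a modular inverse of the signing key against the curve group order: a Triptych-style linking tag has the form J = (1/x)·U. The multisig difficulty is that no single party knows x in full, hence the homomorphic MPC work; a single-signer sketch of just the scalar step (L below is the standard Ed25519 scalar group order):

```python
# Standard Ed25519 scalar group order
L = 2**252 + 27742317777372353535851937790883648493

def inverted_key_scalar(x: int) -> int:
    """x^{-1} mod L, the scalar used to form a linking tag J = (1/x)*U.
    Python 3.8+ computes modular inverses via pow with exponent -1."""
    return pow(x, -1, L)

x = 123456789
assert (x * inverted_key_scalar(x)) % L == 1  # inverse property
```

In the multisig setting this single pow() call is replaced by a joint protocol over shares of x, which is exactly what makes the new image format harder to support.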
15:43:28 yeah 15:43:46 Keep in mind that the sizes/times scale differently based on transaction in/out structure 15:43:54 so it's really hard to make universal comparisons 15:44:15 yeah fair 15:44:16 I don't want to make informal claims that are interpreted more broadly than they should be 15:44:21 I just need some visualization 15:44:35 the verification complexity that I listed is more realistic IMO 15:44:55 even though less practical since we don't just include a sig/proof in txns on chain 15:44:58 it's pretty opaque 15:45:08 to most eyes 15:55:13 what values should I use for k? 16:00:04 It's not a variable. It's a notation for the complexity of a multiscalar multiplication operation 16:00:18 Depends on the implementation relative to things like hash-to-point ops 16:00:32 Since CLSAG uses a lot of these hashing ops 18:00:15 Boo, just realized the table from the Triptych paper does not include my batching complexity numbers 18:00:21 I'll update and revise on IACR 18:01:13 In fact, I should include batching data for common and separate anonymity sets across the batch 18:01:21 * sarang gets to work 18:01:28 ^ suraeNoether 18:10:29 Lmk when it's done! I didn't review the tables as closely as the proofs :( 18:13:59 Thought I had included the batch versions in the paper, but I must have forgotten 18:14:15 Fortunately it's easy to revise on IACR, and I believe it would appear instantly 18:14:23 and I suppose that's part of the point of preprints 19:15:56 Batching is only used for the txes since last embedded hash when syncing historical data. So it may not be that big a loss. 19:17:24 ? 19:17:39 I mean that I didn't include the data on batching in the table of the paper 19:17:51 It's accountedted for in the full-txn analyses I did 19:18:10 The paper has proof-only data that does not include range proofs, multi-input stuff, etc.
19:18:16 to allow a fairer comparison 19:18:41 but the narrative of the paper says batching is included, which is wrong 19:18:45 I did not mean to make a comment about the paper, sorry if it sounded that way. 19:18:49 ah ok 19:18:51 nvm 20:22:40 New preprint on application of SHA-1 collisions: https://eprint.iacr.org/2020/014 20:31:21 Also: don't forget this week's research meeting will be tomorrow (Wednesday) one hour later than previously (now 18:00 UTC)
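On the earlier question about what values to use for k: it denotes the cost of one multiscalar multiplication, and quoting complexity that way makes sense because algorithms like Pippenger's compute sum(s_i·P_i) in far fewer group operations than n separate scalar multiplications. A toy sketch of the bucket method, using integers mod a prime as a stand-in for curve points (purely illustrative, not real curve arithmetic):

```python
# Toy additive group: integers mod a prime stand in for curve points,
# so "scalar * point" reduces to modular multiplication here.
P = 2**61 - 1

def multiexp_naive(scalars, points):
    """Reference result: n independent scalar multiplications, summed."""
    return sum(s * p for s, p in zip(scalars, points)) % P

def multiexp_pippenger(scalars, points, c=4):
    """Bucketed multiscalar multiplication: process c-bit windows,
    grouping points by window digit, so the whole sum costs far fewer
    group operations than len(points) independent scalar mults."""
    nbits = max(s.bit_length() for s in scalars)
    acc = 0
    for w in reversed(range((nbits + c - 1) // c)):
        acc = (acc << c) % P  # "double" c times in the toy group
        buckets = [0] * (1 << c)
        for s, p in zip(scalars, points):
            digit = (s >> (w * c)) & ((1 << c) - 1)
            if digit:
                buckets[digit] = (buckets[digit] + p) % P
        # running-sum trick: sum_j j*buckets[j] in ~2*2^c additions
        running = partial = 0
        for b in reversed(buckets[1:]):
            running = (running + b) % P
            partial = (partial + running) % P
        acc = (acc + partial) % P
    return acc

assert multiexp_pippenger([7, 13, 255], [11, 17, 23]) == multiexp_naive([7, 13, 255], [11, 17, 23])
```

Batch verification of many proofs leans on the same idea: fold everything into one large multiexp, whose per-term cost shrinks as the batch grows.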