04:22:36 Updated paper on recursive proof composition: https://eprint.iacr.org/2019/1021
15:54:10 What would be the privacy ramifications if we allowed txes to be created without a ring signature (or one member), but only for txes that occurred within the past 10 blocks (our current hard-coded limit that you can't respend within)
15:55:06 Which would of course allow clients to blacklist the now probably spent output and not use it in rings
15:55:25 Provably spent*
16:00:43 of course there are obvious privacy drawbacks for the specific transaction
16:01:20 are there efficiency limitations for searching for and avoiding these specific outputs?
16:02:33 If blacklisting the provably spent output handles the privacy issues, and efficiency isn't a concern, this would potentially allow for chained spends within 20 minutes
16:02:55 I suppose one ramification is that if these quick-spend transactions are removed from the typical decoy spend distribution, it would impact the distribution
16:03:21 decoy selection would need to be adapted for the change in user behavior (again)
16:03:42 Would it be worth it for the upside of quick spends?
16:03:56 possibly, assuming it was clearly communicated
16:04:24 maybe use ringsize - 1 to be slightly more safe (choose exclusively from past 10 blocks)
16:05:21 the main drawback would be if this feature was widely used
16:05:59 Make it a prompt, but only in situations where there is no alternative
16:06:17 Just received from exchange, sending to ATM (for example)
16:06:48 is there a possibility of wallets stupidly only implementing this less-safe send method?
16:06:56 No available outs other than one within 10 blocks: 'Attempt quick spend?
Blah blah reduced privacy, wait X blocks if you want to be safe'
16:07:05 Of course there is
16:07:16 that is a potentially severe issue
16:07:45 It would require users to be using their wallet every 20 minutes to be a problem
16:07:52 open up a less-safe method, some idiots (or a system devised by idiots) will always use it
16:08:20 I don't think it's that much of a concern
16:08:22 oh here's a mitigation
16:09:04 if ringsize is (ringsize - 1) [or whatever we set it to be identifiable], then consensus rule that all decoys must be within the last 10 blocks or so
16:09:20 I thought that was the assumption to begin with
16:09:21 probably with some padding for latency
16:09:26 No padding.
16:09:38 that's the assumption, but it would need to be a consensus rule
16:09:51 Hmmm
16:09:57 Txes become invalid fast if there's any contention.
16:09:58 so if I try to stupidly spend my output this way a month later, the nodes would reject
16:10:17 But maybe that's what you want if you need to meet a timer...
16:10:20 since at least 1 output will be old
16:10:34 we can also require that these transactions have high fees
16:10:41 Does this seem worth looking into?
16:10:54 but miners may disregard
16:11:18 You can force fees with block penalties, but that's also potentially dangerous
16:11:29 maybe use ringsize + 1 to make larger transactions lol
16:12:17 Then you can't do this if there are very few txes in the last 10 blocks.
16:12:36 Or the last 8. Since you want your tx not to become invalid before it has a chance to get mined.
16:13:54 I forget what the reasons for the 10 block thing are, beyond "reorgs make everyone's life harder". Since that one would be (mostly?) fixed by signing for output keys rather than indices.
16:14:18 It was a privacy issue
16:14:31 Because people ended up using txes in mempool iirc
16:14:34 As ring members
16:14:34 I suppose it'd still be vulnerable to signing with an output that gets reorg'd out by a double spend...
16:15:16 Can you expand on that?
Unmined txes can't be used since you don't know their outputs' indices yet...
16:15:37 Or I guess you can guess and spray :P
16:17:06 🤔
16:17:52 They actually did that?
16:19:12 I can't recall now
16:19:29 It's been so long since I last thought about the 10 block window
16:50:18 it was some privacy and reorg combination
16:50:37 transactions can't be confirmed if they include outputs that don't exist on another chain
16:50:51 decoys complicate the process, so the window helps things settle down
16:50:58 (as far as I understand it)
16:51:45 Isthmus worked on some numbers for reorg lengths under "normal" network conditions
16:54:38 needmonero90: my gut feeling is that if this feature were implemented, the nodes would need to check that all outputs of this transaction type are from the last 20 blocks
16:56:35 and we can't overlap the windows, I suspect, so that would mean doubling the length of time before you can 'securely' spend
16:56:45 definitely a tradeoff
16:56:58 no, I am assuming the windows overlap
16:57:12 hm
16:57:18 *ideally* a wallet would spend correctly after block 10
16:57:38 but due to latency, this would only be consensus-imposed after block 20-ish
16:58:57 in some ways, segregating user behavior into these classifications *may* improve privacy
16:59:24 just like I believe separating coinbase outputs will improve user privacy for >95% of users
17:00:35 but in this case I think the privacy benefits are less clear, since we would be permitting behavior that isn't currently allowed. How would it be used? How many people would use it?
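The consensus rule sketched in the discussion above (all ring members of a "quick spend" tx must come from recent blocks, with the wallet-side 10-block window padded to ~20 blocks for latency) can be illustrated roughly as follows. This is a hypothetical sketch, not actual Monero consensus code; the function and parameter names are invented, and the window size is the one floated in the conversation.

```python
# Hypothetical node-side validity check for the proposed "quick spend"
# transaction type: every ring member (real spend and decoys) must have
# been created within WINDOW blocks of the current chain tip.
# All names and the window size are illustrative assumptions.

QUICK_SPEND_WINDOW = 20  # 10-block wallet window plus latency padding

def quick_spend_valid(ring_member_heights, chain_height,
                      window=QUICK_SPEND_WINDOW):
    """Return True if every ring member output is recent enough."""
    return all(chain_height - h < window for h in ring_member_heights)

# At height 110, a ring built entirely from the last ~15 blocks passes;
# a single stale member invalidates the whole transaction.
assert quick_spend_valid([100, 105, 109], 110)
assert not quick_spend_valid([50, 105, 109], 110)
```

This also illustrates the "txes become invalid fast" point: a quick-spend tx that is valid when built can fall out of the window and be rejected a few blocks later if it isn't mined promptly.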
etc
17:03:09 I don't yet know if I want to be a champion for this idea and research, but obviously if the tradeoff is deemed to be acceptable, that would provide a huge UX improvement
17:11:00 Documentation write-up for transaction proofs, intended for Zero to Monero: https://gist.github.com/SarangNoether/99a24506772db5ce25e89500d9317e3e
17:11:47 Accounts for the new InProofV2 and OutProofV2 from https://github.com/monero-project/monero/pull/6329
17:17:57 [keybase] : Hoooooo
17:18:50 Hello suraeNoether surae
17:19:26 When is your Triptych talk?
17:22:18 [keybase] : In about 40
17:22:21 [keybase] : I think
17:22:38 [keybase] : Oops, lunch first, then second talk after that
17:22:55 Neat! Are the talks recorded/posted/streamed?
17:24:08 Also, any last-minute questions or details you'd like clarified for the talk?
17:24:21 (I didn't review any further changes you made to the Overleaf presentation since we talked last)
18:06:05 nope, i'm good. i believe that the talks will be made available online...
18:06:42 allegedly at https://www.fields.utoronto.ca but I don't think they'll be posted today.
18:23:27 I'll be watching closely :)
18:33:19 [keybase] : National archives of Korea appears to have invented a rather convoluted blockchain scheme for official record keeping.
18:33:47 [keybase] : First few talks were on crypto protocols and quantum computing. These two are from librarians, then me again.
18:34:06 [keybase] : It's a curious scheduling choice
18:37:03 [keybase] : Wait: are these streaming??
18:37:32 is the korean blockchain PoW?
18:40:34 [keybase] : Not clear, I am going to ask why not use a DB with a public-facing bulletin board
18:40:49 [keybase] : I would guess it's staked since it's governmental
18:41:37 [keybase] : Seems more like a chain of signatures instead of a blockchain, not sure what a fork means :/
18:54:30 [keybase] : I'm not going to ask my question because he has already gone over time
20:10:41 Initial timing estimates coming in for new protocols... running the numbers now
20:11:16 And the winner is... Triptych!
20:12:09 And the crowd goes wild!
20:12:35 The post-BP chain would take the following number of hours to verify (spend, balance, and range proofs only):
20:12:43 rct3-multi 4.12
20:12:47 rct3-single 4.60
20:12:52 triptych-multi 3.68
20:12:56 triptych-single 3.94
20:13:26 (this is for ring=11, as an initial test)
20:14:11 I need to get additional timing data to compare to estimates for CLSAG/MLSAG
20:17:37 very cool
20:17:50 let's just go all-in on triptych already
20:18:03 I need an excuse to visit that brewery
20:18:36 Interestingly, there are some changes at higher ringsize, where Triptych still wins but the RCT3 variants swap
20:18:54 The same chain region for ring=256 would take about 20 hours to verify
20:19:07 (verification is linear in the ring size)
20:19:23 This assumes per-block batching, BTW
20:19:50 can you do better than per-block?
20:20:02 Next up is to use larger fixed batches, and pull some operation timing data to compare to CLSAG/MLSAG (which use different ops)
20:20:05 ^^ yes
20:20:16 The current default is per-block, so I coded that in
20:20:35 Once I have the added functionality, I'll post the code for the analysis tool
20:20:48 the linear verification time is not ideal :(
20:20:52 but this is still exciting
20:21:15 Yeah, can't do better than that (up to a logarithmic factor from multiexp)
20:21:52 You could get crazier and start factoring in common ring epochs across a batch due to the selection algorithm
20:22:00 sarang: What is the hardware for this kind of timing?
20:22:28 It's a 2.1 GHz Opteron
20:22:51 Same machine as all the timing data I've reported earlier
20:23:41 ok
20:24:23 It's straightforward to run perftests for multiexp and use those timings in the tool, though
20:24:35 at large N, an N-multiexp is quite linear
20:24:43 so the estimates are pretty reasonable
20:24:49 [keybase] : Agreed
20:25:02 Agreed to which part? =p
20:26:37 Anyway, I'll get some better comparison data and run plots over various ring sizes
20:26:51 I'm so excited for this data
20:27:29 Yeah, the tool is finished for all the Triptych and RCT3 variants already... it's just a matter of running it across the whole parameter space of interest
20:27:51 Which IMO is probably N=128 to N=1024
20:28:07 i.e. 1 and 2 orders of magnitude higher than current
20:28:22 I should have the plots ready to go by tomorrow's meeting
20:28:47 [keybase] : Agreed re: going hard on triptych
20:29:01 It doesn't win for size, but it wins for time
20:29:44 Heh, surae I thought you were agreeing to "at large N, an N-multiexp is linear"
20:29:52 I mean, sure
20:30:06 Given size tends towards constant with increasing ring size, I think time is the most important. Even discounting arguments about physics.
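The figures quoted in the discussion (triptych-multi at ~3.68 h for ring size 11 and ~20 h for ring size 256, with per-signature verification linear in ring size) imply a crude linear model for intermediate ring sizes. The following sketch fits hours = fixed + slope * N to those two data points; it is purely illustrative back-of-the-envelope arithmetic, not output of the actual analysis tool.

```python
# Crude linear fit to the two triptych-multi data points quoted above:
# ~3.68 hours at ring size 11 and ~20 hours at ring size 256 for the
# post-BP chain, assuming verification cost linear in ring size.

def fit_linear(n1, t1, n2, t2):
    """Fit hours = fixed + slope * n through two (ring size, hours) points."""
    slope = (t2 - t1) / (n2 - n1)
    fixed = t1 - slope * n1
    return fixed, slope

fixed, slope = fit_linear(11, 3.68, 256, 20.0)

def est_hours(n):
    """Estimated post-BP chain verification time at ring size n."""
    return fixed + slope * n

# Interpolating at ring size 128 (the low end of the parameter space of
# interest) lands a bit over 11 hours under this crude model.
print(round(est_hours(128), 2))
```

Since the fixed term absorbs range proofs and other per-transaction costs that don't scale with ring size, this interpolation is only meaningful between the two fitted points.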
20:30:58 FYI this is the dataset, courtesy of n3ptune and friends: https://github.com/noncesense-research-lab/monero_transaction_io/tree/master/data
20:31:48 Here is the tool (still WIP): https://github.com/SarangNoether/skunkworks/blob/sublinear/estimate.py
20:33:39 Todo: include CLSAG/MLSAG timing estimates
20:33:48 Todo: modify for arbitrary batch sizes
20:35:01 FWIW both Triptych variants are ~12% faster than RCT3-single at N=256
20:35:17 and ~30% faster than RCT3-multi at N=256
20:35:36 RCT3-multi really suffers at large ring size because of its padding requirements
20:35:54 hence the single and multi variants flipping
20:36:29 Anyway, anyone is welcome to play around with the tool and dataset if they like
20:37:01 It doesn't cache results for common transaction structures (for simplicity), so it's pretty slow
20:37:16 but it'll do a full-chain run in under a minute
20:37:36 Rather, a post-BP chain run (for better protocol consistency)
20:38:45 https://media.giphy.com/media/gFExLUk9xb7s5C36e7/giphy.gif
20:40:23 hooray triptych!
20:40:40 lemme make that PR and call it a day
20:41:00 Who knows? Perhaps a future version of Omniring will win
20:41:08 It's still under development
20:41:28 Triptych has the best name for marketing
20:41:35 well, that brings up the concern of the cutting edge... there's always gonna be something in development
20:42:31 What about the mathematical difficulty/complexity of the protocols, are there obvious differences between the protocols?
20:43:40 RCT3 uses a proving system based on Bulletproofs
20:43:53 Triptych uses a proving system based on one by Groth and Kohlweiss
20:44:14 Neither is crazy complex
20:44:28 [keybase] : I feel like triptych is less prone to implementation problems
20:44:33 [keybase] : But that's not quantifiable
20:45:08 [keybase] : It's probably my own familiarity with it, not a comment on the complexity
20:49:10 Bigger batching shaves maybe an hour (out of 20) off full verification time for Triptych
20:49:16 so only about 5% over per-block
20:49:59 [keybase] : That's significant
20:50:08 [keybase] : Since we have an exponential trade-off between time and speed
20:50:11 Yeah, but not as significant as I'd hoped
20:50:30 [keybase] : Time and space*
20:50:54 *spacetime
20:52:03 batching code pushed
20:52:13 in case you're playing along at home
21:03:15 20 hrs or 19 hrs is quite a lot
21:05:17 [keybase] : Wait: sarang are you saying the entire chain as-is would take 19 hours to sync if it had been triptych the whole time?
21:05:54 The post-bulletproofs chain
21:07:18 I've definitely synced full nodes recently in less than 20 hours. Am I comparing the wrong numbers?
21:07:28 Yes
21:07:33 One sec
21:07:36 good :)
21:07:51 sgp_: without checkpoints?
21:08:54 UkoeHB_: probably not
21:09:05 just default run and see
21:10:25 default is with checkpoints
21:10:55 how much faster is it with checkpoints? I thought Bitcoin removed them
21:11:25 significantly faster
21:12:29 lopp recently tested 1 day 2 hours 40 mins with `max-concurrency=12 fast-block-sync=0 prep-blocks-threads=12`
21:16:11 sounds right, AFAIK he used strong hardware
21:17:51 my computers are always limited by the CPU
21:18:42 --block-download-max-size might help, default is... 250 MB IIRC. If you have the RAM, setting it higher would allow early download.
21:20:50 how easy is it to test the available RAM and increase that automatically?
21:21:15 Not easy.
21:21:30 You start getting in the way of the OS caching.
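For reference, the sync settings quoted in the discussion correspond to a monerod invocation along these lines. This is an illustrative config fragment: the flag values are the ones reportedly tested above, the buffer size is an arbitrary example, and whether any of it helps depends on hardware.

```shell
# Reported test configuration (~1 day 2 h 40 min full sync): raise worker
# concurrency, disable fast (checkpointed) block sync so every block is
# fully verified, and use more block-preparation threads.
monerod --max-concurrency=12 --fast-block-sync=0 --prep-blocks-threads=12

# If RAM allows, a larger block download buffer (default ~250 MB, per the
# discussion above) lets the node fetch further ahead of verification.
# The 1 GB value here is an illustrative assumption, not a recommendation.
# monerod --block-download-max-size=1000000000
```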
22:22:06 sgp_ etc: the numbers I gave are estimates about taking the entire existing post-BP chain's input-output distribution and applying it instead to different protocols
22:22:14 Notably, at higher ring sizes
22:25:35 They'll be useful for comparison, but I would not use them as hard-and-fast expectations for actual sync time
22:50:57 [keybase] : Sgp_etc would be a good v2.0 username
22:51:06 [keybase] : So uh anyone in Toronto bored?