01:01:53 Seems fair
16:44:12 A new site for sharing research papers, from JHU: https://acrab.isi.jhu.edu/
18:18:25 sarang or suraeNoether what do you think about, for multisig, removing commit and reveal for marketplace escrow, where every transaction is directed to a different vendor subaddress, with change pointed at buyer subaddresses. One problem rbrunner encountered with OpenBazaar integration is requirements for offline purchases, while with
18:18:26 commit-and-reveal it seems impossible to create a tx without the vendor sending a message to the buyer
18:19:06 a direct message*
18:21:52 ah crap nvm, the buyer has to hide the signature opener from observers anyway, so commit and reveal is unavoidable
18:22:35 I would need a detailed explanation of what you mean before I could possibly judge. C-and-r adds a round of interaction; no matter what, there will have to be interactivity. Maybe I'm not following what you were going for
18:26:38 Ok, while writing out an explanation I may have solved it
18:34:36 cliff notes
18:34:37 vendors publish, for each product, a list of shared secret pub keys with trusted escrow agents, along with the original pre-secret keys, in addition to a commitment to the signature-opening EC point
18:34:43 buyer declares intent to purchase by sending tx details and a signature-opener commitment to the vendor, along with the correct address for the vendor to complete the 2-of-3 wallet with whatever escrow agent the buyer selected; vendor accepts the terms and sends out the product, along with his reveal of the commitment data; buyer receives the product, completes his
18:34:44 part of the signature, and sends the vendor his signature-response scalar along with his reveal of his commitment data; vendor completes the signature and publishes the tx to the blockchain in order to receive funds
18:35:23 workflow does not interrupt the common workflow of a digital marketplace
21:17:24 what are the pros and cons of increasing the max number of tx outputs?
21:42:01 pros: more outputs, fewer txs needed if doing large batches
21:42:05 cons: everything else
21:43:13 everything else?
21:43:54 Pros also include smaller overall size, and better batching speedups.
21:44:23 Quite a corner case currently though. Might not be if we get some kind of coinjoin.
21:45:37 is batching better for small batches rather than large ones?
21:48:08 Better for large.
21:48:36 ah right, pro implies better, duh
21:50:41 con is probably that it's easier to fingerprint an individual tx, since breaking large batches into smaller chunks increases indistinguishability
21:51:00 I think the original worry was someone making a tx with 100k outputs, DoSing verifiers.
21:51:26 The one I remember anyway.
21:52:10 is it really better for batching?
21:52:30 AFAIK it is, though it gets into diminishing returns.
21:52:32 smaller over size depends on distribution of output sizes I would guess
21:52:42 overall*
21:52:48 output amounts*
21:52:55 numbers of outputs*
21:52:58 whatever
21:53:20 though overall smaller I would think
21:53:33 just more range proof wastefulness
21:53:55 The incremental size for range proofs grows quite slowly as the number of outputs goes up. If you have to split into 4 txes, you restart the log from 0.
21:56:36 I'm curious what the verification is like for 4x 16 batched (and non) vs 1x 64
21:56:42 probably been tested somewhere
21:59:03 There are performance tests hidden somewhere in the monero tree.
21:59:32 I'll tell you in... some as yet unknown amount of minutes.
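A rough size model makes the "restart the log from 0" remark concrete: a Monero-style Bulletproof covering m outputs (padded up to a power of 2) carries about 2*ceil(log2(64*m)) + 4 group elements plus 5 scalars, 32 bytes each, so the proof grows logarithmically with the output count and every extra tx pays that base cost again. The sketch below is a back-of-the-envelope estimate of the pre-BP+ proof format, not an authoritative size calculator:

```python
import math

def bp_proof_bytes(outputs: int, bits: int = 64) -> int:
    """Approximate serialized size of one Bulletproof range proof.

    Assumed layout: A, S, T1, T2 plus the L/R vectors
    (2 * ceil(log2(bits * padded_outputs)) group elements) and 5 scalars
    (taux, mu, a, b, t), 32 bytes each; output commitments not counted.
    """
    padded = 1 << math.ceil(math.log2(outputs))    # pad to a power of 2
    lr = 2 * math.ceil(math.log2(bits * padded))   # L and R vectors
    return 32 * (4 + lr + 5)

print(bp_proof_bytes(64))      # one 64-output proof:   ~1056 bytes
print(4 * bp_proof_bytes(16))  # four 16-output proofs: ~3712 bytes
```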
22:04:24 You also need to pad generators to the next power of 2
22:04:37 So for verifiers, 17 == 33
22:04:42 *32
22:05:11 And fees need to account for this, of course
22:08:42 Hrm. Need to patch the source to allow for 64 -_-
22:10:05 Since max inputs is 185, I think it would make sense to have 64 or 128 be max outputs
22:16:27 Also better for batching to have fewer unique generators
22:16:58 Two 32-out proofs share their 2x32 generators
22:17:10 One 64-out proof needs 2x64 unique generators
22:17:27 Plus the others that are per proof, but there are fewer of these with log scaling
22:18:09 I don't agree that max inputs should match
22:18:45 well, it makes sense for MoJoin
22:20:05 185 max inputs / 128 max outputs gives the most flexibility, even though if it's 1 input per output the balance is closer to 100/100
22:20:47 You also need to pad generators to the next power of 2 <= this is what I meant by more range proof wastefulness
22:21:46 mojoin is pretty secondary in consideration for me at this point
22:21:50 does that wastefulness refer to transaction weight or transaction verification speed? or code-side
22:25:03 There's no size difference between powers of 2
22:25:14 Verification does scale within those values tho
22:25:58 so a 64-rp will take longer than 4x 16-rp?
22:26:02 Hence needing fees to scale accordingly
22:26:38 koe: probably... The shared generator benefit becomes more apparent with higher counts
22:26:58 Because of those per-proof elements that can't be shared
22:29:40 https://paste.debian.net/hidden/f6a30587/
22:29:49 Batching enabled.
22:30:21 That's on a laptop with various other VMs running, so take with mucho salt.
22:36:04 Oh, that's just the bulletproofs. I don't have tests for whole txes for this...
22:37:06 Makes sense
22:41:13 if I'm reading this right, it's suggesting 4x 16-rp is 10x faster than 1x 64-rp
22:42:07 What is rp ?
22:42:28 rangeproofs. My mistake, the loop counts are different, more like 2.5x
22:43:37 sounds in range
22:44:46 Do you have proof ?
22:45:12 without batching, looks like 64-rp is 20% faster
22:46:28 under what conditions may range proofs be batched for verification?
22:47:34 It's pretty forgiving. I think all they need to be is bulletproofs.
22:49:12 Do they have to be the same power of 2?
22:49:17 That's only enabled within a block currently. I have a patch to make it batch "free standing" txes but it was not deemed worth it. We could also batch a set of blocks when downloading historical data, in theory.
22:49:21 No.
22:51:05 what if you batched 4x 64-rp and compared with 16x 16-rp?
22:51:43 All BPs can be batched. The extent to which hey share generators depends on size and implementation
22:51:48 *they
22:53:24 Generators not shared are an extra marginal cost in the multiexp
22:54:35 The number of unique generators (excluding per-proof ones) is based on the largest in the batch
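The 16x 16-rp vs 4x 64-rp question can be roughed out by counting the points a batched verifier feeds into its single multiexponentiation: the Gi/Hi generator vectors are shared across the batch and sized by the largest padded proof, while A, S, T1, T2, the L/R vectors, and the output commitments are paid per proof. The sketch below is only an operation-count model under that assumed term layout, not a reading of the Monero verifier, and real cost grows sublinearly with point count (e.g. with Pippenger's algorithm); the measured numbers follow.

```python
import math

def padded(outputs: int) -> int:
    """Pad an output count up to the next power of 2."""
    return 1 << math.ceil(math.log2(outputs))

def batch_multiexp_points(batch: list, bits: int = 64) -> int:
    """Rough point count for one batched Bulletproof verification multiexp.

    Assumes the shared Gi/Hi vectors are sized by the largest padded proof,
    while A, S, T1, T2, the L/R vectors and the V commitments are per proof.
    """
    shared = 2 * bits * max(padded(m) for m in batch) + 2    # Gi, Hi, plus G and H
    per_proof = sum(
        4 + 2 * math.ceil(math.log2(bits * padded(m))) + m   # A,S,T1,T2 + L,R + V
        for m in batch
    )
    return shared + per_proof

print(batch_multiexp_points([64] * 4))   # 4x 64-out proofs:  ~8562 points
print(batch_multiexp_points([16] * 16))  # 16x 16-out proofs: ~2690 points
```

The shared-generator advantage of the smaller proofs shows up in the point counts, though the measured gap below is narrower, since multiexp cost is sublinear in the number of points.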
23:13:17 https://paste.debian.net/hidden/9b061880/
23:15:36 Looks to me like 40% faster for the 16x 16-rp over the 4x 64-rp
23:19:21 implies smaller range proofs are better (in terms of verification speed), assuming transaction volume or the verification procedure is enough to sustain batching, although there are diminishing returns as batch volume rises
23:21:47 another pro of a bigger max tx output count: to disperse a large amount amongst many recipients currently takes a chain of transactions, since first you break the large amount into 16 smaller amounts, then (assuming <=256 recipients) construct a bunch of txs
23:22:11 that would happen less with bigger output sets
23:23:39 it actually means more blockchain bloat to do that chaining, from the intermediate outputs
23:31:16 Isthmus what is the proportion of tx volume with the max output count?
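For the dispersal chain described above, here is a toy count of what fanning out from a single spendable output costs under a per-tx output cap, assuming each tx spends exactly one output and ignoring change and fees (a simplification, not how a real wallet necessarily structures it):

```python
import math

def fanout_cost(recipients: int, max_outputs: int) -> tuple:
    """Return (total txs, intermediate outputs) needed to pay `recipients`
    starting from one spendable output, with at most `max_outputs` outputs
    per tx. Each tx spends exactly one output; change and fees are ignored.
    """
    level_txs = math.ceil(recipients / max_outputs)   # txs that pay recipients
    total_txs = level_txs
    while level_txs > 1:
        # each tx on this level needs one funding output from the level above
        level_txs = math.ceil(level_txs / max_outputs)
        total_txs += level_txs
    # every tx except the root consumes one intermediate output
    return total_txs, total_txs - 1

print(fanout_cost(256, 16))   # (17, 16): one split tx plus 16 payment txs
print(fanout_cost(256, 128))  # (3, 2)
```

The intermediate outputs are exactly the chaining bloat mentioned above; a larger output cap shrinks both counts.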