00:10:20 I think if the host is contacted over I2P or Tor then input/output groupings can be hidden to some extent. There is still risk of MitM
00:10:35 although MitM would be really hard to pull off
00:12:38 basically without IP logs then the protocol as it currently stands can be hosted just fine, assuming MitM is difficult and unlikely
01:32:25 such a service could participate as an output in each hosted join, basically requiring the other participants to pay the service some tiny amount
01:32:40 that way no need to set up accounts or anything
04:08:54 https://github.com/monero-project/research-lab/issues/59
04:23:24 was there discussion about knaccc's refund address scheme?
04:58:29 koe i think the main hurdle was deciding whether any kind of refund address scheme is a good idea. now that it might be almost-free, the question still remains over whether it's useful or a good idea or not
04:59:24 almost free?
05:00:35 well 32 bytes for free from the bulletproof storage, but it needs a few bits extra on top of that too
05:01:09 i can quite easily see that there may be a much stronger case for using that free storage for a general-purpose encrypted memo field
05:03:00 it's 32 bytes for a 2-out tx, so that'll all be free storage in the bulletproof, but if there are >=3 outputs you need to also store in txextra ceil(num_outputs*ceil(log2(num_outputs))/8) bytes
05:03:23 ah ok
05:03:49 I wonder about pruning.. seems a pernicious roadblock to using that free scalar
05:04:24 you could argue that the refund address is only really useful for a short period after the tx is received, so only useful anyway when within the pruning window
05:04:46 but then, there is a good chance i've not thought through all the return address use cases
05:05:16 and by "good", i mean 100% :)
05:05:50 how wide is the pruning window?
05:06:12 are refund addresses needed for payment channels etc?
05:06:52 i'm not sure
05:06:58 to either of your questions
05:14:30 in any case it's an interesting scheme, well designed
05:44:10 heh thanks, yeah it went through more than 10 iterations to get there, with the help of people in mrl to keep bouncing ideas off
11:32:15 I really like the blog post about supply auditability. this term comes up so often and it's not clear at all what this even means
13:52:22 Thanks real_or_random
14:35:13 knaccc: can't safely store it in the bulletproof if the recipient is to be able to access it
14:35:32 the recipient could use that information to brute-force the amount of the output it doesn't control
14:36:24 gingeropolous: some method of effective refund (ideally non-interactive)
15:45:00 anyone know whether @atoc will be around today? wanted to talk xmr-btc atomic swaps with her/him
15:56:45 Backlog shows atoc is around every few days, so I expect it to continue.
15:57:34 zkao[m]: anything of interest to the channel about the swap protocol?
16:00:06 sarang: we put a video up of petri net executions of the protocol. it can be found here: https://open.tube/videos/watch/515a1dc7-978c-44e3-b9fd-42a380a949ca
16:01:22 and i wanted to point out that there's already some initial code for implementing it: https://github.com/h4sh3d/monero-swap-lib
16:01:46 What's used for the preimage proof?
16:01:52 * sarang is looking at the code now
16:02:50 sarang: nothing, it's not done. that's what @atoc should work on
16:02:58 Got it, thanks
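A quick worked example of the storage formula from the 05:03:00 message above; this is only a sketch, and the function name and layout are illustrative rather than taken from any wallet code:

```python
import math

def refund_extra_bytes(num_outputs: int) -> int:
    """Extra tx_extra bytes for the refund scheme, per the formula
    ceil(num_outputs * ceil(log2(num_outputs)) / 8); a 2-out tx needs none,
    since its 32 bytes fit entirely in the free bulletproof scalar."""
    if num_outputs <= 2:
        return 0
    bits_per_output = math.ceil(math.log2(num_outputs))
    return math.ceil(num_outputs * bits_per_output / 8)

for n in (2, 3, 4, 8, 16):
    print(n, refund_extra_bytes(n))  # 2 -> 0, 3 -> 1, 4 -> 1, 8 -> 3, 16 -> 8
```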
17:06:42 I'm very much in favor of return addresses, with a few considerations to prevent privacy concessions:
17:06:56 (most or all of which have been discussed previously)
17:07:00 1) Encrypt them, to prevent dusting.
17:07:04 2) Required in every transaction by consensus, to prevent fingerprinting and not create a new anonymity partition for transaction linking.
17:07:24 3) Sender can put a dummy return address to avoid mandated bidirectionality.
17:08:09 #3 is a freebie, could encrypt 0s, stick in a random number, or encrypt the *recipient's* address for the "return" address
17:33:34 This is a building block for second layer stuff, right?
17:38:03 Interactive refunds require user action
17:39:17 What I mean is, is this just for the case where Alice might want to send money back, or is there something actually useful behind it :)
17:39:44 Or is the sending money back actually common without second layer, because I can't imagine it is.
17:40:10 I kinda remember this being mentioned about second layer, but I thought I'd make sure.
17:40:29 And wasting space just for a corner case seems unwanted.
17:41:03 I'd much rather spend the space on something of general interest.
17:41:40 (and a building block for some second layer would definitely be)
17:43:00 as a general rule, btc doesn't really have return addresses either
17:43:21 so any use case would have to be either a building block that is specific to xmr, or outside of how people generally use btc today
17:59:51 Brief writeup describing Janus mitigation (with simple code example): https://github.com/SarangNoether/skunkworks/tree/janus
18:14:03 The mitigation requires a single fixed-base tx public key per tx private key used in the transaction
18:14:15 So this plays into the "how many pubkeys do we have?" game
18:31:13 sarang can you add a note that subaddress recipients may share a transaction private key, so there should never need to be more than one extra `base' key per transaction
18:31:40 Sure
18:31:58 What cases, if any, exist where multiple tx privkeys are present?
18:32:20 otherwise looks really good, can you make a repo issue recommending the change?
18:33:23 any time at least one subaddress exists (excluding cases: 2-output with non-change to subaddress, 1-subaddress and it's the change output)
18:33:51 currently when there is at least one subaddress, every single recipient gets their own tx pub key
18:34:19 Right, but I mean for private keys (`r` in the notation of the writeup)
18:34:45 Ignoring all the cases of identifying which keys to use, the mitigation is technically per-privkey
18:34:48 "every single recipient gets their own tx pub key" -> "every single recipient gets their own tx private key"
18:36:12 OK, so when you say "may share a transaction private key", this is not done by default
18:36:30 construct_tx_and_get_tx_key() creates all the private keys; that's right
18:36:37 noted
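One way point (1) from the 17:07:00 list could look in practice: mask the return-address bytes with a keystream derived from the sender/recipient shared secret, the same general trick used for encrypted payment IDs. A minimal sketch only; the domain tag and function name are assumptions, not an existing API:

```python
import hashlib

def mask_return_address(data: bytes, shared_secret: bytes) -> bytes:
    """XOR the return-address bytes (up to 64) with a hash-derived keystream.
    Applying the same function again with the same secret decrypts."""
    assert len(data) <= 64
    keystream = hashlib.sha512(b"return_address" + shared_secret).digest()[:len(data)]
    return bytes(a ^ b for a, b in zip(data, keystream))
```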
18:37:13 Would this not reveal whether a single subaddress was used in multiple outputs of the same transaction?
18:37:19 It would yield identical tx pubkeys
18:38:26 ah indeed
18:40:24 If size were not an issue, it'd be great to have all separate pubkeys and "mitigation points"
18:40:27 I am concerned the cost of mitigation will be too high if we need an additional pub key for every output
18:40:34 Right, size is an issue
18:40:58 Of course, there could be a wallet/consensus check for repeated pubkeys that indicate a repeated address
18:43:24 I think there needs to be better agreement on how to handle these cases before making concrete suggestions about Janus mitigation implementation
18:46:13 there may be a way, lemme see
18:46:23 Even having a designated single "mitigation point" with individual pubkeys derived from a single privkey is problematic, since the mitigation point would be identical to any pubkeys for standard addresses
18:46:32 and this would leak the number of subaddresses again!
18:48:26 "Of course, there could be a wallet/consensus check for repeated pubkeys that indicate a repeated address"
18:48:40 Wait, nevermind
18:48:43 * Isthmus shuffles off
18:48:48 That wouldn't apply if there was a single privkey
18:49:00 Since all standard-address outputs would share identical pubkeys
18:49:38 Using separate independent privkeys for all outputs removes the problem, but then introduces the new problem of size bloat
18:49:44 https://justpaste.it/4b8xy
18:49:49 there, will this work?
18:49:51 since you need separate tx pubkeys _and_ separate mitigation points
18:50:10 plz explain
18:51:19 (I get the notation from ZtM, just want to confirm what the resulting values correspond to!)
18:52:29 basically make the tx private key a sum of the base private key and a hash of the base pub key with the output index; it will be different for each index, but can be reconstructed for janus mitigation
18:52:34 using just the one base key
18:52:56 even if the same subaddress recipient repeats
18:54:26 gotta afk, lmk if it works
19:02:59 Popping in just to say that all non-subaddress tx pub keys would be random. Only the shared secret (with the subaddy spend key) would use the constructed private key
19:24:51 Distribution of output count since 1978433 (last update, banning 1OTXs)
19:25:07 https://www.irccloud.com/pastebin/y62MRpeA/
19:26:46 What's up with the 11-spike?
19:29:05 Uh I stopped paying for irccloud but apparently my account still works ??
19:29:27 There's a free version that only keeps you logged in for something like 2 days
19:29:33 Oh
19:29:40 Maybe it reverted to that
19:30:09 It shows you just joined the channel before writing
19:30:20 Guess I'll suck it up for the $5 😭
19:31:20 https://usercontent.irccloud-cdn.com/file/oSealHXL/image.png
19:31:36 what an odd spike
19:32:07 The distribution between 3 and 10 was a bit unexpected to me
19:32:12 Pools I'd suspect.
19:32:21 Probably
19:32:32 I'd bet that most of the >2 is pool activity
19:33:01 https://supportxmr.com/ has a list of all our payout counts, it's +1 output of course due to the pool being its own payee
19:33:30 We send bunches of 16-output txns every hour, then it's weird straggler counts for whoever we can't jam into the main groupings.
19:34:39 10 outputs I think is the old pool with its hard limit on the number of outputs from when they were larger, and you couldn't jam as many outputs into a single txn.
19:35:57 That makes sense
19:36:46 Though straggler transactions would have counts of (# of users mod 16), which would be evenly distributed across the range. So it doesn't totally explain the 'decay'-ish trend
19:37:11 No reason for even distribution.
19:38:05 Assuming that it's fully random, sure, but pools have a sort of interesting thing, where there's a small number of users that get paid out highly regularly, then it's one- or two-offs from there.
19:38:32 Aw geez at some point I need to take 2 months off and deep dive into pool mechanics
19:38:39 Just ping me and ask.
19:38:44 It's probably way faster. :P
19:38:52 Always appreciate your input and insights :- )
19:38:58 I'd also suggest raiding SXMR's API
19:39:13 Oooh I'll check that out
19:39:17 * Isthmus runs off to acquire a smoothie
19:39:44 There /used/ to be network docs, but the provider I was using went away. However, there's 144k payments in our DB, with payee counts you can use to mine data about "How do pools pay out" :)
19:41:51 Ok shouldn't lose pms now
20:14:22 Snipa what do you think about using a coinjoin-like service (for now named TxTangle) to collaborate with other pool(s) for the purpose of hiding which pool your miners belong to?
20:29:51 Also, is there anyone who understands networks and privacy networks who can give advice about TxTangle? I don't know the design limitations around anonymous communication
20:31:09 vtnerd has done a lot of research in this area
20:35:43 UkoeHB_ - I don't know why we'd bother honestly. Functionally, you'd have to know an address to confirm if a miner's mining at a pool or not in the first place.
20:46:27 UkoeHB_: is this what you intended with your idea? https://gist.github.com/SarangNoether/e41a760f5558cb05357d1fd6134cca07
20:46:48 (ignore all the revision history; it was me trying to remember how to do nested lists in Markdown)
20:47:30 The way I wrote it, the output and key generation is essentially identical between subaddresses and standard addresses
20:47:32 uses a single Janus key
20:47:35 etc.
20:48:51 Also: "The Janus Key" sounds like the title of a mystery novel
20:53:11 Snipa heuristics can be used to track down which pool a miner gets paid from. I expect it's fairly trivial. Say a miner deposits his earnings into an exchange without any rigorous churning. Bam, easily traced to his pool
20:53:11 So with this method, the Janus key is specifically identified as such in the tx data, and each output pubkey can be safely linked to its indexed public key
20:53:38 The cost is an indexed pubkey for all outputs, and one additional Janus key
20:53:56 (should be mandatory IMO to avoid obvious heuristics!)
20:55:21 UkoeHB_ - I suppose that's true enough, frankly, if you're going right to an exchange, I don't think you worry too much about privacy. :)
21:01:07 sarang using enforced sorted TLV it might be easy to start requiring things on the protocol level, like a certain size of the extra_pub_keys field
21:02:12 Does the gist make sense and capture the idea of what you were also thinking?
21:02:41 actually non-subaddress outputs can't use the indexed version, it has to be random, since observers can trivially test it
21:02:49 otherwise yup!
21:03:56 Isthmus: besides the 11 spike, that's what I was expecting
21:04:23 Ah good point UkoeHB_
21:04:42 I had used the index method mainly for the simplicity of consistency
21:04:55 and the equality check would be slightly cheaper without subtracting: check if equal rather than subtract to zero
21:05:02 but the proportion of 11 and 16-out transactions, mostly related to pools, makes me think that the impact of public pool data can be quite significant....
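For reference, the per-output key derivation UkoeHB_ describes at 18:52:29 (and that the gist above builds on) reduces to simple scalar arithmetic. A minimal sketch, with the point operations (R = rG) left out and the names and hash domain chosen only for illustration:

```python
import hashlib

ED25519_ORDER = 2**252 + 27742317777372353535851937790883648493

def hash_to_scalar(*chunks: bytes) -> int:
    return int.from_bytes(hashlib.sha512(b"".join(chunks)).digest(), "little") % ED25519_ORDER

def indexed_tx_privkey(r_base: int, base_pubkey: bytes, output_index: int) -> int:
    """Per-output tx private key r_t = r_base + H(R_base, t) mod l: distinct for
    every output index, yet all of them can be rebuilt from the single base key,
    which is the property needed for Janus mitigation."""
    return (r_base + hash_to_scalar(base_pubkey, output_index.to_bytes(4, "little"))) % ED25519_ORDER
```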
21:05:39 UkoeHB_: we can do both operations cheaply, and they only apply to outputs controlled by the wallet anyway
21:05:47 the difference would be trivial
21:05:59 ok
21:06:11 But you are technically correct (the best kind of correct)
21:06:29 i'm thinking of the case when someone owns a bunch of outputs
21:08:32 If there are enough of them, you could do a delayed batched check using multiscalar multiplication
21:08:48 there's a factor of `lg(N)` savings for large `N`
21:09:42 and in that case you use the zero-sum equalities
21:10:05 if tx pub keys are sorted like I want, it will require testing every other pub key in the extra field
21:10:13 randomly sorted*
21:10:35 and will be even more tests if the viewspent idea is implemented, or return addresses, or or or
21:12:01 updated gist
21:12:12 What's the advantage to randomizing?
21:12:26 versus including them directly with the associated output pubkey
21:14:19 with TxTangle the only way to janus mitigate is using the change tx pub key as the janus base key
21:14:34 or any other non-subaddress output from your subset of outputs
21:15:09 but if the base key isn't at index 0 in the tx pub key list, then the recipient knows it's a TxTangle transaction
21:15:14 It would be nice to avoid testing against them all
21:18:36 In fact, it would be interesting to know how much scanning time is spent doing all these tests
21:18:53 yeah, I just don't see any way around it for TT; plus you can't enforce 1:1 correlation with outputs, so the parser has to scan everything anyway in case someone implemented it weirdly
21:19:01 so better to randomize as the standard
21:19:27 Why can't you enforce 1:1?
21:19:43 If you require a separate indexed pubkey for every output
21:19:52 (plus separate Janus key)
21:19:54 well it's not verifiable
21:20:13 unless it's removed from the extra field
21:20:38 It's not verifiable now that the tx pubkey actually corresponds to anything in the output either
21:21:17 it's only on a per-owned-output basis, so if you own 1000 outputs it will only correspond with a few thousand blocks or less of normal scanning time, probably
21:21:19 It can be random, if your client is wonky or you feel like watching the world burn a little =p
21:21:43 I don't really like the idea of forcing extra scan time just to accommodate clients doing silly things
21:21:49 it's a mild form of DoS
21:22:03 it's 1% or less scanning time, and usually a lot less unless you own TONS of outputs
21:22:16 1% or less of the whole chain anyway
21:22:52 So you prefer lexicographic ordering for output pubkeys and indexed pubkeys?
21:23:00 (using the naming I had in the gist)
21:23:11 and then a separately-flagged Janus key
21:23:26 or just randomized, shuffled; currently they are implemented 1:1 with output index
21:24:30 for implementers it's easier to scan all pub keys instead of keeping track of the rules around which pub key is what
21:24:37 marginally easier
21:24:40 Can't verify a shuffle
21:24:43 Can verify lexicographic
21:24:51 Less room for a bad implementation getting propagated
21:24:59 good point
21:25:20 no need to flag the janus key, just mix it in with the other pub keys
21:25:31 I'd rather have lexicographic for one of the sets (e.g. output pubkeys) and then a matched indexed key, but the next best thing would be lexicographic on both sets, IMO
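The "can verify lexicographic, can't verify a shuffle" point at 21:24:40 comes down to a byte-wise comparison that any wallet or consensus rule can perform on the serialized keys. A trivial sketch:

```python
def is_lexicographically_sorted(pubkeys) -> bool:
    """True if the serialized pubkeys are in non-decreasing byte order.
    Use strict '<' instead of '<=' if repeated keys should also be rejected."""
    return all(a <= b for a, b in zip(pubkeys, pubkeys[1:]))
```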
21:25:31 that's the original point
21:26:02 no need to care about sorting if the janus key is identifiable, since the point of sorting is to aid TxTangle janus mitigation
21:26:14 For that use case, sure, I guess
21:27:42 Is your idea for that TT use case limited to pool operations?
21:27:51 (haven't read all the scrollback)
21:27:53 no, just one application
22:13:53 FYI will be away tomorrow (Friday)
22:43:08 sarang are you planning to make a repo issue describing Janus?
22:43:43 It's well known
22:44:10 I don't want to formally recommend a particular fix until we have a better grasp on what to do more generally about the handling of tx pubkeys
22:45:30 ok I can make a post about enforced sorted TLV, which is the first step in that direction; waiting on decisions about TxTangle for janus might be overkill, since TT is such a huge project (if even doable in the final analysis)
22:46:07 Even so, including a Janus key would require a decision about how to store it, handle it, etc.
22:46:44 But I agree that moving discussion forward on tx_extra handling is a good idea
22:49:42 hopefully once I work out TT's networking kinks and nail down other minor iterations, it can move forward as a more formal proposal that can be taken into account for other decisions like tx pub keys; it may require a very focused meeting exactly on that topic, since there are also viewspent and return address proposals that involve tx pub keys
22:50:27 Yep, a lot of different protocol decisions...
22:50:54 😈
22:52:14 And then there are ring signatures to consider for future protocol upgrades too...
22:52:32 Or rather, ring signatures / spend proofs
22:54:04 ok I will put some energy into organizing a focused meeting, in 1-3 weeks, since development on any of these things should start within the next couple of months to be on the same track as CLSAG
22:54:45 Wait, what do you hope to have deployed concurrently with CLSAG?
22:55:50 would be nice (from my pov) to have at least one other thing out of: janus mitigation, enforced sorted TLV, viewspent
22:56:29 IMO having mandatory Janus mitigation without a more robust handling of subaddress keys seems like a bit of a waste
22:56:46 robust handling?
22:57:22 e.g. mitigating the heuristic with tx pubkey counts etc
22:57:35 (which could be part of Janus mitigation, I know)
22:57:54 Whereas CLSAG is neatly modular
22:58:00 and basically done :)
22:58:29 pubkey counts? aside from the bug with the extra key, it's two clear categories
22:59:43 if there are at least two non-change outputs, and at least one of them is a subaddress recipient, then #outputs == #keys (+1 due to bug); in all other cases 1 tx pub key
23:00:22 Or, as Isthmus would say, "anonymity pools"
23:00:47 do you mean deciding if all tx should have #outputs == #keys?
23:00:52 aye
23:01:02 I'll put it on the list for discussion
23:03:36 A takeover of the research lab!
23:03:37 =p
23:04:06 😇
23:05:19 -____-
23:09:07 "Transaction keys and you" lol what is this cheesy title :p
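The current tx pubkey count behavior described at 22:59:43, written out as a predicate (descriptive only; the names are not taken from the wallet code):

```python
def expected_tx_pubkey_count(num_outputs: int,
                             num_nonchange_outputs: int,
                             nonchange_subaddress_present: bool) -> int:
    """#keys == #outputs (plus one extra key due to the known bug) when there
    are at least two non-change outputs and at least one goes to a subaddress;
    otherwise a single tx pubkey."""
    if num_nonchange_outputs >= 2 and nonchange_subaddress_present:
        return num_outputs + 1
    return 1
```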
23:33:13 UkoeHB_: in a world where Triptych is used for ring signatures, a view-spent key could be included directly within the proof at no additional space cost
23:33:38 that's great news
23:34:18 Provided the sender uses a PRNG seed that they (and only they) can recover, they could recover the key (or other arbitrary data); it would also leak the signing index via a brute-force recovery
23:34:29 (which makes it unsuitable for hiding data intended for other parties to recover)
23:34:50 is it a prunable part of the proof?
23:34:51 The recovery is done during the signature verification process
23:35:14 It's included as an offset to one of two proof elements that are necessary for signature verification
23:36:46 And because there are two such proof elements, there's still 32 bytes open
23:36:57 I'll note this on the github issue for ya
23:39:21 https://github.com/monero-project/research-lab/issues/58#issuecomment-580513139
23:40:51 Seems like a perfect use for this functionality
23:41:30 and it's not possible for an observer to determine if such a key was included at all, since the proof elements are still uniformly distributed and verification is not affected
23:41:44 sweet!
23:42:04 Therefore, it's no additional space cost, minimal computational overhead (a couple of hash-to-fields), and no heuristics
23:42:28 RCT3 likely could function in this way too, FWIW
23:44:52 Here's the recovery process: https://github.com/SarangNoether/skunkworks/blob/triptych/triptych-single/triptych.py#L265
23:46:00 Those lines are ignored for any signature where the verifier does not suspect auxiliary data is present
23:46:48 The seed would be a hashed combination of wallet key, transaction information, etc. such that it's unique to the signature and known only to the signer
23:47:03 (but it can be anything that you can throw into a hash function)
23:53:43 https://github.com/monero-project/research-lab/issues/61
23:55:54 well just hash the view key, since a viewer will know the signing index anyway (or am I speaking out my ass, cause idk anything about those protocols!)
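A scalar-level sketch of the mask-and-recover step sarang describes above (folding auxiliary data such as a view-spent key into a proof scalar as a seed-derived offset). Where this actually sits inside the Triptych proof is in the triptych.py code linked at 23:44:52; the names and hash domain here are illustrative assumptions only:

```python
import hashlib

ED25519_ORDER = 2**252 + 27742317777372353535851937790883648493

def hash_to_scalar(*chunks: bytes) -> int:
    return int.from_bytes(hashlib.sha512(b"".join(chunks)).digest(), "little") % ED25519_ORDER

def embed_aux(aux: int, seed: bytes) -> int:
    """Used in place of a scalar that would otherwise be purely random; without
    the seed it remains indistinguishable from uniform, so observers learn nothing."""
    return (aux + hash_to_scalar(b"aux_mask", seed)) % ED25519_ORDER

def recover_aux(published: int, seed: bytes) -> int:
    """Only the signer can regenerate the seed and strip the mask to get aux back."""
    return (published - hash_to_scalar(b"aux_mask", seed)) % ED25519_ORDER
```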