13:31:12 https://twitter.com/HavenXHV/status/1273272339938107394
13:31:22 Was posted here before I think, but figured it wouldn't hurt to post again
13:31:25 May inspire some ideas
13:35:56 Thanks for sharing that :) Pretty cool stuff they're doing
13:36:14 My biggest concern is distinguishability of transactions causing some issues, and the whole Chainlink/oracle portion
13:36:26 But cool to see their idea come to fruition on a Monero-chain
14:03:42 Interesting application of CLSAG, for sure
14:03:51 We mentioned exchange rates in the original version of the preprint
14:04:23 Haven was the project that had contacted me about that aspect of CLSAG math
14:05:23 I'll have to take a look at the code they link to see if that's what they have finished implementing, or if it's something else
14:18:12 fort3hlulz: if they're using the CLSAG-based exchange method invented in the original preprint, what non-uniformity are they introducing?
14:18:47 It's still visible if a transaction is merely XHV, or if it's an exchange transaction
14:18:58 The whole point of our construction is that amounts are still hidden during the exchange
14:19:01 so wouldn't you end up with two "pools" of anonymity/TX type?
14:19:03 Yes for sure
14:19:25 Amounts are hidden, but the transactions still stand out as exchange TXs
14:19:27 It's not clear to me if this is in fact the method they are using
14:19:32 No, not necessarily
14:19:48 One method, the one we built in CLSAG, uses two separate commitments within each output
14:20:08 and whether or not a transaction actually "moves value" is hidden
14:20:28 https://github.com/haven-protocol-org/haven-offshore
14:20:31 That should be the new code
14:20:41 It's worth noting that this part of the preprint remains entirely unreviewed; we removed it from later drafts in order to focus the preprint on the basic security model
14:20:53 But again, not clear if this is the method they're using
14:21:19 The current CLSAG audit won't be reviewing that application
14:23:08 https://explorer.stagenet.havenprotocol.org/ if you want to view the transactions
14:23:21 The stagenet is the current place where exchanges are happening
14:23:36 ok
14:23:39 It's blocked on my work network so I can't dig into it right now lol
14:23:55 I'll take a look at the code and see if I can determine what method they decided to use
14:24:15 I'm just speculating here based on some questions their team had on how the CLSAG math/code worked
14:24:33 sweet, obviously no pressure but I'm always curious for your thoughts :)
14:24:54 On the CLSAG-exchange method? I'm a coauthor on its preprint, so I think it's pretty nifty =p
14:25:20 yeah :D
14:25:30 But AFAIK nobody has/had yet implemented it outside of proof-of-concept stuff
14:25:49 If it's the method they used, I hope they seek external review
14:26:20 I had advised this when they asked, since the method could be flawed
14:26:35 We listed it as a side application, but did not study it in detail
14:34:45 Looks like they do have CLSAG functions in the code
14:35:04 and they look to be essentially unchanged from the Monero code that's currently under review
14:35:17 Along with some extra functionality to handle multiple confidential asset types
14:35:57 Hmm, separate range proofs though, if I'm reading it correctly
14:36:16 If they can establish a common range requirement, they could aggregate those and save a ton of space
14:36:35 probably at the expense of a bit more complex logic
14:36:45 I wonder if/how they adjust the fee structure to account for this
14:42:42 sarang: Would it be worthwhile to spin this part into a new paper that can be reviewed separately?
14:43:34 It certainly could
14:43:47 A full treatment would require a new security model, which would itself be interesting
14:44:22 Since now you have an added goal of ensuring proper soundness of value across the asset commitments
14:45:08 There's always the question of how to set the exchange rate, which this project appears to be doing via some kind of pricing oracle
14:45:39 That rate can't inherently be handled by the cryptography; it has to come from somewhere, whether fixed in the protocol or via an external source (and that would be outside the security model)
15:41:03 Yes, it basically does not solve the oracle problem
16:03:04 they are pulling pricing from https://feeds.chain.link
16:03:45 In what way?
16:03:56 Is that a single-source point of failure?
16:04:11 exchange rates
16:04:52 not sure how they call it a "decentralized" oracle service
16:05:27 No, I mean suppose that service were compromised
16:05:34 Is it the only source of rate data?
16:05:38 https://oracle.havenprotocol.org
16:05:52 it seems so
16:06:18 I see
16:06:47 I further wonder how the network verifies that the rates of previous transactions were correct
16:06:58 Does the oracle need to be available for historical data forever?
16:06:58 https://medium.com/@havencurrency/haven-launching-xusd-on-july-20th-856b04c62065
16:07:06 "We plan to add additional decentralized pricing oracles in the future"
16:07:14 Or do verifiers assume that accepted transactions used a valid oracle?
16:07:43 I know very little about the use of price oracles in practice
16:08:29 I have so many questions
16:08:34 it is a single point of failure
16:08:44 I mean for past transactions
16:08:52 Suppose in a year, I want to spin up a Haven node
16:08:58 My client needs to verify transactions
16:09:04 It sees that transactions have an exchange rate
16:09:18 How does it know whether or not that rate was valid at that time?
16:09:26 Does it need to query the oracle for historical data?
16:09:40 Or does it assume that the transaction is deep enough into the chain that it "must be" correct?
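Stepping back to the construction mentioned at 14:19:48 (each output carrying a separate commitment per asset type, so whether value of a given asset actually moves is hidden) and the multi-asset handling noted at 14:35:17: the following is a toy sketch of the per-asset balance idea only, under made-up parameters. It is not the CLSAG preprint's exchange construction and not Haven's code, and a real system would also attach a range proof to every commitment (the separate range proofs noted at 14:35:57).

```python
# Toy per-asset Pedersen-style commitments (illustration only; NOT the CLSAG
# preprint construction and NOT Haven's code). All parameters are placeholders.
import secrets

P = 2**127 - 1                       # toy prime modulus
G = 3                                # toy blinding generator
ASSET_GEN = {"XHV": 5, "xUSD": 7}    # toy per-asset value generators

def commit(asset, value, blind):
    """Commitment to `value` of `asset`: H_asset^value * G^blind (mod P)."""
    return pow(ASSET_GEN[asset], value, P) * pow(G, blind, P) % P

# One input holding 100 XHV (and 0 xUSD), split into two outputs.
# Every output commits to every asset; zero-valued commitments look like any other.
in_amounts  = {"XHV": 100, "xUSD": 0}
out_amounts = [{"XHV": 60, "xUSD": 0}, {"XHV": 40, "xUSD": 0}]

for asset in ASSET_GEN:
    r_in = secrets.randbelow(P - 1)
    r_0  = secrets.randbelow(P - 1)
    r_1  = (r_in - r_0) % (P - 1)    # choose blinds so they cancel per asset

    c_in  = commit(asset, in_amounts[asset], r_in)
    c_out = [commit(asset, out_amounts[0][asset], r_0),
             commit(asset, out_amounts[1][asset], r_1)]

    # Homomorphic per-asset balance check: the outputs multiply up to the input.
    assert c_in == (c_out[0] * c_out[1]) % P, f"{asset} does not balance"

print("per-asset balances verified")
```

What this arithmetic cannot supply is the exchange rate that ties one asset's commitments to another's, which is exactly the oracle question raised here.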
16:10:19 To be fair, Monero clients make assumptions relating to old range proofs (they aren't checked by default, but _absolutely_ can be on request)
16:10:49 If you need historical data and the oracle doesn't do this for whatever reason, then you have to trust the chain alone and can't verify externally
16:11:04 Maybe this is considered an acceptable risk; I dunno
16:13:59 https://github.com/haven-protocol-org/haven-offshore/blob/32d8d21b52866c7b766832bcaae2ea64d73c518b/patches/src/cryptonote_core/blockchain.cpp.patch#L683
16:14:58 o_0
16:15:06 looks very hacky
16:15:31 Looks like it pulls historical price data directly from them: https://github.com/haven-protocol-org/haven-offshore/blob/32d8d21b52866c7b766832bcaae2ea64d73c518b/patches/src/cryptonote_core/blockchain.cpp.patch#L696
16:15:36 That seems very risky
16:15:41 depending on their trust model
16:16:15 Also looks to ignore this unless a "full mode" is enabled: https://github.com/haven-protocol-org/haven-offshore/blob/32d8d21b52866c7b766832bcaae2ea64d73c518b/patches/src/cryptonote_core/blockchain.cpp.patch#L711
16:16:35 Which isn't necessarily a big risk on its own
16:16:52 This is basically what Monero does for pre-bulletproofs range verification
16:17:12 it doesn't verify historical price data
16:17:17 ?
16:17:19 it just assumes it was correct at the time
16:17:21 Did I misread?
16:17:25 "why would a miner lie to us"
16:17:35 oh wait, you're correct
16:17:40 sorry, I'm reading the backlog down
16:17:55 so then it's still a SPOF
16:17:55 My understanding is that "full mode" uses the historical oracle
16:18:03 Historical Oracle would be a great band name
16:18:41 * sarang goes to start a band
16:18:58 lol
16:18:59 yeah
16:19:00 or a Dr. Seuss book
16:19:06 * fluffypony goes to write a book
16:20:34 will fund
16:22:12 Anyway, dEBRUYNE asked if it's worth writing a paper that elaborates the exchange idea that suraeNoether and RandomRun and I included in our early CLSAG drafts... the reason we didn't do that already is because it assumes some kind of exchange data that we assumed would not be acceptable for the Monero use case
16:22:24 I am certainly interested from an academic perspective
16:23:09 Any results of the CLSAG audit should _not_ be taken to mean any kind of review of this exchange idea, which is not in the most recent preprint version and is not being audited in any way
16:23:20 We don't make any claims about the security of the exchange idea
16:23:30 It could be safe; it could also be very flawed
16:27:16 It'd make a cool separate preprint :)
16:27:42 I have an idea for a security model that I suspect would be easy to build into security proofs with Triptych
16:28:04 more complex for CLSAG (which doesn't share the same properties that Triptych does as a zkp system)
16:29:14 Can't the exchange rate be something like... Alice says <= x, Bob says >= y, the protocol accepts any compatible constraint and uses (x+y)/2? No need for an oracle then, it's by mutual agreement.
16:30:39 (It does require x and y to be public, but a public rate oracle is also public)
16:30:45 Protocol can do whatever it wants... but Alice and Bob could collude and build whatever they like, which seems like a bad idea
16:31:06 Why would it be a bad idea?
16:31:50 I *assume* this is about an exchange protocol. We *want* Alice and Bob to collude, no?
16:33:03 decentralized oracles don't seem to solve the problem of decentralized prices to me. They seem to simply change the trust from one person to another, which in many cases seems unnecessary to me, or isn't really an improvement. But "decentralized" prices are sexy, I guess
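For concreteness, the mutual-agreement idea at 16:29:14 reduces to a one-line rule. A minimal sketch (names and types are illustrative), which deliberately does nothing about the collusion objection at 16:30:45:

```python
from fractions import Fraction

def agreed_rate(alice_max: Fraction, bob_min: Fraction):
    """Alice accepts any rate <= alice_max, Bob accepts any rate >= bob_min.
    If the constraints are compatible, settle on the midpoint; otherwise no deal.
    Nothing here stops Alice and Bob from jointly picking an arbitrary rate."""
    if bob_min > alice_max:
        return None                       # incompatible constraints, no exchange
    return (alice_max + bob_min) / 2      # the (x + y) / 2 from the discussion

print(agreed_rate(Fraction(12), Fraction(10)))   # 11
print(agreed_rate(Fraction(9),  Fraction(10)))   # None
```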
16:40:10 the way I think about decentralised oracles is that I'd maybe trust them if it was like 50+ academic institutions / non-profits around the world
16:40:45 but most of the decentralised oracle experiments are based on "we'll pay you not to lie", which seems more like "we'll pay you not to lie unless a better offer comes along"
16:41:58 It feels similar to the idea of "trusted setup"
16:42:06 where you rely on some level of non-collusion
16:42:09 and honesty
16:42:58 yes
16:42:59 which might be fine for coffee-grade transactions, I dunno
16:43:38 All comes down to your trust/risk model
17:40:35 is the TL;DR that the Haven stablecoin is a pipe dream?
17:41:12 IMO it's all relative to your risk model
17:41:21 If you're cool with their idea of a price oracle, it may be suitable for you
17:41:25 If you are not, then it may not be
17:42:02 Same with things relating to trusted setups... if you're willing to offload that risk, it may be ok for you
17:42:13 Same with things relating to supply auditing
17:42:14 etc.
17:42:20 All designs imply risk
17:44:23 https://twitter.com/JEhrenhofer/status/1273672780660293632
17:46:30 "enter Sarang Noether" sounds so badass
17:49:07 I had also hypothesized that coinbase spend-age distribution patterns would differ from non-coinbase
17:50:13 Also worth noting that there's no a priori expectation (AFAIK) that suggests the spend distributions _should_ be gamma distributions
17:50:19 they just happen to agree
17:55:23 Has anyone investigated the verification hit for doing a coinbase-only ring check during sync?
17:56:13 sgp_: your comments here still valid? https://github.com/wownero/wownero/issues/101
17:57:15 The current selection algorithm no longer selects coinbase outputs "too often" relative to other outputs as it used to
17:57:28 The selection algorithm takes block density into account
17:58:26 It was addressed after some observations that the Miller method was non-optimal as originally presented
18:00:45 Well, "too often" imo = "one or more"
18:01:13 ?
18:01:23 I only mean in terms of block density
18:01:26 You can probably ignore the comment on 12/27
18:01:50 What is a coinbase-only ring check?
18:02:21 A hypothetical consensus rule that a ring may contain either (a) only coinbase outputs; or (b) only non-coinbase outputs
18:02:23 wowario[m]: the more elegant consensus rule is simply "rings can either be all-coinbase or all-non-coinbase"
18:02:39 I've expressed my concern previously about this
18:03:17 When spending from a wallet, if spending coinbase, select all coinbase for that ring. If not spending coinbase, then don't select any coinbase for that ring
18:03:36 The consensus rule enforces this wallet behavior
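The hypothetical rule at 18:02:21 reduces to a uniformity predicate over the ring members, and the wallet-side selection just filters candidates by the same flag. A minimal sketch with illustrative names (not actual Monero or Wownero code):

```python
from dataclasses import dataclass

@dataclass
class Output:
    index: int
    is_coinbase: bool

def ring_is_uniform(ring):
    """Hypothetical consensus check: a ring is valid only if its members are
    all coinbase outputs or all non-coinbase outputs (never a mix)."""
    return len({member.is_coinbase for member in ring}) <= 1

def eligible_decoys(candidates, spending_coinbase):
    """Wallet-side counterpart: when spending a coinbase output, draw decoys only
    from coinbase outputs (and vice versa), so the ring passes the check above."""
    return [o for o in candidates if o.is_coinbase == spending_coinbase]

# Example: a mixed ring fails, a uniform ring passes.
mixed   = [Output(1, True), Output(2, False), Output(3, False)]
uniform = [Output(4, False), Output(5, False), Output(6, False)]
assert not ring_is_uniform(mixed)
assert ring_is_uniform(uniform)
```

The per-ring cost of such a check during sync is a flag lookup per member, which is what the "extra byte per output" database discussion just below is about.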
18:08:26 If coinbase-only rings will stick out anyway, perhaps their ring size could be made higher
18:09:43 The marginal benefit of this is unknown
18:10:16 AFAIK the extent of plausible deniability has not been formally tested
18:13:04 scoobybejesus: I've definitely thought about this, but it's tough when most pool hashrate doesn't care and makes info public anyways
18:14:21 Such a test could be made to have negligible extra runtime, if we change the db to store an extra byte per output.
18:14:48 It used to be so visible you'd need ringsize 45 to protect the real input with at least 1 non-discernible decoy 99% of the time
18:15:07 It currently stores 2 keys and 2 64-bit values, so... 80 bytes, for comparison.
18:15:31 We could also cheat and store height on 63 bits. No extra size, but some juggling in various places.
18:16:52 adding 1 byte would be awkward, breaking data alignment in the DB
18:18:30 Haven... how in the world would an exchange rate develop if the amount of coins you receive is pegged to the exchange rate?! It's circular...
18:24:14 hyc: the view tag idea would add 1 byte per tx (https://github.com/monero-project/research-lab/issues/73); does that seem like a problem?
18:25:40 not really a problem per se
18:26:07 but if current data values are 80 bytes, and keys are 8 or 32 bytes, then all data is currently perfectly 8-byte aligned, which helps performance
18:26:31 adding 1 byte to data will in fact add 2 bytes, because LMDB always keeps all records at least 2-byte aligned
18:27:18 Transactions are variable size already in the db. The per-output data is not.
18:27:44 ah yeah, 1 byte per output
18:56:12 https://eprint.iacr.org/2020/735
18:56:14 0_0
18:56:18 * sarang reads on...
19:08:50 quite like the sound of zk-WIP
19:09:54 The improvement in size is intriguing, if it still retains the desired security properties
19:12:10 seems like a relatively marginal improvement? 2.5 KB -> 2.4 KB
19:13:09 What's QuisQuis?
19:13:44 It's an idea that came out of UCL (and possible collaborators) for a privacy-preserving account-based ledger
19:13:54 Meiklejohn et al., IIRC
19:14:25 https://eprint.iacr.org/2018/990
19:15:14 Yep, UCL _and_ collaborators
19:17:16 also, table 2 suggests this Bulletproofs+ trades some verification time for faster prover time, which ideally I'd rather have the other way
19:18:52 I need to examine batch verification in greater detail
19:19:02 Single proof vs. batch proof is an important detail
19:20:33 is "agg size" batching?
19:21:04 I use the term "aggregation" to refer to generating a single proof demonstrating range on multiple commitments
19:21:31 and "batching" to refer to verification of multiple independent proofs at the same time in a way that combines the computational complexity of common generators
19:21:40 They are not the same thing
19:21:56 Aggregation has size benefits
19:21:59 Batching has time benefits
19:22:12 It'd be interesting to see if that new method can do aggregation without power of 2.
19:22:23 Doesn't appear so
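For scale on the size side of aggregation: Monero's current Bulletproof covering M outputs is 4 + 2*ceil(log2(64*M)) group elements plus 5 scalars at 32 bytes each, with M padded up to the next power of two, which matches the 736-byte 2-output and 800-byte 4-output figures quoted just below. The Bulletproofs+ line here simply subtracts the 96 bytes (three elements) claimed in the paper and should be treated as approximate:

```python
from math import ceil, log2

BYTES_PER_ELEMENT = 32   # a compressed group element or scalar on ed25519

def bp_size(num_outputs: int, bits: int = 64) -> int:
    """Monero-style Bulletproofs aggregate range proof size in bytes."""
    m = 1
    while m < num_outputs:
        m *= 2                                   # pad to a power of two
    elements = 4 + 2 * ceil(log2(bits * m)) + 5  # group elements + scalars
    return elements * BYTES_PER_ELEMENT

def bp_plus_size(num_outputs: int, bits: int = 64) -> int:
    """Rough Bulletproofs+ size, taking the claimed 96-byte saving at face value."""
    return bp_size(num_outputs, bits) - 3 * BYTES_PER_ELEMENT

for outs in (1, 2, 4, 16):
    print(outs, bp_size(outs), bp_plus_size(outs))
# 1 672 576
# 2 736 640
# 4 800 704
# 16 928 832
```

Aggregation is where the real saving sits: one 4-output aggregate is 800 bytes, while four separate single-output proofs would be 4 × 672 = 2688 bytes, which is the kind of saving mentioned at 14:36:16 for a common range requirement.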
19:22:33 That's surprisingly subtle and tricky
19:22:39 it sounds like they shave off 96 bytes per proof; the initial claim is a bit misleading
19:22:40 I have at least one method, but it doesn't work as well as you'd like
19:25:32 also, our 2-out range proof is 736 bytes, which isn't listed on their table, so I'm not sure how direct the size comparison is
19:27:24 Since dEBRUYNE posted to r/Monero: the usual disclaimer that preprints neither require nor expect any formal peer review
19:27:27 their 32x8 entry is 800 bytes, and our 4-out range proof is 800 bytes, so maybe that's appropriate to look at
19:28:11 Anyone can submit a preprint; preprint server editors perform minimal editorial review that does not examine accuracy of results
19:28:43 (this seems to be an ongoing issue with "reporting" of technical material)
19:30:13 so at a 4-out tx (according to their numbers): a 12% size reduction from 800 -> 704 bytes, and a 0.8% verification increase for a single proof, 4.51 -> 4.55 ms
19:31:26 although maybe the 64x series is more accurate... anyway, I'll stop typing :p
19:32:52 Post on r/Monero: https://www.reddit.com/r/Monero/comments/hbl0li/bulletproofs_shorter_proofs_for_privacyenhanced/
19:32:52 [REDDIT] 'Bulletproofs+: Shorter Proofs for Privacy-Enhanced Distributed Ledger' (https://eprint.iacr.org/2020/735) to r/Monero | 5 points (100.0%) | 2 comments | Posted by dEBRUYNE_1 | Created at 2020-06-18 - 19:19:40
19:33:03 I'm getting pretty sick and tired of terrible reporting on preprints
19:33:18 so I'll probably continue to increase my disclaimers on them whenever I see them
19:33:34 when you see "preprint", think "PDF that someone uploaded"
19:34:50 lol
19:34:58 "Google Drive for PDFs only"
19:35:19 -____-
19:35:36 Good preprint servers do a cursory editorial review, but only for apparent relevance
19:35:40 IACR does this
19:35:45 as does arXiv
19:36:00 but if you see "a study shows..." there's a _very_ good chance it's a preprint
19:36:04 and therefore shitty reporting
19:36:19 I have increasingly little patience for this poor reporting
19:36:47 "preprint" means neither "accurate" nor "inaccurate"
19:38:15 why can't they just watermark the preprint with something like, I don't know, "preprint"
19:39:06 would this matter in practice?
19:39:31 I've seen medical preprint archives have specific disclaimers and warnings for reports that appear to go unheeded
19:39:39 "a study" means absolutely nothing, apparently
19:39:55 Preprint archives are a double-edged sword
19:40:01 They're of huge value to experts
19:40:11 and (IMO) huge risk to non-experts
19:40:45 Heck, even experts fall into the trap of assuming preprint results are correct without external verification
19:40:50 It's an easy trap to fall into
19:41:40 to be honest, I think it's a shit-show, but it seems a "thing". Instapost-research
19:41:58 well, presumably the reputational damage from publishing invalid results is enough to motivate some diligence in the authors
19:42:03 To be clear, I think having preprint archives is a benefit to research
19:42:24 If all research had to wait for peer review to be posted anywhere, there would be a large body of work that never sees the light of day
19:42:30 and not for specific lack of quality
19:42:54 "Just get accepted to a journal/conference" is highly nontrivial, and could take months or years
19:43:07 I _love_ that I can see up-to-date work as it's done
19:43:17 but you have to take it with a large spoonful of salt
19:43:46 no, I totally understand the benefit, but just wonder about how it weighs up against the negatives.
19:43:55 but it's frustrating to see reporting that does not appreciate the spectrum of the review process
19:44:23 like it's essentially just a message board for research. Which is great. But it is being read as a message board for knowledge.
19:44:32 Which is troublesome
19:45:15 This is not the fault of the preprint archives
19:45:31 Their acceptance criteria are easy to find
19:45:43 this is the fault of poor reporting
19:45:46 which is also easy to find
19:45:49 "Knows how to operate an FTP client"
19:47:55 It's come to the point where if I see a news article about "a study" without a direct link to the paper or preprint, I assume it's full of shit and ignore it
19:49:35 There is nothing more annoying these days than a reference without a source hyperlink.
19:50:00 addicted to the semantic web
19:50:47 Yeah, surely the internet has advanced to the point where a hyperlink is possible