14:38:43 Hello all
14:52:30 morning
15:10:13 'ello
15:12:02 halo
16:04:09 Supply audit blog post MR: https://repo.getmonero.org/monero-project/monero-site/merge_requests/1207
16:04:13 Please read and comment
16:15:17 "whatever more complex mathematics is used for balance assertion is unlikely in practice" I think the word unlikely means something else for the casual reader than for a cryptographer.
16:16:07 amounts are hidden using cryptographic structures called Pedersen commitments Xthat hide amountsX.
16:17:42 The benefit is to help with -> This helps with
16:18:36 "It is not possible to _prove_ that these problems are hard" Is that really right? I thought it was possible to prove.
16:19:23 koe: added the small wording change
16:19:24 Or at least not known to be impossible. But hey, I'm in the peanut gallery.
16:19:29 I'll defer to sarang for the others
16:43:25 Hardness assumptions are just that: assumptions
16:52:24 But AFAIK assumption as in we did not prove it was, nor that it wasn't. Not as in we proved it's not possible to prove even if it's true. Though I'm not sure how you'd prove it's not possible to prove something which is true, but it doesn't sound impossible offhand.
16:52:57 (say so if I'm plain wrong, I might well be)
17:21:01 Oof, good catch koe
17:22:40 moneromooo: this seems like https://en.wikipedia.org/wiki/G%C3%B6del's_incompleteness_theorems
17:22:41 !
17:23:13 OK. I was wrong then. Thanks.
17:23:41 Oh no, I don't mean to say that any particular hardness assumptions fall into that
17:23:52 The wording I used could be changed
17:24:05 to something like "these hardness assumptions are not proven"
17:24:31 Ah. Back in the race :D
17:25:39 ping me when you two sort it out :p
17:27:56 "It is not possible to _prove_ that these problems are hard, but decades of research and use implies this." <-- "The computational hardness of such assumptions is not proven, but decades of research and use imply this."
17:28:00 Something like that, perhaps?
17:28:45 well, sarang just referenced Gödel, so we could be here a while.
17:28:56 -____-
17:29:43 I like that better
17:38:11 suraeNoether: any thoughts on the post?
19:11:40 sarang hrm i get the concepts but (a) it sounds like the blog ends on the note that further research on soundness isn't considered theoretically to be likely fruitful .. and separately (b) the diagram kinda makes it sound like soundness of the supply is not tied to implementation risks ... are you saying 'theoretical soundness' in that diagram?
19:12:00 (diagram with the circles)
19:12:47 the diagram confused me as well
19:14:20 That was sgp_'s work
19:14:49 o no what am I liable for now
19:15:16 point was to try and visualize relative risks
19:15:27 LOL sarang instantly blame sgp
19:15:53 Not my intent!
19:15:56 just teasing
19:16:44 I can replace the image with something else that helps show relative risk
19:18:43 an easy fix could be replacing 'soundness' with 'theory' (i.e. the theory/math) - and i'd make it the same color as the other dots since they're supposed to sorta be in the same category of objects
19:19:24 maybe put 'em all side by side, labels under
19:21:27 oops edit fail
19:21:36 s/the theory/the model/
19:47:03 I think it could be fine without the image
20:15:53 sarang: I think so too, but I want some visual if possible
20:26:17 endogenic koe sarang is this better? what can I replace the '...' with? I'm thinking something funny. 'fluffypocalypse'? https://usercontent.irccloud-cdn.com/file/QY45aCfL/relative_risks
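(A minimal toy sketch of the balance assertion mentioned in the post discussion above, assuming a deliberately simplified model: plain modular integer arithmetic stands in for elliptic curve points, so this version is neither hiding nor binding. Real Pedersen commitments are curve points C = x*G + a*H with independent generators, and real transactions also include a fee term and range proofs.)

```python
# Toy illustration only: integers mod the ed25519 group order stand in for
# curve points, purely to show the homomorphic balance check.
import secrets

L = 2**252 + 27742317777372353535851937790883648493  # ed25519 group order
G = secrets.randbelow(L)  # stand-ins for the two commitment generators
H = secrets.randbelow(L)

def commit(mask, amount):
    """Pedersen-style commitment: mask*G + amount*H (mod L)."""
    return (mask * G + amount * H) % L

# Inputs worth 7 and 3, outputs worth 8 and 2 (amounts balance).
in_amounts, out_amounts = [7, 3], [8, 2]

in_masks = [secrets.randbelow(L) for _ in in_amounts]
# Pick the last output mask so the masks sum to the same value as the input
# masks; then the commitments cancel without revealing any amount.
out_masks = [secrets.randbelow(L)]
out_masks.append((sum(in_masks) - sum(out_masks)) % L)

ins = [commit(m, a) for m, a in zip(in_masks, in_amounts)]
outs = [commit(m, a) for m, a in zip(out_masks, out_amounts)]

# Balance assertion: sum of input commitments equals sum of output commitments.
assert sum(ins) % L == sum(outs) % L
print("commitments balance without revealing the amounts")
```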
20:30:22 I'm not a fan of assigning relative risk sizes like this... someone will rightly complain
20:31:12 how else can I help show users it's a relative balance of risks? in a large sense that's a main point of the post
20:34:15 let me try a graph
20:35:42 I already don't like the graph idea
20:40:55 https://usercontent.irccloud-cdn.com/file/P2zYQzvt/bar_chart
20:41:17 great post sgp_, sarang. enjoyed it.
20:41:41 do you at least get the goal of what I'm trying to convey?
20:45:24 Maybe compare to likely vs. unlikely events?
20:45:37 Getting a speeding ticket vs. getting struck by lightning
20:45:53 You don't want either, but one of them is much less likely
20:53:41 yep speeding ticket is less likely
20:55:35 hi math nerds, I'm wondering if the MLSAG decoy responses can be theoretically compressed if we change the randomness into hash outputs of the signed message (e.g. h1 = H(m,1), h2 = H(m,2); something like that)
20:55:58 I don't know how to compress while hiding the real one, but is it logically possible?
20:57:54 by compress I'm considering the sum of field elements, e.g. q = a + b
20:58:09 or q = a*b
20:59:50 and the hiding effect from a truly random number, so generate a random number, compute the hashes, compute the decoy responses from true random and hashes, perform the signature and get the real response, compress the responses, then return one or two scalars which can be used in combination with the known hashes to generate the list of responses
21:03:47 and the real response is at an unknown index in that list
21:11:03 definite progress visually sgp
21:11:18 solved that problem
21:12:24 can you elaborate endogenic?
21:13:12 uh nevermind i thought you renamed it for some reason. this is what i get for doing 4 things at once
21:25:25 koe: I don't see how you could do that while still hiding the signing index
21:44:17 would it be reasonable to outsource the problem to other math nerds? here is my outline https://justpaste.it/5zdsd
21:55:36 would it be worthwhile to offer a bounty (e.g. let you guys focus on your current projects)? if so, would someone mind helping me formalize this? draft 2 https://justpaste.it/2ounn
22:02:35 moreover, where would you even go to propose such a challenge?
22:34:06 I think you can do this by seeding a hash-based chain and including an offset
22:35:24 @sgp maybe something like a bee sting, a car accident, a tornado, and a tornado
22:35:27 (if I'm understanding the problem correctly koe)
22:36:51 koe: are you saying you want to be able to generate a sequence `S` such that for a secret index `l` we have `S[l] == v` for some secret value `v`?
22:37:05 and the sequence `S` is uniformly distributed field elements?
22:37:12 Because that is possible
22:38:20 where v is the product of a one way function of the other members of `S`
22:39:47 or in other words v can only be determined after all other members of `S` are known, and v is random
22:41:05 Could you use a method like this: https://gist.github.com/SarangNoether/bdceabf9a72de13040fed91dd2104c7c
22:41:27 You can select `v` at random after building the hash sequence `S`, and choose the offset `x` accordingly
22:41:35 Isthmus: hmm, I'll try a few options
22:42:00 In that example, `v` is whatever you want it to be
22:42:10 You could have a set value, or choose it after building the sequence
22:42:54 ok lemme see
22:43:22 Maybe this isn't what you're asking
22:43:55 It uses this ed25519 test library: https://github.com/SarangNoether/skunkworks/blob/curves/dumb25519/dumb25519.py
22:45:19 well we have MLSAG scalars, and most of them are completely random, so why not offload most of that randomness to a hash function that verifiers can compute as well? then the prover just has two pieces of important info: the real response scalar, and the randomness used to obfuscate which index is the real scalar
22:50:01 kind of like how the commitment mask was changed, it's now created out of the transaction public key's randomness
22:53:59 not sure that code works, since x is a function of v, so after S += x, ALL the scalars will be functions of v
23:00:35 Yeah, this wouldn't work for MLSAG/CLSAG
23:01:19 Since you need `v` to construct `x` and offset the whole sequence
23:02:13 Unless you operate on each sequence element, the verifier would gain information
23:11:06 I'm trying to think of a way where you do an operation on each element, and most of the time information related to v gets canceled out EXCEPT for the element v itself
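(A minimal sketch of the hash-chain-plus-offset idea as described in the exchange above. It is reconstructed from the chat, not copied from the linked gist, and uses hashlib in place of the dumb25519 hash-to-scalar helper; the final comment records the objection raised at 22:53:59.)

```python
# Sketch: build a deterministic scalar sequence from a seed, then shift it so a
# chosen value v lands at a secret index l. Scalars are integers mod the
# ed25519 group order.
import hashlib
import secrets

L = 2**252 + 27742317777372353535851937790883648493  # ed25519 group order

def hash_to_scalar(*args):
    data = b"||".join(str(a).encode() for a in args)
    return int.from_bytes(hashlib.sha256(data).digest(), "little") % L

n = 11                      # ring size
l = secrets.randbelow(n)    # secret index
v = secrets.randbelow(L)    # secret value that must end up at index l

seed = secrets.randbelow(L)
S = [hash_to_scalar(seed, i) for i in range(n)]   # deterministic "decoy" chain

# Shift the whole chain so that position l holds v.
x = (v - S[l]) % L
S_shifted = [(s + x) % L for s in S]
assert S_shifted[l] == v

# A verifier can rebuild S_shifted from (seed, x) alone, so only two scalars
# are transmitted instead of n. The problem pointed out above: x = v - S[l],
# so after the shift every element is a function of v, and since v must be
# known to construct x, this does not carry over to MLSAG/CLSAG responses.
```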