00:26:23 Heh, that's not even right... it's 20 iterated square-and-multiply operations, not 20 operations alone...
00:26:51 it'd be more like 200 multiplications per inversion
00:47:32 cargodog[m]: in your Gray code test from the `one-of-many-proofs` code, do you still compute all the `f` inverses, even if you aren't using the Gray code method?
01:43:45 Ah, but wait... we can take advantage of batch _inversion_ as well, since we know all the `f` tensor components from the start!
01:43:58 So the single inversion's 200-odd operations end up being amortized
01:44:17 and our codebase already supports batch scalar inversion, since we added it for Bulletproofs
01:50:49 So the batch inversion of `k` scalars would cost `3k` multiplications in addition to the single inversion (which accounts for an additional ~200 multiplications)
02:42:25 did I hear ringsize a bajillion?
04:05:03 Sorry, just now catching up on recent messages
04:07:21 sarang: regarding generic (n, k) Gray codes, the Wikipedia entry touches on the subject but isn't complete. The important property that a generalized `n`-ary "Gray" code must guarantee is that the Hamming distance between the codewords of subsequent integers in the sequence is always 1. If this holds, then the optimization holds. This is possible, but as you suggest I may have to do a bit of work to formally describe the construction. I am currently writing this up, and hope to share within the week.
04:10:09 With regards to computing inversions of each element of `f`: yes, computing the inversions is necessary; however, this is only done once, up front. It does eat into the performance gains somewhat, but as I intend to demonstrate, for large batches this penalty is still small compared to the overall savings.
04:11:36 Regarding confirming my speedup, I have created a test branch with a concise modification to the code and a testbench, to test with and without Gray coding. My hope is that it is easy for others to audit my modification and reproduce the results on their own. Please reach out for help getting things running.
04:14:55 selsta: the true benefit arises not just with large ring sizes, but with large verification batches. A larger ring increases the scalar-multiplication cost of each additional proof. As the batch size grows, the cost of scalar multiplications eventually overtakes (and even overwhelms) the cost of EC point operations, which is where my optimization shines.
04:17:46 This benefits large batches the most, and interestingly, the best way to enable large batches is to use larger and larger ring sizes so that more transactions may share overlapping rings. I am still curious to see how that trade-off balances out, and whether it may indeed become more practical to consider very large rings... or ringsize a bajillion, as I will lovingly adopt from gingeropolous :)
04:20:19 Let's hope I can keep coming up with excuses to take time away from my day job and think on this! It seems time is always my enemy
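(Editor's note: the `3k`-multiplications-plus-one-inversion claim above is Montgomery's batch-inversion trick. Below is a minimal Rust sketch over a toy 32-bit prime field; the field, constants, and function names are illustrative assumptions, not the actual API of the Monero codebase or `one-of-many-proofs`. The lone square-and-multiply inversion inside `inv` is the ~200-multiplication cost, for a 252-bit scalar field, that the trick amortizes across the batch.)

```rust
// Montgomery batch inversion over a toy prime field (illustrative only;
// a real implementation would use the curve's scalar field).
const P: u64 = 0xffff_fffb; // 2^32 - 5, a prime

fn mul(a: u64, b: u64) -> u64 {
    ((a as u128 * b as u128) % P as u128) as u64
}

fn inv(a: u64) -> u64 {
    // Single inversion via Fermat's little theorem: a^(P-2) mod P.
    // This square-and-multiply loop is the per-inversion cost being amortized.
    let (mut base, mut exp, mut acc) = (a, P - 2, 1u64);
    while exp > 0 {
        if exp & 1 == 1 {
            acc = mul(acc, base);
        }
        base = mul(base, base);
        exp >>= 1;
    }
    acc
}

/// Inverts every element of `xs` in place with ~3(k-1) multiplications
/// plus a single field inversion. All elements must be nonzero.
fn batch_invert(xs: &mut [u64]) {
    // Forward pass: prods[i] = xs[0] * ... * xs[i]  (k - 1 multiplications)
    let mut prods = Vec::with_capacity(xs.len());
    let mut acc = 1u64;
    for &x in xs.iter() {
        acc = mul(acc, x);
        prods.push(acc);
    }
    // One inversion of the full running product...
    let mut inv_acc = inv(acc);
    // ...then a backward pass peeling off one inverse at a time
    // (about 2(k - 1) more multiplications).
    for i in (0..xs.len()).rev() {
        let x = xs[i];
        xs[i] = if i == 0 { inv_acc } else { mul(inv_acc, prods[i - 1]) };
        inv_acc = mul(inv_acc, x);
    }
}

fn main() {
    let mut xs = [3u64, 7, 42, 1_000_003];
    batch_invert(&mut xs);
    assert!(xs.iter().zip([3u64, 7, 42, 1_000_003]).all(|(&xi, x)| mul(xi, x) == 1));
    println!("batch inversion verified: {:?}", xs);
}
```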
08:19:15 sarang: the preprint is up at https://eprint.iacr.org/2020/1126, if you want to have a look
10:48:59 cargodog[m]: sure, of course there isn't just one general Gray code
10:50:08 But in your test branch, are you still computing the inversion even in the case where you aren't using the Gray-code method at all?
10:50:19 It seems you shouldn't be, for a fair speed comparison
10:50:47 h4sh3d[m]: yep, saw it yesterday! Nice to see it on the archive
12:55:53 Hey everyone, Helen Partz from CoinTelegraph is going to try and get on here today. She is trying to learn more so she can write an article about all the research efforts that have gone into understanding and breaking Monero algorithms, and otherwise improving its privacy and security features.
12:56:06 She wrote this article last week after talking to me: https://cointelegraph.com/news/xmr-workgroup-says-irs-should-study-monero-not-try-to-break-it
12:56:07 And she was instrumental in helping me get this article updated by its author to point out it was a payment ID, and not an address, that the US Treasury had posted: https://cointelegraph.com/news/new-us-treasury-sanctions-on-russian-hackers-hit-monero-address
12:56:10 She seems to want to do justice to the Monero story, so please try and give her the benefit of the doubt. If she doesn’t get good answers she won’t be able to write a good article.
12:58:36 hey everyone! @xmrhaelan_ thanks a lot for the invite and introduction!
13:00:03 No problem. Hopefully there are some people online who can answer your questions about Monero research efforts.
14:12:51 Hi helenpartz, what exactly is the subject of your article? xmrhaelan mentioned research into breaking Monero algorithms, but afaik no research of that specific nature has been done. We _have_ had two upgrades addressing algorithmic vulnerabilities (a Merkle tree bug, and an added key image domain check). There has also been a lot of research on inferring information about transaction participants from data available on-chain, much of which is discussed in the ‘Breaking Monero’ YouTube series.
14:17:40 Although I guess research into advancing quantum computers is technically research that could be applied to breaking Monero algorithms... there is an ongoing research project assessing which of Monero's algorithmic assumptions could be broken if quantum computers become viable.
14:19:02 Merkle tree bug? Or do you mean the old 2014 one from Bytecoin?
14:25:20 Yeah
14:31:04 She told me she’d be back on tomorrow with some questions
14:32:10 ok
16:52:48 Well, "no research" is not quite correct. There might not be much in the form of published papers, but there has been quite a bit of work looking into poisoned outputs, blackballing, and cascades, as well as the whole Breaking Monero series
16:55:07 Right. I was puzzled about this, but I assumed I had misunderstood the wording :)
17:13:01 'breaking an algorithm' implies the algorithm itself no longer does what it's supposed to
17:13:36 transaction graphs are not an algorithm
17:13:47 I would attribute that to the initial claim being imprecise
17:15:04 I dunno, brute forcing SHA-256 in 2^252 operations is technically a break. I kinda read it that way. But journalists, I guess.
17:18:59 Brute forcing 2^252 variants? Not in this universe.
19:54:42 * cargodog[m] sent a long message: < https://matrix.org/_matrix/media/r0/download/matrix.org/lnHbdAwJgmZcghcptpIgwmqY/message.txt >
19:57:10 As you suggested before, the penalty will be higher for configurations with `n` > 2, but this penalty will always be small compared to the performance benefit of Gray codes.
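(Editor's note: one standard construction with the Hamming-distance-1 property described at 04:07:21 is the reflected n-ary Gray code, using the conversion sketched on Wikipedia. The Rust sketch below is an illustration of that construction, not code taken from cargodog[m]'s branch, and includes a check of the defining property.)

```rust
// Reflected n-ary Gray code: maps an index to a k-digit base-n codeword
// such that consecutive indices differ in exactly one digit.

/// Digits of `index` in base `n`, least significant first, padded to `k`.
fn to_base_n(mut index: u64, n: u64, k: usize) -> Vec<u64> {
    let mut digits = vec![0u64; k];
    for d in digits.iter_mut() {
        *d = index % n;
        index /= n;
    }
    digits
}

/// Reflected n-ary Gray codeword of `index`, least significant digit first.
fn gray(index: u64, n: u64, k: usize) -> Vec<u64> {
    let digits = to_base_n(index, n, k);
    let mut out = vec![0u64; k];
    let mut shift = 0u64;
    // Work from the most significant digit downward, carrying a shift
    // that keeps each lower digit aligned with the reflection above it.
    for i in (0..k).rev() {
        out[i] = (digits[i] + shift) % n;
        shift += n - out[i];
    }
    out
}

fn main() {
    // Check the defining property for an (n, k) = (3, 4) configuration.
    let (n, k) = (3u64, 4usize);
    for idx in 0..n.pow(k as u32) - 1 {
        let (a, b) = (gray(idx, n, k), gray(idx + 1, n, k));
        let differing = a.iter().zip(&b).filter(|(x, y)| x != y).count();
        assert_eq!(differing, 1);
    }
    println!("all consecutive codewords differ in exactly one digit");
}
```

(As I read the discussion above, this one-digit-per-step property is what makes the up-front batch of `f` inversions pay off: each step of the verifier's loop can update a running product by multiplying in the new digit's `f` value and the cached inverse of the old one, instead of recomputing all `k` factors per ring member.)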