13:00:51 I am playing around a little with Ed25519 and checked what happens to a public key if you make the smallest possible change to a secret key, i.e. just increment the secret key by 1
13:01:26 I noticed that the public key seems to change completely, at least if you look at x as hex bytes.
13:02:04 Is that so? Is elliptic curve multiplication similar to hashing in this respect, i.e. very small input change -> very big output change?
13:02:15 It adds G, which is pretty large.
13:03:28 Yeah, right, with a change of 1 you already add G one more time, this big number, so to say
15:14:37 midipoet: I have seen omershlo's work elsewhere
15:15:11 rbrunner: that's why private -> public is a good one-way map :D
15:15:23 Any thoughts in particular about BP+?
15:15:52 I already had it on my own project list, but it hasn't undergone formal review yet, so I didn't prioritize it
15:16:43 I feel paranoid about leaving crypto code to random people.
15:17:49 I think we change the math in Monero too much already, as I always think something's gonna break. But that belief is probably directly related to my inability to understand the math
15:20:00 Wouldn't this conflict, of sorts, with a move to a scheme that is not only an increment but quite a lot better, which I assume Triptych to be?
15:20:29 RCT3, Triptych, and Arcturus all still require a separate range proving construction, like BP or BP+
15:20:42 So there's no conflict of any kind
15:21:49 I want to check the timing estimates they provided, since it's not immediately clear how they get those numbers from an operation-count perspective
15:22:32 You mean a different "corner" of the crypto? Are BP plus Triptych and BP+ plus Triptych both possible then?
15:22:54 Yeah, they're separate parts of the transaction protocol
15:23:17 Any of the constructions that isn't Omniring can use either BP or BP+
15:23:44 Interesting. So it's at least feasible and does not conflict with the tentative roadmap.
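The avalanche effect observed above can be reproduced in a few lines of stdlib Python. This is only a sketch of the raw Ed25519 group arithmetic (real Ed25519 hashes and clamps the secret key before use, which is skipped here, and real code is constant-time); it shows that incrementing the scalar just adds the base point B, i.e. (s+1)·B = s·B + B, yet the encoded public keys differ in almost every byte:

```python
# Minimal Ed25519 group arithmetic (illustration only, NOT constant time).
# Incrementing the scalar adds the (large) base point B, which scrambles
# the encoded public key much like a hash output would.
p = 2**255 - 19
d = (-121665 * pow(121666, -1, p)) % p

# Standard Ed25519 base point B (y = 4/5 mod p).
Bx = 15112221349535400772501151409588531511454012693041857206046113283949847762202
By = 46316835694926478169428394003475163141307993866256225615783033603165251855960
B = (Bx, By)

def add(P, Q):
    """Unified twisted-Edwards point addition (also handles doubling)."""
    (x1, y1), (x2, y2) = P, Q
    x3 = (x1 * y2 + x2 * y1) * pow(1 + d * x1 * x2 * y1 * y2, -1, p) % p
    y3 = (y1 * y2 + x1 * x2) * pow(1 - d * x1 * x2 * y1 * y2, -1, p) % p
    return (x3, y3)

def scalar_mult(s, P):
    """Simple double-and-add scalar multiplication."""
    R = (0, 1)  # identity element of the curve group
    while s:
        if s & 1:
            R = add(R, P)
        P = add(P, P)
        s >>= 1
    return R

def encode(P):
    """32-byte encoding: little-endian y with the sign of x in the top bit."""
    x, y = P
    return (y | ((x & 1) << 255)).to_bytes(32, "little")

s = 123456789  # toy scalar standing in for a (clamped) secret scalar
A1 = encode(scalar_mult(s, B))
A2 = encode(scalar_mult(s + 1, B))
differing = sum(a != b for a, b in zip(A1, A2))
print(A1.hex())
print(A2.hex())
print(f"{differing}/32 bytes differ")
```

On a run with this scalar, essentially all 32 bytes of the encoding change, even though the underlying points differ only by B.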
15:24:25 Sure, the question is whether the change is considered safe and worth the cost in time/funding
15:24:47 I'm happy to do the update as well, but that's not a reason to fund or not fund the proposal
15:29:17 Well, I am a little out of my depth here, but my immediate thought: Wouldn't it be nice to keep you free to work on the true breakthroughs by giving tedious work like switching to BP+ to other people? (If the cost is right, and we trust those people, etc.)
15:31:41 Heh, it's hard to plan breakthroughs, if they happen at all...
15:31:51 Three person-months seems like a long time for this
15:32:14 and I suspect there would be a need for them to get caught up on how Bulletproofs plays a part in the protocol
15:32:37 The writeup seems to imply the authors didn't know about protocol requirements for proof aggregation, or about output limits
15:32:42 FWIW these are small things
15:33:11 Oh, and there would need to be a decision on whether an external audit would be necessary for this
15:33:46 Then again, encouraging more researchers to work on Monero is great
15:33:49 Breakthroughs typically happen serendipitously :-P
15:34:04 dEBRUYNE: such is the nature of research
15:35:23 Maybe I am wrong again, but isn't Triptych that thing that would allow much larger rings? What I would call a "breakthrough", given how people always quibble about rings and are able to give the impression of a great weakness
15:35:52 Triptych is one of a few constructions that allow for more efficient scaling, yes
15:35:56 certainly not the only one
15:36:19 "breakthrough" might be a bit of a stretch...
15:36:24 but anyway
15:37:01 Things go a little differently from the labs over in the marketing department :)
15:37:12 From a research interest perspective, I am certainly quite interested in doing the BP+ modifications myself if deemed safe and useful
15:37:24 but again, that's not itself a reason not to bring other researchers in
15:40:21 Can something be said about the probability that in a few months there will be BP++?
15:45:47 If you assume (for the sake of this thought experiment) that an audit is not required, and that the construction is correct/safe/etc., then I don't see why it couldn't be included in the next network upgrade if all went smoothly
15:46:16 I'll reach out and ask why they assume three person-months to do this; it seems like a lot of time for modifying an existing construction in our codebase
15:47:10 However, the savings aren't quite as extreme as for something like CLSAG, so delaying until a future upgrade isn't as costly
15:47:33 (related note: there's a network upgrade meeting in -dev today at 17:00 UTC: https://github.com/monero-project/meta/issues/485)
15:47:37 .date
15:47:41 .time
15:47:47 Huh, the bot is gone
15:48:26 1 hour, 10 minutes from now, if I am not mistaken
15:52:51 sounds right
15:52:56 what's the bot name?
16:07:04 .time
16:07:04 2020-07-19 - 16:07:04
16:07:08 there we go
16:15:47 An AI bot that senses if and when it's needed, and then joins?
16:18:59 heh
16:19:06 No, I found it in -community and invited it
16:21:35 oh wow monerobux is even here
16:23:19 It's nice to have for datetime information
16:23:45 Please don't use it for stuff like price information
16:27:58 OK, I made a brief comment on the BP+ CCS to ask for some additional details on the timeline
16:28:08 as well as if/how they could take advantage of existing work
16:28:19 There's already a good unit and performance test framework in place
16:31:48 Hmm, I wonder if their performance numbers account for unrolling the verifier recursion
16:32:39 If you do that, the timing for the inner product stuff should be pretty much identical, with the practical time savings arising from removing a couple of point computations elsewhere
16:32:57 And even then, we use a more optimized algorithm for computing the multiscalar multiplications
16:38:15 https://repo.getmonero.org/monero-project/ccs-proposals/-/merge_requests/156#note_10097
16:38:19 and https://repo.getmonero.org/monero-project/ccs-proposals/-/merge_requests/156#note_10098
16:38:45 I'm always a bit skeptical about performance estimates based on specific implementations, since they might not translate well
16:39:47 If we have operation counts for the multiscalar multiplication, we can run simple performance tests on that to see what would change, and the result would be independent of any particular optimizations in their implementation
16:40:40 This does ignore scalar-only computations, but those are orders of magnitude faster than the group operations we need
16:40:55 To the point where they can often (not always) be ignored as negligible
16:41:55 Based on my earlier initial work on BP+, the time savings seemed fairly minimal, although there's no argument that you save space
16:42:32 When I brought it up, there didn't seem to be a lot of positive reaction, presumably on the assumption that an external audit would be needed for somewhat marginal benefits
16:43:08 I think it's because the authors claimed slightly slower verification.
16:43:16 (IIRC)
16:43:22 IIRC the authors didn't account for recursion unrolling
16:43:40 But I did account for that in my initial estimates
16:44:02 suyash67: are you that person who wants to code BP+?
16:44:35 FWIW, I said I wasn't comfortable having unknowns code crypto code.
16:45:04 (and yes, we do inherit plenty of it from CN unknowns)
16:46:43 I don't know the proposers personally, but I have seen omershlo's work elsewhere
16:48:16 I'll rephrase: s/unknown/new to monero/
16:48:40 Actually, even that is not right. I'd be comfortable with djb, say.
16:49:15 Well, any code would be reviewed as usual
16:49:21 Hmm. People we're not yet reasonably certain are both competent and not trying to sabotage Monero.
16:49:23 Hi Guys! I am Suyash Bagad and I posted a proposal about BP+ yesterday. Omer and I would love to interact with you all and get us all on the same page.
16:49:24 Roughly.
16:49:38 And the review would have to depend on how much change to the original code is done
16:49:39 Hi suyash67
16:49:53 I made a couple of comments on the CCS page just now
16:50:12 I will clarify sarang's comments with detailed explanations on the CCS proposal.
16:50:29 Could you also clarify here as well, since we can talk directly more quickly?
16:50:35 Yes, just saw them, thanks @sarang!
16:50:49 Sure!
16:51:32 Great, thanks suyash67
16:53:42 So I have briefly explained the construction of BP+ in the blogs
16:54:07 Yep, and I am familiar with it as well
16:54:21 (I worked on the original BP implementation with moneromooo)
16:55:05 Meeting in #monero-dev in 5 minutes
16:58:55 suyash67: I think operation numbers for curve-only computations would be helpful, since this helps to abstract away particular differences between implementations
16:59:40 and I think another important part of any work on this would be the extent to which the existing code changes; minor changes might not be seen as requiring another full audit, whereas a major overhaul likely would
16:59:48 and this would add considerably to the expense of deployment
17:00:07 I'd also like to know more details on the proposed timeline of 3 person-months
17:31:35 suyash67: did my messages go through? saw you had a disconnect
17:32:10 I saw your messages in the logs; I had a disconnect, so my messages probably didn't reach you
17:33:13 Yeah, I didn't see any messages from you since I said "I'd also like to know more details on the proposed timeline of 3 person-months"
17:36:25 I was saying that we would provide the exact number of curve operations used in BP+ so that we can estimate the timing improvements it could provide
17:37:45 Great
17:37:55 Specifically, in terms of multiscalar multiplication
17:38:04 To clarify a previous question: yes, we are using a single multiscalar verification in BP+ and BP (in the numbers presented in the running times in the proposal)
17:38:07 Right now we do batches in a single operation
17:38:26 OK, so you are accounting for unrolling the verifier recursion
17:38:56 Yes, the verification is a single check obtained by unrolling the recursion.
17:39:04 Yep, as it is now
17:39:13 Correct!
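The "batches in a single operation" mentioned above relies on a standard trick: draw a random weight for each proof and take a linear combination of the per-proof verification equations, so the whole batch reduces to one large multiscalar multiplication. A toy sketch of the idea, using the additive group Z_q as a stand-in for the ed25519 subgroup (all names are illustrative; this is not Monero's actual verifier):

```python
# Batch verification via a single multiscalar multiplication (toy model).
# Each "proof" yields coefficients e with sum_j e[j] * P[j] == 0. Rather
# than checking each proof separately, random weights w_i collapse all the
# checks into one combined equation; a forged proof slips through only with
# probability ~1/q.
import secrets

q = (1 << 127) - 1  # prime order of a toy additive group standing in for ed25519

def multiexp(scalars, points):
    # Stand-in for the single big multiscalar multiplication
    # (Straus/Pippenger in real implementations).
    return sum(s * P % q for s, P in zip(scalars, points)) % q

def make_valid_equation(points):
    # Random coefficients satisfying sum_j e[j] * P[j] == 0,
    # modelling one proof's unrolled verification equation.
    e = [secrets.randbelow(q) for _ in points[:-1]]
    partial = sum(c * P for c, P in zip(e, points[:-1])) % q
    e.append((-partial * pow(points[-1], -1, q)) % q)
    return e

points = [secrets.randbelow(q - 1) + 1 for _ in range(8)]  # shared generators
proofs = [make_valid_equation(points) for _ in range(5)]   # 5 proofs to verify

# Naive approach: one multiexp per proof.
assert all(multiexp(e, points) == 0 for e in proofs)

# Batched: random weights collapse all checks into ONE multiexp.
weights = [secrets.randbelow(q) for _ in proofs]
combined = [sum(w * e[j] for w, e in zip(weights, proofs)) % q
            for j in range(len(points))]
assert multiexp(combined, points) == 0
print("batch of", len(proofs), "proofs verified with a single multiexp")
```

The savings come from the fact that one multiexp over n·m terms (via bucket methods) is much cheaper than n separate multiexps over m terms each.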
17:39:46 I'm a bit surprised to see the verifier numbers that you did, based on some back-of-the-envelope estimates I initially ran on BP+ when the preprint first came out
17:40:03 Another aspect to keep in mind is what the actual diffs to the code end up being
17:40:23 If significant, it's almost certain that a full external audit would be required, incurring significant additional expense
17:40:43 I actually already have some WIP code for BP+ that I started initially, but hadn't prioritized at the time
17:40:43 And we don't want significant unless really needed :)
17:42:51 The slower numbers for verification (in spite of unrolling the recursion) are because we used the cryptoxide library in Rust and did not specifically use optimized multiscalar multiplications. We just wanted a first-hand comparison between times for BP and BP+.
17:44:13 Hi everyone. Thanks for taking the time to consider our BP+ proposal! Reading through the logs I see some concerns were raised; let me try to classify them:
17:44:34 suyash67: op counts will provide a better idea of the change
17:44:38 suyash67: the proposal says this: "This means that each transaction is accompanied by **2.5** range proofs (Bulletproofs as of now). As Monero uses the Ed25519 curve, the size of a single Bulletproofs proof is **676** bytes [1]. This could be reduced by **96** bytes by using Bulletproofs+. In effect, about **240** bytes per transaction could be saved." However, in Monero the bulletproofs for all outputs are aggregated. There is only one Bulletproof, so Bulletproofs+ can only save 96 bytes per transaction
17:44:40 since they're independent of implementation details
17:44:54 UkoeHB_: good point, I wanted to bring up some consensus stuff
17:45:17 Namely, that proof aggregation (with padding) is required, and that there's a limit of 16 commitments aggregated
17:46:08 suyash67: namely, our use of a simplified Pippenger algorithm means the timing reduction isn't quite linear, of course
17:46:42 omershlo: hello!
17:51:10 suyash67 omershlo: I assume your timeline accounts for implementing full batching, even among proofs with different aggregation sizes?
17:51:28 Our current implementation does this, and I consider it a requirement
17:51:41 i.e. I wouldn't support any new code that did not perform batching
17:51:46 UkoeHB_ thanks! I was under the impression that only outputs owned by a single owner were aggregated within a transaction. Thanks for the clarification. I had added a footnote in the proposal about aggregated range proofs for outputs.
17:52:29 suyash67: all outputs are aggregated, to avoid leaking information about common ownership
17:52:39 We perform power-of-2 padding, of course
17:53:22 Yes, we aim to implement aggregation and batching according to whatever is currently done.
17:53:47 Can you speak to (a) the timeline in more detail; and (b) the intent for code reuse and minimizing changes?
18:25:54 omershlo: looks like you disconnected there
18:26:20 yep sorry, back now.
18:42:28 Mapping the concerns about the BP+ project: (1) BP+ cost effectiveness compared to BP is unclear; (2) we feel unease with outsiders writing cryptography code; (3) Sarang can do it better (please let me know if I missed anything). I completely agree with the above statements. That said: THIS is the way to get talented researchers involved in the community and in the weeds. Knowing Suyash's skill and dedication, this project might end up much faster than 3 months. However, we didn't see any reason to rush it; after all, this code is a cryptographic component of critical low-level code.
18:44:26 Code reuse means that we want to keep the same interface as much as possible and use the same functions used in BP as much as possible (to make audit and review easier). Ideally, keeping similar naming conventions and code structure
18:50:18 * moneromooo likes minimal diffs
19:11:53 To be more specific on the timeline: we figured that a quick and dirty PoC should be the first step. Our goal is to get to that point as fast as we can. We didn't go into week-by-week resolution, but we assumed that this can be done within roughly one month, including getting familiar with the existing code (it's been two years since I last read the code). After reaching this milestone we will dedicate time to making the code "production ready": working on readability, structure, security (removing side channels, for example), putting emphasis on code reuse, and perhaps some low-hanging optimisations. This is where we will do the "deep integration". We again didn't go into weekly resolution but assumed that one month will be enough (this step can go on forever, but we want to put a hard stop). The final phase is for measurements, profiling, unit testing, and fine-grained optimisations. I believe we will be able to produce a report with results, and maybe even make recommendations for some other parts of the code. We are not familiar enough with the existing frameworks for benchmarking and unit testing that are already in place; we might be overshooting this step by giving it one month, but we consider it an important part of the project and prefer to overshoot rather than undershoot
19:12:33 Are you referring to trying to make the code constant-time when you mentioned side channels?
19:12:55 In general that has not been a design goal for many algorithms
19:13:30 To what extent do you think code reuse is important for this? The current implementation is not really written with reuse in mind, since its application is so specific
19:13:57 For benchmarking, I think it would be helpful to examine the unit and performance test framework
19:14:14 Those are already built into the CI pipeline
19:14:19 this is txn construction code, yes? if someone has sufficient access to your machine to mount a side-channel attack while you're creating a txn
19:14:24 you've got much bigger problems...
19:14:54 Construction and verification
19:15:01 There's plenty that isn't constant time
19:15:42 but it would take repetitive operations on the same data to be able to turn any of that into an oracle
19:17:06 sarang: what magnitude of changes would you expect going Bulletproofs -> Bulletproofs+?
19:17:41 I understand that the existing code was not meant to be reused. However, can we agree that since we replace a range proof with a range proof, it makes sense not to change the function signature by much? And if the signature is the same, the data structures must also be similar? And so on
19:17:46 Granted, I did only initial noodling on what it might take to modify the current implementation for BP+, but provided the goal wasn't to build a more general framework, it did not seem like too many changes TBH
19:17:56 But again, these are only my initial thoughts
19:18:49 omershlo: right, the existing BP code needs to stick around for verification of course, and it makes sense that any changes for BP+ would maintain a similar structure
19:19:02 and for testing purposes the existing BP generation code also must stay
19:19:09 (otherwise unit/perf tests don't work)
19:30:30 Side channels in large cryptosystems are sometimes overlooked. Even if not a design goal of a system, I personally make a best effort to eliminate them from the components I work on. A system is only as good as its weakest link. hyc: what other problems might an attacker with side-channel access to your machine cause?
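The size figures debated earlier (a 676-byte single Bulletproof, 96 bytes saved by BP+, power-of-2 padding with at most 16 aggregated commitments) can be reproduced with simple arithmetic. The element counts below (9 fixed group/scalar elements for BP vs 6 for BP+, plus 2·log2(64·m) inner-product terms, 32 bytes each) are a reconstruction consistent with those figures; note 21 × 32 = 672, with the quoted 676 including a few bytes of serialization overhead:

```python
# Back-of-the-envelope range-proof sizes (sketch; exact serialized sizes
# include a few bytes of varint overhead on top of the 32-byte elements).
from math import ceil, log2

def padded(outputs):
    # Aggregated range proofs are padded to the next power of two, max 16.
    assert 1 <= outputs <= 16
    return 1 << ceil(log2(outputs))

def bp_elements(m):
    # Bulletproofs: 9 fixed elements + 2*log2(64*m) L/R inner-product terms.
    return 9 + 2 * int(log2(64 * padded(m)))

def bpplus_elements(m):
    # Bulletproofs+: 3 fewer fixed elements, same number of L/R terms.
    return 6 + 2 * int(log2(64 * padded(m)))

for m in (1, 2, 3, 16):
    saved = 32 * (bp_elements(m) - bpplus_elements(m))
    print(f"{m} outputs (padded to {padded(m)}): "
          f"BP ~{32 * bp_elements(m)} B, BP+ ~{32 * bpplus_elements(m)} B, "
          f"saves {saved} B")
```

The savings are a constant 3 × 32 = 96 bytes regardless of aggregation size, which is why, with one aggregated proof per transaction, BP+ saves about 96 bytes per transaction rather than 240.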
19:32:39 they can probably read your keystrokes or your copy/paste buffer directly
19:33:20 many easier/low-effort avenues of attack would be taken in preference to a side channel that yields maybe 1 bit per minute or somesuch
19:34:38 you make a lot of assumptions here about the attacker. What if the attacker can only measure time/energy on the machine?
19:35:35 one bit per minute means you can extract a private key in under several hours. that's not bad at all
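The classic illustration of the kind of data-dependent timing being discussed is secret comparison: an early-exit loop leaks how many leading bytes matched, one byte at a time, which is exactly the "bit per minute" style of oracle above. A generic Python sketch (illustrative only, not Monero code; Monero's hot paths are C++):

```python
# Timing side channel 101: early-exit vs constant-time byte comparison.
import hmac

def leaky_eq(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    for x, y in zip(a, b):  # returns at the FIRST mismatch -> running time
        if x != y:          # depends on how many leading bytes matched
            return False
    return True

def const_time_eq(a: bytes, b: bytes) -> bool:
    if len(a) != len(b):
        return False
    acc = 0
    for x, y in zip(a, b):  # always touches every byte; no data-dependent exit
        acc |= x ^ y
    return acc == 0

secret = bytes(range(32))
assert leaky_eq(secret, secret) and const_time_eq(secret, secret)
assert not const_time_eq(secret, bytes(32))
# In practice, prefer the stdlib's vetted primitive:
assert hmac.compare_digest(secret, bytes(range(32)))
```

For a 256-bit key, an oracle leaking 1 bit per minute recovers the whole key in about 256 minutes (~4.3 hours), consistent with the "under several hours" estimate above.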