-
rbrunner
I am playing around a little with Ed25519 and checked what happens to a public key if you make the smallest possible change to the secret key, i.e. just increment the secret key by 1
-
rbrunner
I noticed that the public key seems to completely change, at least if you look at x as hex bytes.
-
rbrunner
Is that so? Is elliptic curve multiplication similar to hashing in this respect, i.e. very small input change -> very big output change?
-
moneromooo
It adds G, which is pretty large.
-
rbrunner
Yeah, right, with a change of 1 you already add G one more time, this big number, so to say
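A toy check of the exchange above (pure Python, not constant-time, illustration only): on Ed25519, (s+1)*G = s*G + G, yet the two resulting public keys look completely unrelated.

```python
# Minimal Ed25519 group arithmetic (illustration only, not constant-time).
p = 2**255 - 19
d = -121665 * pow(121666, p - 2, p) % p   # curve constant: -x^2 + y^2 = 1 + d x^2 y^2

def add(P, Q):
    # Complete twisted Edwards addition (a = -1)
    x1, y1 = P
    x2, y2 = Q
    t = d * x1 * x2 * y1 * y2 % p
    x3 = (x1 * y2 + x2 * y1) * pow(1 + t, p - 2, p) % p
    y3 = (y1 * y2 + x1 * x2) * pow(1 - t, p - 2, p) % p
    return (x3, y3)

def mul(s, P):
    # Double-and-add scalar multiplication
    Q = (0, 1)                             # identity element
    while s:
        if s & 1:
            Q = add(Q, P)
        P = add(P, P)
        s >>= 1
    return Q

# Standard base point G = (Gx, 4/5)
gy = 4 * pow(5, p - 2, p) % p
gx = 15112221349535400772501151409588531511454012693041857206046113283949847762202
G = (gx, gy)

s = 123456789
P1 = mul(s, G)
P2 = mul(s + 1, G)
assert P2 == add(P1, G)   # incrementing s just adds G once more...
print(hex(P1[1]))          # ...but the encoded keys share no visible pattern
print(hex(P2[1]))
```

Adding G once is enough to scramble the coordinates, which is exactly why the private-to-public map behaves like a one-way function here.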
-
sarang
midipoet: I have seen omershlo's work elsewhere
-
sarang
rbrunner: that's why private -> public is a good one-way map :D
-
sarang
Any thoughts in particular about BP+?
-
sarang
I already had it on my own project list, but it hasn't undergone formal review yet so I didn't prioritize it
-
moneromooo
I feel paranoid about leaving crypto code to random people.
-
midipoet
i think we change the math in Monero too much already, as I always think something's gonna break. But that belief is probably directly related to my inability to understand the math
-
rbrunner
Wouldn't this sort of conflict with a move to a scheme that is not just an increment, but quite a lot better, which I assume Triptych to be?
-
sarang
RCT3, Triptych, and Arcturus all still require a separate range proving construction, like BP or BP+
-
sarang
So there's no conflict of any kind
-
sarang
I want to check on the timing estimates they provided, since it's not immediately clear how they get those numbers from an operation count perspective
-
rbrunner
You mean a different "corner" of the crypto? Are BP plus Triptych and BP+ plus Triptych both possible then?
-
sarang
Yeah, they're separate parts of the transaction protocol
-
sarang
Any of the constructions that isn't Omniring can use either BP or BP+
-
rbrunner
Interesting. So it's at least feasible and does not conflict with the tentative roadmap.
-
sarang
Sure, the question is whether the change is considered safe and worth the cost in time/funding
-
sarang
I'm happy to do the update as well, but that's not a reason to fund or not fund the proposal
-
rbrunner
Well, I am a little out of my depth here, but my immediate thought: Wouldn't it be nice to keep you free to work on the true breakthroughs by giving tedious work like switching to BP+ to other people? (If the cost is right, and we trust those people, etc.)
-
sarang
Heh, it's hard to plan breakthroughs if they happen at all...
-
sarang
Three person-months seems like a long time for this
-
sarang
and I suspect there would be a need for them to get caught up on how Bulletproofs plays a part in the protocol
-
sarang
The writeup seems to imply the authors didn't know about protocol requirements for proof aggregation, or about output limits
-
sarang
FWIW these are small things
-
sarang
Oh, and there would need to be a decision on whether an external audit would be necessary for this
-
sarang
Then again, encouraging more researchers to work on Monero is great
-
dEBRUYNE
Breakthroughs typically happen serendipitously :-P
-
sarang
dEBRUYNE: such is the nature of research
-
rbrunner
Maybe I am wrong again, but isn't Triptych that thing that would allow much larger rings? What I would call a "breakthrough", given how people always quibble about rings and are able to give the impression of a great weakness
-
sarang
Triptych is one of a few constructions that allow for more efficient scaling, yes
-
sarang
certainly not the only one
-
sarang
"breakthrough" might be a bit of a stretch...
-
sarang
but anyway
-
rbrunner
Things go a little differently from the labs over in the marketing department :)
-
sarang
From a research interest perspective, I am certainly quite interested to do the BP+ modifications myself if deemed safe and useful
-
sarang
but again, that's not itself a reason not to bring other researchers in
-
rbrunner
Can something be said about the probability that in a few months there will be BP++?
-
sarang
If you assume (for the sake of this thought experiment) that an audit is not required, and that the construction is correct/safe/etc., then I don't see why it couldn't be included in the next network upgrade if all went smoothly
-
sarang
I'll reach out and ask why they assume three person-months to do this; it seems like a lot of time for modifying an existing construction in our codebase
-
sarang
However, the savings aren't quite as extreme as for something like CLSAG, so delaying until a future upgrade isn't as costly
-
sarang
(related note: there's a network upgrade meeting in -dev today at 17:00 UTC:
monero-project/meta #485)
-
sarang
.date
-
sarang
.time
-
sarang
Huh, the bot is gone
-
rbrunner
1 hour, 10 minutes from now, if I am not mistaken
-
sarang
sounds right
-
sarang
what's the bot name?
-
sarang
.time
-
monerobux
2020-07-19 - 16:07:04
-
sarang
there we go
-
rbrunner
AI bot that feels if and when it's needed, and then joins?
-
sarang
heh
-
sarang
No, I found it in -community and invited it
-
charuto
oh wow monerobux is even here
-
sarang
It's nice to have for datetime information
-
sarang
Please don't use it for stuff like price information
-
sarang
OK, I made a brief comment on the BP+ CCS to ask for some additional details on the timeline
-
sarang
as well as if/how they could take advantage of existing work
-
sarang
There's already a good unit and performance test framework in place
-
sarang
Hmm, I wonder if their performance numbers account for unrolling the verifier recursion
-
sarang
If you do that, the timing for the inner product stuff should be pretty much identical, with the practical time savings arising from removing a couple of point computations elsewhere
-
sarang
And even then, we use a more optimized algorithm for computing the multiscalar multiplications
-
sarang
I'm always a bit skeptical about performance estimates based on specific implementations, since they might not translate well
-
sarang
If we have operation counts for the multiscalar multiplication, we can run simple performance tests on that to see what would change, and the result would be independent of any particular optimizations in their implementation
-
sarang
This does ignore scalar-only computations, but those are orders of magnitude faster than the group operations we need
-
sarang
To the point where they can often (not always) be ignored as negligible
-
sarang
Based on my earlier initial work on BP+, the time savings seemed fairly minimal, although there's no argument that you save space
-
sarang
When I brought it up, there didn't seem to be a lot of positive reaction, presumably on the assumption that an external audit would be needed for somewhat marginal benefits
-
moneromooo
I think it's because the authors claimed slightly slower verification.
-
moneromooo
(IIRC)
-
sarang
IIRC the authors didn't account for recursion unrolling
-
sarang
But I did account for that in my initial estimates
-
moneromooo
suyash67: are you that person who wants to code BP+ ?
-
moneromooo
FWIW, I said I wasn't comfortable having unknowns code crypto code.
-
moneromooo
(and yes, we do inherit plenty of it from CN unknowns)
-
sarang
I don't know the proposers personally, but I have seen omershlo's work elsewhere
-
moneromooo
I'll rephrase: s/unknown/new to monero/
-
moneromooo
Actually, even that is not right. I'd be comfortable with djb, say.
-
sarang
Well, any code would be reviewed as usual
-
moneromooo
Hmm. People we're not yet reasonably certain are both competent and not trying to sabotage Monero.
-
suyash67
Hi guys! I am Suyash Bagad and I posted a proposal about BP+ yesterday. Omer and I would love to interact with you all and get us all on the same page.
-
moneromooo
Roughly.
-
sarang
And the review would have to depend on how much change to the original code is done
-
sarang
Hi suyash67
-
sarang
I made a couple of comments on the CCS page just now
-
suyash67
I will clarify sarang's comments with detailed explanations on the CCS proposal.
-
sarang
Could you also clarify here as well, since we can talk directly more quickly?
-
suyash67
Yes, just saw them, thanks @sarang!
-
suyash67
Sure!
-
sarang
Great thanks suyash67
-
suyash67
So I have briefly explained the construction of BP+ in the blogs
-
sarang
Yep, and I am familiar with it as well
-
sarang
(I worked on the original BP implementation with moneromooo)
-
ErCiccione[m]
Meeting in #monero-dev in 5 minutes
-
sarang
suyash67: I think operation numbers for curve-only computations would be helpful, since this helps to abstract away particular differences between implementations
-
sarang
and I think another important part of any work on this would be the extent to which the existing code changes; minor changes might not be seen as requiring another full audit, whereas a major overhaul likely would
-
sarang
and this would add considerably to the expense of deployment
-
sarang
I'd also like to know more details on the proposed timeline of 3 person-months
-
sarang
suyash67: did my messages go through? saw you had a disconnect
-
suyash67
I saw your messages in the logs; I had a disconnect, so my messages probably didn't go through
-
sarang
Yeah, I didn't see any messages from you since I said "I'd also like to know more details on the proposed timeline of 3 person-months"
-
suyash67
I was saying that we would provide the exact number of curve operations used in BP+ so that we can estimate the timing improvements it could provide
-
sarang
Great
-
sarang
Specifically, in terms of multiscalar multiplication
-
suyash67
To clarify a previous question: yes, we are using a single multiscalar verification in both BP+ and BP (in the running times presented in the proposal)
-
sarang
Right now we do batches in a single operation
-
sarang
OK, so you are accounting for unrolling the verifier recursion
-
suyash67
Yes, the verification is a single check obtained by unrolling the recursion.
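A toy illustration of what "unrolling the recursion" means (hypothetical simplified indexing; the real Bulletproofs verifier is more involved): after log2(n) inner-product rounds with challenges x_j, the verifier's many recursive checks collapse into per-generator coefficients s_i = prod_j x_j^(+/-1), where bit j of i picks the sign, so everything becomes a single multiexponentiation.

```python
# Compute the unrolled coefficient vector from the round challenges.
from fractions import Fraction

def unrolled_coefficients(challenges):
    k = len(challenges)
    s = []
    for i in range(1 << k):
        c = Fraction(1)
        for j, x in enumerate(challenges):
            if (i >> (k - 1 - j)) & 1:
                c *= x        # bit set: multiply by x_j
            else:
                c /= x        # bit clear: multiply by x_j^-1
        s.append(c)
    return s

s = unrolled_coefficients([Fraction(2), Fraction(3)])
print([str(v) for v in s])   # → ['1/6', '3/2', '2/3', '6']
```

Note the mirror structure (s_i * s_{n-1-i} = 1), which is what lets an implementation fold all rounds into one batched check instead of verifying round by round.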
-
sarang
Yep, as it is now
-
suyash67
Correct!
-
sarang
I'm a bit surprised to see the verifier numbers that you did, based on some back-of-the-envelope estimates I initially ran on BP+ when the preprint first came out
-
sarang
Another aspect to keep in mind is what the actual diffs end up being to code
-
sarang
If significant, it's almost certain that an external full audit would be required, and incur significant additional expense
-
sarang
I actually already have some WIP code for BP+ that I started initially, but hadn't prioritized at the time
-
moneromooo
And we don't want significant unless really needed :)
-
suyash67
The slower numbers for verification (in spite of unrolling the recursion) are because we used the cryptoxide library in Rust and we do not specifically optimized multiscalar multiplications. We just wanted a first-hand comparison between times for BP and BP+.
-
suyash67
*specifically use optimized multiscalar multiplications.
-
omershlo
Hi everyone. Thanks for taking the time to consider our BP+ proposal! Reading through the logs I see some concerns were raised; let me try to classify them:
-
sarang
suyash67: op counts will provide a better idea of the change
-
UkoeHB_
suyash67: the proposal says this "This means that each transaction is accompanied by **2.5** range proofs (Bulletproofs as of now).
-
UkoeHB_
As Monero uses the Ed25519 curve, the size of a single Bulletproofs proof is **676** bytes [1].
-
UkoeHB_
This could be reduced by **96** bytes by using Bulletproofs+.
-
UkoeHB_
In effect, about **240** bytes per transaction could be saved. " However, in Monero the bulletproofs for all outputs are aggregated. There is only one Bulletproof, so Bulletproofs+ can only save 96 bytes per transaction
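A back-of-the-envelope version of the size arithmetic in this exchange (assumptions: 32-byte group/scalar elements, 64-bit ranges, element counts as given in the BP and BP+ papers; varint encoding overhead ignored):

```python
# Rough proof-size arithmetic for aggregated range proofs.
from math import log2

def bp_size(m):
    # m = number of aggregated commitments (padded to a power of two)
    lr = 2 * int(log2(64 * m))        # L_i, R_i from the inner-product argument
    return 32 * (lr + 4 + 5)          # + points A,S,T1,T2 and scalars taux,mu,t,a,b

def bpplus_size(m):
    lr = 2 * int(log2(64 * m))
    return 32 * (lr + 3 + 3)          # + points A,A1,B and scalars r1,s1,d1

print(bp_size(1), bp_size(1) - bpplus_size(1))
# → 672 96  (the 676 bytes quoted above includes a few bytes of varint overhead)
```

Since the aggregated proof per transaction is a single proof, the saving is a flat 96 bytes per transaction, as UkoeHB_ notes, not 96 bytes per output.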
-
sarang
since they're independent of implementation details
-
sarang
UkoeHB_: good point, I wanted to bring up some consensus stuff
-
sarang
Namely, that proof aggregation (with padding) is required, and that there's a limit of 16 commitments aggregated
-
sarang
suyash67: namely, our use of a simplified Pippenger algorithm means the timing reduction isn't quite linear, of course
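A rough, hypothetical cost model of why bucket-method multiexp scales sub-linearly (assumptions: textbook Pippenger bucketing, 253-bit scalars, a heuristic window choice; Monero's actual implementation differs in detail):

```python
# Estimate point additions for an n-term multiexponentiation.
from math import ceil, log2

def pippenger_adds(n, bits=253):
    c = max(3, int(log2(n)) - 2)           # heuristic window size
    windows = ceil(bits / c)
    # per window: n bucket insertions plus ~2*2^c additions to fold the buckets
    return windows * (n + 2 * (1 << c))

ratio = pippenger_adds(128) / pippenger_adds(64)
print(round(ratio, 2))  # → 1.59: doubling the terms costs well under 2x
```

This is why removing a fixed number of points from the verification equation shrinks verification time by less than the naive per-point estimate suggests.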
-
sarang
omershlo: hello!
-
sarang
suyash67 omershlo: I assume your timeline accounts for implementing full batching, even among proofs with different aggregation sizes?
-
sarang
Our current implementation does this, and I consider it a requirement
-
sarang
i.e. I wouldn't support any new code that did not perform batching
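A sketch of the batching idea sarang insists on, in a toy stand-in group (integers mod a prime rather than curve points; all names here are hypothetical): many individual equality checks are folded with random weights, so one combined check replaces them all, and a single bad proof makes the fold fail.

```python
# Random-linear-combination batch verification (toy group).
import secrets

P = 2**255 - 19   # toy modulus standing in for the group order

def batch_verify(checks):
    # Each check is a pair (lhs, rhs) that should satisfy lhs == rhs (mod P).
    # Fold all checks with independent random nonzero weights: if any single
    # check fails, the weighted sum is nonzero except with negligible probability.
    acc = 0
    for lhs, rhs in checks:
        w = secrets.randbelow(P - 1) + 1
        acc = (acc + w * (lhs - rhs)) % P
    return acc == 0

good = [(7 * 9 % P, 63), (5, 5)]
assert batch_verify(good)
assert not batch_verify(good + [(1, 2)])   # one bad check spoils the batch
```

In the real verifier the same trick is applied to curve-point equations, which is what lets proofs of different aggregation sizes share one multiexponentiation.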
-
suyash67
UkoeHB_ thanks! I was under the impression that outputs are aggregated only when all outputs in a transaction are owned by a single owner. Thanks for the clarification. I had added a footnote in the proposal about aggregated range proofs for outputs.
-
sarang
suyash67: all outputs are aggregated, to avoid leaking information about common ownership
-
sarang
We perform power-of-2 padding, of course
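The padding rule just described, as a sketch (assuming the cap of 16 aggregated commitments mentioned earlier in the discussion): the number of proved commitments is rounded up to the next power of two.

```python
# Pad the aggregation size to the next power of two, capped at 16.
def padded_size(n_outputs, cap=16):
    assert 1 <= n_outputs <= cap
    m = 1
    while m < n_outputs:
        m *= 2
    return m

print([padded_size(n) for n in (1, 2, 3, 5, 16)])  # → [1, 2, 4, 8, 16]
```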
-
suyash67
Yes, we aim for implementing aggregation and batching according to whatever is currently done.
-
sarang
Can you speak to (a) the timeline in more detail; and (b) the intent for code reuse and minimizing changes?
-
sarang
omershlo: looks like you disconnected there
-
omershlo
yep sorry, back now.
-
omershlo
Mapping the concerns about the BP+ project: (1) BP+ cost effectiveness compared to BP is unclear; (2) we feel unease with outsiders writing cryptography code; (3) Sarang can do it better (please let me know if I missed anything). I completely agree with the above statements. That said: THIS is the way to get talented researchers involved in the
-
omershlo
community and in the weeds. Knowing Suyash's skill and dedication, this project might end up much faster than 3 months. However, we didn't see any reason to rush it; after all, this code is a cryptographic component of critical low-level code.
-
omershlo
code reuse means that we want to keep the same interface as much as possible, use the same functions used in BP as much as possible (to make audit and review easier). Ideally, keeping similar naming/naming conventions and code structure
-
» moneromooo likes minimal diffs
-
omershlo
To be more specific on the timeline: we figured that a quick and dirty PoC should be the first step. Our goal is to get to that point as fast as we can. We didn't go into week-by-week resolution, but we assumed this can be done within roughly one month, including getting familiar with the existing code (it's been two years since I last
-
omershlo
read the code). After reaching this milestone we will dedicate time to making the code "production ready": working on readability, structure, security (removing side channels, for example), putting emphasis on code reuse, and perhaps some low-hanging optimisations. This is where we will do "deep integration". We again didn't go into weekly
-
omershlo
resolution but assumed that one month will be enough (this step can go on forever, but we want to put a hard stop on it). The final phase is for measurements, profiling, unit testing, and fine-grained optimisations. I believe we will be able to produce a report with results, maybe even recommendations for some other parts of the code. We are not
-
omershlo
familiar enough with the benchmarking, testing, and unit-testing frameworks that are already in place - we might be overshooting this step by giving it one month, but we consider it an important part of the project and prefer to overshoot rather than undershoot
-
sarang
Are you referring to trying to make the code constant-time when you mentioned side channels?
-
sarang
In general that has not been a design goal for many algorithms
-
sarang
To what extent do you think code reuse is important for this? The current implementation is not really written with reuse in mind since its application is so specific
-
sarang
For benchmarking, I think it would be helpful to examine the unit and performance test framework
-
sarang
Those are already built into the CI pipeline
-
hyc
this is txn construction code, yes? if someone has sufficient access to your machine to mount a side-channel attack while you're creating a txn
-
hyc
you've got much bigger problems...
-
sarang
Construction and verification
-
sarang
There's plenty that isn't constant time
-
hyc
but it would take repetitive operations on the same data to be able to turn any of that into an oracle
-
UkoeHB_
sarang what magnitude of changes would you expect to go Bulletproofs -> Bulletproofs+?
-
omershlo
I understand that the existing code was not meant to be reused. However, can we agree that since we replace a range proof with a range proof, it makes sense not to change the function signature by much? And if the signature is the same, the data structures must also be similar? And so on
-
sarang
Granted, I did only initial noodling on what it might take to modify the current implementation for BP+, but provided the goal wasn't to build something that's a more general framework, it did not seem like too many changes TBH
-
sarang
But again, these are only my initial thoughts
-
sarang
omershlo: right, the existing BP code needs to stick around for verification of course, and it makes sense that any changes for BP+ would maintain similar structure
-
sarang
and for testing purposes the existing BP generation code also must stay
-
sarang
(otherwise unit/perf tests don't work)
-
omershlo
Side channels in large crypto-systems are sometimes overlooked. Even if not a design goal of the system, I personally make a best effort to eliminate them from the components I work on. A system is only as good as its weakest link. hyc: what other problems might an attacker with side-channel access to your machine cause?
-
hyc
they can probably read your keystrokes or your copy/paste buffer directly
-
hyc
many easier/lower-effort avenues of attack that would be taken in preference to a side channel yielding maybe 1 bit per minute or somesuch
-
omershlo
you make a lot of assumptions here about the attacker. What if the attacker can only measure time/energy on the machine ?
-
omershlo
one bit per minute means you can extract a private key in under several hours. that's not bad at all
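The arithmetic behind that last estimate (assumption: roughly 252 usable secret scalar bits on Ed25519, a steady leak of 1 bit per minute):

```python
# Time to exfiltrate a full secret scalar at 1 bit per minute.
secret_bits = 252
hours = secret_bits / 60
print(round(hours, 1))  # → 4.2: a full key in a little over four hours
```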