-
gingeropolous
wow hot dog sarang xmrblocks is using 75% of the memory on that thing!
-
gingeropolous
i love seeing resources utilized
-
sarang
?
-
sarang
Oh on your speedyfast machine?
-
sarang
Yes! I'm running transaction stats
-
gingeropolous
i *don't* think i updated onion-explorer on that one, but mainnet hasn't forked so i think ure good
-
Inge-
how much memory is there on the thing?
-
gingeropolous
64 gigs
-
sarang
Good day, all
-
sarang
Working on some MCCVR preparation, as well as Triptych/Arcturus modifications to support (n,k)-Gray codes
-
sarang
The latter is a fun challenge :D
-
xxx1
hi i am reading the btc-xmr.pdf on atomic swaps. how do u make the monero address with only half the private key?
-
gingeropolous
-
monerobux
[WIKIPEDIA] Gray code | "The reflected binary code (RBC), also known just as reflected binary (RB) or Gray code after Frank Gray, is an ordering of the binary numeral system such that two successive values differ in only one bit (binary digit). For example, the representation of the decimal value "1" in binary would normally..."
-
sarang
Yep!
-
sarang
A generalized-base version
-
gingeropolous
nothing like a 74-year-old optimization!
-
sarang
Heh
-
sarang
It's a little more involved =p
-
gingeropolous
i mean, i know its not... yeah
-
gingeropolous
but just seeing the patent date
-
sarang
Has to do with using a Gray code to iteratively compute coefficients more efficiently than we are now, for both Triptych and Arcturus (would also apply to Lelantus, FWIW)
-
sarang
xxx1: the public key construction is homomorphic in the private keys
-
xxx1
this means you can add the public keys of each private key to make a public key that works for the "sum" of the private keys?
-
sarang
If you form a composite private key via a sum, you form the corresponding public key via a sum too
-
sarang
Now, whether or not it's a good idea to use a sum for the composite depends on the use case
-
xxx1
ohh ok, this is cool! tyvm sarang!
-
sarang
There's been a lot of good research on safe ways to robustly construct such composite keys, so caveat emptor
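(A toy sketch of the homomorphism sarang describes, written multiplicatively in an insecure prime-field group rather than on ed25519; the names `pubkey` and `composite_priv` are illustrative, not from any Monero code.)

```python
# Toy model of the key homomorphism, written multiplicatively:
# here a "public key" is g^a mod p (ed25519 writes this additively
# as a*G, but the algebra is identical).
p = 2**127 - 1                      # toy prime modulus, NOT secure
g = 5                               # toy generator

def pubkey(priv):
    return pow(g, priv, p)

a, b = 123456789, 987654321         # the two parties' private keys
composite_priv = a + b              # composite private key via a sum

# The composite public key is just the "sum" (product, in this
# notation) of the individual public keys:
assert pubkey(composite_priv) == pubkey(a) * pubkey(b) % p
```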
-
sarang
Gray code proof-of-concept completed... testing...
-
sarang
failed :(
-
moneromooo
Maybe one bit is off...
-
moneromooo
I'll... find the door
-
sarang
Heh that's probably what it is
-
sarang
The Python test code builds a Gray iterator and uses that to mess with coefficients
-
sarang
But the prover works a little differently, and that's likely where the problem is
-
sarang
it's also a little wonky since you don't actually care what the Gray code is... you care about which digit changes, and to what
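(A minimal Python sketch of the kind of iterator being discussed, using the standard modular reflected construction for base-n Gray codes over k digits; this is illustrative, not sarang's actual research code.)

```python
def gray(i, n, k):
    """The i-th word of a modular (n, k)-Gray code: k base-n digits,
    where successive words differ in exactly one digit."""
    b = [(i // n**j) % n for j in range(k)] + [0]
    return [(b[j] - b[j + 1]) % n for j in range(k)]

def gray_changes(n, k):
    """Yield (digit_index, old_value, new_value) for each step of the
    code; the caller never needs the full word, only the change.
    The initial word is always [0]*k."""
    prev = gray(0, n, k)            # always the all-zero word
    for i in range(1, n**k):
        cur = gray(i, n, k)
        j = next(d for d in range(k) if cur[d] != prev[d])
        yield j, prev[j], cur[j]
        prev = cur
```

Replaying the yielded changes against an all-zero word reconstructs every code word, which is all a verifier walking the coefficient updates needs.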
-
sarang
Huzzah, it works now!
-
sarang
It was a problem with this multidimensional indexing that Triptych uses
-
sarang
It'll be even better in Arcturus, which adds another dimension to the indexing...
-
sarang
0_0
-
sarang
^ cargodog[m]
-
gingeropolous
moar dimensions!
-
sarang
Will push to my research repo after some code cleanup
-
sarang
-
sarang
-
sarang
Doesn't use batch inversion yet (need to write additional code because of the tensor structure)
-
sarang
^ cargodog[m]
-
cargodog[m]
Oooh great work, sarang!
-
cargodog[m]
I keep dividing my time between code and paper, and neither are progressing as fast as I would like :(
-
sarang
For maximal efficiency, you'll want to collapse the `f` tensor into a list/array/whatever and do a single batch inversion
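(The batch inversion referred to here is Montgomery's trick: one field inversion plus 3·(m−1) multiplications for m elements. A sketch over the ed25519 scalar field, with the tensor-flattening bookkeeping omitted.)

```python
# ed25519 group order (the scalar field modulus)
l = 2**252 + 27742317777372353535851937790883648493

def batch_invert(xs):
    """Invert every scalar in xs with a single field inversion
    (Montgomery's trick)."""
    if not xs:
        return []
    # prefix products: partial[i] = xs[0] * ... * xs[i]
    partial, acc = [], 1
    for x in xs:
        acc = acc * x % l
        partial.append(acc)
    inv = pow(acc, l - 2, l)        # the one real inversion (Fermat)
    out = [0] * len(xs)
    for i in range(len(xs) - 1, 0, -1):
        out[i] = inv * partial[i - 1] % l   # 1/xs[i]
        inv = inv * xs[i] % l               # peel off xs[i]
    out[0] = inv
    return out
```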
-
sarang
But the only thing this changes is where my code calls `f[u][j][i].invert()`-type operations
-
sarang
Another inefficiency is that defining `sigma` requires Gray codes for only the signing indices, and right now I do a silly thing and iterate until I reach that point
-
sarang
But later in the prover, you need all the Gray codes anyway... so you could compute them all from the start if you really wanted to
-
sarang
But prover efficiency isn't a big deal IMO anyway; not at that level, at least
-
sarang
The verifier still computes them all iteratively just once
-
sarang
Hopefully the way I structured the iterator makes sense
-
cargodog[m]
I used to feel like prover efficiency didn't matter, until my proofs started taking hours to generate :|
-
sarang
It can either return a single full Gray code, or else return only the single change from the last code
-
sarang
Well, you can have the prover compute all the gray codes at once, and pay for it in memory
-
sarang
but then you have to compute the differences later
-
sarang
You only have to do the single Gray codes once for each signing index, fortunately
-
cargodog[m]
Indeed. Although, I think for most realistic ring sizes it shouldn't be necessary
-
sarang
Yeah, seems silly to optimize for that initially
-
sarang
Anyway, hope this is useful to you
-
cargodog[m]
It certainly is. This is great work!
-
cargodog[m]
Thanks for sharing
-
sarang
np
-
sarang
It'll be very interesting to see your results using the dalek ristretto library
-
sarang
I think the method of iteratively tracking only the digit changes is optimal for efficiency
-
sarang
since the verifier never actually cares what the full Gray codes are (except for the initial one, which is always fixed at 00...0)
-
cargodog[m]
Yeah, that's how I see it too
-
sarang
Maybe there's a nice convenient way to generate a specific value without the iteration, but I don't really feel like working it out...
-
cargodog[m]
As I think about this more, I realize that more efficient curve libraries (i.e. a lower relative cost of EC ops vs scalar ops) will feel this improvement sooner, since the scalar operations come to dominate the EC ops more quickly as batch sizes grow
-
cargodog[m]
I believe there is a way, but I don't think it's very useful in our scenario
-
sarang
You're using multiscalar multiplication for the verifier EC operation, right?
-
cargodog[m]
I am
-
sarang
ok cool
-
cargodog[m]
`vartime_multiscalar_mul` in the verifier
-
sarang
yup yup
-
sarang
That's a huge savings (we use it too)
-
sarang
or rather, we use a similar approach under the hood
-
cargodog[m]
It's a brilliant bit of math :)
-
sarang
Do you happen to have comparative numbers for scalar-scalar multiplication versus scalar squaring?
-
sarang
IIRC the dalek library computes squaring differently
-
sarang
our library does not
-
sarang
This comes up a few times in verification, and is used frequently in the scalar inversion algorithm
-
sarang
(as part of the addition chain)
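(To illustrate why squaring cost matters here: naive Fermat inversion of a scalar performs a squaring on every bit of the exponent but a multiplication only on the set bits, so a dedicated squaring routine pays off. This counting sketch is illustrative; dalek's actual inversion uses a hand-tuned addition chain.)

```python
# ed25519 group order
l = 2**252 + 27742317777372353535851937790883648493

def invert(x):
    """Fermat inversion x^(l-2) mod l via square-and-multiply,
    counting the squarings and multiplications separately."""
    sq = mul = 0
    result, base, e = 1, x, l - 2
    while e:
        if e & 1:
            result = result * base % l
            mul += 1
        base = base * base % l      # one squaring per exponent bit
        sq += 1
        e >>= 1
    return result, sq, mul

inv, sq, mul = invert(12345)
assert inv * 12345 % l == 1
# sq equals the bit length of l-2 (253); multiplications occur only
# on set bits, and a good addition chain reduces them further.
```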
-
cargodog[m]
I don't. It's on my (ever-growing) todo list to run those numbers, but I haven't done it yet
-
sarang
OK; I assumed there was a benchmark for it, but I don't have an installation for it right now
-
cargodog[m]
Which library are you using?
-
cargodog[m]
I think there is an existing benchmark for that... let me dig
-
sarang
For C++, the Monero codebase
-
sarang
For Python, I wrote a silly inefficient ed25519 library that makes prototyping easy
-
cargodog[m]
Makes sense :)
-
sarang
The Python code isn't remotely suitable for benchmarking
-
sarang
But it's great for getting the algebra to work, and for communicating algorithms
-
cargodog[m]
Indeed. It's a great tool for that job
-
cargodog[m]
-
cargodog[m]
I still haven't found one specifically for that though
-
sarang
no worries; was just an idle curiosity
-
cargodog[m]
I'm not seeing one
-
gingeropolous
-
kenshamir[m]
Not quite sure what the marginal times were referring to though
-
kenshamir[m]
So the numbers seem to be within the ballpark of Groth16, with only the DL assumption.
-
kenshamir[m]
That’s quite nice. Block verification time seems to be an extra 1 ms per Sapling proof, i.e. per 2-input/2-output transaction in the block; didn’t quite understand that part
-
kenshamir[m]
-
kenshamir[m]
Time stamp: 34:57
-
kenshamir[m]
There were also questions at the end regarding Sean’s presentation
-
sarang
If only it were possible to use some kind of public project management site to list details
-
sarang
some kind of hub that uses git
-
sarang
So transaction sizes will increase... adding at least 1 kB to each
-
sarang
Since those numbers would be for proofs only, I assume?
-
sarang
(obviously the Groth numbers do not include complete tx data)
-
sarang
I still really don't understand their reluctance to use their usual open GitHub-based discussion for this, which is usually quite good and fantastically detailed
-
sarang
That plus the license change makes me (cynically) think it's part of a broader strategy to restrict use of this and prevent scooping
-
kenshamir[m]
<sarang "Since those numbers would be for"> Yeah, this is what I gathered. Proofs will increase by 1300 bytes over Groth16
-
kenshamir[m]
<sarang "I still really don't understand "> Yeah, it’s quite tedious to be drip-fed information
-
kenshamir[m]
<sarang "That plus the license change mak"> Yeah, but I think they’ve given enough information on how to implement it, if one were determined
-
kenshamir[m]
Except for the hash function: I think if their Pedersen hash comparison was based on current state-of-the-art PLONK implementations, then this is a big win by itself.
-
kenshamir[m]
Since currently PLONK uses ~256 constraints to do a simple Pedersen hash, P = aG + bH.
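(The relation in question, sketched multiplicatively in a toy group; the constraint count is about proving this relation inside a PLONK circuit, not computing it natively. Toy parameters only, not a real instantiation.)

```python
# Pedersen "hash"/commitment P = aG + bH, written multiplicatively
# as P = g^a * h^b in an insecure toy group.
p = 2**127 - 1          # toy prime modulus, NOT cryptographically suitable
g, h = 5, 7             # toy "generators" (assumed independent)

def pedersen(a, b):
    return pow(g, a, p) * pow(h, b, p) % p

# Homomorphic: commitments to (a1, b1) and (a2, b2) combine into a
# commitment to (a1 + a2, b1 + b2).
assert pedersen(1, 2) * pedersen(3, 4) % p == pedersen(4, 6)
```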
-
kenshamir[m]
I also think that this system might be applicable to Monero, as PLONK also has quite efficient range proofs
-
kenshamir[m]
I think some of their ideas will be useable on Monero, but will have to wait again to hear more information
-
sarang
Yeah, until there's actual concrete information, it's all speculation
-
sarang
Perhaps informed speculation, but speculation nonetheless
-
sarang
and it's too easy for that to bleed over into marketing, which isn't useful
-
kenshamir[m]
Yeah speculation mixed with my hopes and dreams
-
» Inge- makes an ERC20 token
-
sarang
Oh totally; I'd love to get fast, small, trustless transactions
-
sarang
But I've learned to treat all press releases with extreme suspicion
-
sarang
and all technical claims with a wait-and-see attitude
-
sarang
The whole point of math and code is that you shouldn't need to take anyone's word for it
-
sarang
I hope it all works out as they claim
-
kenshamir[m]
<sarang "The whole point of math and code"> Very true
-
sarang
But I'm not going to assume they've done it