11:03:14 XMR network hashrate 60 : 2,001.841 MH/s
11:03:20 last 60 blocks ^
11:03:27 Did someone hack Azure again?
11:09:14 so this Azure hacking has been going on since April according to their blog post
11:09:21 and they still don't properly monitor it? lol
11:12:12 what blog post?
11:12:36 oh my... XMR network hashrate 60 : 2,115.315 MH/s
11:13:16 https://azure.microsoft.com/en-us/blog/detect-largescale-cryptocurrency-mining-attack-against-kubernetes-clusters/
11:13:24 https://www.microsoft.com/security/blog/2020/06/10/misconfigured-kubeflow-workloads-are-a-security-risk/
11:14:42 on the other hand, there are only 10 unknown in the last 100 blocks and pools don't show any increased hashrate
11:14:50 maybe just a streak of good luck?
11:18:37 Damn, lazy fuckers downloaded xmrig straight from github: https://www.microsoft.com/security/blog/wp-content/uploads/2020/06/Misconfigured-Kubeflow-1.png
11:28:52 https://news.ycombinator.com/item?id=23524496
11:29:26 Monero PoW on Hacker News
11:35:04 sort of funny how people discuss the algo without considering the context it's deployed in (though I suppose that happens a lot in life generally)
11:39:01 the only meaningful comment there is from thesz, but he's totally wrong in his estimates
11:40:17 all the optimizations he mentions fall within the 2-3x possible efficiency speedup
14:39:34 meh. once he said it was unnecessarily complicated, it was clear he doesn't know what he's talking about
14:40:11 e.g.: using fixed point to emulate floating point is ridiculously slow
14:42:23 but most of all, pointing to optimizations of a braindead simple algo (FFT) as proof that you can accelerate an ASIC with a codesign tool is utterly missing the point
14:43:55 just as an aside, when I worked at JPL, our hardware lab had a couple decades of experience building custom floating point array processors for FFTs
14:44:29 in the course of the time I worked there (3 years), all of that got retired in favor of massively parallel supercomputers
14:44:44 Intel i860 based, SPARC based, many varieties
14:45:19 I worked on the i860 compiler and optimizers that made a lot of that possible...
14:46:44 fundamentally it's a memory-hard problem; it's easy to pipeline the CPUs so they're 100% busy, aside from waiting for memory stalls
14:48:49 restated - the actual computation isn't the slow part
15:44:31 hyc we should start a counter for how many times "RandomX is dead" is said
15:44:40 like they have a counter for "Bitcoin is dead"
15:48:13 https://99bitcoins.com/bitcoin-obituaries
16:57:35 lol
17:58:08 this guy now claims 10x would be possible https://news.ycombinator.com/item?id=23530196
17:58:18 sounds more realistic already compared to his 100x in the first comment
18:00:17 still nonsense to compare any fixed algorithm to a randomly generated one
18:00:50 FFT requires no dynamic scheduling at all
18:06:23 except RandomX doesn't need "more shifters than the CPU", the instruction mix already matches common CPUs
18:07:08 wider issue: not possible because of instruction dependencies, only OoO would work
18:07:12 nor can you do ahead-of-time preprocessing
18:07:36 in parallel, since every instance starts from a different nonce. the scheduling demands of each parallel thread will clash.
18:08:13 IIRC the simulations showed that VLIW over 4-wide had little to no performance gain
18:11:49 two of these guys claimed they eval'd it for their work. sounds like their employers got a bad deal.
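To illustrate the 14:40:11 point about emulating floating point with integer arithmetic, here is a minimal soft-float sketch (not from the chat; it handles normal numbers only, truncates instead of doing IEEE round-to-nearest, ignores NaN/Inf/subnormals and overflow, and assumes a compiler with __int128). Even this stripped-down version spends dozens of integer operations on one double-precision multiply that hardware retires as a single instruction:

```cpp
// Simplified soft-float double multiply: normals only, truncation, no special cases.
#include <cstdint>
#include <cstring>
#include <cstdio>

static double softmul(double a, double b) {
    uint64_t ia, ib;
    std::memcpy(&ia, &a, 8);
    std::memcpy(&ib, &b, 8);

    uint64_t sign = (ia ^ ib) & (1ULL << 63);
    int64_t  ea   = (ia >> 52) & 0x7FF;                          // biased exponents
    int64_t  eb   = (ib >> 52) & 0x7FF;
    uint64_t ma   = (ia & 0xFFFFFFFFFFFFFULL) | (1ULL << 52);    // restore implicit leading 1
    uint64_t mb   = (ib & 0xFFFFFFFFFFFFFULL) | (1ULL << 52);

    // 53x53-bit product via 128-bit arithmetic (GCC/Clang extension)
    unsigned __int128 prod = (unsigned __int128)ma * mb;
    int64_t exp = ea + eb - 1023;

    // normalize: the product lies in [2^104, 2^106)
    if (prod >> 105) { prod >>= 1; exp += 1; }
    uint64_t mant = (uint64_t)(prod >> 52) & 0xFFFFFFFFFFFFFULL;  // drop implicit bit, truncate

    uint64_t out = sign | ((uint64_t)exp << 52) | mant;
    double r;
    std::memcpy(&r, &out, 8);
    return r;
}

int main() {
    printf("%.17g vs %.17g\n", softmul(1.5, 2.25), 1.5 * 2.25);
}
```

RandomX mixes double-precision multiplies, adds, divides and square roots into every generated program, so replacing the FPU with this kind of integer emulation multiplies the work instead of saving any.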
18:27:02 I think one of the guys was from X41
18:27:10 the other one, I'm not sure
18:27:30 maybe TrailOfBits
18:29:14 So he has secret knowledge of how to make CPUs 10 times faster?
18:30:40 My CUDA code has a VLIW interpreter. I don't remember the exact numbers, but it achieves around 3 RandomX instructions in a single VLIW instruction
18:31:04 Memory dependencies and branches make it hard to go wider
18:31:13 considering we paid X41 & ToB to report any potential flaws, and they didn't include these kinds of comments in their final reports, I'd say if that were true then someone over there really f#cked up
18:31:42 Demand money back?
18:31:47 i.e. withholding conclusions that we paid for
18:32:07 but I'd want to know for certain that those are their employers
18:33:59 this seems to be his "employer", or at least he checked in some PoW-related code to one of their repos: https://github.com/hexresearch
18:35:19 https://www.linkedin.com/in/zefirov-serguey-81aa1a8/
18:35:23 I think "q3k" is from X41
18:36:14 Found "thesz" after some googling
18:37:36 would be great to ditch the term "ASIC resistance" and replace it with "ASIC equivalence"
18:37:43 yes, q3k is listed in the X41 audit
18:37:47 yikes lol
18:40:05 q3k just repeated their VLIW "issue" from the audit report, no?
18:40:35 hm, could be
18:46:50 I don't see q3k in the X41 report
18:47:46 he wrote the hardware part
18:48:20 Serge Bazanski
18:49:54 ah I see
18:50:26 in that case fine, already reported and dismissed
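To illustrate why the VLIW interpreter mentioned at 18:30:40 plateaus around 3 instructions per bundle, here is a toy in-order bundle packer (an assumption-laden sketch, not the actual CUDA code): it packs a random stream of register-to-register ops into bundles and closes each bundle at the first register dependency. With 8 registers (like RandomX's r0-r7) the average fill lands in the 2-3 range even before memory latency and branches are modeled:

```cpp
// Toy VLIW bundle packer over a random "RandomX-like" register-to-register stream.
#include <cstdio>
#include <cstdlib>
#include <vector>

struct Insn { int dst, src; };  // models "dst = op(dst, src)"

int main() {
    const int kRegs  = 8;       // 8 integer registers, as in RandomX
    const int kInsns = 100000;  // length of the synthetic instruction stream
    const int kWidth = 8;       // maximum VLIW bundle width
    srand(1);

    std::vector<Insn> prog(kInsns);
    for (auto& i : prog) { i.dst = rand() % kRegs; i.src = rand() % kRegs; }

    long long bundles = 0, issued = 0;
    size_t pc = 0;
    while (pc < prog.size()) {
        std::vector<int> written;   // registers written earlier in this bundle
        int slots = 0;
        while (pc < prog.size() && slots < kWidth) {
            const Insn& in = prog[pc];
            bool conflict = false;
            for (int r : written)
                if (r == in.dst || r == in.src) { conflict = true; break; }
            if (conflict) break;    // in-order packing: stop at the first dependent instruction
            written.push_back(in.dst);
            ++slots; ++pc; ++issued;
        }
        ++bundles;
    }
    printf("average instructions per bundle: %.2f\n", (double)issued / bundles);
}
```

Real RandomX programs add loads, stores and branches on top of this, which only tightens the dependency structure; that matches the 18:08:13 observation that simulated VLIW wider than 4 issue slots gained little to nothing.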