01:24:32 so it distributed the hashrate?
01:25:01 there's no reason to assume a single team attacked all of the supercomputers
01:25:20 but sure, maybe they distributed their hashrate to make it less noticeable
01:25:26 that would have been the smart thing to do
01:26:32 huh. were different exploits used? i thought i read the article... perhaps ill read again
01:28:12 "evidence like similar malware file names and network indicators suggests this might be the same threat actor."
01:28:16 this wasn't a professional botnet operator. it sounds more like some friend-of-a-friend copied down passwords to shared ssh accounts
01:29:13 and they probably knew they would be detected soon
01:30:03 wonder if we saw any reorgs during this period
01:30:16 to the noncesense lab
01:30:23 heh
15:16:56 UpMem spells the death of memory-hard algorithms https://www.upmem.com/technology/
15:17:09 they just emailed me, with their 2020 product brochure
15:17:30 8GB DIMM with 128 embedded CPU cores @ 500MHz each
15:18:00 would this also be interesting for randomx?
15:18:22 only 32bit CPUs, not great for number crunching, but will eat up data-intensive algos
15:19:05 I think RandomX could be ported to it, as an experiment. no idea what to expect.
15:19:26 what even is this?
15:19:31 each CPU can only address 64MB of memory, so we would have to subdivide the argon2 cache even further
15:19:32 C programmable fpga?
15:19:34 lol
15:19:49 custom CPU cores co-resident with RAM
15:20:24 I think it's more applicable to database/search workloads
15:20:46 e.g. my day job ;)
15:21:03 so you would put one of those into a server
15:21:18 but it would probably eat ProgPow ...
15:22:07 yeah they have a few sample system configs in the PDF
15:22:28 intel server with 20 of these modules installed
15:22:44 would have 2.56TB/s bandwidth for their processing algos
15:23:29 moving your computes out to where RAM lives is an excellent approach
15:23:53 for things that parallelize well, like GPU-friendly algos, this will devour them
15:32:38 would be interesting to see hashrate comparisons for randomx and progpow and other GPU algos
15:35:22 but yea probably more interesting for non mining applications
15:37:10 only 32bit CPUs, not great for number crunching, but will eat up data-intensive algos >>> sure, now. Surely as time progresses they'll get to 64bit and increase the 64 MB memory access etc
15:38:14 yeah, I'd expect so
15:46:10 right, so its only a matter of time ... well, does this pose the same kind of threat as ASICs?
15:46:46 but if thats the future of compute, then everyone will be doing it
15:46:58 yeah it doesn't seem to have the exclusivity problem
15:47:46 but it seems these will only drop in to server DIMM slots (it's compatible with RDIMMs)
18:27:03 Hello, im the op of xmrpow.de. I found out that other pools like supportxmr etc are having some small advantage concerning getting new blocks. For example: supportxmr is able to send jobs to its miners 2 sec earlier than xmrpow.de does. Although im closer to my own server when i tested that. I have seen worse but it's still improvable. Therefore I asked what supportxmr admin does to achieve that. He told me that he is operatin
18:27:04 f nodes and these nodes do have a special messaging queue which they use to communicate with each other. Because we are a small pool we can not afford to host monerod in all regions of the world. Do you have some idea how I could solve the problem? Are there some special monerod optimizations?
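A quick sanity check on the UpMem figures quoted above, treating the brochure numbers from the chat (20 modules per server, 128 DPUs per 8GB DIMM, 2.56TB/s aggregate, 64MB addressable per core) and RandomX's published working-set sizes (256 MiB Argon2 cache, ~2080 MiB dataset) as the only inputs. This is plain arithmetic, not a performance claim:

```python
# Back-of-the-envelope numbers for the UpMem sample server config above.
# All inputs are the brochure/chat figures, not measurements.
MODULES = 20              # DIMMs in the quoted Intel server config
DPUS_PER_MODULE = 128     # embedded cores per 8GB DIMM
AGGREGATE_BW_TB_S = 2.56  # quoted aggregate internal bandwidth
DPU_BANK_MB = 64          # each core only addresses a 64MB bank

per_module_gb_s = AGGREGATE_BW_TB_S * 1000 / MODULES
per_dpu_gb_s = per_module_gb_s / DPUS_PER_MODULE
print(f"per module: {per_module_gb_s:.0f} GB/s, per DPU: {per_dpu_gb_s:.2f} GB/s")
# -> per module: 128 GB/s, per DPU: 1.00 GB/s

# RandomX working-set sizes from the RandomX spec:
CACHE_MIB = 256           # Argon2d-filled cache (light mode)
DATASET_MIB = 2080        # full dataset (fast mode)
print("64MB banks needed for the cache:  ", -(-CACHE_MIB // DPU_BANK_MB))    # 4
print("64MB banks needed for the dataset:", -(-DATASET_MIB // DPU_BANK_MB))  # 33
```

So even the light-mode cache would have to be split across at least four DPU banks, which is the "subdivide the argon2 cache even further" point made in the log.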
18:59:52 xmrpow if you can't afford nodes around the world with custom protocols for block propagation, did you at least try to have more peers connected to your node?
19:21:53 @sech1 do you know the auto value for in and out peers or is it dynamic?
19:22:14 it's 8 in 8 out by default
19:22:23 you can ramp up these numbers via command line
19:22:52 But it cant be done during operation right? Might have to restart for that...
19:23:13 in_peers is unlimited by default
19:23:15 out_peers 8
19:23:24 I think in is not limited by default.
19:23:53 Ok but if in is unlimited it cant be my problem
19:24:02 It can be done from the monerod console (out_peers/in_peers)
19:28:19 But i can't set in/out peers after launching monerod?
19:28:29 Or is there some interactive mode?
19:29:01 < moneromooo> It can be done from the monerod console (out_peers/in_peers)
19:52:28 yes, in_peers command works, and is unlimited by default
19:52:36 so that's probably not an issue here
19:53:14 maybe you could examine your current peerlist, ping each of them to tally up roundtrip latency, and disconnect from the slowest ones
19:53:39 A bit sucky for those people though.
19:54:37 why?
19:54:50 just do it for the outbound conns, then your monerod will try to find new peers
19:55:03 If people start disconnecting people with high ping, people with shitty connections will end up being unable to get onto the network.
19:55:15 Ah, that might work yes.
19:55:22 if the connection is that slow, it's probably geographically distant. chances are good there's a closer node to talk to
19:55:54 Yes, but OTOH doing this means txes will be even slower to propagate overall.
19:56:12 (since they'll go through more hops to reach everywhere)
19:56:21 (that's a conjecture)
19:57:28 @hyc im trying it
19:57:29 I guess. each hop will be faster, but the number of added hops will probably cancel that
19:57:53 Has to, since there's downtime (as sech1 said, at least the PoW check delay)
20:00:01 yeah, true
20:02:13 @moneromoo can you pipe the output of print_pl_stats into a file?
20:03:27 sry print_pl
20:03:46 Why ?
20:04:09 Oh, you're asking whether it's possible. Not without copy/paste.
20:04:20 It'll be in the log file, but with timestamp prefixes.
20:04:32 (which can be sedded out)
20:04:58 Hmm. Actually might not be in the log, I think it's a direct console write...
20:07:18 @moneromoo i want to examine my peer list
20:07:41 and copy paste does not work that well in my ssh session
20:08:46 is it normal that status does show 0 in?
20:08:55 utils/python-rpc/console.py 18081
20:09:03 Does that mean unlimited?
20:09:03 L = daemon.get_peer_list()
20:09:07 Yes.
20:09:10 ok
20:09:17 Wait
20:09:28 0 in means noone can connect to your daemon.
20:09:39 0 in the in_peers command means unlimited.
20:09:45 Or is that -1 ? I forget.
20:10:18 i mean when i call status
20:10:46 Sorry. Looks like 0 means 0. -1 means unlimited. For status, 0 means 0.
20:11:00 So your router is not set up right or similar issue.
20:11:18 oh shit. there is something rly rly wrong...
20:11:26 im in a datacenter
20:16:18 @moneromoo can p2p-bind-ip bind to localhost?
20:16:30 dont tell me that it cant :)
20:16:31 Yes. 127.0.0.1.
20:16:55 thought it might be the problem for no incoming connection
20:17:03 are you sure?
20:17:10 Fairly sure.
20:17:34 The default is 0.0.0.0 though, so unless you specifically set it, it won't be the reason.
20:18:11 i set it to 127.0.0.1. Maybe i should try default?
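Since the question at this point is whether the node accepts inbound peers at all, a minimal check can be scripted against monerod's HTTP RPC instead of eyeballing status in the console. A sketch, assuming the daemon RPC is reachable on the default 18081 port on the same host:

```python
# Read incoming/outgoing P2P connection counts from monerod's get_info endpoint.
# Assumption: the daemon RPC listens on 127.0.0.1:18081 (the default).
import json
import urllib.request

def get_info(url="http://127.0.0.1:18081/get_info"):
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)

info = get_info()
print("incoming:", info.get("incoming_connections_count"))
print("outgoing:", info.get("outgoing_connections_count"))
# 0 incoming usually means peers can't reach you: a firewall, NAT,
# or (as in this case) --p2p-bind-ip pointed at 127.0.0.1 instead of 0.0.0.0.
```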
20:18:17 Yes :D
20:18:23 :)
20:18:35 Keep the RPC one at 127.0.0.1 though (which is the default).
20:20:25 LOL
20:20:51 gosh, how come nobody on localhost is connecting to my node ...
20:22:19 sry, i know it's dumb but shit happens ;)
20:22:52 FWIW, setting binding to 127.0.0.1 when first trying something is usually prudent :)
20:23:41 And, well, wrong channel, I'm just seeing. That's really #monero stuff.
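To close the loop on the earlier suggestion to examine the peer list, measure round-trip latency, and drop the slowest outbound connections: a minimal sketch under a few assumptions (monerod's unrestricted RPC on 127.0.0.1:18081, a daemon recent enough to report a string "host" per peer, and a TCP connect time used as a crude stand-in for a real ping). It only identifies candidates; the actual dropping is still a manual step.

```python
# Rough triage of peer latency: fetch monerod's known-peer list, time a TCP
# connect to each peer's P2P port, and print the slowest ones.
# Assumptions: unrestricted RPC on 127.0.0.1:18081; peers expose a "host"
# string (older daemons only expose a packed u32 "ip", skipped here).
import json
import socket
import time
import urllib.request

RPC = "http://127.0.0.1:18081/get_peer_list"

def connect_ms(host, port, timeout=2.0):
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return (time.monotonic() - start) * 1000.0
    except OSError:
        return None  # unreachable within the timeout

with urllib.request.urlopen(RPC, timeout=10) as resp:
    peers = json.load(resp).get("white_list", [])

results = []
for p in peers[:50]:                      # don't probe the whole list
    host, port = p.get("host"), p.get("port", 18080)
    if not host:
        continue
    ms = connect_ms(str(host), port)
    if ms is not None:
        results.append((ms, f"{host}:{port}"))

for ms, peer in sorted(results, reverse=True)[:10]:
    print(f"{ms:7.1f} ms  {peer}")        # slowest first: candidates to drop
```

As noted in the log, pruning only outbound connections is the gentler variant: monerod will go look for replacement peers, while inbound peers with slow links are left alone.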