00:57:38 selsta> Yes, after a timeout. But malicious nodes don't know which hop they are so they can't say with certainty the source IP. <<< if they see tx B in their subnet, and then never see tx B in the rest of the monero network, then they can presume it came from the node connected to their node
00:57:56 because otherwise the failsafe would cause an honest node to fluff
01:02:39 i wonder if we should leverage the centralization of seed nodes to provide block lists
01:03:04 i mean, if a list of IPs *to* connect to is fine, why not a list of IPs *not to* connect to?
01:45:35 gingeropolous: I don't follow. What do you mean by subnet?
01:47:16 Let's say A, B are legit nodes, C is malicious. Tx goes A -> B -> C -> blackhole, then B has the shortest timeout timer and broadcasts it to the network
01:47:48 how does C know A is the original node for that transaction?
02:01:18 The timer is random (within a given distribution).
02:01:34 right
02:01:53 was just an example
02:02:11 I think gingeropolous' point is that if A sends a tx and is connected to only Eve's sybils, then if Eve drops it all and later gets the tx again from A, it's more likely A is the sender.
02:02:20 Which seems like a fair guess.
02:02:39 A might have been the one with the shorter timeout though. Can't know.
02:02:58 Actually, there's one way to know almost for sure.
02:03:13 If Eve's sybils have 12 incoming connections to A.
02:03:46 Then it's pretty likely A has no other connections if it doesn't accept incoming connections (many don't, and it's easy to check).
02:04:12 Defense against this would be randomizing the max number of outgoing connections, and ensuring you accept incoming connections.
03:12:55 yeah, my point was if A sends a tx, and is connected to C, and C is a sybil, C can blackhole the tx and monitor the rest of the network and deduce that A was the sender if it doesn't appear in the txpool (i.e., if it doesn't get notification of this tx again)
03:14:12 right? Or does A, the originator of the tx, fluff the tx as well after a timeout?
03:14:17 A does.
03:14:37 ok, so A can fluff after a timeout.
03:15:18 well yeah, then what you said holds true
03:16:44 gingeropolous: I don't follow. What do you mean by subnet? <<< the attacker's collection of AHPs that are highly interconnected with each other
09:39:12 something tells me these nodes are more than a mere annoyance. i mean, it's not like there are hundreds of thousands paying in monero getting stranded at payment terminals -- usually a delay of a few minutes, or worst case the full timeout, isn't the end of the world. annoying, yes, especially if it doesn't stop happening.. but go through all that work to annoy people paying XMR in situations that are mostly time critical? i mean.. it's possible, but my gut tells me it's something more
09:40:51 from the bits and pieces i've picked up in conversation here, it seems that d++ might have inadvertently magnified whatever funny business is happening here - which according to selsta (iirc) was already happening before, but has now become more noticeable. that seems like a fortuitous (for the bad actor) coincidence.
09:41:50 if i understand correctly, there is a good chance that unless this issue is mitigated, it will remain a reality at the discretion of the attacker(s)
09:43:21 i wonder why the bad nodes mirror block height (someone mentioned it yesterday). perhaps an interesting workaround could be, rather than banning nodes who do that, blacklisting them ourselves (if that's what the ban command already does, then ignore this suggestion)
10:28:11 At the very least these nodes watch transaction propagation to associate an IP with every transaction
10:28:41 This is why they try to connect to every single legit node on the network, and they connect in big numbers
12:33:13 * moneromooo constantly gets this scene from spaceballs... Just how many assholes do we have on this ship anyway ? "YO!"
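The stem/fluff failsafe discussed above can be sketched as a toy simulation. This is a minimal model, not monerod's actual Dandelion++ implementation: the node names, the exponential distribution, and the mean timeout value are illustrative assumptions. The point it demonstrates is that every honest node on the stem path runs its own random embargo timer, so a blackholing peer can only delay a transaction, and the node that eventually fluffs is not necessarily the originator.

```python
import random

# Toy model of the Dandelion++ embargo failsafe discussed above.
# Node names and the timeout distribution mean are illustrative
# assumptions, not values taken from monerod.

EMBARGO_MEAN = 30.0  # seconds; monerod's actual average may differ

def embargo_timeout(rng):
    # Each node that saw the tx in stem phase samples its own random
    # timer, so the fluffer is not necessarily the originator.
    return rng.expovariate(1.0 / EMBARGO_MEAN)

def simulate_stem(honest_path, rng):
    """honest_path: nodes that saw the tx in stem phase before the
    blackholing peer. Returns the node whose timer fires first."""
    timers = {node: embargo_timeout(rng) for node in honest_path}
    return min(timers, key=timers.get)

rng = random.Random(42)
# A relayed to B, B relayed to the attacker C, C blackholed the tx.
counts = {"A": 0, "B": 0}
for _ in range(1000):
    counts[simulate_stem(["A", "B"], rng)] += 1

# The tx is always fluffed by someone on the honest path, and with
# i.i.d. timers each honest node is equally likely to be the fluffer,
# so seeing A fluff is weak evidence that A was the source.
assert counts["A"] + counts["B"] == 1000
print(counts)
```

This matches the exchange above: C cannot conclude from the fluff alone that A originated the transaction, though blackholing combined with monitoring the rest of the network (as gingeropolous describes) narrows the guess considerably.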
12:46:08 -xmr-pr- moneromooo-monero opened pull request #6936: protocol: detect and drop asshole peers
12:46:08 -xmr-pr- > https://github.com/monero-project/monero/pull/6936
14:21:09 selsta: was it you that linked a list of evil peers yesterday? i can't seem to find it
14:21:40 https://gui.xmr.pm/files/block.txt
14:22:28 thanks
15:12:03 need that i2p built in!
15:40:31 I want it so bad
15:43:24 would everyone using i2p/tor not make a sybil attack easier? since then you can't just block an address block or asn
15:48:59 i don't think it would help the sybil, but it would mitigate: "At the very least these nodes watch transaction propagation to associate IP with every transaction"
16:00:08 moneromooo: Major asshole, reporting for duty!
16:00:24 Keep firing, assholes!
16:09:04 FYI since dsc is bringing it up, conversation about using something other than GitHub/GitLab: https://github.com/monero-project/meta/issues/522
16:09:16 Snipa: ^
16:10:05 I'm in favor of anything that isn't github...
16:11:18 Yes, I mean the youtube-dl scandal probably speaks a great deal to folk here
16:11:28 I'm tired of this conversation :P
16:11:57 Self-hosted is great, but then you're at the mercy of the upstream provider still.
16:12:02 Who's at the mercy of their upstream ISP.
16:12:06 Etc.
16:12:09 We use both Github Actions / Travis CI which both don’t get supported by Gitlab / Gitea
16:12:27 If you want to make the statement "Get out of the US", then you need to start by finding a non-US datacenter provider that matches your requirements.
16:12:59 Tbh, it's pretty trivial to rewire GA/TCI into GL CI/CD.
16:13:22 I use GL CI/CD on a daily basis for work, and it's pretty much "if you can write a shell script, you can stuff it into gitlab-ci"
16:13:25 re: CDN breaking ssh - do we actually need a CDN to frontend the repo?
16:13:30 gitlab does not support enabling CI for all users
16:13:37 we tried this already with the sites repo
16:13:45 Sure it does?
16:13:51 The issue is disabling it for some users.
16:13:52 :P
16:14:00 why does ssh work with github - they use no CDN?
16:14:10 They intercept and reroute the traffic at the CDN.
16:14:24 Like a sensible company would do, if they talked to their CDN provider.
16:14:41 Generally though, I just use a second SSH address, and tell gitlab to use the SSH address when displaying it.
16:14:48 which means ssh could be overloaded by high volume
16:15:09 Usually. Generally, you'd block SSH for anyone except those who've got access to push.
16:15:18 As you should be pulling either packages or over HTTPS.
16:15:26 Both of which get cached at the CDN layer.
16:15:45 The ssh thing could have been done, but required me to configure cloudflare stuff, and I'm not touching it because they're either a TLA in disguise, or pwned by many of those. Either way, they should be dropped by anyone who cares.
16:16:06 ssh worked well with the actual IP. It was just blocked by cloudflare.
16:17:15 so, do we just forego cloudflare, set up our own squid proxy or whatever if we need to put a cache in front?
16:17:46 Tbh, in my eyes, the argument is kind of absurd. Moving off GH would be nice, but we've also very happily integrated into GH.
16:18:19 If there's a serious want to move away from GH, move issues/bug reporting first, and figure out if there's going to be someone to run that infrastructure.
16:18:34 Step 1 is to de-couple, not look for a full blown replacement.
16:18:53 good point, that can be a major pain. we migrated openldap from our old tracker to bugzilla recently
16:19:07 and exported and re-imported all of our old bug DB, so nothing was lost
16:19:14 otherwise it would have been a non-starter
16:19:36 IIRC github had (maybe still does for now) an exporter for non-repo data.
16:20:06 I mean to selsta's point: We've got CI/CD to worry about, both of which are stated not to work with the fastest drop-in solution (self-hosted gitlab). Which means that we need to be concerned with: MRs, Tickets/Bug Reports, CI/CD systems.
16:20:28 So someone who cares about those should probably keep an eye on that, keep an updated export for the time MS goes evil (which, historically, they will).
16:20:46 All of which need to be de-linked from the way we work with GH if this is going to go forwards, because the other solutions are just going to end up with: Well, we've integrated with X, and now we're dependent on X.
16:20:55 Because that's /really/ the point of this argument I suspect.
16:21:14 MS always goes evil. Remember. Embrace. Extend. Extinguish.
16:21:29 thanks moo
16:21:34 We needed a paid plan to have CI working for all users on gitlab. Otherwise each user needed their own runner. And each user would have counted as a member of the project, and we had to pay extra for each member. I'm going by memory
16:21:39 We already have full regular backups of our repos
16:21:54 Gitlab also has full licenses available for open source projects.
16:22:07 Which I know I pointed out last time, and someone was going to look into.
16:22:15 (repos + metadata)
16:22:27 Also, as we'd want to self-host, why the hell does it matter, as you tie your runners to your self-hosted instance.
16:22:29 Yes, openldap is on a full license, free.
16:22:40 Because gitlab is still functionally centralized like GH if you use the cloud version.
16:22:44 it still requires annual renewal, but it's free.
16:22:45 yeah, i remember that snipa. no idea if somebody did look into that?
16:22:47 Who will pay for the self hosted CI?
16:22:53 Who will set it up? I will not
16:23:10 Tbh, someone's got to pay for all of this crap if we move off GH.
16:23:12 iirc rehrar said he was going to contact them at the time. rehrar, am i correct?
16:23:24 Besides, like I said, with a self-hosted GL instance, the issue is moot.
16:23:37 Your self-hosted runners tied to a self-hosted GL instance will work for all users if properly configured.
16:23:54 I really don’t like that we tried moving to Gitlab, it was kinda meh and now we are discussing this again lol
16:24:11 To be fair, the issue is not about moving to gitlab, but to gitea
16:24:15 All answers are meh.
16:24:18 We still have the gitlab instance from pony. Still running.
16:24:21 Because we're tied to github.
16:24:24 There's this cloudflare thing still though :P
16:24:32 Until we de-link from github and sort through it, it's not going to be solved.
16:25:15 selsta: we can resolve this conversation now, or wait until M$ has a reason to take down the project
16:25:57 we have https://www.backhub.co
16:26:26 in the unlikely case that the monero repo gets taken down we can import it to a different place
16:26:28 accessible only to core team?
16:27:03 in the event that the repo is taken down, we will need to set up in a different place with quite some urgency
16:27:13 whereas right now we can take our time and investigate the best options
16:27:22 Functionally, GH to self-hosted GL, without being behind CF, is the /fastest/ option.
16:27:40 nobody liked our GL though
16:27:48 pushing the repo to a different place is trivial
16:27:50 ErCiccione[irc]: this I don't recall but it's very possible it got lost in my mix
16:28:00 the issues / pr history is a bit more work but nothing unsolvable
16:28:03 Yes, but the infrastructure around it is not.
16:28:38 Which is the point I've made above. If this is a serious discussion to be had, it's time to start looking at what exists around GH that needs to be moved to a self-hosted service.
16:28:46 And work out who the hell is going to maintain those services long-term.
16:29:54 To your point: Travis CI is owned by a US conglomerate.
16:30:00 So that needs to be removed as well.
16:30:19 Purchased last year by Idera, which is a B2B parent company based out of Houston, Texas.
16:30:30 Any opinion about Gitea? The issue mentions moving specifically to that
16:30:57 Tbh, if we're going to make the move, my preference would be to stop using large, combined software packages.
16:31:03 And suck it up, use multiple smaller systems.
16:31:44 Ticketing/issue management belongs in its own system. CI/CD should be split apart and only depend on the ability to pull in from an upstream repo, etc.
16:32:05 Snipa: we want to run CI on every PR
16:32:11 ... /me muses on using NNTP as backend for bug tracker
16:32:14 Cool, Gitlab literally supports that natively.
16:32:27 Self hosted runners?
16:32:30 Yes.
16:32:46 Because when you PR to the upstream, you tie the runner to the upstream.
16:32:52 And it doesn't care any more that the child is pulling it in.
16:33:03 But you do that on self-hosted, or you don't avoid the core problems listed in the ticket.
16:33:04 i remember that being problematic at the time
16:33:15 and was one of the main reasons for moving away
16:33:20 hyc: you mean, LMDB having a bug would be news ? :P
16:33:33 :P
16:33:34 We tested it quite a lot, so i'm sure it was a problem at the time
16:33:58 Naturally, and I use gitlab in that fashion on a daily basis, so I know it works fairly decently for that.
16:34:12 But again, I'd suggest moving away from the large combined package.
16:35:31 losing github in the future would be an inconvenience. why would we go through a similar inconvenience now for a hypothetical?
16:35:48 right
16:36:10 to be a pure FOSS project bro
16:36:18 Planning around it is not the /worst/ thing.
16:36:29 Executing on it poorly is a horrid idea.
16:36:48 and no, fireice spies, I don't mean ethnically pure. You nuts.
16:37:06 Github certainly is a poor single point of failure, and the more that we opt to integrate with it, the more dependent we become on it.
16:38:13 Functionally, I don't see why we wouldn't go the kernel.org route if we /really/ wanted to go down this stream.
16:38:53 Where the master repos are behind some super lightweight system, everything else is done through sensible patches and patch management structures, most likely /not/ email knowing us, but something along those lines.
16:39:19 why make everything more inconvenient now just because maybe sometime in the future github could take down the monero repo?
16:39:35 and even for this case we have backhub like I said so no data is at risk
16:39:39 Accessibility should be taken into consideration. A lot of our contributors barely know how to use git.
16:39:46 Again, not saying we should, but planning should be a consideration.
16:39:50 one of the main reasons we wanted to stay on github is the 'discovery' factor. Everyone is on there. It's THE social media for devs nowadays.
16:39:59 Disaster planning is part of running a business for a reason.
16:40:03 ErCiccione[irc]: eh? if they don't know how to use git, they have to learn
16:40:30 although, realistically, how many people have contributed to Monero based on this discovery factor?
16:40:35 I don't think we have any way of knowing.
16:41:09 hyc: sure, but then you have to take into consideration the drop in contributors, especially for the website. Github's UI is very convenient from that point of view, but gitlab's is good as well
16:41:38 rehrar: i got some contributors to my GUI guide repository only by participating in the hacktoberfest
16:41:46 talking about last year or the one before
16:42:09 In my view anyone can step up and plan and execute on contingency plans. Even different people could propose/architect different solutions and get the infra ready already if they want to. Especially since we're discussing threat modelling and response, decentralization is key.
16:42:20 ^ It's git.
16:42:37 Maybe the website would not be banned if the safety protecting software were to be taken down.
16:42:42 For instance, core team is managing the backhub thing and paying for it with donations. But anyone could also do their backup anywhere on their own, adding to resilience.
16:44:30 and it might not be a bad thing if someone does so .. quietly :D
16:44:39 good point
16:45:07 but just having a backup of the github repo + issue tracker is one thing
16:45:23 having an operational replacement if github shuts us down is another
16:45:39 ultimately, I'm still for moving to something off github. Gitlab was kinda clunky. I've used Gogs before, and I know Gitea is just souped-up Gogs, so I'd be down for either.
16:46:16 but we'd need manpower and infra runners. Two things I'd not exactly say we're in excess of atm
16:46:27 gitea does not even use gitea for their repo yet
16:46:32 hyc i think the only thing missing from the replacement would be the various CI. The rest would be ready to work (talking about the existing self-hosted gitlab)
16:47:14 and setting up a CI replacement is not crucial for operations
16:49:44 I know it sucks to have conversations like this over and over, but the world seems to be becoming increasingly hostile towards 'wrong-think'
16:50:06 and I think it is prudent to come back to conversations like these as this gets worse
16:50:10 which I suspect it will
16:50:35 well the thing is that we already have DR ready
16:50:45 IMO it's good to prepare alternatives, but I don't see why move away already. Agree with luigi1111, it's just inflicting inconvenience on ourselves. It's a way for the project to self-censor out of vague fear before someone needs to actively censor.
16:50:46 if moving away is a pro-con analysis, and even if right now the cons outweigh the pros and so we don't move, the pros of moving may one day outweigh the cons, even just as the balance is tipped further and further by hostile takedowns
16:51:06 rehrar: we have plans in case github takes us down
16:51:43 yes, but github getting taken down is only one of the pros of moving. dsc outlined others, such as Tor contribution
16:51:44 I don't think we need to move, we tried that already and it didn't work - if we're FORCED to move that's different and we have multiple contingencies in place
16:52:16 rehrar: yes but the handful of additional pros don't outweigh the cons
16:52:25 ErCiccione has a whole write-up on the issues
16:52:34 Right. The moon base is almost in place.
16:52:36 at present, I think I agree
16:52:43 and it wasn't JUST because of GitLab, it was other things
16:52:51 rehrar: Github works with Tor afaik. not reliably, but it works
16:53:34 yes github works fine thru tor, both the webui and git+ssh push/pull
16:55:39 if you guys are satisfied that no changes need to be made at this time, fine with me
16:56:18 summarize these points in the issue and close it...
18:42:32 Snipa: what happens when someone runs rm -rf / on the self hosted gitlab runner?
18:43:09 Runners are run as a sub-user, not root.
18:43:40 Worst comes to worst, it nukes the user's local stuff, but all the config files are owned by root (in a proper setup), or you do Docker-in-Docker, then it doesn't matter.
18:45:18 There's ways to mostly do it safely, there's always risks involved, but the GL runner tends to be pretty fail-safe. Also, you can protect the gl runner file if you get a higher version so people can't make MRs that change it, which then wouldn't run because they couldn't submit an MR that changes the build file.
19:59:47 dunno bout this... 2020-10-26 19:57:12.153 E Exception in main! PID file /home/user/wacky_work/save_pid.txt already exists and the PID therein is valid
20:16:08 -xmr-pr- mj-xmr opened pull request #6937: Add RELINK_TARGETS and monero_add_target_no_relink
20:16:08 -xmr-pr- > https://github.com/monero-project/monero/pull/6937
20:36:19 Can an inbound peer be used as an outbound one?
20:37:25 I just assumed that inbound = connection initiated from outside = port must be open, while outbound is your daemon initiating the request - and that both types of connections can work both ways for practical purposes
20:38:03 if by used as outbound you mean push stuff into it, then sure (since it works with incoming connections disabled)
20:38:28 if you mean having it connected to you while you are at the same time connected to it, in principle it seems not to make sense
20:38:49 by the way, i'm not getting any incoming connections over tor. it's been 10+ hours now. is this expected?
20:38:58 i passed --anonymous-inbound
20:39:13 I mean both inbound and outbound can be used for downloading blocks and pushing transactions etc.
20:39:23 yeah
20:39:52 I have like 10+ inbound connections (I2P and Tor), but not a single outbound one
20:40:15 so my daemon complains "Lost all outbound connections to anonymity network - currently unable to send transaction(s)"
20:40:53 yes, that's become my new most seen message in the console too
20:41:25 in my case i only have outbound tor connections, nothing coming in
20:41:36 weird, opposite here :P
20:41:47 at the same time the tor daemon complains about failing to reach this or that [scrubbed], so it does seem to be an issue with tor in my case
21:01:08 -xmr-pr- selsta opened issue #6938: tx-proxy I2P / Tor, no outbound connections (only inbound)
21:01:08 -xmr-pr- > https://github.com/monero-project/monero/issues/6938
21:02:38 selsta I think that issue may overlap with this one, which I've been dealing with for a bit. a temporary solution is to add a couple of i2p / tor priority nodes. https://github.com/monero-project/monero/issues/6631
21:03:24 selsta
21:03:33 could you pm me your --anonymous-inbound line?
21:03:40 or paste here if you prefer
21:05:12 Lyza: do you have inbound also staying stable? but yes, your issue looks similar
21:05:15 kayront: I2P or Tor?
21:06:09 tor
22:47:56 selsta yes I do. some of my inbound connections have lasted for as long as my daemon has been up (right now about 3 hours)
22:48:45 otoh my longest-lasting outbound i2p connection at the moment is just 180 seconds old
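The inbound/outbound split discussed above can be checked directly against a running daemon: monerod's JSON-RPC exposes a `get_connections` method, and each connection entry carries an `incoming` flag. The following is a minimal sketch, assuming an unrestricted local RPC endpoint on the default mainnet port 18081; error handling, authentication, and restricted-RPC concerns are omitted.

```python
import json
from urllib.request import urlopen, Request

# Sketch: count inbound vs outbound peers on a local monerod via the
# `get_connections` JSON-RPC method. Assumes an unrestricted endpoint
# on the default mainnet port 18081.

def tally(connections):
    """Split a get_connections result list by direction."""
    inbound = sum(1 for c in connections if c.get("incoming"))
    return inbound, len(connections) - inbound

def fetch_connections(url="http://127.0.0.1:18081/json_rpc"):
    req = Request(url,
                  data=json.dumps({"jsonrpc": "2.0", "id": "0",
                                   "method": "get_connections"}).encode(),
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return json.load(resp)["result"].get("connections", [])

# Example with canned data (no daemon needed); addresses are made up:
sample = [{"address": "x.onion:18083", "incoming": True},
          {"address": "y.b32.i2p:0", "incoming": True},
          {"address": "203.0.113.7:18080", "incoming": False}]
print(tally(sample))  # -> (2, 1): two inbound, one outbound
```

Against a live daemon, `tally(fetch_connections())` returning something like `(10, 0)` would correspond to the "10+ inbound, no outbound" symptom reported in #6938.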