13:20:10 is there a way to compile or run monerod without huge pages/huge page support?
13:20:12 https://github.com/wownero/wownero/issues/233
13:28:57 It's not fatal, it goes on to allocate "normally".
13:29:23 jwinterm, there should be... thats sorta the point of the "lite" mode for randomx. im guessing it tries to run it big and then defaults to lite mode
13:29:36 If it's the spam that's the problem, I think you need to remove the first alloc call.
13:29:56 I think it's the spam that's the problem
13:30:14 those INFO messages
13:30:27 .shrug
13:30:40 Line 225 in rx-slow-hash.c
13:30:52 ima make tshirts. INFO WARN ERROR
13:30:59 cache = randomx_alloc_cache(flags | RANDOMX_FLAG_LARGE_PAGES);
13:31:05 Skipping that should do it.
13:31:11 wasn't there an environment variable to control it?
13:31:33 i thought there was too, but i just scanned the help and didn't find a flag
13:31:43 oh you mean export=something do magic
13:31:52 yea I looked at monerod help and didn't see an option
13:32:03 set variable before compile?
13:32:11 .dunno
13:32:54 I remember talk of one, but git grep getenv does not pick up anything in randomx. Maybe only in new versions.
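[editor's note] The large-pages behaviour discussed above comes down to flag bits passed to the RandomX allocator, and the later-mentioned MONERO_RANDOMX_UMASK works by masking flag bits *off*. A minimal Python sketch of that masking idea follows; the flag values here are assumptions based on randomx.h (check the header for the real values), not a reimplementation of rx-slow-hash.c.

```python
# Assumed flag values, modelled on the enum in randomx.h.
RANDOMX_FLAG_LARGE_PAGES = 1  # assumption: bit 0
RANDOMX_FLAG_JIT = 8          # assumption: bit 3

def apply_umask(flags: int, umask: int) -> int:
    """Clear every flag bit that is set in the umask, the way an
    environment-variable umask could disable e.g. large pages."""
    return flags & ~umask

# With a umask of RANDOMX_FLAG_LARGE_PAGES, the large-pages bit is
# stripped and only the remaining flags survive.
flags = RANDOMX_FLAG_JIT | RANDOMX_FLAG_LARGE_PAGES
masked = apply_umask(flags, RANDOMX_FLAG_LARGE_PAGES)
```

Under this sketch, `masked` keeps the JIT bit but no longer requests large pages, which is the effect the channel is after (avoiding the huge-pages allocation attempt and its INFO spam).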
13:33:33 i mean, perhaps a user would want to thrash their CPU instead of use a lot of memory
14:35:57 /window 27
15:40:28 nowindowforyou
17:09:30 use MONERO_RANDOMX_UMASK env var
17:10:40 you'll need to check randomx.h for what the flag values are
17:11:10 jwinterm: ^^
17:14:10 https://github.com/monero-project/monero/blob/master/src/crypto/rx-slow-hash.c#L86
22:36:00 I ran into an error while importing the chain from file
22:36:17 block height: 1959719
22:36:18 block height: 1964659
22:36:18 block height: 1968799
22:36:18 Done scanning bootstrap file
22:36:18 Full header length: 1028 bytes
22:36:18 Scanned for blocks: 59216999345 bytes
22:36:20 Total: 59217000373 bytes
22:36:22 Number of blocks: 1968800
22:36:26 2020-03-10 22:07:56.796 I bootstrap file last block number: 1968799 (zero-based height) total blocks: 1968800
22:36:29 Preparing to read blocks...
22:36:31 2020-03-10 22:07:56.842 I bootstrap file recognized
22:36:33 2020-03-10 22:07:56.842 I bootstrap::file_info size: 4
22:36:35 2020-03-10 22:07:56.842 I bootstrap file v0.1
22:36:37 2020-03-10 22:07:56.842 I bootstrap magic size: 4
22:36:42 please use paste.debian.net
22:36:43 2020-03-10 22:07:56.842 I bootstrap header size: 1024
22:36:45 2020-03-10 22:07:56.842 I start block: 1696170 stop block: 1968799
22:36:47 2020-03-10 22:07:56.842 I Reading blockchain from bootstrap file...
22:36:49 2020-03-10 22:09:33.873 F ERROR: unexpected end of file: bytes read before error: 0 of chunk_size 30702
22:36:51 Bus error
22:37:12 the file is corrupted
22:37:30 I would sync using monerod, it is way easier
22:37:47 https://paste.debian.net/1134399/
22:38:06 Yes, I'm trying to do that now.
22:38:09 However
22:38:14 Did you continue the download somehow?
22:38:39 https://paste.debian.net/1134400/
22:38:55 Network shares are not supported.
22:39:00 Now it complains about a read-only database
22:39:13 I would recommend to use an SSD + pruning
22:39:20 only takes 25GB and is fast
22:39:29 What is pruning?
22:39:44 Reduces the file size from 70GB to 25GB
22:39:50 How?
22:40:18 You only store 1/7th of some less important data.
22:40:23 Your privacy stays the same.
22:40:33 And other nodes can sync from your node.
22:40:58 `./monerod --prune-blockchain`
22:41:10 How do i fix the database-read-only error?
22:41:12 but you have to use an HDD or SSD
22:41:24 you are using a NAS according to the logs
22:41:57 oh, you are using a raspberry
22:42:10 syncing will be slow :)
22:42:40 The RaspberryPi *hosts* the NAS
22:42:52 I'm using the same HDD for the chain
22:43:08 It looked like a mounted NAS.
22:43:15 Same HDD should work.
22:43:30 It is a plugged in HDD
22:43:40 The error occurred after I tried to import the chain from file
22:43:52 I had 70% of the chain downloaded already
22:44:05 and then started the import of the .raw file
22:44:06 you can’t import 70%
22:44:14 the 55gb
22:45:11 ok, move `/media/NAS/Monero/lmdb` to a different folder and start the daemon again
22:45:27 Won't that reset my progress?
22:46:05 Just for testing.
22:46:19 ok, rebooting the Pi first, sec
22:46:26 rename lmdb to lmdb2 and try
22:46:39 ok. will do
22:51:07 I restarted and am running the daemon again now
22:52:23 reboot alone might have fixed it
22:53:51 seems to be doing stuff for now at least
22:55:22 ok, I’m not sure if it’s going to work
23:00:01 It seems to have ignored my attempt to import from file completely and is now continuing its download: 2020-03-10 22:58:22.802 I Synced 1696210/2051663 (82%, 355453 left)
23:00:26 looks good
23:01:00 I think it will take a few more days.
23:01:07 yep
23:01:24 probably 1-2 weeks more
23:01:27 v0.15.0.5 will be out soon, you should update then even if the sync is still in progress
23:01:33 it includes new checkpoints
23:01:40 what are checkpoints?
23:03:05 I can’t explain it in technical terms but the sync will be faster until the last checkpoint.
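[editor's note] On the pruning question answered above: the "1/7th of some less important data" refers to *prunable* data only. In the scheme described in Monero's pruning writeup, the chain is divided into fixed-width stripes of blocks and a pruned node keeps the prunable parts of only its own stripe. The sketch below assumes a stripe width of 4096 blocks and 8 stripes; treat both numbers as assumptions, not monerod's actual constants.

```python
STRIPE_SIZE = 4096   # assumed stripe width in blocks
NUM_STRIPES = 8      # assumed number of stripes

def stripe_of(height: int) -> int:
    """Which stripe a block height falls into (0..NUM_STRIPES-1).
    Consecutive runs of STRIPE_SIZE blocks share a stripe, and the
    pattern repeats every STRIPE_SIZE * NUM_STRIPES blocks."""
    return (height // STRIPE_SIZE) % NUM_STRIPES

def keeps_prunable(height: int, my_stripe: int) -> bool:
    """A pruned node keeps prunable data only for blocks in its stripe;
    non-prunable data (needed for privacy and validation) is kept for
    every block regardless."""
    return stripe_of(height) == my_stripe
```

Because every node still holds all the non-prunable data and one full stripe of prunable data, pruned nodes can still serve the chain to peers, which matches "other nodes can sync from your node" above.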
23:03:56 v0.15.0.1 is 3 months old, so the last checkpoint is also 3 months old
23:04:04 the last 3 months will be super slow to sync
23:04:17 do you have some additional reading on checkpoints and pruning?
23:05:06 https://web.getmonero.org/2019/02/01/pruning.html
23:07:06 thx. seems interesting. but as I can afford the additional space and it helps the network more i think i'll keep the full chain
23:07:27 the difference for the network is minimal
23:07:41 I have nothing on checkpoints but usually it is nothing to worry about as a user
23:08:26 is it like a checksum for all blocks in a specific period?
23:08:31 The binary includes a database of known hashes for blocks 0 to N. If a block you get over the network matches the expected one, it skips some validation tests.
23:08:47 Well, now it's more like the hash of a set of hashes of blocks, since it saves space, but same idea.
23:09:04 makes sense
23:09:16 You can disable that with --fast-sync 0 IIRC.
23:09:34 checkpoints are only for rollbacks?
23:09:48 what is the correct name for the database?
23:10:29 syncing past 75% definitely seems to go a *lot* slower than 0-50%
23:14:12 I don't understand, selsta.
23:16:05 There are checkpoints in checkpoints.cpp and there is the checkpoints.dat with expected_block_hashes_hash[].
23:16:08 Are they related?
23:16:46 In some way I guess, since they're both about checking block hashes.
23:17:11 It's a bit unfortunate checkpoints.dat is called that but hey.
23:17:27 That is what always confused me :D
23:17:35 Is there a technical explanation somewhere I can read up on these things?
23:20:40 not aware of any