I started by increasing the swap to 500 MB, which initially left almost 50% of the increased swap space showing as 'free'. After about 12 hours, however, the free swap had dropped to 90 MB. The rest of the numbers still look much 'better'. I will continue to monitor.

You have a paging problem, right from the start it would seem. For contrast, here's my PC (not paged yet), i.e. MiB Swap: 0.0 used.

Code:
foo@sdu:~$ top -b -n 1 | head
top - 20:32:12 up 18 min,  2 users,  load average: 0.17, 0.73, 0.66
Tasks: 469 total,   1 running, 468 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.3 us,  1.0 sy,  0.0 ni, 98.6 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :  64203.5 total,  47152.6 free,   6752.9 used,  10298.0 buff/cache
MiB Swap:   1024.0 total,   1024.0 free,      0.0 used.  56598.0 avail Mem

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
   4815 foo       20   0   13388   4284   3408 R  11.8   0.0   0:00.03 top
    539 root      20   0       0      0      0 S   5.9   0.0   0:01.49 nvidia-+
   3474 foo       20   0 1788404 250628 184864 S   5.9   0.4   0:23.66 steamwe+
Here's an rpi5 I use to do a lot of compiles. It has paged, i.e. MiB Swap: 2458.4 used.

Code:
foo@pi23:~ $ top -b -n 1 | head
top - 20:33:35 up 66 days,  4:20,  1 user,  load average: 0.13, 0.13, 0.19
Tasks: 186 total,   1 running, 185 sleeping,   0 stopped,   0 zombie
%Cpu(s):  2.6 us,  2.6 sy,  0.0 ni, 94.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
MiB Mem :   7811.3 total,    110.0 free,    328.7 used,   7372.7 buff/cache
MiB Swap:  16384.0 total,  13925.6 free,   2458.4 used.   7351.9 avail Mem

    PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
2927075 foo       20   0    9968   3324   2712 R   5.6   0.0   0:00.06 top
      1 root      20   0  165364   8704   6264 S   0.0   0.1   1:34.14 systemd
      2 root      20   0       0      0      0 S   0.0   0.0   0:10.60 kthreadd
Consider your lines. Something has immediately gobbled up all your pagefile, and over time there is less and less memory available overall ("avail"). This means the system will be "thrashing" as apps fight each other for swap space, so the kernel will be spending a disproportionate amount of time paging things in and out. In the worst case, parts of the kernel itself may be subject to paging.

Code:
MiB Swap:    200.0 total,      0.2 free,    199.8 used.    238.3 avail Mem
MiB Swap:    200.0 total,      0.1 free,    199.9 used.    184.9 avail Mem
MiB Swap:    200.0 total,      0.0 free,    200.0 used.     98.4 avail Mem
MiB Swap:    200.0 total,      0.1 free,    199.9 used.     76.8 avail Mem
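If you want to confirm the thrashing directly rather than infer it from top, the kernel's own paging counters tell you. A minimal sketch (mine, not from the thread), assuming a Linux /proc filesystem, where the pswpin/pswpout lines in /proc/vmstat count pages swapped in and out since boot:

```shell
# Sum the swap-in/swap-out page counters, sample twice, and report the
# delta; a delta that stays large sample after sample is the thrashing
# signature.
read_swaps() {
  awk '/^pswpin|^pswpout/ {s += $2} END {print s + 0}' /proc/vmstat
}
a=$(read_swaps)
sleep 2
b=$(read_swaps)
echo "pages swapped in+out over 2s: $((b - a))"
```

`vmstat 5`'s si/so columns show the same figures pre-computed per second, if procps is installed.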
Unless you're prepared to change the rpi, you're stuck with 1 GB of RAM, so your only option is to increase the pagefile. I'd start by making it 1 GB and see how that goes.
Yes, the 200 MB was the default setting for dphys, and this is changed as you indicate.

I'm guessing that 200 MB swap is the stock dphys one? I'm not too familiar with "/etc/dphys-swapfile", in that if I need to mess with paging I turn dphys off and use a raw, old-fashioned "mkswap"-type partition. I *think* you'd change "CONF_SWAPSIZE=1024" then reboot, but verify that.
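For what it's worth, the CONF_SWAPSIZE change can be sketched like so. The sed line is demonstrated on a scratch copy; on the Pi you'd edit /etc/dphys-swapfile itself as root, and the commented dphys-swapfile service commands are my assumption of the no-reboot route, so verify them against the man page:

```shell
# Demonstrate the CONF_SWAPSIZE edit on a scratch copy of the config.
conf=$(mktemp)
printf 'CONF_SWAPSIZE=200\n' > "$conf"               # the stock default
sed -i 's/^CONF_SWAPSIZE=.*/CONF_SWAPSIZE=1024/' "$conf"
grep '^CONF_SWAPSIZE=' "$conf"                       # -> CONF_SWAPSIZE=1024
# On the Pi, after editing /etc/dphys-swapfile for real:
#   sudo dphys-swapfile swapoff
#   sudo dphys-swapfile setup      # rebuilds /var/swap at the new size
#   sudo dphys-swapfile swapon
```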
Your guess is a good one! Almost all my services access one or both of two local sqlite databases. Only one service writes to each database, but most of the rest execute one or two SELECTs every 5 minutes. I do have a constantly running NAS on the network to which I could move the databases, but I would really rather not!

What you're after is a situation where about half the swap gets used and then usage stops increasing. If "avail" still decreases over time, all you've done is push the problem into the future. I envisage two causes: (a) an app is specifically requesting virtual memory and not handing it back; (b) file-system operations.
I'd guess at (b), because sqlite3 is going to be constantly accessed, so the kernel is going to favour keeping that in RAM. Ordinarily the kernel allocates all unused RAM to "buff/cache" and shrinks it as apps demand more memory. I'm only guessing at sqlite3 accesses because I'd think "buff/cache" would be smaller if something weren't constantly accessing the filesystem; my reasoning is that something is causing paging rather than shrinking "buff/cache". One way to test that would be to move the databases off-box, using (say) mariadb over a remote connection to a mysql server: way overkill unless you're into full-on nerd mode!
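One cheap way to separate (a) from (b) is to see which processes are actually holding swap. A sketch of mine, reading the Linux-specific VmSwap field from each /proc/&lt;pid&gt;/status:

```shell
# List processes currently holding swap, largest first.
swap_hogs() {
  for st in /proc/[0-9]*/status; do
    # Each status file names the process and reports its swapped-out kB.
    awk '/^Name:/   {n = $2}
         /^VmSwap:/ {if ($2 > 0) printf "%8d kB  %s\n", $2, n}' "$st" 2>/dev/null
  done | sort -rn
}
swap_hogs
```

One long-running service with a steadily growing VmSwap points at (a); swap smeared thinly across many processes fits (b).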
This (weekly reboots) is essentially what I have been doing manually, and I already back everything important up to the NAS on a nightly basis. The problem with automating the reboots is that my services interact in fairly complex ways and don't always start up cleanly; I really need to be around to clean things up, if necessary, after a reboot.

Reality has a more sensible answer. If you can keep the rpi running for a week without it slowing down, then reboot it in the early hours Sun/Mon. You're going to have to do this anyway in order to back everything up, because if it's paging constantly, it'll be killing your SD card: even SSDs don't fare well.
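For anyone whose services do come back cleanly unattended, the scheduled variant would just be a root crontab entry; a sketch with illustrative times (03:00 every Monday):

```shell
# The crontab line itself; install it with `sudo crontab -e`.
entry='0 3 * * 1 /sbin/shutdown -r now'
echo "$entry"
```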
Statistics: Posted by pfletch101 — Mon Jun 09, 2025 2:35 pm