

Someone should get the ArchiveTeam onto it (if they aren’t already)
I’m in the same boat, but haven’t taken the plunge yet. I’ve been following paperless for a while now, but every time I look at scanners I’m blown away by their prices…
Based on this thread it’s the deduplication that requires a lot of RAM.
See also: https://wiki.freebsd.org/ZFSTuningGuide
Edit: from my understanding, the pool shouldn’t become inaccessible though, only slow. So there might be another issue.
Edit2: here’s a guide to check whether your system is limited by zfs’ memory consumption: https://github.com/openzfs/zfs/issues/10251
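To get a feel for why deduplication is RAM-hungry, here’s a rough back-of-the-envelope calculation. It assumes the commonly cited rule of thumb from ZFS tuning guides that each unique block costs on the order of 320 bytes in the dedup table (DDT), which ideally fits in the ARC; the exact figure varies by pool, so treat this as a sketch, not a sizing tool:

```python
# Back-of-the-envelope estimate of RAM needed for ZFS deduplication.
# Assumption (rule of thumb from tuning guides, not an exact figure):
# each unique block costs roughly 320 bytes in the dedup table (DDT).

DDT_BYTES_PER_BLOCK = 320  # approximate; real DDT entries vary in size

def dedup_ram_bytes(pool_size_bytes: int, avg_block_size: int = 128 * 1024) -> int:
    """Estimate DDT RAM for a pool, pessimistically assuming every block is unique."""
    unique_blocks = pool_size_bytes // avg_block_size
    return unique_blocks * DDT_BYTES_PER_BLOCK

# Example: a 10 TiB pool with the default 128 KiB recordsize
print(dedup_ram_bytes(10 * 2**40) / 2**30, "GiB")  # → 25.0 GiB
```

So even a modest pool can want tens of GiB of RAM for the DDT alone, which is why dedup-heavy pools crawl once the table no longer fits in memory.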
Then why didn’t you contact the devs or open a bug report on GitHub?
Additionally, this isn’t the community where this needs to be addressed. Either contact the admins or open an issue on GitHub.
Greek and Roman god names all the way!
For syncing I use Syncthing. It’s open source as well and syncs two or more devices without the need for cloud storage.
I counter with friggin Lasers! https://youtu.be/fH_x3kpG8Z4
FYI, I’m getting a 502 error when trying to open the preview…
(Sadly I can’t program, so I can’t help with the actual coding…)
I’m not sure whether it makes sense trying to discuss with you but let’s try…
You couldn’t know how much traffic you saved, because you didn’t load the ad. The ad could be 1 KB, 1 MB or 1 GB, but since you didn’t load it, you wouldn’t know its size. Without knowing its size, you can’t calculate the savings.
As mentioned somewhere in the thread, you would have to directly compare two machines visiting the same pages, and even then the result is probably only approximate, because the two machines might get served different ads.
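The comparison approach above could be sketched like this (the function name and the byte counts are hypothetical, just to illustrate why the result is only an approximation):

```python
# Sketch of the two-machine comparison: measure total bytes transferred
# by two clients loading the same pages, one with an ad blocker and one
# without. The difference approximates the savings, but only roughly,
# since the two clients may be served different ads.

def estimated_savings(bytes_without_blocker: int, bytes_with_blocker: int) -> int:
    """Approximate traffic saved by blocking. Can even come out negative
    if the blocked run happened to fetch larger non-ad resources."""
    return bytes_without_blocker - bytes_with_blocker

# Hypothetical measurements for the same page on two machines:
print(estimated_savings(2_400_000, 1_600_000))  # → 800000
```

The point is that the only number you can actually produce is this difference between two separate measurements, never the size of the individual ads your own machine skipped.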