

Assuming “rockets impose” is an autocorrect of “docker compose”, it’s the best one I’ve ever seen.
That’s the one. It’s a bit daunting, and I have a CalDAV migration to complete and some offsite backups to get done first.
I am, the UniFi Java blob that runs on MongoDB. I use it for my 802.11ac access points, although not very often.
I really want to move to OpenWrt on them (not a big fan of how Ubiquiti treats out-of-support hardware), but I’m scared of taking the big plunge of managing them all with a unified interface. There are projects to do just this; I guess it’s really the work of setting it all up.
Very cool.
I wish I had a valid use case for my nodes, but they’re basically just toys at the moment.
You wouldn’t be any more vulnerable to DDoS attacks than you would be without WireGuard.
It’s way worse than security being an “afterthought”; most of these projects have no afterthought at all. No human review, poor testing if any, rife with race conditions, bad or no error handling, bad or no human-readability standards, etc.
But that isn’t even the problem. The deeper and more concerning issue is that these vibe coders iterate very quickly and drown out any meaningful human review by sheer volume. Just like AI-driven content and web scraping, AI vibe coding is making human-written code less viable, simply because humans iterate more slowly.
IDS, L4 firewall and video streaming from the same machine? It can be done.
Should you do it? That’s a lesson I’m gonna leave to you to learn yourself. For personal growth.
If you’re talking about streaming steam games at 4k, then maybe. But at that point build a dedicated machine.
Sunshine works fine with N100 QuickSync for 1080p streaming, plus Frigate. I’m running both of these on an 11th-gen i5 with a Coral TPU for Frigate.
Not sure what your “punishment” is for hardware, but your current list really isn’t that demanding.
Lots of people deploy Sunshine on N100 mini PCs with QuickSync; you don’t really need a discrete GPU that way.
SR-IOV is electrically expensive to implement. You can have it; you’ll just pay more.
I’m a lifelong Linux user (or since 1999, so half my life), but I was a mixed Mac and Windows user before that. Anyway, I understand the reluctance you’re facing.
You don’t need to endanger any part of your current experience to start self hosting, you can just start adding to it. The stakes can be very low if you want to learn that way.
I guess. I don’t know why a person would do this, though… Especially just for an LLM.
I don’t need to build a datacenter; I’m fine with building a rack myself in my garage.
During the last GPU mining craze, I helped build a 3-rack mining operation. From a power-management perspective, GPUs are unregulated pieces of power-sucking shit. You do not have the capacity to do this on residential power, even with 300-amp service.
Think of a microwave’s behaviour: yes, a 1000 W microwave pulls between 700 and 900 W while cooking, but the startup load is massive, sometimes almost 1800 W, depending on how cheap the thing is.
GPUs also behave like this, but not at startup. They spin up load predictively, which means the hardware demands more power to get the job done; it doesn’t scale the job down to save power. Multiply that by 58 RX 9070s. Now add cooling.
You cannot do this.
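To put rough numbers on it (every figure below is an assumption for illustration, not a measured spec), here’s the kind of back-of-envelope math that matters. Note it only covers the cards and rigs, before cooling and before anything else in the house:

```python
# Back-of-envelope sketch; every number is an assumption for illustration,
# not a measured spec for the RX 9070.
NUM_GPUS = 58
BOARD_POWER_W = 250       # assumed typical per-card draw under load
SPIKE_FACTOR = 2.0        # assumed brief transient spikes near 2x board power
HOST_OVERHEAD_W = 1500    # assumed CPUs/boards/fans across all rigs
PSU_EFFICIENCY = 0.9      # assumed wall-to-DC conversion efficiency

steady_dc_w = NUM_GPUS * BOARD_POWER_W + HOST_OVERHEAD_W
steady_wall_w = steady_dc_w / PSU_EFFICIENCY
peak_wall_w = (NUM_GPUS * BOARD_POWER_W * SPIKE_FACTOR + HOST_OVERHEAD_W) / PSU_EFFICIENCY

print(f"steady draw at the wall: ~{steady_wall_w / 1000:.1f} kW")
print(f"worst-case simultaneous spike: ~{peak_wall_w / 1000:.1f} kW")
# Cooling and the rest of the house come on top of this, and breakers,
# wiring and PSUs have to be sized for the spike, not the average.
```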
K3s (and k8s, for that matter) expects you to build a hierarchy of YAML configs, mostly because spinning up Docker instances is done in groups: certain traits apply to the whole organization, certain ones apply only to most groups but not all, and certain configs are special to certain services (e.g. HTTP nodes added when demand rises above some threshold).
But I wonder why you want to cluster Navidrome or Pi-hole. Navidrome would need significant load before load balancing is worth it (and it’s non-trivial to implement), and Pi-hole can just sit behind a round-robin DNS forwarder; it would also be weird to put behind a load balancer.
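If it helps to picture that hierarchy without writing any YAML, here’s a tiny Python sketch of the layering idea (base settings, group overrides, service-specific overrides merged in order). It’s only an analogy for how k8s-style base/overlay configs compose, not actual K3s tooling, and the names and values are made up:

```python
# Illustrative sketch of layered configuration (base -> group -> service).
# Analogy only; not real K3s/kubectl code. Names and values are made up.

def merge(*layers: dict) -> dict:
    """Later layers override earlier ones, key by key."""
    out: dict = {}
    for layer in layers:
        out.update(layer)
    return out

base = {"replicas": 1, "log_level": "info", "network": "internal"}
media_group = {"replicas": 2, "storage_class": "bulk-hdd"}   # applies to most media services
navidrome = {"image": "deluan/navidrome", "port": 4533}      # service-specific bits

print(merge(base, media_group, navidrome))
# -> replicas 2, log_level info, network internal, storage_class bulk-hdd,
#    image deluan/navidrome, port 4533
```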
I don’t think anyone here disagrees that port scanning is bad, or even minds that you filed an AWS ticket. And congrats on your live service.
But your answers to comments are weird, as if this is not only your first server or VPS with a public interface, but your first time exposing anything to the public web. And even if that’s true, there’s a first time for everyone.
But man, doubling down and insisting that “port scanning is unauthorized traffic” betrays a certain naivete about how TCP/IP works.
What you are seeing is not only normal; AWS can’t do anything about it either, because that’s how IP source and destination sockets work.
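For anyone unclear on what a “port scan” actually is at the socket level, it’s nothing more exotic than attempting connections and noting which ones answer. A minimal sketch (the target address and port range below are placeholders; only point this at hosts you own):

```python
# Minimal TCP connect "scan": try to open a connection to each port and
# record which ones answer. Anyone on the internet can do this to any
# public IP; the only real defenses are not listening on a port and
# filtering at your own edge. Scan only hosts you own.
import socket

TARGET = "198.51.100.10"   # placeholder address (TEST-NET-2), not a real host
PORTS = range(20, 1025)

open_ports = []
for port in PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(0.5)
        # connect_ex returns 0 if the TCP handshake completes,
        # and an errno (refused, timed out, etc.) otherwise.
        if s.connect_ex((TARGET, port)) == 0:
            open_ports.append(port)

print("listening:", open_ports)
```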
Oh, OK. I moved to MikroTik 8 years ago and haven’t looked back.
OpenWRT on a 5009? Why? You’ll lose the switch/CPU integration and a whole lot of speed, not to mention features…
Port scanning is not authorized traffic.
Lol what
I think you should read the terms of your AWS contract. How do you think AWS moves instances, if not via agents gathering metrics?
And this case is Mandiant, so you’re fine.
Are you sure you’re ready for AWS?
Umm…
You know how that works, right? Like, if you don’t want to expose ports, just… don’t expose them. But you can’t prevent port scanning.
I would love to see the support request from AWS for this.
Edit: also, I think “script kiddy” is a bit of a stretch here.
This is a difficult problem to solve, because everyone has their own (valid) way to name, organize and tag music.
That’s why Lidarr is so disappointing to many folks.
Most of us use a couple of tools; I personally use MusicBrainz Picard.
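Picard covers most of it; if you ever want to script a cleanup pass on top of it, the mutagen library is a common building block. A small sketch (the file path and tag values are just examples, and the file needs an existing ID3 tag):

```python
# Tiny tag-editing sketch using mutagen (pip install mutagen).
# File path and tag values are placeholders; EasyID3 expects the file
# to already carry an ID3 tag.
from mutagen.easyid3 import EasyID3

tags = EasyID3("01 - Some Track.mp3")
tags["albumartist"] = "Various Artists"   # keep compilations grouped together
tags["genre"] = "Ambient"
tags.save()
print(dict(tags))
```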