• 0 Posts
  • 69 Comments
Joined 2 years ago
Cake day: July 1st, 2023


  • I’ve worked with Windows environments from 2003 right through to today, migrating to Azure. The biggest skills gap among technicians and engineers administering Windows is actually networking. It is the single thing that connects every service, server and user, and yet DNS, DHCP, routing and its protocols, and link-layer technologies like VLANs, interface configuration and aggregation are so poorly understood that engineers and technicians often badly misdiagnose problems. Almost all issues happen around network layers 2-4 or layer 8 (the end user).

    It doesn’t need to come first, but no matter the OS or component, networking is core and the single biggest return on investment for sysadmin types.

    Sure, other basic skills are required, but so much comes down to being able to test TCP with telnet and understand each hop. Is the server listening? What process ID is listening? Did someone move RDP off 3389 and that’s why it doesn’t work? Was the hosts file edited and that’s why this hostname resolves to some old IP? Why is traffic going out the WAN interface of the router when it should be going over an IPsec tunnel? Even a quick port test like the sketch at the end of this comment answers half of these.

    All this and more has nothing to do with Windows, and yet for anything that isn’t just user training or a show-and-tell of how to do something, there’s a good chance you need to follow the networking layers to make sure behaviour is as expected.
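
    As a minimal sketch of what I mean (my own example; the hostname and port are placeholders, not anything from a real environment), plain Python from any admin box answers “is anything listening there?” before you start blaming the application:

    ```python
    # Quick layer-4 sanity check: can we open a TCP connection at all?
    # HOST and PORT are hypothetical - substitute the real service.
    import socket

    HOST, PORT = "fileserver01.example.local", 3389  # e.g. is RDP really on 3389?

    try:
        with socket.create_connection((HOST, PORT), timeout=3):
            print(f"{HOST}:{PORT} is reachable and something is listening")
    except OSError as err:
        # A timeout usually points at a firewall or routing problem (layers 3-4);
        # "connection refused" means the host answered but nothing listens on that port.
        print(f"{HOST}:{PORT} failed: {err}")
    ```

    On the server side, netstat -ano or PowerShell’s Get-NetTCPConnection then tells you which PID actually owns the port.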



  • I don’t know where you work, but don’t access your tailnet from a work device, and ideally not from their network either.

    Speaking to the Roku: you could buy a cheap Raspberry Pi and a USB network adapter. One port to the network, the other to the Roku. The Pi can advertise a route to the Roku over Tailscale, and the Roku itself probably needs nothing, since everything upstream, including the private Tailscale 100.x.y.z addresses, is handled by the Raspberry Pi sitting in the middle.

    I guess that’d cost around 40-ish dollars, one time.



  • They could be, but I assume a device from, say, Apple won’t install a CCP root authority unconditionally. Huawei and Xiaomi probably could be forced, but the browsers too, like Chrome, Firefox and Safari, would need to accept the device certificates as trusted.

    But the pressure in Europe would likely be: to trade within Europe, you must comply.

    It fundamentally destroys the whole trust model of PKI if this goes ahead. We just need to hope it does not.



  • A country could, for example, enact a mandatory certificate authority that it controls, then have ISPs sitting in the middle use that mandated, trusted CA as the certificate issuer for a proxy. This already exists in the enterprise: a router or proxy appliance acts as a man-in-the-middle to inspect SSL traffic, intercepting connections to a website, say Google, but terminating that connection on itself and creating a new connection to Google from itself. Because the connection terminates on the proxy, all data is decrypted there. To pass the data back to clients without a browser certificate trust error, the proxy uses that already-mandated CA it controls to mint new certificates for the sites it is proxying, re-encrypts the traffic back to the client with a trusted certificate, and browsers accept it.

    It’s actually more than theoretical; it has literally been proposed in Europe. The method is robust and is already what happens in practice in enterprise organisations on company devices, with the organisation’s CA certificate installed onto company computers by policy or at build time. I’ve deployed and maintained this setup on Barracuda firewalls, Fortigate firewalls and now Palo Alto firewalls. (The sketch under the link shows how a client can at least see which CA actually issued the certificate it was served.)

    https://www.itnews.com.au/news/eu-row-over-certificate-authority-mandates-continues-ahead-of-rule-change-602062
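
    As a rough sketch (my own illustration, not part of the EU proposal or any vendor’s tooling; the hostname is just an example), this is how a client can check which CA actually issued the certificate it received. Behind an inspecting proxy, the issuer is the mandated or enterprise CA rather than the site’s public one:

    ```python
    # Show who actually issued the certificate this client was served.
    import socket
    import ssl

    HOST = "www.google.com"  # example target; any HTTPS site works

    context = ssl.create_default_context()  # trusts whatever CAs this machine trusts
    with socket.create_connection((HOST, 443), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=HOST) as tls:
            cert = tls.getpeercert()  # the certificate the client actually received

    issuer = dict(rdn[0] for rdn in cert["issuer"])
    print("Issuer:", issuer.get("organizationName"), "/", issuer.get("commonName"))
    # On a clean path this shows the site's public CA; behind an inspecting proxy
    # it shows the proxy's CA, which is only trusted because it was installed by
    # policy, at build time, or by mandate.
    ```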



  • Tbf it’s not that hard to increase waste by 25%. Just think of how long a new staff member takes to do easy work, how much rework they cause, and how much they generally suck up the time of the people around them until they get the experience and skills they need to do the job.

    That’s the micro scale, but at the macro scale it’s the same. Cause more waste and not only does the money matter less, the public interest deteriorates and the entire foundation is compromised.

    Actually, just check out the history of CIA ops in foreign countries. Consider how many of those countries end up with high inflation and a currency that becomes ever more worthless in a death spiral.

    I don’t think it’s a single factor like “it’s orchestrated by nation state X”, but I fully believe that if the US is going to make itself weak, then countries like Russia or China will take full advantage and offer a helping push. They’ll do it subtly, so it’s hard to see. But it’s just standard politics, and it would be insane to think they won’t take full advantage of the situation.


  • biscuitswalrus@aussie.zone to Programmer Humor@programming.dev: Safe passwords

    Enterprise applications are often developed by the most “quick, ship this feature” kind of developers in the world. Unless the client is paying for the development, a quick look at the SQL tables often shows unsalted passwords sitting in plain view.

    I’ve seen this in construction, medical, recruitment and other industries.

    Until the law requires code auditing for how PII is handled and maintained, it’s mostly a “you’re fine until you get breached” approach. Even the ACSC (Australian Cyber Security Centre) has limited guidelines, practically worthless; at most they suggest MFA for web-facing services. Most cyber security insurers require something, but it’s also practically self-reported, with no proof. So when someone gets breached because everyone’s passwords were left in a table, largely unguarded, the world becomes a worse place and the list of usernames and passwords on haveibeenpwned grows. Salting and stretching properly is a few lines of standard-library code; see the sketch at the end of this comment.

    Edit: if a client pays, and therefore has the leverage to demand things like code auditing and security auditing as well as SAML etc., then it’s something else. But in the construction industry, say, I’ve seen the same garbage-tier software used at 12 different companies, warts and all. The developer is semi-local to Australia, ignoring the offshore developers…
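
    For contrast, here is a minimal sketch of doing it properly with nothing but the Python standard library (my illustration, not code from any of the products above; the iteration count and the idea of storing salt and digest in two columns are my assumptions):

    ```python
    # Salted, stretched password storage - the opposite of a plain-text column.
    import hashlib
    import hmac
    import os

    ITERATIONS = 600_000  # assumed work factor for PBKDF2-SHA256

    def hash_password(password: str) -> tuple[bytes, bytes]:
        """Return (salt, digest), e.g. for two columns in the users table."""
        salt = os.urandom(16)  # unique random salt per user
        digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return salt, digest

    def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
        candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
        return hmac.compare_digest(candidate, digest)  # constant-time comparison

    salt, digest = hash_password("correct horse battery staple")
    assert verify_password("correct horse battery staple", salt, digest)
    assert not verify_password("hunter2", salt, digest)
    ```

    A purpose-built scheme like bcrypt or Argon2 is better again, but even this kills the “one dumped table cracks every account” failure mode.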



  • I’m far from an expert, sorry, but my experience has been so far so good (literally wizard-configured in Proxmox, set and forget), even through losing a single disk. Performance for VM disks was great.

    I can’t see why regular files would be any different.

    I have 3 disks, one on each host, with Ceph keeping 2 copies (tolerant to the loss of 1 disk) distributed across them. That’s practically what I think you’re after.

    I’m not sure about seeing the file system while all the hosts are offline, but if you’ve got any one system with a valid copy online you should be able to see it. I do. But my emphasis is generally on getting the host back online.

    I’m not 100% sure what you’re trying to do, but a mix of Ceph as remote storage plus something like Syncthing on an endpoint to send stuff to it might work. Syncthing might just work without Ceph.

    I also run ZFS on an 8-disk NAS that’s my primary storage, with shares for my Docker containers to send stuff to and a media server to get it off. That’s just TrueNAS SCALE, so it handles data similarly. ZFS is also very good, but until SCALE came out it wasn’t really possible to have the “add a compute node to expand your storage pool” model, which is how I want my VM hosts. Scaling ZFS out looks way harder than Ceph.

    Not sure if any of that helps your case, but I recommend trying something if you’ve got spare hardware and seeing how it goes on dummy data, then blowing it away and trying something else. See how it behaves when you take a machine offline. When you know what you want, do a final blow-away and implement it the way you’ve learned works best.


  • 3x Intel NUC 6th gen i5 (2 cores), 32 GB RAM. Proxmox cluster with Ceph.

    I just ignored the limitation and tried a single 32 GB SO-DIMM once (out of a laptop) and it worked fine, but I went back to 2x 16 GB DIMMs since the bottleneck was still the 2 CPU cores. Lol.

    I’ve been running that cluster for 7 or so years now, since I bought them new.

    My point is you can run off shit-tier hardware, since three nodes give redundancy and enough performance. I’ve run entire proofs of concept for clients off them: dual domain controllers plus RD Gateway, broker, session hosts, FSLogix and so on, back when MS had only just bought that tech. Meanwhile my home *arr stack just plugs along in Docker containers. Even my OPNsense router runs as a VM on them. Just get a proper managed switch and bring the internet in on a VLAN, into the guest VM on a separate virtual NIC.

    Point is, it’s still capable today.