

Because sycophants keep saying it’s going to take these jobs, eventually real scientists/researchers have to come in and show why the sycophants are wrong.
Again, you’re being reductive. My argument is not that we will stop practising critical thinking altogether, but that we will not need to practise it as often. Less practice always makes you worse at something. I do not need evidence for that as it is obvious.
I don’t see a point to continuing this conversation if you keep reducing my argument to “nobody will think anymore”.
I am glad you use AI for reasons that don’t make you stupid, but I have seen how today’s students are using it instead of using their brains. It’s not good. We teach critical thinking in schools for a reason, because it’s something that does not always come naturally, and these students are getting AI to do the work for them instead of learning how to think.
The people who were used to the oral tradition were right. Memorising things is good for your memory. No, I don’t think people will stop thinking altogether (please don’t be reductive like this lmao), just as people didn’t stop remembering things. But people did get worse at remembering things. Just as people might get worse at applying critical thinking if they continually offload those processes to AI. We know that using tools makes us worse at whatever the tool automates, because without practice you become worse at things. This just hasn’t really been a problem before, as the tools generally make those things obsolete.
You don’t think it’s possible that offloading thought to AI could make you worse at thinking? That has been the case with technology in the past, such as calculators making us worse at maths (in our heads or on paper), but this time the thing you’re losing practice in is… thought. This technology is different because it’s aiming to automate thought itself.
Was trying to get a friend to switch to Jellyfin the other day and it turns out he’s got a weird Hisense projector that uses VIDAA OS, which does not have a Jellyfin app, but DOES have a Plex app. I imagine setups like this are probably limiting Jellyfin’s adoption. VIDAA is actually less niche than I thought as well, heaps of cheap-ish TVs and projectors are running it.
Come on man, you know they didn’t mean it literally 💀
there he is. Vinny.
Apologies, I misread your comment as saying you had to use the terminal to use Linux (I was drunk ngl). I still believe Linux is easier to use than Windows with the caveat that the easiest system to use will always be the one you have the most experience with. I switched from MacOS/Windows to Fedora on my personal machine a few months ago and it’s been smooth sailing for me, though I have always used Linux at least somewhat (I work in cyber security), so that has probably helped.
Dismissing Linux as a tool for a different job (ie not personal/business computing) is an odd position to take for someone with your experience.
lol tell me you’ve never used linux without telling me you’ve never used linux
I don’t think they would do that unless Lemmy continues to grow to a point where it challenges Reddit. Even then it becomes a technical issue, and I don’t think they can do it. It was one thing for Threads to federate, being designed with that in mind from day 1, but it’s completely different for Reddit. There are so many features that just wouldn’t make the jump, and so much content that would need to be reworked.
If they were going to do it, it would most likely be a clean break where you just can’t access old Reddit content on Lemmy, but all their new stuff would be accessible.
I also just don’t see them giving away their content like that after cracking down on the API how they did.
The fact that Facebook are allowing spam pages into this is wild.
I run a Facebook page (periodically). Frequently post things which get 3k+ likes. Facebook has paid me $0.
Strange to equate the other senses to performance in intellectual tasks but sure. Do you think feeding data from smells, touch, taste, etc. into an AI along with the video will suddenly make it intelligent? No, it will just make it more likely to guess what something smells like. I think it’s very clear that our current approach to AI is missing something much more fundamental to thought than that, it’s not just a dataset problem.
Nobody can because of Steam’s monopoly. You can try to create your own store but you won’t have nearly the same selection of games. Monopolies are bad. Even when they’re companies you like. To be clear, I’m not saying Steam should be broken up, I’m not saying they should lose games to other stores. I’m saying they’re a monopoly, and that is bad because it enables Steam to stagnate or even get worse.
It’s also pretty inarguable imo that Steam has been getting worse. Steam sales used to be events. You’d get multiple huge discounts on AAA games. Now you’re lucky to get 40% off a 6 year old game. And don’t get me started on the UI, which, while fine, hasn’t changed meaningfully in like a decade. There simply is no incentive for Steam to be better. So they’re not. We should consider ourselves lucky that they’re still as good as they are, because they won’t be forever.
Oh yeah we’re 100% agreed on that. I’m thinking of the AI evangelicals who will argue tooth and nail that LLMs have “emergent properties” of intelligence, and that it’s simply an issue of training data/compute power before we’ll get some digital god being. Unfortunately these people exist, and they’re depressingly common. They’ve definitely reduced in numbers since AI hype has died down though.
Wtf is with people deciding a monopoly is good because the company hasn’t started enshittifying it yet. It will happen. It’s what monopolies do. Healthy competition is an important part of preventing enshittification.
I feel like the amount of training data required for these AIs serves as a pretty compelling argument as to why AI is clearly nowhere near human intelligence. It shouldn’t take thousands of human lifetimes of data to train an AI if it’s truly near human-level intelligence. In fact, I think it’s an argument for them not being intelligent whatsoever. With that much training data, everything that could be asked of them should be in the training data. And yet they still fail at any task not in their data.

Put simply: a human needs less than one lifetime of training data to be more intelligent than AI. If throwing more training data/compute at the problem hasn’t solved this already, I don’t think it will.
Big fan of this series.
Same! Got to log off early 😎
Crazy that this wasn’t already the standard years ago.