Well I mean, slavery is still illegal. Black people are able to vote, hold office, own property, etc.
There’s still a lot of social injustice to solve but there’s been a lot of progress, albeit slow.
I don’t think that’s true. There were violent riots accompanying every major social change in at least recent history.
And famously, it took an entire fucking war to end slavery in the United States.
A gold hard hat? Appropriately tacky.
Yes, video is a lot more reliable than the sparse bits of text I had read at that point.
Boeing, but a generally reliable model of Boeing this time.
I’d guess that it broke up in the air based on the description of the debris crashing… but that just raises more questions.
Spatial reasoning has always been a weakness of LLMs. Other symptoms include the inability to count and no concept of object permanence.
I mean, it still could be. But LLMs are not that AGI we’re expecting.
“Don’t believe that marketing department” is one of those things everybody needs to learn at some point in their life.
No money is exactly what’s needed to achieve this.
Well that’s not the only thing that makes them obnoxious, but it’s a huge contributor.
How the hell is Starship pictured in an “early SpaceX” post? The problems with Musk have been going on far longer than any Starship hardware has existed.
Finding a job is like dating except if you are too picky you starve to death.
Is it in decline? I mean, I want to believe it, but I haven’t seen any hard data on that.
Subdermal e-ink particles, perhaps?
What kind of moron doesn’t check the diff? Plus, modern AI coding tools explicitly show the diff and ask you to confirm each edit directly.
I wouldn’t let a human muck about in my code unchecked, much less an AI. But that doesn’t mean it’s useless.
As always, the specific situation matters. Some refactors are mostly formulaic, and AI does great at that. For example, “add/change this database field, update the form, then update the API, update the admin page, update the UI, etc.” is perfectly reasonable to send an AI off to do, and can save plenty of programmer time.
Protest MUST disrupt something or it will be ignored. That’s why riots and boycotts get shit done while normal protests fizzle out.
Every major social policy change I’m aware of was accompanied by riots.
Disrupting parliament is far less violent than a riot yet still makes the point effectively.
I don’t think I suggested it wasn’t worrisome, just that it’s expected.
If you think about it, AI is tuned using RLHF, or Reinforcement Learning from Human Feedback. That means the only thing AI is optimizing for is “convincingness”. It doesn’t optimize for intelligence; anything that seems like intelligence is literally just a side effect as it forever marches onward towards becoming convincing to humans.
“Hey, I’ve seen this one before!” you might say. Indeed, this is exactly what happened to social media. They optimized for “engagement”, not truth, and now it’s eroding the minds of lots of people everywhere. AI will do the same thing if run by corporations in search of profits.
Left unchecked, it’s entirely possible that AI will become the most addictive, seductive technology in history.
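To make the RLHF point concrete, here’s a toy sketch (everything here is made up for illustration — the style names, the reward values, the update rule are all hypothetical, not any real lab’s training code). The “raters” reward whatever *sounds* convincing rather than what’s accurate, and the reinforcement loop dutifully optimizes for that:

```python
import random

random.seed(0)

# Toy "policy": the model can answer in one of two styles.
styles = ["accurate_but_blunt", "confident_and_flattering"]
weights = {s: 1.0 for s in styles}  # starts with no preference

def human_feedback(style):
    # Hypothetical raters: they reward what sounds convincing,
    # not what is true -- that's the whole argument above.
    return 0.9 if style == "confident_and_flattering" else 0.3

def pick_style():
    # Sample a style proportionally to its current weight.
    total = sum(weights.values())
    r = random.uniform(0, total)
    for s in styles:
        r -= weights[s]
        if r <= 0:
            return s
    return styles[-1]

# Crude reinforcement loop: upweight whatever raters preferred.
for _ in range(2000):
    s = pick_style()
    weights[s] *= 1.0 + 0.01 * human_feedback(s)

best = max(weights, key=weights.get)
print(best)  # the convincing-sounding style wins, accuracy never enters into it
```

Nothing in that loop ever checks correctness; the reward signal is the only thing being climbed, which is why a reward built from human impressions produces a model optimized for impressing humans.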
Turns out it doesn’t really matter what the medium is, people will abuse it if they don’t have a stable mental foundation. I’m not shocked at all that a person who would believe a flat earth shitpost would also believe AI hallucinations.
Anyone who gets uncomfortable with government surveillance because it could be used to target certain demographics of people needs to look no further than what Israel has done to prove their point.
The only thing stopping the world from autonomously targeting people by online demographic is common human decency, and humanity is in very short supply of that these days.