

Thank you so much for this!!!
For those of us who do skip the AI summaries, it’s the equivalent of adding an extra click to everything.
I would support optional AI, but having to physically scroll past random LLM nonsense all the time feels like the internet is being infested by something as annoying and useless as ads, and we don’t even have a blocker for it.
It’s going to require DNA samples in my lifetime.
Sort of, but I think “influence over emotional states” is understating it and just the tip of the iceberg. It also made it sound passive and accidental. The real problem will be overt control as a logical extension of the kinds of trade-offs we already see people make about, for example, data privacy. With the Replika fiasco, I bet heaps of those people would have paid good money to get their virtual love interests de-“lobotomized”.
Thanks!
Trouble is, your statement was in answer to @morrowind@lemmy.ml’s comment that labeling lonely people as losers is problematic.
Also, it still looks like you think people can only be lonely as a consequence of their own mistakes? Serious illness, neurodivergence, trauma, refugee status, etc. can all produce similar loneliness in people who did nothing to “cause” it.
And Hastalavista if you wanted to find things that AltaVista didn’t.
That’s really interesting. Its output to this prompt totally ignored the biggest and most obviously detrimental effect of this problem at scale.
Namely, emotional dependence will give the big tech companies that own these AIs increased power over people.
It’s not as if these concepts aren’t widely discussed online: everything from Meta’s emotional manipulation experiments and Cambridge Analytica through to the meltdowns Replika owners had over changes to the algorithm is relevant here.
That thing about macaques is interesting.
Idk, it looks like it works (or maybe people are just getting better at not littering and it correlates), but this is one of those things that can be measured, so I’d trust Department of Conservation research over my own anecdotal evidence.
I don’t know. I haven’t seen the research.
I was alarmed by it at first, but it’s been a few years now and the parks I go to that used to have bins don’t seem any more littered, fwiw. If anything, less so.
But that’s anecdotal and as I understand it the decision was made based on more than that.
This has been happening in New Zealand for a while. The theory seems to be that bins attract more litter and are a hazard to wildlife.
I was sceptical at first but it actually seems to work.
It perturbs me that they’re selling food there, though. Surely the food sellers should have bins in their immediate vicinity for which they are responsible.
I’m seriously impaired, so all humans will start dying in a matter of weeks.
On the plus side, everything in that book The World Without Us will come to pass, and the planet’s environment and ecology will be better off.
Nooo, enshittification. I’ve only recently started using it.
What do we use instead? Is Matrix the only option?
I think I’m just going to have to agree to disagree.
AI getting a diagnosis wrong is one thing.
AI being built in such a way that it hands out destructive advice human scientists already know is wrong, like “vaccines cause autism” or homeopathy, is a malevolent and irresponsible use of tech imo.
To me, it’s like watching a civilization downgrading its own scientific progress.
I take your point. The version I heard of that joke is “the person who graduated at the bottom of their class in med school”.
Still, at the moment we can try to avoid those doctors. I’m concerned about the popularizing and replication of bad advice beyond that.
The problem here is that this tool is being marketed to GPs, not patients, so you wouldn’t necessarily know where the opinion is coming from.
I’d hope the bar for medical advice is higher than “better than the worst doctor”.
Will be interesting to see where liability lies with this one. In the example given, following the advice could permanently worsen patients’ conditions.
Given that the advice is proven to be wrong and goes against official medical guidance for doctors, that could potentially be material for a class action lawsuit.
When we look at passing scores, is there any way to quantitatively grade them for magnitude?
Not all bad advice is created equal.
It would have to be the fail rate of an average doctor, because if average doctors are the use case, then moving the bar to the fail rate of a bad doctor doesn’t make any sense. You would end up saying worse outcomes = better.
I think the missing piece here is accountability.
If doctors are being encouraged to give harmful, out-of-date advice, who will end up with a class action lawsuit on their hands: the doctors or OE?
True!