

I skimmed the paper, and it seems pretty cool. I’m not sure I quite follow the “diffusion model-based architecture” it mentioned, but it sounds interesting
I’m not talking about the specifics of the architecture.
To the layman, "AI" refers to a range of general-purpose language models that are trained on "public" data and possibly enriched with domain-specific datasets.
There’s a significant material difference between using that kind of probabilistic language completion and a model that directly predicts the results of complex processes (like what’s likely being discussed in the article).
It’s not specific to the article in question, but it is really important for people to not conflate these approaches.
There really needs to be a rhetorical distinction between regular machine learning and something like an LLM.
I think people read this (or just the headline) and assume it's just asking Grok "what interactions will my new drug flavocane have?", whereas these are likely large models built on the mountains of data we have from existing drug trials.
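To make the distinction concrete: the kind of model being described is a predictor fit directly to structured outcome data, not a text completer you prompt. Below is a toy sketch with entirely synthetic data — the features (dose, age), the labeling rule, and the helper name `predict_interaction` are all invented for illustration; only the shape of the approach ("train a supervised model on trial records") is the point.

```python
import math
import random

# Synthetic "trial" records with an invented labeling rule: high dose plus
# high age tends to produce an adverse interaction. Illustrative only.
random.seed(0)
rows = []
for _ in range(300):
    dose = random.uniform(0.0, 1.0)
    age = random.uniform(0.0, 1.0)
    label = 1 if dose + age + random.gauss(0.0, 0.1) > 1.2 else 0
    rows.append((dose, age, label))

# Plain logistic regression via batch gradient descent -- a tiny stand-in
# for the much larger supervised models the comment describes.
w_dose = w_age = bias = 0.0
lr = 0.5
for _ in range(1500):
    g_dose = g_age = g_bias = 0.0
    for dose, age, label in rows:
        p = 1.0 / (1.0 + math.exp(-(w_dose * dose + w_age * age + bias)))
        err = p - label
        g_dose += err * dose
        g_age += err * age
        g_bias += err
    n = len(rows)
    w_dose -= lr * g_dose / n
    w_age -= lr * g_age / n
    bias -= lr * g_bias / n

def predict_interaction(dose, age):
    """Probability of an adverse interaction under the toy model."""
    return 1.0 / (1.0 + math.exp(-(w_dose * dose + w_age * age + bias)))
```

Unlike prompting a chat model, the answer here comes from parameters fit to the (synthetic) outcome data — which is why conflating the two approaches is misleading.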
Because someone keeps taking my blood to fill these jars!
This is a very dishonest headline. Sex trafficking and racketeering charges didn’t stick, and it was “just” prostitution charges. He’ll see jail time, but it’s a relative slap on the wrist
That’s true, but all of their problems with Docker come down to it being Linux-based.
Docker is a layer that runs on top of Linux’s KVM
My understanding is that this is only true for Docker Desktop, and there’s not really any reason to use that on a server.
Sure, since containers use the host’s kernel, Linux containers do need either a Linux host or a VM (which Docker Desktop runs by default), but that’s not particularly unique to Docker.
Is it really vendor lock-in if you can fork it at your whim?
Try rereading the whole tweet; it’s not very long. It’s specifically saying that they plan to “correct” the dataset using Grok, then retrain with that dataset.
It would be way too expensive to go through it by hand
Will find out soon
Working fine for me in the prologue using the default settings in steam
The “de-escalator” is sending me
Please don’t drag the insane down to this level. This is planned and intentional (even if there is factional opposition)
That doesn’t give me a memorable mnemonic though.
tar -eXtract Ze Vucking File
It’s a huge difference
I’m not really trying to argue the technical correctness of these terms, rather their effectiveness as rhetoric.
That was always a dumb argument that no one genuinely found confusing. It was always a red herring.
The Bush administration pushed the “climate change, not global warming” narrative (I’m not saying they invented it, only that they spearheaded the rhetorical framing and made it popular)
It’s undeniable that the end result of changing this framing is that fewer people now believe changes should be made to mitigate the long-term effects of carbon emissions than did 25 years ago.
Yeah, people are broadly dumb, that’s exactly why it’s important rhetorically to make the tone of your message match the severity.
?
Reproducibility of what we call LLMs as opposed to what we call other forms of machine learning?
Or are you responding to my assertion that these are different enough to warrant different language with a counterexample of one way in which they are similar?