

A shitty one, then
I’m an anarcho-communist; all states are evil.
Your local herpetology guy.
Feel free to AMA about picking a pet or about reptiles in general; I have a lot of recommendations for that!
How did you buy 50 for 20?
What are you talking about? This phone is established; this is their 6th one… and the bootloader is unlocked.
It found out who made it, so it knew what to do.
It will never be possible to use this for FTL communication. This is like saying that in 100 years we’ll communicate FTL by pushing on very long steel rods: the push only travels down the rod as a compression wave, at the speed of sound in steel. The problem is fundamental.
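To put rough numbers on the rod analogy (speed values are standard textbook figures; the rod length is just an arbitrary example I picked):

```python
# A push on a steel rod propagates as a compression wave at the speed
# of sound in steel, not instantaneously.
C_LIGHT = 299_792_458      # speed of light in vacuum, m/s
V_STEEL = 5_960            # speed of sound in steel (longitudinal), m/s

rod_length_m = 1.496e11    # 1 AU, an arbitrary example distance

light_delay = rod_length_m / C_LIGHT   # ~499 s, about 8.3 minutes
push_delay = rod_length_m / V_STEEL    # ~2.5e7 s, about 290 days

print(f"light: {light_delay / 60:.1f} min, push: {push_delay / 86400:.0f} days")
```

The push doesn’t just fail to beat light, it loses by five orders of magnitude, and no engineering fixes that.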
No they didn’t; they sent a conventional signal that was encrypted with an entangled particle. Nothing was sent FTL. It’s like if I had two boxes that I know have the same thing in them, an encryption key, traveled across the world, and then sent you a message: you have the other box, and the information in that box didn’t go FTL, you just opened it later.
There is no path to FTL communication here.
Here’s a basic video on the topic: https://www.youtube.com/watch?v=9oBiS_Yb9Ac
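To make the box analogy concrete, here’s a minimal classical sketch (no quantum mechanics at all, just a pre-shared secret; the names and message are made up for illustration):

```python
import secrets

# Two "boxes" prepared together: identical pre-shared keys, then
# carried apart at ordinary sub-light speed.
key = secrets.token_bytes(16)
box_alice, box_bob = key, key

def xor(data: bytes, key: bytes) -> bytes:
    # One-time-pad style XOR; applying it twice recovers the message.
    return bytes(d ^ k for d, k in zip(data, key))

# Opening your box reveals the key "instantly", but nothing travelled
# at that moment; the correlation was packed in before the trip.
ciphertext = xor(b"hello", box_alice)   # sent over a normal channel
assert xor(ciphertext, box_bob) == b"hello"
```

The only thing that ever crosses the distance is the ciphertext, and it goes at light speed or slower.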
That’s not the only way to make meaningful change; getting people to give up on LLMs would also be meaningful change. This does very little for anyone who isn’t Apple.
Meaningful change is not happening because of this paper either; I don’t know why you’re playing semantic games with me, though.
It does need to do that to meaningfully change anything, however.
That’s very true; I’m just saying this paper did not eliminate the possibility and is thus not as significant as it sounds. If they had accomplished that, the bubble would collapse; this will not meaningfully change anything, however.
Also, it’s not as unreasonable as that, because these are automatically assembled bundles of simulated neurons.
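In case “simulated neuron” sounds mysterious, here’s a minimal sketch of what one is (the weights here are arbitrary made-up numbers; real models have billions of these, with the weights found automatically by training rather than written by hand):

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus a bias, squashed by a nonlinearity.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-z))   # sigmoid activation

# A "layer" is just a bundle of these reading the same inputs; training
# is the automated search that assembles useful weight values.
inputs = [0.5, -1.2]
layer_out = [
    neuron(inputs, [0.8, 0.3], bias=0.1),
    neuron(inputs, [-0.4, 1.1], bias=-0.2),
]
print(layer_out)
```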
It is, but this did not prove all architectures cannot reason, nor did it prove that all sets of weights cannot reason.
Essentially, they did not prove the issue is fundamental. And the models have pretty similar architectures; they’re all transformers trained in a similar way. I would not say they have different architectures.
Those particular models. It does not prove the architecture doesn’t allow it at all. It’s still possible that this is solvable with a different training technique and that none of those models are using the right one; that’s what they need to prove wrong.
This proves the issue is widespread, not fundamental.
That indicates that this particular model does not follow instructions, not that it is fundamentally, architecturally incapable.
I think it’s important to note (I’m not an LLM, I know that phrase triggers you to assume I am) that they haven’t proven this is an inherent architectural issue, which I think would be the next step for that assertion.
Do we know that they don’t reason and are incapable of it, or do we just know that for certain problems they jump to memorized solutions? Is it possible to create an arrangement of weights that can genuinely reason, even if the current models don’t? That’s the big question that needs answering. It’s still possible that we just haven’t properly incentivized reasoning over memorization during training.
If someone can objectively answer “no” to that, the bubble collapses.
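As a toy illustration of the memorization-versus-reasoning distinction (this shows only the distinction itself, not a claim about how any real model works):

```python
# Two toy "models" for addition. Both are perfect on the training set;
# only one has learned the rule.
train = {(1, 2): 3, (2, 2): 4, (3, 5): 8}

def memorizer(a, b):
    # Jumps straight to a memorized answer; fails off-distribution.
    return train.get((a, b))

def reasoner(a, b):
    # Applies the general rule, so novel inputs work too.
    return a + b

print(memorizer(3, 5), reasoner(3, 5))    # 8 8     -> identical in-distribution
print(memorizer(40, 2), reasoner(40, 2))  # None 42 -> only novel inputs separate them
```

On the training set the two are indistinguishable, which is exactly why benchmarks alone can’t settle the question.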
No, it won’t hold up for 50 years, but if you don’t want one, don’t get it?
That’s where regulators step in. Do you honestly believe Elon Musk would not be implanting healthy people with Neuralinks if regulators allowed it? They won’t; for a very, very long time this will be tech for people whose lives are so awful that not having one is worse than the things that may go wrong.
Why does it have to? All current BCIs are designed for the disabled; why would this one be an exception?
This isn’t for you; you’re not a paraplegic, are you?
You live long enough to help paraplegics game?
Nah