I’m vaguely aware of Org-mode but only as an alternative to Markdown. Last time I looked into it, though (years ago), Markdown seemed like a much better option for me for various reasons. Do you have a good argument for why Org-mode is a better choice for common use cases than the relatively universal GitHub-flavored Markdown?
keegomatic
0 Posts · 20 Comments
keegomatic@kbin.social to Programming@programming.dev • Hear me out: A scripting language that compiles to bash or sh (any suggestions?) • 16 points · 2 years ago

Okay, at first I was pretty convinced that this was just the wrong way to accomplish what I thought your goal was. But now, after reading the StackOverflow post and your README, I think this is fascinating and frankly really awesome. What a clever and strange thing, using multiline comments that way, and string no-ops. I think just knowing this exists will cause me to find a reason to use it.
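For anyone who doesn’t want to click through: the trick is in the same family as the classic sh/Python polyglot, where a triple-quoted string is a no-op to Python but parses as the null command plus ordinary shell code to sh. A minimal sketch of the idea (my own toy, not the repo’s actual compiler output):

```python
""":"
# sh executes this block: the line above parses as the null command `:`.
python3 "$0" "$@"   # re-run this same file under Python
exit $?             # sh never reads past this line
":"""
# Python skips everything above as one big string no-op.
print("now running as Python")
```

Run it either way (`sh file.py` or `python3 file.py`) and you end up in the Python branch; each interpreter only “sees” its own half.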
I’ve been using Kagi. It works well. I like it. Costs money, but that’s a positive in my book.
keegomatic@kbin.social to Asklemmy@lemmy.ml • What do you call Marshmallow in your native language? • 17 points · 2 years ago

This one I can really get behind.
keegomatic@kbin.social to Lemmy.World Announcements@lemmy.world • Lemmy.world Rammy Statement • 2 points · 2 years ago

Not really an issue. If you want to see this content from defederated instances that everyone else finds obnoxious or disruptive, then you can either browse from an instance that doesn’t defederate that content, or spin up your own personal instance to browse from. It’s easy to move to a different instance. Your choice.
keegomatic@kbin.social to Technology@lemmy.world • The only way to avoid Grammarly using your data for AI is to pay for 500 accounts • 353 points · 2 years ago

I see this complaint a lot, but honestly I don’t quite understand what the big deal is. Not everyone is subscribed to the same communities. Personally, I’d love a feature on kbin/lemmy that rolled up duplicate posts on the client (a sketch of what I mean is below), but it’s really not that annoying for me to see a couple dupes in my feed if they’re posted in relevant communities /shrug
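To be concrete about the rollup idea: all the client would need to do is bucket posts by a normalized link before rendering the feed. A minimal sketch, assuming (hypothetically) that the API hands back dicts with "url" and "community" keys:

```python
from collections import defaultdict
from urllib.parse import urlsplit

def rollup(posts):
    """Collapse posts that share one link into a single feed entry."""
    groups = defaultdict(list)
    for post in posts:
        # Normalize so scheme and trailing slashes don't split groups.
        parts = urlsplit(post["url"])
        groups[(parts.netloc.lower(), parts.path.rstrip("/"))].append(post)
    # Keep the first post of each group and remember where the dupes live.
    return [
        {**dupes[0], "also_in": [p["community"] for p in dupes[1:]]}
        for dupes in groups.values()
    ]
```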
keegomatic@kbin.social to Asklemmy@lemmy.ml • What would be the specific applications of a room temperature superconductor? • 3 points · 2 years ago

You’ve misunderstood me. None of those things are what that commenter is referring to. It’s not about improving another energy storage technology by using superconductors; it’s about having a room temperature, ambient pressure version of an existing technology that we already use superconductors for.
keegomatic@kbin.social to Asklemmy@lemmy.ml • What would be the specific applications of a room temperature superconductor? • 2 points · 2 years ago

I think what they’re referring to is the idea that superconductors can trap current effectively indefinitely; more like replacing a battery with a capacitor than enhancing existing battery chemistry.
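That’s the principle behind superconducting magnetic energy storage: a persistent current in a closed superconducting loop doesn’t measurably decay, so a coil just holds its energy until you tap it. The stored energy is the ordinary inductor formula:

$$E = \tfrac{1}{2} L I^2$$

As a rough illustrative number (my figures, not from any paper): a 10 H coil carrying 1 kA holds ½ · 10 · (10³)² = 5 MJ, with essentially no self-discharge for as long as it stays superconducting.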
keegomatic@kbin.social to Technology@beehaw.org • Failed replication of claimed superconductor reported on arxiv • 6 points · 2 years ago

Got a source? When I first read about this, people were cautiously optimistic, partly because the head researcher was well-respected.
keegomatic@kbin.social to Technology@beehaw.org • Failed replication of claimed superconductor reported on arxiv • 52 points · 2 years ago

> our compound shows greatly consistent x-ray diffraction spectrum with the previously reported structure data
Uhh, doesn’t look like it to me. This paper’s X-ray diffraction spectrum looks pretty noisy compared to the one from the original paper, with some clear additional/different peaks in certain regions. That could potentially affect the result. I was under the impression from the original paper that a subtle compression of the lattice structure was pretty important to the formation of quantum wells for superconductivity, so if the X-ray diff isn’t spot on, I’ll wait for some more failures before calling it busted.
keegomatic@kbin.social to Technology@lemmy.world • An indepth explanation of how LLMs work with an minimum of jargon • 101 points · 2 years ago

This is a really terrific explanation. The author puts some very technical concepts into accessible terms, but not so far from reality as to cloud the original concepts. Most other attempts I’ve seen at explaining LLMs or any other NN-based pop tech are either waaaay oversimplified, heavily abstracted, or are meant for a technical audience and are dry and opaque. I’m saving this for sure. Great read.
keegomatic@kbin.social to Technology@lemmy.world • Brands that don't buy enough Twitter ads will lose verification • 4 points · 2 years ago

Fair enough!
keegomatic@kbin.social to Technology@lemmy.world • Brands that don't buy enough Twitter ads will lose verification • 13 points · 2 years ago

I’m not saying this to be an asshole, because I’m happy that you got to the right conclusion eventually, but I have to clarify for history’s sake: if you thought Trump was playing 4D chess in 2015–2016, then you were being duped. Most of us understood what he was from the get-go. Claims of 4D chess have always been stupid.
Again, I’m happy that you figured it out. Everyone makes mistakes. But “we” didn’t think he was playing 4D chess. The hypothesis about Musk/Twitter above is hardly the same.
keegomatic@kbin.social to Technology@lemmy.world • AI does not exist but it will ruin everything anyway • 355 points · 2 years ago

I honestly only made it a few minutes in, and there is probably plenty of merit to the rest of her perspective. But… I just couldn’t get past the “AI doesn’t exist” part. I get that you don’t know or care about the difference and you associate the term “AI” with sci-fi-like artificial sentience/AGI, but “AI” has been used for decades to refer to things that mimic intelligence, not just full-on artificial general intelligence. The algorithms governing NPC behavior and pathfinding in video games are AI, and that’s a perfectly accurate description. SmarterChild was AI… even ELIZA was AI. Stuff like GAN models and LLMs are certainly AI. The goalposts for “intelligence” have moved farther and farther back with every innovation. The AI we have now was fantasy just 20 years ago. Even just five years ago, to most people.
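To underline how low that bar has always sat: ELIZA was, at its core, a handful of pattern-and-reflection rules. A toy sketch of the idea (my own made-up rules, not Weizenbaum’s actual DOCTOR script):

```python
import re

# ELIZA-style rules: match a pattern, reflect part of it back.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bbecause (.+)", re.I), "Is that the real reason?"),
]

def respond(text: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(match.group(1))
    return "Tell me more."  # fallback when no rule matches

print(respond("I feel like nobody listens"))
# -> "Why do you feel like nobody listens?"
```

No learning, no model of the world, yet it counted as AI in 1966 and still fits the textbook definition today.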
keegomatic@kbin.social to Technology@lemmy.world • Why AI detectors think the US Constitution was written by AI • 112 points · 2 years ago

That’s not really how LLMs work. You’re basically describing Markov chains. The statement “It’s just a statistical prediction model with billions of parameters” also applies to the human brain. An LLM is much more of a black box than you’re implying.
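For comparison, here is roughly what an actual Markov chain text generator looks like; the entire “model” is a visible lookup table of which word follows which, which is the picture people wrongly apply to LLMs. (A toy sketch: order-2, whitespace tokenization assumed.)

```python
import random
from collections import defaultdict

def train(text: str, order: int = 2):
    """Build the whole 'model': a table mapping each pair of words to
    the words observed to follow it. Nothing is hidden or learned."""
    words = text.split()
    table = defaultdict(list)
    for i in range(len(words) - order):
        table[tuple(words[i:i + order])].append(words[i + order])
    return table

def generate(table, order: int = 2, length: int = 20) -> str:
    out = list(random.choice(list(table)))  # random starting pair
    for _ in range(length):
        followers = table.get(tuple(out[-order:]))
        if not followers:
            break  # dead end: this pair only appeared at the corpus end
        out.append(random.choice(followers))
    return " ".join(out)

corpus = "the cat sat on the mat and the dog sat on the cat on the mat"
print(generate(train(corpus)))
```

The contrast is the point: you can read this table directly off the training data, whereas an LLM’s billions of parameters encode the statistics in a way nobody can inspect line by line.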
keegomatic@kbin.social to Asklemmy@lemmy.ml • Is it normal for a person to "feel" less as they get older? • 4 points · 2 years ago

This really is true. Experiencing it now, myself.
keegomatic@kbin.social to Asklemmy@lemmy.ml • Do I need to remove metadata from pictures before uploading? • 1 point · 2 years ago

Oh, interesting! Thanks for pointing that out. Side note: entries… I hope kbin adopts better language for what to call Reddit-like posts (articles), Twitter-like microblog posts (posts), and comments (entries?). I never would have guessed entries == comments. Maybe this is ActivityPub-specific naming? It reminds me of a past job where we surfaced internal technical names as the names of products and features… it just confused customers.
keegomatic@kbin.social to Asklemmy@lemmy.ml • Do I need to remove metadata from pictures before uploading? • 2 points · 2 years ago

Not a kbin thing… might be an extension, though. I’m on kbin and no automatic mention was added to the top of this comment when I replied to you.
So is your comment. And mine. What do you think our brains do? Magic?
edit: This may sound inflammatory, but I mean no offense.