• 0 Posts
  • 17 Comments
Joined 2 years ago
cake
Cake day: October 24th, 2023

help-circle


  • Yeah just start your own instance on a different planet with situations that only provoke your preferred amount of existential dread!

    Barring that you’re going to be stuck identifying the sources of this widespread misery and trying to help people overcome it. For most people that might be difficult but I don’t know your budget so I won’t assume. There’s also the option of interacting with machines that pose as happier humans but your goal overall seems contrary to growing that 80% by adding yourself unless I misread your intent.



  • In dungeons and dragons there is a type of hybrid character you can play called an Artificer who treats magic more like technology, and there are a ton of examples in popular media that others have mentioned. I do think you have to determine how and if you’ll keep them distinct if that’s important to your plot, but if they developed alongside eachother maybe the technology of that world relies on magic to work.

    Or maybe your magic relies on elder gods that don’t like the mortal hubris of critiquing the gods works so attempts to unravel magic gets you cursed or worse.

    I think they can go together and the way you fit them can even become a plot point!



  • It sounds like they found themselves in a situation they are not prepared to handle, and they are attempting to rush you through a major decision to compensate. It may not be malicious or a scam, and it may be a fluke that is not indicative of the normal pace and handling of their business, but it does not signal a healthy well run organization. If you do choose to proceed, do so with some level of caution and awareness of that fact. Do not give them any money, and if they give you any information that alarms or frightens you, slow the process down to give your self more time to evaluate.






  • Black hat and Defcon just ended and I’ll share my impression from LLM related talks given there. Microsoft VPs charged additional money to CISOs attending the summit talking about how AI will disrupt and be the future and blah blah magical thinking.

    Meanwhile Microsoft engineers and others said things like “this is logarithmic regression for people who are bad at math, and is best for cases where 75% accuracy is good enough. Try to break use cases into as many steps as possible and keep the LLM away from any automation that could have any consequences. These systems have no separation between the control plane and user input, which is re-exposing us to problems that were solved 15 years ago.”

    I think there are some neat possibilities that are lost in marketing hype as venture capitalist anger grows that they might have been scammed by yet another hammer in search of nails.





  • It’s valid to point out that we have difficulty defining knowledge, but the output from these machines are inconsistent at a conceptual level, and you can easily get them to contradict themselves in the spirit of being helpful.

    If someone told you that a wheel can be made entirely of gas do you have confidence that they have a firm grasp of a wheel’s purpose? Tool use is a pretty widely agreed upon marker of intelligence and so not grasping the purpose of a thing that they can describe at great length and exhaustive detail, while also making boldly incorrect claims on occassion should raise an eyebrow.