• 9 Posts
  • 24 Comments
Joined 4 months ago
Cake day: April 4th, 2025


  • Can somebody summarize the issue? I was under the impression that Wayland and Xorg are different projects? So what is the incentive for people to stop using X11? It is also not like Python 2, where any effort to support it further would divert resources from the Python developers working on Python 3. (And compare that to the Perl 6 developers renaming it “Raku” and continuing to support Perl 5, or the SBCL developers just quietly adding support for Unicode - Python 3’s most consequential change - without breaking existing stuff?)

    One more thing: we have seen companies exerting influence on Web standards, like HTTP/2. Yes, it is still an open standard and supported by FLOSS software - but one cannot deny that many developments in the modern web, like advertising, tracking, data collection, and centralization, are not in the interest of users, and this is why the interests behind specific standards matter. Technology is not free of interests, and technological change is not automatically in the interest of users.


  • It is very interesting to see how, with Rust and Guix, there is some convergence between programming worlds which so far have been rather separate universes. For example, Rust makes it easy to write modern system libraries which previously would have been written in C, the Linux kernel is slowly adopting Rust, and Guix makes it easy to use such libraries in strongly, dynamically typed languages like Guile, Racket, or Python.

    For the general programming community, the promise is that Guix kind of solves the packaging and dependency resolution problem for multi-language projects. And it is making good strides - Guix contains over 50,000 packages now, not counting the nonguix channels which add e.g. non-free firmware. (Just for convenience, here is how to install the Guix package manager on Arch.)





  • Ah still rolling out the old “stochastic parrot” nonsense I see.

    It is a bunch of stochastic parrots. It just happens frequently that the words they are parroting were originally written by a bunch of intelligent people who were knowledgeable in their fields.

    Note this doesn’t make the parrots intelligent - no more than a book written by Einstein to explain special relativity has any intelligence of its own. Einstein was intelligent, his words transport his intelligent ideas, but the book conveying them to other people (that is, the printed pages with a cardboard cover) is as dumb as a stone. You would not ask a piece of cardboard to solve a math problem, would you?


  • Responding to another comment in opensource@lemmy.ml:

    Writing code is itself a process of scientific exploration; you think about what will happen, and then you test it, from different angles, to confirm or falsify your assumptions.

    What you confuse here is doing something that can benefit from applying logical thinking with doing science. For example, arithmetic is part of mathematics, and mathematics is a science. But summing numbers is not necessarily doing science. And if you roll, say, octal dice to see if the result happens to match an addition task, that is certainly not doing science - and no, the dice still can’t think logically and certainly don’t do math, even if the result sometimes happens to be correct.

    For the dynamic vs static typing debate, see the article by Dan Luu:

    https://danluu.com/empirical-pl/

    But this is not the central point of the above blog post. The central point is that, because LLMs by their very nature produce statistically plausible output, self-experimenting with them subjects one to very strong psychological biases (the Barnum effect). Therefore it is, first, not even possible to assess their usefulness for programming by self-experimentation(!), and second, it is even harmful, because these effects lead to self-reinforcing and harmful beliefs.

    And the quibbling about what “thinking” means just shows that the pro-AI argument has degraded into a debate about belief - the argument has become “but it seems to be thinking to me”, even though it is neither technically possible nor observed in practice that LLMs apply logical rules, derive logical facts, explain their output by reasoning, are aware of what they ‘know’ and don’t ‘know’, or optimize decisions for multiple complex and sometimes contradictory objectives (which is absolutely critical to any sane software architecture).

    What would be needed here are objective, controlled experiments on whether developers equipped with LLMs can produce working and maintainable code any faster than developers not using them.

    And the very likely result is that the code which they produce using LLMs is never better than the code they write themselves.





  • What I find interesting is that move semantics silently add something to C++ that did not exist before: invalid objects.

    Before, if you created an object, you could design it so that it kept all invariants until it was destroyed. I’d even argue that it is the true core of OOP that you get data structures with guaranteed invariants - a vector or hash map or binary heap never ceases to guarantee its invariants.

    But now, you can construct complex objects and then move their data away with std::move().

    What happens to the invariants of these objects?
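
    A minimal sketch of what I mean, using std::vector<std::string> as a stand-in for a more complex class (the standard only guarantees that a moved-from standard-library object is left in a “valid but unspecified” state):

        #include <iostream>
        #include <string>
        #include <utility>
        #include <vector>

        int main() {
            std::vector<std::string> names{"alice", "bob"};

            // Invariant so far: 'names' holds the registered users.
            std::vector<std::string> sink = std::move(names);

            // 'names' still exists and may be destroyed or assigned to,
            // but it is now in a "valid but unspecified" state - the
            // higher-level invariant is silently gone.
            std::cout << "size after move: " << names.size() << '\n';

            names.push_back("carol");  // compiles fine, semantically dubious
            return 0;
        }

    The compiler accepts all of this without complaint; nothing in the type system marks the moved-from object as hollowed out, and that is exactly the new kind of state that did not exist before move semantics.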




  • Did you ever notice that when intelligent engineers talk about designs (or, quite generally, when intelligent people talk about consequential decisions they have taken), they talk about their goals, about the alternatives they had, about what they knew about the properties of these alternatives and how those weighed against their goals, about which alternative they chose in the end, and about how they addressed the inevitable difficulties they encountered?

    For me, this is a very telling sign of intelligence in individuals. And truly good engineering organizations do collect and treasure that knowledge - it is path-dependent, and you cannot quickly and fully reproduce it once it is lost. More importantly, some fundamental reasons for your decisions and designs might change, and you might have to revise them. Good decisions also have a quality of stability: the route taken does not change dramatically when an external factor changes a little.

    Now compare that to letting a routing app automatically plan a route through a dense, complex suburban train network. The route you get will likely be the fastest one, with the implicit assumption that this is of course what you want - but any small hiccup or delay in the transport network can well make it the slowest option.






  • If you walk around in my city and open your eyes, you will see that half of the bars and restaurants are closed because there is a shortage of even unskilled staff - restaurants didn’t pay people enough, and they now work in other sectors.

    And yes, software developers are leaving jobs with unreasonable demands and shitty work conditions. Not least because preserving mental health is more important. Go, for example, to the news.ycombinator.com forum and just search for the keyword “burnout”. That’s becoming a massive problem for companies, because rising complexity is not matched by adequate organizational practices.

    And AI is not going to help with that - it is already massively increasing technical debt.


  • It’s the Dunning-Kruger effect.

    And it’s fostered by a massive amount of spam and astroturfing coming from “AI” companies, lying that LLMs are good at this or that. Sure, algorithms like neural networks can recognize patterns. Algorithms like backtracking can play chess or solve or transform algebraic equations. But these are not LLMs, and LLMs will not and cannot replace software engineering.

    Sure, companies want to pay less for programming. But they don’t pay software developers to generate some gibberish in source-code syntax; they need working code. And this is why software engineers and good programmers will not only remain scarce but will become even scarcer.

    And companies that don’t pay six-figure salaries to developers will find that experienced developers will flat out refuse to work on AI-generated codebases, because they are unmaintainable and lead to burnout and brain rot.



  • The early stages of a project are exactly where you should think really hard and long about what exactly you want to achieve, what qualities you want the software to have, what the detailed requirements are, how you will test them, and what the UI should look like. And from that, you derive the architecture.

    AI is fucking useless at all of that.

    In all complex planned activities, laying the right groundwork and foundations is essential for success. Software engineering is no different. You wouldn’t order a bricklayer’s apprentice to draw the plans for a new house.

    And if your difficulty is lacking detailed knowledge of a programming language, it might be - depending on the case! - the best approach to write a first prototype in a language you know well, so that your head is free to think about the concerns listed in the first paragraph.