• 2 Posts
  • 123 Comments
Joined 1 year ago
Cake day: June 25th, 2023

  • Hehe, good point.

    people need to read more code, play around with it, break it and fix it to become better programmers.

    I think AI bots can help with that. It’s now easier to play around with code you couldn’t have written yourself, and to quickly explore different approaches. And while you might shy away from asking your colleagues a noob question, ChatGPT will happily elaborate.

    In the end, it’s just one more tool in the box. We need to learn when and how to use it wisely.


  • Activists (try to) do that as well. But it’s much harder to get close to a rich person or their property than it is to do something in public spaces. They, too, have to see what they can do with their limited resources.

    Next, the media coverage is very unequal, as is readers’ interest. You are much more likely to click on an article covering a potentially outrageous action than to read about something which does not bother anyone. Although you can rest assured that these things are tried and done frequently.

    So naturally, to the uninvolved reader, it may seem as if activists don’t do anything but stupid stunts. And naturally, each outsider seems to think they have a much better grasp of strategy and what actions might make sense than the people who are actually involved in these things.

    Of course, a particular action can still be silly. I just want to draw attention to biases at play, in general.

    And if you really have a much better idea how to do something about the climate crisis, then go ahead and shine as an example. Not only would you carry out an actually impactful action (which in itself should be reason enough), you would also show all these rookie activists how to get things done. If your example is convincing, you should see less media coverage of inferior actions.


  • You can add more debug output (log(…)) to narrow it down. Challenge your assumptions! If necessary, check line by line whether all the variables still behave as expected. Or use a debugger, if one is available and you’re familiar with it.

    This takes a few minutes at most and guarantees you’ll find the line where the actual behaviour diverges from your expectations. Then you can search more precisely. But usually the solution is obvious once you have found the exact cause.
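
    For illustration, a minimal sketch in Python (the function, names and values are made up, not from any particular codebase): add a log call after each step you think you understand, and compare the output with what you expect.

    ```python
    import logging

    logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
    log = logging.getLogger(__name__)

    def normalize(prices):
        # Hypothetical function with a suspected bug somewhere inside.
        total = sum(prices)
        log.debug("total=%r", total)      # assumption: total is non-zero
        shares = [p / total for p in prices]
        log.debug("shares=%r", shares)    # assumption: shares sum to 1
        return shares

    data = [3.0, 5.0, 2.0]
    log.debug("input=%r", data)           # first check: is the input what you expect?
    print(normalize(data))
    ```

    The line whose output first surprises you is where your mental model and the actual behaviour part ways.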



  • I think that’s one of the best use cases for AI in programming: exploring other approaches.

    It’s very time-consuming to play out what your codebase would look like if you had decided differently at the beginning of the project, so actually comparing different implementations is very expensive. This incentivizes people to stick to what they know works well. Maybe even more so when they have more experience, which means they really know their approach works well, and they know what can go wrong otherwise.

    Being able to generate code instantly helps a lot in this regard, although it still has to be checked for errors.


  • Yes, and no.

    First and foremost, you need no “justification” for being a decent person. And there are other reasons to be that way, even ones as arbitrary as “I like it this way”.

    Game theory is strongly related to evolution. It is safe to assume that everything we can observe in nature is a successful strategy. So this confirms the statement: Cooperation is a successful strategy. But the other side of the picture also exists: Betrayal is as well.

    What the excerpt omits about the Prisoner’s Dilemma (not sure whether it’s mentioned in the video, which I did not watch now): The Nash Equilibrium can be the overall worst outcome. What does that mean?

    A Nash Equilibrium is a situation in which no player can improve their own outcome by changing only their own strategy. It is therefore a stable state: things keep changing until they have settled there. For the Prisoner’s Dilemma, it can be shown that the Nash Equilibrium is this collectively worst case, where each betrays the other. Yes, they would both score better if they cooperated, but the system still tends towards the state where both play nasty.
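
    To make that concrete, a minimal sketch (the payoffs are the usual textbook years-in-prison numbers, lower is better; they are illustrative, not taken from the video):

    ```python
    C, D = "cooperate", "defect"
    # (my move, their move) -> (my years in prison, their years); lower is better.
    payoff = {
        (C, C): (1, 1),
        (C, D): (3, 0),
        (D, C): (0, 3),
        (D, D): (2, 2),
    }

    def best_response(their_move):
        # The move that minimizes my own prison time, given the other player's move.
        return min([C, D], key=lambda my_move: payoff[(my_move, their_move)][0])

    print(best_response(C), best_response(D))    # defect defect
    print(payoff[(D, D)], "vs", payoff[(C, C)])  # (2, 2) vs (1, 1)
    ```

    Defecting is the best response to either move, so mutual defection is the stable point, even though mutual cooperation would leave both better off.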

    When multiple iterations are played, this changes a bit. It seems that if you don’t just meet once in a lifetime, but can remember the past and have a common future, it makes more sense to cooperate. But there is still a place for uncooperative exploitation.
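
    A minimal sketch of the iterated case, with the same illustrative payoffs repeated so the snippet runs on its own (the strategies and round count are my own choice):

    ```python
    C, D = "cooperate", "defect"
    payoff = {(C, C): (1, 1), (C, D): (3, 0), (D, C): (0, 3), (D, D): (2, 2)}

    def tit_for_tat(opponent_history):
        # Cooperate first, then copy whatever the opponent did last round.
        return opponent_history[-1] if opponent_history else C

    def always_defect(opponent_history):
        return D

    def play(strategy_a, strategy_b, rounds=10):
        years_a = years_b = 0
        moves_a, moves_b = [], []
        for _ in range(rounds):
            a, b = strategy_a(moves_b), strategy_b(moves_a)
            ya, yb = payoff[(a, b)]
            years_a += ya
            years_b += yb
            moves_a.append(a)
            moves_b.append(b)
        return years_a, years_b

    print(play(tit_for_tat, tit_for_tat))      # (10, 10): mutual cooperation pays off
    print(play(always_defect, always_defect))  # (20, 20): mutual defection hurts both
    print(play(tit_for_tat, always_defect))    # (21, 18): the defector still exploits
    ```

    Cooperating pairs do best overall, but the consistent defector still comes out ahead of the cooperator it meets, which is the niche where “evil” strategies survive.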

    So yes, it’s true what you say about “best performing strategies”, but it should be noted that “evil” strategies don’t go extinct either.

    It should be questioned how much these theories can be applied to our lives. I mean questioned, not implying an answer. Either way, I find it interesting how behaviour which we associate with morals emerges in very simple and abstract games.



  • Spzi@lemm.ee to linuxmemes@lemmy.world · Linus does not fuck around
    10 months ago

    That touches on two of my main points:

    1. Treat your volunteers well, or why should they continue volunteering?
    2. Kernel maintainers have plenty of other opportunities.

    I don’t know if they are volunteering or being paid. The other person said they are being paid.

    Either way, no one deserves to be talked down to like that, even if they made a mistake. It’s a matter of respect and self-respect. And for someone as skilled as a kernel developer, it should be trivially easy to find other work in a more appropriate environment.

    That being said, maybe I’m missing something. Torvalds has been known to be like that for a long time (although that seems to be over now). And still, Linux has been developed over decades. So apparently, skilled people flocked around Torvalds, or rather around his project. I’m not entirely sure why, but I’m taking it as a hint that I might be missing something.