Much like that comment. Can you give a better example, or express why it’s a bad example? That would bring some quality in.
FYI you can self-host GitLab, for example in a Docker container.
Hehe, right! (technically). Context matters! When talking about fruit, people usually don’t include stellar objects when weighing their options. Still true when taking into consideration that “apples to oranges” is usually metaphorical and not really about fruit.
I like that, especially this insight:
when two things have very few attributes in common or the attributes they can be compared on are very broad, general or abstract, it is harder to compare them.
A melon and a pogo stick are harder to compare, for their defining attributes hardly overlap except in a very abstract way.
Good on you to say “harder to compare” :D
it’s all semantic subjectivity. Poetry compares dissimilar things and equates unequal concepts all the time.
Another thing worth pointing out: subjectivity. I guess that part bothered me too. “cannot be compared” attempts to establish some kind of objective truth, whereas it can only be a subjective opinion.
The reference to poetry was nice, too.
My point works just as well with an arbitrary number of options. Someone could say “These quintillion things cannot be compared”.
The number of options is irrelevant to what I tried to address. Though my examples were only pairs, so sorry for causing confusion.
Thanks for taking the time to write this detailed reply. I guess you’re right about the equivocation and I can see the irony :D
Though I have not fully understood yet. Following your example, the two different concepts are …
What blocks me from fully agreeing is that still, both are comparisons. And they don’t feel so different to me that I would call them different concepts. When I look up examples for equivocations, those do feel very different to me.
I still guess you’re right. If you (or someone else) could help me see the fallacy, I’d appreciate it.
Agreed, yeah. Guess I was taking the word too literally.
Right?
“I would have helped avoiding the apocalypse! But then some random guys sprayed paint on some things!”
Activists (try to) do that as well. But it’s much harder to get close to a rich person or their property than it is to do something in public spaces. They, too, have to see what they can do with their limited resources.
Next, the media coverage is very unequal, as is readers’ interest. You are much more likely to click on an article covering a potentially outrageous action than you are to read about something which does not bother anyone. Although you can rest assured, these things are tried and done frequently.
So naturally, to the uninvolved reader, it may seem as if activists don’t do anything but stupid stunts. And naturally, each outsider seems to think they have a much better grasp of strategy and what actions might make sense than the people who are actually involved in these things.
Of course, a particular action can still be silly. I just want to draw attention to biases at play, in general.
And if you really have a much better idea how to do something about the climate crisis, then go ahead and shine as an example. Not only would you author an actually impactful action (which in itself should be reason enough), you could also show all these rookie activists how to get things done. If your example is convincing, you should see less media coverage about inferior actions.
You can use more debug outputs (log(…)) to narrow it down. Challenge your assumptions! If necessary, check line by line whether all the variables still behave as expected. Or use a debugger, if one is available and you’re familiar with it.
This takes a few minutes tops and guarantees you’ll find the line at which the actual behaviour diverges from your expectations. Then you can make a more precise search. But usually the solution is obvious once you have found the precise cause.
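As a minimal sketch (the function and its bug are made up for illustration), the idea looks like this: add a log call after each step you believe is correct, and the first surprising value points at the offending line.

```python
# Hypothetical example: a function that should average the positive
# numbers in a list but misbehaves. Sprinkle log() calls to check
# each assumption in turn.

def log(label, value):
    print(f"[debug] {label} = {value!r}")

def average_positive(numbers):
    positives = [n for n in numbers if n > 0]
    log("positives", positives)   # assumption 1: did the filter work?
    total = sum(positives)
    log("total", total)           # assumption 2: is the sum right?
    # Both logs look fine for [3, -1, 4], so the divergence must be
    # below: this line crashes when there are no positives at all.
    return total / len(positives)

print(average_positive([3, -1, 4]))  # prints the debug lines, then 3.5
```

Once the logs show exactly where reality departs from your mental model, you can search for that specific symptom instead of the vague overall one.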
Adding a comment so people can experiment more in this thread.
I think that’s one of the best use cases for AI in programming; exploring other approaches.
It’s very time-consuming to play out what your codebase would look like if you had decided differently at the beginning of the project. So actually comparing different implementations is very expensive. This incentivizes people to stick to what they know works well. Maybe even more so when they have more experience, which means they really know this works very well, and they know what can go wrong otherwise.
Being able to generate code instantly helps a lot in this regard, although it still has to be checked for errors.
There’s a very naive, but working approach: Ask it how :D
Or pretend it’s a colleague, and discuss the next steps with it.
You can go further and ask it to write a specific snippet for a defined context. But as others already said, the results aren’t always satisfactory. Having a conversation about the topic, on the other hand, is pretty harmless.
Those LLMs are great fools, but I am just paranoid to use it in that manner.
Exquisite typo. I also agree with everything else you said.
You can do that when you control the frontend UI. Then, you can set up the input field for their name, applying input validation.
But I would rather not rely on telling the user, in hopes they understand and comply. If they have ways to do it wrong, they will.
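To make that concrete, here is a minimal server-side sketch (the field name and the accepted character set are assumptions for illustration): validate the value yourself rather than trusting that the user typed it correctly.

```python
import re

# Hypothetical validation for a "name" field: letters (including
# accented ones), spaces, hyphens and apostrophes, 1-50 characters.
# The exact rules would depend on your application.
NAME_RE = re.compile(r"^[A-Za-zÀ-ÿ' -]{1,50}$")

def validate_name(raw: str) -> str:
    name = raw.strip()
    if not NAME_RE.fullmatch(name):
        raise ValueError(f"invalid name: {raw!r}")
    return name

print(validate_name("  Anna-Lena  "))  # prints "Anna-Lena"
```

The frontend check is just a convenience for the honest user; the server-side check is the one that actually enforces the rule, because anyone can bypass the UI and submit whatever they like.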
Best take imo. Yes, the “bliss” is that we are ruled by ruthless billionaires instead of cruel dictators. At least some of us.
As pointed out in my top level comment, the post is quite one-sided, omitting the dark truths. Cooperation is the overall best strategy, but so is to exploit as much as you can. Both are true, the combination is true.
Yes, and no.
First and foremost, you need no “justification” for being a decent person. And there are other reasons to be that way, as arbitrary as “I like it this way”.
Game theory is strongly related to evolution. It is safe to assume that everything we can observe in nature is a successful strategy. So this confirms the statement: Cooperation is a successful strategy. But the other side of the picture also exists: Betrayal is as well.
What the excerpt omits about the Prisoner’s Dilemma (not sure whether it’s mentioned in the video, which I did not watch now): The Nash Equilibrium can be the overall worst outcome. What does that mean?
A Nash Equilibrium is a situation in which no player can improve their own position. It is therefore a stable state. Things will change until they have settled in a stable state. It can be shown for Prisoner’s Dilemma that the Nash Equilibrium can be the worst case, where each betrays the other. Yes, they would both score better if they cooperated, but the system will still tend towards the state where both play nasty.
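This can be checked directly with the standard one-shot payoff matrix (T=5, R=3, P=1, S=0; the concrete numbers are the conventional textbook values, not from the excerpt):

```python
# One-shot Prisoner's Dilemma with the usual payoffs.
# (my move, their move) -> my score; C = cooperate, D = defect.
PAYOFF = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def best_response(their_move):
    # Which of my moves maximizes my own score, given theirs?
    return max("CD", key=lambda mine: PAYOFF[(mine, their_move)])

# Defecting is the best response to BOTH moves, so neither player can
# improve unilaterally from (D, D): that is the Nash Equilibrium...
assert best_response("C") == "D"
assert best_response("D") == "D"
# ...even though both would score 3 instead of 1 by cooperating.
```

No matter what the other player does, each player is individually better off defecting, so the system settles on mutual defection despite it being the worst collective outcome.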
When multiple iterations are played, this changes a bit. It seems that if you don’t just meet once in a lifetime, but can remember your past and have a common future, it makes more sense to cooperate. But there is still a place for uncooperative exploitation.
So yes, it’s true what you say about “best performing strategies”, but it should be noted that “evil” strategies don’t go extinct either.
It should be questioned how much these theories can be applied to our lives. I mean questioned, not implying an answer. Either way I find it interesting how behaviour which we associate with morals emerges in very simple and abstract games.
One reason might be: Each instance only has a partial knowledge of content in Lemmy. It can be unaware of certain communities on other instances, if your instance has not discovered them yet. Hence the need for all these meta-search tools.
Those are basically two of my main points:
I don’t know if they are volunteering or being paid. The other person said they are being paid.
Either way, no one deserves to be talked down to like that, even if they made a mistake. It’s a matter of respect and self-respect. And for someone as skilled as a kernel developer, it should be trivially easy to find other work in a more appropriate environment.
That being said, maybe I’m missing something. Torvalds has been known to be like that for a long time (although that seems to be over now). And still, Linux has been developed over decades. So apparently, skilled people flocked around Torvalds, or maybe rather his project. Not entirely sure why, but I’m taking it as a hint I might be missing something.
Hehe, good point.
I think AI bots can help with that. It’s easier now to play around with code which you could not write by yourself, and quickly explore different approaches. And while you might shy away from asking your colleagues a noob question, ChatGPT will happily elaborate.
In the end, it’s just one more tool in the box. We need to learn when and how to use it wisely.