• 0 Posts
  • 261 Comments
Joined 8 months ago
Cake day: January 15th, 2024

  • If people use the term meme to mean more than macros, then the definition changes. Language reflects the way people communicate. So if a bunch of people use language in a certain, unorthodox way, they are not wrong.

    Sometimes people will call a particular strategy a “meme” when they just mean it’s bad.

    I think you misunderstand what those people are saying. They probably call it a “meme” because players imitate some weird behavior they saw in a video or on Discord or something, without thinking the strategy through. IMHO, “meme” is a far better description of that phenomenon than “bad strategy”, because it captures why people are deploying that strategy.

    Give people a bit more credit, will you?




  • Prunebutt@slrpnk.net to Microblog Memes@lemmy.world · Mozilla leadership. · edited 19 hours ago

    At this point I’ve seen people use meme for something as generic as something being funny, something being bad, literally just images of tweets, etc.

    But that’s what a meme literally is:

    A meme is an idea, behavior, or style that spreads by means of imitation from person to person within a culture and often carries symbolic meaning representing a particular phenomenon or theme. A meme acts as a unit for carrying ideas, symbols, or practices, that can be transmitted from one mind to another through writing, speech, gestures, rituals, or other imitable phenomena with a mimicked theme.

    Source: Wikipedia

    Your definition is describing a macro.










  • Name one instance where that worked when it wasn’t connected to a massive public outcry. Do you remember the Call of Duty “boycott”? Ubisoft is still in business, too.

    Also, the majority of book buyers don’t know or care about this verdict.

    Also also: The publishers won’t even be able to correlate the “lost” revenue from individual people boycotting with their shitty behavior. It’s not “Oh, we sold 1,000 fewer copies of this book than expected because we fought the archive.” It’s more: “Our predictions were off by 1,000 copies. No idea whether that’s because of some tangible factor or just ‘noise’.”






  • Prunebutt@slrpnk.net to memes@lemmy.world · Not all ai is bad, just most of it · edited 23 days ago

    It’s funny you bring up the Luddites, since they actually had the right idea about technology like LLMs. They were highly skilled textile workers who opposed the introduction of dangerous mechanical looms that produced low-quality goods but were so easy to use that a child could work them (which was the point, because the owners wanted to employ children). They only got their reputation as backward, anti-technology lunatics afterwards. They were actually concerned about low-quality technology being deployed to weaken workers’ rights, cheapen products, and make bosses even richer. That’s actually the main issue I have with what’s happening with AI.

    There’s a book by Brian Merchant called “Blood in the Machine” on the topic, if you’re interested. He’s also been on a bunch of podcasts, if you’re not a big reader.

    I’m referring to “bullshit” in the sense argued for in this paper:

    Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs.

    The technology is neat. I’ll give you that. But it’s incredibly overhyped.


  • Prunebutt@slrpnk.net to memes@lemmy.world · Not all ai is bad, just most of it · edited 23 days ago

    If you think LLMs suck, I’m guessing you haven’t actually used telephone tech support in the past 10 years. That’s a version of hell I wish on very few people.

    I’m specifically claiming that they’re bullshit machines, i.e. they’re generating synthetic text without context or understanding. My experience with search engines and telephone support is way better than what any LLM has fed me.

    There have already been cases where phone operators were replaced with LLMs that gave dangerous advice to anorexic patients.