• FaceDeer@kbin.social · 9 months ago

    None of which can be used to “kill all humans.”

    Kill a bunch of humans, sure. After which the AI will be shut down and unable to kill any more, and next time we build systems like that we’ll be more cautious.

    I find it a very common mistake in these sorts of discussions to blend together “kills a bunch of people” or even “destroys civilization” with “kills literally everyone everywhere, forever.” These are two wildly different sorts of things and the latter is just not plausible without assuming all kinds of magical abilities.

    • subignition@kbin.social · 9 months ago

      While I appreciate the nitpick, I think it’s likely the case that “kills a bunch of people” is also something we want to avoid…

        • FaceDeer@kbin.social · 9 months ago

        Oh, certainly. Humans in general just have a poor sense of scale, and I think it’s important to keep aware of these sorts of things. I find it comes up a lot in environmentalism, where the lack of nuance between “this could kill a few people” and “this could kill everyone” seriously hampers discussion.

    • young_broccoli@kbin.social · 9 months ago

      Until it becomes more intelligent than us; then we’re fucked, lol

      What worries me more about AI right now is who will be in control of it and how it will be used. I think we are more likely to destroy ourselves by misusing the technology (as we often do) than to be destroyed by the technology itself.

        • Icalasari@kbin.social · 9 months ago (edited)

        One thing that actually scares me about AI is that we get one chance. And there are a bunch who don’t think of repercussions, just profit/war/etc.

        A group can be as careful as possible, but it doesn’t mean shit if their smarter-than-human AI isn’t also the first one out, because as soon as it can improve itself, nothing is catching up.

        EDIT: This also assumes that any group doing so is dumb enough to give it the capability to build its own body. Obviously, one that can’t jump to hardware capable of creating and assembling new parts is much less of a threat, as the thread points out.