• MxM111@kbin.social · 9 months ago

    While it is not alive, whether it is a mind is not clear-cut. It could be called a kind of mind, a mind different from that of a human.

    • huginn@feddit.it · 9 months ago

      Unless you want to call the predictive text on your keyboard a mind, you really can’t call an LLM a mind. It is nothing more than a linear progression from that, and it has been mathematically proven not to show any form of emergent behavior.

      • Kogasa@programming.dev · 9 months ago

        No such thing has been “mathematically proven.” Emergent behavior is the notable characteristic of ML models; the whole point is that their ability to do anything at all is emergent.

        • huginn@feddit.it · 9 months ago

          Here’s a paper explicitly proving:

          1. No emergent properties (the apparent emergence is illusory, an artifact of the chosen measures)
          2. Predictable linear progress with model size

          https://arxiv.org/abs/2304.15004

          The field changes fast; I understand it’s hard to keep up.

          • Kogasa@programming.dev · 9 months ago

            Sure, if you define “emergent abilities” just so. It’s obvious from context that this is not what I described.

              • Kogasa@programming.dev · 9 months ago

                Their paper uses terminology that makes sense in context. It’s not a definition of “emergent behavior.”

      • MxM111@kbin.social · 9 months ago

        I do not think it is a “linear” progression; an ANN is by definition nonlinear. Nor do I think anything has been “mathematically proven.” If I am wrong, please provide a link. A quick sketch of the nonlinearity point follows below.
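
        A minimal numpy sketch of what I mean (the shapes and numbers are made up, purely for illustration): stacking layers without an activation collapses to a single linear map, while a ReLU between them breaks linearity.

        ```python
        import numpy as np

        rng = np.random.default_rng(0)
        W1 = rng.normal(size=(4, 3))  # first "layer"
        W2 = rng.normal(size=(2, 4))  # second "layer"
        x = rng.normal(size=3)
        y = rng.normal(size=3)

        # Without an activation, W2 @ (W1 @ v) == (W2 @ W1) @ v: the stacked
        # network is exactly one linear map, so f(x + y) == f(x) + f(y).
        linear = lambda v: W2 @ (W1 @ v)
        print(np.allclose(linear(x + y), linear(x) + linear(y)))  # True

        # A ReLU between the layers makes the network genuinely nonlinear:
        # additivity fails.
        relu_net = lambda v: W2 @ np.maximum(0.0, W1 @ v)
        print(np.allclose(relu_net(x + y), relu_net(x) + relu_net(y)))  # False in general
        ```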

          • MxM111@kbin.social · 9 months ago

            Thank you. This paper, though, does not state that there are no emergent abilities. It only states that one can introduce a metric with respect to which the emergent ability behaves smoothly rather than threshold-like. While interesting, that only suggests that things like intelligence are smooth functions of scale; so what? Other metrics show exponential or threshold dependence, and which metric is right depends only on how one will use it. And there is no law that emergent properties have to be threshold-like. Quite the opposite: in nearly all the examples from physics that I know, emergence appears gradually.
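
            A toy version of the metric effect in question (the accuracy curve and answer length are made up): if per-token accuracy improves smoothly with scale, an all-or-nothing exact-match metric over a long answer can still look like a sudden threshold.

            ```python
            import numpy as np

            # Made-up smooth per-token accuracy as a function of model "scale".
            scale = np.arange(1, 11)
            per_token = 1 - 0.5 * np.exp(-0.4 * scale)

            # An exact-match metric over a 30-token answer is all-or-nothing:
            # every token must be right, so the score is per_token ** 30.
            exact_match = per_token ** 30

            for s, p, em in zip(scale, per_token, exact_match):
                print(f"scale={s:2d}  per-token={p:.3f}  exact-match={em:.4f}")
            # per-token rises gradually; exact-match hugs zero, then climbs
            # steeply -- an apparent threshold created by the metric, not the model.
            ```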

    • Corgana@startrek.website · 9 months ago

      Sorry you’re getting downvoted; you’re correct. It’s not implausible that generative AI systems may have some kind of umwelt, but it is highly implausible that it would resemble that of a human (or any animal). I think people are getting hung up on this because they assume that a lack of understanding of language implies a lack of any conscious experience. Humans do lots of things without understanding how they might be understood by others.

      To be clear, I don’t think these systems have experience, but it’s impossible to rule out until an actual robust theory of mind comes around.