• melroy@kbin.melroy.org · 3 days ago

        Thank you! That is indeed a valid point. I was hoping more people would bring up this remark. Do you have any other questions, or predictions you would like to hear, so that we don’t get “surprises” in the field of technology again?

          • melroy@kbin.melroy.org · 1 day ago

            Sure!

            • More and more (AI) spyware / malware is getting injected into projects and operating systems without the user’s consent: mobile phones, laptops, desktop PCs, smart devices, etc. This comes from companies, but also from governments (no, not just China, but also the US and the EU).
            • The AI bubble itself will burst for the “normal users” and for most companies, who won’t benefit from AI / LLMs as much as they thought they would. This will only become apparent after several years, once the highly skilled developers have left the companies and you are left with software engineers using AI tools that generate wrong code. The damage LLMs (like AI code generation) are doing, and will continue to do in the coming years, is very opaque, but it won’t be nice. And we are not suddenly getting AGI either.
            • More research and effort will be put into alternative computers, like computers based on biology, using living cells. After all, nature is so much more efficient than our current technologies. This could fix the energy demand issues we now see with AI.
            • Biological computers will then also create huge moral issues. How do we know the cells are not becoming aware? How do we know they won’t feel pain, or feel trapped? After all, we humans don’t even know how consciousness and self-awareness really work.
            • Users & companies want to get back in control over 5 or 15 years from now. So their could be a big move back from “Cloud” to on-prem. You are already seeing this now with the fediverse.
            • The internet is becoming too centralized and controlled by governments, which block and overrule public DNS IPs. The only answer would be to create a much more decentralized internet alternative, perhaps 20 or 30 years from now (so we can still talk with each other about issues in our governments, for example). The current internet is just too fragile, and the root of the problem is already DNS, meaning you basically need to start from scratch.
            • Within 80 years, Windows might only be used by corporate businesses; most people might only use Android or some Linux-based distro. This mainly depends on how fast we change our education process, so that young people learn about alternatives, and schools stop promoting and forcing people to use only Microsoft products. If schools won’t change, then we have a huge issue and this prediction won’t hold.
            • Google will be split into multiple companies.
            • Microsoft might be split into multiple companies as well, but only much later, after Google.
            • … Should I continue or stop here…?

            @[email protected] @[email protected]

            #it #software #ai #predictions

    • Eheran@lemmy.world · 3 days ago

      So you predicted that security flaws in software are not going to vanish with AI?

      • melroy@kbin.melroy.org · 3 days ago

        I predicted that introducing AI to software engineers (especially juniors) will result in overall worse code, since apparently people don’t feel responsible for the genAI code, while I believe the responsibility still lies fully with the humans who deliver that code. On top of that, most devs don’t do good code reviews in general (often due to lack of time or … skill issues). And now we have AI that generates code which is too easily accepted, on top of reviewers who blindly approve it… and no unit tests or integration tests… and then we have the current situation. No wonder this would happen. If you are in software engineering, you know exactly what I’m talking about, especially if you work at larger companies.

        • melroy@kbin.melroy.org · 3 days ago

          My point exactly: now you have genAI code written by an AI that doesn’t know what it is doing, instructed by a developer who doesn’t understand the programming language, and reviewed by a co-worker who doesn’t know what is going on. It’s madness, I tell you!