Here comes the push.

  • Dr. Dabbles@lemmy.world · 11 months ago

    Not at all useful something like running neural networks

    Um. lol What? You may want to do your research here, because you’re so far off base I don’t think you’re even playing the right game.

    There’s a reason why datacenters don’t lease ASIC instances.

    Ok, so you should just go ahead and tell all the ASIC companies then.

    https://www.allaboutcircuits.com/news/intel-and-google-collaborate-on-computing-asic-data-centers/

    https://www.datacenterfrontier.com/servers/article/33005340/closer-look-metas-custom-asic-for-ai-computing

    https://ieeexplore.ieee.org/document/7551392

    Seriously. You realize that the most successful TPUs in the industry are ASICs, right? And that all the “AI” components in your phone are too? What are you even talking about here?

    • just_another_person@lemmy.world · 11 months ago

      TPUs are tied to specific model frameworks, and engineers avoid them for that reason. The most successful adoptions so far are vendor-locked NN models à la Amazon (Trainium) and Google (Coral), and neither has wide adoption since their scope is limited. GPUs being flexible in this arena is exactly why companies like OpenAI are struggling to justify the cost of using them over TPUs: GPUs are easy to get running up front, but the cost is insane, and TPUs are even more expensive in most cases. TPUs are also inflexible should you need to do something like multi-model inference (detection+evaluation+result…etc).
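
      To make the lock-in concrete, here’s a minimal sketch of what Coral (Edge TPU) inference looks like, assuming the standard tflite_runtime + libedgetpu setup; the model path is hypothetical. The model has to be a quantized TFLite flatbuffer run through Google’s edgetpu_compiler, and anything the compiler can’t map falls back to the CPU:

      ```python
      # Minimal Coral/Edge TPU inference sketch (hypothetical model path).
      # The model must be an int8 TensorFlow Lite flatbuffer compiled with
      # Google's edgetpu_compiler -- that's the framework lock-in in practice.
      import numpy as np
      import tflite_runtime.interpreter as tflite

      interpreter = tflite.Interpreter(
          model_path="detector_int8_edgetpu.tflite",
          experimental_delegates=[tflite.load_delegate("libedgetpu.so.1")],
      )
      interpreter.allocate_tensors()

      inp = interpreter.get_input_details()[0]
      out = interpreter.get_output_details()[0]

      # Edge TPU models expect quantized uint8/int8 tensors, another tie to
      # this specific toolchain.
      frame = np.zeros(inp["shape"], dtype=inp["dtype"])
      interpreter.set_tensor(inp["index"], frame)
      interpreter.invoke()
      result = interpreter.get_tensor(out["index"])
      ```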

      As I said, ASICs are single-purpose, so you’re stuck running a limited model engine (TensorFlow) and instruction set. They also take a lot of engineering effort to design, so unless you’re going all-in on a specific engine and betting it’ll still be the right one years from now, it’s short-sighted. If you read up, you’ll see the most commonly deployed edge boards in the world are…Jetsons.

      Enter FPGAs.

      FPGAs show speedups for certain things like transcoding and inference in the 2x-5x range for specific workloads, and much higher for ML purposes and in-memory datasets (think Apache Ignite+Arrow workloads), at a massive reduction in power and cooling, so they’re obviously very attractive for datacenters to put into production. The newer slew of chips is even reprogrammable “on the fly”, meaning a simple context switch and flash can take milliseconds, and multi-purpose workloads can coexist in a single application, where this was problematic before.
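
      For a rough sketch of the “reprogram on the fly” point (assuming a Zynq/PYNQ-style board; the bitstream file names are hypothetical), swapping what the fabric does is just loading a different overlay, so the same card can flip between, say, a transcode job and an inference job:

      ```python
      # Sketch of runtime reconfiguration on a PYNQ-style FPGA board.
      # transcode_accel.bit / inference_accel.bit are hypothetical pre-built designs.
      from pynq import Overlay

      def run_transcode_batch(jobs):
          ol = Overlay("transcode_accel.bit")   # flash the fabric with design #1
          # ... drive the transcoder IP blocks exposed on `ol` ...

      def run_inference_batch(jobs):
          ol = Overlay("inference_accel.bit")   # re-flash the fabric with design #2
          # ... drive the inference IP blocks exposed on `ol` ...

      # Same silicon, two workloads: the "context switch" is the Overlay() load,
      # not a new chip.
      ```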

      So unless you’ve got some articles about the most prescient AI companies currently using GPUs and moving to ASIC, the field is wide open for FPGA, and the datacenter adoption of such says it’s the path forward unless Nvidia starts kicking out more efficient devices.

      • Dr. Dabbles@lemmy.world · 11 months ago

        Now ask OpenAI to type out for you what the drawbacks of FPGAs are. Also, the newest slew of chips is using partially charged NAND gates instead of FPGAs.

        Almost all the ASICs being used right now implement the basic math functions, activations, etc., and the higher-level work happens in more generalized silicon. You cannot get the transistor densities necessary for modern accelerator work in an FPGA.

        • just_another_person@lemmy.world · 11 months ago

          Friend, I do this for a living, and I have no idea why you’re even bringing gating into the equation, because it doesn’t even matter.

          I’m assuming you’re a big crypto fan, because that’s about all I could say of ASIC in an HPC type of environment to be good for. Companies who pay the insane amounts of money for “AI” right now want a CHEAP solution, and ASIC is the most short-term, e-wastey, inflexible solve to that problem. When you get a job in the industry and understand the different vectors, let’s talk. Otherwise, you’re just spouting junk.

          • Dr. Dabbles@lemmy.world · 11 months ago

            I’m assuming you’re a big crypto fan

            Swing and a miss.

            because that’s about all I could say of ASIC in an HPC type of environment to be good for

            Really? Gee, I think switching fabrics might have a thing to tell you. For someone that does this for a living, to not know the extremely common places that ASICs are used is a bit of a shock.

            want a CHEAP solution

            Yeah, I already covered that in my initial comment, thanks for repeating my idea back to me.

            and ASIC is the most short-term

            Literally being attached to the Intel tiles in Sapphire Rapids and beyond. Used in every switch, network card, and millions of other devices. Every accelerator you can list is an ASIC. Shit, I’ve got a Xilinx Alveo 30 in my basement at home. But yeah, because you can get an FPGA instance in AWS, you think ASICs aren’t used. lmao

            e-wastey

            I’ve got bad news for you about ML as a whole.

            inflexible

            Sometimes the flexibility of a device’s application isn’t in the device itself, but in how it’s used. Again, if I can do thousands, tens of thousands, or hundreds of thousands of integer operations in a tenth of the power and a tenth of the clock cycles, then load those results into a bank of activation functions that can do the same, and all I have to do is move that data with HBM and perhaps add some cheap ARM cores, bridge all of this into a single SoC product, and sell it on the open market, well, then I’ve created every single modern ARM product that has ML acceleration. And also Nvidia’s latest products.
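
            As a back-of-the-napkin illustration (plain NumPy, purely illustrative, not any vendor’s actual datapath), the block I’m describing is int8 multiply-accumulate into int32, a requantize, and a clamp-style activation; that’s the kind of thing you burn into fixed-function silicon and then stitch together with HBM and a few ARM cores:

            ```python
            # Illustrative model of a fixed-function int8 MAC + activation block;
            # real accelerators hardwire this datapath instead of running code.
            import numpy as np

            def int8_linear_relu(x_q, w_q, scale, zero_point=0):
                # Accumulate in int32 so the sums don't overflow (the MAC array's job).
                acc = x_q.astype(np.int32) @ w_q.astype(np.int32)
                # Requantize back toward int8 range (the scale/shift stage).
                y = np.round(acc * scale).astype(np.int32) + zero_point
                # ReLU in integer hardware is just a clamp.
                return np.clip(y, 0, 127).astype(np.int8)

            x = np.random.randint(-128, 128, size=(1, 64), dtype=np.int8)
            w = np.random.randint(-128, 128, size=(64, 32), dtype=np.int8)
            out = int8_linear_relu(x, w, scale=0.002)
            ```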

            Woops.

            When you get a job in the industry

            I’ve been a hardware engineer for longer than you’ve been alive, most likely. I built my first FPGA product in the 90s. I strongly suspect you just found this hammer and don’t actually know what the market as a whole entails, let alone the long LONG history of all of these things.

            Do look up ASICs in switching, BTW. You might learn something.

            • just_another_person@lemmy.world · 11 months ago

              Let’s just shut this down right now. If you built FPGAs ever, it was in college in the 90s, at an awful program of a US university that trained you in SQL on the side and had zero idea of how hardware works. I’m sorry for that.

              The world has changed since 30 years ago, and the future of integer operations is in reprogrammable chips. All the benefit of a fab chip, and none of the downside in a cloud environment.

              The very idea that you think all these companies are looking to design and build their own single purpose chips for things like inference shows you have zero idea of where the industry is headed.

              You’re only describing how ASIC is used in switches, cool. That’s what it’s meant for. That’s not how general use computing works in the world anymore, buddy. It’s never going to be a co-proc in a laptop that can load models and do general inference, or be a useful function for localized NN. It’s simply for the single purpose uses as you said.

              • Dr. Dabbles@lemmy.world · 11 months ago

                I mean, you’re such an absolute know-nothing that it’s hilarious. Nice xenophobic bullshit sprinkled in, too. Sorry, no university for me, let alone FPGA in university in the 90s. When my friends were in university, they were still spending their time learning Java.

                The world has changed since 30 years ago

                Indeed. And people like me have been there every step of the way. Your ageism is showing.

                and the future of integer operations is in reprogrammable chips

                Yes, I remember hearing this exact sentiment 30 years ago. Right around the time we were hearing (again) how neural networks were going to take over the world. People like you are a dime a dozen and end up learning their lessons in a painfully humbling experience. Good luck with that, I hope you take it for the lesson it is.

                All the benefit of a fab chip

                Except the amount of wasted energy, and extreme amount of logic necessary to make it actually work. You know. The very fucking problem everybody’s working hard to address.

                The very idea that you think all these companies are looking to design and build their own single purpose chips

                The very idea that you haven’t kept up with the industry and how many companies have developed their own silicon is laugh out loud comedy to me. Hahahaha. TSMC has some news for you.

                You’re only describing how ASIC is used in switches

                Nope, I actually described how they are used in SoCs, not in switching fabrics.

                That’s not how general use computing works in the world anymore, buddy

                Except all those Intel processors I mentioned, those ARM chips in your iPhones and Pixels, the ARM processors in your MacBooks. You know. Real nobodies in the industry.

                It’s never going to be a co-proc in a laptop that can load models and do general inference, or be a useful function for localized NN.

                Intel has news for you. It’s impressive how in touch you pretend to be in “the industry” but how little you seem to know about actual products being actually sold today.

                Hey, quick question. Does Nvidia have FPGAs in their GPUs? No? Hmm. Is the H100 just a huge set of FPGAs? No? Oh, weird. I wonder why, since you in all your genius have said that’s the way everybody’s going. Strange that their entire product roadmap shows zero FPGA on their DPUs, GPUs, or on their soon-to-arrive SoCs. You should call Jensen, I bet he has so much to learn from a know-it-all like you that has some amazing ideas about US universities. Hey, where is it that all these tech startup CEOs went to university?

                Tell you what. Don’t bother responding, nothing you’ve said holds any water or value.

                  • Dr. Dabbles@lemmy.world · 11 months ago

                    They can be a xenophobic, ageist jagoff all they want. I’m not engaging with them anymore. They’re the carpenter that thinks a hammer solves all problems, if we pretend they actually did anything with FPGA as their day job.

                • just_another_person@lemmy.world · 11 months ago

                  Because literally everyone else saw the writing on the wall and is preparing FPGA chips EXCEPT for NVIDIA. 🤦

                  NVidia is just now trying to make their own ARM chips ffs. 5 years late. You’re dated and outmoded. Get with the future.

            • ZahzenEclipse@kbin.social · 11 months ago (edited)

              You’re such a dick for no reason. It definitely bolsters your claim that you’re an old-school tech guy lol

              • Dr. Dabbles@lemmy.world · 11 months ago

                Not for no reason. They made claims, I provided links, they whined about it. They provided zero links backing up their 40 year old claim that FPGA would replace anything that didn’t run away fast enough.