Just out of curiosity. I have no moral stance on it, if a tool works for you I’m definitely not judging anyone for using it. Do whatever you can to get your work done!

  • Atramentous@lemm.ee
    62 points · 1 year ago

    High school history teacher here. It’s changed how I do assessments. I’ve used it to rewrite all of the multiple choice/short answer assessments that I do. Being able to quickly create different versions of an assessment has helped me limit instances of cheating, but also to quickly create modified versions for students who require that (due to IEPs or whatever).

    The cool thing that I’ve been using it for is to create different types of assessments that I simply didn’t have the time or resources to create myself. For instance, I’ll have it generate a writing passage making a historical argument, but I’ll have AI make the argument inaccurate or incorrectly use evidence, etc. The students have to refute, support, or modify the passage.

    Due to the risk of inaccuracies and hallucination I always 100% verify any AI generated piece that I use in class. But it’s been a game changer for me in education.

    • Atramentous@lemm.ee
      26 points · 1 year ago

      I should also add that I fully inform students and administrators that I’m using AI. Whenever I use an assessment that is created with AI I indicate with a little “Created with ChatGPT” tag. As a history teacher I’m a big believer in citing sources :)

      • limeaide@lemmy.ml
        11 up, 1 down · 1 year ago

        How has this been received?

        I imagine that pretty soon using ChatGPT is going to be looked down upon like using Wikipedia as a source

        • Atramentous@lemm.ee
          14 points · 1 year ago

          I would never accept a student’s use of Wikipedia as a source. However, it’s a great place to go initially to get to grips with a topic quickly. Then you can start to dig into different primary and secondary sources.

          Chat GPT is the same. I would never use the content it makes without verifying that content first.

          • limeaide@lemmy.ml
            1 point · 1 year ago

            Well, the people that use it know that, but to the average person ChatGPT still has a high reputation

    • phillaholic@lemm.ee
      11 up, 1 down · 1 year ago

      Is it fair to give different students different wordings of the same questions? If one wording is more confusing than another could it impact their grade?

      • Wörk@lemmy.world
        1 point · 1 year ago

        Sure it could, but the same issue is present with a single version: some students will get the wording or find it easy, others may not. Having a test in groups to limit cheating is very common and has never led to any problems, as far as my anecdotal evidence goes.

        • phillaholic@lemm.ee
          2 up, 1 down · 1 year ago

          You’re increasing the odds by changing the wording. I don’t see why it’s necessary; just randomizing the order of the questions would suffice.

    • jossbo@lemmy.ml
      5 points · 1 year ago

      I’m a special education teacher and today I was tasked with writing a baseline assessment for the use of an iPad. Was expecting it to take all day. I tried starting with ChatGPT and it spat out a pretty good one. I added to it and edited it to make it more appropriate for our students, and put it in our standard format, and now I’m done, about an hour after I started.

      I did lose 10 minutes to walking round the deserted college (most teachers are gone for the holidays) trying to find someone to share my joy with.

    • UlyssesT [he/him]@hexbear.net
      4 points · 1 year ago

      I wish I had that much opportunity to write (or fabricate) my own teaching material. I’m in a standardized testing hellscape where almost every month there’s yet another standardized test or preparation for one. debord-tired

      • Atramentous@lemm.ee
        1 point · 1 year ago

        It’s one of the fascinating paradoxes of education that the more you teach to standardized tests, the worse test results tend to be. Improved test scores are a byproduct of strong teaching - they shouldn’t be the only focus.

        Teaching is every bit as much an art as it is a science, and straitjacketing teachers with canned curricula only results in worse test scores and a deteriorated school experience for students. I don’t understand how there are admins out there that still operate like this. The failures of No Child Left Behind mean we’ve known this for at least a decade.

  • paNic@feddit.uk
    40 points · 1 year ago

    A junior team member sent me an AI-generated sick note a few weeks ago. It was many, many neat and equally-sized paragraphs of badly written excuses. I would have accepted “I can’t come in to work today because I feel unwell” but now I can’t take this person quite so seriously any more.

  • bitsplease@lemmy.ml
    28 points · edited · 1 year ago

    Not ChatGPT, but I tried using Copilot for a month or two to speed up my work (backend engineer). Wound up unsubscribing and removing the plugin before long, because I found it had the opposite effect.

    Basically instead of speeding my coding up, it slowed it down, because instead of my thought process being

    1. Think about the requirements
    2. Work out how best to achieve those requirements within the code I’m working on
    3. Write the code

    It would be

    1. Think about the requirements
    2. Work out how best to achieve those requirements within the code I’m working on
    3. Start writing the code and wait for the auto complete
    4. Read the auto complete and decide if it does exactly what I want
    5. Do one of the following, depending on 4:
       5a. Use the autocomplete as-is
       5b. Use the autocomplete, then modify it to fix a few issues or account for a requirement it missed
       5c. Ignore the autocomplete and write the code yourself

    idk about you, but the first set of steps just seems like a whole lot less hassle than the second set of steps, especially since for anything that involved any business logic or internal libraries, I found myself using 5c far more often than the other two. And as a bonus, I actually fully understand all the code committed under my username, on account of actually having written it.

    I will say though in the interest of fairness, there were a few instances where I was blown away with copilot’s ability to figure out what I was trying to do and give a solution for it. Most of these times were when I was writing semi-complex DB queries (via Django’s ORM), so if you’re just writing a dead simple CRUD API without much complex business logic, you may find value in it, but for the most part, I found that it just increased cognitive overhead and time spent on my tickets

    EDIT: I did use chatGPT for my peer reviews this year though and thought it worked really well for that sort of thing. I just put in what I liked about my coworkers and where I thought they could improve in simple english and it spat out very professional peer reviews in the format expected by the review form

    • rgb3x3@beehaw.org
      3 points · 1 year ago

      Those different sets of steps basically boil down to a student finding all the ways they can to cheat and spending hours doing it, when they could have just used less time to study for the test.

      Not saying that you’re cheating, just that it’s the same idea. Usually the quickest solution is to just tackle the thing head-on rather than find the lazy workaround.

      • mobyduck648@beehaw.org
        1 point · 1 year ago

        What I think ChatGPT is great for in programming is ‘I know what I want to do but can’t quite remember the syntax for how to do it’. In those scenarios it’s so much faster than wading through the endless blogspam and SEO guff that search engines deal in now, and it’s got much less of a superiority complex than some of the denizens of SO too.

    • Aceticon@lemmy.world
      1 point · 1 year ago

      As a side note, whilst I don’t really use AI to help with coding, I was kinda expecting what you describe, more so for having stuff like ChatGPT doing whole modules.

      You see, I’ve worked as a freelancer (contractor) for most of my career now, and in practice that mostly means coming in and fixing/upgrading somebody else’s codebase, though I’ve also done some so-called “greenfield projects” (entirely new work). In my experience, “understanding somebody else’s code” is a lot more cognitively heavy than “coming up with your own stuff” - in fact some of my projects would’ve probably gone faster if we had just rewritten the whole thing (but that wasn’t my call to make, and often the business side doesn’t want to risk it).

      I’m curious if multiple different pieces of code done with AI actually have the same coding style (at multiple levels, so also software design approach) or not.

    • itwasawednesday@lemmy.world
      5 points · edited · 1 year ago

      Urgh one of my coworkers (technically client, but work closely alongside) clearly uses it for every single email he sends, and it’s nauseating. He’s crass and very poorly spoken in person, yet overnight all his email correspondence is suddenly robotic and unnecessarily flowery. I use it regularly myself, for fast building of Excel formulas and so forth, but please, don’t dump every email into it.

  • fidodo@lemm.ee
    26 up, 2 down · edited · 1 year ago

    Why should anyone care? I don’t go around telling people every time I use Stack Overflow. Gotta keep in mind GPT makes shit up half the time, so of course I test and cross-reference everything, but it’s great for narrowing your search space.

    • akulium@feddit.de
      16 points · 1 year ago

      I did some programming assignments in a group of two. Every time, my partner sent me his code without further explanation and let me check his solution.

      The first time, his code was really good and better than I could have come up with, but there was a small obvious mistake in there. The second time his code to do the same thing was awful and wrong. I asked him whether he used ChatGPT and he admitted it. I did the rest of the assignments alone.

      I think it is fine to use ChatGPT if you know what you are doing, but if you don’t know what you are doing and try to hide it with ChatGPT, then people will find out. In that case you should discuss with the people you are working with before you waste their time.

      • IndefiniteBen@feddit.nl
        5 points · 1 year ago

        I’ve had partners like that in the past. If ChatGPT didn’t exist they would’ve found another way to cheat or avoid work.

        The type of partner who takes the task you asked them to complete, posts the task description on an online forum and hope someone gives them the answer.

        • akulium@feddit.de
          1 point · 1 year ago

          Yes but I think it is a bit different because it just lowers the bar for this a lot. You also really lose trust in everything once you realize that you have spent a lot of time interacting with and checking AI generated stuff without knowing.

          • IndefiniteBen@feddit.nl
            2 points · 1 year ago

            I get that. Before ChatGPT if I had a bad partner it is very quickly obvious that their work is bad.

            Now you might be tricked into thinking they’re competent, which I can imagine is more frustrating because it’s unpredictable.

            I guess that right now people are overusing it as it’s so new, but in the end the people who want to graduate without trying to learn will always try to abuse whatever tools they have to cheat. Usually they face the consequences at some point in their lives.

            • fidodo@lemm.ee
              1 point · 1 year ago

              To really be successful you need to be curious enough to want to understand things at a deep level. With LLMs, people who don’t really care will learn even less than before.

      • chaos@beehaw.org
        2 points · 1 year ago

        This is the key with all the machine learning stuff going on right now. The robot will create something, but none of them have a firm understanding of right, wrong, truth, lies, reality, or fiction. You have to be able to evaluate its output because you have no idea if the robot’s telling the truth or not at that moment. Images are pretty immune to this because everyone can evaluate a picture for correctness or realism, and even if it’s a misleading photorealistic image, well, we’ve already had Photoshops for a long time. With text, you always have to keep in mind that the robot might be low quality or outright wrong, and if you aren’t equipped to evaluate its answers for that, you shouldn’t be using it.

        • fidodo@lemm.ee
          1 point · 1 year ago

          Even with images, unless you’re looking for it most people will miss glaring problems. It’s like that basketball-video psychology experiment (the invisible gorilla one).

          The problem is definitely bigger with LLMs though since you need to be an expert to check the output for validity. I will say when it’s right it saves a ton of time, but when it’s wrong you need to know enough to tell.

      • fidodo@lemm.ee
        1 point · 1 year ago

        Yes, LLMs are great as a research assistant if you know what to look for, but they’re a horrible learning tool. It’s even worse if you don’t know the correct way to search for an answer, it will set you down a completely wrong path. I don’t use any answer without cross referencing and testing it myself. I also rewrite most of the code it spits out too since a lot of it follows terrible programming patterns and outdated standards.

      • Shush@reddthat.com
        1 point · 1 year ago

        He should’ve at least looked at the code and tested it before sending it to you. Ugh. Hate doing assignments with people who do the bare minimum and just waste your time.

      • Shush@reddthat.com
        1 point · 1 year ago

        We’ve been instructed to use ChatGPT generically. Meaning, you ask it generic questions that have generic usage, like setting up a route in Express. Even if there is something more specific to my company, it almost always can be transformed into something more generic, like “I have a SQL DB with users in it, some users may have the ‘age’ field, I want to find users that have their age above 30” where age is actually something completely different (but still a number).

        You just need to be careful about how you work with ChatGPT.
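        As a sketch of that kind of genericized question, here’s roughly the shape of answer you get back for the “users above 30” example (table, columns, and data are all made up for illustration):

```python
import sqlite3

# Hypothetical users table; "age" may be missing (NULL) for some rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, age INTEGER)")
conn.executemany(
    "INSERT INTO users VALUES (?, ?)",
    [("alice", 42), ("bob", None), ("carol", 25), ("dave", 31)],
)

# Rows with a NULL age drop out automatically: NULL > 30 is not true in SQL.
rows = conn.execute(
    "SELECT name FROM users WHERE age > 30 ORDER BY name"
).fetchall()
print([name for (name,) in rows])  # ['alice', 'dave']
```

        Back at work you swap the generic table and column names for the real (confidential) ones yourself.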

  • flynnguy@programming.dev
    17 points · 1 year ago

    I had a coworker come to me with an “issue” he learned about. It was wrong, and it wasn’t really an issue, and then it came out that he got it from ChatGPT and didn’t really know what he was talking about, nor could he cite an actual source.

    I’ve also played around with it and it’s given me straight up wrong answers. I don’t think it’s really worth it.

    It’s just predictive text, it’s not really AI.

    • Echo71Niner@kbin.social
      6 up, 2 down · 1 year ago

      I concur. ChatGPT is, in fact, not an AI; rather, it operates as a predictive text tool. This is the reason behind the numerous errors it tends to generate, and its lack of self-review prior to generating responses is the clearest indication that it is not an AI. You can identify instances where ChatGPT provides incorrect information, correct it, and within 5 seconds of asking again it repeats the same inaccurate information in its response.

    • EliasChao@lemmy.one
      3 points · 1 year ago

      More often than not you need to be very specific and have some knowledge on the stuff you ask it.

      However, you can guide it to give you exactly what you want. I feel like knowing how to interact with GPT is becoming similar to being good at googling stuff.

    • idle@158436977.xyz
      1 point · 1 year ago

      Isn’t that what humans also do and it’s what makes us intelligent? We analyze patterns and predict what will come next.

    • dbilitated@aussie.zoneOP
      0 points · 1 year ago

      i think learning where it can actually help is a bit of an art - it’s just predictive text, but it’s very good predictive text - if you know what you need and get good at giving it the right input it can save a huge amount of time. you’re right though, it doesn’t offer much if you don’t already know what you need.

      • 7bicycles [he/him]@hexbear.net
        0 points · 1 year ago

        Can you hand me an example? I keep hearing this but every time somebody presents something, be it work related or not, it feels like at best it would serve as better lorem ipsum

        • surrendertogravity@wayfarershaven.eu
          1 point · 1 year ago

          I’ve had good success using it to write Python scripts for me. They’re simple enough I would be able to write them myself, but it would take a lot of time searching and reading StackOverflow/library docs/etc since I’m an amateur and not a pro. GPT lets me spend more time actually doing the things I need the scripts for.
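          For a flavor of what I mean by a simple script (a made-up example, not one of my actual ones), this is the scale of thing GPT drafts in seconds that would otherwise cost me a session of reading docs:

```python
import csv
import io

# Toy stand-in for a real data file on disk.
data = io.StringIO("item,qty\nwidget,3\ngadget,5\n")

# Sum one numeric column out of a CSV.
total = sum(int(row["qty"]) for row in csv.DictReader(data))
print(total)  # 8
```

          I’d still read it through before running it, same as any other code I didn’t write.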

          • LordXenu@lemm.ee
            1 point · 1 year ago

            I use it with web development by describing what I want something to look like and having it generate a React component based on my description.

            Is what it gives me the final product? Sometimes, but it’s such a help to knock out a bunch of boilerplate and get me close to what I want.

            Also generating documentation is nice. I wanted to fill out some internal wiki articles to help people new to the industry have something to reference. Spent maybe an hour having a conversation asking all of the questions I normally run into. Cleaned up the GPT text, checked for inaccuracies, and cranked out a ton of resources. That would have taken me days, if not weeks.

            At the end of the day, GPT is better with words than I am, but it doesn’t have the years of experience I have.

  • CaptainPike@beehaw.org
    14 points · 1 year ago

    I’m a DM using ChatGPT to help me build things for my DnD campaign/world and not telling my players. Does that count? I still do most of the heavy lifting but it’s nice to be able to brainstorm and get ideas bounced back. I don’t exactly have friends to do that with.

    • boatswain@infosec.pub
      4 points · 1 year ago

      I do the same thing; it’s been great. ChatGPT is often problematic in other scenarios because it will sometimes just make stuff up, but that’s nothing but a positive for brainstorming D&D plots. I did tell my players though.

      • init@lemmy.ml
        2 points · 1 year ago

        It’s phenomenal for making statblocks for NPCs too. Fleshes out the whole thing in seconds: spells, feats, abilities, everything.

    • Jo Miran@lemmy.ml
      3 points · 1 year ago

      I use Midjourney to create illustrations of what I’m trying to describe as well as NPCs and PCs.

      • init@lemmy.ml
        1 point · 1 year ago

        Midjourney isn’t free though, correct? I was thinking about doing this, but I’m also just a bit behind the curve with image-generation AI and not sure how best to go about it.

    • Gnubyte@lemdit.com
      1 point · 1 year ago

      For code snippets especially. I mean, the thing is limited in input size and doesn’t remember the context of running conversations that well.

  • limeaide@lemmy.ml
    12 points · 1 year ago

    My supervisor uses ChatGPT to write emails to higher-ups and it’s kinda embarrassing lol. In one email he’s not even capitalizing or spell checking, and in the next he’s sending these emails that over-explain simple things and are half irrelevant.

    I’ve used it a couple times when I can’t fully put into words what I’m trying to say, but I use it more for inspiration than anything. I’ve also used it once or twice in my personal life for translating.

  • jayemecee@lemmy.world
    9 points · edited · 1 year ago

    I’m a devops engineer and use it daily. Not to write e-mails, but to frequently ask the best approach to solving an issue, or for bash/SQL/anything queries. My boss and colleagues know about it and use it too, though.

  • a_seattle_ian@lemmy.ml
    9 points · edited · 1 year ago

    I’m interested in finding ways to use it, but when I’m writing code I really like the spectrum of different answers on Stack Overflow, with comments on WHY they did it that way. Might use it for boring emails though.

    • Ilflish@lemm.ee
      2 points · 1 year ago

      I think my best use case is creating regexes: just dump in a bunch of examples, test whether the result is wrong, and tell it what’s wrong.
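      A sketch of that loop (the examples and pattern here are hypothetical): collect strings that should and shouldn’t match, then test whatever pattern it hands back before trusting it:

```python
import re

# Example strings you might paste into the prompt.
should_match = ["2023-07-14", "1999-01-01"]
should_not = ["14-07-2023", "2023/07/14", "2023-7-4"]

# The kind of pattern it typically returns for ISO-style dates.
pattern = re.compile(r"^\d{4}-\d{2}-\d{2}$")

assert all(pattern.match(s) for s in should_match)
assert not any(pattern.match(s) for s in should_not)
```

      If an assert fails, that failing example is exactly what you paste back to tell it what’s wrong.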

      • a_seattle_ian@lemmy.ml
        1 point · 1 year ago

        re-builder in Emacs works really well for this, because I’ll usually already have the text in a buffer that I want to match or replace or adjust or select - I’m constantly using it.

    • ExLisper@linux.community
      1 point · 1 year ago

      I tried using it for coding a couple of times and it wasn’t very helpful. For simple stuff it’s not much faster than looking at the docs directly. It’s kind of a nice interface for them, but nothing revolutionary, and it’s not really much faster than just checking on DDG. For more complex things it often skips steps, references outdated libraries, or just gives wrong answers.

  • Platypus@sh.itjust.works
    8 points · 1 year ago

    I’ve been using it a little to automate really stupid simple programming tasks. I’ve found it’s really bad at producing feasible code for anything beyond the grasp of a first-year CS student, but there’s an awful lot of dumb code that needs to be written and it’s certainly easier than doing it by hand.

    As long as you’re very precise about what you want, you don’t expect too much, and you check its work, it’s a pretty useful tool.

    • vrighter@discuss.tchncs.de
      link
      fedilink
      arrow-up
      2
      ·
      edit-2
      1 year ago

      I, like most people, find it easier to write code than to read it. That “check its work” step actually means more work for me.

    • jecxjo@midwest.social
      1 point · 1 year ago

      I’ve found it useful for basically finding the example code for a 3rd party library. Basically a version of Stack Exchange that can be better or worse.

  • Lockely@pawb.social
    8 points · 1 year ago

    I’ve played around with it for personal amusement, but the output is straight up garbage for my purposes. I’d never use it for work. Anyone entering proprietary company information into it should get a verbal shakedown by their company’s information security officer, because anything you input automatically joins their training database, and you’re exposing your company to liability when, not if, OpenAI suffers another data breach.