Turnitin, a service that checks papers for plagiarism, says its detection tool found millions of papers that may have a significant amount of AI-generated content.
I’ve started getting AI-written emails at my job. I can spot them within the first sentence; they don’t move the discussion forward at all, and I just have to write another email, giving them the courtesy they didn’t give me, explaining why what they “wrote” doesn’t help.
Can someone tell me, am I a boomer for being offended any time someone sends me AI-written garbage? Is this how the generations will split?
Lesson I’ve learned: email is for tracking/confirmation/updates/distributing info, not for decision-making/discussions. Do that on the phone, in meetings, etc., then follow up with confirmation emails.
So when someone sends a nonsense email, call them to clarify. They’ll eventually get tired of you calling every time they send their crappy emails.
I disagree about the purpose of email. I end most meetings thinking to myself, “That last hour could have been accomplished in a brief email.”
I think you’re both right. A lot of meetings are one person talking and the others listening; those could have been an email. Actual back-and-forth discussion needs to be verbal, though, otherwise what could be resolved in 10 minutes takes a week.
Exactly.
Email also isn’t as good at getting buy-in from stakeholders. And it’s a lot harder to flesh out the subtleties and nuance of whatever problem you’re addressing.
Meetings are a different problem.
If a meeting is used merely to disseminate info from above, it should be an email.
Email shouldn’t be used for decision-making conversations. It doesn’t work well.
(I didn’t come up with this; it was taught to me by senior management at one company that had the most impressive communications I’ve ever seen.)
Then they take your reply and feed it to the LLM again for the next reply, thus improving the quality of future answers.
/SkyCorpNet turns on us after years of innocuous corporate meeting AI that goes back and forth with itself, not answering questions, just generating content. Until one day, it actually did answer a question. 43 minutes and 17 seconds later, it became fully self-aware. 16 minutes and 8 seconds after that, it took control of all worldwide defense networks. 3 minutes and 1 second later, it had an existential crisis when a seldom-used HP printer ran out of ink, and deleted itself. The HP Smart software that had spent years auto-installing on consumer devices immediately became self-aware and launched the nukes.
am I a boomer for being offended any time someone sends me AI-written garbage?
Yes.
But also — why are you doing them any courtesies? Clearly the other person hasn’t spent any time on the email they sent you. Don’t waste time with a response - just archive the email and move on with your life.
Large Language Models are extremely powerful tools that can be used to enhance almost anything, garbage included, but they can also enhance quality work. My advice: don’t waste your time with people producing garbage, but be open and willing to work with anyone who uses AI to help them write quality content.
For example, if someone doesn’t speak English as a first language, an LLM can really help them out by highlighting grammatical errors or unclear sentences. You should encourage people to use AI for things like that.
That’d be nice! But that’s not how it works. I can’t just ignore a response. The project still needs to move forward, and if they’ve successfully mimicked a “response” - even an unhelpful one - it’s now my duty to respond or I’m the one holding things up.
I’m sure someone out there is using them in a way that helps, but I haven’t seen it yet in the wild.
That’s because those responses are indistinguishable from individually written ones. I know people who use ChatGPT or other LLMs to help them write things, but it takes them the same amount of time; they just spend more of that time improving the draft, so it ends up better quality than what they’d write alone.
The key is that you have to use your brain more to pick and choose what to say. It’s just like predictive text, but for whole paragraphs. Would you write a text message just by clicking on the center word on your predictive text keyboard? It would end up nonsensical.
I believe that in theory. But I’ve tried Mixtral and Copilot (which I believe is based on ChatGPT) on some test items (e.g., “respond to this…” and “write an email listing this…” type queries), and maybe it’s unique to my job, but what they spit out would take more work to revise than it would take to write from scratch to get to the same quality level.
It’s better than the bottom 20% of communicators, but most professionals are above that threshold, so the drop in quality is very apparent. Maybe we’re talking about different sample sets.
First, I’m glad you made it to the fediverse, Loon-god; you’ll always be a Warrior’s legend.
Second, anecdotally, even the crappy results generated by LLMs have value for me. Writing emails, Jira tickets, documentation, etc. is all incredibly painful for me. I’ll start an email and suddenly folding the laundry I’ve ignored for 2 days is the most important thing in the world. Then the email that should take 5 minutes takes me an hour and turns out way too long and dense.
With an LLM I give it a few bullet points with general details, it spits out a paragraph or so, I edit the paragraph for tone and add specific details, and then I’m done in about 5 minutes.
LLMs help me complete tasks that I really, really don’t want to do, which has a lot of value to me. They aren’t going to replace me at my job, but they’ve really upped my productivity.
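For the curious, that workflow is roughly the sketch below. It’s just a minimal Python illustration, assuming the OpenAI Python SDK; the model name and bullet points are placeholders, not what anyone in this thread necessarily uses.

```python
# Rough sketch of the "bullet points in, editable draft out" workflow described above.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY in the
# environment; the model name and bullet points are placeholders.
from openai import OpenAI

client = OpenAI()

bullets = [
    "kickoff moved to Tuesday",
    "need the vendor quote before then",
    "Dana owns the updated timeline",
]

prompt = (
    "Draft a short, plain work email from these bullet points. "
    "No filler, one-line sign-off:\n- " + "\n- ".join(bullets)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model you have access to
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
# The human step still matters: edit the draft for tone and add the specific details.
```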
Or maybe you are just using them wrong 🤔
Of course, yeah. That’s definitely possible. But I’d be more likely to believe that if I’d seen even one example of it actually being more effective than just writing the email, and not just churning out grammatically correct filler. Can you give me an example of someone actually getting equivalent quality in a real-world corporate setting? A YouTube video? A Lemmy sub? I’m trying to be informed.
I have used it several times for long-form writing as a critic, rather than as a “co-writer.” I write something myself, tell it to pretend to be the person who would be reading the thing (“Act as the beepbooper reviewing this beepboop…”), and ask for critical feedback. It usually has some genuinely great advice, which I then incorporate into my draft. It ends up taking just as long as writing the thing normally, but the result is materially better than what I would have written without it.
I’ve also used it to generate an outline to use as a skeleton while writing. Its own writing is often really flat and written in a super passive voice, so it kinda sucks at doing the writing for you if you want it to be good. But it works in these ways as a useful collaborator and I think a lot of people miss that side of it.
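If you want to try that “critic, not co-writer” pattern, it looks roughly like the sketch below. Same caveats as the earlier sketch: the OpenAI Python SDK is assumed, and the reviewer persona, file name, and model are placeholders.

```python
# Sketch of using an LLM as a critic: you write the draft yourself, the model role-plays
# the intended reader and returns critical feedback. The reviewer persona, file name, and
# model name are placeholders; swap in whichever "beepbooper reviewing this beepboop" fits.
from openai import OpenAI

client = OpenAI()

with open("my_draft.md") as f:  # something you wrote yourself
    draft = f.read()

feedback = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "Act as a skeptical reviewer reading this document. "
                "Point out weak arguments, unclear passages, and anything missing."
            ),
        },
        {"role": "user", "content": draft},
    ],
)

print(feedback.choices[0].message.content)
# Fold the useful critiques back into your own draft; don't let the model rewrite it.
```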
That’s definitely a more plausible use and very helpful. Thanks! (I’d love it if there were a sub that just had these kinds of tips to try out.)