• 0 Posts
  • 290 Comments
Joined 1 year ago
Cake day: June 17th, 2023

  • That explains your optimism. Code generation is at a stage where it slaps together Stack Overflow answers and code ripped off from GitHub for you. While that's effective enough to let even a crappy programmer cobble together something that barely works, it's a far cry from anyone being able to describe an idea in plain language and get back code that just does it. A programmer is still needed in the loop.

    I’m sure I don’t have to explain to you that AI development over the decades has repeatedly hit plateaus where the approach had to change significantly for progress to continue, and it could well be that LLMs (at least as they’re developed now) aren’t enough to accomplish what you describe.


  • It sucks for livestreams on YouTube too, since the player only starts downloading the next chunk of video when it’s almost done playing the current one. If you hit a hiccup, YouTube’s solution is to jump you backward in the livestream (how far depends on the streamer’s latency setting), so instead of a nice live stream you can end up as much as ~20 seconds in the past, and if you want to participate, your reactions are that delayed. Instead of waiting for the full 5 seconds of buffer to play through before downloading the next chunk, I wish they’d request the next chunk earlier, and on a hiccup not rewind the stream so far, because if you fall too far behind it skips ahead instead. It’s all over the place.
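    To put some numbers behind the complaint: here's a toy discrete-time sketch (not YouTube's actual player logic — the segment length, fetch time, and prefetch margin are all made-up parameters) showing why fetching the next chunk only when the buffer runs dry causes stalls, while prefetching before the buffer drains avoids them entirely.

    ```python
    def simulate(prefetch_margin, segment_len=5, fetch_time=2, total_segments=4):
        """Count stalled playback seconds for a player that starts fetching
        the next segment once only `prefetch_margin` seconds remain buffered.

        prefetch_margin=0 models the lazy behavior (fetch only when the
        buffer is empty); prefetch_margin >= fetch_time models eager
        prefetching that hides the fetch latency.
        """
        buffered = segment_len      # seconds of video in the buffer (first segment preloaded)
        fetch_remaining = None      # seconds left on an in-flight fetch, or None
        segments_fetched = 1
        stalled = 0
        while segments_fetched < total_segments or buffered > 0:
            # kick off a fetch once the buffer dips to the margin
            if (fetch_remaining is None and segments_fetched < total_segments
                    and buffered <= prefetch_margin):
                fetch_remaining = fetch_time
            # advance any in-flight fetch by one second of wall-clock time
            if fetch_remaining is not None:
                fetch_remaining -= 1
                if fetch_remaining == 0:
                    buffered += segment_len
                    segments_fetched += 1
                    fetch_remaining = None
            # play one second of video, or stall if the buffer is empty
            if buffered > 0:
                buffered -= 1
            else:
                stalled += 1
        return stalled
    ```

    With these toy parameters, the lazy player (`prefetch_margin=0`) stalls once per segment boundary, while starting the fetch `fetch_time` seconds early (`prefetch_margin=2`) never stalls at all — which is roughly the fix being wished for above.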