• 0 Posts
  • 227 Comments
Joined 1 year ago
Cake day: June 30th, 2023

  • I think people saying that stuff are serious about advocating for political violence, and I can’t imagine how it wouldn’t make things worse. Violence is a core element of fascist ideology: there’s clear utility in using the attention it brings for recruiting, the trauma it inflicts for hazing, and the experience for training. I remember seeing a particular famous clip of a Nazi speaking in public and being punched in the face by a masked assailant. I had never even heard his name before then, but after that clip was all over the internet, that changed for a lot of people, and it definitely didn’t get him to shut up. Maybe there are situations where people need to be defended, or where someone needs to act as a bouncer, but I suspect in many cases it’s some combination of useful idiots giving them what they want, and extremists on the other side, who share their goal of agitating for armed revolution, giving them what they want.




  • this will force us humans to go actually outside, make friends, form deep social relationships, and build lasting, resilient communities

    There is no chance it goes that way. How would talking to people outside even be an option for someone used to just being on the internet? Even if the content gets worse, the basic mechanisms that keep people scrolling still function, while the physical and social infrastructure necessary for in-person community building is nonexistent.
















  • Each state gives a certain number of points, and if you win the state you win all the points from that state; the candidate with the most points wins. So if 51% of people in a state vote for a candidate, that’s exactly the same number of points as if 100% vote for them. That means that if one candidate narrowly wins a lot of states, and the other candidate wins other states by a landslide and gets more votes overall, the first candidate can still win, because half those votes don’t count for anything; what matters is the points.

    Also, technically each state has its own mini government that gets to decide who to give the points to; they don’t have to let people vote. That’s how there can be a conspiracy like the “National Popular Vote Interstate Compact” that various states have agreed to: if enough states sign on, they will all just give their points to the candidate who wins the popular vote nationally, rather than to the candidate the people of that state voted for. The idea is to effectively get rid of the points system and make it so the winner of the popular vote always wins.

    edit: Looked it up and noticed that what I said about 51% giving all the points isn’t actually universally true, because the state government gets to decide how the points are allocated and some of them do it differently:

    All states except Maine and Nebraska use a party block voting, or general ticket method, to choose their electors, meaning all their electors go to one winning ticket. Maine and Nebraska choose one elector per congressional district and 2 electors for the ticket with the highest statewide vote.
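    The winner-take-all point math above can be sketched in a few lines. The state names, point counts, and vote tallies here are made up purely for illustration (not real electoral vote numbers): candidate A narrowly wins two states, candidate B wins one state by a landslide and the overall popular vote, and A still takes the most points.

    ```python
    # Toy sketch of winner-take-all ("party block") point allocation.
    # States, points, and vote counts are hypothetical, not real data.
    from collections import defaultdict

    # (state, points, votes for A, votes for B)
    states = [
        ("Alpha", 10, 51, 49),   # A wins narrowly
        ("Beta",  10, 51, 49),   # A wins narrowly
        ("Gamma", 10,  5, 95),   # B wins by a landslide
    ]

    points = defaultdict(int)
    popular = defaultdict(int)
    for state, pts, a, b in states:
        popular["A"] += a
        popular["B"] += b
        # Winning 51% of a state earns the same points as winning 100%.
        winner = "A" if a > b else "B"
        points[winner] += pts

    print(dict(popular))  # {'A': 107, 'B': 193} -- B wins the popular vote
    print(dict(points))   # {'A': 20, 'B': 10}  -- but A wins on points
    ```

    The losing 49% in Alpha and Beta, and the 44-point surplus in Gamma, contribute nothing to the point totals, which is the whole distortion.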


  • The output for a given input cannot be independently calculated as far as I know, particularly when random seeds are part of the input.

    The system gives a probability distribution for the next word based on the prompt, and that distribution will always be the same for a given input. That meets the definition of deterministic. You might choose to add non-deterministic rng to the input or output, but that would be a choice, not something inherent to how LLMs work. Random ‘seeds’ are normally used as part of deterministically repeatable rng. I’m not sure what you mean by “independently” calculated: you can calculate the output if you have the model weights, and you likely can’t if you don’t, but that doesn’t affect whether the system is deterministic.
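    As a minimal sketch of the “deterministically repeatable rng” point, using a hard-coded stand-in distribution over next words rather than a real model:

    ```python
    # Sketch: sampling with a fixed seed is deterministically repeatable.
    # The words and weights below are a toy stand-in for a model's
    # next-word distribution, not output from an actual LLM.
    import random

    def sample_next_word(seed):
        words = ["cat", "dog", "fish"]
        weights = [0.5, 0.3, 0.2]     # pretend the model produced these
        rng = random.Random(seed)     # seeded, self-contained RNG
        return rng.choices(words, weights=weights)[0]

    # Same seed and same distribution -> same output, every time.
    assert sample_next_word(42) == sample_next_word(42)
    ```

    The sampling step looks random, but once the seed is fixed the whole pipeline is a pure function of its inputs.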

    The “so what” is that trying to prevent certain outputs based on moral judgements isn’t possible. It wouldn’t really be possible even if you could get in there with code and change things, unless you could write code for morality, and it’s doubly impossible given that you can’t.

    The impossibility of defining morality in precise terms, or even of agreeing on what correct moral judgment is, obviously doesn’t preclude all potentially useful efforts to apply it. For instance, since there is a general consensus that people being electrocuted is bad, electrical cables are normally made with their conductive parts encased in non-conductive material, a practice that succeeds in reducing how often people get electrocuted. Why would that sort of thing be uniquely impossible for LLMs? Just because they are logic-processing systems that are more grown than engineered? Because they are sort of anthropomorphic but aren’t really people? The reasoning doesn’t follow. What people are complaining about here is that AI companies are not making these efforts a priority, and it’s a valid complaint, because these systems are not going to be equally dangerous no matter how they are made or used.