Hell, I’ll take someone who wants to be a billionaire, as long as they do it without exploitation. It’s just that that’s nearly impossible to do, since very few people actually individually create a billion dollars worth of value.
Look at their actions, not their words specifically.
It’s a culture where being unkind is particularly unacceptable, not specifically where you’re not allowed to be honest or forthright.
You’re allowed to not like someone, but telling someone you dislike them is needlessly unkind, so you just politely decline to interact with them.
You’d “hate to intrude”, or “be a bother”. If it’s pushed, you’ll “consider it and let them know”.
It’s not that negative things can’t be conveyed; they just have to be conveyed in the kindest way possible.
Brian Acton is the only billionaire I can think of that hasn’t been a net negative.
Co-founded WhatsApp, which became popular with few employees. Sold the service at a reasonable rate.
Sold the business for a stupid large sum of money, and generously compensated employees as part of the buyout.
Left the buying company, Facebook, rather than take actions he considered unethical, at great personal expense ($800M).
Proceeded to co-found Signal, an open, privacy-focused messaging system that he has basically bankrolled while it finds financial stability.
He also has been steadily giving away most of his money to charitable causes.
Billionaires are bad because they get that way by exploiting some combination of workers, customers or society.
In the extremely unlikely circumstance where a handful of people make something fairly priced that nearly everybody wants, and then use the wealth for good, there’s nothing intrinsically wrong with being that person.
Selling messaging to a few billion people for $1 a lifetime is a way to do that.
That might just be a growing-up-near-water thing. I think that on average, Canadians live closer to larger bodies of water than Americans do, since more than half are within day-trip distance of the Great Lakes waterway, and then there’s Halifax and Vancouver.
Growing up in a place with water, basically everyone I know also has at least a passing knowledge of recreational small watercraft.
Where I live, basically every place name is some combination of “French, Native American, English, Scandinavian”, “pronounced natively or not”, and “spelled like it’s pronounced or not”.
The fun ones are the English pronunciation of the French transliteration of the native word.
I believe their point was that even encrypted messages convey data. So if you have a record of all the encrypted messages, you can still tell who was talking, when they were talking, and approximately how much they said, even if you can’t read the messages.
If you wait until someone is gone and then loudly raid their house, you don’t need to read their messages to guess the content of what they send to people as soon as they find out. Now you know who else you want to target, despite not being able to read a single message.
This type of metadata analysis is able to reveal a lot about what’s being communicated. It’s why private communication should be ephemeral, so that only what’s directly intercepted can be scrutinized.
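As a toy illustration (the names, timestamps, and byte counts below are made up), here’s roughly what that kind of traffic analysis looks like when all you have is encrypted-message metadata:

```python
from collections import Counter
from datetime import datetime

# Hypothetical intercept log: no plaintext at all, just who sent to whom,
# when, and how many encrypted bytes went over the wire.
intercepts = [
    ("alice", "bob",   datetime(2023, 5, 1, 22, 14), 1800),
    ("alice", "bob",   datetime(2023, 5, 1, 22, 16), 2400),
    ("bob",   "carol", datetime(2023, 5, 1, 22, 17),  900),
    ("bob",   "dave",  datetime(2023, 5, 1, 22, 18),  950),
]

# Who talks to whom, and how often, is readable without decrypting anything.
pair_counts = Counter((src, dst) for src, dst, _, _ in intercepts)
for (src, dst), count in pair_counts.most_common():
    print(f"{src} -> {dst}: {count} messages")

# A burst of short messages fanning out from one person right after an event
# (say, a raid) points at everyone they warned, with the contents still unread.
```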
In this case, however, Janelle Shane is actually quite well aware of how different types of AI work. She writes about them, how they work, and their various limitations.
Her blog just focuses on cases of them acting oddly, not how you would expect, or just being “funny”.
If you have an unutilized asset, there’s pressure to get rid of it for the cost savings.
If you sell your asset at a loss, it looks bad for you and the company. Same for paying cancelation fees.
If you legitimately think you’re going to need that space in the future (for example, because you expect we’ll find an equilibrium somewhere between “everyone works from the office” and where we are now, with an organic level of office demand higher than today’s), then selling now looks like the first step toward having to buy again later, likely for more than you sold for. So you try to “mandate” the equilibrium you expect, so you’re never in the position of explaining why you’re holding onto a dead property that’s losing value.
Executives spend a lot of time talking to people and having meetings. The job selects for people who thrive on and value face-to-face communication. Naturally, they overestimate how much that social aspect of the job holds for everyone else, so they expect the equilibrium to involve a lot more office time than most other people would.
To make it worse, the more power you have to influence that decision, the more likely you are to have a similar bias.
This isn’t an excuse of course, since you can overcome that bias simply by telling teams to discuss what their ideal working arrangement would be, and then running a survey. Now you have data, and you can use it to try to scale offices to what you actually want.
This is already a thing we need to deal with, security-wise. An application using encryption doesn’t know the true state of what it sees as RAM, and that memory could very well be written to a durable medium under memory pressure. Same thing with hibernation, as opposed to suspension.
Depending on your application and how sensitive it is, there are different steps you can take to deal with stuff like that.
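For instance, on a POSIX system one common mitigation is to pin sensitive buffers in physical memory so the kernel won’t swap them out. A minimal sketch (Linux/macOS only, and it does nothing about hibernation images, which dump all of RAM regardless):

```python
import ctypes
import ctypes.util

# Load libc and declare argument types so 64-bit addresses aren't truncated.
libc = ctypes.CDLL(ctypes.util.find_library("c"), use_errno=True)
libc.mlock.argtypes = [ctypes.c_void_p, ctypes.c_size_t]
libc.munlock.argtypes = [ctypes.c_void_p, ctypes.c_size_t]

secret = ctypes.create_string_buffer(4096)  # will hold the sensitive bytes
addr, size = ctypes.addressof(secret), ctypes.sizeof(secret)

# mlock() pins these pages in RAM so they won't be swapped to disk
# under memory pressure.
if libc.mlock(addr, size) != 0:
    raise OSError(ctypes.get_errno(), "mlock failed")

try:
    pass  # ...load and use the secret here...
finally:
    ctypes.memset(addr, 0, size)   # zero the buffer before letting go of it
    libc.munlock(addr, size)
```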
To me it’s important to ask “what problem is it solving”, “how did we solve that problem in the past”, and “what does it cost”.
Cryptocurrency solves the problem of spending being tracked by a third party. We used to handle this by giving each other paper. The new way involves more time, and a stupendous amount of wasted electricity.
NFTs solve the problem of owning a digital asset. We used to solve this by writing down who owned it. The cost is a longer time investment, and a stupendous amount of wasted electricity.
Generative AI is solving the problem of creative content being hard and expensive to produce. We used to solve this problem by paying people to make things for us, or by not making things if we didn’t have the money. The cost is pissing off creatives.
The first two feel like cases where the previous solution wasn’t really bad, and so the cost isn’t worth it.
The generative AI case feels mixed, because pissing off creatives to make more profit feels shitty, but lowering barriers to entry to creativity doesn’t.
We should also ban long hair.
I’m sure plenty of women only prefer to have long hair because they think they would be shunned or stand out if they cut it short.
I’m all for people getting to wear their hair like they want, but I’m confident that many women would actually prefer to wear their hair short, and so can’t be trusted to make that choice for themselves or express an honest opinion about it.
The first step in women’s liberation is making it clear that they lack agency and that other people know what’s best for them.
Depends on your level of security consciousness. If you’re relying on security identifiers or APIs that need an “intact” system, it certainly can be a security issue if you can’t rely on those.
That being said, it’s not exactly a plausible risk for most people or apps.
Sure, I suppose. Or just don’t expand the system until there’s some measure of system in place to keep the AI cars from fucking around in emergency situations.
Some of the vehicles don’t have anyone in them.
https://missionlocal.org/2023/05/waymo-cruise-fire-department-police-san-francisco/
One of the incidents in question.
Big difference is that a human can be yelled at and told what to do, and we currently don’t have a good way for someone to do that with an autonomous vehicle.
I don’t think they work the same way, but I think they work in ways that are close enough in function that they can be treated the same for the purposes of this conversation.
Pen and pencil are “the same”, and either of those and printed paper are “basically the same”.
The relationship between a typical modern AI system and the human mind is like that between a pencil-written document and a Word document: entirely dissimilar in essentially every way, except for the central issue of the discussion, namely as a means to convey the written word.
Both the human mind and a modern AI take in input data, and extract relationships and correlations from that data and store those patterns in a batched fashion with other data.
Some data is stored with a lot of weight, which is why I can quote a movie at you, and the AI can produce a watermark: they’ve been used as inputs a lot. Likewise, the AI can’t perfectly recreate those watermarks and I can’t tell you every detail from the scene: only the important bits are extracted. Less important details are too intermingled with data from other sources to be extracted with high fidelity.
The question to me is how you define what the AI is doing in a way that isn’t hilariously overbroad to the point of saying “Disney can copyright the style of having big eyes and ears”, or “computers can’t analyze images”.
Any law expanding copyright protections will be 90% used by large IP holders to prevent small creators from doing anything.
What exactly should be protected that isn’t?
I disagree that recognition implies you contain the thing. Recognition is much closer to a description than to the actual thing, and a description isn’t the same as the thing. This is evidenced by being able to look at a letter P in a font you’ve never seen before and recognize it without issue. If it were just comparison, you couldn’t do that.
It changes the torque, and the application of that torque, for each bolt. As in “the tool head has 5° of give until it’s in place, then it ramps torque to 5 Nm over half a second, holds for 1 second, and then ramps to zero over 0.1 seconds”, and then something different for the next bolt. Then it logs that it did this for each bolt.
The tool can also be used to measure and correct the bolts as part of an inspection phase, and log the results of that inspection.
Finally, it tracks usage of the tool and can log that it needs maintenance or isn’t working correctly even if it’s just a subtle failure.
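Very roughly, the controller side of a tool like that might be modeled as below. This is a hypothetical sketch: the bolt ID, numbers, and log format are made up for illustration, not taken from any real vendor’s API.

```python
from dataclasses import dataclass, asdict, field
import json
import time

@dataclass
class TorqueStep:
    """One phase of the tightening program: a ramp or hold toward a target torque."""
    target_nm: float     # torque to reach by the end of the step
    duration_s: float    # how long the ramp or hold lasts

@dataclass
class BoltProfile:
    """Per-bolt program; the next bolt can get a completely different one."""
    bolt_id: str
    free_rotation_deg: float                       # "give" allowed before torque control starts
    steps: list[TorqueStep] = field(default_factory=list)

# The example from the text: 5 degrees of give, ramp to 5 Nm over 0.5 s,
# hold for 1 s, then ramp back to zero over 0.1 s.
profile = BoltProfile(
    bolt_id="B-104",
    free_rotation_deg=5.0,
    steps=[TorqueStep(5.0, 0.5), TorqueStep(5.0, 1.0), TorqueStep(0.0, 0.1)],
)

def log_bolt(profile: BoltProfile, measured_nm: float, phase: str) -> None:
    """Append what the tool actually did, so every bolt (and every inspection
    pass) stays traceable after the fact."""
    record = {
        "timestamp": time.time(),
        "phase": phase,               # e.g. "assembly" or "inspection"
        "profile": asdict(profile),
        "measured_nm": measured_nm,
    }
    with open("torque_log.jsonl", "a") as fh:
        fh.write(json.dumps(record) + "\n")

log_bolt(profile, measured_nm=5.02, phase="assembly")
```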