• 0 Posts
  • 25 Comments
Joined 2 years ago
Cake day: July 4th, 2023

  • Uneducated 2 cents: afaik the publishers have some kind of “part ownership”, where they can pull a book from the store whenever they want. The “anti-piracy” feature you get with DRM is why many publishers actually like it tho; the part-ownership thing is just icing on the cake. So no, a good chunk of publishers won’t be furious at all. DRM gives publishers what they want and more, at the expense of consumers in a way most wouldn’t realize.

    And if anything, I think it makes more sense to think that these publishers are also just granting Amazon some kind of “license” to sell their e-books.

    Amazon would absolutely be destroying its relationship with a publisher, though, if it decided to block the sale of, or access to, a book for a large group of would-be buyers. But at the end of the day, publishers want to know how much they’re making from putting their e-books on Amazon, and as long as that revenue is enough to satisfy their needs, they don’t need to care too much about the odd customer who had their book revoked, and they’re generally pretty shielded from any disputes as long as Amazon is the one making those revocation calls.


  • This was pointed out in another comment, but I’ll echo it to give that call a boost: point your instructor to well-regarded sources on introversion and extroversion, and let them know that the labelling in their note is not only inaccurate, it attaches a wrongly defined word to problematic behaviours that have nothing to do with what introversion and extroversion are. That matters because it propagates a false narrative.

    If your instructor doesn’t seem cooperative and insists on being correct, talk to other instructors that you trust, or even go to those with more authority to tell them about the issue. If you can’t get anyone to actually do something, I suggest you change schools immediately, and call the school out for what they did.

    Maybe it’s just one of those days, but I have no tolerance for this sort of false narrative being spread, even if the original intention is innocuous, and especially in a school. Being forced to act in a way that deviates from one’s personality just to avoid being perceived as a problem, especially over a badly informed opinion, can have lasting negative consequences for children and adolescents. I’m tired of seeing introverted friends and family members suffer over the fact that they’re introverts, to the point where they deny being introverts and even echo these sorts of statements in order to blend in.



  • You could create an account that blocks communities for news and technology, and any other communities with a high likelihood of reporting on current events. Then just switch to that account on days when you don’t want to read such news, for whatever reason you may have (it’s understandable, it can be draining).

    This should be a no-brainer, but Lemmy doesn’t really filter stuff out by default unless the admins decide to. So unless you set up an account like that on a fairly well-managed instance, given the current news cycle, especially in the Western, English-speaking world, you won’t be able to escape Trump and Musk; they’re dominating headlines because they are literally affecting the lives of millions, if not billions, of people.




  • This. Any time someone tries to tell me that AGI will come in the next 5 years given what we’ve seen, I roll my eyes. I don’t see a pathway where LLMs become what’s needed for AGI. They may be a part of it, but a non-critical part at best. If you can’t reduce hallucinations to the point where they’re virtually indistinguishable from misunderstanding a vague sentence, LLMs are useless for AGI.

    Our distance from true AGI (not some goalpost moved by corporate interests) has not significantly shrunk since before LLMs became a thing, in my very harsh opinion, bar the knowledge and research accumulated by those who are actually working towards AGI. Just as we’ve always thought AI would come one day, maybe soon, back before 2020, it’s no different now. LLMs alone barely close that gap; they give us that illusion at best.



  • One case where I find it useful, tho it operates in a more limited way, is code blocks within doc comments in Rust, which are also printed in the generated documentation. They essentially get run as part of your unit tests. This is great for making sure that, e.g., the examples left in your doc comments actually work, especially if they’re written in a way that functions like a unit test.
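    For concreteness, here’s a minimal sketch of what such a doctest looks like (`my_crate` is a placeholder crate name, not from any real project). The fenced example inside the `///` doc comment is compiled and executed by `cargo test`, and also rendered in the generated documentation:

    ```rust
    /// Adds two numbers.
    ///
    /// The example below is a doctest: `cargo test` compiles and runs it,
    /// so if the documented API drifts, the build catches it.
    ///
    /// ```
    /// assert_eq!(my_crate::add(2, 2), 4);
    /// ```
    pub fn add(a: i32, b: i32) -> i32 {
        a + b
    }
    ```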



  • I’ll admit that chalking it up to defeatism is a stretch, but it’s not too far off, in my opinion. It’s the admission that the “machines” (though it’s really just big tech companies with a vested interest in collecting as much data as possible so they can sell it one way or another for profit) have already won, that there’s no point in struggling against it, and that you even get something out of it. I don’t necessarily agree with the gun analogy, as I find it difficult to distinguish that from a threat to your life, but I see where you’re coming from: the easy path towards what most people currently perceive as a modern life of tech is built in a way that pushes people into line as products, enticing them with a “service” and taking advantage of their FOMO, while all other ways are either too much work or too technical for the common person.

    When the services people have come to rely on get enshittified, those people just shrug and say “well, what can you do,” maybe send an angry message somewhere into the aether, and continue with the service, continuing to be a milk cow.

    For myself, I see privacy as a tool for encouraging a healthier variety in the ecosystem. It’s a way to attain at least some healthy level of anonymity, as you would have walking down streets in different parts of the world, so that I don’t have to constantly maintain a single, outward personality everywhere I go. Supporting privacy is my way of saying I don’t like how many big tech businesses work, essentially exploiting human nature and stepping all over it. That IS ideological; I simply believe that we can do good business without resorting to dirty tactics and opportunism, and that humans should not be milk cows for business or capitalism.

    That said, I have a vested interest in having more options: my interests and hobbies are niche, and none of these services can or will sufficiently provide what I seek. By the milk-cow analogy, I don’t sufficiently benefit from the blanket offers of these businesses. I also dislike the consequences they bring to humans and their relationships, and the fact that those consequences go unfixed stems from a conflict of interest: these companies are motivated to exploit human nature and relationships to profiteer off us all, as the many examples we’re all starting to see and realize under capitalism show.







  • As someone who was working really hard to get my company to a point where it could use some classical ML (with very limited amounts of data), who has some knowledge of how AI works, and who just generally wants to do some cool math stuff at work, being asked incessantly to shove AI into any problem our execs think is a “good sell,” and being pressured to think about how we can “use AI,” felt terrible. They now think my work is insufficient and have been tightening the noose on my team.



  • It’s not possible for everyone to just tell when something is supposed to be sarcasm. ADHD makes it hard. A bad day makes it hard. A tiring day makes it hard.

    The downside of the misunderstanding isn’t just downvotes. It’s possibly a proliferation of misinformation and an impression that there are people who DO think that way.

    Not being serious while saying something grim isn’t a globally understood culture either; it’s more common and acceptable as a joke in the Western world.

    So… call it accessibility, but it’s just more approachable for everyone to just put an “/s”.


  • Many of these meanings seem to be captured in some modern solutions already:

    • We plan to provide a value, but memory for this value hasn’t been allocated yet.
    • The memory has been allocated, but we haven’t attempted to compute/retrieve the proper value yet
    • We are in the process of computing/retrieving the value

    Futures?

    • There was a code-level problem computing/retrieving the value

    Exceptions? Result monads? (Okay, yeah, we try to avoid the m-word, but bear with me here)

    • We successfully got the value, and the value is “the abstract concept of nothingness”

    An Option or Maybe monad?

    • or the value is “please use the default”
    • or the value is “please try again”

    An enumeration of return types would seem to solve this problem. I can picture doing this in Rust.
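    As a rough sketch of that idea (all names here are mine, hypothetical, not from any library), each overloaded “meaning of null” from the list above becomes an explicit variant that callers must match on:

    ```rust
    // Hypothetical enum: one variant per meaning that "null" usually smuggles in.
    #[derive(Debug)]
    enum FetchState<T, E> {
        Unallocated,     // value planned, memory not allocated yet
        Uninitialized,   // memory allocated, value not yet computed
        InProgress,      // computation/retrieval underway
        Failed(E),       // code-level problem while computing the value
        Done(Option<T>), // success; None is "the concept of nothingness"
        UseDefault,      // caller should substitute a default
        Retry,           // caller should try again
    }

    // Exhaustive matching forces callers to handle every case explicitly,
    // instead of null-checking and hoping they guessed the right meaning.
    fn describe<T, E>(state: &FetchState<T, E>) -> &'static str {
        match state {
            FetchState::Unallocated => "not allocated yet",
            FetchState::Uninitialized => "allocated, not computed",
            FetchState::InProgress => "in progress",
            FetchState::Failed(_) => "failed",
            FetchState::Done(Some(_)) => "got a value",
            FetchState::Done(None) => "successfully got nothing",
            FetchState::UseDefault => "use the default",
            FetchState::Retry => "try again",
        }
    }
    ```

    The compiler then rejects any `match` that forgets a case, which is exactly what a single overloaded null can never give you.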