• 0 Posts
  • 14 Comments
Joined 6 days ago
Cake day: September 13th, 2024

  • Kind of like how true thoughts and opinions on complex topics get boiled down into digestible concepts for others to understand, who then perpetuate those concepts without understanding them. The meaning degrades and we don't think anymore, we just repeat stuff in social media comments.

    Side note… this article sucks and seems like it was AI generated. It's repetitive and has no author credit? It just says it was originally posted elsewhere.

    Generative AI isn't in danger of being killed as this clickbait title suggests… just hindered.


  • Sorry, but a new Pico headset wouldn't do much of anything. A new Meta headset or a new Valve headset would give a bump.

    It really needs better content. The hardware is almost there (in terms of cost and accessibility of the experience).

    It's slowly getting there. But the current population of VR users is basically defined by the question: who would keep playing the same limited experiences with hardware that is often cumbersome, and with loading screens that aren't super long but become your entire existence? It's annoying.

    Meta sucks, but they have been a boon for VR development.


  • I really, truly suggest diversifying to news feeds without comment sections, like Techmeme, for a bit.

    Increasing complexity is overwhelming and there's plenty of bad shit going on, but a lot in your post is overblown.

    Sorry for the long edit: I personally felt an improvement in my mental health when I did this for six months or so. Because seriously, whatever disinformation is happening in American news is so exhausting. We need to think whatever we want and then engage with each other once our thoughts are more individualized. Don't be afraid to ask questions that might seem like you are questioning some holy established Lemmy/Reddit consensus. If you are being honest about your opinions and aren't afraid to look dumb, then you are doing the internet a HUGE service. We need more dumb questions and vulnerability to combat the obsession with appearing to be an expert. So thank you for making a genuine post of concern.





  • I agree that it is a bit of an ethical minefield to employ it to make decisions that affect people's livelihoods. But my point is that if a company uses it to decide whether an insurance claim should be paid out, the model's ability to make those decisions isn't changed by what we call the steps it takes to come to a decision.

    If an insurance company can dissect any particular claim decision and agree with each step the model took, then is it really different from having a person do it? Might it be better in some ways? A real concern is the fact that AI isn't perfect and the mistakes it makes are pretty hard to accept… it seems pretty dystopian, I get that. But if fewer mistakes are made and you can still appeal decisions, then maybe it's overblown?