• 0 Posts
  • 22 Comments
Joined 1 year ago
Cake day: March 3rd, 2024

  • Even more surprising: the droplets didn’t evaporate quickly, as thermodynamics would predict.

    “According to the curvature and size of the droplets, they should have been evaporating,” says Patel. “But they were not; they remained stable for extended periods.”

    With a material that could potentially defy the laws of physics on their hands, Lee and Patel sent their design off to a collaborator to see if their results were replicable.

    I really don’t like the repeated use of the phrase “defy the laws of physics.” That’s an extraordinary claim requiring extraordinary proof, and the researchers already propose a mechanism by which the droplets remained stable under existing physical laws: they were being replenished from the nanopores inside the material as fast as evaporation was pulling water out of them.

    I recognize the researchers themselves aren’t using the phrase; it’s Penn’s press office trying to drum up further interest in the research. But it’s a bad framing. You can make the work sound interesting without resorting to clickbait like “did our awesome engineers just break the laws of physics??” Hell, the research is interesting enough on its own; passive water collection from the air is revolutionary! No need for editorializing!





  • People are making fun of the waffling and the apparent indecision and are missing the point. Trump isn’t flailing and trying to figure out how to actually make things work. He’s doing exactly what he intended: he’s holding the US economy for ransom and building a power base among the billionaires.

    He used the poor and ignorant to get control of the public institutions, and now he’s using that power to get control over the private institutions (for-profit companies). He’s building a carbon copy of Russia with himself in the role of Putin. He’s almost there, and it’s taken him 2 months to do it.




  • Ah, I think I misread your statement of “followers by nature” as “followers of nature.” I’m not really willing to ascribe personality traits like “follower” or “leader” or “independent” or “critical thinker” to humanity as a whole based on the discussion I’ve laid out here. Again, the possibility space of cognition is bounded, but unimaginatively large. What we can think may be limited to a reflection of nature, but the possible permutations that can be made of that reflection are more than we could explore in the lifetime of the universe. I wouldn’t really use this as justification for or against any particular moral framework.


  • I think that’s overly reductionist, but ultimately yes. The human brain is amazingly complex, and evolution isn’t directed but keeps whatever works well enough, so there’s going to be incredible breadth in human experience and cognition across everyone in the world and throughout history. You’ll never get two people thinking exactly the same way because of the sheer size of that possibility space, despite over 100 billion people having lived throughout history and today.

    That being said, “what works” does set constraints on what is possible with the brain, and evolution went with the brain because it solves a bunch of practical problems that enhanced the survivability of the creatures that possessed it. So there are bounds to cognition, and there are common patterns and structures that shape cognition because of the aforementioned problems they solved.

    Thoughts that initially reflect reality but can be expanded in unrealistic ways, to explore the space of possibilities an individual can effect in the world around them, have clear survival benefits. Thoughts that spring from nothing and relate in no way to anything real strike me as useless at best, and at worst disruptive to whatever else the brain is doing. Thinking about that perspective more, given the powerful pattern recognition in the brain, I wonder if the creation of “100% original thoughts” would result in something like schizophrenia, where the brain’s pattern recognition systems reinterpret (and misinterpret) internal signals as sensory signals of external stimuli.


  • The problem with that reasoning is it’s assuming a clear boundary to what a “thought” is. Just like there wasn’t a “first” human (because genetics are constantly changing), there wasn’t a “first” thought.

    Ancient animals had nervous systems that could not come close to producing anything we would consider a thought, and through gradual, incremental changes we get to humanity, which is capable of thought. Where do you draw the line? Any specific moment in that evolution would be arbitrary, so we have to accept a continuum of neurological phenomena that span from “not thoughts” to “thoughts.” And again we get back to thoughts being reflections of a shared environment, so they build on a shared context, and none are original.

    If you do want to draw an arbitrary line at what a thought is, then that first thought was an evolution of non-/proto-thought neurological phenomena, and itself wasn’t 100% “original” under the definition you’re using here.


  • From your responses to others’ comments, you’re looking for a “thought” that has absolutely zero relationship with any existing concepts or ideas. If there is overlap with anything that anyone has ever written about or expressed in any way before, then it’s not “100% original,” and so either it’s impossible or it’s useless.

    I would argue it’s impossible because the very way human cognition is structured is based on prediction, pattern recognition, and error correction. The various layers of processing in the brain are built around modeling the world in order to generate a prediction; higher layers compare the predictions with the actual sensory input to identify mismatches; and the layers above that reconcile the mismatches and adjust the prediction layers. That’s a long-winded way to say our thoughts are inspired by the world around us, and so are a reflection of it. We all share our part of this world with at least one other person, so we’re all going to share commonalities in our thoughts with others.
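    That layered predict-compare-adjust loop can be sketched as a toy program (purely an illustration of the idea, not a model of actual neuroscience; the names and numbers are made up):

    ```python
    import random

    def predictive_loop(signal, lr=0.1, steps=50):
        """Toy predictive-processing loop: a prediction layer keeps a running
        estimate, a comparison layer computes the error against sensory input,
        and a higher layer nudges the prediction to reduce that error."""
        prediction = 0.0
        for _ in range(steps):
            sensed = signal()            # incoming sensory sample
            error = sensed - prediction  # mismatch flagged by the comparison layer
            prediction += lr * error     # reconcile: adjust the predictive model
        return prediction

    # A noisy sensory stream centered on 5.0: the prediction converges toward
    # the regularity in the input, i.e. a reflection of the "world."
    random.seed(0)
    est = predictive_loop(lambda: 5.0 + random.gauss(0, 0.5))
    ```

    The point of the sketch is just that every state the loop can reach is driven by its input stream, which is the sense in which such thoughts are “reflections” of the world rather than originating from nothing.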

    But for the sake of argument, assume that’s all wrong, and someone out there does have a truly original, 100% no overlap with anything that has come before, thought. How could they possibly express that thought to someone else? Communication between people relies on some kind of shared context, but any shared context for this thought means it’s dependent on another idea, or “prior art,” so it couldn’t be 100% original. If you can’t share the thought with anyone, nor express it in any way to record it (because that again is communication), it dies with you. And you can’t even prove it without communicating, so how would someone with such an original thought convince you they’ve had it?


  • Math, physics, and to a lesser extent, software engineering.

    I got degrees in math and physics in college. I love talking about counterintuitive concepts in math and things that are just way outside everyday life, like transfinite numbers and very large dimensional vector spaces.

    My favorite parts of physics to talk about are general relativity and the weirder parts of quantum mechanics.

    My day job is software engineering, so I can also help people get started learning to program, and then the next level of building a solid, maintainable software project. It’s more “productive” in the traditional sense, so it’s satisfying to help people be more productive, but when it’s just free time to shoot the shit, talking about math and science are way more fun.


  • I’m sorry, I mostly agree with the sentiment of the article in a feel-good kind of way, but it reads like when people claim bullies will get their comeuppance later in life, but then you actually look them up and they have high-paying jobs and wonderful families. There’s no substance here, just a rant.

    The author hints at analogous cases in the past of companies firing all of their engineers and then having to scramble to hire them back, but doesn’t actually get into any specifics. Be specific! Talk through those details. Prove to me the historical cases are sufficiently similar to what we’re starting to see now that justifies the claims of the rest of the article.



  • I’m not the person you’re replying to, but I think their point is that the bars don’t scale linearly. The red bar (2014 price) for the McChicken is supposed to represent $1 and the yellow bar (2024 price) ~$3, but the yellow bar is not 3 times the length of the red bar. This means the relative differences between the bar lengths don’t match the percent increases printed above them. It’s most egregious when comparing the McChicken to the Quarter Pounder with Cheese meal: why does a 122% increase look so much worse than the 199% increase?

    I suspect the cause of the problem is that the small bars were stretched a bit to fit the dollar values printed within them, but if that throws off the visual accuracy of the bars, what’s the point of using bars at all?
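    To make the arithmetic concrete, here’s what the bar-length ratios should be if the bars scaled linearly from zero (percentages taken from the chart as quoted above):

    ```python
    # A pct% increase means the 2024 bar should be (1 + pct/100) times as
    # long as the 2014 bar, if bars scale linearly from zero.
    increases = {"McChicken": 199, "Quarter Pounder with Cheese meal": 122}

    ratios = {item: 1 + pct / 100 for item, pct in increases.items()}
    for item, ratio in ratios.items():
        print(f"{item}: 2024 bar should be {ratio:.2f}x the 2014 bar")
    ```

    So the McChicken’s yellow bar should be nearly 3x the length of its red bar, and relatively longer than the Quarter Pounder meal’s, which is the opposite of what the chart apparently shows.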


  • Robin Williams as the Bicentennial Man. The movie was okay; his performance was amazing. I’ve struggled with mortality for a while, like I expect a lot of people do, and seeing him play a character who started his existence immortal and chose mortality really got to me. His death in the movie hit me much, much harder than I expected. I haven’t watched the movie again since that first viewing because I’m honestly afraid of going through it again.



  • Not OP, but in my circles the simplest, strongest point I’ve found is that no cryptocurrency has a built-in mechanism for handling mistakes. People are using these systems, and people make mistakes. Without built-in accommodations, you’re either

    1. Creating real risk for anyone using the system, because each mistake is irrecoverable financial loss, and that’s pretty much the definition of financial risk, or
    2. Encouraging users to subvert the system’s core functionality in order to accommodate mistakes, which undermines the entire system and again creates risk, because you don’t really know how anything will work with these ad hoc side systems

    Either way, crypto is just more costly to use than traditional systems once you properly factor in those risks. So the only people left using it are those who expect greater rewards to offset all that additional risk, namely speculators and grifters.
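    One way to frame that cost argument numerically (all numbers here are entirely hypothetical, just to show the shape of the comparison):

    ```python
    def expected_cost(fee, p_mistake, avg_loss):
        """Expected per-transaction cost once mistake risk is priced in:
        the explicit fee plus the probability-weighted unrecoverable loss."""
        return fee + p_mistake * avg_loss

    # Hypothetical numbers: traditional rails charge a higher fee but
    # mistakes are reversible (expected loss ~0); crypto fees may be lower,
    # but a mistake can mean losing the full transfer.
    traditional = expected_cost(fee=0.30, p_mistake=0.001, avg_loss=0.0)
    crypto = expected_cost(fee=0.10, p_mistake=0.001, avg_loss=500.0)
    ```

    Under almost any plausible mistake rate and loss size, the risk term dominates the fee difference, which is the sense in which crypto is “more costly” once risk is properly factored in.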