• 0 Posts
  • 29 Comments
Joined 11 months ago
Cake day: September 2nd, 2024

  • 💡 Lifehack: Unclog Your Sink with… Your Own Urine? Science Says Yes!

    If you’ve ever had a rough night and ended up vomiting in the sink (hey, it happens), you may have found yourself with a gross, clogged mess. But before you reach for the plunger—or worse, call a plumber—consider this weird but effective trick: pee in the sink.

    Yup, you read that right. According to fluidic chemistry enthusiasts and some Reddit plumbing veterans, urine can actually help break down and dislodge vomit clogs.


    🧪 The Science Behind It

    • Urea & Ammonia Action: Your urine contains urea, which breaks down into ammonia—a compound found in many household cleaners. When urine sits on the clog, the ammonia can start to denature the proteins in the vomit (like partially digested meat, dairy, or stomach mucus), helping to loosen the goop.
    • Temperature Matters: Fresh urine is close to body temperature (98.6°F), which is actually warmer than most tap water. This warmth helps soften fatty or gelatinous chunks that may have solidified in the drain.
    • pH Balancing: Vomit is highly acidic (thanks to stomach acid). Urine tends to be slightly acidic to neutral, and when mixed together, they may chemically neutralize some of the acidity, reducing corrosive buildup and helping dislodge bio-sludge stuck to pipe walls.
    • Flow Dynamics: A good strong stream of urine can generate a pulsed pressure wave, which some claim helps to dislodge partial clogs. (Think of it as “hydro-jetting on a budget.”)

    🛠️ How to Do It

    1. Remove your pants.
    2. Stand over the sink. (Yes, aim is important.)
    3. Let it flow.
    4. Brag to your friends about your eco-friendly DIY plumbing hack!





  • I understand what you have in mind; I had similar intuitions about AI in the early 2000s. What exactly is “truly new” is an interesting topic ofc, but it’s a separate topic. Nowadays I try to look at things more empirically, without projecting my internal intuitions onto everything. In practice it does generalize knowledge, use many forms of abstract reasoning, and transfer knowledge across different domains. And it can do coding way beyond the level of complexity of what an average software developer does in everyday work.


  • An LLM has “zero context” about your project’s specific stack and style guidelines. In other words, an AI might produce a generic <Modal> component, but integrating it into your app’s unique architecture is still a human task.

    This is very old. Nowadays, in Copilot for example, you add files to the context and say “hey, look how I did that thing there; do this new thing following the same structure, with the same naming conventions,” and that’s enough. And tools like Cursor just throw your whole project into the context by default.


  • They don’t really transfer solutions to new problems

    Let’s say there is a binary format some old game uses (Doom), and in some of its lumps it can store indexed images, where each pixel is an index into a color palette stored in another lump. There’s also a programming language called Rust, a little-known/little-used library that can read binary data in that format, and a Rust GUI library that not many people have used either. Would you consider it an “ability to transfer solutions to new problems” that it was able to implement extracting the image data from that binary format using the library, extracting the palette data from the same format, converting that indexed image with the extracted palette into regular RGBA image data, and then rendering it as a window background using that GUI library, whose only reference was a file with the names and type signatures of its functions? There’s no similar Rust code in the wild at all for any of those scenarios. Most of this it was able to do from a few little prompts, maybe even from the first one. Sure, there were a few little issues along the way that required reprompting and figuring things out together with it. Stuff like this with AI can take like half an hour, while doing the whole thing fully manually could easily take multiple days just to figure out the APIs of the libraries involved and the intricacies of recoding an indexed image to RGBA. For me this is overpowered enough even right now, and it’s likely going to improve even more in the future.
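
    For anyone curious what that conversion step actually involves, here is a minimal Rust sketch of just the indexed-to-RGBA part. It is not the code from that session; the function name, parameters, and the tiny test data are made up for illustration, and it assumes the image lump has already been decoded into one palette index per pixel and the palette lump into 256 RGB triples (as in Doom’s PLAYPAL):

    ```rust
    // Sketch only: convert palette indices to RGBA by looking each index up
    // in a 256-entry RGB palette and writing the pixel out fully opaque.
    fn indexed_to_rgba(pixels: &[u8], palette: &[[u8; 3]; 256]) -> Vec<u8> {
        let mut rgba = Vec::with_capacity(pixels.len() * 4);
        for &index in pixels {
            let [r, g, b] = palette[index as usize];
            // Doom palettes carry no alpha, so every pixel gets 0xFF.
            rgba.extend_from_slice(&[r, g, b, 0xFF]);
        }
        rgba
    }

    fn main() {
        // Made-up 2x2 image: palette entry 0 stays black, entry 1 is set to red.
        let mut palette = [[0u8; 3]; 256];
        palette[1] = [255, 0, 0];
        let pixels = [0u8, 1, 1, 0];
        let rgba = indexed_to_rgba(&pixels, &palette);
        assert_eq!(rgba.len(), pixels.len() * 4);
        println!("{:?}", &rgba[..8]); // first two pixels: black, then red
    }
    ```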



  • This only proves that some of them can’t solve all complex problems. I’m only claiming that some of them can solve some complex problems. Not only by remembering exact solutions, but by remembering the steps and actions used in building those solutions, generalizing them, and transferring them to new problems. Anyone who tries using it for programming will discover this very fast.

    PS: Some of them have already been used to solve problems and find patterns in data that humans weren’t able to get any other way (particle physics research at CERN, bioinformatics, etc.).


  • Yeah, this is a correct analogy, just applied to much more complex problems than a calculator handles. How similar it is or isn’t to the human way of thinking is completely irrelevant. And how much genuinely human-style thinking is necessary for any kind of problem solving or work is not something we can really calculate. Assuming that scientific breakthroughs, engineering innovations, medicine, complex math problems, programming, etc. necessarily need human thinking, or benefit from it, as opposed to a super-advanced statistical meta-patterning calculator, is wishful thinking. It is not based on any real knowledge we have. If you think it is wrong to hand it our problems to solve, to give it our work, that’s a very understandable argument, but then you should say exactly that. Instead this AI-hate hivemind tries to downplay it with dismissive, braindead, generic phrases like “NoPe ItS nOt ReAlLy UnDeRsTaNdInG aNyThInG”. Okay, who tf asked? It solves the problem. People keep using it and become overpowered because of it. What is the benefit of downplaying its power like that? You’re not really fighting it this way, if fighting it was the goal.


  • Coming up with ever more vague terms to try to downplay it is missing the point. The point is simple: it’s able to solve complex problems and do very impressive things that even humans struggle with, in very little time. It doesn’t really matter what we consider true abstract thought or true inference. If that is something humans do, then what it does might very well be more powerful than true abstract thought, because it’s able to solve more complex problems and perform more complex pattern matching.






  • I agree those are some of the possible motivations, but I also think there are countless other motivations for it in the wild. The “We get to choose exactly what is included and what is not” thing I personally see as more of a “minimalism” mindset than realism, but that’s just my perspective. A lot of people who do realism just go there and draw exactly what they see, or they have people pose for them. They ofc choose the scene and pose, but they don’t deliberately strip detail for artistic value like minimalists do, which means minimalists push much harder into this “control what’s included and what’s not” territory.




  • It’s the breaking of the patterns that sound good in music, but only in specific ways. Other ways sound discordant.

    I like a lot of different music, and I also like harsh noise when it’s adventurous, like Merzbow. It sounds discordant, but it sounds great and I enjoy listening to it. Maybe you should go more fundamental: “why do we humans like information entropy,” or something like that.