• jjjalljs@ttrpg.network · 2 days ago

    Any ethical superintelligence would immediately remove billionaires from power. I’d like to see that.

    • Saledovil@sh.itjust.works · 2 days ago

      A superintelligence would likely become the sovereign of the earth fairly quickly. And it’s generally a good idea to kill the old elite after conquering a nation and then install a new one. The new elites will like you, because you made them rich, and they’ll fear you, because you killed off all of their predecessors. Of course, there’s also the risk that a superintelligence would just do away with humans in general. But anybody holding significant power right now is at much greater risk.

      And we can’t forget that we currently can’t even build something that’s actually intelligent, and that a superintelligence might not actually be possible.

      • Perspectivist@feddit.uk · 2 days ago

        If AI ends up destroying us, I’d say it’s unlikely to be because it hates us or wants to destroy us per se - more likely it would just treat us the way we treat ants. We don’t usually go out of our way to wipe out ant colonies, but if there’s an anthill where we’re putting up a house, we don’t think twice about bulldozing it. Even in the cartoonish “paperclip maximizer” thought experiment, the end of humanity isn’t caused by a malicious AI - it’s caused by a misaligned one.

    • Perspectivist@feddit.uk · 2 days ago

      Superintelligence doesn’t imply ethics. It could just as easily be a completely unconscious system that’s simply very, very good at crunching data.