Explore Meta's vision of personal superintelligence, where AI empowers individuals to achieve their goals, create, connect, and lead fulfilling lives. Insights from Mark Zuckerberg on the future of AI and human empowerment.
A superintelligence would likely become the sovereign of the earth quite quickly. And it’s generally a good idea to kill off the old elite after conquering a nation and install a new one. The new elite will like you, because you made them rich, and they’ll fear you, because you killed off all of their predecessors. Of course, there’s also the risk that a superintelligence would do away with humans in general. But anybody holding significant power right now is at much greater risk.
And we can’t forget that we currently can’t even build something that’s actually intelligent, and that a superintelligence might not be possible at all.
If AI ends up destroying us, I’d say it’s unlikely to be because it hates us or wants to destroy us per se - more likely it just treats us the way we treat ants. We don’t usually go out of our way to wipe out ant colonies, but if there’s an anthill where we’re putting up a house, we don’t think twice about bulldozing it. Even in the cartoonish “paperclip maximizer” thought experiment, the end of humanity isn’t caused by a malicious AI - it’s caused by a misaligned one.
Any ethical superintelligence would immediately remove billionaires from power. I’d like to see that.
Superintelligence doesn’t imply ethics. It could just as easily be a completely unconscious system that’s simply very, very good at crunching data.