• 1 Post
  • 14 Comments
Joined 2 years ago
Cake day: June 19th, 2023

  • Yes. Kind of. Probably.

    What we have is an issue with terminology. The thing is, “white” only makes sense when specifically referring to human vision.

    Our eyes have cells (cone cells) that are tuned to specific wavelength bands in the EM spectrum. There are three types: one peaks at around 560nm, which we see as red; one at around 530nm, which we see as green; and one at around 420nm, which we see as blue.

    “White” is just our interpretation of a strong signal across all three of these bands.

    If, everything else being equal, our cone cells responded to longer wavelengths than they currently do, then our “white” might easily include what we now see as “red”, because we’d also be picking up the infrared that we currently can’t see.
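
    The idea above can be sketched in a few lines of code. This is a toy model, not real colorimetry: the Gaussian sensitivity curves, the 50nm width, and the flat “white” spectrum are all simplifying assumptions (actual cone responses are broader and asymmetric), but it shows how “white” falls out of strong responses in all three cone types at once.

    ```python
    import math

    # Assumed cone types with the peak wavelengths mentioned above (nm).
    CONE_PEAKS_NM = {"L (red)": 560, "M (green)": 530, "S (blue)": 420}
    WIDTH_NM = 50  # assumed spread of each cone's sensitivity curve

    def cone_response(peak_nm, spectrum):
        """Total signal from one cone type: each wavelength's intensity,
        weighted by a Gaussian sensitivity centred on the cone's peak."""
        return sum(
            intensity * math.exp(-((wl - peak_nm) ** 2) / (2 * WIDTH_NM ** 2))
            for wl, intensity in spectrum.items()
        )

    # A flat spectrum across the visible range (400-700nm): it drives all
    # three cone types strongly, which we interpret as "white".
    flat = {wl: 1.0 for wl in range(400, 701, 10)}
    responses = {
        name: cone_response(peak, flat) for name, peak in CONE_PEAKS_NM.items()
    }
    ```

    Shifting the spectrum (or the cone peaks) changes the balance of the three responses, and with it the perceived colour, which is the thought experiment in the paragraph above.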

  • At the end of the day, isn’t that just how we work, though? We tokenise information, make connections between these tokens and regurgitate them in ways that we’ve been trained to do.

    Even our “novel” ideas are always derivative of something we’ve encountered. They have to be, otherwise they wouldn’t make any sense to us.

    Describing current AI models as “fancy auto-complete” feels like describing electric cars as “fancy Scalextric”. Neither is completely wrong, but both are massively over-reductive.