• 5PACEBAR@lemmy.world
    4 days ago

    This picture is AI generated

    Edit: OP removed the picture from the post

  • doomcanoe@sh.itjust.works
    5 days ago

    The people here saying “no shit, LLMs were not even designed to play chess” are not the audience this is directed at.

    Multiple times at my job I have had to explain, often to upper management, that LLMs are not AGIs.

    Stories like these help an under-informed general public wrap their heads around the idea that a “computer that can talk” =/= a “computer that can truly think/reason”.

    • MolecularCactus1324@lemmy.world
      2 days ago

      They say LLMs can “reason” now, but they obviously can’t. At best, they can be trained to write a code snippet and run it to get the answer. I’ve noticed that, when asked to do math, ChatGPT will now translate my question into Python and run that to get the answer, since it can’t do math itself reliably.
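The delegate-to-code pattern described here can be sketched as follows; the `safe_eval` helper is a hypothetical stand-in for the sandbox that would execute a model-generated snippet:

```python
import ast
import operator

# Instead of having the model "do" arithmetic token by token, the host runs
# the math in real code. A restricted AST evaluator stands in for the sandbox
# that would execute model-generated snippets.

_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_eval(expr: str) -> float:
    """Evaluate a plain arithmetic expression without using eval()."""
    def walk(node):
        if isinstance(node, ast.Expression):
            return walk(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.operand))
        raise ValueError(f"unsupported expression: {expr!r}")
    return walk(ast.parse(expr, mode="eval"))

# The "model" would emit an expression like this; the host evaluates it exactly.
print(safe_eval("17 * (3 + 4) - 2 ** 5"))  # 87
```

The arithmetic is exact and repeatable, which is precisely the property the raw token predictor lacks.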

      There are algorithms for playing chess that win by analyzing every possible move 5, 10, 100, or more moves in advance and choosing the one most likely to lead to an optimal outcome. That is probably what the Atari game is doing. LLMs could be given the tools to run such an algorithm, but the LLM itself can’t possibly do the same thing.
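The look-ahead idea described above can be sketched with plain minimax. Chess is far too big for a few lines, so this illustrative sketch uses the toy game Nim (take 1-3 sticks; whoever takes the last stick wins); real engines layer alpha-beta pruning, evaluation heuristics, and transposition tables on top of this same skeleton.

```python
# Minimal minimax: exhaustively search every line of play to the end and
# propagate wins/losses back up, which is the "analyze N moves in advance"
# approach a classical game engine uses.

def minimax(sticks: int, maximizing: bool) -> int:
    """Return +1 if the maximizer can force a win from here, -1 otherwise."""
    if sticks == 0:
        # The previous player took the last stick and won.
        return -1 if maximizing else 1
    moves = [minimax(sticks - take, not maximizing)
             for take in (1, 2, 3) if take <= sticks]
    return max(moves) if maximizing else min(moves)

def best_move(sticks: int) -> int:
    """Pick the take (1-3) that leads to a forced win, if one exists."""
    for take in (1, 2, 3):
        if take <= sticks and minimax(sticks - take, False) == 1:
            return take
    return 1  # no winning line; take one stick and hope

print(best_move(10))  # 2: leaves 8 sticks, a losing position for the opponent
```

Nothing here resembles next-token prediction: the search is exact, deterministic, and grounded in the game's actual rules.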

  • Portosian@sh.itjust.works
    4 days ago

    What exactly makes image-generating AI use the ugliest colors for backgrounds? This one is like the stained walls in a chain-smoker’s house.

  • AZX3RIC@lemmy.world
    5 days ago

    Can the Atari answer my questions incorrectly with confidence?

    Checkmate.

  • vrighter@discuss.tchncs.de
    4 days ago

    For some reason it reminds me of a quote from Friends: “voice recognition is gonna be pretty much standard on any computer you buy. So you can be like, ‘wash my car’, ‘clean my room’. You know it’s not gonna be able to do any of those things, but it’ll understand what you’re saying.”

  • deadcatbounce@reddthat.com
    5 days ago

    “Everybody is a genius. But if you judge a fish by its ability to climb a tree, it will live its whole life believing that it is stupid.” Attributed to Einstein, though I’ve read he never actually said it.

    • Devmapall@lemmy.zip
      5 days ago

      I’m not sure I’ve ever read an actual quote by Einstein at this point.

      Or Thomas Jefferson, for that matter.

      I do like the quote regardless.

  • hperrin@lemmy.ca
    5 days ago

    I’d like to see the Atari write a shitty article about the seven best and worst kinds of moviegoers. Or role-play with me that they’re a 300-year-old sparkly vampire and I’m an insufferable teenage girl with zero friends or ability to emote proper human emotions.

  • Rookeh@startrek.website
    5 days ago

    No shit. Chess programs are specifically built and optimised to the nth degree for this specific use case and nothing else. They do not share the massive compute overhead and convoluted nondeterministic nature of an LLM.

    This is like drag racing an F1 car and a Camry and being surprised at the result.

    • leftzero@lemmynsfw.com
      4 days ago

      This is like drag racing an F1 car and a Camry

      More like racing a Reliant Robin and an answering machine.

    • wer2@lemmy.zip
      5 days ago

      I don’t think the Atari Chess program is as optimized as you think.

      • vrighter@discuss.tchncs.de
        4 days ago

        1.19 MHz and 128 bytes (1/8 kB) of RAM: so no transposition tables, no endgame databases, nothing that requires pretty much any memory.

    • morphballganon@lemmynsfw.com
      5 days ago

      Or having a real engine designer design a moderately powerful engine vs. a computer throwing together a blob of metal that looks kind of like an engine.

    • jj4211@lemmy.world
      5 days ago

      In that analogy, billions would be being spent by exotic car manufacturers claiming they will replace all vehicles: airplanes, boats, scooters, bicycles, rockets…

      Also inexplicably the Lamborghini sometimes just throws itself into reverse and insists that it’s moving forward.

        • leftzero@lemmynsfw.com
          4 days ago

          LLM’s are not meant to do anything like playing a chess game

          They’re being sold as being able to play chess, though.

          • Honytawk@feddit.nl
            4 days ago

            Nobody is making those claims. Or they would get sued for false advertising.

            People are interpreting it as if chatbots are capable of playing chess, but that isn’t on the company.

            The claims those companies make are real. They are dumb features that are mostly useless, but the claims are not fake.

            • leftzero@lemmynsfw.com
              4 days ago

              Every single one of those scammers is making those claims, and worse, starting with calling their shit AI.

              They’re selling useless and harmful shit, they know they’re selling useless and harmful shit, and they plan to run away with the money the second the bubble starts to burst, without a single care for the state they’ll leave the internet and the economy in.

  • ComradeSharkfucker@lemmy.ml
    5 days ago

    In all fairness chessbots are REALLY REALLY good. Like incredibly good at chess. I am not shocked the guessing machine lost to one.

      • Gronk@aussie.zone
        5 days ago

        There are chess engines floating around out there under 4 kB, probably even less if you were mainlining the thing in assembly for the 6502 instruction set; still, 128 bytes of RAM to work with is a punish.

        But chess is mostly a ‘solved problem’ computationally. It’s impressive constrained to that hardware, but this whole Atari vs ChatGPT thing is like a grandmaster in a Mechanical Turk vs a toddler.

        • JcbAzPx@lemmy.world
          4 days ago

          The story says the game is also from the ’70s, so that would be before video game chess was anywhere near good, much less a solved problem.

  • Blue_Morpho@lemmy.world
    5 days ago

    4o got wrecked. My AI-fan friend said o3 is their reasoning model, so it means nothing. I don’t agree but can’t find proof.

    Has someone done this with o3?

    • otacon239@lemmy.world
      5 days ago

      It’s a fundamental limitation of how LLMs work. They simply don’t follow a set of rules the way a traditionally programmed computer/game does.

      Imagine you have only long-term memory that you can’t add to. You might get a few sentences of short-term memory before you’ve forgotten the context of the beginning of the conversation.

      Then add on the fact that chess is very much a forward-thinking game and LLMs don’t stand a chance against other methods. It’s the classic case of “When all you have is a hammer, everything looks like a nail.” LLMs can be a great tool, but they can’t be your only tool.

      • Lucy :3@feddit.org
        5 days ago

        Or: if it’s possible to solve a problem with a simple algorithm, that algorithm will always be infinitely more accurate than ML.

        • snooggums@lemmy.world
          5 days ago

          That is because the algorithm has an expected output that can be tested and verified for accuracy since it works consistently every time. If there appears to be inconsistency, it is a design flaw in the algorithm.
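The testability point can be shown in miniature; insertion sort stands in for “the algorithm” here, as an illustrative example:

```python
# A conventional algorithm can be pinned down by exact tests, because the
# same input always yields the same output. Any deviation from the expected
# result is a design flaw, not a roll of the dice.

def insertion_sort(xs):
    out = []
    for x in xs:
        # Find the insertion point by scanning back from the end.
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

# Expected output is exact and repeatable; the check either passes every
# time or fails every time.
assert insertion_sort([3, 1, 2]) == [1, 2, 3]
assert insertion_sort([]) == []
```

No equivalent assertion can be written against a model whose output varies run to run.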

      • snooggums@lemmy.world
        5 days ago

        My biggest disappointment with how AI is being implemented is the inability to incorporate context-specific execution of small programs to emulate things like calculators and chess programs. Why does it take the hard-mode approach to literally everything? When asked to do math, why doesn’t it execute something that emulates a calculator?

        • otacon239@lemmy.world
          5 days ago

          I’ve been waiting for them to make this improvement since they were first introduced. Any day now…

        • Ephera@lemmy.ml
          5 days ago

          That’s definitely being done. It’s referred to as “tool calling” or “function calling”: https://python.langchain.com/docs/how_to/tool_calling/

          This isn’t as potent as one might think, because:

          1. each tool needs to be hooked up and described extensively.
          2. the naive approach where the LLM generates heaps of text when calling these tools, for example to describe the entire state of the chessboard as JSON or CSV, is unreliable, because text generation is unreliable.
          3. smarter approaches, like having an external program keeping track of the chessboard state and sending it to a chess engine, so that the LLM only has to forward the move that the user described, don’t really make sense to incorporate into a general-purpose language model. You can find chess chatbots on the internet, though.
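A minimal sketch of that dispatch loop, with everything hypothetical (the JSON shape, the tool names, and the stubbed `run_model` are illustrative assumptions, not any real provider’s API): the LLM only picks a tool and fills in arguments, while ordinary code does the actual work.

```python
import json

def add(a: float, b: float) -> float:
    return a + b

def legal_chess_move(move: str) -> bool:
    # Stand-in for handing the move to a real chess engine that keeps the
    # board state outside the model.
    return len(move) in (4, 5)  # e.g. "e2e4"

# Each tool must be hooked up (and, in a real system, described extensively).
TOOLS = {"add": add, "legal_chess_move": legal_chess_move}

def run_model(prompt: str) -> str:
    # Fake "LLM": returns a tool call as JSON, the way a real model would
    # after being shown the tool descriptions.
    if "move" in prompt:
        return json.dumps({"tool": "legal_chess_move", "args": {"move": "e2e4"}})
    return json.dumps({"tool": "add", "args": {"a": 2, "b": 40}})

def dispatch(prompt: str):
    # The host, not the model, parses the call and executes the tool.
    call = json.loads(run_model(prompt))
    return TOOLS[call["tool"]](**call["args"])

print(dispatch("what is 2 + 40?"))      # 42
print(dispatch("is that move legal?"))  # True
```

The fragility the list describes lives in `run_model`: if the generated JSON is malformed, the whole loop fails, which is why point 2 above matters.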

          But all in all, it is a path forward where the LLMs could just handle the semantics and then call a different tool for each thinky job, serving at least as a user interface.
          The hope is for it to also serve as glue between these tools, automatically calling the right ones and passing their output into others. I believe the next step in this direction is “agentic AI”, but I haven’t yet managed to cut through the buzzword soup to figure out what that actually means.

        • Zos_Kia@lemmynsfw.com
          5 days ago

          ChatGPT definitely does that. It can write small Python programs and execute them, but it doesn’t do it systematically; you have to prompt for it. It can even use chart libraries to display data.

      • Blue_Morpho@lemmy.world
        5 days ago

        It’s a fundamental limitation of how LLMs work.

        LLMs have been getting reasoning front ends, like o3 and DeepSeek. That’s why they can solve problems that simple LLMs failed at.

        I found one reference rating o3 at chess level 800, but I’d really like to see Atari chess vs o3. My telling my friend how I think it would fail isn’t convincing.