Absolutely needed: we have to get high efficiency out of this beast … and as it gets better, we’ll become too dependent on it.
“all of this growth is for a new technology that’s still finding its footing, and in many applications—education, medical advice, legal analysis—might be the wrong tool for the job,”
Bold assumption.
Yeah, I think there were some efforts, until we found out that adding billions of parameters to a model would let us both write the useless parts of emails that nobody reads and strip out the useless parts of emails that nobody reads.
Historically, AI has always gotten much better. Usually after the field collapsed into an AI winter and several years went by searching for a new technique, only to repeat the hype cycle. Tech bros want it to get better without the winter stage, though.
Each winter marks the end of one generation of AI and the beginning of the next. We’re now seeing more progress, and as long as there’s no technical limit, it seems its progress won’t be interrupted.
What progress are we seeing?
In what area of AI? Image generation is improving by leaps and bounds. Video generation even more so. Image reconstruction for games (DLSS, XeSS, FSR) is seeing generational improvements almost every year. AI chatbots are getting much, much smarter seemingly every month.
What’s one main application of AI that hasn’t improved?
Which chatbots are getting smarter?
I know AI has potential, but specifically LLMs (which most people mean when talking about AI) seem to have hit their technological limits.
Advanced Reasoning models came out like 4 months ago lol
Advanced reasoning? Having an LLM talk to itself?
Lul, yes and no, but they are clearly better at many types of tasks.
Copilot, ChatGPT, pretty much all of them.
Smarter how? Synthetic benchmarks?
Because I’ve heard the opposite from users and bloggers.
AI usually got better when people realized it wasn’t going to do all it was hyped up for but was useful for a certain set of tasks.
Then it turned from world-changing hotness to super boring tech your washing machine uses to fine-tune its washing program.
Like the cliché goes: when it works, we don’t call it AI anymore.
The smart move is never calling it “AI” in the first place.
Unless you’re in comp sci, where AI is a field, not a marketing term. And in that case, everyone already knows this isn’t “it”.
The major thing that killed 1960s/70s AI was the Vietnam War. MIT’s AI Lab (the precursor to today’s CSAIL) was funded heavily by DARPA. When public opinion turned against Vietnam and Congress started shutting off funding, DARPA wasn’t putting money into the lab anymore. Congress didn’t create an alternative funding path, so the whole thing dried up.
That lab basically created computing as we know it today. It bore fruit, and many companies owe their success to it. There were plenty of promising lines of research still going on.
Pretty sure “AI” didn’t exist in the 60s/70s either.
The perceptron was created in 1957, and a physical model was built a year later.
Yes, it did. Most of the basic research came from there. The first section of the book “Hackers” by Steven Levy is a good intro.
Historically “AI” still doesn’t exist.