I like to think of LLMs as the internet's Librarian. They've read nearly all the books in the library, can't always cite the exact page, but can point you in the right direction most of the time.
Completely agree, and for me it is not just about the easier/quicker access to information, but the interactivity. I can ask Claude to spend half an hour to create a learning plan for me, then refine it by explaining what I already know and where I see my main gaps.
And then I can, in the same context, ask questions while reading the articles suggested for learning. There's danger involved there too, as the constant affirmation ("Great point!", "You're absolutely right!") breeds overconfidence, but it has led me to learn quite a few things in a more formal capacity that I would have endlessly postponed before.
For example, I work quite a lot with k8s, but during the day I'm always trying to solve a specific problem. I have never just sat down and started reading about the architecture, design decisions, and underlying tech in a structured format. Now I have a detailed plan ready for filling my foundational gaps over the Christmas break, and this will hopefully save me time during the next big deployment/feature rollout.
I am of the age where the internet was pivotal to my education, but the teachers still said “don’t trust Wikipedia”
Said another way: I grew up on Google
I think many of us take free access to information for granted
With LLMs, we’ve essentially compressed humanity’s knowledge into a magic mirror
Depending on what you present to the mirror, you get some recombined reflection of the training set out
Is it perfect? No. Does it hallucinate? Yes. Is it useful? Extremely.
As a kid that often struggled with questions he didn’t have the words for, Google was my salvation
It allowed me to search with words I did know, to learn about words I didn’t know
These new words both held answers and opened new questions
LLMs are like Google, but you can ask your exact question (and another)
Are they perfect? No.
The benefit of having expertise in some area is that I can see the limits of the technology.
LLMs are not great for novelty, and sometimes struggle with the state of the art (necessarily so).
Their biggest issue is that when you walk in blindly, LLMs will happily lead the unknowing junior astray.
But so will a blogpost about a new language, a new TS package with a bunch of stars on GitHub, or a new runtime that “simplifies devops”
The biggest tech from the last five years is undoubtedly the magic mirror
Whether it can evolve to Strong AI or not is yet to be seen (and I think unlikely!)