The loops that AI enables, and why your kids will be smarter than you
tl;dr: it's because they have new knowledge interpolation pathways.
I think the world is too focused on the particular capabilities of LLMs without considering the new behaviors they enable. One aspect of this is as follows: knowledge acquisition is the rate-limiting step in nearly all human endeavors, and LLMs dismantle this barrier by acting as knowledge interpolation machines.
To see why, consider what happens when you want to do something new: learn a skill, make a decision, take an action. You can only start after acquiring the requisite knowledge. Want to build a house? You need to learn how to build a house. Want to build a new product feature? You need to learn more about the market first. Want to learn to skateboard? You get the idea.
But LLMs represent a paradigm shift here. LLMs let you get an answer to basically any question you have. Need to do something? You no longer need to find a person or do a stupid amount of research to unblock yourself; you can just ask an LLM. As a result, any process rate-limited by knowledge (basically, any process) can fill in knowledge gaps with answers from an LLM.
So far, though, the world has mostly been thinking about how this fits into our workflows as they exist today. But the exciting thing is not that LLMs simply make existing behaviors faster; it's that they enable new behaviors altogether.
To make this concrete, I'm going to talk about what I find to be a particularly compelling application of LLMs: learning (which also happens to be the application most obviously rate-limited by knowledge acquisition).
How LLMs make learning better
We were raised by textbooks, but people are better.
Historically (pre-LLM), whenever I've wanted to learn something, I would instinctively reach for textbooks and courses. There's nothing quite as satisfying as turning to page 1 and shoehorning information into your brain. When I'd get stuck on a particularly difficult concept, I'd turn to Google, Wikipedia, or YouTube videos to fill in the gaps. On their own, of course, these sources were never reliable enough to give you a comprehensive understanding of a subject, but they were strong support vectors. Still, this whole process was long, arduous, and only tenable for the academically inclined.
And in that world, the best alternative for faster knowledge ingestion was people: learning from a person one-on-one unblocks your thinking faster than any textbook possibly can. This was the beauty of education systems: [assuming you leveraged your school's resources in this way,] you could present your learning blockers to another person, and they'd unblock you directly. I probably attended more office hours + problem set nights than lectures during my undergraduate years, and this was precisely why.
But LLMs are more efficient than people.
LLMs offer us a lower-friction alternative to people. Just as with people, with LLMs you get coverage: nearly every question you have will be answered. Not with perfect accuracy, of course, but with enough proximity to truth that you can navigate a space you would otherwise path through far more slowly. The LLM is the pedagogical equivalent of a GPS, where before we only had people, paper maps, and exhaustively driving around. You can reach unblocking answers faster: more expediently than waiting to ask a person, more reliably than Google, and more efficiently than taking a course.
And the great benefit: this encourages curiosity loops.
If faster pathing to your desired answer were the only benefit, LLMs would be a marginal improvement. But the more profound repercussion of LLMs is that they enable you to learn in a fundamentally different way: you don't just get to your answers faster, you can take different paths through the subject matter. In particular, you can follow curiosity-led pathways.
Other media are inherently structured according to another person's curiosity chain, or what they believe your curiosity chain ought to be. As a result, textbooks/courses/etc. are rarely well catered to your own brain. LLMs, on the other hand, allow information to follow the natural flow of your own questions. And these curiosity loops are simply better than learning pathways built without curiosity: knowledge acquired through curiosity is generally better retained and better understood. This is a new loop enabled by the LLM.
This is the opportunity kids growing up with LLMs will have, and I find this so exciting. Look at the following chain of questions my 4-year-old recently asked ChatGPT, and you'll immediately see how fundamentally different this is from a normal conversation with an adult:
E: "How do beavers not get splinters?"
ChatGPT [paraphrased]: They have thick fur and skin.
E: "How does beavers' fur stay on?"
C: It's like how your hair sticks on your head. Fur goes through tiny holes in the skin, and it's snug, so it doesn't fall out easily.
E: "How do walruses' tusks stay on?"
C: It's similar to how teeth stay in our mouths. Here's a simple way to understand it…
E: "Why are there not regular dogs or pups?"
C: Can you clarify?
Me (Dad): "Yes, she was asking why there isn't a standard dog, why do all dogs look different?"
C: Oh I see! Tell her that dogs look different because they come from different places and were bred for different jobs. That's why we have many different types of dogs, and why there isn't one standard dog. Each dog is special in its own way.
E: "Why do otters not have ears?"
C: They do have ears! They might look small and hard to see, especially when they're in the water.
E: "I don't see any ears up there."
C: I understand it might be hard to spot their ears. But if you look closely, you might be able to see their little ears sticking out just a bit. They're always there, just not easy to spot.
E: "How do trees stay up?"
E: "How do birds grow their wings?"
E: "Where does yarn come from?"
E: "How do birds fly?"
E: "Where do socks come from?"
E: "Where do chairs come from?"
E: "Plastic is made of oil!?"
E: "What are phones made of?"
E: "Do you think phones are made of gold and copper too?"
E: "How do phones do all these things?"
E: "Want me to show you a picture of something?"
My answers would have paled in comparison: they'd have been biased by my own framing, riddled with concepts too advanced for a 4-year-old, and, while not my intent, curiosity-breaking. LLMs, on the other hand, have a breadth of knowledge, speed of delivery, and concision that a person just can't compete with. For once, there exists a system that's able to convey ideas at the rate and level of nuance at which a child wants to ingest them.
Final comments
And so your kids will likely be smarter than you, in many ways, at least in the sense of having access to unprecedentedly expeditious knowledge interpolation systems. Practically speaking, though, this is something we all ought to take advantage of immediately. I've personally found it much faster to learn about a subject by asking an LLM questions about it than by taking a course or trying to read through documentation (or, better yet, by throwing the textbook or documentation at the LLM and asking from there).
But a broader comment here: this is one particularly compelling loop born of LLMs, but consider how often you're blocked by knowledge. Companies are often rate-limited by alignment; engineering work is rate-limited by your understanding of a codebase or knowledge of design patterns and libraries; even the entire data industry is about disseminating knowledge locked in data more efficiently. There are certainly existing pathways that can be made more efficient through the integration of LLMs, but consider which pathways, until now impossible, are yet to be discovered. The AI companies springing up should keep this in mind (including our own, of course), as the tremendous opportunity comes not from what AI can do, but from what new behaviors are now enabled because of it.