The loops that AI enables, and why your kids will be smarter than you
tl;dr it's because they have new knowledge interpolation pathways.
I think the world is too focused on the particular capabilities of LLMs without considering the new behaviors they enable. One aspect of this is as follows: knowledge acquisition is the rate-limiting step in nearly all human endeavors, and LLMs dismantle this barrier by acting as knowledge interpolation machines.
To see why this is, consider what happens when you want to learn to do something new, make a decision, or take an action. You can only start after acquiring the requisite knowledge. Want to build a house? You need to learn how to build a house. Want to build a new product feature? You need to learn more about the market first. Want to learn to skateboard? You get the idea.
But LLMs represent a paradigm shift here. LLMs can answer basically any question you have. Need to do something? You no longer need to find a person or do a stupid amount of research to unblock you; you can just ask an LLM. As a result, any process rate-limited by knowledge (basically, any process) can fill in its knowledge gaps with answers from an LLM.
So far, though, the world has mostly been thinking about how this fits into our existing ways of doing things. But the exciting thing is not that LLMs simply make existing behaviors faster; it's that they enable new behaviors altogether.
To make this concrete, I'm going to talk about what I find to be a particularly compelling application of LLMs: learning (and... it also happens to be the one most obviously rate-limited by knowledge acquisition).
How LLMs make learning better
We were raised by textbooks, but people are better.
Historically (pre-LLM), whenever I've wanted to learn something, I would instinctively reach for textbooks and courses. There's nothing quite as satisfying as turning to page 1 and shoehorning information into your brain. When I'd get stuck on a particularly tricky concept, I'd turn to Google, Wikipedia, or YouTube videos to fill in the gaps. On their own, of course, these were never reliable enough sources to give you a comprehensive understanding of a subject, but they were strong support vectors. But this whole process was long, arduous, and only tenable for the academically inclined.
And in that world, the best alternatives for faster knowledge ingestion were people: learning from a person 1-1 allows you to unblock your thinking faster than any textbook possibly can. This was the beauty of education systems: [assuming you leveraged your school's resources in this way,] you could present your learning blockers to another person, and they'd unblock you directly. I probably attended more office hours + problem set nights than lectures during my undergraduate years, and this was precisely why.
But LLMs are more efficient than people.
LLMs offer us a lower-friction alternative to people. Just as with other people, with LLMs you get coverage: nearly every question you have will be answered. Not with perfect accuracy, of course, but with enough proximity to truth that you can navigate a space you would otherwise path through far more slowly. The LLM is the pedagogical equivalent of a GPS, where before we only had people, paper maps, and exhaustively driving around. You can navigate to unblocking answers faster: much more expediently than waiting to ask a person, with more reliability than Google, and with greater efficiency than taking a course.
And the great benefit: this encourages curiosity loops.
If faster pathing to your desired answer were the only benefit, LLMs would be a marginal improvement. But the chief, more profound consequence of LLMs is that they enable you to learn in a fundamentally different way: you don't just get to your answers faster, you can take different paths through the subject matter. In particular, you can follow curiosity-led pathways.
Other media is inherently structured according to another person's curiosity chain, or what they believe your curiosity chain ought to be. As a result, textbooks/courses/etc. are rarely well catered to your own brain. LLMs, on the other hand, let information fit the natural flow of your own questions. And these curiosity loops are simply better than learning pathways built without curiosity: knowledge acquired through curiosity is generally better retained and better understood. This is a new loop enabled by the LLM.
This is the opportunity kids growing up with LLMs will have, and I find this so exciting. Look at the following chain of questions my 4-year-old recently asked ChatGPT, and you'll immediately see how fundamentally different this is from a normal conversation with an adult:
E: "How do beavers not get splinters?"
ChatGPT [paraphrased]: They have thick fur and skin.
E: "How does beavers fur stay on?"
C: It's like how your hair sticks on your head. Fur goes through tiny holes in the skin, and it's snug, so it doesn't fall out easily.
E: "How do walruses' tusks stay on?"
C: It's similar to how teeth stay in our mouths. Here's a simple way to understand it...
E: "Why are there not regular dogs or pups?"
C: Can you clarify?
Me (Dad): "Yes, she was asking why there isn't a standard dog, why do all dogs look different?"
C: Oh I see! Tell her that dogs look different because they come from different places and were bred for different jobs. That's why we have many different types of dogs, and why there isn't one standard dog. Each dog is special in its own way.
E: "Why do otters not have ears?"
C: They do have ears! They might look small and hard to see, especially when they're in the water.
E: "I don't see any ears up there."
C: I understand it might be hard to spot their ears. But if you look closely, you might be able to see their little ears sticking out just a bit. They're always there, just not easy to spot.
E: "How do trees stay up?"
E: "How do birds grow their wings?"
E: "Where does yarn come from?"
E: "How do birds fly?"
E: "Where do socks come from?"
E: "Where do chairs come from?"
E: "Plastic is made of oil!?"
E: "What are phones made of?"
E: "Do you think phones are made of gold and copper too?"
E: "How do phones do all these things?"
E: "Want me to show you a picture of something?"
My answers would have paled in comparison: they'd have been immediately biased by my own framing, riddled with concepts too advanced for a 4-year-old, and, while not my intent, immediately curiosity-breaking. LLMs, on the other hand, have a breadth of knowledge, speed of delivery, and concision that a person just can't compete with. For once, there exists a system that's able to convey ideas at the rate and level of nuance at which a child wants to ingest them.
Final comments
And so your kids will likely be smarter than you in many ways, at least in the sense of having access to unprecedentedly expeditious knowledge interpolation systems. Practically speaking, though, this is something we all ought to take advantage of immediately. I've personally found it much faster to learn about a subject by asking an LLM questions about it than by taking a course or trying to read through documentation (or, better yet, by throwing the textbook or documentation at the LLM).
But a broader comment here: this is one particularly compelling loop born of LLMs, but consider how often you're blocked by knowledge. Companies are often rate-limited by alignment; engineering work is rate-limited by your understanding of a codebase or knowledge of design patterns and libraries; even the entire data industry is about disseminating knowledge locked in data more efficiently. There are certainly existing pathways that can be made more efficient by integrating LLMs, but consider what pathways are yet to be discovered that were, until now, impossible. AI companies springing up should keep this in mind (including our own, of course), as the tremendous opportunity comes not from what AI can do, but from what new behaviors are now enabled because of it.