Evan Lin

AI · Machine Learning · Philosophy · Future · Human-AI Integration

Machine Learning and the Brain: Emergence, Learning, and the Future of Human-AI Integration

8 min read

The intelligence that emerges from vast amounts of data in large models is remarkably similar to the intelligence that emerges from the billions of neurons in the human brain. To this day, we still don't understand exactly why, or at what moment, AI suddenly exhibits "emergence", just as we can't explain how a collection of neurons can give rise to such profound intelligence. It's all too magical, and too similar.

Supervised and unsupervised learning mirror different stages of how we learn. Supervised learning is like how we imitate adults as children, or follow mentors in university—accepting and adjusting within established frameworks. Unsupervised learning is like those moments of self-realization: while solving problems, thinking deeply, or experiencing the world, something stirs deep within, ultimately establishing our life's direction and dreams.
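
To make the distinction concrete, here is a minimal sketch in Python, assuming scikit-learn is available; the dataset and model choices are purely illustrative. The same data is learned once with answers handed down from outside, and once with no answers at all.

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.linear_model import LogisticRegression

# Toy data: 300 points drawn from three hidden groups.
X, y = make_blobs(n_samples=300, centers=3, random_state=0)

# Supervised: the "adult" hands us the answers (labels y), and we adjust to fit them.
clf = LogisticRegression(max_iter=1000).fit(X, y)
print("accuracy against the given labels:", clf.score(X, y))

# Unsupervised: no answers given; any structure must be discovered from the data itself.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("groups it found on its own:", km.labels_[:10])
```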

I firmly believe AI's development will exceed our expectations, and its greatest momentum will come from unsupervised learning, or approaches like Professor Fei-Fei Li's "world models" that enable autonomous thinking. Only by freeing AI from human constraints, letting this silicon-based life explore from zero to one like a child, can we unlock its true potential.

Consider AlphaGo. Trained on the full record of human games, it could at best compete with top players. But after switching to pure self-play reinforcement learning, being told only the rules of Go (the approach behind AlphaGo Zero), the AI underwent a qualitative transformation. It began like a child, learning to win through trial and error and discovering moves humans had never conceived. Eventually, even human champions could no longer defeat it. Within the domain of Go, it has surpassed all of humanity. This is the kind of leap we hope AI will achieve.
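
As a toy illustration of that self-play idea (nothing like AlphaGo's real architecture, which pairs deep networks with tree search), here is a hypothetical Python sketch in which a simple tabular learner teaches itself the game of Nim, given nothing but the rules, by playing against itself and updating move values from the outcomes.

```python
import random
from collections import defaultdict

N_STONES = 21           # starting pile; whoever takes the last stone wins
ACTIONS = (1, 2, 3)     # a move removes 1, 2, or 3 stones
Q = defaultdict(float)  # Q[(stones_left, action)] -> learned value of that move

def choose(stones, eps):
    """Epsilon-greedy choice among the legal moves."""
    legal = [a for a in ACTIONS if a <= stones]
    if random.random() < eps:
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(stones, a)])

ALPHA, EPS = 0.1, 0.2
for _ in range(50_000):                  # self-play episodes
    stones, history = N_STONES, []
    while stones > 0:
        a = choose(stones, EPS)
        history.append((stones, a))
        stones -= a
    # Credit assignment: the player who took the last stone gets +1,
    # the opponent -1, alternating backwards through the game.
    reward = 1.0
    for state, action in reversed(history):
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
        reward = -reward

# The learned policy for small piles: it tends to leave the opponent
# a multiple of 4 stones, the classic winning strategy nobody taught it.
for s in range(1, 9):
    best = max((a for a in ACTIONS if a <= s), key=lambda a: Q[(s, a)])
    print(f"{s} stones left -> take {best}")
```

No human games are involved anywhere: the strategy, such as it is, comes entirely from trial, error, and self-play.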

Yet why haven't today's AIs reached "mastery"? What prevents GPT, Claude, and similar models from having their own "AlphaGo moment"? Their abilities seem to stop at "proficiency" without ever reaching "mastery" or surpassing humans, even though reinforcement learning is applied to them as well.

The answer may lie in this: the rules of the real world are far more complex than Go—so complex that even we cannot fully define them. So AI can only learn within humanity's framework of understanding. Isn't this, in a broad sense, still "supervised learning"?

To solve this problem, Stanford's "world models" offer a direction: build a simulated world and give AI senses to experience it firsthand. Let it learn to make fire, then learn social interaction, then large-scale collaboration. If we set no goals and let it evolve freely through massive simulations, what would it become? Nobody knows.
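
As a deliberately tiny, hypothetical sketch of that idea (a far cry from any real world-model system), the agent below is never shown the simulated world's rules. It records what its own actions do, builds an internal model from those observations, and then plans a route to a goal entirely inside that learned model.

```python
import random
from collections import defaultdict

SIZE, GOAL = 10, 9        # a 1-D grid of 10 cells; reaching cell 9 is the "goal"
ACTIONS = (-1, +1)        # step left or step right

def env_step(state, action):
    """The 'real world': its rules are never handed to the agent directly."""
    return max(0, min(SIZE - 1, state + action))

# 1. Experience: act randomly and record what each action did in each state.
model = defaultdict(dict)          # model[state][action] -> observed next state
state = 0
for _ in range(2000):
    action = random.choice(ACTIONS)
    nxt = env_step(state, action)
    model[state][action] = nxt     # the agent's learned picture of the world
    state = 0 if nxt == GOAL else nxt

# 2. Imagination: plan inside the learned model, taking no real steps at all.
def imagined_path(start):
    state, path = start, []
    for _ in range(SIZE):
        # Greedy toy planner: prefer the action whose predicted next state is
        # nearest the goal (fine here because the goal sits at one end).
        action = max(ACTIONS, key=lambda a: model[state].get(a, state))
        state = model[state].get(action, state)
        path.append(state)
        if state == GOAL:
            break
    return path

print("imagined route to the goal:", imagined_path(0))
```

The interesting part is step 2: the route is computed in the agent's imagination, not in the environment, which is the essence of learning a model of the world rather than memorizing answers about it.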

We are creating something we cannot fully understand. Is this dangerous or safe? It's a gamble. But I have a feeling (or perhaps it's a bold claim): only through fusion with AI can human civilization enter its next stage. Because only through fusion can we truly "understand" AI, drawing safety from comprehension rather than fear of the unknown. The specific method might be brain-computer interfaces or silicon-based augmentation, but I believe someone will walk this path.

👨‍💻

Evan Lin

Interested in AI and its future.