The Original Neural Network: Your Baby's Brain
Where Machine Learning Meets Human Learning
Babies' brains are like untrained AI models. My sister recently had a baby, a lovely little bundle of joy. This means more time spent thinking about babies, or more specifically, one baby. As an "in the swing of it" young adult, I treat children as, for the most part, a non-thought: a luxurious in-game DLC to be indulged in by either the stable or the ignorant, sometimes a mixture of the two. I'm at that age where friends are having kids, or at least starting to seriously consider it. With that in mind, what if you have a kid and it sucks? There's no ethical opt-out button. You can never predict how a child will turn out.
Parenting is still a pretty distant thought; nevertheless, many of my fellow first-wave Gen-Z'ers are taking the early steps onto the ladder of parenthood. A sobering thought. I recently read Permutation City by Greg Egan. A great read, food for thought on "creating life," a real hard sci-fi novel. The deuteragonist, Paul, lives in a world where computing has advanced to the point where one can produce a "Copy": a simulation of yourself based on a mathematical mapping of a mechanistic system, your neurons. The Copy then exists in a computational VR world. Paul decides to go a level deeper than this and create life WITHIN the computational world. A computer within a computer, computer-ception (emulation, in real-world terms). He does this through cellular automata. A cellular automaton is a mathematical model that simulates a grid of cells, each interacting with its neighboring cells according to a small set of programmed rules, here used to simulate the organic generation of life. Very deus ex machina of him.
While our main character is going through the transformational moment that led to his Faustian descent, he describes existence as analogous to a neural network. This struck a chord with a thought I had a few weeks back: personality as an evolutionary byproduct. From a Darwinian perspective, everyone having the same personality would be less than ideal.
Imagine "Hey everyone, let's ALL make this extremely risky journey into the wild," "Let's all compete to be leaders of a group," "Let's ALL be terrified of women (other people aren't scared of women?)." Variation is a social necessity; people in a collective with different traits ultimately empower the collective. With this in mind, there must be a certain level of pre-programming (genetics) that determines character.
I strongly subscribe to a deterministic school of thought. Everything has an explanation; we simply aren't capable of understanding all the factors that contribute to said explanation. How much chess education would a baboon need before it could understand the Queen's Gambit?
There are a number of factors that can be explained, some more heavily weighted than others. Once a system is sufficiently complex, it becomes inscrutable. If it's perpetual, opaque, and complex enough, it appears completely random and out of our control to the observer, even when nothing about it is actually probabilistic. Welcome to our lived reality! My favorite incarnation of this is the LLM (Large Language Model), the engine behind all these new AI gizmos. I won't bore you with the definition, but it's a fancy data generator that takes historical data, loads and loads of it, and synthesizes something new from it. Common knowledge. What's less common knowledge is that one of the building blocks of LLMs is the perceptron, an artificial neuron. A mathematical simulation of a neuron! The building blocks of the system that has come closest to simulating human intelligence are similar in purpose to our dear friend Paul's cellular automata, though not quite the same. Let's explore this with LLMs.
Note: Super interesting topic - Conway's Game of Life. A simple initial pattern in a cellular automaton can produce a massively complex system.
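To make that note concrete, here's a minimal sketch of Conway's Game of Life in Python. The `step` function and the glider seed are my own illustrative choices, not anything from Egan's book; the rules themselves (birth on exactly 3 live neighbors, survival on 2 or 3) are the standard ones.

```python
from collections import Counter

def step(live):
    """Advance the automaton one generation. `live` is a set of (row, col)."""
    # Count how many live neighbors every cell has
    counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # Birth: exactly 3 live neighbors. Survival: 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "glider": five cells that, after 4 generations, reappear shifted by (1, 1)
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
state = glider
for _ in range(4):
    state = step(state)
print(state == {(r + 1, c + 1) for (r, c) in glider})  # True
```

Five cells and two rules, and you get a pattern that walks across the grid on its own: exactly the "simple setup, massively complex system" point.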
Part of the power of LLMs is their weights. Weights are numerical parameters that scale a neuron's inputs and can be changed over time; altering a weight alters the output of a collection of perceptrons. A simple formula would be input × weight = output. If we started to map this against humans, the first thing to take into consideration is the weights. What would a weight be for a human? A philosophy, an ideology, pure computational brain power (memory, pattern matching, speed), health, and so on. There are a myriad of unknowns that determine how we, as humans, come to a decision at any given moment. We appear to be non-deterministic only because of the overwhelming network of inputs and weights that we utilize for even the most basic of decisions. If you were to observe an individual in a vacuum and develop an intimate enough understanding of their life, their behaviors, etc., they would appear as deterministic to you as a programming script appears to its developer.
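The input × weight = output idea fits in a few lines. This is a single classic perceptron, a hedged sketch rather than anything from a real LLM (modern models use smoother activations, but the weighted-sum core is the same); the specific numbers are invented for illustration.

```python
def perceptron(inputs, weights, bias):
    """Fire (return 1) if the weighted sum of inputs crosses the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Same inputs, different weights, different decision:
inputs = [1.0, 0.5]
print(perceptron(inputs, [0.6, 0.9], bias=-1.0))  # 1  (0.6 + 0.45 - 1.0 = 0.05 > 0)
print(perceptron(inputs, [0.2, 0.9], bias=-1.0))  # 0  (0.2 + 0.45 - 1.0 = -0.35)
```

Nothing about the inputs changed between the two calls; only the weights did, and the "neuron" flipped its decision. That's the whole point of the human analogy: same world, different weights, different output.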
Babies are smarter than the standard programming script, not in depth, but in breadth. This lack of depth gives them a much higher potential to suck. They also come with bugs in their code, like crying on flights. Only when I'm in the immediate vicinity, too. One of the core functions of a child is personality development. These personalities form at a very early period in their lives, as any parent will tell you. The behavior of a baby can be modeled as Brownian motion in a defined latent space with a generative feedback mechanism. IYKYK. In English: they do random shit, and get better at doing random shit in line with reality, until it stops seeming random. My nephew is merely an underdeveloped LLM. I would propose that us "adults" are all underdeveloped LLMs. We process the world purely with the information and weights we have to hand. Our "mental" models may carry much more weight than a child's; our output can just as easily be called into question.
What are the "adult" weights for humans that need to be taken into consideration? If you were anything like most of the nerds I imagine reading this, your gut thought is genetics. People love talking about genetics. I want to take a second to divorce ourselves from the mainstream understanding of genetics. A trend I increasingly observe is that a mainstream understanding of something is worth about as much as the paper it's printed on. The way it was taught at school is we have a hard-coded map provided to us by our parents and from there it's set in stone. Destiny is pre-written and pre-determined. Your lot in life is locked in. You are who you are because of your parents, no exception. This has not been a common opinion among neuroscientists since the mid-late 20th century. The more common approach parallels the hardware-software approach in a Von-Neumann architecture.
Hard drives, RAM, CPUs (hardware) layered with software, and firmware working as the middle-man between the two. How useful is a computer with an empty hard drive? You get the gist. The hardware certainly exists, and it is more rigid in form and function than the software that runs atop it. Efficient use of software (mental models) can make up for deficiencies in hardware: imagine memorizing via chunking and spaced repetition versus raw-dogging it with unstructured repetition. And the two are deeply intertwined, each with a large influence on the other. Neuroplasticity is a very real concept, and endocrinological regulation (hormones making you think and do stuff) of the brain is a non-stop process occurring every second of your day. Your endocrinological reactions are especially shaped by your early years, your genetics, and wider environmental stimuli.
Your experience forms your nature. The argument isn't nature or nurture; it's nurture and nature. So my little nephew has weights. What would they possibly be? Health? Wealth? Environmental stimuli, or the lack thereof? Who knows. The brain is highly plastic, and if a deterministic outcome could be easily replicated and mass-produced, I'm sure everyone would be doing it. They are not, so it isn't that simple. I believe this is a deterministic system influenced by an unquantifiable number of unknowns, one that appears random in its function. Who is to say that a baby's reaction to a certain micronutrient in a specific food doesn't trigger a hormonal cascade that alters its life trajectory? A theoretical parallel to this is chaos theory. Edward Lorenz, a mathematician and meteorologist, found while running a weather simulation on his computer that if he rounded a number from 0.506127 to 0.506 to save space, a dramatically different set of weather patterns would be produced. In a seminal paper titled "Predictability: Does the Flap of a Butterfly's Wings in Brazil Set Off a Tornado in Texas?", the core premise was set. The butterfly effect: does a small change upstream result in a massive one downstream? The answer was yes. Both the rounding error and the butterfly demonstrate how a small initial change propagates into a drastically different outcome, just as small parenting decisions can completely alter the trajectory of a child. As for a layperson like myself: untrainable. I prefer my AI models on my computer.
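Lorenz's rounding accident is easy to reproduce in miniature. The sketch below uses the logistic map, x → r·x·(1−x), a standard chaotic toy rather than Lorenz's actual weather model; the only difference between the two runs is the same rounding he made, 0.506127 versus 0.506.

```python
def trajectory(x, r=3.9, steps=50):
    """Iterate the logistic map from x; r=3.9 is in the chaotic regime."""
    xs = [x]
    for _ in range(steps):
        x = r * x * (1 - x)
        xs.append(x)
    return xs

full = trajectory(0.506127)
rounded = trajectory(0.506)
gap = [abs(a - b) for a, b in zip(full, rounded)]
print(f"initial gap: {gap[0]:.6f}, largest gap: {max(gap):.3f}")
```

The starting difference is about 0.0001; within a few dozen iterations the two trajectories bear no resemblance to each other. Same rules, same system, wildly different outcome from a rounding error, which is the whole butterfly-effect (and, per the argument above, the whole parenting) problem.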

