Last September my first child came into the world. He left the comfort of the womb where he’d been developing for nine months. His training and exercises in the womb finally paid off as he took his first breath of atmosphere and sucked his mother’s milk. We call him Thomas, and he has been a joy these past months. I must admit we lost some sleep those first couple of months, and he didn’t do much besides eat, poop, and take naps. During this stage we usually referred to him as “the baby” or “the boy,” residual names from before we chose the name Thomas. We were perplexed that we could not associate this small, helpless human being with the name we had chosen for him. The disconnect did not last long; as he learned to communicate and respond, we learned to associate his given name with his person.

Around three months he started learning all sorts of new tricks. His grin at seeing our faces became noticeably different from the shy smile he gave others. He played peekaboo, cooed and giggled, found his feet (which soon met his mouth), and started rolling over and sitting up. We occasionally flipped him onto his stomach to practice holding himself up and rolling over, what my wife and other trendy parents call “tummy time.” Recently, I switched to sitting him up and seeing how long he can stay upright before falling forward or back into a lying position. Falling face-first onto our cushioned mattress usually made Thomas laugh, but an occasional poor mood of his or risky move of mine, and the inevitable cry that followed, meant playtime was done for the moment.

One morning I re-watched our first video of him smiling as I made faces at him at six weeks old. A couple weeks later at his two-month appointment, his pediatrician lifted him up and smiled at him. Thomas grinned back, and it took my wife and me a minute to realize the doctor wasn’t just admiring our adorable boy. Thomas grinned at the doctor and tracked his face as the doctor rotated him from side to side. A psychologist friend of mine later explained eye tracking to us as she performed a similar test upon first meeting our son. She spouted off a handful of unfamiliar terms and repeated many of the doctor’s same motions with Thomas. I wondered if he learned to identify faces post-birth or if he came pre-programmed that way. If he learned it, what happened in his young brain along the way? If it is an instinctive skill, how did we as a species acquire it?

Being a parent has only intensified my desire to understand how we learn. That desire originally stemmed from an interest in Artificial Intelligence (AI). I conduct research under a professor in my university’s computer science department with the aim of creating intelligent machines. AI research aims to create machines that think and/or act like a human does. The modern approach involves creating machines that can learn through exposure to data. The hope—of most AI researchers and myself—is that machines will successfully learn to act and think in much the same way that humans learn to act and think. As early as 1957, computer scientists were drawing upon the structure of the brain for inspiration for their machines (Rosenblatt). Advances in neuroscience help us understand how we learn and how we might impart the ability to learn to a machine.

We know the brain is made up of individual neurons that pass signals to one another. We also know that the brain enables learning by gradually strengthening the connections between neurons through repeated exposure to data. This basic idea is present in most state-of-the-art learning approaches for machines, but that’s where machine learning principles begin to diverge from what neuroscientists know about human learning. We know more about how humans learn than we ever have before, and much of this knowledge has been acquired only in the last century. Despite this, it seems that if we understood learning better, AI machines like HAL 9000, Ultron, VIKI, Terminator, and WOPR would be a closer reality (or, for a less malicious list: Jarvis, Data, and Sonny)[1]. The lack of fully capable AI in the world indicates that we do not know everything about how humans learn.
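
To make that basic idea concrete, here is a minimal sketch of Rosenblatt-style learning in a single artificial neuron, a perceptron. The toy dataset, learning rate, and epoch count are my own illustrative choices, not anything from the works cited; the point is only that the “connection strengths” (weights) change with repeated exposure to examples:

```python
import random

def train_perceptron(examples, epochs=20, lr=0.1):
    """Train a single artificial neuron (a perceptron) on labeled examples.

    Each example is (inputs, label) with label in {0, 1}. The weights are
    the machine's analogue of synaptic connection strengths: every time the
    neuron misclassifies an example, the weights on the active inputs are
    nudged toward the correct answer.
    """
    n = len(examples[0][0])
    weights = [0.0] * n
    bias = 0.0
    for _ in range(epochs):
        random.shuffle(examples)
        for inputs, label in examples:
            activation = sum(w * x for w, x in zip(weights, inputs)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction  # -1, 0, or +1
            # Repeated exposure gradually strengthens (or weakens) connections.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# A toy dataset: learn the logical AND of two inputs.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
w, b = train_perceptron(data)
```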

For ages, the brain has been a mystery to humankind. Recently we’ve learned a great deal about how the brain works and how to repair and modify it, but we don’t know enough to build a totally artificial brain. Car mechanics know enough about vehicles to fix and modify them, but few mechanics know enough to build one completely from parts. Similarly, neuro- and computer scientists have not unlocked the secrets of learning sufficiently to create an artificial brain. Perhaps, if we better understood what happens in my son’s mind between birth and when he eventually goes off to college (no rush), we’d finally be able to achieve the science fiction of our dreams.

Apparently, infants learn to recognize faces quickly, if not from birth. After blurs of color solidify into defined and colorful shapes, faces seem to be the first things infants pick out and follow with their eyes. If you think about it, adults also excel at the task. We can pick a familiar face out of the thousands we see during a lifetime, even after years of not seeing an old acquaintance. Compared to other objects, human faces seem to be special in this way. I can’t tell one dog from another, let alone a particular tree. Although babies learn faces early on, it seems that they can learn more subtle distinctions than adults can.

In 2013, researchers from McMaster University conducted studies on infants’ perceptual narrowing (Maurer and Werker). They found that infants’ abilities to distinguish faces and vowel/consonant sounds decreased as they grew older. As adults we struggle to hear vocal nuances in languages unfamiliar to us. I learned to speak Portuguese years ago. After months of speaking the language, the words for grandfather and grandmother still sounded identical to me. It wasn’t until I’d spoken the language for nearly a year that I was able to distinguish the two. I won’t bother going into the embarrassing mistakes I made mixing up the words for coconut and crap. Maurer and Werker found that younger infants more easily identified new vocal sounds, and that this ability gradually narrowed as the infants aged. They found similar evidence for perceptual narrowing in distinguishing between faces.

If babies are capable of learning to differentiate a more general set of things, why do they narrow in on human faces so quickly? Just as adults specialize in professions, I assume our brains need to specialize in certain tasks to improve at them. Children may narrow in on human faces because they see more examples of human faces cooing down at them than of any other object. But is that the only reason babies stare at faces from such an early age, or have we adapted over thousands of years to recognize faces before anything else? This seems like just another extension of the tabula rasa or nature-versus-nurture questions. As is usually the case, the answer may be a mixture of both. Having a child of my own has convinced me that we come preprogrammed by nature and genes in at least some ways. When Thomas was born, I figured that reactions like crying, sneezing, and coughing were instinctual, since I was aware that infants did these from birth. When I later saw Thomas first laugh, smile, suck, and chew so naturally, I finally realized just how much of what we do comes from generations of genetic adaptation. All of this is evidence that nature has played a heavy role in our development, but, even with that in mind, we still learn a lot from the nurture we receive from the world around us.

Learning by repeated experience is not unfamiliar to us. It takes repeated experience to identify which musical artist is singing a song; the more experience someone has with dog breeds, the better they can tell them apart; and doctors need years of experience and examples to diagnose a patient from data or imaging equipment. All these examples involve repeated exposure to a type of data. That experience eventually trains a person to know the correct name, breed, or diagnosis for a new instance they have never encountered before.

In the 1950s, mathematicians-turned-computer-scientists were fascinated by the new digital computers. Never had man made such an effective and amazing machine, packing a fraction of the computational power of a human brain into a computer twice the size of my apartment. We’d really outdone ourselves. Despite the comparably primitive ability of the digital computer, pioneers like Alan Turing and John von Neumann asked questions like, “Can machines think?” How incredibly similar the two things seemed, the brain and the digital computer. Instead of computing arithmetic mechanically, as earlier machines had, the digital computer did so with electric signals, just like the brain! The potential seemed limitless.

Despite the field seeming to be on the verge of creating Rossum’s Universal Robots[2], the hype around AI didn’t last. The long and uneventful AI winter of the 1970s, followed by the enthusiastic boom of the 1980s, followed again by the winter of the 1990s, brought dreams of HAL 9000 to an all-time low. It was not until recently that fresh excitement for AI found its way into computer science, as evidenced by the ever-growing AI trend in pop culture and film. The recent boom was motivated by several significant advancements. It was not a novel idea that, for machines to act like humans, they would have to learn to do so. After all, could you explain to a child all the intricacies of human action, behavior, language, and skill and then promptly send them off into the world, fully confident that the conditioning and failures of childhood are completely unnecessary? If so, please share your secret. Most of our learning happens through experience, so, like an infant learning to recognize faces, computer scientists have been trying to teach machines through a process called machine learning.

What catalyzed the latest AI boom? It turns out that computers just weren’t powerful enough for effective machine learning until recently. Now—thanks to the gaming industry—we have hardware specifically designed for the fast and repetitive calculations of machine learning. In addition, it was not until recently that we had the data to teach these machines. For instance, children require significant assistance and instruction during the early years of life. Without the patience of parents and teachers providing plenty of examples of vocabulary and arithmetic, we never would have passed first grade. In much the same way, it wasn’t until the age of information and the internet that machines had the millions of examples they needed to learn simple tasks. In 2012, a group of scientists put old techniques together with much more data and processing power to kindle the flame of the most recent AI boom (Krizhevsky, Sutskever and Hinton). These scientists from the University of Toronto were competing in the ImageNet competition, in which machines were tasked with an image classification problem. Previous efforts had plateaued around 75% accuracy, so when the University of Toronto team achieved 84% it caused a stir of enthusiasm in the field. Researchers using similar techniques achieved superhuman accuracy (better than 94.9%) just a few years later.
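
For a sense of what such a system looks like, here is a heavily scaled-down sketch of an image classifier of the same general family as the 2012 network. This is a toy in PyTorch; the layer sizes, the ten classes, and the random stand-in images are my own illustrative choices, not those of Krizhevsky et al.:

```python
import torch
import torch.nn as nn

# A small convolutional classifier in the spirit of the 2012 ImageNet network
# (far smaller than the real model; sizes here are illustrative only).
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),  # learn low-level visual features
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # learn higher-level features
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 8 * 8, 10),  # scores for 10 classes, assuming 32x32 inputs
)

loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# One step of learning from labeled examples: images in, predictions compared
# to labels, connection weights adjusted. Millions of such examples drive the
# learning in practice.
images = torch.randn(8, 3, 32, 32)    # a stand-in batch of images
labels = torch.randint(0, 10, (8,))   # their (random) class labels
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```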

If Turing and von Neumann were excited about doing arithmetic with digital signals, imagine the stir caused by machines that could recognize pictures of cats no matter what peculiar position they were in. The possibilities once again seemed limitless. But was it that simple? Can the mystery of human learning be boiled down to classifying labeled images? What deeper complexities must still be learned, and how can a machine learn them? It turns out these techniques work better on visual domains, and machines are not as adept with the depth and complexity of domains like human language. There are still plenty of practical problems as well. To start, the cost of electricity to run these machines is far greater than the cost of running the human brain. Then there’s the efficiency problem. Does a child really need to see millions of instances of a cat to be able to tell it from a dog? As we discussed previously, machines do narrow in on certain features to better recognize examples they see repeatedly, eventually losing the ability to recognize completely foreign instances, just as infants do with perceptual narrowing. Most machine learning algorithms do not, however, begin with a bias towards certain features the way infants may be biased towards human faces. There are certainly advantages in efficiency and cost to the biological architecture of the brain. A friend told me that biologists scoff at computer scientists’ attempts at machine learning. Regardless of its disadvantages and incongruities with its biological counterpart, machine learning has achieved feats that woke the world from its AI slumber and motivated the contribution of millions of dollars in research funding.

When I was fourteen years old, my father taught me to play racquetball. He learned from his own father, who played at the fire station when he was a Los Angeles fire chief. I can’t remember how many times I’ve heard the story of how my grandfather won a game against my father while playing with a frying pan and all his fire gear on. He was a well-respected chief. When I was a teen, my father started playing again and invited some of his more athletic friends to learn. They would run around chasing the ball while he stood in the center of the court waiting for it to come to him. Since then I’ve observed how much more important skill is than speed or agility. Racquetball is like golf or pickleball in that, despite the quick ball and gameplay, once you learn to play you can keep playing successfully as you age.

I played infrequently for years and never became good at the game until recently, when I bought my own racquet, determined to improve. I figured if I spent the money on my own equipment I’d be motivated to get my money’s worth. The ploy worked, and I’ve been playing a few times a week for some months now. At first, more athletic friends to whom I’d introduced the game were beating me within a few days. After some frustrating games, I began to improve. Recently I played with my cousin for the first time this year and noticed how considerably I’d improved. I’m still by no means an advanced player, but I can usually hold my own at least enough to have fun. Hopefully, if I invest enough in the game, I’ll be able to win games against my grandchildren with ease.

A lot of what we learn in life is more complex than learning by example. Learning physical movement and control from examples, for instance, would be difficult. My son didn’t learn how to roll over by watching other babies, and I wasn’t lying on the floor in front of him demonstrating the skill. Between three and six months old, he got progressively better at maneuvering his pacifier into his mouth. This required visual concentration, arm movement, and finger dexterity. I watched him as he focused on his hands with a scowl on his face. He would rotate the pacifier by grabbing different parts of it, and then he would move it forcibly towards his mouth. At first it was more luck than skill if he managed to get the nipple in his mouth rather than the handle or his fist. Eventually he was able to achieve his aim more consistently, having learned the visual interpretation and sequence of actions necessary to achieve his goal. Neither I nor his mother sat next to him telling him what he was doing right and wrong throughout the process. When he achieved his goal, he knew he had done something right. When he failed, he knew he had done something wrong. These are examples of learning through positive and negative reinforcement.

Reinforcement learning, a sub-branch of machine learning, is also used to teach machines to complete tasks, and it functions in much the same way as I learned to play racquetball and my son learned hand dexterity. A machine is given the opportunity to act in an environment and receives rewards for good actions until it learns to act effectively. As with learning from repeated example data, the biological reinforcement learning of behavioral psychology seems much more efficient than machine reinforcement learning. The latter requires millions of playthroughs of a task and is usually quite sensitive, often taking significant regressive steps or requiring meticulous fine-tuning. Even so, breakthroughs in this field excited the AI community just as breakthroughs in image classification had a few years prior. Some of the first exciting achievements were in playing Atari games at superhuman levels. Another breakthrough came in 2016, when the London-based DeepMind successfully trained a machine using reinforcement learning to defeat the world champion at the ancient Chinese board game Go (Silver). The achievement came a decade earlier than predicted and shook Eastern and Western cultures alike. Reinforcement learning research has since advanced to be comparable to human experts at games such as Dota 2 and StarCraft II.
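
The core loop is simple enough to sketch. Below is a toy tabular Q-learning agent in a five-state corridor world, entirely my own illustrative setup (not DeepMind’s system): the agent tries actions, receives rewards and mild punishments, and gradually strengthens its estimate of which action is best in each state:

```python
import random

# Tabular Q-learning on a tiny corridor world. States 0..4 sit in a row;
# the agent starts at 0 and earns +1 for reaching state 4, with a small
# negative reward (a "punishment") for every step along the way.
N_STATES, GOAL = 5, 4
ACTIONS = [-1, +1]  # move left or right
alpha, gamma, epsilon = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    s = 0
    while s != GOAL:
        # Mostly exploit the best known action, occasionally explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == GOAL else -0.01  # reward, or mild punishment
        # Nudge the value estimate toward reward plus discounted future value.
        best_next = max(Q[(s_next, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next
```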

Who cares if it takes lifetimes of experience playing these games and months of fine-tuning algorithms to reach human and superhuman levels; didn’t it take even longer for the human brain to evolve into the relatively stable and efficient form in which we find it today? If Thomas had been born millennia earlier, would he have been able to learn to sit, babble, and crawl so quickly? The way I see it, we have the opportunity to study the brain and human behavior to expedite that process as we seek to create artificial intelligence. The first time you give a dog a treat or a squirt from a spray bottle, it may not understand what it did to deserve the reward or punishment. After enough occurrences, the animal often begins to associate the reward with good actions and the punishment with bad. We apply similar techniques to teaching and discipline through positive and negative reinforcement. Society has mixed and dynamic opinions about both; however, we seem to agree that the two methods have different outcomes. Negative rewards, a type of punishment, are sometimes used in machine reinforcement learning (the small per-step penalty in the sketch above is one example), but the paradigm in machine reinforcement learning research treats maximizing reward and minimizing punishment as one and the same objective. Perhaps an approach that treats negative reinforcement differently from positive would benefit machine reinforcement learning.

Learning from others’ experience and rewards is also a form of reinforcement learning. I could watch as many professional racquetball games as I like without ever becoming a professional player, but I can definitely pick up ideas and strategies, especially once I am adept at the fundamentals. We learn from watching others all the time. How does the saying go? Fools learn from their mistakes; wise men learn from the mistakes of others. Learning from others has helped people avoid hours of recuperation, saved expensive experiments from being repeated, and inspired numerous fables, nursery rhymes, and other modes of storytelling. For centuries, we’ve taught our youth valuable lessons through the mistakes and successes of fictional characters. From “The Tortoise and the Hare” to “The Boy Who Cried Wolf,” I wonder how many of my early actions were motivated by the retelling of stories (Aesop’s Fables). I can be successful if I am patient. I won’t be believed if I am constantly dishonest. I learned what rewards and punishments would follow from my actions vicariously, through the experience of others.

Another machine reinforcement learning technique inspired by humans is exploration: the agent seeks out new actions and new observations, the ones that surprise it. Infants employ a similar technique to learn control and understanding. In 2017, Peter Reschke of Brigham Young University published research showing that infants focus more on surprising observations (Reschke, Walle and Flom). In his experiments, he showed 12-month-olds sequences of positive and negative actions followed by a random positive or negative reaction: for example, giving vs. taking a toy, followed by laughing vs. crying in response. On average the children focused longer on the reaction that surprised them (e.g., a positive reaction to a negative action). I’ve noticed that a silly face, funny noise, or unexpected scare often makes Thomas laugh. Consistent with the incongruity theory of humor, many of us tend to laugh at what surprises us (Morreall). This focus on the new and surprising is evidently a natural part of human learning from an early age. No wonder it is a successful technique for teaching machines to learn challenging tasks.
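
One common way to encode this in a machine is a novelty bonus. Here is a small sketch of the count-based version, my own toy illustration of the general idea rather than any published system: the agent receives extra internal reward for states it has rarely seen, so unfamiliar observations attract more attention, loosely analogous to the infants’ longer gaze at surprising reactions:

```python
import math
from collections import defaultdict

visit_counts = defaultdict(int)

def reward_with_novelty_bonus(state, external_reward, beta=0.5):
    """Add an exploration bonus that shrinks as a state becomes familiar."""
    visit_counts[state] += 1
    bonus = beta / math.sqrt(visit_counts[state])
    return external_reward + bonus

# The first visit to a new state is worth noticeably more...
print(reward_with_novelty_bonus("new room", 0.0))   # bonus of 0.5
# ...but the novelty wears off with repetition.
for _ in range(98):
    reward_with_novelty_bonus("new room", 0.0)
print(reward_with_novelty_bonus("new room", 0.0))   # bonus of 0.05
```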

At one point in my recent racquetball revival, while I was struggling to compete with my beginner friends, I played with a very experienced player. I asked him for advice, and he was happy to give me pointers throughout our next few matches. With his help, I picked up techniques and shots that I would not have discovered through my own exploration of the game. I had been stuck playing the same way just to keep up with my inexperienced opponents. I didn’t have many opportunities to explore new movements because I was too busy exploiting the little experience I had in order to perform well. Sometimes the choice between exploration and exploitation is a difficult one, and sometimes it is made for us. In such cases, positive and negative reinforcement alone is not enough, or not quick enough. Learning from a teacher often solves this problem. When explicitly taught a skill, we learn from others’ experience, handed down from teacher to student.

I’ve always enjoyed learning. Mathematics was my subject in high school—much to the chagrin of my sisters who had to go through the same math department after me—but it wasn’t always so enjoyable. In elementary school I felt tortured by times tables and three-digit addition. My young mind thought math the true evil, a tedious, never-ending torture. Near the end of my elementary education, I was introduced to algebra. While other students struggled to wrap their heads around the new concepts, the arithmetic actually became simpler. I was thrilled! I found the same phenomenon when I was introduced to calculus, where I was finally given a calculator for almost every test. I adored Newton for his simple representations of the physical world through derivatives and integrals.

This enlightenment came at the hands of teachers who planned, or were given plans on, how to educate us as students. However, some of their more important, or more influential, lessons were unplanned. In fact, many of the learning moments in my life I can attribute to either a formal or informal teacher, someone with more knowledge or experience attempting to impart some of it to me. Many of these attempts were unsuccessful because of my pride, my lack of background knowledge, or the teacher’s inability to communicate a principle. Among all the methodologies behind good teaching, some principles stand out.

The ancient Greeks used mythology as a form of storytelling to pass along and teach information. In Greek mythology, the centaur Chiron, the adopted son of Apollo, the god of knowledge, was highly revered as a teacher and tutor (Oswalt). His students included Achilles, Hercules, and other heroes and kings. Unlike other centaurs, Chiron was not wild, lustful, or a heavy drinker. Rather, he was intelligent and kind due to his unique parentage (the Titan Cronus and the nymph Philyra). Apollo tutored the centaur in archery, medicine, and music, which contributed to Chiron being credited with the discovery of botany and pharmacy. Chiron was a respected teacher and helped numerous humans and gods learn the skills and knowledge needed for greatness. Like many teachers today, Chiron’s kind and intelligent nature contributed to his ability to help his students learn.

The ancient Greeks recognized the importance of teachers in the learning process. Great teachers even learned from other great teachers. Verbal and written lessons were, and still are, the primary method of passing along information to new learners. Passing information from teacher to student often involves condensing knowledge into a verbal or written representation. A good teacher can simplify their knowledge into something more compact and fundamental for a student. You know that you really understand a subject once you can teach it. Why is that? I wouldn’t mention polymorphism, memory management, or pointers in an introductory programming course, so why would understanding those things help me teach more simply? Perhaps more advanced knowledge gives a teacher the perspective necessary to know where and how to begin. Of course, more knowledge makes answering questions easier. Now that I think of it, I wouldn’t say that more knowledge makes you a better teacher, but it gives you the deeper understanding necessary to teach well. So somehow Chiron’s deeper understanding of skills and virtue made him a better teacher of those principles.

There’s also something to be said for a teacher’s experience teaching. Deeper understanding improves one’s ability to simplify concepts into a fundamental and communicable form, but even teachers with a profound understanding of a subject can have trouble translating it into a form a learner will understand. I’ve had my fair share of genius professors who make no sense in the classroom. So teaching needs to be learned as well. In high school I tutored mathematics for a wide age range. By my second year at it, I not only understood mathematics better, but I understood how different students needed to hear or see the techniques in order to internalize them. Part of this was getting to know students individually, but part was also being able to generalize types of students, which allowed me to quickly learn to communicate with new pupils. Despite the increased ability that comes with years of teaching, my personal experience is that the more brilliant the individual, the more simplicity and beauty they bring to introducing their subject.

Teacher learning is a technique in artificial intelligence research too, but it usually involves machines learning from other machines. Remarkably, this is generally done to compress large pre-trained models (the teacher) into simpler ones (the student), paralleling the traditional teacher’s ability to simplify their own knowledge. Here’s where we might be able to learn something from human learning that we haven’t figured out already: teacher learning from human to machine. What we would like to do is teach a machine just as we would a school pupil: point to objects and give them names, write out complex equations to memorize, and explain the steps of a technical routine.
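
The machine-to-machine version is often called knowledge distillation. Here is a minimal sketch, with hypothetical layer sizes and a random stand-in batch of my own choosing: a large trained network (the teacher) produces softened class probabilities, and a much smaller network (the student) is trained to match them, the machine analogue of a teacher condensing knowledge into a form a pupil can absorb:

```python
import torch
import torch.nn.functional as F

# Teacher: large (and, in practice, already trained). Student: much smaller.
teacher = torch.nn.Sequential(torch.nn.Linear(784, 1024), torch.nn.ReLU(),
                              torch.nn.Linear(1024, 10))
student = torch.nn.Sequential(torch.nn.Linear(784, 32), torch.nn.ReLU(),
                              torch.nn.Linear(32, 10))
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 4.0  # temperature: higher values soften the teacher's confident answers

x = torch.randn(64, 784)  # a stand-in batch of inputs
with torch.no_grad():
    teacher_probs = F.softmax(teacher(x) / T, dim=1)
student_log_probs = F.log_softmax(student(x) / T, dim=1)

# The student learns by minimizing its divergence from the teacher's
# softened predictions rather than from raw labeled data alone.
loss = F.kl_div(student_log_probs, teacher_probs, reduction="batchmean")
optimizer.zero_grad()
loss.backward()
optimizer.step()
```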

The barrier is representing our knowledge in a way that is useful to a machine. A machine can teach another machine a task because the two communicate and learn in similar ways. When we want to teach a machine, it requires monotonous translation of our knowledge into meticulously labeled datasets. Either we need to learn to express ourselves in a clear, digital way, or machines need to improve their ability to understand and make sense of natural language and communication. While neuroscientists plunge into better understanding the brain, computer scientists approach the daunting task of making machines understand us. Although progress has been made in natural language processing, we are still a long way from being able to create a teachable, inquisitive android like Star Trek’s Data. We can, however, demonstrate tasks visually to robots that they can then repeat, but these robots don’t typically ask clarifying questions or demonstrate deep understanding through generalization. Traditional teaching methods may be too dissimilar from the computational methods of artificial intelligence. Regardless, these observations may serve as useful guides and goals moving forward.

After Thomas started making more consonant sounds and babbling more, I began the teaching lessons I’d been waiting to start since he was born. At every chance I had, I faced him towards me, repeating the syllables “dadadada” and hoping that he would catch on. Eventually, I was convinced that some of his responses were legitimate attempts at imitation. Soon enough I’ll need to teach Thomas more important lessons, and I’m not sure of the best way to go about them. How do I teach my boy to be polite to his mother? How do I teach him to respect others who think differently than he does?

Techniques for imparting information in a way that a pupil will learn and understand have been approached in many ways through history. Today’s schools often employ explicit, kinesthetic, and visual approaches to teaching. These work well for certain disciplines and with certain audiences. I, for instance, preferred explicit definitions in high school mathematics that I could internalize and play with mentally until I understood the intuition behind them. In subjects like literature I thrived with participatory activities like discussions and presentations. In history classes, artwork and timelines helped me visualize the context of a time period and internalize it that way. Some lessons even require deliberately subtle teaching techniques.

Religious teachers such as Jesus of Nazareth employed a subtler storytelling technique to help others internalize difficult concepts. Jesus taught new and controversial ideas in a very conservative and close-minded society. In order to help listeners internalize content without directly clashing with their previous conditioning, he taught through symbolic parables. As he told his disciples, the parables were directed to those more open-minded individuals who would be able to internalize concepts that would gradually prepare them for future learning. In this case, parables were used to teach difficult concepts, as opposed to the transparent stories of fables. Teachers’ reliance on stories indicates the effectiveness of stories in teaching and learning. While teaching typically requires the simplification of knowledge for it to be communicated, sometimes abstract or symbolic teaching styles seem more effective in helping a pupil internalize concepts. Unfortunately, teaching through storytelling requires an understanding of the world to make sense of the story and draw conclusions from it. You can imagine a science fiction AI being more puzzled than enlightened by parables, so this learning technique may not be directly applicable to AI. Perhaps it will be more of an indicator of true AI than the path to it.

I think that those who enjoy elementary education enjoy following instructions; those who enjoy secondary education are masochists; and those who enjoy post-secondary education feed on knowledge and thrive when learning. Nevertheless, high school served its purpose by staying out of the way of my education and sending me off to a fine university. There, I discovered real learning and deep thinking. I stopped letting my classes get in the way of my education, and I was invited to explore and to challenge. I made several concurrent decisions after my freshman year: to be a lifelong learner, to go to grad school, and to quit my well-paying engineering internship to work as a student researcher.

As a researcher, I constantly have to learn and acquire knowledge to stay on top of my field. I told a friend once that research was half reading others’ experiments and half failing your own. Despite this, I enjoy the opportunity to learn new things, to strive to know enough to discover something novel in a field saturated with some of the most brilliant minds on earth. Computer science is a relatively new discipline, so that means there should be a relatively small number of things to learn, right? Unfortunately, I think it just means that what we know about computer science is less organized and established. The funny thing about research is that I have to know and make sense of at least some of this knowledge while acquiring new knowledge that makes unique contributions to the mess.

As I mentioned previously, I learned a lot as a student during my secondary education, but everything I learned came from another source. It was someone else’s opinion or a piece of information with a teacher’s bias. In addition, I was never taught to learn; that was a skill I had to pick up on my own. Instead I was taught to regurgitate information and perform well on tests. I was rarely asked my opinion on something, and when I was, it could not disagree with the textbook too much. At a university, I feel like I can take any position, or none at all, on most questions. I can investigate the questions I want to investigate and leave the ones I don’t. I am paid – though not much – to think of new questions and look for answers. Is this not a whole new type of learning? No one is holding my hand through the process or telling me how I need to think. I don’t just seek information from a teacher or someone with more knowledge than me; I learn what I can from them and then set out on a quest for new knowledge. When I come up with a new question or a new answer, I grow excited, and then discouraged when I find old publications detailing what I thought was new. This ability to ask questions and find answers may be key to an AI system’s ability to learn as a human does. Finding holes in your own knowledge compared to someone else’s seems doable, but how can a machine learn to identify holes in our collective human knowledge? If a machine could do that, perhaps it could also fill those holes. At that point, the singularity wouldn’t be far away.

My wife and I have had numerous discussions regarding Thomas’s future education. I received my elementary and secondary education from California public schools while my wife was homeschooled until college. While I struggled with the slow progression of my elementary education, I was blessed with opportunities to explore my interests in a specialized engineering program during secondary school. We’ve thought a lot about the importance of challenge, freedom, exploration, specialization, and emotional and social intelligence in education. We’ve considered options such as public school, homeschool, private or charter schools, and Montessori schools.

Maybe by then we will have created AI skilled enough to generate effective, tailored curriculums for each student. Thomas will wake up in the morning and meet in his virtual classroom with his AI teacher via his VR headset, and he’ll ask our smart kitchen to prepare him some sugar-free, never-soggy cereal for breakfast. Later, I’ll ask our personal voice assistant to call for a self-driving taxi to take him out with some of his friends, even if he insists it’s easier to meet with them in their virtual chat room. I’ll be another parent insisting that he learn some of what it was like to grow up when I was a boy.

I guess I don’t need to ask myself how I will teach him. He’ll learn a lot more than I will ever intentionally teach. The bigger question is how he does the learning. Will repetition be enough to develop good habits? What combination of positive and negative reinforcement from the world around him will result in the best possible development? What teaching methodologies will produce the most learning-enhancing environment for him? Will he learn to love to learn in the same way I have? Understanding the answers to these questions will do more for him than focusing on my role as a teacher alone. For now, I will avoid asking how to go about teaching Thomas and instead consider how he will learn.

Works Cited

Aesop’s Fables. New York: Oxford University Press, 2002.

Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. “ImageNet Classification with Deep Convolutional Neural Networks.” Advances in Neural Information Processing Systems (2012).

Maurer, Daphne, and Janet F. Werker. “Perceptual Narrowing During Infancy: A Comparison of Language and Faces.” Wiley Online Library (2013).

Oswalt, Sabine G. Concise encyclopedia of Greek and Roman mythology. Chicago, 1969.

Reschke, Peter J., et al. “Twelve-Month-Old Infants’ Sensitivity to Others’ Emotions Following Positive and Negative Events.” Wiley Online Library (2017).

Rosenblatt, Frank. “The Perceptron, a Perceiving and Recognizing Automaton.” Cornell Aeronautical Laboratory, 1957.

Saffran, Jenny R., Richard N. Aslin, and Elissa L. Newport. “Statistical Learning by 8-Month-Old Infants.” Science (1996).

Silver, David, et al. “Mastering the Game of Go with Deep Neural Networks and Tree Search.” Nature 529 (2016): 484-489.

Szepesvári, Csaba. “Algorithms for Reinforcement Learning.” Synthesis Lectures on Artificial Intelligence and Machine Learning (2010): 1-103.

Morreall, John. “Philosophy of Humor.” The Stanford Encyclopedia of Philosophy, ed. Edward N. Zalta (2012). <https://plato.stanford.edu/>.


[1] These examples come from 2001: A Space Odyssey, Avengers: Age of Ultron, I, Robot, The Terminator, WarGames, Iron Man, Star Trek, and I, Robot, respectively.

[2] R.U.R. (Rossum’s Universal Robots), a science fiction play written in 1920 by Karel Čapek, contained the first use of the term robot.