Now in his senior year at Princeton University, Nicholas Johnson has been doing some very interesting work on machine learning, a technology that is revolutionizing the development of artificial intelligence.
Recently, Nicholas discussed the nature of his work with Veritas magazine. Here is a transcription of that conversation:
Veritas: What is the definition of machine learning within the broader field of artificial intelligence?
Artificial intelligence, broadly defined, refers to machines being able to execute tasks that humans would consider "smart". Whether or not a task should be considered "smart" is a separate discussion. Examples of tasks generally accepted to fall into this category are image recognition/image labelling and language understanding. Machine learning falls within artificial intelligence and specifically refers to a machine becoming able to execute "smart" tasks after being provided with and learning from data (often extremely large amounts of data).
Veritas: What attracted you to this area of study, and what is the nature of your work?
I was attracted to this area of study because it is an ideal combination of my interests in mathematics and computer science, and because ML has the potential to revolutionize countless industries (and is already doing so). I have spent a fair amount of time developing optimization algorithms (which are at the heart of ML) for specific use cases, developing ML systems with privacy guarantees for users, and proving performance guarantees for ML systems. I am particularly interested in healthcare applications of ML; my undergraduate thesis focuses on an ML approach to preventative health interventions designed to curb the prevalence of obesity. I also have an ongoing project in financial planning and have previously worked on transportation logistics (specifically, bikesharing in Montreal).
Veritas: Does ML have the potential to accelerate the development of AI?
Yes. In the development of the earliest AI systems, humans sought to formalize explicit rules that a machine could combine in complex ways to achieve a certain task. Although this approach is sensible, ML proved to be a more flexible and ultimately more powerful approach to designing AI systems. In the ML model, humans essentially tell machines how to use data, but nothing more. After being provided with data, machines are free to build whatever rules are most effective in achieving the task (and these are usually different from the rules a human would give). An important point is that ML as a model for AI is not a particularly new concept. It has, however, become more powerful in recent years following the development of high-performance computing infrastructure that has vastly increased the amount of data a machine can process in a reasonable period of time, and following the development of new algorithms.
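As an illustrative aside (not part of the interview), the contrast Nicholas describes can be sketched in a few lines of Python on a toy task: in the rules-based approach a human writes the decision rule explicitly, while in the ML approach the human only specifies how to use data (here, pick the threshold that makes the fewest mistakes on labelled examples) and the rule itself is learned. The task, function names, and data below are invented for illustration.

```python
# Toy task: decide whether a number is "large" (ground truth: >= 50).

# Rules-based approach: a human states the rule explicitly.
def rule_based(x):
    return x >= 50

# ML-style approach: the human specifies only HOW to learn from data;
# the machine finds the threshold that misclassifies the fewest examples.
def learn_threshold(examples):
    """examples: list of (value, label) pairs -> best threshold."""
    candidates = sorted({x for x, _ in examples})
    best_t, best_errors = None, float("inf")
    for t in candidates:
        errors = sum((x >= t) != label for x, label in examples)
        if errors < best_errors:
            best_t, best_errors = t, errors
    return best_t

# Labelled data produced according to the hidden rule above.
data = [(10, False), (30, False), (45, False),
        (55, True), (70, True), (90, True)]

t = learn_threshold(data)          # the machine recovers a rule from data
learned = lambda x: x >= t
```

With this data the learned threshold lands at 55, so the learned rule agrees with the hand-written one on new inputs even though no human ever told the machine "50"; with richer data and richer model classes, the same principle scales to rules no human could write down.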
Veritas: Does ML have the potential to "humanize" AI?
Yes and no. It depends on what is meant by "humanize". ML produces AI systems that (almost) perfectly reflect patterns found in the provided data. This data is often produced by humans, so systems produced by ML will reflect patterns created by humans. In this sense, yes, ML would help humanize AI. However, it is important to note that since the data is often produced by humans, it will reflect any biases (conscious or not) that humans have left in the data. In another sense, however, ML does not help humanize AI, in that ML gives machines more autonomy in learning how to complete tasks, whereas other approaches to AI rely more heavily on direct human input.
Veritas: What are the applications of ML, and the implications of this technology for modern civilization?
The applications of ML are limitless. I believe we are in the midst of a technological revolution comparable in significance to the industrial revolution of the early 1800s. I see developments in ML and AI as augmenting human existence: AI systems can either work in tandem with human experts to produce results more consistent and superior to what a human could achieve alone, or automate "simple" tasks, freeing humans' mental bandwidth for higher pursuits.
Background: On Sept. 8, Nicholas was presented with the Class of 1939 Princeton Scholar Award from Princeton University, where he is now in his senior year. The university issued an announcement of his achievement, which included the following:
“Nicholas is an operations research and financial engineering concentrator and is pursuing certificates in statistics and machine learning, applied and computational mathematics, and applications of computing.
"This past summer, Johnson worked as a software engineer in machine learning at Google’s California headquarters,” says the Princeton annoucement. “He previously interned at Oxford University’s Integrative Computational Biology and Machine Learning Group, developing and implementing a novel optimization technique under the supervision of Aleksandr Sahakyan, principal investigator and group head. He presented the project at Princeton’s inaugural Day of Optimization in October 2018 and at the 25th Conference of African American Researchers in the Mathematical Sciences in June 2019, where his project was recognized with the Angela E. Grant Poster Award for Best Modeling.”