Posted on September 26, 2019


Incentives for AI practitioners are broken

I came to Stanford University eager to study artificial intelligence, enthusiastic about building the future. After two years of career fairs, startup pitches, and fireside chats, I grew frustrated by the blatant, excessive commercialism on display across campus (it reads like satire, but see: http://nymag.com/intelligencer/2019/09/how-to-network-through-stanford-university.html). While I did find a small, motivated cohort of students fascinated by solving fundamental problems in AI, they were often drowned out by the industry-oriented majority. Yes, corporations are remarkably prolific when it comes to research published in major conferences. But to truly benefit society, academic research must pursue ideas for the long-term future. I am skeptical that the incentive structure for AI research is currently aligned with this goal.

I believe our field misunderstands the categorical difference between “basic research” and “applied research.” Vannevar Bush, director of the Office of Scientific Research and Development (OSRD), asserted in his 1945 report to the President, Science, the Endless Frontier, that basic research “results in general knowledge and an understanding of nature and its laws.” In the context of AI, this means we ought to think harder about foundational questions like “What is intelligence?” or “How do babies learn?” Excessive industry funding leaves our field hyper-focused on engineering, at the expense of this kind of long-term exploration.

On the other hand, applied research takes a broad theoretical idea (say, convolutional neural networks) and transfers it to a particular problem of interest (like facial recognition). To be clear, applied research is necessary; this is why OpenAI took a $1B investment from Microsoft to leverage cloud computing and scale its neural network training pipelines. But it is no substitute for basic research like the psychology of face perception, which helped establish the field of visual neuroscience.

In 1948, Claude Shannon published “A Mathematical Theory of Communication” in the Bell System Technical Journal. Broadly, the paper uses probability theory to analyze compression and communication. While Shannon’s paper focuses on proving theorems, in retrospect it laid the foundations for encryption, MP3 files, and the Internet. Bell Labs, where Shannon and his eminent colleagues worked, struck the right balance between basic and applied research; as a result, we have the transistor, the laser, and C++.

Given this misalignment of incentives, what should we do? First, like the editorial boards of medical journals, we must scrutinize funding sources. While it may be a net positive that industry supports a great deal of Ph.D. research, it is alarming that many researchers are funded by firms with questionable track records. In January 2017, as a college sophomore, I signed the paperwork for an internship at Uber ATG to work on self-driving cars; after reading the headlines, I felt troubled by the prospect of working at a company with such an aggressive culture, so I withdrew my acceptance. In a twist of poignant irony, Anthony Levandowski’s signature was printed on my non-disclosure agreement.

Second, we need better meta-analysis tools for published research: specifically, software that lets people understand who is working on what. Such tools could address many of the frustrations practitioners express about AI scholarship.
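To make this concrete, here is a minimal sketch of the kind of tool I have in mind, assuming the public arXiv Atom API and the third-party feedparser package; the authors_for_topic helper is hypothetical and purely illustrative, not an existing system.

```python
# Illustrative sketch only: tally which authors publish most on a given arXiv topic.
# Assumes the public arXiv Atom API (http://export.arxiv.org/api/query) and the
# third-party `feedparser` package; `authors_for_topic` is a hypothetical helper.
from collections import Counter
from urllib.parse import quote_plus

import feedparser

ARXIV_API = "http://export.arxiv.org/api/query"


def authors_for_topic(query: str, max_results: int = 100) -> Counter:
    """Count author names over recent arXiv submissions matching `query`."""
    url = (
        f"{ARXIV_API}?search_query=all:{quote_plus(query)}"
        f"&start=0&max_results={max_results}"
        "&sortBy=submittedDate&sortOrder=descending"
    )
    feed = feedparser.parse(url)  # fetch and parse the Atom feed
    counts = Counter()
    for entry in feed.entries:
        for author in entry.get("authors", []):
            counts[author.get("name", "unknown")] += 1
    return counts


if __name__ == "__main__":
    # Print the ten most prolific recent authors on a sample topic.
    for name, n in authors_for_topic("face recognition").most_common(10):
        print(f"{n:3d}  {name}")
```

A real tool would go further, for example by linking authors to affiliations and disclosed funding sources, but even this level of aggregation makes it easier to see who is working on what.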

Third, academia must keep pace. While we may hope that visionaries pursue unsolved problems for their own sake, too many of our brightest minds will be drawn to finding slightly more efficient ways to sell ads as long as doing so pays ten, or in some cases a hundred, times more. Academia must “fight fire with fire” and provide an order of magnitude more funding and tenure-track positions for AI research. Stanford must leverage its endowment to improve living standards for Ph.D. students, while pushing for increased federal spending on AI.

In my view, developing safe AI is the most important problem of our time. It is the difference between the catastrophe of AI-propagated filter bubbles in 2016 and AI applied to medicine to save lives. While the current state is unfortunate, a better future is possible. I am optimistic that with funding disclosure, meta-analysis tools, radically improved support for graduate students, and more basic research funding, we can ensure that AI practitioners abide by a Hippocratic Oath.

Acknowledgments

It’s difficult to list everyone here, but I am grateful for my friends and colleagues at Stanford, The Gradient, and The Thiel Fellowship for helpful conversations around these ideas. I would especially like to thank Michael Swerdlow, Amit Ghorawat, Jeff Hammerbacher, Zhanpei Fang, Ali Partovi, and Vinjai Vale.