Computers may surpass humans, but we'll still have jobs. Here's why.
*This article was originally published by USA Today.*
Twelve-year-old Gary Leschinsky is a nationally ranked chess player in the U.S. He has a bright future ahead of him — but it may not be in chess.
Why? The reality is that there’s not much future for people in chess anymore. Machines have become so advanced that they will always beat us at the game, however smart we are. In "Range: Why Generalists Triumph in a Specialized World", David Epstein notes that this has been the case since grandmaster Garry Kasparov’s loss to the IBM supercomputer Deep Blue in 1997, and that it’s a sign that perhaps we should be outsourcing tactical tasks to computers. Similarly, translation, spell checking, copyediting, transcription, and other jobs heavily reliant on rote memory have all begun to be outsourced to computers.
What Gary Leschinsky has going for him, instead, is something particularly human — his creativity. He’s also an inventor who has patented a watch that can detect food allergies.
What makes humans irreplaceable
Computers and machines can beat us in games like chess, checkers and tic-tac-toe because these games are bound by a finite number of moves and possibilities. Machines can surpass humans in anything that is bounded and literal. They’re inductive: we train them to recognize patterns in data. Humans, on the other hand, are limited in our inductive abilities. There is so much data available today that the human mind simply can’t process it all.
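To see what “bounded” means in practice, consider tic-tac-toe. The game is small enough that a computer can enumerate every possible line of play with the classic minimax algorithm, which is why a machine playing it perfectly can never lose. The sketch below is a toy illustration of that idea — chess works on the same principle, just at a scale that requires far more computing power.

```python
# Toy minimax for tic-tac-toe: because the game is finite ("bounded"),
# the computer can explore every possible future position.
# Board: a list of 9 cells, each "X", "O", or None.

def winner(board):
    """Return "X" or "O" if someone has three in a row, else None."""
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Best achievable outcome for "X": +1 win, 0 draw, -1 loss,
    assuming both sides play perfectly from this position."""
    w = winner(board)
    if w:
        return 1 if w == "X" else -1
    if all(board):            # board full, no winner: a draw
        return 0
    scores = []
    for i in range(9):
        if not board[i]:
            board[i] = player
            scores.append(minimax(board, "O" if player == "X" else "X"))
            board[i] = None   # undo the move and try the next one
    # "X" maximizes the score; "O" minimizes it.
    return max(scores) if player == "X" else min(scores)

# From an empty board, perfect play by both sides ends in a draw.
print(minimax([None] * 9, "X"))  # → 0
```

Chess is bounded in exactly the same sense, but its game tree is astronomically larger — which is why beating humans at it took until 1997 and a supercomputer.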
Here’s the difference: Humans have creative, deductive, emotional and ethical abilities. Machines don’t. This is what makes humans irreplaceable, no matter how many chess matches computers win.
What does this mean for the future of work? Will our jobs really be replaced by machines? A Pew survey found that nearly four in 10 Americans worry that’s the case, and a 2019 Brookings Institution report concluded that a quarter of U.S. jobs will be “severely disrupted” by AI in the coming years.
Contrary to popular fears, however, there’s a very relevant and important place for humans in a world of robots and computers. The key is that we have to be strategic about what roles and tasks we assign to the machines and what roles and decisions we protect for the humans. We should be assigning the “chess-like” tasks to the machine while protecting the creative and values-based tasks that are inherently human.
Right now, the best use of technology is prioritization: filtering, ordering, and ranking data. Most of us, for example, take for granted that when we type “cat” into a Google search bar, we instantaneously have access to millions of pictures of cats. Ten years ago, this would have been impossible. It would have been one of the most impressive feats of computing ever accomplished. That a machine can accurately select a cat from billions and billions of available images is actually a remarkable thing.
It’s not magic, though. Computers have this ability because humans trained them to do it; we essentially fed algorithms millions of images of cats to teach them what an image of a cat looks like. This is how machines learn.
Different roles and tasks
The applications of machine learning have far greater implications than quickly finding cute cats. Today, you can build a computer model to identify virtually anything, provided you have the examples to teach the algorithm. Giant Oak, for example, developed a technology that enables financial institutions and government agencies to identify money launderers, human and drug traffickers, and terrorists.
When we do this, we’re not digging through the haystack to find the needle. We’re teaching a computer to prioritize billions of documents for human investigators, agents, and analysts to review. The computer is not the one deciding whether or not someone is a terrorist or a money launderer. Instead, it’s doing what it does best — using the training data humans have given it to identify and rank potentially relevant information for humans to use in making judgment calls.
For example, we can imagine that over 99% of all bank customers abide by all laws, but some small percentage of them launder money. With proper training data, machine learning can prioritize information likely associated with money laundering over all other information. Now the human need only review the most important information when adjudicating money-laundering behaviors.
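The workflow described above can be sketched in a few lines. This is a hypothetical illustration, not Giant Oak’s actual system: the risk features (large cash deposits, many countries, a new account) and their weights are invented stand-ins for what a trained model would score, and the key point is only the shape of the pipeline — the machine ranks, humans adjudicate.

```python
# Hypothetical sketch of machine-assisted prioritization. A real system
# would learn scores from training data; this hand-written rule merely
# stands in for the model's output.

def risk_score(account):
    """Toy risk score: higher means more worth a human analyst's time."""
    score = 0.0
    if account["cash_deposits"] > 9000:   # structuring-like cash pattern
        score += 0.5
    if account["countries"] > 3:          # transfers spanning many countries
        score += 0.3
    if account["new_account"]:            # little transaction history
        score += 0.2
    return score

def prioritize(accounts, top_k):
    """Rank everything by risk; hand only the top_k to human review."""
    return sorted(accounts, key=risk_score, reverse=True)[:top_k]

accounts = [
    {"id": "A", "cash_deposits": 500,   "countries": 1, "new_account": False},
    {"id": "B", "cash_deposits": 9500,  "countries": 5, "new_account": True},
    {"id": "C", "cash_deposits": 12000, "countries": 2, "new_account": False},
]
for acct in prioritize(accounts, top_k=2):
    print(acct["id"])  # the machine ranks; a human makes the judgment call
```

Nothing in this pipeline declares anyone a money launderer — it only decides which records a human investigator sees first.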
Computers promise amazing efficiencies, but we also need humans. If we’re deciding whether to let someone onto an airplane, open a bank account, or watch our children, we don’t want a computer making the final call.
Computers, robots and machines might impact the “future of work,” but they will never completely replace humans. They’ll just beat us at chess.
Gary M. Shiffman is an applied micro-economist and business executive working to combat organized violence, corruption, and coercion. He received his BA from the University of Colorado in Psychology, his MA from Georgetown University in National Security Studies, and his PhD in Economics from George Mason University. His academic work is complemented by his global operational experiences, including his service as a U.S. Navy Surface Warfare Officer in the Pacific Fleet with tours in the Gulf War; as an official in the Pentagon and a Senior Executive in the U.S. Department of Homeland Security; as a National Security Advisor in the U.S. Senate; and as a business leader at a publicly traded corporation. Currently, Dr. Shiffman serves as the CEO of Giant Oak, Inc., and the CEO of Consilient, both of which are machine learning and artificial intelligence companies building solutions to support professionals in the fields of national security and financial crime. He dedicates time to Georgetown University’s School of Foreign Service, teaching the next generation of national security leaders. Dr. Shiffman published The Economics of Violence: How Behavioral Science Can Transform Our View of Crime, Insurgency, and Terrorism with Cambridge University Press in 2020.