AI and Machine Learning in 2018: Looking Back and Forward

Author: Dr. Zunaid Kazi
Chief Technology Officer
Java / AI & Cognitive Systems / Machine Learning / NLP / Text Analytics

It is that time of the year when everybody puts on their prognostication hat and makes predictions for the new year. Where will AI and Machine Learning (ML) go this year? I am throwing my proverbial hat in the ring by first looking back at 2017 before looking forward to 2018.

Looking Back: 2017

2017 was a pivotal year for advances in AI and ML. It was the year AI was no longer a four-letter word and Deep Learning entered the common vernacular. AI and ML became the buzzwords of the year, taking over from Big Data.

Machine Beats Man

Advances in 2017, led by general reinforcement learning, were exemplified by numerous machine-beats-man achievements. We saw DeepMind’s chess-playing system beat the best chess-playing software after teaching itself to play chess in under four hours. AlphaGo Zero, also from DeepMind, did not even need human data to train itself: it learned the game from scratch by playing against itself. Machines no longer required humans to best humans at a human game. Equally impressive was the extension of AI’s dominance over humans to imperfect-information games (games where some information is private). Libratus, from CMU, beat the world’s best professional poker players in a 20-day competition, ending up with almost $2 million in chips after playing over 120,000 total hands.
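
The core idea behind self-play is easy to illustrate. Below is a minimal, hypothetical sketch of tabular Q-learning on the toy game of Nim (players alternate removing one to three sticks, and whoever takes the last stick wins): the program improves purely by playing against itself, with no human examples. AlphaGo Zero's actual method, which combines deep networks with Monte Carlo tree search, is of course vastly more sophisticated.

```python
import random

N_STICKS = 21          # pile size; taking the last stick wins
ACTIONS = (1, 2, 3)    # a player may remove 1-3 sticks per turn
ALPHA, EPSILON = 0.1, 0.2

# Q[(sticks_left, action)] = expected outcome for the player to move
Q = {}

def q(s, a):
    return Q.get((s, a), 0.0)

def best_value(s):
    """Value of a state from the perspective of the player to move."""
    legal = [a for a in ACTIONS if a <= s]
    return max(q(s, a) for a in legal) if legal else 0.0

def choose(s):
    legal = [a for a in ACTIONS if a <= s]
    if random.random() < EPSILON:
        return random.choice(legal)           # explore
    return max(legal, key=lambda a: q(s, a))  # exploit

def self_play_episode():
    s = N_STICKS
    while s > 0:
        a = choose(s)
        s_next = s - a
        if s_next == 0:
            target = 1.0                  # took the last stick: a win
        else:
            # the opponent moves next, so their gain is our loss
            target = -best_value(s_next)
        Q[(s, a)] = q(s, a) + ALPHA * (target - q(s, a))
        s = s_next

for _ in range(50_000):
    self_play_episode()

# The learned policy recovers the known strategy: always leave the
# opponent a multiple of four sticks.
for sticks in (5, 9, 10):
    legal = [a for a in ACTIONS if a <= sticks]
    print(sticks, max(legal, key=lambda a: q(sticks, a)))
```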

Smart Assistants Arrive

2017 was also the year of the smart conversational assistant. Apple’s Siri, Amazon’s Alexa, Google Home, Microsoft’s Cortana, and many others now find room in our rooms and on our devices. Neural networks, specifically deep learning, in conjunction with faster and more efficient chips such as GPUs and Google’s TPUs, made it faster and more accurate to build speech recognition, natural language understanding, and generation models from Big Data. Not only is speaker-independent, reliable speech recognition here, but Natural Language Processing, driven by breakthroughs in Deep Learning, is moving ever closer to understanding the nuances of spoken language, made more complex by compound sentences and layers of context and co-reference.

Cars Drive Themselves

Almost every car manufacturer has announced an autonomous-car initiative. Waymo, Google parent Alphabet’s self-driving car company, leads the pack, hotly pursued by Uber. Tesla already offers some degree of automation with its Autopilot system and is moving fast towards full automation. Company after company has announced self-driving initiatives: Toyota, BMW, Volvo, Nissan, GM, Ford, Audi, Honda, Hyundai, and more are targeting fully autonomous vehicles on the road in the 2020s. Data from LIDAR, RADAR, cameras, and other sensors are fed into a slew of sophisticated machine learning algorithms, spanning both supervised and reinforcement learning.
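
One classical ingredient of that perception pipeline is sensor fusion: combining redundant, noisy measurements into a single, more reliable estimate. The sketch below shows the simplest possible case, an inverse-variance weighted fusion of one hypothetical LIDAR reading and one hypothetical RADAR reading of the same distance, which is exactly the measurement-update step of a Kalman filter. All numbers are invented for illustration.

```python
def fuse(z_lidar, var_lidar, z_radar, var_radar):
    """Fuse two noisy measurements of the same quantity.

    This is the measurement-update step of a Kalman filter:
    the fused estimate is an inverse-variance weighted average,
    and the fused variance is smaller than either input's.
    """
    w = var_radar / (var_lidar + var_radar)   # weight on the LIDAR reading
    estimate = w * z_lidar + (1 - w) * z_radar
    variance = (var_lidar * var_radar) / (var_lidar + var_radar)
    return estimate, variance

# Hypothetical readings: LIDAR is precise, RADAR is noisier.
est, var = fuse(z_lidar=25.1, var_lidar=0.04, z_radar=24.6, var_radar=0.49)
print(f"fused distance: {est:.2f} m (variance {var:.3f})")
```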

Coda: self-driving cars did actually arrive in 2017. Waymo now has cars driving around the Phoenix suburb of Chandler with no one in the driver’s seat, though an employee rides in the back, ready to intervene if necessary.

AI (Machine Learning) as a Platform Takes Hold

While Google’s TensorFlow led the way earlier, 2017 saw a plethora of Machine (Deep) Learning frameworks and applications announced. Facebook released PyTorch early in 2017; then came AlphaGo (Zero) from Google, Alexa from Amazon, and the Gluon framework from AWS and Microsoft, to name a few. These frameworks opened up the domain to many more people than just the machine learning experts: instead of having to create or tweak algorithms and engineer features, developers can now focus on picking the right algorithms and optimizing performance.
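
To see concretely what these frameworks buy you, here is a minimal sketch in PyTorch: a small classifier defined, differentiated, and optimized in a couple of dozen lines, with no hand-written gradient code. The data is synthetic filler standing in for a real dataset.

```python
import torch
import torch.nn as nn

# A small feed-forward classifier: the framework supplies layers,
# automatic differentiation, and optimizers out of the box.
model = nn.Sequential(
    nn.Linear(20, 64),
    nn.ReLU(),
    nn.Linear(64, 2),
)
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Synthetic stand-in data; a real project would load its own dataset.
x = torch.randn(256, 20)
y = torch.randint(0, 2, (256,))

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()          # gradients computed automatically
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```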

Looking Forward: 2018

So where are we headed? In 2017, AI took over from Big Data as the buzzword of the year. There was much hype, and many headlines screamed that AI would either change the world or end it. While there will still be plenty of sensationalism about AI, 2018 will also be the year when much of the hype settles and we start seeing real, quantifiable results from AI and particularly Machine Learning.

These are my predictions for AI and Machine Learning for 2018.

Conversational Interfaces Spread

Building on today’s chatbots and voice-activated personal assistants, conversational interfaces will become an increasingly common way for people to interact with machines. In many applications these voice-enabled interfaces may not just complement the traditional point-and-click interface; they may supplant it. Rapid improvements in Natural Language Processing, Understanding, and Generation algorithms will make it easier for us to interact with automated systems and applications just as we would with another person. Retailers will rush to adopt conversational agents to provide personalized attention to their customers.
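
At the heart of most such agents is an intent classifier that maps a free-form utterance to an action. Here is a minimal, hypothetical sketch using scikit-learn; the utterances and intent labels are toy examples invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: (utterance, intent). A production assistant
# would use far more data and a far richer model.
utterances = [
    "where is my order", "track my package", "has my order shipped",
    "I want a refund", "return this item", "how do I send it back",
    "what are your store hours", "when do you open", "are you open today",
]
intents = ["track"] * 3 + ["return"] * 3 + ["hours"] * 3

classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(utterances, intents)

print(classifier.predict(["can you tell me when my package arrives"]))
print(classifier.predict(["I'd like to return my purchase"]))
```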

Promising Advances in Deep Learning Algorithms

While Deep Learning has been very promising, the cost and time needed to design and train a usable deep network have been an impediment. Algorithms and approaches that showed promise in 2017 will become the new paradigms. These developments include meta-learning algorithms for unsupervised learning, generative models, and adversarial learning models. But what appears most promising, and most exciting, are Capsule Networks. Geoffrey Hinton, the “father of deep learning,” once again seems to be at the forefront of another breakthrough with the paper on Capsule Networks he published in October 2017. Capsule networks leverage spatial relationships, can show much-reduced error rates, and achieve this with only a fraction of the data traditional methods need.
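
The flavor of the idea is easy to show. Each capsule outputs a vector whose direction encodes the pose of an entity and whose length encodes the probability that the entity is present; the “squash” nonlinearity from the paper keeps that length below one. A small NumPy sketch:

```python
import numpy as np

def squash(s, axis=-1, eps=1e-8):
    """The 'squash' nonlinearity from Sabour, Frosst & Hinton (2017).

    Shrinks a capsule's output vector to length < 1 while preserving
    its direction, so vector length can be read as a probability.
    """
    squared_norm = np.sum(s ** 2, axis=axis, keepdims=True)
    scale = squared_norm / (1.0 + squared_norm)
    return scale * s / np.sqrt(squared_norm + eps)

# A batch of two capsules with 4-dimensional outputs.
capsules = np.array([[0.1, 0.0, 0.0, 0.0],
                     [3.0, 4.0, 0.0, 0.0]])
print(np.linalg.norm(squash(capsules), axis=-1))  # short -> ~0, long -> ~1
```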

Hardware to Support the Software

Something old is new again.

A breakthrough that led to the rebirth of neural networks came when GPUs were first used to train large neural nets. The computationally intensive part of training a network consists largely of matrix multiplications, and since GPUs are essentially parallel floating-point calculators with thousands of cores, the acceleration they offer allowed larger models to be trained and error rates to drop.
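
A quick sketch in PyTorch makes the point; on a machine with a CUDA-capable GPU it runs the multiplications there, and otherwise it falls back to the CPU for comparison:

```python
import time
import torch

# Use the GPU if one is available; otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

if device == "cuda":
    torch.cuda.synchronize()  # make sure setup work has finished
start = time.perf_counter()
for _ in range(10):
    c = a @ b                 # the workhorse operation of training
if device == "cuda":
    torch.cuda.synchronize()  # GPU kernels run asynchronously; wait
elapsed = time.perf_counter() - start
print(f"10 matmuls of 4096x4096 on {device}: {elapsed:.3f} s")
```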

Enter Field-Programmable Gate Arrays (FPGAs). Their low latency, low power draw, and programmable nature make them a natural way to offload onto hardware work that is currently done in software. At the very least, the low power consumption will allow bigger machines to process large data without burning the office down. While GPUs still win the speed race, FPGAs promise to take over the mantle not just for low power and low latency but for speed as well.

AI and ML to Play Major Role in Healthcare

Machine learning algorithms will increasingly move out of academia and research into improving patient outcomes and care. Most of these applications will be behind the scenes, improving the speed, accuracy, and efficiency of processes driven by the massive amounts of data now available.

One promising area has been oncology, where ML applications have detected or identified cancer, in many cases better than a human oncologist. Equally promising are predictive applications that are proactive rather than reactive, enabling caregivers to take preventative action before the actual onset of a condition. There will also be significant progress in using population-based data to provide deeper insights into accountable care and to assist in drug discovery.
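
To give a flavor of this kind of behind-the-scenes application, here is a minimal sketch that trains a classifier on the classic Wisconsin breast-cancer dataset bundled with scikit-learn. It illustrates the workflow only; it is emphatically not a clinical tool.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# The Wisconsin breast-cancer dataset ships with scikit-learn:
# 30 numeric features computed from digitized images of cell nuclei.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test),
                            target_names=["malignant", "benign"]))
```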

Increased Scrutiny

On the flip side, as AI and Machine Learning become ubiquitous, there will be increased scrutiny, from both ethical and transparency perspectives.

Transparency is crucial. As Machine Learning models, and in particular Deep Learning models, become integral components of our decision-making workflows, particularly in critical domains such as healthcare, finance, and law, it becomes imperative that their decisions be explainable. The decisions must comply with laws and regulations, and above all, the predictions made must be trustworthy. There will be increased research on developing interpretable models that leave behind a decision trail.
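
Some model families already leave such a trail. As a minimal sketch, a shallow decision tree trained with scikit-learn can print the exact thresholds behind every one of its predictions:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

# A shallow decision tree is interpretable by construction: every
# prediction can be traced through a handful of explicit thresholds.
data = load_breast_cancer()
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(data.data, data.target)

# Print the learned rules as a human-readable decision trail.
print(export_text(tree, feature_names=list(data.feature_names)))
```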

AI and the issues of ethics and accountability have started to move from the world of science fiction to the world of nonfiction. More than apocalyptic warnings of robot takeovers, we will see people taking long, hard looks at potential biases, whether deliberate or accidental, in our ML models.

Is it going to be a Brave New World?