State of AI — Notes from DataHack Summit 2018
January 24, 2019 | Technology
Artificial Intelligence is a fascinating subject.
It stimulates our imagination like no other ongoing technological advancement.
And as a result, it also gives rise to many speculations. The most interesting (or rather dreadful) of them: "AI will take over the world!" A notion incepted mostly by Hollywood movies rather than reality.
[Image: a still from the movie Ex Machina, in which a humanoid robot outsmarts two highly intelligent people]
Fortunately, the reality is not that dramatic. It is exciting, of course, with all the possibilities and ongoing developments. But for now, AI is mostly an aid to us humans, helping us do more work in less time and with better accuracy.
Recently, Trantor’s AI/ML team was at the DataHack Summit 2018 in Bangalore, where many world-class AI experts, data scientists, machine learning engineers, and technology evangelists gathered to discuss ideas, their feasibility, applications, and much more.
Overall, the event provided many insights into what is relevant in the field of AI today. So, we decided to write this blog post to help our clients and tech enthusiasts understand how to make the best use of this technology in the current landscape.
Here are our key takeaways from the event.
Interpretability of Machine Learning Models
Unlike traditional software, the functional part of a machine learning algorithm happens in a black box. With software, you know exactly which steps led to a specific outcome; with an ML model, you only have a vague idea of how a particular outcome was reached.
Since there is no telling exactly how an ML model produced a specific outcome, there is no assurance that it will always outperform traditional software. This has been a major hurdle in the adoption of AI/ML so far. For technical teams, it is really difficult to convince management to replace legacy software with something that even they cannot fully explain.
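To give a concrete feel for how practitioners peek inside the black box, here is a minimal sketch of one common interpretability technique, permutation importance, using scikit-learn. The dataset and model are illustrative choices, not something specific presented at the summit; the idea is simply to shuffle one feature at a time and see how much the model's accuracy drops.

```python
# Interpretability sketch: permutation importance scores each feature
# by how much shuffling it degrades the trained model's test accuracy.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature 10 times and measure the average accuracy drop.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for idx in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.3f}")
```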
This is how the scenario looks in a real organization:
At the DataHack Summit, one of the speakers, Karthikeyan Sankaran, pointed out three major reasons for the gap that exists between ML predictions and business decision making. He also suggested solutions that can help bridge this gap, at least to an extent that allows organizations to leverage ML without much concern. The following diagram highlights these points.
Automated Machine Learning (AutoML)
The workflow of machine learning models involves many steps, such as data pre-processing, feature engineering, feature extraction, and so on. Many of these steps require an ML expert, who also has to perform algorithm selection and hyper-parameter optimization at every stage.
On top of that, the entire process of ML model training is iterative: it keeps running in search of the optimal solution for a given problem. Clearly, a typical ML workflow is a tedious process, and it consumes a lot of your skilled employees' time.
To address this issue, AutoML was introduced. It builds ML models without requiring human intervention at every stage. By automating the end-to-end process of applying machine learning, it frees up your skilled resources and produces models faster, and these models often outperform those designed with the traditional approach. Overall, a win-win solution in pretty much every scenario.
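As a rough illustration of what that automation looks like in code, here is a sketch using TPOT, one of several open-source AutoML libraries (auto-sklearn and H2O AutoML take a similar approach). The dataset and search budget below are arbitrary choices:

```python
# AutoML sketch with the open-source TPOT library (pip install tpot).
# TPOT searches over whole pipelines (pre-processing, model choice,
# hyper-parameters) using genetic programming.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# generations and population_size set the search budget:
# bigger means slower but usually better pipelines.
automl = TPOTClassifier(generations=5, population_size=20,
                        verbosity=2, random_state=0)
automl.fit(X_train, y_train)
print(automl.score(X_test, y_test))

# Export the winning pipeline as plain scikit-learn code for review.
automl.export('best_pipeline.py')
```

The exported pipeline is ordinary scikit-learn code, which also helps with the interpretability concerns discussed above.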
Related Read: How AI Is Driving Next Phase of Growth in Fintech
Transfer Learning
According to Wikipedia, “transfer learning is a research problem in machine learning that focuses on storing knowledge gained while solving one problem and applying it to a different but related problem.”
Simply put, it involves customizing layers of an already trained ML model to solve a new but similar problem. In a neural network, layers are the stacked building blocks of the model; the early layers tend to capture general features, while the later layers are specific to the original problem.
For example, say you have a trained ML model A that recognizes pictures of dogs. Now, if you want another model B to identify pictures of wolves, you can simply customize certain layers of model A to build model B, instead of building it from scratch.
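A minimal sketch of what this looks like in code, assuming TensorFlow 2.x with Keras; MobileNetV2 as the base network and the two-class dog/wolf setup are illustrative choices:

```python
# Transfer learning sketch: reuse MobileNetV2 trained on ImageNet as
# "model A" and attach a fresh classification head for "model B".
import tensorflow as tf

NUM_CLASSES = 2  # illustrative: e.g. dog vs. wolf

base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights='imagenet')
base.trainable = False  # freeze the general-purpose feature layers

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLASSES, activation='softmax'),  # new head
])
model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(train_ds, epochs=5)  # only the new head's weights are updated
```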
Currently, transfer learning is used extensively in the field of Computer Vision (as in the example above). But its use cases span a much wider range of AI applications.
In 2018, models like BERT (Google), ELMo, and ULMFiT were introduced that made transfer learning possible for NLP (Natural Language Processing). For example, online retailers can use open-source ecommerce chatbot APIs to build chatbots for their own stores. And there are many other open-source pre-trained models that developers can use to build and train their own models with much more ease and in significantly less time, all thanks to transfer learning.
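As a rough illustration (not something demonstrated at the summit), the Hugging Face transformers library has since become a popular way to reuse BERT-family models; the default pipeline below downloads a DistilBERT variant already fine-tuned for sentiment analysis:

```python
# Reusing a pre-trained BERT-family model in a few lines
# (pip install transformers).
from transformers import pipeline

# Downloads a model already fine-tuned for sentiment analysis.
classifier = pipeline('sentiment-analysis')
print(classifier("The delivery was quick and the product works great!"))
# -> [{'label': 'POSITIVE', 'score': 0.99...}]
```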
Final Notes & Remarks
Another key insight we came across at the summit was related to the current industrial state of unsupervised learning. AI architectures like RBMs (Restricted Boltzmann Machines) and GANs (Generative Adversarial Networks) have created quite a buzz in the past, but when it comes to industrial application, they are still largely at the research stage and mostly exclusive to tech giants.
In fact, unsupervised learning is the part that actually poses the threat so often blamed on AI: overpowering the human race, taking our jobs, and so on. But that scenario is still far off, a few decades away at the least. Yet it may come to pass if we are not careful, as tech leaders like Elon Musk have warned. So, while it is important for the next stage of technology development that unsupervised learning architectures keep getting better, it is also necessary that these developments are regulated. What do you say?
Looking for a trusted technology partner for your AI initiatives?