Wrongology: The Art of Learning from Mistakes!
18 May, by Omid Moradiannasab
Mistakes, misunderstandings and missteps. Most of us will do anything to avoid them. In many cases humans find it uncomfortable, not to say humiliating, to even admit their own fallibility. Nevertheless, mistakes are inevitable, not only for humans but also for algorithms.
In the fast-paced world we now live in, a major source of such mistakes is the tendency to optimize for what does not really matter. In the financial domain, the weight of influential market parameters shifts continuously over time, so classical approaches such as deep learning, NLP, and text mining may not be the most efficient way to analyze a financial market, especially since the cost of manual data labelling, required to keep adapting to never-ending change, is high as well.
So far, the purpose of the technology we provide has not been to replace human insight with AI, but to broaden the data sources, to speed up the processes and to extract patterns that could be overlooked by the human eye. The added value of later releases will be the focus on developing an AI that is able to look back every now and then at the decisions it has previously made. This means that any errors the system has made will be used in future rounds as part of a self-improvement process. In other words, the goal is a new software architecture that lets the core AI learn from its own mistakes, similar to the way a human does, i.e. adjusting its model or understanding of a phenomenon each time it gets a signal from it.
Unlike humans, our AI will not be humiliated by making mistakes. Instead, as soon as there is feedback on the outcome of the process (e.g. the sentiment of a piece of news), the core AI acknowledges it, learns from it, puts it into practice, and delivers the improvement in the next release. When it comes to sentiment analysis in the financial domain, such an approach is extremely helpful for determining and focusing on what the customer wants, or how the customer interprets a market signal, in contrast to applying some everlasting golden rules.
This is best achieved by listening to customers and using their feedback as positive or negative rewards, as in reinforcement learning models. A well-known example of reinforcement learning in the AI world is a chess-playing agent. Such models essentially try to optimize their parameters in order to maximize a cumulative reward. This type of learning enables faster and more accurate optimization towards what the customer expects to see at the end of the day.
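To make the idea concrete, here is a minimal sketch of the feedback-as-reward loop using an epsilon-greedy multi-armed bandit, one of the simplest reinforcement learning schemes. This is an illustrative toy, not YUKKA Lab's actual architecture: the "arms" stand for hypothetical candidate sentiment models, the feedback rates are invented, and the class and variable names are ours.

```python
import random

class FeedbackBandit:
    """Epsilon-greedy bandit: each arm is a candidate sentiment model;
    customer feedback (+1 agree / -1 disagree) is the reward signal."""

    def __init__(self, n_models, epsilon=0.1, seed=0):
        self.epsilon = epsilon            # probability of exploring a random model
        self.counts = [0] * n_models      # how often each model was tried
        self.values = [0.0] * n_models    # running mean reward per model
        self.rng = random.Random(seed)

    def select(self):
        # Explore with probability epsilon, otherwise exploit the best model so far.
        if self.rng.random() < self.epsilon:
            return self.rng.randrange(len(self.values))
        return max(range(len(self.values)), key=lambda i: self.values[i])

    def update(self, model, reward):
        # Incremental mean: value += (reward - value) / count
        self.counts[model] += 1
        self.values[model] += (reward - self.values[model]) / self.counts[model]

# Simulated feedback: model 1 satisfies customers 80% of the time, model 0 only 30%.
bandit = FeedbackBandit(n_models=2, epsilon=0.1, seed=42)
true_rates = [0.3, 0.8]
feedback_rng = random.Random(7)
for _ in range(2000):
    m = bandit.select()
    reward = 1 if feedback_rng.random() < true_rates[m] else -1
    bandit.update(m, reward)

print("estimated values:", [round(v, 2) for v in bandit.values])
print("preferred model:", bandit.values.index(max(bandit.values)))
```

After enough rounds of feedback, the estimated value of the better model converges towards its true average reward, and the system routes most traffic to it while still occasionally exploring alternatives.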
Inevitably, like any other solution, this one comes with specific risks. One is feedback quality, as every single piece of feedback matters and can bring about change in either a constructive or a destructive manner. A second is the bias caused by following the feedback of specific customers, or groups of customers who share the same perspective. In an ideal world, moving towards a personalized AI for every customer, one that aligns itself with that customer's perspective, could lower such risks to a great extent.
Algorithms like ours don't learn only by following rules. They learn every day. Failure is instructive. All successful systems are built through trial and error, experimentation, variation and selection. Thus, it's important not to waste mistakes but to learn from them.
This story was provided by Omid Moradiannasab, Software Engineer AI & NLP at YUKKA Lab.