Artificial intelligence: it’s not coming, it’s already here!
2021.01.24
Originally I had intended to write about what to look out for in products bought for Christmas that rely, in one form or another, on artificial intelligence (AI), and also about whether the time heralded by so many science fiction works had finally arrived: the moment when AI takes over from humanity. For many, the first thing that comes to mind here is a humanoid, weapons-wielding Terminator, followed by the reassurance that this is still a long way down the road. In the meantime, however, artificial intelligence has perhaps already taken control of a significant slice of our lives: not in human form, with no weapon in sight, but stealthily, in the form of massively complex and virtually uncheckable algorithms. And you didn't even need to go out and buy it at Christmas!
I believe that the above assessment is not an exaggeration. Anyone using Google, Facebook, YouTube, Twitter and similar apps is constantly being influenced by artificial intelligence algorithms, whether consciously or not. Our first thought might be that Google, however fantastic it may be, is still 'merely' an effective global search engine; Facebook and Twitter are social platforms, and YouTube is a video-sharing site. However, these platforms are business applications whose primary function is to generate profit, even if along the way they have to provide some kind of service we consider useful. Because of the business model they employ, the most important objective for their owners is to get users to open these apps ever more frequently and to spend as much time in them as possible. This can be encouraged, on the one hand, by integrating all sorts of handy functionality. But there is an even more effective way to grab our attention and keep us searching, browsing and watching videos: AI!
In order to learn, AI solutions have to be given a clear goal and have to be fed data. A lot of data! Based on these, the AI begins to configure itself in such a way that its behaviour promotes attainment of the objective. Unlike computer programs designed up front at the drawing board and then translated into code, the fine-tuned algorithm produced by AI learning is, on the whole, a 'black box'. Quite often the designated objective is achieved brilliantly, yet it is far from obvious how this happened or which internal decision-making processes were involved. As a consequence, it is not uncommon to come across 'side effects' that nobody had reckoned on, and which cannot simply be switched off without retraining the entire algorithm. In the case of the platforms mentioned above, it is plain enough that the AI algorithms operate with spectacular efficiency. Or at least in terms of the pre-set objective: to get users to spend as much time as possible searching, chatting, sharing and watching videos. But there are side effects here, too!
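To make this concrete, here is a deliberately simplified, hypothetical sketch in Python, not any platform's actual code, of what 'a clear goal plus a lot of data' looks like in practice: a toy recommender model that configures itself purely from data to maximise a single metric, predicted watch time, with no notion of anything else. All the feature names and numbers are invented for illustration.

# A toy model (illustrative only): it learns from interaction data which item
# features keep users watching. The objective is deliberately narrow, predicted
# watch time, and nothing in the training loop knows or cares about side effects.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: each row describes a recommended video via three made-up
# features (e.g. sensationalism, topical relevance, production quality);
# the target is the observed watch time in minutes.
X = rng.normal(size=(1000, 3))
true_weights = np.array([2.5, 1.0, 0.2])   # in this toy world, sensationalism pays off most
watch_time = X @ true_weights + rng.normal(scale=0.5, size=1000)

# Gradient descent on mean squared error: the model 'configures itself' from the data alone.
w = np.zeros(3)
lr = 0.01
for _ in range(500):
    grad = 2 * X.T @ (X @ w - watch_time) / len(X)
    w -= lr * grad

print("learned weights:", w.round(2))
# The learned model will rank the most sensational content highest, because that is
# exactly what its single objective rewards; any 'side effects' are invisible to it.

The point of the sketch is not the arithmetic but the shape of the process: a narrow goal, a mass of behavioural data, and a resulting set of weights that nobody designed by hand.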
Those who believe they can discern global conspiracies and the intrigues of shadowy powers behind all this are probably looking in the wrong direction. However much we regard Mark Zuckerberg and his colleagues not as saviours of the world but as coldly calculating business people, it is most likely that even they did not foresee what social problems their algorithms would cause. This is not about people with bad intentions or alien lizards, nor even about artificial intelligence algorithms that want to bring humanity down. Rather, it is about the fact that this partly self-learning form of AI, capable of otherwise fantastic results, interprets its objectives very narrowly.
Many people are already thinking about and working on how such flaws in current AI solutions could be prevented or corrected, and how ethical considerations might be built into the design and operation of AI systems. Until then, all we can do is pay close attention to the manipulative behaviour of these algorithms, and try to resist it, while using platforms that otherwise provide great value. Do not take this lightly! To a certain extent, artificial intelligence knows us, our motivations and our instincts, better than we know ourselves.
Gábor Hraskó
Head of CCS Division, INNObyte