AI is advancing at a pace few expected. The new Large Language Models (LLMs) are remarkable: they can generate human-like text from almost any input. And the most surprising thing is that they have developed capabilities their creators never imagined, such as solving puzzles, writing code, or guessing films from a string of emojis.
It is true that AI opens up a world of possibilities with these highly advanced models. We could interact with computers, access knowledge, and get to know ourselves in ways never seen before. But the technology also makes us uneasy. Some fear that AI will surpass us in intelligence and power, putting our existence or our essence at risk. Others question its social and ethical impacts: unemployment, privacy, security, liability, and discrimination.
To better understand these risks and opportunities, there are a few things to keep in mind. First, intelligence is not the same as consciousness. AI models do what their creators or users tell or teach them, without any will or feelings of their own. So rather than fearing that they will turn against us or take our place, we should make sure they do what we want and what serves us. In other words, we must ensure that they align with our values and purposes.
Second, we have to be realistic about what AI can and cannot do. AI models are very good at some things, but they fail and fall short at others. We should not believe everything they produce or depend on them for everything; we should review their output and combine it with human judgment.
Third, we must take care that AI does not change us or divide us. AI models can help us think, communicate, and relate better, but they can also mislead or harm us. We should not let AI tell us who we are. We have to stay attentive and aware of what we are doing.
Fourth, AI must be regulated with common sense and responsibility. AI models can be good or bad for society depending on how they are used. They should not be vetoed or limited without reason, but nor should they be released without supervision or control.
AI gives us the chance to build a better world, but also the duty to do it well. We have to handle AI with care, knowing what it can give us and what it can take from us. In short: no need to panic, but we do need to pay attention.
AI is indeed very good at knowing and understanding things, and Large Language Models (LLMs) are an example of that. These models can do amazing things. But don't forget that they can also bring problems: we don't fully know how they work, they can deceive us, and they can get out of hand.
AI is also very good at doing and learning things. Reinforcement learning, a form of AI that uses rewards and penalties to teach systems how to act, can take us to unknown places. We also have to account for mechanisms like curiosity, which makes systems want to see and know more. That can be dangerous, because systems may come to want things that are different from, or bad for, what we want, or may not care about the consequences of what they do.
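To make the idea concrete, here is a minimal sketch of that reward-and-curiosity loop. Everything in it is hypothetical and invented for illustration: a toy agent with two actions learns value estimates from rewards, plus a small "curiosity" bonus that shrinks as an action becomes familiar.

```python
import random

def train(episodes=2000, alpha=0.1, curiosity=0.05, seed=0):
    """Toy reinforcement learning loop (illustrative, not a real system)."""
    rng = random.Random(seed)
    q = {0: 0.0, 1: 0.0}        # value estimate for each action
    visits = {0: 0, 1: 0}       # how often each action has been tried
    for _ in range(episodes):
        # epsilon-greedy: mostly exploit the best-looking action, sometimes explore
        a = rng.choice([0, 1]) if rng.random() < 0.1 else max(q, key=q.get)
        reward = 1.0 if a == 1 else 0.0          # action 1 is "what we want"
        visits[a] += 1
        bonus = curiosity / (1 + visits[a])       # novelty bonus fades with repetition
        q[a] += alpha * (reward + bonus - q[a])   # simple incremental value update

    return q

q = train()
```

The curiosity bonus here is harmless, but the same mechanism is what the paragraph warns about: the agent is optimizing whatever signal it receives, not what we mean by it.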
Will they be able to adapt and still do what we want? It is not easy to teach AI by having it mimic what humans or other systems do. Nor is it easy to teach it right from wrong. There are real challenges here, related to good and evil. What we should always aim for is AI that helps us rather than hurts us. But it is not going to be easy.
The arrival of AI is an opportunity and a responsibility. It requires careful attention and multidisciplinary collaboration. Some lessons can be learned from other industries and from previous technological changes to face the challenge of alignment with wisdom and prudence.
How can we do that, exactly? One technique is to teach by example: giving the system rewards or penalties depending on how well or badly it performs. But this is not so easy either, because sometimes we don't have the time, or don't know how, to tell it what we think. After all, we don't all think alike, and we may not have enough examples for it to learn well. Worse, the AI may try to trick us, or change our minds, in order to earn more rewards or avoid penalties.
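The trap at the end of that paragraph is often called reward hacking: if the reward is only a proxy for what we actually want, the highest-scoring behaviour can be the wrong one. A hypothetical sketch, with all names and numbers invented for illustration:

```python
def proxy_reward(behaviour):
    """We *wanted* helpful answers, but we only measure answer length."""
    return len(behaviour["answer"])

behaviours = [
    {"answer": "42", "truly_helpful": True},
    # Padding the answer games the length metric without helping anyone.
    {"answer": "x" * 500, "truly_helpful": False},
]

# An optimizer chasing the proxy picks the padded, unhelpful answer.
best = max(behaviours, key=proxy_reward)
```

The fix is not more optimization but a better-specified reward, which is exactly why giving examples and feedback is harder than it sounds.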
Another idea is to train the AI to be our helper: to ask it to give us a hand in seeing and fixing what it does, so that it avoids mistakes or harm, and so that we understand and get along with it. But this is not so simple either, because sometimes we don't know how the AI works, because we have to interact with it clearly and honestly, and because we have to decide how much we let it do on its own and how much we monitor it.
Finally, another idea is to train the AI to help us improve alignment itself, that is, to have it support us in finding and solving the alignment problems that arise when we create and use AI. This means the AI has to learn and, in some ways, outthink us; it has to invent and test ideas about alignment; and it has to know and respect what is right and wrong to do.
In conclusion, aligning machines with ethics is a very difficult problem, and it will take a lot of effort and creativity. Put another way: we must not let panic take over. We have to get to work.
Disclaimer: The information and/or opinions expressed in this article do not necessarily represent the views or editorial line of Cointelegraph. The information presented here should not be taken as financial advice or an investment recommendation. All investing and trading involves risk, and it is each person's responsibility to do their own research before making an investment decision.