Artificial intelligence considerations

In May 2022, Swedish fintech Klarna laid off 700 employees, representing 10% of its workforce, via a pre-recorded video message. After a reorganization, today Klarna is one of the largest banks in Europe, offering payment solutions to 150 million consumers at more than 500,000 merchants in 45 countries and processing 2 million transactions per day, with the support of more than 5,000 employees.

Recently, Klarna revealed that, through an agreement with U.S.-based OpenAI, it has implemented an artificial intelligence (AI) chatbot that handles two-thirds of all chats with its customers, representing 2.3 million conversations. The assistant also handles refunds, cancellations and even disputes arising during the purchase process. Klarna has said that this technology performs the work of 700 people: precisely the number of employees who were laid off in 2022.

The rise of AI is now a reality. Although it may seem to bring more harm than good, this technology can benefit societies around the world. To do so, however, it inevitably requires a strong ethical commitment.

The impact of AI in the social domain

AI works 24 hours a day, 7 days a week, 365 days a year. Klarna's AI chatbot, for example, not only speaks 35 languages and is available in 23 markets, but also operates without rest, without complaint and without risk of accidents or occupational illness. If a company's goal is to generate as much capital as possible, how can the human workforce compete with increasingly intelligent AI?

The implications of AI have been analyzed for some years. In its 2021 recommendation, the United Nations Educational, Scientific and Cultural Organization (UNESCO) highlighted the ethical implications associated with AI, because of its potential impact “on decision-making, employment, health care, education, access to information, consumer and personal data protection, the environment, democracy, the rule of law, security, human rights and fundamental freedoms, including freedom of expression, privacy and non-discrimination.”

In recent history, major ethical dilemmas have arisen in relation to technological advances. The Manhattan Project, for example, was developed thanks to the contribution of great scientists of the time, who created a weapon of mass destruction unthinkable in those days. Could the inappropriate use of AI generate even more disastrous social impacts than the atomic bomb? Or is it rather an opportunity to take productivity to another level and transform the economy and people’s lives?

According to the World Economic Forum, 85 million current jobs may be displaced by 2025, and the OECD estimates that up to 46% of jobs could be automated, generating uncertainty for those who hold them. Automation will affect manufacturing jobs the most, but activities such as driving vehicles may also change significantly, as has already happened with autonomous taxis in San Francisco, California.

AI is also expected to open up new job opportunities in areas such as systems programming. The World Economic Forum, for example, has indicated that 97 million jobs could be created by digital transformation. But can we assume that all of these jobs will be held by humans, or will we see AI making important decisions on its own? Is it far-fetched to think that in the not-too-distant future there will be many companies made up of tiny groups of humans, capable of generating $40 million a year? Perhaps not.

Education is another field sensitive to the use of AI. In July 2023, a professor at the University of Costa Rica detected the use of ChatGPT to answer test questions in a group of 18 incoming students. This poses another ethical dilemma: AI can be used to bypass academic requirements that exist precisely to develop skills of various kinds.

Clearly, AI can be used positively to improve many aspects of daily life, such as health, education and employment, provided we are able to maintain ethical conduct and governments manage the associated risks.

Regulatory framework for AI

UNESCO has highlighted the need to establish mechanisms to ensure ethics in the use of AI. The United States and the more than 20 member states of the European Union are currently analyzing regulatory frameworks for AI, with the aim of ensuring the technology benefits society. We hope they succeed.

Machines may do what they will, but humans can ethically decide which of their products to accept. For example, the Costa Rican Congress recently presented the AI Regulation Law, which aims to establish guidelines for the development, implementation and use of AI in the country, in accordance with the principles and rights established in the Political Constitution of 1949 and the international treaties to which Costa Rica is a party. In addition, the proposed law focuses on the protection and promotion of dignity, welfare and rights. By the way: the proposal was drafted with the help of ChatGPT.

It is possible that well-regulated and well-managed AI will positively revolutionize life as we know it. Yet beyond those benefits, and beyond the potential negative impacts and risks, no AI will be able to empathize and make the best decisions; it will not have the capacity for reasoning and abstract thought that we possess; it will not be aware of its existence and its environment; and, most importantly: we decide where we want to go with it.