The Need to Use Artificial Intelligence Ethically
As humans, we often entertain the fantasy of confronting forms of intelligence that act against our interests. When it comes to the challenge of using artificial intelligence (AI) ethically, we find ourselves at a tipping point: this technological advance promises unprecedented progress, yet it also brings challenges and difficulties that warrant reflection and deserve action.
These accelerated changes have generated a series of social problems, and we're struggling to keep pace with this new form of intelligence. So, what action should we be taking?
To start with, we need to establish moral, and probably also legal, limits on the use of artificial intelligence. If we don't, we risk turning what's meant to be an advance for civilization into the exact opposite.
Using artificial intelligence ethically
The debate around AI and its use is broad and complex. That said, there’s some agreement on certain moral norms in this regard.
1. Artificial intelligence at the service of human interests
An article published in the Anales de la Real Academia de Ciencias Morales y Políticas claims that, from a humanist point of view, AI has the task of minimizing people's avoidable suffering. This is where the concept of the malicious use of artificial intelligence comes into play: it refers to the potential dangers that the misuse of these programs poses to society.
Undoubtedly, the safety of individuals must be guaranteed, along with their privacy and identity and those of their environment. Otherwise, the precept of minimizing suffering would be completely breached.
2. Avoiding the dictatorship of data
The collection of massive amounts of data (big data) is the engine of progress in branches as disparate as medical technology and economic development. However, when data is used to profile the population and segregate it, we speak of the 'dictatorship of data', as a bulletin from the CSIC points out.
Continuing with this example, AI could collect huge samples of the results of a new medical treatment or the incidence of health problems. In this scenario, we’d have to ask ourselves to what extent it’s ethical for an insurer to have access to this kind of data for giving us quotes or providing us with coverage.
3. Respecting neuro rights
Big data could also be used to make predictions about human behavior, something that could be exploited extensively in the field of marketing. Therefore, to use artificial intelligence ethically, such data shouldn't become a tool that influences users' identity or their cognitive freedom. These protections are known as neuro rights.
In the field of artificial intelligence, it’s essential to ensure that neuro rights are respected in the collection, storage, analysis, and use of brain data. This involves obtaining informed and explicit consent from individuals before collecting data from their brains. In addition, the privacy and confidentiality of the data must be protected, and it must be used ethically and responsibly.
Furthermore, respect for our neuro rights must ensure that AI isn’t used to manipulate or unduly influence our identities, cognitive freedom, or autonomy.
This encompasses avoiding any discrimination, stigmatization, or manipulation based on brain data. Moreover, it implies ensuring that AI-based decisions and actions are transparent, explainable, and fair. However, this is quite a challenge, since most of the models that artificial intelligence works with are opaque. In effect, they give good results, but we don’t know why.
4. Preserving human dignity
Certain jobs, especially those that involve providing care, are considered unsuitable for AI and robots because they require the capacity for empathy and respect. For example, it wouldn't be ethical to subject an individual to therapy directed by artificial intelligence, nor to have AI act as a police officer or judge.
As a matter of fact, the concept of empathy in robots poses extremely interesting challenges, due to the nature of human emotion and consciousness. Although robots can be programmed to recognize and respond to human facial expressions, tone of voice, and other emotional cues, they don’t have the ability to experience emotion and understand in the same way that we do.
A paper published in Economía y Sociedad explains that intelligent technologies are being given functions that involve managing emotions. Consequently, like humans, they run into a contradiction between moral duty and the scenario in which that duty must be carried out.
5. Keeping sources open
One statement prevails on the subject of artificial intelligence: the idea that its code should be open and public. Moreover, its development shouldn't be in the hands of a few, since it's a technology that directly affects people's lives, social fabric, and even their culture. Indeed, transparency must be guaranteed and the malicious use of AI prevented.
Why should we use artificial intelligence ethically?
From national security to the use of an app, from politics to medicine, the use of artificial intelligence must be reviewed ethically and in an unbiased manner. Any malicious use of it would not only create or amplify threats to our society but also deepen their negative consequences.
Since AI is changing our world and culture, those of us affected by it must develop, in parallel, a culture of responsibility and good usage. Contrary to what many people think, this doesn't only involve learning and applying cybersecurity measures.
However, promoting such a responsible culture is only the first step. It's essential that governments and companies take measures for the ethical management of AI. Fortunately, moral reflection has already begun, and there have been some advances in this regard, as a study in Derecho Global notes.
The effectiveness of the conclusions of this reflection will depend on whether they meet their humanist objective. For the moment, those who are really dominating us are the humans behind the robots. Indeed, for now, behind all artificial intelligence, lies human intelligence.
- Brundage, M., Avin, S., Clark, J., et al. (2018). The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation. https://doi.org/10.17863/CAM.22520
- Cortina Orts, A. (2019). Ética de la inteligencia artificial. Anales de la Real Academia de Ciencias Morales y Políticas, 379-394. Ministerio de Justicia. https://www.boe.es/biblioteca_juridica/anuarios_derecho/abrir_pdf.php?id=ANU-M-2019-10037900394
- García Pastor, E. (2022). Definiendo una inteligencia artificial más ética. Consejo Superior de Investigaciones Científicas. https://www.csic.es/es/actualidad-del-csic/definiendo-una-inteligencia-artificial-mas-etica
- González Arencibia, M., & Martínez Cardero, D. (2020). Dilemas éticos en el escenario de la inteligencia artificial. Economía y Sociedad, 25(57), 93-109. https://dx.doi.org/10.15359/eys.25-57.5
- Fernández Fernández, J. L. (2021). Hacia el Humanismo Digital desde un denominador común para la Cíber Ética y la Ética de la Inteligencia Artificial. Disputatio. Philosophical Research Bulletin, 10(17), 107-130. https://dialnet.unirioja.es/servlet/articulo?codigo=8018155
- Porcelli, A. (2020). La inteligencia artificial y la robótica: sus dilemas sociales, éticos y jurídicos. Derecho Global. Estudios sobre Derecho y Justicia, 6(16), 49-105. Epub January 27, 2021. https://doi.org/10.32870/dgedj.v6i16.286
- Recomendación sobre la ética de la inteligencia artificial. (2021). UNESCO. Retrieved April 11, 2023, from https://unesdoc.unesco.org/ark:/48223/pf0000381137_spa
- Terrones Rodríguez, A. L. (2018). Inteligencia artificial y ética de la responsabilidad. Cuestiones de Filosofía, 4(22). https://www.researchgate.net/publication/326907301_Inteligencia_artificial_y_etica_de_la_responsabilidad