{"id":791,"date":"2025-11-12T09:08:12","date_gmt":"2025-11-12T09:08:12","guid":{"rendered":"https:\/\/i2sc.es\/?p=791"},"modified":"2025-11-12T09:59:08","modified_gmt":"2025-11-12T09:59:08","slug":"una-ia-funcional-y-de-calidad-como-base-para-una-ia-etica-y-legal","status":"publish","type":"post","link":"https:\/\/i2sc.es\/en\/blog\/una-ia-funcional-y-de-calidad-como-base-para-una-ia-etica-y-legal\/","title":{"rendered":"Functional, high-quality AI as the basis for an ethical and legal AI"},"content":{"rendered":"<p>Although every new information technology presents risks, AI is perhaps one of the technologies that most amplifies their magnitude, both negative (due to its potential impact at the organisational and social level) and positive (due to the great opportunities it offers, for example, for decision support).<\/p>\n\n\n\n<p>With regard to traditional IT risks, it should be noted that these evolve more rapidly in AI than in most other areas, giving rise to risks such as:<\/p>\n\n\n\n<ul style=\"padding-right:25px;padding-left:25px\" class=\"wp-block-list\">\n<li class=\"translation-block\"><strong>Legal<\/strong>, when intelligent systems do not comply with current regulations.<\/li>\n\n\n\n<li class=\"translation-block\"><strong>Ethical<\/strong>, when insufficient thought is given to how intelligent systems are used.<\/li>\n\n\n\n<li class=\"translation-block\"><strong>Technical<\/strong>, when the software or hardware infrastructure does not ensure the robustness of intelligent systems.<\/li>\n<\/ul>\n\n\n\n<p class=\"translation-block\">In terms of legal aspects, the most notable is the European Union's <a href=\"https:\/\/artificialintelligenceact.eu\/es\/\" target=\"_self\">AI Act<\/a>, and at the national level in Spain, the \u2018Anteproyecto de Ley para el Buen Uso y la Gobernanza de la Inteligencia Artificial\u2019, which develops the sanctions and governance regime provided for in the AI Act, adapting it to Spanish 
legislation.<\/p>\n\n\n\n<p class=\"translation-block\">Regarding ethical aspects, the technical report <a href=\"https:\/\/www.iso.org\/standard\/78507.html\" data-type=\"link\" data-id=\"https:\/\/www.iso.org\/standard\/78507.html\" target=\"_self\">ISO\/IEC TR 24368 <em>Information technology \u2014 Artificial intelligence \u2014 Overview of ethical and societal concerns<\/em><\/a> is highly relevant, pointing out that AI ethics is a field within applied ethics. It brings together various ethical frameworks and uses them to highlight the new ethical and societal concerns that arise with AI. It also provides examples of practices and considerations for creating and using ethical and socially acceptable AI. Also noteworthy are the two initiatives on this topic led by <a href=\"https:\/\/www.linkedin.com\/in\/jppenarrubia\/\" data-type=\"link\" data-id=\"https:\/\/www.linkedin.com\/in\/jppenarrubia\/\" target=\"_self\">Juan Pablo Pe\u00f1arrubia Carri\u00f3n<\/a>, Vice-President of the Council of Professional Associations of Computer Engineering in Spain (CCII), at CEN\/CENELEC: \u00abGuidelines on tools for handling ethical issues in AI system life cycle\u00bb and \u00abGuidance for upskilling organisations on AI ethics and social concerns\u00bb.<\/p>\n\n\n\n<p class=\"translation-block\">As for the technical side, there are <a href=\"https:\/\/www.amazon.es\/Gobierno-Gestin-Calidad-Inteligencia-Artificial\/dp\/B0DK8CVM61\" data-type=\"link\" data-id=\"https:\/\/www.amazon.es\/Gobierno-Gestin-Calidad-Inteligencia-Artificial\/dp\/B0DK8CVM61\">international standards<\/a> for evaluating, ensuring and improving the functionality and quality of intelligent systems. 
Among these, we highlight the importance of good software processes for developing AI systems (as reflected in the standard <a href=\"https:\/\/www.iso.org\/es\/contents\/data\/standard\/08\/11\/81118.html\" data-type=\"link\">ISO\/IEC 5338<\/a>) and of quality models for AI software (standard <a href=\"https:\/\/iso25000.com\/index.php\/normas-iso-25000\/iso-25059\">ISO\/IEC 25059<\/a>) and data (standard <a href=\"https:\/\/iso25000.com\/index.php\/normas-iso-25000\/iso-5259\" data-type=\"link\" data-id=\"https:\/\/iso25000.com\/index.php\/normas-iso-25000\/iso-5259\" target=\"_self\">ISO\/IEC 5259<\/a>).<\/p>\n\n\n\n<p class=\"translation-block\">In the \u2018Social Pyramid of AI\u2019, its <a href=\"https:\/\/doi.org\/10.1613\/jair.1.12814\" data-type=\"link\" data-id=\"https:\/\/doi.org\/10.1613\/jair.1.12814\" target=\"_self\">authors<\/a> identify four components of AI social responsibility, starting with the basic notion that the functional suitability of AI underpins everything else. The next component, the legal one, requires AI systems to act in a manner consistent with government and legal expectations, meeting at least minimum legal requirements. At the next level, ethical responsibilities entail the obligation to do what is right, fair and equitable, and to avoid or mitigate negative impacts on stakeholders. 
Finally, in terms of philanthropic responsibilities, AI is expected to be a \u2018good citizen,\u2019 contributing to addressing social challenges such as cancer and climate change.<\/p>\n\n\n\n<figure class=\"wp-block-image aligncenter size-full is-resized\"><img loading=\"lazy\" decoding=\"async\" width=\"636\" height=\"565\" src=\"https:\/\/i2sc.es\/wp-content\/uploads\/2025\/11\/social_pyramid_AI.png\" alt=\"The Social Pyramid of AI\" class=\"wp-image-794\" style=\"width:415px;height:auto\" srcset=\"https:\/\/i2sc.es\/wp-content\/uploads\/2025\/11\/social_pyramid_AI.png 2276w, https:\/\/i2sc.es\/wp-content\/uploads\/2025\/11\/social_pyramid_AI-300x259.png 300w, https:\/\/i2sc.es\/wp-content\/uploads\/2025\/11\/social_pyramid_AI-1024x884.png 1024w, https:\/\/i2sc.es\/wp-content\/uploads\/2025\/11\/social_pyramid_AI-768x663.png 768w, https:\/\/i2sc.es\/wp-content\/uploads\/2025\/11\/social_pyramid_AI-1536x1325.png 1536w, https:\/\/i2sc.es\/wp-content\/uploads\/2025\/11\/social_pyramid_AI-2048x1767.png 2048w, https:\/\/i2sc.es\/wp-content\/uploads\/2025\/11\/social_pyramid_AI-14x12.png 14w\" sizes=\"auto, (max-width: 636px) 100vw, 636px\" \/><\/figure>\n\n\n\n<p>As we can see, the entire pyramid rests on the premise that intelligent systems must be functionally adequate and meet minimum quality requirements, since otherwise their compliance with legal and ethical requirements will be of little use.<\/p>\n\n\n\n<blockquote class=\"wp-block-quote is-layout-flow wp-block-quote-is-layout-flow\">\n<p class=\"translation-block\">We are convinced that with functional, high-quality AI, together with <a href=\"https:\/\/i2sc.es\/wp-content\/uploads\/2025\/06\/articulo_SIC_IA_mp_cmf.pdf\" target=\"_self\">appropriate management and governance systems<\/a>, intelligent systems will be able to fulfil their philanthropic function, which is ultimately what matters most: that AI truly serves to improve the quality of life of people and their 
organisations.<\/p>\n<\/blockquote>","protected":false},"excerpt":{"rendered":"<p>Although every new information technology presents risks, AI is perhaps one of the technologies that most amplifies the magnitude of risks, both negative (due to its potential impact at the organisational and social level) and positive (due to the great opportunities it offers, for example, for decision support). With regard to the traditional risks of [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":792,"comment_status":"closed","ping_status":"","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[7],"tags":[],"class_list":["post-791","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-blog"],"_links":{"self":[{"href":"https:\/\/i2sc.es\/en\/wp-json\/wp\/v2\/posts\/791","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/i2sc.es\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/i2sc.es\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/i2sc.es\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/i2sc.es\/en\/wp-json\/wp\/v2\/comments?post=791"}],"version-history":[{"count":3,"href":"https:\/\/i2sc.es\/en\/wp-json\/wp\/v2\/posts\/791\/revisions"}],"predecessor-version":[{"id":797,"href":"https:\/\/i2sc.es\/en\/wp-json\/wp\/v2\/posts\/791\/revisions\/797"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/i2sc.es\/en\/wp-json\/wp\/v2\/media\/792"}],"wp:attachment":[{"href":"https:\/\/i2sc.es\/en\/wp-json\/wp\/v2\/media?parent=791"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/i2sc.es\/en\/wp-json\/wp\/v2\/categories?post=791"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/i2sc.es\/en\/wp-json\/wp\/v2\/tags?post=791"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}