By Hubert Taler

E for Ethics of AI usage | ABC of AI

Artificial intelligence (AI) is taking over our world, from healthcare to finance. But, as Spider-Man's Aunt May said, with great power comes great responsibility. (Or was it Uncle Ben?) Ethical considerations in AI are key to ensuring the technology benefits society without causing harm.

Recent months have brought a great deal of discussion about the ethics of AI, not only around using finished products but also around building them, for example when preparing a language model. Can we use images that are publicly available? Can we train our model on publicly available texts?

[Photo: Gage Skidmore, CC BY-SA 3.0]

Finally, can we base the voice of our application on the voice of a well-known actress, even if she does not agree to it? A story like this, after all, (plausibly) happened to Scarlett Johansson.

Over the years, key ethical issues surrounding AI have included:

  1. Bias and fairness: AI systems can inadvertently perpetuate biases present in training data, leading to unfair outcomes. A parallel with how the human brain absorbs biases is easy to see here.

  2. Privacy: training AI often requires huge amounts of data, raising concerns about how this data is collected, stored and used.

  3. Transparency: understanding how AI makes decisions is key to accountability. This is even the subject of a separate field: the interpretability and explainability of AI. Such complex systems are largely opaque to us.

  4. Autonomy: Balancing human control with AI autonomy to ensure ethical outcomes. Here we need to draw a line between human decision-making based on data provided by AI and autonomous decision-making by AI without human intervention or control.
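As a concrete illustration of the bias-and-fairness point above, one of the simplest audit metrics is the demographic parity difference: the gap between the rates of positive decisions across groups. The sketch below is a minimal illustration with invented numbers, not output from any real system.

```python
# A minimal sketch of a demographic-parity check.
# All group names and decision values below are hypothetical.

def selection_rate(decisions):
    """Fraction of positive (1) decisions in a group."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(decisions_a, decisions_b):
    """Absolute gap between two groups' selection rates.
    A value near 0 suggests similar treatment; larger gaps warrant review."""
    return abs(selection_rate(decisions_a) - selection_rate(decisions_b))

# Hypothetical hiring-model outputs (1 = shortlisted, 0 = rejected)
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # selection rate 0.625
group_b = [0, 1, 0, 0, 1, 0, 0, 0]  # selection rate 0.250

gap = demographic_parity_difference(group_a, group_b)
print(f"Demographic parity difference: {gap:.3f}")
```

A gap this large (0.375) would not prove discrimination on its own, but it is exactly the kind of signal that should trigger a closer look at the training data.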

To ethically utilise the potential of AI, we need to prioritise transparency, fairness and accountability. Collaboration between technologists, ethicists and decision-makers will be key to navigating this new field responsibly.

When AI crosses the boundaries of ethics

Artificial intelligence has great potential, but it is not free of pitfalls. Here are some well-known cases where AI has crossed ethical boundaries:

In 2020, a major technology company faced criticism for facial recognition software used by law enforcement. Concerns about racial bias and privacy violations led to public outrage and the eventual suspension of the software.

Another example: a global corporation used an AI tool for recruitment that was later found to favour men due to biased training data. This highlighted the risk of AI perpetuating existing social biases.

Another example came from China, where some cities implemented AI-based predictive policing systems that disproportionately targeted minority communities. The lack of transparency and accountability raised serious ethical questions.

The findings point to the need for continuous monitoring and updating of AI systems to reduce bias, as well as the need for clear communication about how AI makes decisions. Stronger policies governing the ethical use of AI are also important. These cases highlight the need for vigilance, transparency and ethical guidelines in the development and implementation of AI.

A DALL·E-generated metaphor of "Ethical AI"

The role of regulation in AI ethics

As AI technologies advance, the need for a robust regulatory framework becomes more critical to ensure ethical practices. Here is how regulation plays a key role:


Protecting privacy

Regulations such as the GDPR (known in Poland as RODO) set strict data-protection standards in Europe, ensuring that AI systems process personal data responsibly. Data protection is also an important element of the recently adopted AI Act.

Ensuring fairness

The law may require regular audits of AI algorithms to detect and reduce bias, promoting fairness and equality in applications. This is particularly important for algorithms that classify people, for example job applicants or those applying for financial services.
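One common test used in such audits is the "four-fifths rule", a rule of thumb from US employment practice (not mandated by the AI Act itself): if one group's selection rate falls below 80% of another's, the system is flagged for review. A minimal sketch, with hypothetical loan-approval counts:

```python
def disparate_impact_ratio(selected_a, total_a, selected_b, total_b):
    """Ratio of group A's selection rate to group B's.
    Values below 0.8 fail the four-fifths rule and warrant a closer audit."""
    return (selected_a / total_a) / (selected_b / total_b)

# Hypothetical counts: 12 of 60 approved in group A, 30 of 100 in group B
ratio = disparate_impact_ratio(12, 60, 30, 100)  # 0.20 vs 0.30

if ratio < 0.8:
    print(f"Ratio {ratio:.2f} is below 0.8 - flag the system for review")
```

An audit regime would run checks like this periodically, on fresh decision logs, rather than once at deployment.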

Accountability and transparency

Regulatory frameworks can require companies to disclose how their AI systems make decisions, which promotes transparency and accountability. The "human in the loop" principle is also important here: the final decision should be made by a human, based on a recommendation from an algorithm. And if information generated by a language model goes to a customer or user, it should be labelled as such; we should always know when we are talking to AI.

Preventing harm

By establishing clear ethical guidelines, regulation can help prevent the misuse of AI in areas such as surveillance, predictive policing and autonomous weapons.

What's in the future?

The future of AI regulation requires the development of international agreements to ensure consistent ethical practices around the world. It is also crucial to continually update regulations to keep up with rapid technological developments. All stakeholders should be involved in this process, including technologists, ethicists, policy makers and the public. Effective regulation is essential to harness the benefits of AI while protecting against its risks, ensuring that technology serves humanity ethically and equitably.

Creating ethical AI is a complex but crucial task that faces several major challenges. AI systems learn from data that may contain social biases, leading to unfair outcomes. In addition, AI algorithms, especially deep learning models, can be opaque, making it difficult to understand the decision-making process. Implementing ethical guidelines consistently across all AI applications is difficult, especially in large organisations, and the rapid development of AI technologies often outpaces the creation of ethical standards and regulations.

Building ethical AI requires a coordinated effort to proactively address challenges and ensure that the technology benefits society as a whole in a fair and transparent manner. Key strategies include using diverse and representative datasets to train AI models, reducing the risk of bias. It is also important to develop AI systems that can explain their decision-making processes in an understandable way. Regular ethical audits, conducted by independent third parties, help to maintain compliance with ethical guidelines. Interdisciplinary collaboration, involving ethicists, sociologists and other experts, ensures that different perspectives are included in the AI development process. In addition, continuous learning and adaptation of AI systems to the latest ethical guidelines and best practices is essential to maintain high ethical standards.

How do we shape ethical AI?

Looking to the future, the development of AI will be of increasing ethical importance. We can expect to see more comprehensive and globally recognised ethical guidelines to help standardise practices across industries. Governments and international organisations can establish dedicated bodies to oversee AI ethics, ensuring compliance and resolving ethical violations. In addition, universities and training programmes are likely to integrate ethics deeply into AI curricula, preparing future developers to prioritise ethical issues.

Technological advances will play a key role in shaping the future of AI ethics. New technologies, such as explainable AI and bias detection tools, will help developers create more transparent and fair AI systems. Increased awareness and public engagement with AI ethics will drive demand for ethical AI, influencing companies to adopt responsible practices. It will also be important that ethical frameworks are proactively updated to keep up with the rapid progress of AI, which requires collaboration between technologists, ethicists, policymakers and society.
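One of the simplest of the explainability and bias-detection tools mentioned above is permutation feature importance: shuffle one input feature and measure how much the model's accuracy drops. The bigger the drop, the more the model relies on that feature. The toy "model" and data below are invented purely for illustration.

```python
import random

def model(income, tenure):
    """Toy credit model: fixed weighted score over two normalised features."""
    return 1 if 0.7 * income + 0.3 * tenure > 0.5 else 0

# (income, tenure, true_label) - hypothetical rows, features in [0, 1]
data = [
    (0.9, 0.8, 1), (0.8, 0.2, 1), (0.2, 0.9, 0),
    (0.1, 0.1, 0), (0.7, 0.6, 1), (0.3, 0.2, 0),
]

def accuracy(rows):
    return sum(model(x, t) == y for x, t, y in rows) / len(rows)

def permutation_importance(rows, column, seed=0):
    """Accuracy drop when one feature column is shuffled across rows."""
    rng = random.Random(seed)
    values = [row[column] for row in rows]
    rng.shuffle(values)
    shuffled = [
        tuple(values[k] if i == column else v for i, v in enumerate(row))
        for k, row in enumerate(rows)
    ]
    return accuracy(rows) - accuracy(shuffled)

print("income importance:", permutation_importance(data, 0))
print("tenure importance:", permutation_importance(data, 1))
```

If a protected attribute (or a close proxy for one) shows high importance, that is a concrete, reportable signal for an ethics audit.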

In the future, we can also expect that the development of new technologies will inherently support ethical practices. Collaborative efforts in proactive adaptation and ethical innovation will ensure that AI serves humanity in a positive and equitable way. The dynamic landscape of the future of AI and ethics requires constant vigilance and collaboration to ensure that these technologies benefit society as a whole.
