Hubert Taler

Artificial intelligence (AI) is taking over our world, from healthcare to finance. But, as Spider-Man's Aunt May said, with great power comes great responsibility. (Or was it Uncle Ben?) Ethical considerations in AI are key to ensuring the technology benefits society without causing harm.

Recent months have brought a lot of discussion about the ethics of using AI, not only on the side of those using a finished solution, but also at the stage of preparing one, for example when training a language model. Can we use images that are publicly available? Can we train our model on publicly available texts?

By Gage Skidmore, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=27440650

Finally, can we base the voice of our application on the voice of a well-known actress, even if she does not agree to it? Such a story, after all, plausibly happened to Scarlett Johansson.

Over the years, key ethical issues surrounding AI have included:

  1. Bias and fairness: AI systems can inadvertently perpetuate biases present in training data, leading to unfair outcomes. Here one can see a similarity to how the human brain works: it, too, absorbs the biases present in whatever it learns from.

  2. Privacy: training AI often requires huge amounts of data, raising concerns about how this data is collected, stored and used.

  3. Transparency: understanding how AI makes decisions is key to accountability. This is even the subject of a separate field: the interpretability and explainability of AI. Without such work, these complex systems remain largely opaque to us.

  4. Autonomy: Balancing human control with AI autonomy to ensure ethical outcomes. Here we need to draw a line between human decision-making based on data provided by AI and autonomous decision-making by AI without human intervention or control.

To ethically utilise the potential of AI, we need to prioritise transparency, fairness and accountability. Collaboration between technologists, ethicists and decision-makers will be key to navigating this new field responsibly.

When AI crosses the boundaries of ethics

Artificial intelligence has great potential, but it is not free of pitfalls. Here are some well-known cases where AI has crossed ethical boundaries:

In 2020, a major technology company faced criticism for facial recognition software used by law enforcement. Concerns about racial bias and privacy violations led to public outrage and the eventual suspension of the software.

Another example: a global corporation used an AI tool for recruitment that was later found to favour men due to biased training data. This highlighted the risk of AI perpetuating existing social biases.

Another example came from China, where some cities implemented AI-based predictive policing systems that disproportionately targeted minority communities. The lack of transparency and accountability raised serious ethical questions.

The findings point to the need for continuous monitoring and updating of AI systems to reduce bias, as well as the need for clear communication about how AI makes decisions. Stronger policies governing the ethical use of AI are also important. These cases highlight the need for vigilance, transparency and ethical guidelines in the development and implementation of AI.


A DALL-E-generated metaphor of "Ethical AI"

The role of regulation in AI ethics

As AI technologies advance, the need for a robust regulatory framework becomes more critical to ensure ethical practices. Here is how regulation plays a key role:

Privacy

Regulations such as the GDPR (known in Poland as RODO) set strict data protection standards in Europe, ensuring that AI systems process personal data responsibly. This is also an important element of the recently adopted AI Act.

Ensuring fairness

The law may require regular audits of AI algorithms to detect and reduce bias, promoting fairness and equality in applications. This applies in particular to algorithms that classify statements or, for example, job applicants and people applying for financial services.
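
As a loose illustration of the kind of check such an audit might run, here is a minimal Python sketch of one common fairness measure, demographic parity, which simply compares the rate of positive decisions across two groups. The decisions and group labels below are made-up illustration data, not results from any real system.

import numpy as np

# Demographic parity sketch: compare the rate of positive decisions per group.
# The data is invented purely for illustration.
decisions = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])   # 1 = application accepted
group     = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

rate_a = decisions[group == "A"].mean()
rate_b = decisions[group == "B"].mean()
print(f"acceptance rate A: {rate_a:.2f}, B: {rate_b:.2f}, gap: {abs(rate_a - rate_b):.2f}")

A large gap between the two rates would be one signal, among many, that the classifier deserves a closer look.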

Accountability and transparency

Regulatory frameworks can require companies to disclose how their AI systems make decisions, which promotes transparency and accountability. The 'human in the loop' principle is also important here: the final decision should be made by a human, based on a recommendation from an algorithm. If information generated by a language model goes to a customer or user, it should be labelled as such; we should always know when we are talking to AI.

Preventing harm

By establishing clear ethical guidelines, regulation can help prevent the misuse of AI in areas such as surveillance, predictive policing and autonomous weapons.

What's in the future?

The future of AI regulation requires the development of international agreements to ensure consistent ethical practices around the world. It is also crucial to continually update regulations to keep up with rapid technological developments. All stakeholders should be involved in this process, including technologists, ethicists, policy makers and the public. Effective regulation is essential to harness the benefits of AI while protecting against its risks, ensuring that technology serves humanity ethically and equitably.

Creating ethical AI is a complex but crucial task that faces several major challenges. AI systems learn from data that may contain social biases, leading to unfair outcomes. In addition, AI algorithms, especially deep learning models, can be opaque, making it difficult to understand the decision-making process. Implementing ethical guidelines consistently across all AI applications is difficult, especially in large organisations, and the rapid development of AI technologies often outpaces the creation of ethical standards and regulations.

Building ethical AI requires a coordinated effort to proactively address challenges and ensure that the technology benefits society as a whole in a fair and transparent manner. Key strategies include using diverse and representative datasets to train AI models, reducing the risk of bias. It is also important to develop AI systems that can explain their decision-making processes in an understandable way. Regular ethical audits, conducted by independent third parties, help to maintain compliance with ethical guidelines. Interdisciplinary collaboration, involving ethicists, sociologists and other experts, ensures that different perspectives are included in the AI development process. In addition, continuous learning and adaptation of AI systems to the latest ethical guidelines and best practices is essential to maintain high ethical standards.

How do we shape ethical AI?

Looking to the future, the development of AI will be of increasing ethical importance. We can expect to see more comprehensive and globally recognised ethical guidelines to help standardise practices across industries. Governments and international organisations can establish dedicated bodies to oversee AI ethics, ensuring compliance and resolving ethical violations. In addition, universities and training programmes are likely to integrate ethics deeply into AI curricula, preparing future developers to prioritise ethical issues.

Technological advances will play a key role in shaping the future of AI ethics. New technologies, such as explainable AI and bias detection tools, will help developers create more transparent and fair AI systems. Increased awareness and public engagement with AI ethics will drive demand for ethical AI, influencing companies to adopt responsible practices. It will also be important that ethical frameworks are proactively updated to keep up with the rapid progress of AI, which requires collaboration between technologists, ethicists, policymakers and society.

In the future, we can also expect that the development of new technologies will inherently support ethical practices. Collaborative efforts in proactive adaptation and ethical innovation will ensure that AI serves humanity in a positive and equitable way. The dynamic landscape of the future of AI and ethics requires constant vigilance and collaboration to ensure that these technologies benefit society as a whole.

Hubert Taler

To introduce the topic of deep learning, it is impossible not to touch on artificial neural networks, on which the whole concept is based.

Neural networks are artificial information processing systems (nowadays entirely software-based: there are no dedicated electronic or physical parts) that simulate, to a certain extent, the operation of the human brain. To a certain extent, that is: they are not a model of the brain's actual structures; they merely borrow a general operating principle.


Neural networks are not a novelty

Neural networks are not a novelty in computer science. As feedforward networks (without backpropagation), the earliest ideas appeared as far back as the 1940s, and the best-known of these early models was the perceptron. As defined by Wikipedia:

The operation of a perceptron is to classify the data appearing at the input and set the output values accordingly.

And this operating principle still applies today to all the neural networks we use. What were perceptrons able to do? For example, the perceptron created by Frank Rosenblatt and Charles Wightman was trained to recognise alphanumeric characters. This was 1957, mind you!
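
To make this operating principle concrete, here is a minimal perceptron sketch in Python. The logical OR task, the learning rate and the number of epochs are illustrative choices, not details of Rosenblatt's machine: the point is only that the perceptron classifies each input and nudges its weights whenever it gets the answer wrong.

import numpy as np

# A minimal perceptron: classify the input vector and set the output to 0 or 1,
# adjusting the weights whenever the prediction misses the target.
def train_perceptron(X, y, epochs=20, lr=0.1):
    w = np.zeros(X.shape[1])   # weights, one per input
    b = 0.0                    # bias term
    for _ in range(epochs):
        for xi, target in zip(X, y):
            output = 1 if np.dot(w, xi) + b > 0 else 0
            error = target - output
            w += lr * error * xi
            b += lr * error
    return w, b

# Toy example: learn the logical OR function from four labelled examples.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 1, 1, 1])
w, b = train_perceptron(X, y)
print([1 if np.dot(w, xi) + b > 0 else 0 for xi in X])   # expected: [0, 1, 1, 1]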


A neural network with hidden layers (by John Salatas, https://jsalatas.ictpro.gr/implementation-of-elman-recurrent-neural-network-in-weka/, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=56969207)

Since then, neural network structures have become highly complex. Feedforward networks have been followed by recurrent networks, in which the connections between neurons form a graph with cycles, and by other types such as self-organising networks.

Why do we need these networks?

So why, exactly, has mankind been creating these artificial neural networks for over 80 years? Because of their unique feature: they allow us to solve practical problems without having to formalise them mathematically.

What does this mean in practice? We can, for example, find the relationship between administered medication and treatment results, recognise what emotion a photograph of a face expresses, or suggest the most favourable combination of assets in our portfolio, all without writing complex equations describing reality. This matters all the more because we are often unable to write such equations at all; thanks to neural networks, they 'write' themselves, reflected in the relationships between the network's inputs and outputs.

How is this possible? Well, there are no miracles. We simply have to provide the network with thousands, or tens of thousands, of examples of observational data (to return to one of the examples above: combinations of assets in a portfolio together with the resulting financial outcome), so that after such 'learning' it can solve the next case on its own.

Yes, just like a human, a neural network learns by example.

And, in fact, we have also explained the title concept of deep learning in passing. It is a method of training a multilayer neural network in which we do not have to specify the learned parameters ourselves. Methods of this kind make it possible to detect complex patterns in data.
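
As a rough sketch of what this 'learning by example' looks like in code, the fragment below trains a tiny two-layer network in Python. The XOR task, the eight hidden units and the learning rate are arbitrary illustrative choices; the essence is simply that the weights are repeatedly adjusted so the network's answers move closer to the examples it is shown.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Four labelled examples (the XOR problem) stand in for 'thousands of observations'.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer

for _ in range(10_000):                           # show the examples over and over
    h = sigmoid(X @ W1 + b1)                      # hidden activations
    out = sigmoid(h @ W2 + b2)                    # the network's current answers
    # Backpropagation: push the prediction error back through both layers.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(3))   # after training, the outputs should approach [0, 1, 1, 0]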


The first perceptron: a network that could be physically inspected

Check! Did deep learning work?

The way in which such a trained network is tested is interesting. The engineer or scientist training the network sets aside a sample of the original data (e.g. 10%), which is not used for training. Once the network is trained, they check its responses on this held-out sample: the outputs should match the real data as closely as possible. If the network works on the real data we already have, chances are it will also work on newly collected data.
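
In practice this hold-out check is only a few lines of code. The sketch below uses scikit-learn and a synthetic dataset as stand-ins for a real model and real observations; only the 10% split and the idea of scoring on data the model has never seen come from the procedure described above.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Keep 10% of the data aside, train on the remaining 90%, then score on the unseen 10%.
X, y = make_classification(n_samples=5_000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=0)

model = LogisticRegression(max_iter=1_000)
model.fit(X_train, y_train)                      # the model only ever sees the 90%
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))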

This also means, of course, that any irregularities in data collection (e.g. an unconscious narrowing of the data, so-called data bias) will also make our trained model inadequate for analysing real data. Systems of this kind are likewise susceptible to deliberate attacks, i.e. intentional 'poisoning' of the input data.

What to use it for?

Systems based on deep learning are commonly used in speech recognition, image recognition, image classification (e.g. for medical purposes), and the reconstruction of images or other media that are damaged or incomplete. There is also a category of applications of such systems for cyber-security purposes (e.g. detecting attacks or unusual behaviour).

Deep learning is here to stay, and it is inextricably linked to building neural networks. We use it on a daily basis, whether calling an automated call centre or allowing a smartphone to detect landscapes or selfies among the photographs we have taken.

Hubert Taler

Cloud computing is a concept we tend to associate with something modern, developed in the recent history of technology. However, its roots go back much earlier. In this article, we trace the evolution of this revolutionary concept from the early days of computing to modern times.

The prehistory of the cloud

The origins of cloud computing can be traced back to 1963, when DARPA (Defense Advanced Research Projects Agency) invested in the MAC (Multiple Access Computer) project, one of the first efforts to enable the sharing of CPU time between multiple users. In the 1960s, the concept of time-sharing gained popularity, primarily through Remote Job Entry (RJE) technology, mainly used by companies such as IBM and DEC.

The dominant model at the time was the ‘data centre’ model, in which users submitted jobs to be run by operators on IBM mainframes. In the 1970s, full time-sharing solutions began to appear, such as Multics on GE hardware, Cambridge CTSS, and early UNIX ports on DEC hardware. In practice, this meant that instead of relying on operators, researchers who needed to use a computer could do so themselves.

Another breakthrough came with the development of the Internet in the late 1980s and early 1990s. In the 1990s, telecoms companies began to offer virtual private network (VPN) services, which provided a quality of service comparable to dedicated connections, but at lower prices. Cloud computing began to symbolise a new era in data management. We now associate VPN with a change of virtual geographical address, but it means accessing resources elsewhere as if they were on our local network.

All of this went by various names (e.g. remote access), but without using the metaphor of cloud computing or cloud storage.



The concept of the cloud is emerging

In 1994, General Magic used the cloud metaphor to describe the Telescript environment, in which mobile agents could travel around the network in search of access to various sources of information. It was then that cloud computing began to be seen not just as remote access to services, but as a platform for creating complex virtual services. For example, a place where applications run independently of the user's attention.

In 2002, Amazon established a subsidiary called Amazon Web Services, enabling developers to build applications independently of traditional IT infrastructures. In 2006, the company introduced further services: Simple Storage Service (S3) and Elastic Compute Cloud (EC2), which were among the first services to use server virtualisation on a pay-per-use basis. The term Infrastructure as a Service (IaaS) was born.

2007 brought further breakthroughs: Netflix launched its online movie streaming service, one of the first streaming services based on the Software as a Service model, and IBM and Google collaborated with universities to create server farms for research purposes.

The 2010s began with Microsoft introducing the Microsoft Azure platform. Shortly thereafter, Rackspace Hosting and NASA initiated the OpenStack project, aimed at making it easier for organisations to offer cloud computing services on standard hardware, as opposed to network equipment dedicated to server farms.

In the following years, cloud development accelerated. IBM introduced the IBM SmartCloud framework and the US government created the FedRAMP programme, setting security standards for cloud services. In 2011, Apple launched iCloud and in 2012 Oracle announced Oracle Cloud.



Without the cloud, would there be no AI?

Over the past few years, especially after the 2020 pandemic, cloud computing has grown in popularity as a tool to provide remote working flexibility and data security. Currently, global spending on cloud services stands at $706 billion and is expected to reach $1.3 trillion by 2025.

Modern advances in artificial intelligence, such as ChatGPT, would be impossible without the infrastructure provided by cloud computing. The cloud offers the immense computing power and resources necessary to process and analyse the large data sets that are central to machine learning and AI. With cloud computing, AI algorithms can be trained on complex models using massive amounts of data much faster and more efficiently than ever before. What's more, the cloud enables easy scalability and availability of these resources, which is critical for ongoing exploration and innovation in AI. By providing flexible and powerful on-demand computing resources, cloud computing not only supports the development of new capabilities in AI, but also enables faster deployment and integration of smart applications into everyday life and business. Thus, the cloud has become the foundation on which modern AI is built, transforming theoretical concepts into practical applications that change the way we work, learn and communicate.

The story of cloud computing shows how far technology can take us, offering ever new opportunities for businesses and individual users alike. It is a story about the ongoing quest for more efficient and flexible use of the computing resources that define the modern world of technology.
