Morality, Ethics, and AI – Part 2: Moral Agency

In the first part of this series, we explored foundational moral philosophies – the theories and principles that help us distinguish right from wrong. Now we turn to the concept of moral agency, a key building block in ethics. Moral agency is about the capacity to make moral decisions and be held responsible for actions. This post will explain what moral agency means, define what a moral agent is, and classify different kinds of moral agents. Finally, we will apply these ideas to the realm of Artificial Intelligence (AI), examining who the moral agents are in creating, deploying, and governing AI systems, and who bears responsibility for AI’s outcomes.

Understanding Moral Agency

Moral agency generally refers to an individual’s ability to make ethical choices and to be accountable for those choices. In simple terms, to have moral agency means to understand the difference between right and wrong and to have the capacity to act on that understanding. There are several key components that together define moral agency:

  • Capacity to Act Freely: A moral agent must have the ability to act intentionally and autonomously. In other words, the person (or entity) can choose their actions freely, without being forced or completely controlled by external forces. This implies a degree of free will or control over one’s own behavior. If someone has no control over an action – for example, if they are physically forced or under extreme duress – we usually don’t hold them fully morally responsible for that act. Capacity to act also entails rationality: moral agents typically can understand the consequences of their actions and deliberate about different options.
  • Moral Knowledge (Knowing Right from Wrong): Moral agency also requires a certain level of moral awareness or understanding. An agent should be able to discern right from wrong, or at least understand that certain actions can harm others and should be avoided. This doesn’t mean they must be moral philosophers, but they need a basic grasp of moral rules or principles in their society. For instance, a very young child or an animal might not comprehend the concept of “right vs. wrong” – which is why we don’t consider them full moral agents (even though we may still care about their well-being). Having moral knowledge means the agent can recognize when a situation has a moral dimension and can consider moral principles when deciding how to act.
  • Responsibility for One’s Actions: Being a moral agent means that the person (or entity) is the author of their actions in a moral sense – they can be credited or blamed for what they do. If you have the capacity to choose and you know the basics of right and wrong, then you are responsible for the choices you make. Moral responsibility implies that our actions are not just accidents happening to us; rather, we intentionally carry them out and can be judged for them. For example, if an adult knowingly commits fraud, we consider them morally (and legally) responsible for that decision. On the other hand, if someone sleepwalks and breaks something, we don’t usually call that a moral action because they lacked conscious control – thus we don’t hold them responsible in the normal way.
  • Accountability: This goes hand-in-hand with responsibility. To be a moral agent is to be accountable to others and society for your actions. Accountability means that if you do something wrong (or right), others have a justified expectation to call you out, praise you, or punish you as appropriate. An accountable agent can be asked to explain their actions and may face consequences (like moral blame, social sanctions, or legal penalties) if their actions violate ethical standards. In a community, treating someone as a moral agent means we expect them to answer for their behavior. For instance, laws and moral rules assume adults are accountable for what they do. If a person causes unjustified harm, we don’t excuse it – we hold them accountable. (In fact, moral agents are often said to have a duty not to cause unjustified harm to others.)

All these components – free action, moral understanding, responsibility, and accountability – together define moral agency. An entity with moral agency can make moral choices and be rightly held answerable for them. Without these abilities, an entity would not qualify as a true moral agent in the full sense.

What Is a Moral Agent?

A moral agent is an entity that possesses moral agency. Typically, when we talk about moral agents, we are talking about people – beings who can act deliberately, understand moral rules, and be held responsible for their actions. A classic definition is that a moral agent is “a being capable of acting with reference to right and wrong,” meaning the agent’s decisions can be guided by moral considerations. If you are a moral agent, your actions can be evaluated ethically, and you can be praised or blamed accordingly.

Human adults with a sound mind are the paradigm example of moral agents. Such a person can reflect on what they ought to do, make a choice, and understand that others will consider that choice good or bad. For example, if Omar sees someone drop their wallet and he considers whether to return it or keep it, he is thinking as a moral agent. If he decides to return it, we praise his honesty; if he keeps it, we criticize him because he knew it was wrong. Omar had the capacity to choose and knew that taking someone else’s property is wrong, so he’s fully responsible for his action.

It’s useful to contrast moral agents with what some philosophers call moral patients. A moral patient is an entity that doesn’t act morally itself but is still worthy of moral consideration from those who are moral agents. In other words, moral patients are those to whom moral agents have responsibilities. Classic examples of moral patients are infants or animals: they cannot formulate moral intent or be held accountable for their actions (a baby doesn’t understand right and wrong, an animal follows instinct), but moral agents (like adults) still owe them ethical treatment and care. All moral agents are usually also moral patients (for instance, adults deserve moral consideration too), but not all moral patients are moral agents.

In summary, a moral agent has the ability to do right or wrong and can be judged for it. If an entity cannot understand or control its actions in a meaningful way, then it isn’t a moral agent – it might still be morally important (as a moral patient) but it doesn’t bear responsibility.

Types of Moral Agents

Not all moral agents are exactly the same. We can classify moral agents into different types or categories, especially when we consider edge cases and non-human entities. Below are some key types of moral agents I think about and how they differ:

Fully Capable Human Agents

These are adult human individuals who have full mental capacity and no extraordinary constraints on their freedom. They represent the standard case of moral agency. Such individuals can reason about their actions, understand moral norms, and are not under coercion or impairment that would significantly undermine their judgment. Society and law typically assume that people in this category are responsible for what they do. For example, if a mentally sound adult commits a crime or a moral transgression, we naturally hold them accountable. They had the ability to choose differently and the knowledge that what they did was wrong.

A fully capable human moral agent has the highest degree of responsibility. They are expected to uphold moral duties (like not harming others, telling the truth, keeping promises) unless there’s some extenuating circumstance. Most ethical theories (from Aristotle to Kant to Mill, discussed in the first part of the series) are largely talking about this kind of agent – a person who can deliberate and act freely based on moral reasons.

Limited or Diminished Moral Agents (Partial Agents)

Some humans do not have the full capacity for moral agency, usually due to age, cognitive development, or certain impairments. Children, for instance, are not expected to be fully moral agents. A five-year-old cannot be held to the same standard of responsibility as an adult. Young children are still learning right from wrong and may not fully grasp the consequences of their actions. This is why, both morally and legally, we treat children differently. If a toddler breaks something, we don’t consider it a moral failing; we might correct them gently, but we recognize they don’t yet have the mature agency to be morally accountable in the same way an adult is. (In many legal systems, there is a certain age – for example, the age of criminal responsibility – below which children are not held legally accountable for crimes because of their limited agency.)

Similarly, adults with certain mental disabilities or disorders may have diminished moral agency. If an individual has a severe cognitive impairment or a mental illness that prevents them from understanding their actions, we may judge that they are not fully responsible for those actions. For example, someone who cannot distinguish reality from delusion might not be accountable for something they do while in a psychotic state. This is reflected in legal concepts like the insanity defense, but it also holds in everyday moral judgment – we recognize that capacity and understanding matter.

Another case of diminished agency is when someone is under extreme coercion or duress. Imagine an adult is forced at gunpoint to drive a getaway car for bank robbers. Even though normally that adult is a capable moral agent, in this situation their freedom to act is stripped away by force. We would likely judge that the person isn’t morally blameworthy for the coerced action because they had no real choice – they temporarily lost the ability to act freely. In effect, their moral agency was overridden by the coercion. This shows that even a typical moral agent can become a “non-agent” in certain extreme contexts, since moral agency requires voluntariness.

In summary, limited moral agents either lack full cognitive capacity or full freedom. We still consider them human and often treat them with care and moral concern, but we adjust our expectations of responsibility. Children, for example, are guided and educated rather than blamed harshly; individuals with impairments are cared for or treated, and if they cause harm unintentionally, we respond differently than we would to a fully aware adult. These cases show that moral agency can come in degrees – it’s not always an on/off switch.

Collective or Institutional Agents

Can groups or organizations be moral agents? This is a question that ethicists and legal scholars have debated. We know individual people are moral agents, but what about a corporation, a government, or a team of people acting together? After all, groups make decisions and take actions that affect others. We often speak as if organizations act – e.g., “the company decided to dump toxic waste” or “the government chose to pass a law.” But is the organization itself a moral agent, or is it just a shorthand for the individuals within it?

Some philosophers argue that groups can be considered moral agents in their own right. They suggest that a collective can have intentions or policies that are more than just the sum of what each member wants. For example, a corporation like a car manufacturer might develop a corporate culture and decision-making process that leads to unethical actions (say, cheating on emissions tests). Even if no single engineer wanted to cause harm, the organization as a whole might be said to have “acted” unethically through its established procedures and goals. In that sense, people might hold the corporation accountable in addition to (or even rather than) the individuals. In fact, modern law does treat corporations as “legal persons” to some extent – companies can be fined for wrongdoing, which is a way of saying the collective entity is being held responsible.

On the other hand, not everyone agrees that organizations are full moral agents. Some argue that only individuals have moral agency, and groups are just collections of individuals. According to this view, saying “the corporation did wrong” is a figurative way of speaking; in reality, it’s certain people within the corporation who made choices, and those individuals are the true moral agents. Critics of corporate moral agency might say that attributing intentions or guilt to a non-human entity is problematic – a corporation doesn’t have a mind or feelings like a person does (although it might have procedures that mimic decision-making).

The truth may be a bit of both: we often do hold groups accountable (we expect, for instance, a company to take responsibility for a harmful product, not just scapegoat one low-level employee). There’s even the concept of group agency in philosophy, meaning a group can function as an agent that has beliefs and goals and makes decisions. Ultimately, whether collectives are genuine moral agents or just metaphorical ones, we do treat them as bearing moral responsibility in many practical contexts. For example, when a car company was caught cheating on emissions tests, many people said the company itself had acted unethically, implying that the company as a whole was a culpable agent, not only the individual engineers involved. In sum, institutions can and should be held morally accountable, even if we recognize that any group is made up of individual human agents who actually carry out the actions.

Artificial Agents (AI and Machines)

A particularly challenging category is that of artificial agents – meaning robots or AI systems. Can a machine be a moral agent? This question is no longer just science fiction; as AI systems become more advanced and autonomous, people are increasingly wondering whether these systems might ever have responsibilities or “minds” of their own.

As of today, the general consensus is that AI and other artificial systems are not moral agents in the full sense. They do not possess consciousness or a genuine understanding of right and wrong comparable to humans. An AI might simulate conversation about ethics or make “decisions” based on programmed criteria, but it doesn’t truly comprehend moral values. Moreover, you can’t yet hold an AI accountable in a meaningful way – you can’t punish a software program by appealing to its conscience or by putting it in jail. Instead, we hold the humans behind the AI responsible for what the AI does.

For instance, if a self-driving car causes an accident due to a design flaw in its decision-making algorithm, we don’t say the car itself is morally guilty of negligence. Rather, we ask: did the engineers program it responsibly? Did the company properly test it? Perhaps the safety driver or the owner has some responsibility too. The machine is the immediate actor, but not a moral agent we can blame or praise – it’s not acting out of its own free will or moral awareness.

Many ethicists reinforce this view by describing AI and robots as moral tools or moral instruments rather than independent moral agents. One prominent computer ethics scholar, for example, argued that no artifact (no man-made system) can ever be a moral agent; they are always components in human moral actions. In this view, an AI can certainly be involved in morally significant situations – it can be part of what causes harm or good – but it isn’t the originator of moral decisions. The responsibility stays with the humans who created or deployed it.

However, there is an ongoing debate. A few researchers and futurists speculate that if AI becomes advanced enough – say, a future with true artificial general intelligence that might even have self-awareness – then perhaps such an AI could be considered a moral agent. Some philosophical arguments have been made that, under certain conditions, an “artificial agent” could have agency comparable to humans. For example, if a robot could learn, adapt, and make autonomous decisions based on ethical principles, one could argue it has a form of moral agency. There have even been proposals about giving advanced AI some form of legal personhood in the future (though these are controversial and not implemented anywhere currently).

In the present reality, AI systems lack the key components of moral agency: they have capacity to act (they can operate and make choices within programmed parameters), but they lack true moral understanding and they cannot be meaningfully held accountable (you can’t punish an AI or make it understand why its action was wrong). Because of that, we treat AI not as moral agents but as products or tools. The moral agents are the humans who design, deploy, and use these tools.

This leads us directly into our next section. Now that we know what moral agency is and who can be a moral agent, we need to ask: when we’re dealing with AI, who are the moral agents involved at each stage, and how do we assign responsibility for what AI does? The discussion below will delve into the roles of people and organizations in AI creation and usage, and how our human moral agency comes into play in the age of intelligent machines.

Moral Agency in the AI Domain

Artificial Intelligence raises special questions about morality and agency, because AI systems can perform actions or decisions that impact the world, yet (as noted) they aren’t moral agents themselves in the traditional sense. So who carries the moral agency in the context of AI? In this section, we examine several scenarios – building AI, deploying AI, using AI, and governing AI – and identify the moral agents and their responsibilities in each.

Moral Agents in AI Development (Creating AI Systems)

When it comes to producing AI models and artifacts, the primary moral agents are the people and organizations that design and build these systems. AI developers, which include software engineers, data scientists, researchers, and product designers, are key moral agents in the creation phase. They decide how an AI system will function: what data it will be trained on, what objective it will optimize, and what safeguards it will have. These choices have moral dimensions. For example, if developers of a facial recognition AI know that their training data is skewed mostly toward lighter-skinned faces, they have a moral responsibility to acknowledge this bias and ideally fix it. If they ignore it and the system ends up misidentifying people of color at a higher rate (leading to wrongful arrests or denials of service), the developers and the company that built that AI would be morally (and perhaps legally) responsible for the harm caused.
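To make this concrete, here is a minimal sketch (not any particular team’s actual process) of the kind of pre-release audit developers could run: compare a model’s error rate across demographic groups on a labeled evaluation set and flag large gaps. The function names, data layout, and threshold are hypothetical placeholders, not a real library API.

```python
# Hypothetical pre-release bias audit: measure a model's error rate per
# demographic group on a labeled evaluation set and flag large disparities.
from collections import defaultdict

def error_rate_by_group(records, predict):
    """records: iterable of (sample, true_label, group) tuples.
    predict: callable returning the model's prediction for a sample."""
    errors, totals = defaultdict(int), defaultdict(int)
    for sample, true_label, group in records:
        totals[group] += 1
        if predict(sample) != true_label:
            errors[group] += 1
    return {g: errors[g] / totals[g] for g in totals}

def flag_disparities(rates, max_gap=0.05):
    """Return groups whose error rate exceeds the best-performing group's by
    more than max_gap – a signal to investigate before release (the 0.05 gap
    is an illustrative choice, not a standard)."""
    best = min(rates.values())
    return {g: rate for g, rate in rates.items() if rate - best > max_gap}
```

Running such a check does not by itself discharge the developers’ responsibility, but it turns an otherwise invisible design choice into something explicit that the team (and later auditors) can be held accountable for.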

In developing AI, responsibility is not only on the engineers writing the code, but also on team leaders, project managers, and the companies funding and directing the work. A tech company that pushes out a new AI product hastily without adequate testing is exercising poor moral agency – it’s prioritizing speed or profit over potential risk to users and society. The company as a collective agent can be held accountable if its product causes harm due to negligence or foreseeable flaws. We’ve seen examples of this: think of an AI chatbot like Microsoft’s Tay, released without proper content filters, that ends up generating hate speech, or a social media algorithm tuned to maximize engagement that unintentionally spreads misinformation and contributes to real-world unrest. In each case, the problem can be traced back to decisions (or lack of precautions) by the human agents who created the system.

To put it plainly, the act of creating powerful technology comes with ethical obligations. Engineers often quote Uncle Ben’s line that “with great power comes great responsibility,” and it applies here. Those who create AI must consider the ethical implications of their design choices. They are moral agents who must anticipate how their systems could be misused or could fail. For instance, if you are developing a machine-learning model for medical diagnoses, you have a responsibility to ensure it’s as accurate and unbiased as possible, because doctors and patients will rely on it. If you cut corners and the AI makes a deadly mistake that could have been prevented, the fault lies with the developers and the company that allowed such a flawed system into the world.

The wider community of AI researchers also plays a role – for example, by establishing ethical guidelines and best practices (like fairness, transparency, and privacy standards). But ultimately, when we pinpoint “who are the moral agents” in AI development, it’s the humans: the designers, programmers, and perhaps the corporate entities, all of whom have the capacity to make choices during the creation of AI. They should act responsibly and can be held accountable if they don’t.

Moral Agents in AI Deployment and Use

Next comes the deployment of AI systems and their use in real-life situations. This often involves a new set of human moral agents: those who decide to implement the AI in a particular context, and those who actually operate or interact with the AI.

Consider a hospital that decides to start using an AI system to help diagnose patients. The hospital’s administration (and the clinical leaders) are making a deployment decision – they are moral agents in this scenario. They must evaluate whether the AI is safe and effective, decide how it will be used, and ensure that staff are trained to work with it. If the hospital deploys a diagnostic AI without proper validation and it ends up misdiagnosing patients, leading to harm, the hospital leadership and anyone who was negligent in this decision can be held responsible. Their agency comes into play in the choice to trust and use the AI.

Now think about the users of AI. In the hospital example, that would include doctors or nurses who use the AI’s output to make treatment decisions. These individuals are not off the moral hook just because an AI is involved. A doctor using an AI tool is still a trained professional with duties; if the AI suggests something that seems wrong, the doctor ought to question it. In other words, the end-users have a responsibility to use AI systems judiciously. If a doctor blindly follows an AI recommendation without understanding it and a patient is hurt, we would likely fault the doctor for abdication of responsibility. The doctor can’t just say “well, the computer said so” – society expects the human professional to remain the ultimate decision-maker. This is sometimes described as keeping a “human in the loop” to ensure oversight. But that human in the loop must truly exercise their agency, not be a passive rubber stamp.
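As a rough illustration of what “human in the loop” can mean in software terms, here is a minimal sketch, assuming a hypothetical model that returns a recommendation plus a confidence score. The design point is that the human reviewer issues the final decision and that decision is logged, so responsibility can be traced to a named person rather than to “the computer.”

```python
# Hypothetical human-in-the-loop gate: the AI suggests, the human decides,
# and the final call is recorded against a named reviewer.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Decision:
    ai_recommendation: str   # what the model suggested
    ai_confidence: float     # model's reported confidence (0.0 to 1.0)
    final_action: str        # what the human actually decided
    reviewer: str            # the accountable person
    timestamp: str           # when the decision was recorded

def decide(ai_recommendation, ai_confidence, reviewer, review_fn, threshold=0.9):
    """review_fn is the human step: it receives the suggestion and an 'escalated'
    flag and returns the final action. Low-confidence cases are explicitly
    escalated rather than silently accepted, but the human signs off either way."""
    escalated = ai_confidence < threshold
    final_action = review_fn(ai_recommendation, escalated)
    return Decision(ai_recommendation, ai_confidence, final_action, reviewer,
                    datetime.now(timezone.utc).isoformat())
```

The gate is trivial technically; the ethical weight sits in review_fn, which must be a genuine act of judgment rather than an automatic “accept.”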

Another real-world example is self-driving cars. If someone activates an autopilot or self-driving mode in their car, are they free of responsibility for what the car does? Most people would say no – the human driver is still a moral agent who should pay attention and intervene if needed. There was a famous case of a self-driving Uber test vehicle that tragically struck and killed a pedestrian in 2018. In that situation, there was a safety driver in the car whose job was to monitor and take over if the AI faltered. After the accident, investigations showed the safety driver was reportedly distracted at that crucial moment. The public discussion around this incident quickly turned to the human operator’s responsibility: should they have been more alert? While it’s true the AI made an error (it failed to correctly identify the pedestrian), the immediate accountability fell on the human who was supposed to supervise. This illustrates how, during AI deployment, an organization might try to share or shift responsibility: the company might say the human driver was the last failsafe, whereas others might argue the company deployed an AI that wasn’t safe enough. Either way, it’s clear that multiple human agents are involved – the engineers who built the faulty system, the managers who decided it was ready for road testing, and the safety driver tasked with oversight were all moral actors in that scenario.

We also see interesting questions of responsibility with AI in consumer services. For example, if a social media platform uses AI algorithms to curate your news feed and you as a user rely on that feed for information, do you have any responsibility for verifying the content? If someone simply believes whatever the AI-driven feed shows and then acts on false information, to what extent is that on the user versus the platform’s algorithms? This is a complex area, but one might say both have roles: the platform’s creators are responsible for algorithm design (to prevent promoting dangerous misinformation), but users also have a general responsibility to be critical thinkers.

In summary, when AI systems are deployed and used, the moral agents to look at include:

  • Organizations or people deploying the AI – e.g., a company integrating AI into their product, a public agency rolling out an AI system for decision-making, a manager approving use of AI in a workflow. They have the agency in deciding that an AI will be used and how it will be used.
  • Front-line users or operators – e.g., a professional using AI advice, a driver using a self-driving feature, or even an average person interacting with an AI assistant. These users have to exercise judgment in how they follow AI outputs.
  • Support and oversight personnel – sometimes there are humans whose job is specifically to oversee AI (like the safety driver in the autonomous car or a content moderator overseeing an AI filtering system). These roles carry moral responsibility to intervene when the AI might go wrong.

Crucially, AI deployment often distributes responsibility across many parties. That can make accountability tricky because when something goes wrong, people might point fingers at each other. This is sometimes referred to as the “problem of many hands,” where an outcome is the result of many interwoven decisions by different agents. It emphasizes that all involved agents need to uphold their part of ethical responsibility and not assume “someone else will take care of it” or “the AI will handle it.” In truth, the AI will not take moral care – only the humans can do that.

Governance and Responsibility for AI’s Consequences

Finally, let’s talk about the broader context: who governs AI and who bears responsibility for AI’s consequences at a societal level. Given that AI technology can have wide-ranging impacts (think about AI controlling self-driving cars, or algorithms that influence millions of people’s information diet), there is a need for oversight beyond just the immediate users and developers. Governance in AI refers to the frameworks of rules, regulations, and norms that guide how AI is developed and deployed.

Governments and regulators are key moral agents here. They have the power to create laws and standards to ensure AI is used safely and ethically. For example, authorities can require that AI systems in critical areas (like healthcare or transportation) meet certain safety standards or undergo ethical review. If regulators do nothing and adopt a hands-off approach, they are still making a moral choice – arguably a negligent one if it leads to harm. On the other hand, proactive governance (such as passing legislation that mandates transparency in AI or that bans certain high-risk AI applications like social scoring) is an exercise of moral agency aimed at protecting the public. A current example is the European Union’s proposed AI Act, a comprehensive regulation to govern AI. The draft law assigns responsibilities to different parties (such as AI providers and deployers) and seeks to ensure accountability. It essentially says: those who build and deploy AI must take responsibility for its outcomes, intended or not. By crafting such rules, lawmakers are acting as moral agents trying to anticipate consequences and assign accountability before problems occur.

Industry groups and standards bodies also play a role in AI governance. Sometimes companies come together to create ethical guidelines (for instance, the tech industry has published AI ethics principles about fairness, transparency, and accountability). Organizations like the IEEE or ISO may develop technical standards for AI to ensure it is safe. While these bodies don’t have legal authority, they exercise moral leadership and self-regulation. The people in these organizations are agents who decide what ethical practices to promote.

Another layer of governance is within companies themselves – for example, some big tech companies have internal AI ethics panels or review processes. If a company knowingly ignores ethical risks identified by their own experts, that is a moral failing of the company’s leadership. Conversely, if they take strong actions to fix or prevent AI-related harms, they are acting responsibly. We can see examples: some companies have pulled the plug on certain AI features after realizing they could be misused, which shows corporate accountability in action.

Now, when AI causes consequences (especially bad ones), who bears responsibility? As we’ve been discussing, the responsibility generally lies with the human agents involved, not the AI system. There is a phrase in AI ethics circles: “AI should not displace human responsibility.” This means that no matter how autonomous or advanced an AI is, we should never get into a mindset of “the AI is to blame.” Ultimately, blame (or praise) must track back to a person or group of people. If an autonomous drone in a military setting makes an erroneous targeting decision, one might be tempted to say it was the drone’s fault; but behind that drone are many layers of human decisions – the people who programmed its targeting algorithm, the commanders who deployed it, the policymakers who approved autonomous weapons in the first place. Those are the moral agents to hold accountable.

There have been proposals in some corners about giving AI “legal personhood” (essentially treating AI a bit like a corporation in terms of liability). However, as of now, no jurisdiction has gone that route, and many ethicists argue it’s a bad idea because it could allow the real human agents to escape liability by blaming the AI. Instead, current thinking and legal frameworks keep the responsibility with humans. For instance, if a self-driving car causes an accident, liability might fall on the manufacturer or software developer for a defect, or on the operator for misuse, or some combination – but not on the car itself as an independent entity.

One challenge is that AI systems can be so complex and involve so many stakeholders that our traditional ways of assigning responsibility become strained. We may need new approaches, such as clearer rules requiring any company deploying AI to have an accountability strategy (e.g., some jurisdictions are considering laws under which, if an AI causes harm, the provider must help investigate and compensate the victim, rather than leaving the victim to prove who exactly in the company is at fault). The bottom line from an ethical perspective is that we must not allow AI to create a moral accountability gap. Someone must answer for the actions or decisions of an AI system – whether it’s the developer who wrote the code, the company that provided the service, the person who decided to use it in a certain way, or the overseers who failed to regulate it.

To illustrate how governance and responsibility intersect, consider a concrete example from finance (a simple audit sketch follows the list below). Suppose a bank uses an AI algorithm to approve or deny loans, and the system turns out to systematically discriminate against a certain group of people (say, it rejects minority applicants more frequently due to biases in its training data). When this comes to light, who is responsible? Arguably:

  • The company (bank) is responsible because it chose to use the algorithm and perhaps didn’t audit it for fairness.
  • The developers of the AI model (maybe a third-party AI vendor) are responsible if they designed it poorly.
  • Regulators might also bear some responsibility if they failed to enforce anti-discrimination laws on automated systems.
  • In the end, we expect the bank to take accountability by fixing the issue and making amends to those harmed. The law might also step in to ensure this happens, showing how governance and moral agency intersect.
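The kind of audit hinted at above can be quite simple in code. Here is a minimal sketch, with made-up field names and data, of how a bank or a regulator might measure approval-rate disparities using the common “four-fifths” disparate-impact heuristic; the threshold is a rule of thumb, not a legal standard.

```python
# Hypothetical fairness check on loan decisions: compare approval rates across
# groups and compute the disparate-impact ratio (lowest rate / highest rate).
def approval_rates(decisions):
    """decisions: iterable of (group_label, approved) pairs, approved being a bool."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if ok else 0)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_ratio(rates):
    """Values well below ~0.8 (the 'four-fifths' rule of thumb) are commonly
    treated as a red flag that warrants investigation, not automatic proof of bias."""
    return min(rates.values()) / max(rates.values())

# Illustrative usage with made-up data:
rates = approval_rates([("group_a", True), ("group_a", True),
                        ("group_b", True), ("group_b", False)])
print(rates)                          # {'group_a': 1.0, 'group_b': 0.5}
print(disparate_impact_ratio(rates))  # 0.5 – below 0.8, so investigate
```

A failing ratio does not by itself settle who is responsible, but it gives the bank, the vendor, and the regulator a shared, checkable fact about which each can be held to account.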

To draw all this together, in AI we have multiple layers of moral agency:

  • Creators of AI (engineers, companies) – for building the systems with care and ethics.
  • Deployers of AI (organizations, managers) – for integrating AI responsibly into real-world use and ensuring proper oversight.
  • Users of AI (individuals, professionals) – for using AI outputs wisely and not relinquishing their own judgment.
  • Governors of AI (regulators, policymakers, industry leaders) – for setting the rules of the game and stepping in when broad responsibility needs to be enforced for the public good.

None of these can shrug and point to the AI as the agent. The AI has no moral agency with which to shoulder blame or praise. It is the people behind and around the AI who remain the moral agents.

Conclusion

Moral agency is a foundational concept in ethics that helps us understand who can be held morally responsible for actions. In this post, we defined moral agency as the capacity to make moral decisions and be accountable for them, outlining the importance of free action, moral understanding, responsibility, and accountability. We defined a moral agent as any being (usually a person) who has those capacities, and we looked at various types of moral agents – from ordinary adults to children (with limited agency), to groups like corporations, and finally the contested case of artificial agents.

Applying these ideas to AI, we found that while AI systems themselves are not moral agents, there are plenty of human moral agents involved at every stage of an AI system’s life cycle. The researchers and engineers who create AI have the agency and responsibility to embed ethical considerations into design. Those who deploy and use AI must do so carefully, keeping their hands on the steering wheel (sometimes literally) and not treating the AI as an infallible decision-maker. And at a higher level, our institutions and governments have the moral obligation to govern AI in a way that prevents harm and allocates responsibility clearly.

The takeaway is that humans cannot abdicate moral responsibility to machines. As AI continues to advance and play larger roles in our lives, we must remember that it is a reflection of human choices – both in how it’s built and how it’s applied. We, as moral agents, remain accountable for ensuring AI is used ethically and for addressing any negative consequences that arise. No matter how autonomous or “smart” our creations become, our moral agency is what will determine whether AI is ultimately a force for good or ill in the world… FOR NOW.

References

  • ethicsunwrapped.utexas.edu – Ethics Unwrapped (Univ. of Texas) glossary definition of a moral agent (able to discern right from wrong and be accountable) and note that children or adults with certain disabilities have little or no capacity for moral agency.
  • blog.practicalethics.ox.ac.uk – Philosophers’ view that above individual agents there can be group agency; some argue corporations are full moral agents capable of intentions, guilt, and apt for blame or praise as we treat individual persons.
  • medium.com – View of ethicist Deborah G. Johnson that an artifact (machine) can never be a moral agent, seeing such artificial agents instead as “components in human moral action,” with moral agency remaining with the humans involved.
  • lumenova.ai – Emphasis that developers and users of AI systems must be accountable for the actions of those systems – they should take responsibility for any harm caused by their AI and work to prevent future harm.
  • unesco.org – UNESCO’s global AI Ethics Recommendation (2021) stating that AI systems should not displace ultimate human responsibility and accountability – humans should remain in charge and answerable for AI.
  • azorobotics.com – Recognition in AI law discussions that while some suggest giving AI legal personhood, current frameworks do not recognize AI as a legal entity – instead, liability remains with the human actors in AI development, deployment, and use.
  • azorobotics.com – Example scenario from an autonomous vehicle accident: a self-driving car killed a pedestrian, raising the question of who is responsible (the company, the car’s manufacturer, or the AI itself), which exposed gaps in traditional liability assignment.
  • azorobotics.com – Another example from healthcare: an AI diagnostic system gave a wrong output leading to delayed treatment, illustrating how it can be unclear whether the developers or the users (doctors) bear liability for an AI’s mistake.
  • accesspartnership.com – The European Union’s draft AI Act takes a governance approach: in its current form it holds AI developers and manufacturers responsible for AI failures or unintended outcomes, even if the harm was not anticipated – placing accountability on those who build the AI.
  • cigionline.org – Explanation of the “moral crumple zone” concept: in complex autonomous systems, responsibility for failures often gets incorrectly heaped on a human operator with limited control, while the technology itself is treated as flawless – thus the human becomes a scapegoat, preserving the perception of the machine’s infallibility.
