“It ain't so much the things that people don't know that makes trouble in this world, as it is the things that people know that ain't so.”
―Mark Twain
The drive to create “trustworthy AI” is gaining momentum. Governments, industry, and civil society are all rushing to define ethical guidelines for AI development and deployment. The EU’s "Ethics Guidelines for Trustworthy AI" is a prominent example, and President Biden’s Executive Order establishes standards for AI safety and security, among other priorities.
But these well-intentioned efforts are facing a growing wave of criticism. Critics like Thomas Metzinger argue that "trustworthy AI" is a meaningless concept: only humans, not machines, can be trustworthy. This raises a crucial question:
If humans are ultimately in charge of AI, should companies that develop and deploy Artificial General Intelligence (AGI) systems be held liable when those systems cause harm?
This question cuts to the heart of AI governance. It forces us to grapple with fundamental issues of power, responsibility, and accountability in a world increasingly shaped by intelligent machines.
Rejecting Technological Determinism:
It is critical to frame the above question in a way that rejects technological determinism—the idea that technology shapes society in inevitable ways. We are not passive recipients of technology; we choose how to develop, deploy, and use it. This means that companies developing AGI are not simply unleashing forces beyond their control. They are making deliberate choices about the design, capabilities, and applications of these systems.
The question of whether AI companies should be held liable when their AGI systems cause harm directly relates to this rejection of technological determinism. If we accept that the creators and deployers of advanced AI are not just passive observers but active decision-makers shaping these systems, then it follows that they should be accountable for the consequences of their choices.
Just as a parent can be held liable for negligent supervision of a child, or a gun manufacturer can face liability for reckless marketing of its products, AI developers deploying intelligent systems into the real world may bear responsibility if those systems lead to foreseeable harm. Holding them liable would incentivize governance and safety prioritization from the outset, rather than treating harmful AI outcomes as unavoidable accidents outside of human control.
By framing AI liability in these terms, we place the locus of agency and power where it belongs: with the human developers who can steer the trajectory of AI's development and deployment. It then becomes a matter of choosing, in the present moment, how and when to wield that power responsibly, and of being held to account for failing to do so.
Establishing Impact Assessment Requirements for AI Systems
H.R.6580 - Algorithmic Accountability Act of 2022
One example of a legislative move to address this issue is the Algorithmic Accountability Act of 2022, which aims to regulate the development and deployment of artificial intelligence (AI) systems, specifically "automated decision systems" and "augmented critical decision processes," that make significant decisions affecting consumers' lives.
What it Does:
Directs the Federal Trade Commission (FTC) to establish regulations requiring companies that meet certain criteria to perform impact assessments of AI systems used for critical decisions such as employment, housing, finance, and healthcare.
Mandates that companies document their AI systems' development, test for bias and accuracy issues, consult external stakeholders, identify and mitigate harms, and provide transparency to consumers.
Requires companies to submit annual reports summarizing their impact assessments to the FTC.
Allows the FTC to enforce violations of the rules as unfair or deceptive practices.
Authorizes state attorneys general to sue on behalf of residents harmed by violations.
What it Covers:
Companies over certain revenue/data thresholds deploying AI systems for consequential consumer decisions.
AI systems making "critical decisions" with legal or significant effects in areas such as employment, housing, finance, and healthcare.
What it Doesn't Cover:
Does not establish strict product liability standards for AI companies.
Does not cover AI systems not used for delineated "critical decisions" impacting consumers.
Liability attaches to the failure to follow the impact assessment and transparency requirements, not directly to harm caused by the AI system itself.
In essence, H.R.6580, as proposed, establishes governance through mandated risk assessment, documentation, and transparency requirements for consequential AI systems, enforced via FTC rules and state lawsuits.
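To make these requirements concrete, here is a minimal Python sketch of how a covered company and its impact assessment might be modeled. It is an illustration of the summary above, not the statutory text: the revenue and data thresholds, field names, and decision categories are assumptions chosen for the example.

```python
from dataclasses import dataclass

# Illustrative "critical decision" areas named in the summary above.
CRITICAL_DECISION_AREAS = {"employment", "housing", "finance", "healthcare"}

# NOTE: placeholder thresholds for illustration, not the figures in the bill text.
REVENUE_THRESHOLD_USD = 50_000_000
CONSUMER_DATA_THRESHOLD = 1_000_000


@dataclass
class Company:
    name: str
    annual_revenue_usd: float
    consumers_with_data: int


@dataclass
class ImpactAssessment:
    system_name: str
    decision_area: str
    documented_development: bool          # development process documented
    tested_for_bias_and_accuracy: bool    # bias/accuracy testing performed
    consulted_external_stakeholders: bool
    harms_identified_and_mitigated: bool
    consumer_transparency_provided: bool

    def gaps(self) -> list[str]:
        """Return the assessment steps that have not been completed."""
        checks = {
            "documentation": self.documented_development,
            "bias/accuracy testing": self.tested_for_bias_and_accuracy,
            "stakeholder consultation": self.consulted_external_stakeholders,
            "harm mitigation": self.harms_identified_and_mitigated,
            "consumer transparency": self.consumer_transparency_provided,
        }
        return [step for step, done in checks.items() if not done]


def is_covered(company: Company, decision_area: str) -> bool:
    """Rough coverage check: a size threshold plus a critical decision area."""
    over_threshold = (company.annual_revenue_usd >= REVENUE_THRESHOLD_USD
                      or company.consumers_with_data >= CONSUMER_DATA_THRESHOLD)
    return over_threshold and decision_area in CRITICAL_DECISION_AREAS


if __name__ == "__main__":
    acme = Company("Acme Lending", annual_revenue_usd=2e8, consumers_with_data=3_000_000)
    assessment = ImpactAssessment(
        system_name="loan-approval-model",
        decision_area="finance",
        documented_development=True,
        tested_for_bias_and_accuracy=True,
        consulted_external_stakeholders=False,
        harms_identified_and_mitigated=True,
        consumer_transparency_provided=False,
    )
    if is_covered(acme, assessment.decision_area):
        print("Covered system; outstanding assessment gaps:", assessment.gaps())
```

The point of the sketch is that exposure under the bill turns on whether these process steps were completed, not on the downstream harm itself.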
Holding Companies Accountable:
The AI Now Institute’s 2023 Landscape: Confronting Tech Power report provides detail on algorithmic accountability, describing the external algorithmic audits that are commonly conducted today, mostly on a voluntary basis. AI Now also monitors AI accountability measures that may inadvertently consolidate power within the technology industry, undermining systemic solutions to algorithmic bias and fairness concerns.
If we recognize that human agency shapes AI, then holding companies accountable for the harms caused by their AGI systems becomes a matter of:
Incentivizing Responsible Development: Legal liability can act as a powerful incentive for companies to prioritize safety, mitigate risks, and align AGI with human values.
Ensuring Justice for Those Harmed: When AI systems cause harm, those affected deserve redress and compensation. Holding companies legally liable can provide a pathway for justice.
Promoting Transparency and Accountability: The threat of legal liability can encourage companies to be more transparent about their AGI development processes, risk assessments, and mitigation strategies. Here, it is important to emphasize Tristan Harris' concept of "clean thinking" to ensure unbiased evaluation in these governance processes.
Navigating the Complexities:
As the Algorithmic Accountability Act of 2022 demonstrates, determining liability for AGI harms will be complex. Legislation must be carefully crafted to decide where to draw the line on liability, but it is a necessary conversation, and we'll need to address challenging questions like the following (sketched schematically after the list):
What level of foreseeability of harm is required to establish liability?
How do we distinguish between harms caused by design flaws versus misuse?
To what extent should users or operators of AGI systems share responsibility?
What are the appropriate mechanisms for redress and compensation?
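As a thought experiment only, those questions can be read as inputs to a rough apportionment exercise. The Python sketch below is purely illustrative: the categories, weights, and outcomes are invented assumptions, not a legal standard, and a real liability analysis would not reduce to a formula.

```python
from dataclasses import dataclass
from enum import Enum


class Foreseeability(Enum):
    UNFORESEEABLE = 0
    FORESEEABLE_WITH_TESTING = 1
    KNOWN_AND_DOCUMENTED = 2


class HarmOrigin(Enum):
    DESIGN_FLAW = "design flaw"
    MISUSE_BY_OPERATOR = "operator misuse"
    MIXED = "mixed"


@dataclass
class HarmIncident:
    description: str
    foreseeability: Foreseeability
    origin: HarmOrigin


def apportion_responsibility(incident: HarmIncident) -> dict[str, float]:
    """Toy apportionment between developer and operator; the weights are invented."""
    if incident.origin is HarmOrigin.DESIGN_FLAW:
        developer_share = 0.9
    elif incident.origin is HarmOrigin.MISUSE_BY_OPERATOR:
        developer_share = 0.2
    else:
        developer_share = 0.5
    # Greater foreseeability shifts more responsibility toward the developer.
    developer_share = min(1.0, developer_share + 0.05 * incident.foreseeability.value)
    return {"developer": round(developer_share, 2),
            "operator": round(1.0 - developer_share, 2)}


if __name__ == "__main__":
    incident = HarmIncident(
        description="Automated screening tool rejects qualified applicants",
        foreseeability=Foreseeability.FORESEEABLE_WITH_TESTING,
        origin=HarmOrigin.MIXED,
    )
    print(apportion_responsibility(incident))
```

Even this toy model makes one design choice visible: the more foreseeable the harm and the closer it sits to a design flaw, the more responsibility shifts toward the developer rather than the operator or user.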
Community Feedback is Requested:
Appropriateness of Stance: Does the argument effectively address the question of whether companies developing AGI systems should be held liable for harm caused by those systems? Is the rejection of technological determinism adequately supported, and does the framing of liability resonate with you?
Legal Opinions: For those knowledgeable in legal matters, are the comparisons to liability in other domains (such as parental liability or product liability for firearms) appropriate and legally sound? Are there conversations occurring about broad liability in this way?
Case Law References: Are there relevant case law references or legal precedents that support or challenge the arguments and questions presented here?
Other Bills or Laws: Beyond the above example, what other notable legislation or regulatory proposals exist, either in the U.S. or internationally, that attempt to establish governance and accountability measures for AI systems? I'm curious to hear from the community.
These thoughts are just the beginning of a complex conversation. No conclusions have been drawn, but my opinion has been stated, and your insights and perspectives are invaluable. Please share your thoughts and opinions in the comments below to contribute to this ongoing dialogue.