Artificial General Intelligence (AGI) refers to AI systems with human-level intelligence across a wide range of domains. While AGI holds immense potential, Americans seem largely unprepared for its full implications. Many American academics commenting in the media badly underestimate how quickly AI will surpass human intelligence.
We must approach the future of AI with humility rather than hubris.
The recent case of Victor Miller's AI mayoral candidacy in Cheyenne, Wyoming, highlights the intersection of AI and politics. It has drawn out naive commentary from otherwise informed academics who do not grasp the broader contextual horizon. As the CNN article reports, Miller filed to run for mayor with a customized AI chatbot, VIC, powered by OpenAI's ChatGPT, that would make political decisions. OpenAI intervened and shut down Miller's access, citing policy violations against using AI for political campaigning. This case exemplifies both the ethical concerns and the growing pressure to leverage AI in politics as its capabilities advance.
The broader social problem is not where we are today but where we are quickly headed: the rapid advancement of AI has outpaced initial expectations. Large language models like ChatGPT, once considered science fiction, are now a reality. These technologies are progressing faster than social, legal, and regulatory frameworks can adapt. While many academics acknowledge the gaps in understanding and predicting AI growth, they often underestimate its future potential, relying on outdated assumptions formed from a static position of casual observation.
Human arrogance in AI development can lead us to overlook risks and unintended consequences, such as casually assuming that AI will remain safely constrained or that human oversight will always mitigate harm. These laissez-faire views raise significant ethical concerns about how AI will be used to make political decisions in the future; the issue is not what OpenAI is today but what this technology is making possible in the near future.
Experts warn that AI technology should never be used to make automated decisions when running any part of the government. However, where is the regulation? Who is ensuring that this doesn't happen? While AI can support decision-making, experts caution against delegating too much authority to AI systems in high-stakes domains.
Some experts argue that AI is designed for decision support and to provide data to help humans make decisions, but not to make decisions independently. For instance, Jen Golbeck, a professor at the University of Maryland, states, "AI has always been designed for decision support – it gives some data to help a human make decisions but is not set up to make decisions by itself." However, one can't help but wonder: who decides this?
According to Golbeck, AI chatbots may have a place in assisting with tasks like answering constituent inquiries or routing problems, but decision-making should always be left to humans. However, this perspective overlooks the horizon of evolving AI capabilities and applications, especially once we reach AGI. It greatly underestimates how AI will change governance and decision-making as the technology quickly gains the capacity to parse knowledge in ways that far exceed human abilities; ignoring this reality is human hubris.
David Karpf, an associate professor at George Washington University, dismisses AI chatbots running for office as a "gimmick" and believes "no one is going to vote for an AI chatbot to run a city." This view reflects a static perception of public trust in AI. While current AI models like ChatGPT are not qualified to run governments, this view dismisses the broader potential of advanced AGI. Historical examples show that once-implausible technologies can become widely accepted, suggesting future shifts in AI capabilities and societal attitudes. An informed and humble perspective acknowledges current limitations while recognizing that AGI's transformative potential demands robust ethical frameworks.
The experts' short-term thinking and dismissive attitudes towards AI autonomy point to a broader and potentially dangerous implication—a lack of urgency in developing robust governance frameworks to ensure AI systems remain aligned with human values as they become more powerful.
While the experts correctly caution against delegating too much authority to current AI in high-stakes domains like government, their statements betray a static mindset that fails to anticipate the rapidly evolving AI capability landscape. Yes, today's AI may be designed primarily for decision support. But as systems become more general and autonomous, who is to say they won't ultimately supersede human decision-making in many arenas? To posit that AI should "never" make automated decisions is extremely short-sighted.
The broader implication is that without proactive governance starting now, we risk facing a future of incredibly capable but unaligned AI systems making core decisions that run counter to human ethics and societal interests. The window is closing on our ability to maintain control of an intelligence explosion.
The experts' statements highlight a lack of multi-stakeholder efforts to address this issue through regulation, testing, and adaptive governance frameworks. Simply stating, "AI should never do X" is insufficient—concrete steps must be taken to translate that into enforceable reality as this technology quickly evolves.
Who ensures current AI remains restricted to decision-support roles? Who maps out the ethical boundaries and the fail-safes as we approach AGI? Dismissing early instantiations as gimmicks neglects the need to proactively govern the entire AI trajectory, not just current narrow use cases.
The broader implication is that we will continue to kick the governance can down the road through complacent thinking that assumes humans will remain in control, until we have ceded too much ground to potentially unaligned AGI. We must shed the hubris and take urgent action to shape the AI technological horizon in a direction that benefits humanity, lest we cede control to the pervasive dominance of technopoly.
As a cautionary tale, Neil Postman's book "Technopoly: The Surrender of Culture to Technology" critiques the unchecked expansion of technology in society, warning against the potential loss of human values and autonomy. Postman argues that technopoly, a state where technology dictates societal norms and values, can erode human agency and ethical considerations if left unchecked. Invoking Postman's thesis underscores the importance of steering AI development toward ethical and human-centric goals to prevent the unintended consequences of technological domination.
To navigate the dawning age of AGI responsibly, we must prioritize a human-centric approach to AI development, establishing ethical guidelines to ensure AI serves humanity's greater good. This requires interdisciplinary collaboration among academics, policymakers, technologists, and diverse stakeholders to address AI's societal impacts and develop governance frameworks that promote beneficial AI while mitigating risks. It also requires intellectual humility about the limits of human capacity in relation to a technology that will soon be able to do things with knowledge and information that we cannot; we need to respect that fact.
As we approach an era where AI will surpass human capabilities, we must confront our unpreparedness with humility. Rather than hubris, we need a sober, responsible approach that recognizes AGI's immense power while ensuring these technological advancements align with human-centric values and ethics.
Only through collective effort and ethical commitment can we navigate the age of AGI with wisdom and foresight. We will soon have to grapple with the limits of what our brains can do, as this technology shows us what it can do more efficiently.
We must adapt our technological systems to this new frontier with humility, not hubris.