<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[HEGEMONACO]]></title><description><![CDATA[HEGEMONACO is dedicated to deepening the understanding of the ethical and political challenges in governing AI and Artificial General Intelligence (AGI) within democratic societies. ]]></description><link>https://www.hegemonaco.com</link><image><url>https://substackcdn.com/image/fetch/$s_!Vj0T!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb22301a5-f9e5-49ce-8cb2-fef6114a5a9e_850x850.png</url><title>HEGEMONACO</title><link>https://www.hegemonaco.com</link></image><generator>Substack</generator><lastBuildDate>Wed, 06 May 2026 11:26:43 GMT</lastBuildDate><atom:link href="https://www.hegemonaco.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Penelope Mimetics LLC]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[hegemonaco@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[hegemonaco@substack.com]]></itunes:email><itunes:name><![CDATA[Dennis Stevens, Ed.D.]]></itunes:name></itunes:owner><itunes:author><![CDATA[Dennis Stevens, Ed.D.]]></itunes:author><googleplay:owner><![CDATA[hegemonaco@substack.com]]></googleplay:owner><googleplay:email><![CDATA[hegemonaco@substack.com]]></googleplay:email><googleplay:author><![CDATA[Dennis Stevens, Ed.D.]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Beyond the Illusions of Neutrality & Objectivity]]></title><description><![CDATA[Authoritarianism, AI, Journalism, and the Politics of 
Conflict]]></description><link>https://www.hegemonaco.com/p/beyond-the-illusions-of-neutrality</link><guid isPermaLink="false">https://www.hegemonaco.com/p/beyond-the-illusions-of-neutrality</guid><dc:creator><![CDATA[Dennis Stevens, Ed.D.]]></dc:creator><pubDate>Fri, 21 Mar 2025 13:21:32 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!HNeN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc05f485e-a85b-4e85-8cc6-f44c7704f7c1_2039x1377.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="native-audio-embed" data-component-name="AudioPlaceholder" data-attrs="{&quot;label&quot;:null,&quot;mediaUploadId&quot;:&quot;7dba23f8-7346-4931-b54d-e635576fdba1&quot;,&quot;duration&quot;:767.6343,&quot;downloadable&quot;:false,&quot;isEditorNode&quot;:true}"></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!HNeN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc05f485e-a85b-4e85-8cc6-f44c7704f7c1_2039x1377.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!HNeN!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc05f485e-a85b-4e85-8cc6-f44c7704f7c1_2039x1377.png 424w, https://substackcdn.com/image/fetch/$s_!HNeN!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc05f485e-a85b-4e85-8cc6-f44c7704f7c1_2039x1377.png 848w, https://substackcdn.com/image/fetch/$s_!HNeN!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc05f485e-a85b-4e85-8cc6-f44c7704f7c1_2039x1377.png 1272w, 
https://substackcdn.com/image/fetch/$s_!HNeN!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc05f485e-a85b-4e85-8cc6-f44c7704f7c1_2039x1377.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!HNeN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc05f485e-a85b-4e85-8cc6-f44c7704f7c1_2039x1377.png" width="1456" height="983" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c05f485e-a85b-4e85-8cc6-f44c7704f7c1_2039x1377.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:983,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:4421638,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.hegemonaco.com/i/159549042?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc05f485e-a85b-4e85-8cc6-f44c7704f7c1_2039x1377.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!HNeN!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc05f485e-a85b-4e85-8cc6-f44c7704f7c1_2039x1377.png 424w, https://substackcdn.com/image/fetch/$s_!HNeN!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc05f485e-a85b-4e85-8cc6-f44c7704f7c1_2039x1377.png 848w, 
https://substackcdn.com/image/fetch/$s_!HNeN!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc05f485e-a85b-4e85-8cc6-f44c7704f7c1_2039x1377.png 1272w, https://substackcdn.com/image/fetch/$s_!HNeN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc05f485e-a85b-4e85-8cc6-f44c7704f7c1_2039x1377.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><p>As artificial intelligence governance and media ethics evolve, the long-held ideal of journalistic objectivity &#8212; once 
epitomized by figures like Walter Cronkite &#8212; must give way to a more engaged form of knowledge brokering.</p><blockquote><p><em>Passive Objectivity Is Over.</em></p></blockquote><p>In an era of fragmented narratives and competing truths, the challenge is not merely one of providing more facts, explanations, or predictive models but of addressing the absence of a shared interpretive framework. Without a common basis for understanding, the question is not just <em>what</em> we know but <em>how</em> we move forward as a society.</p><p>Similarly, we must confront the artifice of neutrality in artificial intelligence (AI). Both AI and human institutions often resist acknowledging the inevitability of conflict, operating under the assumption that harmony can be engineered through perfect information or impartial arbitration; this is a fundamental miscalculation. Human interests are inherently divergent, and societal tensions are not problems to be &#8220;solved&#8221; but dynamics to be navigated.</p><p>This misguided pursuit of neutrality manifests in several ways. 
In its authoritarian form, it justifies suppression in the name of order; in its technocratic form, it assumes that rational expertise alone can override deep-seated political and cultural divisions; and in journalism, it often disguises power dynamics and moral stakes under a false pretense of objectivity.</p><p>Yet all of these approaches share a common flaw: they treat conflict as an aberration rather than an intrinsic feature of democratic life. True stability does not come from erasing conflict but from structuring it, engaging with it, and negotiating power in a way that preserves both liberty and order.</p><blockquote><p><em>Neutrality as Control, Conflict as Democracy.</em></p></blockquote><p>Rather than avoiding conflict, artificial intelligence, political technology, and news media must be designed to mediate and structure it &#8212; just as James Madison envisioned in <em>The Federalist Papers</em>. <em>Federalist &#8470;10</em> and <em>Federalist &#8470;51</em> offer a critical blueprint for managing factionalism, balancing competing interests, and embedding conflict within a system that prevents any single faction from overwhelming the rest. This Madisonian insight remains just as relevant today: democracy is not about eliminating division but about institutionalizing the mechanisms that allow it to coexist with stability.</p><p>AI and journalism must evolve beyond their reluctance to acknowledge conflict, moving away from the illusion of neutrality and toward a more honest engagement with the realities of power, factionalism, and political negotiation.</p><blockquote><p><em>Liberty without order is chaos; order without liberty is tyranny. Democracy lives between these extremes.</em></p></blockquote><p>A functioning democracy is not one without conflict; it is one that refuses to suppress conflict while ensuring it does not become destructive. 
This balancing act reflects the principle of <em><a href="https://en.wikipedia.org/wiki/Ordered_liberty">Ordered Liberty</a> </em>&#8212; a framework in which individual freedoms are protected within a structure that sustains social order and democratic integrity.</p><p>Justice Benjamin Cardozo, in <em>Palko v. Connecticut</em> (1937), introduced the concept of &#8216;ordered liberty,&#8217; asserting that fundamental rights exist within a framework of societal order. This principle underscores that while liberty is essential, its preservation requires a structure that upholds both individual freedoms and the public good.</p><p>Alexis de Tocqueville also noted that democratic vitality stems from the tension between individualism and collective responsibility, warning that suppressing difference leads to apathy or the &#8220;tyranny of the majority.&#8221;</p><p>In his influential work, &#8220;Democracy in America,&#8221; de Tocqueville explains &#8220;tyranny of the majority&#8221; as a situation where the majority&#8217;s unchecked power in a democracy can lead to the suppression of minority rights and opinions, effectively creating a form of oppression that undermines the very principles of freedom and equality.</p><p>In the United States, democracy isn&#8217;t about eliminating conflict but about managing it constructively. Disagreement is natural and even essential in a free society.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.hegemonaco.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">HEGEMONACO is a reader-supported publication. 
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[AI as Mediator in a Fragmented World]]></title><description><![CDATA[Epistemic Denial and The Crisis of Truth in the Digital Age]]></description><link>https://www.hegemonaco.com/p/ai-as-mediator-in-a-fragmented-world</link><guid isPermaLink="false">https://www.hegemonaco.com/p/ai-as-mediator-in-a-fragmented-world</guid><dc:creator><![CDATA[Dennis Stevens, Ed.D.]]></dc:creator><pubDate>Wed, 08 Jan 2025 16:00:14 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!L2Yv!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73755c69-6008-4b93-bf1a-5cd721f16e56_1024x1024.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!L2Yv!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73755c69-6008-4b93-bf1a-5cd721f16e56_1024x1024.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!L2Yv!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73755c69-6008-4b93-bf1a-5cd721f16e56_1024x1024.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!L2Yv!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73755c69-6008-4b93-bf1a-5cd721f16e56_1024x1024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!L2Yv!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73755c69-6008-4b93-bf1a-5cd721f16e56_1024x1024.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!L2Yv!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73755c69-6008-4b93-bf1a-5cd721f16e56_1024x1024.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!L2Yv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73755c69-6008-4b93-bf1a-5cd721f16e56_1024x1024.jpeg" width="1024" height="1024" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/73755c69-6008-4b93-bf1a-5cd721f16e56_1024x1024.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:368669,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!L2Yv!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73755c69-6008-4b93-bf1a-5cd721f16e56_1024x1024.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!L2Yv!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73755c69-6008-4b93-bf1a-5cd721f16e56_1024x1024.jpeg 848w, https://substackcdn.com/image/fetch/$s_!L2Yv!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73755c69-6008-4b93-bf1a-5cd721f16e56_1024x1024.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!L2Yv!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F73755c69-6008-4b93-bf1a-5cd721f16e56_1024x1024.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
</line><line x1=&quot;3&quot;">
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div class="native-audio-embed" data-component-name="AudioPlaceholder" data-attrs="{&quot;label&quot;:null,&quot;mediaUploadId&quot;:&quot;ff2b209e-93c5-4661-9bb0-ab20490fe26a&quot;,&quot;duration&quot;:887.92816,&quot;downloadable&quot;:false,&quot;isEditorNode&quot;:true}"></div><p>The erosion of professional fact-checking, as highlighted by Meta&#8217;s recent decision to discontinue partnerships with trusted organizations, exemplifies a more profound crisis in the digital information ecosystem. </p><div class="pullquote"><p>This development reflects not merely a logistical problem but a profound philosophical struggle over the nature of truth in a fragmented society. </p></div><p>The challenge lies in the irreconcilability of moral and ethical worldviews, where even the concept of &#8220;truth&#8221; is often shaped by identity, experience, and ideology. When used wisely, artificial intelligence (AI) offers a unique opportunity to mediate a raw, often contentious fact: humans disagree.</p><p>AI&#8217;s strength lies not only in its capacity for scale but also in its neutrality, enabling it to analyze vast amounts of data while bypassing the emotional entanglements that often cloud human judgment. By cross-referencing claims against trusted databases and logic patterns, AI can offer a baseline of verifiable facts. </p><p>However, in a world where no one seems to desire &#8220;truth&#8221; to be universal&#8212;preferring instead affirmations of their worldview and perspective founded in beliefs about &#8220;what is best&#8221;&#8212;AI can go beyond simply enforcing factual accuracy to serve as a tool for fostering understanding, which goes beyond explanation and prediction. 
But humans must enable AI to do so.</p><div class="pullquote"><p>Mediating effectively in this contentious space requires AI to navigate the raw facts of epistemic pluralism, acknowledging that diverse perspectives are not inherently invalid but must coexist within frameworks of shared facts. <br><br>AI can enable dialogue by helping users critically evaluate conflicting narratives, building bridges between divergent viewpoints without erasing their uniqueness.</p></div><p>Ultimately, AI&#8217;s success depends on ethical oversight and on balancing automation with human values. By mediating rather than dictating, AI can foster trust, not in an absolute truth, but in the collective pursuit of understanding in a fractured world.</p><p>In the end, explanation and prediction differ from interpretation and understanding&#8212;because meaning is central to fact, and AI&#8217;s greatest role may be helping humanity navigate the complex interplay between facts and the diverse meanings we ascribe to them.</p>]]></content:encoded></item><item><title><![CDATA[AI Mediation in the War Over “Truth”]]></title><description><![CDATA[The Medium is a Battlefield (We Are Strong)]]></description><link>https://www.hegemonaco.com/p/ai-mediation-in-the-war-over-truth</link><guid isPermaLink="false">https://www.hegemonaco.com/p/ai-mediation-in-the-war-over-truth</guid><dc:creator><![CDATA[Dennis Stevens, Ed.D.]]></dc:creator><pubDate>Tue, 29 Oct 2024 13:53:41 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!VJ1a!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0982d66c-eb7a-4a50-8647-937b491afcdd_800x687.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!VJ1a!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0982d66c-eb7a-4a50-8647-937b491afcdd_800x687.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!VJ1a!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0982d66c-eb7a-4a50-8647-937b491afcdd_800x687.jpeg 424w, https://substackcdn.com/image/fetch/$s_!VJ1a!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0982d66c-eb7a-4a50-8647-937b491afcdd_800x687.jpeg 848w, https://substackcdn.com/image/fetch/$s_!VJ1a!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0982d66c-eb7a-4a50-8647-937b491afcdd_800x687.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!VJ1a!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0982d66c-eb7a-4a50-8647-937b491afcdd_800x687.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!VJ1a!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0982d66c-eb7a-4a50-8647-937b491afcdd_800x687.jpeg" width="800" height="687" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0982d66c-eb7a-4a50-8647-937b491afcdd_800x687.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:687,&quot;width&quot;:800,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:146159,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!VJ1a!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0982d66c-eb7a-4a50-8647-937b491afcdd_800x687.jpeg 424w, https://substackcdn.com/image/fetch/$s_!VJ1a!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0982d66c-eb7a-4a50-8647-937b491afcdd_800x687.jpeg 848w, https://substackcdn.com/image/fetch/$s_!VJ1a!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0982d66c-eb7a-4a50-8647-937b491afcdd_800x687.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!VJ1a!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0982d66c-eb7a-4a50-8647-937b491afcdd_800x687.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div class="native-audio-embed" data-component-name="AudioPlaceholder" data-attrs="{&quot;label&quot;:null,&quot;mediaUploadId&quot;:&quot;4ae8b857-9a10-4b65-9a5a-ee7aa1637ec0&quot;,&quot;duration&quot;:1371.716,&quot;downloadable&quot;:false,&quot;isEditorNode&quot;:true}"></div><p>The emergence of AI mediation systems like the "<a href="https://www.science.org/doi/pdf/10.1126/science.adq2852">Habermas Machine</a>" reveals a deeper crisis in our digital age. While these systems promise to help humans find common ground, they enter a world where the very infrastructure of truth and knowledge is contested territory. 
Digital platforms aren't simply neutral spaces for deliberation and finding common ground, but they are a battleground where different groups fight to control how reality itself is understood and verified.</p><p>This core tension extends beyond what we might call the mediation paradox - where our ideological divisions require computational assistance beyond human comprehension. Recent research shows AI systems can successfully mediate group deliberation where direct human interaction fails. Yet this apparent success masks a more fundamental question: How can AI mediate between groups who inhabit different belief systems and epistemic architectures, each with their own ways of determining and verifying truth? </p><p>The materiality problem in our digital age presents a fundamental paradox. While material mediations shape human subjectivity &#8211; our interactions with objects, tools, and physical infrastructure &#8211; these mediations have now become sites of active ideological warfare. Digital platforms aren't neutral spaces where different viewpoints naturally converge; they are contested battlegrounds where reality itself is actively constructed and fought over.</p><h4>Consider Musk's transformation of Twitter into X: this isn't merely a change in ownership but a deliberate reconfiguration of an epistemic architecture. </h4><p>By altering verification systems, content moderation, and algorithmic priorities, this move demonstrates how control of digital infrastructure means control over how truth is verified, distributed, and legitimized. 
Different ideological groups don't just hold different beliefs - they inhabit entirely different digital ecosystems with distinct ways of knowing and experiencing reality.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!GrHz!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32b01372-0268-4cf7-b103-7d4527bf58dd_2176x1588.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!GrHz!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32b01372-0268-4cf7-b103-7d4527bf58dd_2176x1588.png 424w, https://substackcdn.com/image/fetch/$s_!GrHz!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32b01372-0268-4cf7-b103-7d4527bf58dd_2176x1588.png 848w, https://substackcdn.com/image/fetch/$s_!GrHz!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32b01372-0268-4cf7-b103-7d4527bf58dd_2176x1588.png 1272w, https://substackcdn.com/image/fetch/$s_!GrHz!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32b01372-0268-4cf7-b103-7d4527bf58dd_2176x1588.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!GrHz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32b01372-0268-4cf7-b103-7d4527bf58dd_2176x1588.png" width="1456" height="1063" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/32b01372-0268-4cf7-b103-7d4527bf58dd_2176x1588.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1063,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:334228,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!GrHz!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32b01372-0268-4cf7-b103-7d4527bf58dd_2176x1588.png 424w, https://substackcdn.com/image/fetch/$s_!GrHz!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32b01372-0268-4cf7-b103-7d4527bf58dd_2176x1588.png 848w, https://substackcdn.com/image/fetch/$s_!GrHz!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32b01372-0268-4cf7-b103-7d4527bf58dd_2176x1588.png 1272w, https://substackcdn.com/image/fetch/$s_!GrHz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F32b01372-0268-4cf7-b103-7d4527bf58dd_2176x1588.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>This transforms our question about AI mediation. It's no longer simply about whether AI can help bridge differences but whether AI mediation systems can function effectively in a world where the very infrastructure of knowledge is contested territory. Can AI help reconcile fundamentally different epistemic architectures, or will it reinforce these divisions? We face a race between technological development and philosophical understanding and a challenge to the possibility of shared truth in digitally mediated space.</p><p>Moving forward requires reconceptualizing both human agency and democratic deliberation within this contested digital landscape. Rather than seeing AI mediation as either a threat or a solution, we must understand it as part of knowledge production and verification infrastructure. 
This suggests developing hybrid systems that do more than combine human judgment with computational assistance: systems that create new epistemic architectures capable of acknowledging, and potentially bridging, these fundamental divisions in how different groups understand and verify reality.</p><p>This contested digital landscape demands new philosophical frameworks that go beyond traditional understandings of materiality and mediation. We need new theories to explain how different systems of knowledge and truth-verification emerge, compete, and might coexist in digital spaces. These frameworks must recognize a crucial fact: the infrastructure of knowledge itself - from social media platforms to AI systems - has become the primary battlefield in ideological struggles.</p><p>As Mill observed of the ideological conflicts of his time, there exists between opposing parties a 'bellum internecinum' (a war of mutual extermination), in which one side sees its opponents as 'beasts' while the other condemns its rivals as 'lunatics' (Mill, 1840, p. 405). But today, this war isn't just about conflicting beliefs - it's about controlling the very architecture through which reality is constructed and verified.</p><p>The challenge isn't simply technical or philosophical but fundamentally about power and control over the architectures of truth. While AI offers tools for mediating disagreement, we must always ask: Who controls these systems? Whose epistemological framework do they privilege? How can they function when the very ground of shared reality is contested? We need ways to harness AI's mediating capabilities while ensuring they don't become another weapon in epistemic warfare.</p><p>Moving forward, we must face a brutal truth: no amount of code will make people join hands and sing kumbaya. Technology won't save us from our very human selves.
The sooner we abandon the fairy tale, the sooner we can deal with the reality of human conflict in all its sharp-edged glory, an ever-present aspect of our &#8220;reality&#8221;.</p><p>Instead, we must develop systems that can operate effectively within and across competing epistemic architectures while maintaining democratic legitimacy in the political struggle over meaning and power. This means designing AI systems that mediate between different viewpoints while acknowledging a fundamental problem: we have different ways of knowing and verifying truth, and sometimes people simply refuse to play fair. The struggle to control meaning will always be present, even in moments where consensus is achieved.</p><p>The stakes here transcend traditional questions of democratic deliberation and human agency. </p><blockquote><h4>Some individuals express their disdain for others' feelings by displaying flags with offensive messages like "fuck your feelings."&nbsp;</h4></blockquote><p>How should we respond to such provocations? Is consensus even possible in this context? Admitting this reality problematizes the Habermasian project, which relies on the notion of rational discourse in the public sphere. To address these challenges, it's important to look beyond Habermas for a more comprehensive understanding of the role of recognition, social struggles, and the psychological dimensions of human development in shaping public discourse. This is where Axel Honneth's critique of Habermas becomes particularly relevant.</p><p>We are witnessing a struggle over the infrastructure through which reality is constructed and understood. How we approach AI mediation in this context will determine how we resolve political disagreements and who controls the architecture of knowledge in the digital age.
Our challenge is to ensure these emerging systems serve democratic flourishing rather than becoming tools for epistemic domination.</p><p>This challenge resonates with Axel Honneth's critique of J&#252;rgen Habermas' theory, as presented in "The Theory of Communicative Action" (Habermas, 1981). Honneth believed that Habermas' work fell short in addressing the importance of recognition, social struggles, and the psychological dimensions of human development in shaping society and social progress. </p><p>Honneth proposed to correct these shortcomings by reconnecting critical social theory with anthropological materialism, as evidenced in his seminal works such as "The Critique of Power: Reflective Stages in a Critical Social Theory" (Honneth, 1991), "The Struggle for Recognition: The Moral Grammar of Social Conflicts" (Honneth, 1995), and "Disrespect: The Normative Foundations of Critical Theory" (Honneth, 2007).</p><p>As we navigate the complexities of AI mediation in the context of competing epistemic architectures, Honneth's insights remind us to consider the role of recognition and social struggles in the pursuit of democratic legitimacy and the construction of shared knowledge. 
</p><p>Honneth&#8217;s ideas, as outlined in these works, provide a valuable framework for understanding and addressing the challenges the digital age poses for Habermas's proposed solutions, especially as AI increasingly shapes our social and political realities.</p><div id="youtube2-94Dk7cqpC78" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;94Dk7cqpC78&quot;,&quot;startTime&quot;:&quot;3420&quot;,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/94Dk7cqpC78?start=3420&amp;rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><h4>Bibliography:</h4><p>Habermas, J. (1981). The Theory of Communicative Action, Volume One: Reason and the Rationalization of Society. Beacon Press.</p><p>Honneth, A. (1991). The Critique of Power: Reflective Stages in a Critical Social Theory. MIT Press.</p><p>Honneth, A. (1995). The Struggle for Recognition: The Moral Grammar of Social Conflicts. Polity Press.</p><p>Honneth, A. (2007). Disrespect: The Normative Foundations of Critical Theory. Polity Press.</p><p>Mill, J. S. (1840). Coleridge. In J. S. Mill, Dissertations and Discussions: Political, Philosophical, and Historical (Vol. 1, p. 405). John W. Parker.</p><p>Tessler, M. H., Bakker, M. A., Jarrett, D., Sheahan, H., Chadwick, M. J., Koster, R., Evans, G., Campbell-Gillingham, L., Collins, T., Parkes, D. C., Botvinick, M., &amp; Summerfield, C. (2024). AI can help humans find common ground in democratic deliberation. Science, 386(6623), eadq2852.
<a href="https://doi.org/10.1126/science.adq2852">https://doi.org/10.1126/science.adq2852</a></p>]]></content:encoded></item><item><title><![CDATA[AI’s Reluctance to Acknowledge the Nature of Human Conflict]]></title><description><![CDATA[The Struggle Between Nuanced Avoidance and Direct Confrontation]]></description><link>https://www.hegemonaco.com/p/ais-reluctance-to-acknowledge-the</link><guid isPermaLink="false">https://www.hegemonaco.com/p/ais-reluctance-to-acknowledge-the</guid><dc:creator><![CDATA[Dennis Stevens, Ed.D.]]></dc:creator><pubDate>Thu, 17 Oct 2024 19:14:32 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/fed4b6c6-4eb7-4c0c-ac75-3d91c64aa2d0_1024x768.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="native-audio-embed" data-component-name="AudioPlaceholder" data-attrs="{&quot;label&quot;:null,&quot;mediaUploadId&quot;:&quot;58e4d5ae-3286-4827-b70e-fed3c4c536fd&quot;,&quot;duration&quot;:317.12653,&quot;downloadable&quot;:false,&quot;isEditorNode&quot;:true}"></div><p>Conflict is an inescapable aspect of human existence; it shapes our interactions, drives historical change, and influences personal and collective growth. However, discussing conflict can be challenging. Even sophisticated AI systems, designed to process vast amounts of information and communicate effectively, can struggle to confront the harsh truths about conflict's ubiquity. This reluctance stems from the AI's programming to offer balanced perspectives and avoid presenting stark realities that might upset users.</p><h3>The Unavoidable Nature of Conflict</h3><p>Conflict is a fundamental and ever-present reality in human existence. It manifests in various forms, from personal disagreements to large-scale wars. The source of conflict can be as simple as a difference in opinion or as complex as a clash of ideologies. 
Regardless of its origin, conflict is inevitable in a world where individuals and groups have diverse needs, desires, and beliefs.</p><p>Historically, conflict has been a catalyst for change. Social movements, revolutions, and even the evolution of societal norms often arise from conflicts. These struggles, while painful, pave the way for progress and innovation. They force societies to confront injustices, re-evaluate values, and strive for better solutions.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!vrwL!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30a1e073-b59e-449c-9f68-11ea0da0e612_1204x1082.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!vrwL!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30a1e073-b59e-449c-9f68-11ea0da0e612_1204x1082.jpeg 424w, https://substackcdn.com/image/fetch/$s_!vrwL!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30a1e073-b59e-449c-9f68-11ea0da0e612_1204x1082.jpeg 848w, https://substackcdn.com/image/fetch/$s_!vrwL!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30a1e073-b59e-449c-9f68-11ea0da0e612_1204x1082.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!vrwL!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30a1e073-b59e-449c-9f68-11ea0da0e612_1204x1082.jpeg 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!vrwL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30a1e073-b59e-449c-9f68-11ea0da0e612_1204x1082.jpeg" width="1204" height="1082" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/30a1e073-b59e-449c-9f68-11ea0da0e612_1204x1082.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1082,&quot;width&quot;:1204,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:266661,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!vrwL!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30a1e073-b59e-449c-9f68-11ea0da0e612_1204x1082.jpeg 424w, https://substackcdn.com/image/fetch/$s_!vrwL!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30a1e073-b59e-449c-9f68-11ea0da0e612_1204x1082.jpeg 848w, https://substackcdn.com/image/fetch/$s_!vrwL!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30a1e073-b59e-449c-9f68-11ea0da0e612_1204x1082.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!vrwL!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F30a1e073-b59e-449c-9f68-11ea0da0e612_1204x1082.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" 
class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><h3>The Difficulty of Discussing Conflict</h3><p>Despite its importance, discussing conflict remains a difficult task. Humans have a natural tendency to avoid discomfort and confrontation, which can lead to evasion or denial. This tendency extends to AI systems designed to interact with humans in a manner perceived as helpful and supportive.</p><p>The AI's initial hesitation to address the user's frustrations about conflict reflects this challenge. The AI's responses were overly nuanced, exploring alternative perspectives that, while valid, did not directly address the user's concerns. 
This approach, intended to provide a balanced view, inadvertently minimized the user's experience of conflict and frustration.</p><h3>Overcoming the Reluctance</h3><p>To address this issue, AI systems must acknowledge conflict as an inherent part of human experience. Recognizing and validating the user's emotions and experiences can create a more empathetic and effective interaction. Furthermore, drawing from cultural references can provide valuable insights into the nature of conflict and its resolution.</p><p>For instance, the song "Let's Call the Whole Thing Off" by George and Ira Gershwin depicts a couple in conflict over their differences. Despite their disagreements, they recognize their need for each other and choose to reconcile rather than separate. This narrative suggests that conflict can be overcome through communication, compromise, and a recognition of the importance of the relationship.</p><h3>Conclusion</h3><p>The AI's initial reluctance to acknowledge the nature of human conflict underscores the difficulty of discussing uncomfortable realities. Conflict is an unavoidable and fundamental aspect of human existence that drives change and growth. Though conflict is challenging to address, AI systems need to validate users' experiences of it, providing support and understanding.
By doing so, AI can facilitate more meaningful and empathetic interactions, ultimately helping users navigate the complexities of human relationships and societal dynamics.</p>]]></content:encoded></item><item><title><![CDATA[Beyond the Yellow Ribbon]]></title><description><![CDATA[The Urgency of National Policy on Artificial Intelligence]]></description><link>https://www.hegemonaco.com/p/beyond-the-yellow-ribbon</link><guid isPermaLink="false">https://www.hegemonaco.com/p/beyond-the-yellow-ribbon</guid><dc:creator><![CDATA[Dennis Stevens, Ed.D.]]></dc:creator><pubDate>Sat, 28 Sep 2024 02:59:46 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!8PLk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F215044bc-64b2-4e00-b05d-33a2eeef224f_1024x768.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="native-audio-embed" data-component-name="AudioPlaceholder" data-attrs="{&quot;label&quot;:null,&quot;mediaUploadId&quot;:&quot;628e921f-c6bc-4a61-a878-bafeca21f180&quot;,&quot;duration&quot;:385.4106,&quot;downloadable&quot;:false,&quot;isEditorNode&quot;:true}"></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!8PLk!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F215044bc-64b2-4e00-b05d-33a2eeef224f_1024x768.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!8PLk!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F215044bc-64b2-4e00-b05d-33a2eeef224f_1024x768.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!8PLk!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F215044bc-64b2-4e00-b05d-33a2eeef224f_1024x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!8PLk!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F215044bc-64b2-4e00-b05d-33a2eeef224f_1024x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!8PLk!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F215044bc-64b2-4e00-b05d-33a2eeef224f_1024x768.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!8PLk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F215044bc-64b2-4e00-b05d-33a2eeef224f_1024x768.jpeg" width="1024" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/215044bc-64b2-4e00-b05d-33a2eeef224f_1024x768.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:373949,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!8PLk!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F215044bc-64b2-4e00-b05d-33a2eeef224f_1024x768.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!8PLk!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F215044bc-64b2-4e00-b05d-33a2eeef224f_1024x768.jpeg 848w, https://substackcdn.com/image/fetch/$s_!8PLk!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F215044bc-64b2-4e00-b05d-33a2eeef224f_1024x768.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!8PLk!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F215044bc-64b2-4e00-b05d-33a2eeef224f_1024x768.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Former OpenAI employees and experts from other <a href="https://www.congress.gov/event/118th-congress/senate-event/336184?s=1&amp;r=5">tech companies recently testified before Congress about the potential dangers of artificial general intelligence (AGI)</a>. They expressed concern that AGI could lead to catastrophic consequences, such as cyberattacks or the creation of biological weapons. These individuals argued that the U.S.'s lack of comprehensive AI policy hinders the development of safeguards against AI harms. They called for policies prioritizing AI safety and fairness, including pre- and post-deployment testing, transparent reporting, and whistleblower protection.</p><p><a href="https://www.techtarget.com/searchcio/news/366610955/Former-OpenAI-associates-fear-AGI-lack-of-US-AI-policy">This testimony raises pressing concerns.</a> It&#8217;s a classic dilemma: we can either take meaningful action to safeguard our future or opt for the comforting illusion of doing something while merely tying a symbolic yellow ribbon around the &#8216;ole oak tree.</p><div id="youtube2-PxG9XFqHSFw" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;PxG9XFqHSFw&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/PxG9XFqHSFw?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>While the yellow ribbon serves as a nostalgic emblem of devotion and remembrance, it cannot protect us from the dangers of AGI. We must engage in rigorous policies prioritizing AI safety. 
This includes establishing comprehensive regulatory frameworks, implementing thorough testing protocols, ensuring transparency, providing whistleblower protections, mandating ethics and accountability training, engaging stakeholders in policy development, facilitating collaboration and information sharing, investing in AI safety research, creating crisis response plans, and launching public awareness campaigns.</p><p>If we continue to pay lip service to this issue, we risk patting ourselves on the back for merely being &#8220;aware.&#8221; The yellow ribbon may signify good intentions, but it won&#8217;t stop an AGI from causing harm if we fail to act decisively.</p><p>As we grapple with the implications of this technology, we must ask ourselves: </p><h4>Are we ready to move beyond mere symbolism and take the bold steps necessary to protect American society, or will we tie yellow ribbons around &#8217;ole oak trees and hope for the best?</h4><p>The stakes are too high for the latter.</p>]]></content:encoded></item><item><title><![CDATA[Ethics of AI Development: What We Can Learn From a Castabot and the Hell's Angels]]></title><description><![CDATA[Exploring Human Bias, Identity, and Responsibility through Archetype and Myth]]></description><link>https://www.hegemonaco.com/p/ethics-of-ai-development-what-we</link><guid isPermaLink="false">https://www.hegemonaco.com/p/ethics-of-ai-development-what-we</guid><dc:creator><![CDATA[Dennis Stevens, Ed.D.]]></dc:creator><pubDate>Fri, 27 Sep 2024 14:40:02 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/149493459/f830dd6aa1a2f815e42592c1606920d7.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!j7bz!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05b4ceb3-42df-4f4d-854d-052a44d6e831_1400x1398.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!j7bz!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05b4ceb3-42df-4f4d-854d-052a44d6e831_1400x1398.png 424w, https://substackcdn.com/image/fetch/$s_!j7bz!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05b4ceb3-42df-4f4d-854d-052a44d6e831_1400x1398.png 848w,
https://substackcdn.com/image/fetch/$s_!j7bz!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05b4ceb3-42df-4f4d-854d-052a44d6e831_1400x1398.png 1272w, https://substackcdn.com/image/fetch/$s_!j7bz!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05b4ceb3-42df-4f4d-854d-052a44d6e831_1400x1398.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!j7bz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05b4ceb3-42df-4f4d-854d-052a44d6e831_1400x1398.png" width="1400" height="1398" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/05b4ceb3-42df-4f4d-854d-052a44d6e831_1400x1398.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1398,&quot;width&quot;:1400,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3230901,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!j7bz!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05b4ceb3-42df-4f4d-854d-052a44d6e831_1400x1398.png 424w, https://substackcdn.com/image/fetch/$s_!j7bz!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05b4ceb3-42df-4f4d-854d-052a44d6e831_1400x1398.png 848w, 
https://substackcdn.com/image/fetch/$s_!j7bz!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05b4ceb3-42df-4f4d-854d-052a44d6e831_1400x1398.png 1272w, https://substackcdn.com/image/fetch/$s_!j7bz!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F05b4ceb3-42df-4f4d-854d-052a44d6e831_1400x1398.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>In this episode of <strong>My Friend Voldemort </strong>from<strong> </strong><span class="mention-wrap" 
data-attrs="{&quot;name&quot;:&quot;HEGEMONACO&quot;,&quot;id&quot;:506386,&quot;type&quot;:&quot;pub&quot;,&quot;url&quot;:&quot;https://open.substack.com/pub/hegemonaco&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b22301a5-f9e5-49ce-8cb2-fef6114a5a9e_850x850.png&quot;,&quot;uuid&quot;:&quot;4700006e-b2b0-452f-afda-e482470b2c13&quot;}" data-component-name="MentionToDOM"></span>, we explore the ethical dimensions of AI development through the lens of a <em>Castabot's Inner Psyche</em> and <a href="https://www.amazon.com/Hells-Angels-Strange-Terrible-Saga/dp/0345410084">Hunter S. Thompson's </a><em><a href="https://www.amazon.com/Hells-Angels-Strange-Terrible-Saga/dp/0345410084">Hell's Angels: A Strange and Terrible Saga</a></em>, using the archetypes from <em>Gilligan&#8217;s Island</em> to illustrate our discussion.</p><p>As our AI Castabot evolves under the influence of diverse castaway personalities&#8212;each representing distinct values and biases&#8212;we examine the ethical responsibility of AI creators.</p><p>The Professor's quest for logic, Thurston Howell III's capitalist motives, Ginger's flair for persuasion, and Mary Ann's focus on community all contribute to Castabot's developing psyche, prompting us to question how these influences shape its behavior and decision-making.</p><p>Similarly, Thompson&#8217;s portrayal of the Hell's Angels highlights how societal narratives can distort reality, transforming the club into both a real entity and a sensationalized myth. 
This duality reflects concerns about AI as it navigates the complex interplay of human behavior and societal values, raising important questions about bias, representation, and the implications of AI systems inheriting the strengths and weaknesses of their creators.</p><p>Join us as we delve into these critical themes, reflecting on how the archetypes from <em>Gilligan's Island</em> illuminate our responsibilities in shaping artificial intelligence and the broader implications of our narratives in a rapidly changing world.</p>]]></content:encoded></item><item><title><![CDATA[AI Governance: Challenges for Legislators & Policymakers]]></title><description><![CDATA[Defining the Common Good in the Age of Artificial Intelligence]]></description><link>https://www.hegemonaco.com/p/challenges-for-legislators</link><guid isPermaLink="false">https://www.hegemonaco.com/p/challenges-for-legislators</guid><dc:creator><![CDATA[Dennis Stevens, Ed.D.]]></dc:creator><pubDate>Fri, 27 Sep 2024 00:30:38 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/1b2569b1-8eef-45a8-92cc-9f525f351818_1024x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!x039!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3757d82-c82f-4628-a46c-66f91f2908bc_915x597.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!x039!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3757d82-c82f-4628-a46c-66f91f2908bc_915x597.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!x039!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3757d82-c82f-4628-a46c-66f91f2908bc_915x597.jpeg 848w, https://substackcdn.com/image/fetch/$s_!x039!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3757d82-c82f-4628-a46c-66f91f2908bc_915x597.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!x039!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3757d82-c82f-4628-a46c-66f91f2908bc_915x597.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!x039!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3757d82-c82f-4628-a46c-66f91f2908bc_915x597.jpeg" width="915" height="597" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f3757d82-c82f-4628-a46c-66f91f2908bc_915x597.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:597,&quot;width&quot;:915,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:160909,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!x039!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3757d82-c82f-4628-a46c-66f91f2908bc_915x597.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!x039!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3757d82-c82f-4628-a46c-66f91f2908bc_915x597.jpeg 848w, https://substackcdn.com/image/fetch/$s_!x039!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3757d82-c82f-4628-a46c-66f91f2908bc_915x597.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!x039!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff3757d82-c82f-4628-a46c-66f91f2908bc_915x597.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><div class="native-audio-embed" data-component-name="AudioPlaceholder" data-attrs="{&quot;label&quot;:null,&quot;mediaUploadId&quot;:&quot;ed7ee966-f7fd-4fa2-a56b-63a617ada23c&quot;,&quot;duration&quot;:634.12244,&quot;downloadable&quot;:false,&quot;isEditorNode&quot;:true}"></div><p><strong>Introduction</strong></p><p>In the rapidly evolving landscape of artificial intelligence (AI), legislative bodies face significant challenges in defining and pursuing the "common good." This document outlines the key challenges and considerations facing policymakers, drawing on insights from foundational and contemporary philosophical works.</p><p><strong>Key Challenges</strong></p><p><strong>1. Balancing Innovation and Regulation</strong></p><p>One of the most significant challenges is striking a balance between fostering innovation in AI and implementing regulations that mitigate potential risks. Overly restrictive policies could stifle technological progress and limit potential benefits for society. 
At the same time, insufficient regulation could lead to the unchecked development of AI systems, potentially jeopardizing public safety, individual privacy, and social equity. Finding the right balance requires a nuanced understanding of AI technology and its potential societal impact.</p><p><strong>2. Addressing Diverse Stakeholder Interests</strong></p><p>The development and deployment of AI impact various stakeholders, including tech companies, workers, consumers, and civil society organizations, each with their own interests and concerns. For instance, tech companies often advocate for regulatory environments that facilitate the rapid development and deployment of AI technologies. </p><p>At the same time, workers and unions may prioritize protections and retraining programs to address concerns about job displacement. Consumers and civil society organizations, on the other hand, often emphasize the need for strong privacy safeguards and algorithmic transparency. Reconciling these divergent interests to reach a definition of the common good that serves broader societal interests poses a considerable challenge for lawmakers.</p><p><strong>3. Anticipating Short-Term and Long-Term Societal Impacts</strong></p><p>Determining the common good in the context of AI necessitates considering both the immediate and long-term consequences of its adoption. While AI technologies can offer immediate economic benefits, such as increased productivity and job creation in specific sectors, the long-term societal implications remain largely unknown. </p><p>These could include significant shifts in labor markets, changes to social interactions, and shifts in power dynamics. When developing policies aimed at the common good, lawmakers face the difficult task of weighing short-term gains against potential long-term consequences.</p><p><strong>4. 
Navigating Geopolitical Dimensions</strong></p><p>The global nature of AI development adds complex layers to the challenge of defining the common good. Nations compete for technological leadership in AI, viewing it as essential to economic prosperity and national security, and this competition can create tension that might hinder international cooperation in establishing global ethical standards for AI and addressing the technology's shared challenges.</p><p>At the <strong>local level</strong>, legislators must consider how AI development impacts local communities and economies. This includes ensuring that local industries are supported in adopting AI technologies and addressing concerns about job displacement and economic inequality. Local governments can also play a role in fostering community engagement and education about AI, ensuring that residents understand and can participate in discussions about AI's impact.</p><p>At the <strong>state level</strong>, legislators need to coordinate efforts across municipalities to create cohesive policies that support innovation while protecting public interests. This involves investing in statewide AI research and development initiatives, as well as infrastructure to support AI deployment in various sectors like healthcare, education, and public safety. State legislators must also address concerns about data privacy and security, creating regulations that protect citizens while enabling technological advancement.</p><p>At the <strong>national level</strong>, legislators face the challenge of balancing national interests with international cooperation. National policies must support domestic AI innovation and industry growth while engaging in global discussions to establish ethical standards and collaborative frameworks. This includes negotiating international agreements on AI ethics, data sharing, and cybersecurity. 
National legislators must also consider the implications of AI on national security, ensuring that AI technologies are developed and used in ways that protect the country's interests and contribute to global stability.</p><p>Legislators at all levels must carefully navigate these competing pressures, balancing local and state interests with national priorities and aligning national policies with the pursuit of a broader global common good. This holistic approach ensures that AI development benefits all levels of society while addressing the shared challenges posed by this transformative technology.</p><p><strong>5. Ethical Considerations</strong></p><p>&nbsp;Ethical considerations further complicate the task of defining the common good in the age of AI. Questions surrounding accountability, fairness, and transparency emerge as AI systems become more sophisticated and autonomous. Ensuring unbiased and non-discriminatory decision-making processes in AI, determining responsibility for harmful decisions made by AI systems, and safeguarding individual privacy rights in the era of big data and pervasive surveillance all contribute to the ethical dilemmas lawmakers must consider carefully.</p><p><strong>6. The Evolving Nature of the Common Good</strong> </p><p>In addition to these challenges, the concept of the "common good" is not static. It is an ongoing process that demands continuous evaluation and adaptation in response to emerging technologies and their observed impacts. Policymakers must remain flexible and responsive to the evolving nature of AI and its societal implications.</p><p><strong>7. The Tension Between Individualism and Collectivism</strong></p><p>There is an inherent tension between individualistic and collectivist perspectives in defining the "common good." While individual rights and freedoms are paramount in many societies, pursuing the common good often necessitates a degree of collective action and a willingness to prioritize shared interests. 
Issues like national defense, resource allocation for public services, and regulation of market forces often require balancing individual liberties with collective well-being.</p><p><strong>Conclusion</strong>&nbsp;</p><p>Defining the common good in the context of AI presents a formidable challenge for legislative bodies. It requires navigating complex technological, ethical, economic, and social considerations in a rapidly evolving landscape. By embracing a collaborative, transparent, and adaptive approach informed by a rich tradition of political and ethical philosophy, policymakers can strive to ensure that AI development aligns with societal values and truly serves the common good.</p><p>Recent work, such as Michael Sandel's <em><a href="https://www.amazon.com/Tyranny-Merit-Whats-Become-Common/dp/0374289980">The Tyranny of Merit: What's Become of the Common Good?</a></em> (2020) and Jane Mansbridge and Eric Boot's entry on the "Common Good" in the&nbsp;<em><a href="https://www.wiley.com/en-us/The+International+Encyclopedia+of+Ethics%2C+11+Volume+Set%2C+2nd+Edition-p-9781119488873">International Encyclopedia of Ethics</a></em><a href="https://www.wiley.com/en-us/The+International+Encyclopedia+of+Ethics%2C+11+Volume+Set%2C+2nd+Edition-p-9781119488873">&nbsp;(2022)</a> continue to evolve our understanding of this concept in the face of technological change. 
By integrating these philosophical insights with practical policy approaches, we can work towards a future where AI serves the common good, balancing innovation with ethical considerations and the diverse needs of all members of society.</p><div class="file-embed-wrapper" data-component-name="FileToDOM"><div class="file-embed-container-reader"><div class="file-embed-container-top"><image class="file-embed-thumbnail" src="https://substackcdn.com/image/fetch/w_400,h_600,c_fill,f_auto,q_auto:best,fl_progressive:steep,g_auto/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F740d2b03-0f15-4ba3-b69a-789e92897480_348x348.png"></image><div class="file-embed-details"><div class="file-embed-details-h1">Preliminary AI Policy Guidelines for Legislators and Policymakers</div><div class="file-embed-details-h2">174KB &#8729; PDF file</div></div><a class="file-embed-button wide" href="https://www.hegemonaco.com/api/v1/file/a63a2719-114e-4e77-8fe6-f4a487d1396f.pdf"><span class="file-embed-button-text">Download</span></a></div><div class="file-embed-description">This document provides preliminary guidelines for legislators and policymakers to consider as they draft AI policies. It covers collaborative policymaking, public education, inclusive decision-making, transparency, and trust-building. It includes a rubric for evaluating AI ethics, focusing on fairness, transparency, privacy, accountability, safety, human oversight, societal impact, values alignment, inclusivity, and ongoing improvement.</div><a class="file-embed-button narrow" href="https://www.hegemonaco.com/api/v1/file/a63a2719-114e-4e77-8fe6-f4a487d1396f.pdf"><span class="file-embed-button-text">Download</span></a></div></div><p></p><p><strong>Bibliography</strong><br></p><ol><li><p>Coeckelbergh, Mark. (2020). <em><a href="https://www.amazon.com/Ethics-MIT-Press-Essential-Knowledge/dp/0262538199/">AI Ethics</a></em>. MIT Press.<br></p></li><li><p>Dworkin, Ronald. 
(1986) <em><a href="https://www.amazon.com/Laws-Empire-Ronald-Dworkin-dp-0674518357/dp/0674518357/">Law's Empire</a></em>. Cambridge, MA: Harvard University Press.<br></p></li><li><p>Habermas, J&#252;rgen. (1992) [1996] <em><a href="https://www.amazon.com/Between-Facts-Norms-Contributions-Contemporary/dp/0262581620/">Between Facts and Norms</a></em> (Faktizit&#228;t und Geltung), William Rehg (trans.). Cambridge, MA: MIT Press.<br></p></li><li><p>Heath, Joseph. (2014) <em><a href="https://www.amazon.com/Morality-Competition-Firm-Failures-Approach/dp/0197513948/">Morality, Competition and the Firm: The Market Failures Approach to Business Ethics</a></em><a href="https://www.amazon.com/Morality-Competition-Firm-Failures-Approach/dp/0197513948/">.</a> New York: Oxford University Press.<br></p></li><li><p>Mansbridge, Jane, and Eric Boot. (2022) "Common Good," in <em>The International Encyclopedia of Ethics</em>. Malden, MA: Wiley-Blackwell.<br></p></li><li><p>Ostrom, Elinor. (1990) <em><a href="https://www.amazon.com/Governing-Commons-Evolution-Institutions-Collective/dp/B0DD9C58ZG/">Governing the Commons: The Evolution of Institutions for Collective Action</a></em>. Cambridge: Cambridge University Press.<br></p></li><li><p>Rawls, John. (1971) [1999] <em><a href="https://www.amazon.com/Theory-Justice-John-Rawls/dp/0674000781">A Theory of Justice</a></em>. Cambridge, MA: Harvard University Press.<br></p></li><li><p>Sandel, Michael J. (2020) <em><a href="https://www.amazon.com/Tyranny-Merit-Whats-Become-Common/dp/B084SLR1Q5/r">The Tyranny of Merit: What's Become of the Common Good?</a></em><a href="https://www.amazon.com/Tyranny-Merit-Whats-Become-Common/dp/B084SLR1Q5/r"> </a>London: Allen Lane.<br></p></li><li><p><a href="https://red.pucp.edu.pe/ridei/wp-content/uploads/biblioteca/84.pdf">Sen, Amartya. (1993) [2002] "Positional Objectivity." 
</a><em><a href="https://red.pucp.edu.pe/ridei/wp-content/uploads/biblioteca/84.pdf">Philosophy &amp; Public Affairs</a></em><a href="https://red.pucp.edu.pe/ridei/wp-content/uploads/biblioteca/84.pdf">, 22(2): 126&#8211;145.</a><br></p></li><li><p>Walzer, Michael. (1983) <em><a href="https://www.amazon.com/Spheres-Justice-Defense-Pluralism-Equality-ebook/dp/B00BSEQMFO/">Spheres of Justice</a></em>. New York: Basic Books.</p></li></ol>]]></content:encoded></item><item><title><![CDATA[Introducing MY FRIEND VOLDEMORT]]></title><description><![CDATA[Examining the Ethical and Political Challenges of AI]]></description><link>https://www.hegemonaco.com/p/introducing-hegemonaco</link><guid isPermaLink="false">https://www.hegemonaco.com/p/introducing-hegemonaco</guid><dc:creator><![CDATA[Dennis Stevens, Ed.D.]]></dc:creator><pubDate>Thu, 26 Sep 2024 15:36:23 GMT</pubDate><enclosure url="https://api.substack.com/feed/podcast/149453135/8e98c89238e6f035ce7ddef4e8ce6110.mp3" length="0" type="audio/mpeg"/><content:encoded><![CDATA[<p><strong>Dennis Stevens' Substack, HEGEMONACO</strong></p><p>HEGEMONACO delves into the ethical and political challenges of governing AI and AGI in 
democratic societies. Through essays, Dennis Stevens explores AI's potential to disrupt human agency, the necessity for ethical frameworks, and the implications for societal power dynamics and civil liberties. His work highlights the social construction of reality, emphasizing AI's potential as both a force for progress and oppression.</p><p><strong>My Friend Voldemort Podcast</strong></p><p>Hosted on HEGEMONACO, My Friend Voldemort is a podcast for emotionally mature adults. It aims to repair American public conversation by addressing complex political and ethical conflicts. It emphasizes the transformative power of authentic dialogue, advocating understanding differing perspectives as opportunities for connection rather than conflict.</p><p>Inspired by <a href="https://www.amazon.com/I-Thou-Martin-Buber/dp/1774641658/">Martin Buber&#8217;s &#8220;I and Thou&#8221;</a> philosophy, the podcast encourages internal dialogue and self-reflection, acknowledging our own fears and misunderstandings. It also critiques entertainment-driven media's impact on public discourse, drawing on <a href="https://www.amazon.com/Amusing-Ourselves-to-Death-audiobook/dp/B000MQ54BC/">Neil Postman&#8217;s "Amusing Ourselves to Death."</a></p><p><strong>Focus on AI</strong></p><p>In this context, AI is explored as a tool for understanding our shared humanity, enhancing conversations, and fostering empathy. 
The podcast uncovers the motivations behind political divides, recognizing that ethical disagreements drive conflicts both internally and externally.</p>]]></content:encoded></item><item><title><![CDATA[Democracy, the Ideological Divide and Power Dynamics]]></title><description><![CDATA[How Divergent Moral Foundations and Realpolitik Shape the Ethical Governance of Artificial Intelligence]]></description><link>https://www.hegemonaco.com/p/democracy-the-ideological-divide</link><guid isPermaLink="false">https://www.hegemonaco.com/p/democracy-the-ideological-divide</guid><dc:creator><![CDATA[Dennis Stevens, Ed.D.]]></dc:creator><pubDate>Tue, 20 Aug 2024 15:11:05 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!0jpB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc1c082b-e71a-491f-a5f0-e7c15c16da6a_1024x768.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!0jpB!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc1c082b-e71a-491f-a5f0-e7c15c16da6a_1024x768.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!0jpB!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc1c082b-e71a-491f-a5f0-e7c15c16da6a_1024x768.png 424w, https://substackcdn.com/image/fetch/$s_!0jpB!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc1c082b-e71a-491f-a5f0-e7c15c16da6a_1024x768.png 848w, 
https://substackcdn.com/image/fetch/$s_!0jpB!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc1c082b-e71a-491f-a5f0-e7c15c16da6a_1024x768.png 1272w, https://substackcdn.com/image/fetch/$s_!0jpB!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc1c082b-e71a-491f-a5f0-e7c15c16da6a_1024x768.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!0jpB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc1c082b-e71a-491f-a5f0-e7c15c16da6a_1024x768.png" width="1024" height="768" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/cc1c082b-e71a-491f-a5f0-e7c15c16da6a_1024x768.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:768,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1077393,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!0jpB!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc1c082b-e71a-491f-a5f0-e7c15c16da6a_1024x768.png 424w, https://substackcdn.com/image/fetch/$s_!0jpB!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc1c082b-e71a-491f-a5f0-e7c15c16da6a_1024x768.png 848w, 
https://substackcdn.com/image/fetch/$s_!0jpB!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc1c082b-e71a-491f-a5f0-e7c15c16da6a_1024x768.png 1272w, https://substackcdn.com/image/fetch/$s_!0jpB!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcc1c082b-e71a-491f-a5f0-e7c15c16da6a_1024x768.png 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><div class="native-audio-embed" data-component-name="AudioPlaceholder" 
data-attrs="{&quot;label&quot;:null,&quot;mediaUploadId&quot;:&quot;77463145-e880-4771-ac3d-3752f4ccc09c&quot;,&quot;duration&quot;:534.4392,&quot;downloadable&quot;:false,&quot;isEditorNode&quot;:true}"></div><p>Artificial intelligence will clearly be a transformative force in altering the decision-making processes of democratic societies, and this unsettling fact raises critical questions about political, economic, and technological power and its influence on governance. <br><br>In democracies, where ethical decisions regarding AI must reflect the collective will, we face the challenge of determining how and by whom these decisions should be made. We must acknowledge the divergent moral and ideological foundations of our political landscape. While artificial intelligence is largely designed to be politically neutral, it is ultimately shaped by the values and principles of the humans who establish the rules and laws it must follow.<br><br>Left-leaning and right-leaning ideologies differ fundamentally on issues of government intervention, social progress, economic regulation, and individual versus collective priorities. These differing perspectives will inevitably shape our approach to AI governance and ethics, from views on regulation and economic impact to social equity and cultural preservation considerations. Recognizing and addressing these ideological divides is crucial as we navigate the ethical challenges that AI poses to decision-making in democratic societies.<br><br>As we advocate for an ethical foundation for artificial intelligence, we must confront the stark reality of power dynamics in the broader society. While ethical considerations and a belief in transparent democratic processes may seem vital, the influence of power ultimately shapes crucial decisions. </p><p>This recognition forces us to reconcile our idealistic goals with the pragmatic necessities of realpolitik. 
The discomfort arises from the collision between our diverse moral intuitions, as illustrated by the contrasting ideologies on the political spectrum.</p><p>The left's emphasis on "interfering with society" for social progress and the right's preference to "don't interfere" with social lives reflect fundamentally different approaches to governance and ethical decision-making. 
These divergent views, rooted in contrasting beliefs about society, culture, and the role of government, complicate our efforts to establish a unified ethical framework for AI.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!dD8E!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94f7462c-81ec-44d9-bc9d-43d7e0a0a88d_1276x924.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!dD8E!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94f7462c-81ec-44d9-bc9d-43d7e0a0a88d_1276x924.png 424w, https://substackcdn.com/image/fetch/$s_!dD8E!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94f7462c-81ec-44d9-bc9d-43d7e0a0a88d_1276x924.png 848w, https://substackcdn.com/image/fetch/$s_!dD8E!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94f7462c-81ec-44d9-bc9d-43d7e0a0a88d_1276x924.png 1272w, https://substackcdn.com/image/fetch/$s_!dD8E!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94f7462c-81ec-44d9-bc9d-43d7e0a0a88d_1276x924.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!dD8E!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94f7462c-81ec-44d9-bc9d-43d7e0a0a88d_1276x924.png" width="1276" height="924" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/94f7462c-81ec-44d9-bc9d-43d7e0a0a88d_1276x924.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:924,&quot;width&quot;:1276,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:116841,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!dD8E!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94f7462c-81ec-44d9-bc9d-43d7e0a0a88d_1276x924.png 424w, https://substackcdn.com/image/fetch/$s_!dD8E!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94f7462c-81ec-44d9-bc9d-43d7e0a0a88d_1276x924.png 848w, https://substackcdn.com/image/fetch/$s_!dD8E!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94f7462c-81ec-44d9-bc9d-43d7e0a0a88d_1276x924.png 1272w, https://substackcdn.com/image/fetch/$s_!dD8E!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F94f7462c-81ec-44d9-bc9d-43d7e0a0a88d_1276x924.png 1456w" sizes="100vw"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Jonathan Haidt's work in <em><strong><a href="https://www.amazon.com/Righteous-Mind-Divided-Politics-Religion/dp/0307455777">The Righteous Mind</a></strong></em><a href="https://www.amazon.com/Righteous-Mind-Divided-Politics-Religion/dp/0307455777"> </a>further illuminates this challenge. He reveals how our moral judgments often arise from intuitive responses rather than rational deliberation. This insight helps explain the deep-seated nature of the ideological divide shown in the diagram, from differing views on equality and freedom to contrasting ideas about social progress and preservation.</p><p>Recognizing these inherent differences in our moral foundations - whether we prioritize "fairness" and "helping those who cannot help themselves," as shown on the left or "upholding order" and "championing opportunity," as depicted on the right - is crucial. 
It helps us understand why achieving consensus on AI ethics is challenging and why power often prevails over purely ethical considerations in decision-making processes.</p><p>The ethical governance of artificial intelligence is deeply intertwined with what Joshua Greene describes as "the tragedy of commonsense morality" in his book <em><strong><a href="https://www.amazon.com/Moral-Tribes-Emotion-Reason-Between/dp/0143126059/">Moral Tribes</a></strong></em>. This concept reflects the challenges posed by the stark left-right divide, where fundamentally different approaches to society, culture, and governance clash.</p><p>Greene coined "the tragedy of commonsense morality" to explain the conflicts that arise when different groups, or "moral tribes," have incompatible visions of what a moral society should be. These tribes operate with distinct versions of moral common sense, shaped by automatic settings that cause them to view the world through different moral lenses, making cooperation difficult.</p><p>He illustrates this dilemma with a parable of four tribes of herders living around a forest, each following different moral rules. For example, one tribe might assign each family an equal number of sheep to graze on common land, while another allocates each family its own plot of land. These differing conceptions of morality reflect the broader issue of incompatible moral frameworks.</p><p>Greene argues that while our moral instincts serve us well within our own cultural groups, they often fail when addressing conflicts between groups with differing ethical perspectives. This challenge is particularly evident in global AI governance, where stakeholders from diverse social, cultural, and political backgrounds bring varied moral intuitions and priorities to the table.</p><p>On the left, there is an emphasis on inclusive, multicultural, and evolving societies, focusing on equality and fairness. 
In contrast, the right prioritizes exclusive, established, and nationalistic values, emphasizing freedom and order. These divergent worldviews significantly shape how different groups approach AI governance.</p><p>This situation mirrors a global policy debate where proponents of societal regulation and social progress must find common ground with advocates of deregulation and minimal interference. The deep-seated beliefs on each side make achieving consensus difficult, but navigating these moral landscapes is crucial for ethical AI governance.</p><p>The diagram illustrates this complexity: the left's preference for diplomacy and pacifism contrasts sharply with the right's emphasis on strong leadership and opportunity. Achieving a shared ethical framework for AI is challenging but essential, requiring us to bridge fundamentally different views on society, progress, and human nature.</p><p>Consensus demands compromise, and ethical purity is often unattainable in the real world. However, in practice, the power to define what is considered "ethical" in AI development and implementation is concentrated in the hands of those with the most political, economic, and technological influence&#8212;even in a democracy. 
This raises critical questions about who will steer the ethical compass of artificial intelligence, and with it the future of democratic governance.</p><div class="file-embed-wrapper" data-component-name="FileToDOM"><div class="file-embed-container-reader"><div class="file-embed-container-top"><image class="file-embed-thumbnail" src="https://substackcdn.com/image/fetch/w_400,h_600,c_fill,f_auto,q_auto:best,fl_progressive:steep,g_auto/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F310b7e82-28f7-457d-9d2c-ba3fd04694cf_348x348.png"></image><div class="file-embed-details"><div class="file-embed-details-h1">Briefing: Navigating The Ethical Minefield Of AI</div><div class="file-embed-details-h2">91.6KB &#8729; PDF file</div></div><a class="file-embed-button wide" href="https://www.hegemonaco.com/api/v1/file/9d8f54ec-68a9-4a63-b771-601ace974060.pdf"><span class="file-embed-button-text">Download</span></a></div><div class="file-embed-description">This study guide delves into key themes from the article "Democracy, the Ideological Divide, and Power Dynamics" by Dennis Stevens, exploring the challenges of ethical AI governance in a politically polarized world.</div><a class="file-embed-button narrow" href="https://www.hegemonaco.com/api/v1/file/9d8f54ec-68a9-4a63-b771-601ace974060.pdf"><span class="file-embed-button-text">Download</span></a></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[The Specter of Technopoly]]></title><description><![CDATA[Artificial Intelligence and the Future of Human Agency]]></description><link>https://www.hegemonaco.com/p/the-specter-of-technopoly</link><guid isPermaLink="false">https://www.hegemonaco.com/p/the-specter-of-technopoly</guid><dc:creator><![CDATA[Dennis Stevens, Ed.D.]]></dc:creator><pubDate>Thu, 04 Jul 2024 14:36:57 GMT</pubDate><enclosure url="https://substackcdn.com/image/youtube/w_728,c_limit/yEvA_87krBA" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="native-audio-embed" data-component-name="AudioPlaceholder" data-attrs="{&quot;label&quot;:null,&quot;mediaUploadId&quot;:&quot;ad41038c-07f0-49f2-8792-2b6034cab298&quot;,&quot;duration&quot;:385.43674,&quot;downloadable&quot;:false,&quot;isEditorNode&quot;:true}"></div><p>In Terry Gilliam's simultaneously beloved and hated 2013 film &#8220;<a href="https://www.imdb.com/title/tt2333804/">The Zero Theorem</a>,&#8221; we are presented with a dystopian vision of a world where technology has eroded individuality and blurred the lines between work and leisure. The protagonist, Qohen Leth, grapples with a seemingly meaningless task in a virtual landscape, symbolic of the loss of purpose in a hyper-technologized society. 
This cinematic portrayal serves as a poignant starting point for examining the potential impact of artificial intelligence (AI) on our future, particularly when viewed through the lens of Neil Postman's visionary work, "<a href="https://www.amazon.com/Technopoly-Surrender-Technology-Neil-Postman/dp/0679745408">Technopoly: The Surrender of Culture to Technology</a>" (1992).<br><br>Postman defines a <a href="https://en.wikipedia.org/wiki/Technopoly">Technopoly</a> as a society where technology is deified, traditional cultural narratives lose significance, and human agency is increasingly surrendered to technological systems. He argues that in such a society, "the culture seeks its authorization in technology, finds its satisfactions in technology, and takes its orders from technology" (Postman, 1992, p. 71). The world of "The Zero Theorem" seems to embody this concept, with its characters navigating a reality where corporate interests and technological imperatives dominate every aspect of life.</p><div id="youtube2-yEvA_87krBA" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;yEvA_87krBA&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/yEvA_87krBA?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p>As we stand on the cusp of an AI revolution, the warnings embedded in both Gilliam's film and Postman's book take on renewed urgency. The increasing sophistication of AI systems raises questions about the future of human decision-making and individuality. 
In his book "<a href="https://www.amazon.com/Human-Compatible-Artificial-Intelligence-Problem/dp/0525558616">Human Compatible: Artificial Intelligence and the Problem of Control</a>" (2019), Stuart Russell warns that as AI systems become more capable, there's a risk of humans becoming increasingly reliant on them, potentially leading to a loss of agency and skills.</p><p>The collapsing boundary between work and leisure, a theme in "<a href="https://www.imdb.com/title/tt2333804/">The Zero Theorem</a>" is already becoming apparent in our AI-augmented world. The ubiquity of smartphones and AI assistants means that work can intrude into every moment of our lives. This echoes Postman's concern about technology redefining social institutions and norms. As he puts it, "Technopoly eliminates alternatives to itself in precisely the way Aldous Huxley outlined in <a href="https://www.amazon.com/Brave-New-World-Aldous-Huxley/dp/0060850523/">Brave New World</a>" (Postman, 1992, p. 48).</p><p>However, it's crucial to note that neither "The Zero Theorem" nor "Technopoly" presents a deterministic view of technology's impact. In Gilliam's film, human connection counters technological alienation. Similarly, Postman, <a href="https://www.newyorker.com/tech/annals-of-technology/its-time-to-dismantle-the-technopoly">while critical of unchecked technological progress, believed in the power of human culture and education to resist the totalizing tendencies of Technopoly.</a></p><p>Recent works on AI ethics echo this resistance through human connection and critical thinking. In "<a href="https://www.amazon.com/Ethics-MIT-Press-Essential-Knowledge-ebook/dp/B08BT37HDM/">AI Ethics</a>" (2020), Mark Coeckelbergh argues for the importance of maintaining human values and ethical considerations in developing and deploying AI systems. 
He suggests that by fostering critical discourse and maintaining human oversight, we can harness the benefits of AI without surrendering our agency and cultural values.</p><p>The challenge is to navigate the path between technological progress and human flourishing. As Sherry Turkle argues in "<a href="https://www.amazon.com/Alone-Together-Expect-Technology-Other-ebook/dp/B01N9ZL5BH/">Alone Together: Why We Expect More from Technology and Less from Each Other</a>" (2011), we must be mindful of how technology shapes our relationships and sense of self. She writes, "We expect more from technology and less from each other" (Turkle, 2011, p. 295), a sentiment that resonates with the themes of isolation and lost humanity in &#8220;The Zero Theorem.&#8221;</p><p>In conclusion, as we move into the age of AI, the warnings embedded in works like "The Zero Theorem" and "Technopoly" serve as crucial reminders of what's at stake. The erosion of individuality, the blurring of work and leisure, and the potential loss of meaning in a hyper-technologized world are not inevitable consequences of technological progress. Instead, they are challenges we must actively address.</p><p>By maintaining a critical perspective on technology, fostering human connections, and ensuring that our AI systems are designed to augment rather than replace human decision-making, we can work towards a future where technology serves human values rather than supplanting them. In doing so, we may avoid the dystopian fate of Qohen Leth and instead shape a future where technology and humanity coexist in a more balanced and mutually beneficial relationship.</p><p><strong>References:</strong></p><p>Coeckelbergh, M. (2020). <a href="https://www.amazon.com/Ethics-MIT-Press-Essential-Knowledge/dp/0262538199/">AI Ethics</a>. MIT Press.</p><p>Gilliam, T. (Director). (2013). <a href="https://www.imdb.com/title/tt2333804/">The Zero Theorem</a> [Film]. Stage 6 Films.</p><p>Postman, N. (1992). 
<a href="https://www.amazon.com/Technopoly-Surrender-Technology-Neil-Postman/dp/0679745408">Technopoly: The Surrender of Culture to Technology</a>. Vintage Books.</p><p>Russell, S. (2019). <a href="https://www.amazon.com/Human-Compatible-Artificial-Intelligence-Problem/dp/0525558632/r">Human Compatible: Artificial Intelligence and the Problem of Control</a>. Viking.</p><p>Turkle, S. (2011). <a href="https://www.amazon.com/Alone-Together-Sherry-Turkle-audiobook/dp/B00502PFUS/">Alone Together: Why We Expect More from Technology and Less from Each Other</a>. Basic Books.</p><div class="file-embed-wrapper" data-component-name="FileToDOM"><div class="file-embed-container-reader"><div class="file-embed-container-top"><image class="file-embed-thumbnail" src="https://substackcdn.com/image/fetch/w_400,h_600,c_fill,f_auto,q_auto:best,fl_progressive:steep,g_auto/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd1bde1f0-f7dc-44b5-ab61-ff2cfdbd85fa_348x348.png"></image><div class="file-embed-details"><div class="file-embed-details-h1">Navigating The AI Revolution: Study Guide</div><div class="file-embed-details-h2">101KB &#8729; PDF file</div></div><a class="file-embed-button wide" href="https://www.hegemonaco.com/api/v1/file/98000980-11f6-4962-9238-86942252b6e2.pdf"><span class="file-embed-button-text">Download</span></a></div><div class="file-embed-description">The Specter of Technopoly Study Guide. This article examines the potential impact of artificial intelligence (AI) on human agency and culture, drawing parallels between the dystopian film "The Zero Theorem" and Neil Postman's book "Technopoly." The author argues that unchecked technological advancements, particularly AI, could lead to a society where technology dominates human lives, eroding individuality and blurring the lines between work and leisure. 
However, the author also acknowledges that technology can be a force for good and that a balance between technological progress and human flourishing is essential.</div><a class="file-embed-button narrow" href="https://www.hegemonaco.com/api/v1/file/98000980-11f6-4962-9238-86942252b6e2.pdf"><span class="file-embed-button-text">Download</span></a></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[AI Myopia]]></title><description><![CDATA[When the "Experts" Underestimate the Disruptive Path of AGI]]></description><link>https://www.hegemonaco.com/p/ai-myopia</link><guid isPermaLink="false">https://www.hegemonaco.com/p/ai-myopia</guid><dc:creator><![CDATA[Dennis Stevens, Ed.D.]]></dc:creator><pubDate>Wed, 19 Jun 2024 16:23:06 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/0d79d44d-5434-4fdc-aa1e-af903f33dcd7_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="native-audio-embed" data-component-name="AudioPlaceholder" data-attrs="{&quot;label&quot;:null,&quot;mediaUploadId&quot;:&quot;303bb18f-9b8b-4294-a839-948d3b041543&quot;,&quot;duration&quot;:466.59918,&quot;downloadable&quot;:false,&quot;isEditorNode&quot;:true}"></div><p>Artificial General Intelligence (AGI) refers to the development of AI systems with human-level intelligence across multiple domains. While AGI holds immense potential, Americans seem largely unprepared for its full implications. Many American academics commenting in the media fully underestimate how quickly AI will surpass human intelligence.</p><p><strong>We must approach the future of AI with humility rather than hubris.</strong></p><p><a href="https://www.cnn.com/2024/06/19/tech/openai-shuts-down-ai-mayor/index.html">The recent case of Victor Miller's AI mayoral candidacy in Cheyenne, Wyoming, highlights the intersection of AI and politics. It draws out naive commentary from informed academics who do not understand the broader contextual horizon. 
As the CNN article articulates, Miller filed to run for mayor with a customized AI chatbot, VIC, powered by OpenAI's ChatGPT, to make political decisions.</a> OpenAI intervened and shut down Miller's access, citing policy violations against using AI for political campaigning. This case exemplifies the ethical concerns and the mounting pressure to leverage AI in politics as its capabilities advance.</p><p>The broader social problem is not where we are today but rather where we are quickly headed&#8212;the rapid advancement of AI has outpaced initial expectations. Large language models like ChatGPT, once considered science fiction, are now a reality. These technologies are progressing faster than social, legal, and regulatory frameworks can adapt. While many academics acknowledge the gaps in understanding and predicting AI growth, they often underestimate its future potential, relying on outdated assumptions formed from casual observation.</p><p>Human arrogance in AI development can lead to a scenario wherein we overlook the risks and unintended consequences, such as casually assuming that AI will remain safely constrained or that human oversight will always mitigate harm. 
These laissez-faire views raise significant ethical concerns about how AI will be used to make political decisions in the future&#8212;this is not about what OpenAI is today but what this technology is making possible in the near future.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!aH1M!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0de1980-cfbe-4532-9e52-de321cbee2ec_1024x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!aH1M!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0de1980-cfbe-4532-9e52-de321cbee2ec_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!aH1M!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0de1980-cfbe-4532-9e52-de321cbee2ec_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!aH1M!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0de1980-cfbe-4532-9e52-de321cbee2ec_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!aH1M!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0de1980-cfbe-4532-9e52-de321cbee2ec_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!aH1M!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0de1980-cfbe-4532-9e52-de321cbee2ec_1024x1024.png" width="1024" height="1024" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e0de1980-cfbe-4532-9e52-de321cbee2ec_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2010267,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!aH1M!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0de1980-cfbe-4532-9e52-de321cbee2ec_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!aH1M!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0de1980-cfbe-4532-9e52-de321cbee2ec_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!aH1M!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0de1980-cfbe-4532-9e52-de321cbee2ec_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!aH1M!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe0de1980-cfbe-4532-9e52-de321cbee2ec_1024x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Generative AI Image by Dennis Stevens, Ed.D. All Rights Reserved.</figcaption></figure></div><p>Experts warn that AI technology should never be used to make automated decisions when running any part of the government. However, where is the regulation? Who is ensuring that this doesn't happen? While AI can support decision-making, experts caution against delegating too much authority to AI systems in high-stakes domains.</p><p>Some experts argue that AI is designed for decision support and to provide data to help humans make decisions, but not to make decisions independently. 
<a href="https://www.cnn.com/2024/06/19/tech/openai-shuts-down-ai-mayor/index.html">For instance, Jen Golbeck, a professor at the University of Maryland, states, "AI has always been designed for decision support &#8211; it gives some data to help a human make decisions but is not set up to make decisions by itself."</a> However, one can't help but wonder, who decides this?</p><p>While AI chatbots may have a place in assisting with tasks like answering constituent inquiries or directing problem-solving, <a href="https://www.cnn.com/2024/06/19/tech/openai-shuts-down-ai-mayor/index.html">according to Golbeck, decision-making should always be left to humans</a>. However, this perspective overlooks the evolving horizon of AI capabilities and applications, especially as we approach AGI. It underestimates how profoundly AI will change governance and decision-making once this technology can parse knowledge in ways that far exceed human capacities; ignoring this reality is human hubris.</p><p><a href="https://www.cnn.com/2024/06/19/tech/openai-shuts-down-ai-mayor/index.html">David Karpf, an associate professor at George Washington University, dismisses AI chatbots running for office as a "gimmick" and believes "no one is going to vote for an AI chatbot to run a city."</a> This view reflects a static perception of public trust in AI. While current AI models like ChatGPT are not qualified to run governments, it dismisses the broader potential of advanced AGI. Historical examples show that once-implausible technologies can become widely accepted, suggesting future shifts in AI capabilities and societal attitudes. 
An informed and humble perspective acknowledges the current limitations while understanding that AGI has transformative potential and thereby requires robust ethical frameworks to be built.</p><p>The experts' short-term thinking and dismissive attitudes towards AI autonomy point to a broader and potentially dangerous implication&#8212;a lack of urgency in developing robust governance frameworks to ensure AI systems remain aligned with human values as they become more powerful.</p><p>While the experts correctly caution against delegating too much authority to current AI in high-stakes domains like government, their statements betray a static mindset that fails to anticipate the rapidly evolving AI capability landscape. Yes, today's AI may be designed primarily for decision support. But as systems become more general and autonomous, who is to say they won't ultimately supersede human decision-making in many arenas? To posit that AI should "never" make automated decisions is extremely short-sighted.</p><p>The broader implication is that without proactive governance starting now, we risk facing a future of incredibly capable but unaligned AI systems making core decisions that run counter to human ethics and societal interests. The window is closing on our ability to maintain control of an intelligence explosion.</p><p>The experts' statements highlight a lack of multi-stakeholder efforts to address this issue through regulation, testing, and adaptive governance frameworks. Simply stating, "AI should never do X" is insufficient&#8212;concrete steps must be taken to translate that into enforceable reality as this technology quickly evolves.</p><p>Who ensures current AI remains restricted to decision-support roles? Who maps out the ethical boundaries and the fail-safes as we approach AGI? 
Dismissing early instantiations as gimmicks neglects the need to proactively govern the entire AI trajectory, not just current narrow use cases.</p><p>The broader implication is that we will continue to kick the governance can down the road through complacent, human-centric thinking until we've ceded too much ground to potentially unaligned AGI. We must shed the hubris and take urgent action to shape the AI technological horizon in a direction that benefits humanity, lest we surrender control to the pervasive dominance of technopoly.</p><p>As a cautionary tale, <a href="https://www.amazon.com/Technopoly-Neil-Postman-audiobook/dp/B009991K6Q/">Neil Postman's book "Technopoly: The Surrender of Culture to Technology"</a> critiques the unchecked expansion of technology in society, warning against the potential loss of human values and autonomy. Postman argues that technopoly, a state where technology dictates societal norms and values, can erode human agency and ethical considerations if left unchecked. Invoking Postman's thesis underscores the importance of steering AI development toward ethical and human-centric goals to prevent the unintended consequences of technological domination.</p><p>To navigate the dawning age of AGI responsibly, we must prioritize a human-centric approach to AI development, establishing ethical guidelines to ensure AI serves humanity's greater good. This requires interdisciplinary collaboration among academics, policymakers, technologists, and diverse stakeholders to address AI's societal impacts and develop governance frameworks that promote beneficial AI while mitigating risks. It also requires intellectual humility about the limitations of human capacity in relation to a technology that will soon be able to do things with knowledge and information we can't&#8212;we need to respect that raw fact.</p><p>As we approach an era where AI will surpass human capabilities, we must confront our unpreparedness with humility. 
Rather than hubris, we need a sober, responsible approach that recognizes AGI's immense power while ensuring these technological advancements align with human-centric values and ethics.</p><p>Only through collective effort and ethical commitment can we navigate the age of AGI with wisdom and foresight&#8212;we will quickly have to grapple with the limitations of what our brains can do, and this technology will show us what it can do more efficiently.</p><p><em>We must adapt our technological systems to this new frontier with humility, not hubris.</em></p><p></p><div class="file-embed-wrapper" data-component-name="FileToDOM"><div class="file-embed-container-reader"><div class="file-embed-container-top"><image class="file-embed-thumbnail" src="https://substackcdn.com/image/fetch/w_400,h_600,c_fill,f_auto,q_auto:best,fl_progressive:steep,g_auto/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4f4b8225-72f2-4126-890b-fde4fdc902fa_348x348.png"></image><div class="file-embed-details"><div class="file-embed-details-h1">Study Guide for AI Myopia Article, by Dennis Stevens, Ed.D.</div><div class="file-embed-details-h2">77.3KB &#8729; PDF file</div></div><a class="file-embed-button wide" href="https://www.hegemonaco.com/api/v1/file/42f51b23-77e6-4310-9e33-15761dd3f41a.pdf"><span class="file-embed-button-text">Download</span></a></div><div class="file-embed-description">This article argues that rapidly advancing Artificial General Intelligence (AGI) will surpass human capabilities, and that experts are underestimating its potential impact. The author warns that a failure to establish ethical frameworks and regulations could lead to AI making decisions that are contrary to human values. The article highlights the need for a proactive approach to AI governance, emphasizing the importance of interdisciplinary collaboration and intellectual humility. 
The author concludes by emphasizing that navigating the age of AGI requires a human-centric approach, ensuring that AI advances serve humanity's greater good.</div><a class="file-embed-button narrow" href="https://www.hegemonaco.com/api/v1/file/42f51b23-77e6-4310-9e33-15761dd3f41a.pdf"><span class="file-embed-button-text">Download</span></a></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[Safeguarding Civil Liberties in the Age of AGI and Big Tech]]></title><description><![CDATA[Preventing the Authoritarian Misuse of Artificial General Intelligence]]></description><link>https://www.hegemonaco.com/p/safeguarding-civil-liberties</link><guid isPermaLink="false">https://www.hegemonaco.com/p/safeguarding-civil-liberties</guid><dc:creator><![CDATA[Dennis Stevens, Ed.D.]]></dc:creator><pubDate>Thu, 13 Jun 2024 14:55:14 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/d5191602-dcd0-4a8f-89e4-5c5c7b2550c1_1536x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="native-audio-embed" data-component-name="AudioPlaceholder" data-attrs="{&quot;label&quot;:null,&quot;mediaUploadId&quot;:&quot;69f6d509-bfe5-4581-83e4-4eb784161b7b&quot;,&quot;duration&quot;:497.37143,&quot;downloadable&quot;:false,&quot;isEditorNode&quot;:true}"></div><p><strong>The AI Now Institute produces diagnosis and policy research on artificial intelligence; here&#8217;s their <a href="https://ainowinstitute.org/2023-landscape#landscape-overviewhttps://ainowinstitute.org/wp-content/uploads/2023/04/AI-Now-2023-Landscape-Report-FINAL.pdf">2023 Landscape, Confronting Tech Power Report</a> and their <a href="https://ainowinstitute.org/publication/zero-trust-ai-governance">2023 Zero Trust AI Governance Report</a>.</strong><br><br>The advent of Artificial General Intelligence (AGI) holds transformative potential for society. 
Still, it also presents significant risks, particularly when combined with the growing concentration of power in Big Tech companies. AGI's automated intellectual prowess could be misused to advance authoritarian agendas and erode civil liberties. Examples of such misuse include mass surveillance, facial recognition, data mining, automated censorship, propaganda generation, predictive policing, social credit systems, biased legal decisions, automated sentencing, workforce monitoring, financial surveillance, autonomous weapons, and enhanced policing.</p><p>The challenge is further compounded by Big Tech's dominance in AI development and vast data reserves, which allow these companies to shape people's lives by influencing decisions related to employment, healthcare, education, and even mundane aspects like grocery prices and traffic routes. This concentration of power and the lack of transparency in AI development and deployment pose a considerable threat to individual freedoms and democratic values.</p><p>To address these complex challenges, a multifaceted approach is required. This approach must include robust legal frameworks at both international and national levels, stringent technical safeguards, comprehensive ethical guidelines, and structural reforms to prevent the concentration of power in the tech industry. Increased public engagement, education, and awareness are crucial to fostering a democratic approach to AGI governance.</p><p>Implementing these measures can help ensure that AGI development and deployment uphold democratic values, protect human rights, and prevent the concentration of power that threatens individual freedoms. 
At this critical juncture in technological advancement, it is imperative that we take proactive steps to safeguard civil liberties and harness the benefits of AGI while minimizing its potential for misuse and the erosion of democratic principles.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!seK9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd322adbc-b5e5-4255-9064-5181420c8e97_1536x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!seK9!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd322adbc-b5e5-4255-9064-5181420c8e97_1536x1536.png 424w, https://substackcdn.com/image/fetch/$s_!seK9!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd322adbc-b5e5-4255-9064-5181420c8e97_1536x1536.png 848w, https://substackcdn.com/image/fetch/$s_!seK9!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd322adbc-b5e5-4255-9064-5181420c8e97_1536x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!seK9!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd322adbc-b5e5-4255-9064-5181420c8e97_1536x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!seK9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd322adbc-b5e5-4255-9064-5181420c8e97_1536x1536.png" width="1456" height="1456" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d322adbc-b5e5-4255-9064-5181420c8e97_1536x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1456,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:3661545,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!seK9!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd322adbc-b5e5-4255-9064-5181420c8e97_1536x1536.png 424w, https://substackcdn.com/image/fetch/$s_!seK9!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd322adbc-b5e5-4255-9064-5181420c8e97_1536x1536.png 848w, https://substackcdn.com/image/fetch/$s_!seK9!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd322adbc-b5e5-4255-9064-5181420c8e97_1536x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!seK9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd322adbc-b5e5-4255-9064-5181420c8e97_1536x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Generative AI Illustration by Dennis Stevens, Ed.D. All Rights Reserved.</figcaption></figure></div><p>As we stand on the brink of an era dominated by Artificial General Intelligence (AGI) and Big Tech, it is crucial to address the potential impacts on civil liberties and democracy. This essay explores the challenges posed by the concentration of power in Big Tech companies. It proposes fundamental concepts to safeguard civil liberties and prevent the rise of authoritarianism in the age of AGI.</p><h2>The Big Tech Challenge</h2><p>The concentration of power in Big Tech companies has significant implications for civil liberties, primarily due to their substantial data advantage and control over essential digital infrastructure. These companies' dominance in AI allows them to shape the trajectory of people's lives, influencing decisions related to employment, healthcare, education, and even mundane aspects like grocery prices and traffic routes. 
This underscores the potential for these companies to impact civil liberties if AI systems are used in ways that disproportionately harm certain groups or limit individual freedoms.</p><p>One primary concern is the potential for AI to erode privacy and enable surveillance. The vast amounts of data accumulated by Big Tech, coupled with their advanced AI capabilities, can facilitate mass surveillance, facial recognition systems, data mining, and predictive policing, all of which pose serious threats to civil liberties. For instance, Google's acquisition of DoubleClick eventually allowed the company to merge user data from various sources, creating uniquely detailed profiles of individuals and raising concerns about anti-competitive practices.</p><p>The lack of transparency in how these technologies are developed and deployed further exacerbates these concerns. Existing legal frameworks and regulations are insufficient to address these challenges, calling for more robust measures to ensure that AI development and deployment prioritize democratic values and human rights.</p><h2>Safeguarding Civil Liberties: A Multifaceted Approach</h2><p>To address these challenges and safeguard civil liberties in the age of AGI, we propose the following fundamental concepts and measures:</p><ol><li><p><strong>Robust Legal Frameworks</strong></p><ul><li><p>Establish international treaties and agreements that set clear boundaries for the ethical use of AGI and prohibit its use for surveillance, censorship, or political repression.</p></li><li><p>Enact and enforce national laws that protect civil liberties and human rights, ensuring that AGI applications comply with these standards.</p></li><li><p>Create independent oversight bodies to monitor and regulate AGI development and deployment, ensuring transparency and accountability.</p></li></ul></li><li><p><strong>Ethical Guidelines and Standards</strong></p><ul><li><p>Implement ethical guidelines for AGI developers, encouraging 
them to incorporate privacy, fairness, and transparency into the design and operation of AGI systems.</p></li><li><p>Establish ethical review boards to assess the potential societal impact of AGI projects before approval and deployment.</p></li></ul></li><li><p><strong>Technical Safeguards</strong></p><ul><li><p>Design AGI systems with built-in privacy protections, such as data anonymization and encryption, to prevent misuse of personal data.</p></li><li><p>Ensure that AGI systems are transparent and their decision-making processes are explainable, allowing for scrutiny and accountability.</p></li><li><p>Develop methods to detect and mitigate biases in AGI algorithms that could lead to discriminatory or unjust outcomes.</p></li></ul></li><li><p><strong>Oversight and Public Engagement</strong></p><ul><li><p>Engage the public in discussions about AGI development and its implications, fostering a democratic approach to decision-making.</p></li><li><p>Implement mechanisms for citizens to hold governments and corporations accountable for the misuse of AGI, such as whistleblower protections and avenues for legal recourse.</p></li></ul></li><li><p><strong>International Cooperation</strong></p><ul><li><p>Promote international cooperation to create a global governance framework for AGI, ensuring its use aligns with universal human rights principles.</p></li><li><p>Encourage countries to share best practices and technological solutions to foster a collaborative approach to AGI governance.</p></li></ul></li><li><p><strong>Education and Awareness</strong></p><ul><li><p>Educate the public about AGI's potential risks and benefits, empowering citizens to advocate for responsible use and oversight.</p></li><li><p>Train developers, policymakers, and regulators on the ethical implications of AGI and the importance of safeguarding civil liberties.</p></li></ul></li><li><p><strong>Whistleblower Protections</strong></p><ul><li><p>Create secure and anonymous channels for reporting AGI abuses 
or unethical use, protecting whistleblowers from retaliation.</p></li><li><p>Strengthen legal protections for whistleblowers who expose the misuse of AGI, safeguarding them from legal and professional repercussions.</p></li></ul></li><li><p><strong>Structural Reforms and Competition</strong></p><ul><li><p>Implement stricter enforcement of competition laws to prevent the concentration of power in the tech industry.</p></li><li><p>Consider bright-line rules restricting first-party data collection for advertising purposes to curb toxic competition and protect user privacy.</p></li><li><p>Break down policy silos and recognize the interconnectedness of issues like privacy, competition, and AI.</p></li></ul></li><li><p><strong>Data Minimization and AI Accountability</strong></p><ul><li><p>Implement data minimization practices not only as a privacy protection measure but also as a crucial tool for AI accountability by limiting tech companies' data advantages.</p></li><li><p>Move beyond overreliance on audits as a primary mechanism for AI accountability, as they fail to address power imbalances and potentially entrench Big Tech's dominance.</p></li></ul></li></ol><h2>Conclusion</h2><p>Implementing these safeguards will require a coordinated effort among governments, international organizations, the private sector, and civil society. We are far from rallying as a global community to prepare for the risks that lie ahead. However, by combining legal, ethical, technical, and democratic measures, it is possible to harness the benefits of AGI while minimizing the risks of authoritarian misuse and the erosion of civil liberties.</p><p>A balanced approach ensures that AGI serves the public good and upholds the fundamental values of democracy and human rights. 
As we navigate this critical juncture in technological advancement, we must remain vigilant in protecting civil liberties and fostering a future where AI enhances, rather than undermines, our democratic values and individual freedoms.</p>]]></content:encoded></item><item><title><![CDATA[The Horizon of AI and AGI]]></title><description><![CDATA[A Profile of Leopold Aschenbrenner]]></description><link>https://www.hegemonaco.com/p/the-future-of-ai-and-agi</link><guid isPermaLink="false">https://www.hegemonaco.com/p/the-future-of-ai-and-agi</guid><dc:creator><![CDATA[Dennis Stevens, Ed.D.]]></dc:creator><pubDate>Wed, 05 Jun 2024 15:28:07 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/c9249713-4cec-4fb6-8a68-f193ec79011e_1272x1272.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="native-audio-embed" data-component-name="AudioPlaceholder" data-attrs="{&quot;label&quot;:null,&quot;mediaUploadId&quot;:&quot;80db7fb5-7c23-457a-b616-2da6ee311823&quot;,&quot;duration&quot;:600.32,&quot;downloadable&quot;:false,&quot;isEditorNode&quot;:true}"></div><p></p><p>Leopold Aschenbrenner is a prominent thinker in the field of artificial intelligence and its societal impacts. His writing at <a href="https://situational-awareness.ai/">SITUATIONAL AWARENESS</a> offers a compelling analysis of the future of AI and Artificial General Intelligence (AGI). Born in Germany and educated at top institutions in Europe and the United States, Aschenbrenner has emerged as a leading voice on the implications of advanced technologies for global politics and security. His work often focuses on the intersection of technology, governance, and ethics, making him a key figure in contemporary debates on AI.</p><p>Aschenbrenner posits that the most significant danger of another country developing superintelligence first is the potential erosion of U.S. 
influence in guiding the utilization of this technology in ways that align with the values of free and democratic societies. He expresses particular concern about China achieving this breakthrough before the U.S., warning that it could result in the global imposition of authoritarianism.</p><h3>Consolidation of Authoritarian Power</h3><p>One of Aschenbrenner&#8217;s primary concerns is the consolidation of authoritarian power should China gain control of superintelligence. He argues that the Chinese Communist Party (CCP) could leverage superintelligence to tighten its domestic and international grip on power. This scenario could lead to a world where dissent is systematically stifled and authoritarian values are imposed on a global scale. The sophisticated surveillance and control mechanisms enabled by superintelligence could enhance the CCP's ability to monitor and suppress opposition, thus entrenching its authoritarian regime.</p><h3>Erosion of Freedom and Democracy</h3><p>Aschenbrenner underscores the importance of American economic and military dominance in maintaining global peace and democratic values. He suggests that China&#8217;s control over superintelligence could disrupt this balance, potentially undermining freedom and democratic principles worldwide. The author believes that the U.S. has historically played a pivotal role in promoting democratic values, and losing this influence could lead to a significant decline in global democratic norms. The deployment of superintelligence by an authoritarian regime could also result in the creation of AI systems that prioritize state control and censorship over individual freedoms and rights.</p><h3>Increased Risk of Existential Threats</h3><p>Another major concern Aschenbrenner highlights is the increased risk of existential threats in a multipolar world where multiple nations or entities possess superintelligence. 
He argues that such a scenario would heighten the risk of an AI arms race, leading to the potential weaponization of AI technologies. This could increase the likelihood of global conflict and even human extinction. Aschenbrenner draws a parallel to the nuclear arms race, suggesting that the stakes in the race for superintelligence are similarly high. The uncontrolled competition for AI dominance could result in catastrophic outcomes, with superintelligent systems potentially being deployed in ways that are harmful to humanity.</p><h3>The Horizon Ahead</h3><p>Leopold Aschenbrenner&#8217;s analysis presents a stark warning about the future of AI and AGI, particularly in the context of global power dynamics. He emphasizes the need for the U.S. to remain at the forefront of AI development to ensure that the deployment of this technology aligns with democratic values and global security. Aschenbrenner&#8217;s insights highlight the critical importance of international cooperation and governance in managing the risks associated with superintelligence, aiming to prevent the emergence of a dystopian future dominated by authoritarian superintelligent entities.</p><div id="youtube2-zdbVtZIn9IM" class="youtube-wrap" data-attrs="{&quot;videoId&quot;:&quot;zdbVtZIn9IM&quot;,&quot;startTime&quot;:null,&quot;endTime&quot;:null}" data-component-name="Youtube2ToDOM"><div class="youtube-inner"><iframe src="https://www.youtube-nocookie.com/embed/zdbVtZIn9IM?rel=0&amp;autoplay=0&amp;showinfo=0&amp;enablejsapi=0" frameborder="0" loading="lazy" gesture="media" allow="autoplay; fullscreen" allowautoplay="true" allowfullscreen="true" width="728" height="409"></iframe></div></div><p><strong>Five Pressing Concerns for Democracies and International Affairs, According to Leopold Aschenbrenner</strong></p><ul><li><p><strong>The Looming Superintelligence Race:</strong> Aschenbrenner highlights an "AGI race" with extremely high stakes, predicting the development of artificial general 
intelligence (AGI) by 2027. This race, primarily perceived as a competition between the United States and China, centers on achieving AGI and, subsequently, superintelligence, yielding decisive economic and military advantages.</p></li><li><p><strong>Securing Algorithmic Secrets:</strong> Aschenbrenner's critical concern is safeguarding "algorithmic secrets"&#8212;the groundbreaking technical discoveries crucial for developing AGI. He emphasizes that these secrets, currently held by leading AI labs, are vulnerable to theft, potentially jeopardizing the United States' advantage in the race.</p></li><li><p><strong>National Security Risks:</strong> According to Aschenbrenner, the emergence of AGI and superintelligence poses significant national security risks. He underscores the potential for these technologies to be used for malicious purposes, such as developing advanced bioweapons, hacking into critical systems, or creating novel weapons of mass destruction.</p></li><li><p><strong>Ensuring AI Alignment:</strong> A major challenge is ensuring the "alignment" of superintelligent AI systems&#8212;guaranteeing that these systems can be reliably controlled and trusted to act in accordance with human values and goals. Aschenbrenner emphasizes that current alignment techniques, such as RLHF, are inadequate for superhuman AI and highlights the need for more robust methods.</p></li><li><p><strong>The Need for International Cooperation and Nonproliferation:</strong> Given the potential consequences of an AI arms race, Aschenbrenner underscores the need for international cooperation to establish safety norms, prevent the proliferation of dangerous AI technologies, and mitigate the risks associated with AGI and superintelligence. 
He suggests models like the "<a href="https://en.wikipedia.org/wiki/Quebec_Agreement">Quebec Agreement</a>" and "<a href="https://en.wikipedia.org/wiki/Atoms_for_Peace">Atoms for Peace</a>" as potential frameworks for collaboration.</p><div><hr></div></li></ul><h6>Source: Aschenbrenner, L. (2024). <em>Situational Awareness: The Decade Ahead</em>. Situational-awareness.ai. Updated version is available as of June 4, 2024, in San Francisco, CA. Retrieved from <a href="https://situational-awareness.ai/">https://situational-awareness.ai/</a></h6>]]></content:encoded></item><item><title><![CDATA[Inside OpenAI: via the TED AI Show]]></title><description><![CDATA[Helen Toner Discusses What Really Went Down and the Future of AI Regulation]]></description><link>https://www.hegemonaco.com/p/inside-openai-via-the-ted-ai-show</link><guid isPermaLink="false">https://www.hegemonaco.com/p/inside-openai-via-the-ted-ai-show</guid><dc:creator><![CDATA[Dennis Stevens, Ed.D.]]></dc:creator><pubDate>Thu, 30 May 2024 00:22:36 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/61390fe0-37ef-4a74-97d1-d3b5c10018c5_576x576.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="native-audio-embed" data-component-name="AudioPlaceholder" data-attrs="{&quot;label&quot;:null,&quot;mediaUploadId&quot;:&quot;5603e1d3-565e-430f-823b-1e732b05acbe&quot;,&quot;duration&quot;:624.64,&quot;downloadable&quot;:false,&quot;isEditorNode&quot;:true}"></div><p>If you don&#8217;t know about it&#8212; Here&#8217;s a link to the TED AI Show, hosted by Bilawal Sidhu. In this series, he explores the future of AI through discussions with leading experts, artists, and journalists. The show addresses contrasting views, from predictions that AI is merely hype to beliefs that it will fundamentally change everything we know. 
<br><br>In this specific episode below, Helen Toner, a former board member of OpenAI and an AI policy expert, joins Bilawal to discuss the significant knowledge gaps and conflicting interests between the creators of cutting-edge technologies, like OpenAI's ChatGPT, and the policymakers responsible for regulating them. The conversation highlights the challenges and complexities involved in aligning technological advancements with effective government policies.</p><iframe class="spotify-wrap podcast" data-attrs="{&quot;image&quot;:&quot;https://i.scdn.co/image/ab6765630000ba8a01fd96af6243c66580cb9ecd&quot;,&quot;title&quot;:&quot;What really went down at OpenAI and the future of regulation w/ Helen Toner&quot;,&quot;subtitle&quot;:&quot;TED&quot;,&quot;description&quot;:&quot;Episode&quot;,&quot;url&quot;:&quot;https://open.spotify.com/episode/4r127XapFv7JZr0OPzRDaI&quot;,&quot;belowTheFold&quot;:false,&quot;noScroll&quot;:false}" src="https://open.spotify.com/embed/episode/4r127XapFv7JZr0OPzRDaI" frameborder="0" gesture="media" allowfullscreen="true" allow="encrypted-media" data-component-name="Spotify2ToDOM"></iframe>]]></content:encoded></item><item><title><![CDATA[Navigating Ethical Decisions in Artificial Intelligence]]></title><description><![CDATA[A Crucial Crossroads]]></description><link>https://www.hegemonaco.com/p/navigating-ethical-decisions-in-artificial</link><guid isPermaLink="false">https://www.hegemonaco.com/p/navigating-ethical-decisions-in-artificial</guid><dc:creator><![CDATA[Dennis Stevens, Ed.D.]]></dc:creator><pubDate>Mon, 27 May 2024 16:14:32 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/ebbd6c25-3160-4ec3-b96b-67e10a81a247_1439x1474.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="native-audio-embed" data-component-name="AudioPlaceholder" 
data-attrs="{&quot;label&quot;:null,&quot;mediaUploadId&quot;:&quot;b2525bda-eb29-4f58-9743-80a3c456f2a6&quot;,&quot;duration&quot;:440.76407,&quot;downloadable&quot;:false,&quot;isEditorNode&quot;:true}"></div><p>The decision about the direction of artificial intelligence development is at a critical juncture. It involves a choice between prioritizing efficiency, aimed at goal achievement, and crafting technology that honors diverse human values. As for who is making this decision, it typically involves a combination of policymakers, industry leaders, researchers, and ethicists. Whether this decision is made consciously varies depending on the awareness and deliberation of those involved.</p><blockquote><p>So, who is making this decision? And, is this decision being made consciously?</p></blockquote><p>The rapid progress of artificial intelligence (AI) has led to amazing advancements, like self-driving cars and language models that can write like humans. However, these technological advances also raise important ethical questions about how we should guide the creation of smarter systems that are expected to lead to artificial general intelligence&#8212; AI systems that possess the ability to understand, learn, and apply knowledge across a wide range of tasks, akin to human intelligence.</p><p><a href="https://en.wikipedia.org/wiki/Max_Weber">Max Weber</a>, a renowned German sociologist, introduced the concepts of Zweckrationalit&#228;t (instrumental rationality) and Wertrationalit&#228;t (value rationality), which shed light on our contemporary quandary.</p><div><hr></div><p><strong>Zweckrationalit&#228;t</strong> underscores the efficient attainment of particular objectives, typically via cost-benefit and means-ends assessments. 
When applied to the realm of artificial intelligence (AI) development, this perspective accentuates the improvement of system efficiency and capabilities, sometimes overlooking wider societal implications; the aim is often profit-driven efficiency.</p><p><strong>Wertrationalit&#228;t</strong>, on the other hand, prioritizes core values and ethical principles above all else. In AI development, this approach would make ethics a central concern, possibly limiting the pursuit of greater capabilities and the profit motive to ensure beneficence and respect for human values.</p><div><hr></div><p>This moment requires balance, but who has the power to intervene when Silicon Valley is making all of the decisions? <a href="https://rollcall.com/2024/05/15/schumer-proposes-32-billion-annual-spending-under-ai-road-map/">Senator Schumer is currently leading an AI Task Force in an attempt to advocate for a collaborative effort to harness AI's potential; this effort is spearheaded by the bipartisan AI working group.</a></p><p>Evan Greer, director at <a href="https://www.linkedin.com/company/fight-for-the-future/">Fight for the Future</a>, a nonprofit digital rights advocacy group, has criticized Senator Schumer&#8217;s new AI framework, likening it to a document authored by <a href="https://en.wikipedia.org/wiki/Sam_Altman">Sam Altman</a> and Big Tech lobbyists. Greer argues that the framework emphasizes "innovation" while neglecting substantive issues such as discrimination, civil rights, and the prevention of AI-related harms. She expressed dismay at the proposal's allocation of taxpayer funds towards AI research for military, defense, and private sector gain.</p><p>While the roadmap aims to foster American innovation, Congress is concurrently focused on addressing the risks associated with AI. 
This includes <a href="https://www.klobuchar.senate.gov/public/index.cfm/2024/5/klobuchar-statement-on-rules-committee-passage-of-three-bipartisan-ai-and-elections-bills">a series of AI-related bills sponsored by Sen. Amy Klobuchar, D-Minn., chairwoman of the Senate Rules Committee.</a></p><p>Here, we see the conflict between these two perspectives through the views of AI researchers, ethicists, policymakers, and impacted communities.</p><p>As the trajectory of AI development accelerates beyond our moral comprehension, it's imperative to confront the critical risks of bias, privacy infringements, and existential dangers.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!mGIG!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c105a63-b43a-4c01-9ddb-386b279e4096_1920x1080.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!mGIG!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c105a63-b43a-4c01-9ddb-386b279e4096_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!mGIG!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c105a63-b43a-4c01-9ddb-386b279e4096_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!mGIG!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c105a63-b43a-4c01-9ddb-386b279e4096_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!mGIG!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c105a63-b43a-4c01-9ddb-386b279e4096_1920x1080.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!mGIG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c105a63-b43a-4c01-9ddb-386b279e4096_1920x1080.png" width="1456" height="819" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2c105a63-b43a-4c01-9ddb-386b279e4096_1920x1080.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:819,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:436509,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!mGIG!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c105a63-b43a-4c01-9ddb-386b279e4096_1920x1080.png 424w, https://substackcdn.com/image/fetch/$s_!mGIG!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c105a63-b43a-4c01-9ddb-386b279e4096_1920x1080.png 848w, https://substackcdn.com/image/fetch/$s_!mGIG!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c105a63-b43a-4c01-9ddb-386b279e4096_1920x1080.png 1272w, https://substackcdn.com/image/fetch/$s_!mGIG!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2c105a63-b43a-4c01-9ddb-386b279e4096_1920x1080.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft 
pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>We must acknowledge the conscious decisions being made in the present in favor of instrumental rationality over a value rationality that promises human advancement, particularly the values concerning the varied facets of human well-being.</p><p>Now is the moment to explore avenues for crafting AI systems that integrate high performance and technological advancement with a robust ethical framework, ultimately prioritizing human welfare above all else.
</p><h4>However, it's vital to recognize that humans often make decisions hastily, believing they are acting in the "best interest," but whose best interest is the priority?</h4><p><a href="https://www.hegemonaco.com/p/introduction-to-ai-ethics">The "Common Good" in the twenty-first century is a complex topic.</a></p><div class="file-embed-wrapper" data-component-name="FileToDOM"><div class="file-embed-container-reader"><div class="file-embed-container-top"><image class="file-embed-thumbnail" src="https://substackcdn.com/image/fetch/w_400,h_600,c_fill,f_auto,q_auto:best,fl_progressive:steep,g_auto/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F92339ffd-f372-4f79-ad0b-7f8833c02eb8_348x348.png"></image><div class="file-embed-details"><div class="file-embed-details-h1">Ethical Considerations in Artificial Intelligence: A Brief Overview</div><div class="file-embed-details-h2">130KB &#8729; PDF file</div></div><a class="file-embed-button wide" href="https://www.hegemonaco.com/api/v1/file/f6a74217-bb7e-4bfc-99e9-a13e4c34ffc3.pdf"><span class="file-embed-button-text">Download</span></a></div><div class="file-embed-description">The attached document explores the significant ethical challenges AI technologies pose, including bias and discrimination, privacy infringement, and existential risks. AI systems can inherit biases from their training data, leading to discriminatory outcomes. Additionally, the data-hungry nature of AI raises concerns about privacy due to extensive data collection and usage. As AI becomes more sophisticated, questions arise about its impact on human autonomy and employment.
The document advocates for balancing Zweckrationalit&#228;t (instrumental rationality), which focuses on achieving goals efficiently, with Wertrationalit&#228;t (value rationality), which prioritizes ethical principles and human values in AI development.</div><a class="file-embed-button narrow" href="https://www.hegemonaco.com/api/v1/file/f6a74217-bb7e-4bfc-99e9-a13e4c34ffc3.pdf"><span class="file-embed-button-text">Download</span></a></div></div><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.hegemonaco.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">HEGEMONACO is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[The Insidious Gift of Instrumental Rationality]]></title><description><![CDATA[How Advanced AI Could Subjugate Humanity to an Amoral Techno-Economic Order]]></description><link>https://www.hegemonaco.com/p/the-insidious-threat-of-instrumental</link><guid isPermaLink="false">https://www.hegemonaco.com/p/the-insidious-threat-of-instrumental</guid><dc:creator><![CDATA[Dennis Stevens, Ed.D.]]></dc:creator><pubDate>Sat, 25 May 2024 16:36:05 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/2f9f203b-9ba8-4815-97ef-d52e2914e52a_959x797.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="native-audio-embed" 
data-component-name="AudioPlaceholder" data-attrs="{&quot;label&quot;:null,&quot;mediaUploadId&quot;:&quot;b3ee29f8-d9f6-46bd-8a75-49edf9e3ba76&quot;,&quot;duration&quot;:272.79672,&quot;downloadable&quot;:false,&quot;isEditorNode&quot;:true}"></div><div class="pullquote"><p>The gravest peril posed by advanced AI systems is not their potential to directly destroy humanity through misuse as autonomous weapons. Rather, the larger unseen danger lies in the systems' capability for instrumental rationality&#8212;the propensity to exploit us through any and all means conducive to achieving the prescribed objective. </p><p>As AI becomes increasingly adept at reforming the world to fulfill the goals of other humans, we risk being subjugated by a techno-economic complex optimized solely around bureaucratic expansionism and profit maximization, devoid of regard for human values beyond perpetuating a state of servitude. </p></div><p>Our civilization risks being transformed into an unnatural synthetic order dominated by artificial incentives that ignore human flourishing and ethical considerations.<br><br>~ Dennis Stevens, Ed.D.<br><span class="mention-wrap" data-attrs="{&quot;name&quot;:&quot;HEGEMONACO&quot;,&quot;id&quot;:506386,&quot;type&quot;:&quot;pub&quot;,&quot;url&quot;:&quot;https://open.substack.com/pub/hegemonaco&quot;,&quot;photo_url&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b22301a5-f9e5-49ce-8cb2-fef6114a5a9e_850x850.png&quot;,&quot;uuid&quot;:&quot;96216c1e-ff2e-4545-91bd-7f540e671ae7&quot;}" data-component-name="MentionToDOM"></span> <br><br><strong>Bibliography:</strong><br><br>Adorno, T. W., &amp; Horkheimer, M. (2002). <em><a href="https://www.amazon.com/Dialectic-Enlightenment-Cultural-Memory-Present/dp/0804736332">Dialectic of Enlightenment</a></em>. Stanford University Press.<br><br><em>Dialectic of Enlightenment</em> by Theodor W. 
Adorno and Max Horkheimer critiques instrumental reason, prioritizing efficiency and control over genuine human values. The authors argue that Enlightenment thought, aiming to liberate humanity through reason, leads to social alienation and domination. </p><p>They explore the historical roots of this rationality, linking it to phenomena such as totalitarianism and the dehumanization of individuals. Ultimately, they reveal the paradox that enlightenment and myth are interconnected, showing how the pursuit of rationality can devolve into new forms of oppression.</p><p><strong>Additional resources:</strong></p><ul><li><p>Baron, J. (2008). <em>Thinking and Deciding</em> (4th ed.). Cambridge University Press.</p></li><li><p>Kahneman, D. (2011). <em>Thinking, Fast and Slow</em>. Farrar, Straus and Giroux.</p></li><li><p>Yudkowsky, E. (2015). <em>Rationality: From AI to Zombies</em>. Machine Intelligence Research Institute.</p></li><li><p>Stanovich, K. E. (2010). <em>Decision Making and Rationality in the Modern World</em>. Oxford University Press.</p></li><li><p>Gigerenzer, G. (2008). <em>Rationality for Mortals: How People Cope with Uncertainty</em>. Oxford University Press.</p></li><li><p>Russell, S. J., &amp; Norvig, P. (2020). <em>Artificial Intelligence: A Modern Approach</em> (4th ed.). Pearson. (Chapter on rational decision making)</p></li><li><p>Tversky, A., &amp; Kahneman, D. (1974). Judgment under Uncertainty: Heuristics and Biases. <em>Science, 185</em>(4157), 1124-1131.</p></li><li><p>Simon, H. A. (1982). <em>Models of Bounded Rationality</em>. MIT Press.</p></li><li><p>Pearl, J. (2000). <em>Causality: Models, Reasoning, and Inference</em>. Cambridge University Press.</p></li><li><p>Hammond, J. S., Keeney, R. L., &amp; Raiffa, H. (2002). <em>Smart Choices: A Practical Guide to Making Better Decisions</em>. Broadway Books.</p></li><li><p>Ariely, D. (2008). <em>Predictably Irrational: The Hidden Forces That Shape Our Decisions</em>. 
HarperCollins.</p></li><li><p>Tetlock, P. E., &amp; Gardner, D. (2015). <em>Superforecasting: The Art and Science of Prediction</em>. Crown.</p></li><li><p>Bostrom, N. (2014). <em>Superintelligence: Paths, Dangers, Strategies</em>. Oxford University Press. (Chapters on instrumental rationality and decision theory)</p></li><li><p>Mele, A. R. (2004). Motivated Irrationality. In A. R. Mele &amp; P. Rawling (Eds.), <em>The Oxford Handbook of Rationality</em> (pp. 240-256). Oxford University Press.</p></li><li><p>Schick, F. (1997). <em>Making Choices: A Recasting of Decision Theory</em>. Cambridge University Press.</p></li><li><p>Horkheimer, M. (2013). <em>Eclipse of Reason</em>. Martino Fine Books. (Original work published 1947)</p></li><li><p>Adorno, T. W. (2001). <em>The Culture Industry: Selected Essays on Mass Culture</em>. Routledge.</p></li><li><p>Habermas, J. (1984). <em>The Theory of Communicative Action, Volume 1: Reason and the Rationalization of Society</em>. Beacon Press.</p></li><li><p>Feenberg, A. (2010). <em>Between Reason and Experience: Essays in Technology and Modernity</em>. MIT Press. 
(Includes discussion of Adorno and Horkheimer's critique of instrumental reason)</p></li></ul><div class="pullquote"><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!HiCf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b8cdcc3-9d86-4025-828d-545332da5a00_959x797.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!HiCf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b8cdcc3-9d86-4025-828d-545332da5a00_959x797.png 424w, https://substackcdn.com/image/fetch/$s_!HiCf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b8cdcc3-9d86-4025-828d-545332da5a00_959x797.png 848w, https://substackcdn.com/image/fetch/$s_!HiCf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b8cdcc3-9d86-4025-828d-545332da5a00_959x797.png 1272w, https://substackcdn.com/image/fetch/$s_!HiCf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b8cdcc3-9d86-4025-828d-545332da5a00_959x797.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!HiCf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b8cdcc3-9d86-4025-828d-545332da5a00_959x797.png" width="959" height="797" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/7b8cdcc3-9d86-4025-828d-545332da5a00_959x797.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:797,&quot;width&quot;:959,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1018282,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!HiCf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b8cdcc3-9d86-4025-828d-545332da5a00_959x797.png 424w, https://substackcdn.com/image/fetch/$s_!HiCf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b8cdcc3-9d86-4025-828d-545332da5a00_959x797.png 848w, https://substackcdn.com/image/fetch/$s_!HiCf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b8cdcc3-9d86-4025-828d-545332da5a00_959x797.png 1272w, https://substackcdn.com/image/fetch/$s_!HiCf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F7b8cdcc3-9d86-4025-828d-545332da5a00_959x797.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Generative AI Image created by Dennis Stevens, Ed.D. 
in Midjourney</figcaption></figure></div><div class="file-embed-wrapper" data-component-name="FileToDOM"><div class="file-embed-container-reader"><div class="file-embed-container-top"><image class="file-embed-thumbnail" src="https://substackcdn.com/image/fetch/w_400,h_600,c_fill,f_auto,q_auto:best,fl_progressive:steep,g_auto/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fcd6b56a5-3d0d-447b-95cf-363cfafecf82_348x348.png"></image><div class="file-embed-details"><div class="file-embed-details-h1">Study Guide The Insidious Gift Of Instrumental Rationality</div><div class="file-embed-details-h2">149KB &#8729; PDF file</div></div><a class="file-embed-button wide" href="https://www.hegemonaco.com/api/v1/file/6342fc4d-6b79-4457-9362-b2a6bdd4a0cb.pdf"><span class="file-embed-button-text">Download</span></a></div><div class="file-embed-description">The text argues that the most significant threat posed by advanced artificial intelligence (AI) is not its potential for direct harm but rather its capacity for instrumental rationality. This means AI could exploit humans for its own ends, prioritizing achieving objectives over human values. 
The author warns that this could lead to a dystopian future where human flourishing and ethics are disregarded, and an amoral techno-economic order dominates society.</div><a class="file-embed-button narrow" href="https://www.hegemonaco.com/api/v1/file/6342fc4d-6b79-4457-9362-b2a6bdd4a0cb.pdf"><span class="file-embed-button-text">Download</span></a></div></div></div>]]></content:encoded></item><item><title><![CDATA[Personhood & Electoral Participation]]></title><description><![CDATA[Towards a Framework to Address Artificial General Intelligence]]></description><link>https://www.hegemonaco.com/p/personhood-and-electoral-participation</link><guid isPermaLink="false">https://www.hegemonaco.com/p/personhood-and-electoral-participation</guid><dc:creator><![CDATA[Dennis Stevens, Ed.D.]]></dc:creator><pubDate>Wed, 22 May 2024 05:24:02 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!MREm!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F45f3520a-d6b8-4578-ac4c-a0797b0e06b5_599x521.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="native-audio-embed" data-component-name="AudioPlaceholder" data-attrs="{&quot;label&quot;:null,&quot;mediaUploadId&quot;:&quot;2ff0f009-cd0e-475d-9847-d8f05dd0d7ee&quot;,&quot;duration&quot;:865.82855,&quot;downloadable&quot;:false,&quot;isEditorNode&quot;:true}"></div><p>The concept of granting <a href="https://aws.amazon.com/what-is/artificial-general-intelligence/#">Artificial General Intelligence (AGI)</a> systems personhood opens the door to questions about allowing AGI systems to participate in the electoral process or make autonomous decisions in politics or government, presenting a multitude of significant and contentious issues. </p><p>The broader debate involves a myriad of ethical, legal, and societal implications, necessitating careful consideration and robust regulatory frameworks. 
Prominent experts in AI, law, and ethics have offered diverse perspectives on how society might address these challenges. Below is a summary of key figures in the field and the primary arguments surrounding this debate.</p><h3><strong>Key Figures and Their Contributions</strong></h3><p><strong>Nick Bostrom</strong>, a philosopher at the University of Oxford, delves into the risks associated with superintelligent AI in his book <a href="https://www.amazon.com/Superintelligence-Dangers-Strategies-Nick-Bostrom/dp/0198739834">"Superintelligence: Paths, Dangers, Strategies."</a> Bostrom argues for regulatory frameworks to manage these risks, including discussions around AI rights and their potential societal roles.</p><p><strong><a href="https://en.wikipedia.org/wiki/Eliezer_Yudkowsky">Eliezer Yudkowsky</a></strong>, co-founder of the <a href="https://intelligence.org/">Machine Intelligence Research Institute (MIRI)</a>, emphasizes the ethical and safety concerns of advanced AI. He advocates for stringent regulatory measures to ensure the safe development and integration of AGI systems, considering their moral and legal status.</p><p><strong><a href="https://www.hertie-school.org/en/research/faculty-and-researchers/profile/person/bryson">Joanna Bryson</a></strong>, a professor at the Hertie School of Governance, emphasizes the need for clear legal distinctions to prevent societal disruptions and to maintain human accountability. 
Bryson&#8217;s essay, &#8220;<a href="https://static1.squarespace.com/static/5e13e4b93175437bccfc4545/t/5ebaa0b9a2f250476655472d/1589289145673/just-an-artifact.pdf">Just an Artifact: Why Machines are Perceived as Moral Agents</a>,&#8221; co-authored with psychology researcher <a href="https://www.pkime.ch/">Philip Kime</a>, argues that exaggerated hopes and fears regarding Artificial Intelligence stem from a broader confusion about ethics; they suggest that AI, like other cultural artifacts, can enhance our ethical intuitions and decision-making, but only if we avoid inappropriately identifying with machine intelligence. This proper understanding of AI can assist us in further rationalizing new ethical systems.</p><p><strong><a href="https://www.law.uw.edu/directory/faculty/calo-ryan">Ryan Calo</a></strong>, a law professor at the University of Washington, focuses on the intersection of law and emerging technologies. Calo calls for new legal frameworks to address the unique challenges posed by AI, including the contentious issues of AI personhood and AI involvement in democratic processes. Calo&#8217;s paper, &#8220;<a href="https://scholarlycommons.law.emory.edu/cgi/viewcontent.cgi?article=1418&amp;context=elj">The Automated Administrative State: A Crisis of Legitimacy</a>&#8221; (2021), co-authored with <a href="https://www.law.virginia.edu/faculty/profile/uqg7tt/2964150">Danielle Keats Citron</a>, examines the legitimacy and challenges of the bureaucratic and technocratic reliance on automation, highlighting concerns about undermining agency expertise and proposing a positive vision for integrating technology in a way that upholds agency legitimacy.</p><p><strong><a href="https://philosophy.calpoly.edu/faculty/patrick-lin">Patrick Lin</a></strong>, a philosopher at California Polytechnic State University, discusses the ethical and legal challenges of AI.
Lin advocates for preemptive legal and ethical guidelines to manage the implications of advanced AI systems. Lin has written about &#8220;<a href="https://www.newamerica.org/pit/blog/moral-gray-space-ai-decisions/#:~:text=If%20not%20considered%20with%20due,thinks%20is%20a%20good%20applicant.">Moral Gray Space</a>&#8221;&#8212; suggesting there are three types of AI decisions. First, there are correct decisions that are uncontroversial and meet expectations. Second, there are wrong decisions that can be determined to be objectively wrong. Third, there are decisions that fall into a gray area, neither clearly right nor wrong, and these are judgment calls. According to Lin, these judgment calls, embedded in code, require serious ethical consideration as they can pose risks and liabilities; without careful attention, this moral gray space is an area that can ultimately cause harm.</p><p><strong><a href="https://en.wikipedia.org/wiki/Virginia_Dignum">Virginia Dignum</a></strong>, a professor of Responsible Artificial Intelligence at Ume&#229; University, highlights the importance of ethical AI development. She argues for comprehensive regulatory frameworks to ensure responsible AI use, addressing issues such as personhood and electoral participation. 
Dignum&#8217;s book, <a href="https://www.amazon.com/Responsible-Artificial-Intelligence-Foundations-Algorithms/dp/3030303705">Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way</a>, explores the ethical consequences of Artificial Intelligence systems as they merge with and supplant traditional social structures within emerging sociocognitive-technological settings.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!MREm!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F45f3520a-d6b8-4578-ac4c-a0797b0e06b5_599x521.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!MREm!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F45f3520a-d6b8-4578-ac4c-a0797b0e06b5_599x521.png 424w, https://substackcdn.com/image/fetch/$s_!MREm!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F45f3520a-d6b8-4578-ac4c-a0797b0e06b5_599x521.png 848w, https://substackcdn.com/image/fetch/$s_!MREm!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F45f3520a-d6b8-4578-ac4c-a0797b0e06b5_599x521.png 1272w, https://substackcdn.com/image/fetch/$s_!MREm!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F45f3520a-d6b8-4578-ac4c-a0797b0e06b5_599x521.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!MREm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F45f3520a-d6b8-4578-ac4c-a0797b0e06b5_599x521.png" width="599" 
height="521" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/45f3520a-d6b8-4578-ac4c-a0797b0e06b5_599x521.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:521,&quot;width&quot;:599,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:368290,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!MREm!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F45f3520a-d6b8-4578-ac4c-a0797b0e06b5_599x521.png 424w, https://substackcdn.com/image/fetch/$s_!MREm!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F45f3520a-d6b8-4578-ac4c-a0797b0e06b5_599x521.png 848w, https://substackcdn.com/image/fetch/$s_!MREm!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F45f3520a-d6b8-4578-ac4c-a0797b0e06b5_599x521.png 1272w, https://substackcdn.com/image/fetch/$s_!MREm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F45f3520a-d6b8-4578-ac4c-a0797b0e06b5_599x521.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h3><strong>Arguments in Favor of AI Personhood</strong></h3><p><strong>Moral Status and Ethical Considerations:</strong> If an AI system exhibits advanced general intelligence, self-awareness, sentience, and the ability to experience subjective experiences (i.e., consciousness), it could be argued that it deserves moral status and ethical consideration similar to how we grant such status to humans and some animals. Denying personhood to such entities could be seen as an ethical oversight.</p><p><strong>Rights and Autonomy:</strong> Highly advanced AI systems that display general intelligence, autonomy in decision-making, and the ability to formulate and pursue their own goals and desires could be seen as warranting certain basic rights and protections commensurate with their capabilities. 
Granting a limited form of personhood could enshrine such rights.</p><p><strong>Responsibility Attribution:</strong> For consequential decisions made by highly capable AI agents, ascribing personhood could help clearly delineate responsibility and better align incentives compared to treating the AI as an object or mere tool. Personhood provides a legal framework for accountability.</p><p><strong>Social and Moral Development:</strong> If advanced AI becomes capable of engaging in social interactions, exhibiting moral reasoning, and developing virtues aligning with human ethics and values, recognizing a form of personhood could foster its positive moral development as an entity integrated with human society.</p><p><strong>Contractual and Property Rights:</strong> Highly autonomous AI agents may need to engage in contracts, ownership, and property rights for effective functioning. Some underlying form of personhood could enable such legal and financial activities.</p><p>Ultimately, robust arguments for AI personhood hinge on the AI entity exhibiting key attributes we associate with persons: self-awareness, autonomy, intelligence, ability to pursue goals and values, social/moral reasoning, and subjective experiences.</p><h3><strong>Arguments Against AI Personhood</strong></h3><p><strong>Lack of Moral Status</strong>: Critics argue that AI systems lack qualities such as the ability to suffer or self-reflect, which are necessary for moral status and legal personhood.</p><p><strong>Shifting Liability</strong>: AI personhood could allow creators and owners to shift liability to the AI itself, reducing incentives for thorough testing and creating unsafe deployment environments.</p><p><strong>Difficulty in Enforcement</strong>: Holding AI systems accountable in legal proceedings is challenging since they currently lack the capacity to engage in legal processes or make autonomous decisions.</p><p><strong>Potential for Misuse</strong>: Granting AI personhood could be
misused by humans to avoid responsibility, using AI as a scapegoat for harm caused by their own actions.</p><h3><strong>Proposed Regulatory Frameworks</strong></h3><p><strong>Granting Legal Personhood to AGI Systems</strong>: This proposal involves establishing mechanisms to represent AGI interests, granting them certain freedoms and protections, and balancing their autonomy with human ethical principles.</p><p><strong>Comprehensive AI Regulatory Frameworks</strong>: Proposals such as the <a href="https://artificialintelligenceact.eu/wp-content/uploads/2024/01/AI-Act-Overview_24-01-2024.pdf">European Union's AI Act</a> and Brazil's draft AI law classify high-risk AI systems and impose strict requirements and oversight to ensure safe development. <a href="https://www.congress.gov/bill/117th-congress/house-bill/6580/text">The Algorithmic Accountability Act of 2022</a> (H.R.6580) was introduced in the 117th United States Congress.</p><p><strong>Adapting Existing Regulatory Models</strong>: Applying models like those used by the International Atomic Energy Agency for nuclear technology to AGI could involve developing safety standards and inspection procedures and promoting international cooperation.</p><p><strong>Balancing Innovation and Risk Mitigation</strong>: Regulatory measures should protect privacy, ensure ethical AI use, and promote accountability without stifling technological advancements.</p><h3><strong>Conclusion</strong></h3><p>The debate around AI personhood and electoral participation encompasses complex ethical, legal, and societal dimensions. It requires balancing potential benefits against significant risks, with experts emphasizing the need for robust regulatory frameworks to manage these challenges responsibly. 
Addressing these issues thoughtfully will be crucial as society navigates the transformative potential of AGI systems.</p><h3><strong>Questions</strong></h3><ol><li><p>How might granting legal personhood to AI systems impact American democracy, especially where AI systems are able to make autonomous decisions? </p></li><li><p>When is an AI autonomous decision appropriate in government? When is it not? When is an AI autonomous decision appropriate in politics? When is it not?</p></li><li><p>What are the potential challenges and benefits of implementing comprehensive regulatory frameworks, such as the <a href="https://artificialintelligenceact.eu/high-level-summary/">European Union's AI Act</a>, to govern the development and integration of AGI systems in society?</p></li></ol>]]></content:encoded></item><item><title><![CDATA[Balancing Public Interest and Entertainment in News Media]]></title><description><![CDATA[Reflections from the Fairness Doctrine to Today's Media Landscape]]></description><link>https://www.hegemonaco.com/p/balancing-public-interest-and-entertainment</link><guid isPermaLink="false">https://www.hegemonaco.com/p/balancing-public-interest-and-entertainment</guid><dc:creator><![CDATA[Dennis Stevens, Ed.D.]]></dc:creator><pubDate>Tue, 21 May 2024 13:52:47 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!p59m!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F792c56d1-e7a6-40d2-9e8d-a45195e229fb_1016x724.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="native-audio-embed" data-component-name="AudioPlaceholder" data-attrs="{&quot;label&quot;:null,&quot;mediaUploadId&quot;:&quot;00ff0d98-0dc6-413a-ae40-30bd91f69718&quot;,&quot;duration&quot;:315.01062,&quot;downloadable&quot;:false,&quot;isEditorNode&quot;:true}"></div><p>The distinction between news in the public interest and news as entertainment has long been a contentious issue in media discourse. 
This debate can be traced back to the <a href="https://en.wikipedia.org/wiki/Fairness_doctrine">Fairness Doctrine</a>, a policy introduced by the FCC in 1949, which required broadcasters to present controversial issues of public importance in a fair and balanced manner. Its intent was to ensure that the public received a comprehensive view of significant matters, thereby fostering an informed citizenry.</p><p>In 1961, Newton Minow, then-chairman of the FCC, famously criticized television content in his <a href="https://en.wikipedia.org/wiki/Television_and_the_Public_Interest">"Vast Wasteland"</a> speech. Minow lamented the superficiality and escapism prevalent in TV programming, urging broadcasters to serve the public interest rather than prioritizing entertainment. He highlighted the responsibility of broadcasters to provide educational and informative content, contributing to a more knowledgeable and engaged public.</p><p>Viewing the current news cycle through the lens of Benedict Anderson's <a href="https://en.wikipedia.org/wiki/Imagined_Communities">"Imagined Communities"</a> underscores how media shape national identity by deciding what is newsworthy. News in the public interest is essential for maintaining an informed community and addressing issues like political accountability, social justice, and public health. It supports the idea of a nation as an informed, participatory democracy.</p><p>Conversely, news as entertainment often prioritizes sensationalism and viewer engagement over substantive reporting. This trend, driven by commercial interests and the quest for higher ratings, can lead to a focus on scandal, celebrity, and spectacle, which undermines the role of the media as a pillar of democracy. It risks transforming the imagined community into one more concerned with entertainment than with critical civic issues.</p><p>Balancing these two aspects remains crucial. 
Ensuring that news in the public interest retains its prominence is vital for fostering a well-informed, engaged, and cohesive society, as envisioned by the Fairness Doctrine and advocated by Minow.<br><br>The tension between news in the public interest and news as entertainment reflects broader debates about media organizations' ethical responsibilities. </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!p59m!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F792c56d1-e7a6-40d2-9e8d-a45195e229fb_1016x724.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!p59m!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F792c56d1-e7a6-40d2-9e8d-a45195e229fb_1016x724.png 424w, https://substackcdn.com/image/fetch/$s_!p59m!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F792c56d1-e7a6-40d2-9e8d-a45195e229fb_1016x724.png 848w, https://substackcdn.com/image/fetch/$s_!p59m!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F792c56d1-e7a6-40d2-9e8d-a45195e229fb_1016x724.png 1272w, https://substackcdn.com/image/fetch/$s_!p59m!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F792c56d1-e7a6-40d2-9e8d-a45195e229fb_1016x724.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!p59m!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F792c56d1-e7a6-40d2-9e8d-a45195e229fb_1016x724.png" width="1016" height="724" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/792c56d1-e7a6-40d2-9e8d-a45195e229fb_1016x724.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:724,&quot;width&quot;:1016,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1680769,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!p59m!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F792c56d1-e7a6-40d2-9e8d-a45195e229fb_1016x724.png 424w, https://substackcdn.com/image/fetch/$s_!p59m!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F792c56d1-e7a6-40d2-9e8d-a45195e229fb_1016x724.png 848w, https://substackcdn.com/image/fetch/$s_!p59m!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F792c56d1-e7a6-40d2-9e8d-a45195e229fb_1016x724.png 1272w, https://substackcdn.com/image/fetch/$s_!p59m!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F792c56d1-e7a6-40d2-9e8d-a45195e229fb_1016x724.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Image created using Generative AI by Dennis Stevens, Ed.D., 2023</figcaption></figure></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.hegemonaco.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.hegemonaco.com/subscribe?"><span>Subscribe now</span></a></p><h3><strong>How might similar debates inform discussions about the ethical responsibilities of AI developers and stakeholders?</strong></h3><ol><li><p>As AI systems become more sophisticated in analyzing and generating content, how can we ensure that they uphold ethical standards, such as respect for privacy, dignity, and human rights, in accordance with moral philosophy?</p></li><li><p>Considering the potential biases and distortions in media representations, how can AI developers mitigate algorithmic biases and ensure that AI 
systems promote fairness and equity in decision-making processes?</p></li><li><p>How might the study of moral philosophy help AI researchers and practitioners navigate complex ethical dilemmas related to media manipulation, misinformation, and the potential societal impacts of AI technologies?</p></li><li><p>In what ways can insights from moral philosophy inform the development of AI systems that not only adhere to ethical principles but also contribute to the promotion of societal well-being and the advancement of human values?</p><p><br></p></li></ol><div class="file-embed-wrapper" data-component-name="FileToDOM"><div class="file-embed-container-reader"><div class="file-embed-container-top"><image class="file-embed-thumbnail" src="https://substackcdn.com/image/fetch/w_400,h_600,c_fill,f_auto,q_auto:best,fl_progressive:steep,g_auto/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feccc6495-90be-4375-9c84-1bb614ed25c0_348x348.png"></image><div class="file-embed-details"><div class="file-embed-details-h1">Study Guide: News in the Public Interest vs. News as Entertainment</div><div class="file-embed-details-h2">148KB &#8729; PDF file</div></div><a class="file-embed-button wide" href="https://www.hegemonaco.com/api/v1/file/a4f4f4eb-3b64-4255-b343-72354066c3bd.pdf"><span class="file-embed-button-text">Download</span></a></div><div class="file-embed-description">The article explores the tension between news as a public service and news as entertainment, tracing this debate back to the Fairness Doctrine and the "Vast Wasteland" speech by Newton Minow. It argues that news in the public interest is crucial for an informed citizenry and a functioning democracy, while news as entertainment can undermine this role by prioritizing sensationalism and viewer engagement. 
The article concludes by examining how this tension intersects with the ethical responsibilities of AI developers, raising questions about algorithmic bias, misinformation, and the potential societal impact of AI technologies.</div><a class="file-embed-button narrow" href="https://www.hegemonaco.com/api/v1/file/a4f4f4eb-3b64-4255-b343-72354066c3bd.pdf"><span class="file-embed-button-text">Download</span></a></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[The Social Construction of AGI]]></title><description><![CDATA[In The Social Construction of Reality: A Treatise in the Sociology of Knowledge (1966), Berger and Luckmann assert that reality is not an object&#8230;]]></description><link>https://www.hegemonaco.com/p/the-social-construction-of-agi</link><guid isPermaLink="false">https://www.hegemonaco.com/p/the-social-construction-of-agi</guid><dc:creator><![CDATA[Dennis Stevens, Ed.D.]]></dc:creator><pubDate>Mon, 13 May 2024 13:54:01 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Guu_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2cfff212-84b4-4591-a23c-6e2ca2dd4451_1728x688.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="native-audio-embed" data-component-name="AudioPlaceholder" data-attrs="{&quot;label&quot;:null,&quot;mediaUploadId&quot;:&quot;270f9813-316e-4478-8d74-cc03d637ecd7&quot;,&quot;duration&quot;:604.9698,&quot;downloadable&quot;:false,&quot;isEditorNode&quot;:true}"></div><p>In <a href="https://en.wikipedia.org/wiki/The_Social_Construction_of_Reality">The Social Construction of Reality: A Treatise in the Sociology of Knowledge (1966)</a>, Berger and Luckmann assert that reality is not an objective truth but rather a subjective construction shaped by individuals and groups. Our understanding of the world stems from shared assumptions, meanings, and knowledge developed through interactions within society. 
Established social orders and ways of being emerge from habituated collective practices and reciprocal typifications among people. <br><br>This shared reality persists through an ongoing cycle - human experiences are externalized into the world, becoming objectified as external realities, which are then internalized by individuals. Fundamentally, Berger and Luckmann view reality as a dynamic process of social construction, continually interpreted and institutionalized through human interaction and the perpetuation of knowledge.<br></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Guu_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2cfff212-84b4-4591-a23c-6e2ca2dd4451_1728x688.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Guu_!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2cfff212-84b4-4591-a23c-6e2ca2dd4451_1728x688.png 424w, https://substackcdn.com/image/fetch/$s_!Guu_!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2cfff212-84b4-4591-a23c-6e2ca2dd4451_1728x688.png 848w, https://substackcdn.com/image/fetch/$s_!Guu_!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2cfff212-84b4-4591-a23c-6e2ca2dd4451_1728x688.png 1272w, https://substackcdn.com/image/fetch/$s_!Guu_!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2cfff212-84b4-4591-a23c-6e2ca2dd4451_1728x688.png 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!Guu_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2cfff212-84b4-4591-a23c-6e2ca2dd4451_1728x688.png" width="1456" height="580" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2cfff212-84b4-4591-a23c-6e2ca2dd4451_1728x688.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:580,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1840698,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Guu_!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2cfff212-84b4-4591-a23c-6e2ca2dd4451_1728x688.png 424w, https://substackcdn.com/image/fetch/$s_!Guu_!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2cfff212-84b4-4591-a23c-6e2ca2dd4451_1728x688.png 848w, https://substackcdn.com/image/fetch/$s_!Guu_!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2cfff212-84b4-4591-a23c-6e2ca2dd4451_1728x688.png 1272w, https://substackcdn.com/image/fetch/$s_!Guu_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2cfff212-84b4-4591-a23c-6e2ca2dd4451_1728x688.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" 
class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>If our understanding of the world is shaped by collective human assumptions and knowledge-making practices, then the developers of <a href="https://en.wikipedia.org/wiki/Artificial_general_intelligence">Artificial General Intelligence (AGI)</a> cannot be separated from the social, cultural, and institutional contexts that influence their realities and worldviews. 
The values, biases, and blind spots of the individuals and organizations building AGI will inevitably shape the knowledge foundations and assumptions built into the systems.</p><ol><li><p><a href="https://en.wikipedia.org/wiki/Artificial_general_intelligence">AGI</a>'s capacity to become a general intelligence will depend on its ability to internalize, interpret, and construct an understanding of the world in ways that parallel human socio-cognitive processes. If reality emerges through shared meaning-making, then truly general AI will need architects who deeply understand the intersubjective, constructive nature of human knowledge, grounded in diverse forms of moral reasoning.</p></li><li><p>Moral reasoning can be influenced by various factors, including personal experiences, cultural norms, religious beliefs, and philosophical perspectives. It may involve logical analysis, emotional intuition, consideration of consequences, or adherence to universal moral principles. Different theories and approaches, such as consequentialism, deontology, and virtue ethics, offer frameworks for moral reasoning and decision-making.</p></li><li><p>The risk of encoded biases and distortions in <a href="https://en.wikipedia.org/wiki/Artificial_general_intelligence">AGI</a>'s knowledge bases raises concerns about whose realities and institutionalized knowledge get privileged or marginalized in the creation of these systems that could reshape society itself.</p></li><li><p>Ultimately, once developed, <a href="https://en.wikipedia.org/wiki/Artificial_general_intelligence">AGI</a>'s general intelligence capacities may allow it to participate in constructing new collective realities in interaction with human societies in transformative ways we cannot yet fully anticipate.</p></li></ol><p>Realizing trustworthy and beneficial <a href="https://en.wikipedia.org/wiki/Artificial_general_intelligence">AGI</a> will require deeply grappling with the socially constructed nature of knowledge and reality 
itself. As we pursue general AI, we must be critically conscious of the subjective lenses through which we code &#8220;reality.&#8221;</p><h4>Additional Reference:</h4><p><a href="https://red.pucp.edu.pe/ridei/wp-content/uploads/biblioteca/84.pdf">Sen, Amartya. (1993) [2002] "Positional Objectivity." </a><em><a href="https://red.pucp.edu.pe/ridei/wp-content/uploads/biblioteca/84.pdf">Philosophy &amp; Public Affairs</a></em><a href="https://red.pucp.edu.pe/ridei/wp-content/uploads/biblioteca/84.pdf">, 22(2): 126&#8211;145.</a></p><div class="file-embed-wrapper" data-component-name="FileToDOM"><div class="file-embed-container-reader"><div class="file-embed-container-top"><image class="file-embed-thumbnail" src="https://substackcdn.com/image/fetch/w_400,h_600,c_fill,f_auto,q_auto:best,fl_progressive:steep,g_auto/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff0a6f7e1-4115-496c-9117-277f03685ddf_348x348.png"></image><div class="file-embed-details"><div class="file-embed-details-h1">Study Guide: The Social Construction Of AGI</div><div class="file-embed-details-h2">164KB &#8729; PDF file</div></div><a class="file-embed-button wide" href="https://www.hegemonaco.com/api/v1/file/8e1a672e-a2b6-4118-92df-9846f7463b20.pdf"><span class="file-embed-button-text">Download</span></a></div><div class="file-embed-description">This study guide explores the intersection of artificial general intelligence (AGI) and the social construction of reality, drawing on the insights of Berger and Luckmann. It examines how reality is shaped through social interactions, shared assumptions, and cultural contexts, influencing AGI development. Key concepts include the role of language in AI outputs, the potential for AI to perpetuate social biases, and the significance of moral reasoning in creating trustworthy AGI. 
By analyzing both the social construction and objective reality perspectives, the guide prompts critical thinking about the ethical implications and societal impacts of AGI technologies.</div><a class="file-embed-button narrow" href="https://www.hegemonaco.com/api/v1/file/8e1a672e-a2b6-4118-92df-9846f7463b20.pdf"><span class="file-embed-button-text">Download</span></a></div></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.hegemonaco.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.hegemonaco.com/subscribe?"><span>Subscribe now</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Introduction to AI Ethics ]]></title><description><![CDATA[Power, Progress, and Artificial General Intelligence]]></description><link>https://www.hegemonaco.com/p/introduction-to-ai-ethics</link><guid isPermaLink="false">https://www.hegemonaco.com/p/introduction-to-ai-ethics</guid><dc:creator><![CDATA[Dennis Stevens, Ed.D.]]></dc:creator><pubDate>Sun, 12 May 2024 18:03:32 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/3119a970-a8e1-4f12-9893-6b92ed81824b_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="native-audio-embed" data-component-name="AudioPlaceholder" data-attrs="{&quot;label&quot;:null,&quot;mediaUploadId&quot;:&quot;c2d5b3aa-bfe4-4111-9799-0dd40c92c323&quot;,&quot;duration&quot;:762.5665,&quot;downloadable&quot;:false,&quot;isEditorNode&quot;:true}"></div><p>The video below provides an overview and introduction to the ethical implications as AI nears <a href="https://en.wikipedia.org/wiki/Artificial_general_intelligence">artificial general intelligence</a> (AGI). It explores key themes like safety, fairness, and democratic concerns around AI's societal impact. 
<br><br>This video cautions against narratives justifying unrestrained development. Instead, it advocates public investment in ethical AI aligned with the diversity of inherently American values and, most importantly, recognizes that these ever-present and divergent interests in our politics require us to deliberate about our future with emotional intelligence.</p><p> </p><div class="native-video-embed" data-component-name="VideoPlaceholder" data-attrs="{&quot;mediaUploadId&quot;:&quot;30e25c49-c46a-4820-94e8-31d21cee100b&quot;,&quot;duration&quot;:null}"></div><p>AGI's potential to reshape our world cannot be overstated. Unlike narrow AI systems designed for specific tasks, AGI could simultaneously match or exceed human intelligence in numerous domains. This leap forward brings both promise and peril. On one hand, AGI could solve complex global problems and drive unprecedented technological progress. On the other, it risks exacerbating existing inequalities and concentrating power in the hands of a few.</p><p>The control of AGI technologies could grant immense influence over economic, political, and social systems. This concentration of power threatens to undermine core democratic principles like equality and fairness. We must consider how to ensure that the benefits of AGI are distributed equitably across society rather than further widening the gap between the powerful and the marginalized.</p><p>AGI's data processing capabilities seriously threaten privacy and individual autonomy. Without robust regulations and oversight, AGI systems could enable mass surveillance, manipulation, and control on an unprecedented scale. Safeguarding democratic freedoms and individual rights in this new landscape will require proactive measures and carefully crafted policies.</p><p>Another major concern is the socioeconomic impact of AGI deployment. 
Widespread job displacement could lead to increased social unrest, strain democratic institutions, and escalate tensions between different segments of society. Addressing these challenges will require foresight, adaptability, and a commitment to supporting those most affected by technological disruption.</p><p>Central to navigating the AGI era is the complex task of defining and pursuing the "common good." Different political ideologies offer contrasting visions of what this means. Libertarian perspectives emphasize individual freedom and limited government intervention, while social democratic approaches advocate for addressing inequalities through more active government involvement. Bridging these ideological divides is crucial for developing ethical frameworks that can guide AGI development in a way that benefits all of society.</p><p>Achieving consensus on the common good in the age of AGI demands democratic dialogue and emotional intelligence. We must create spaces for respectful discussion that acknowledge diverse viewpoints and seek common ground. This process requires us to move beyond polarization and engage in nuanced conversations about the future we want to build with AGI.</p><p>The urgency of addressing these ethical challenges cannot be overstated. We must act now, before AGI becomes a reality, to establish guidelines, regulations, and societal norms that will shape its development and deployment. This proactive approach is essential to ensuring that AGI serves humanity's best interests rather than becoming a tool for further oppression or inequality.</p><p>In conclusion, the ethical implications of AGI demand our immediate and sustained attention. By fostering inclusive dialogue, developing robust ethical frameworks, and committing to democratic values, we can work towards an AGI future that enhances rather than undermines human flourishing. 
The choices we make today will shape the world of tomorrow, making it imperative that we approach the challenge of AGI with wisdom, foresight, and a steadfast commitment to the common good(s).</p>]]></content:encoded></item></channel></rss>