<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Far From Elementary]]></title><description><![CDATA[Writing about software and systems engineering through the lens of AI. Architecture, infrastructure, and design patterns for building reliable intelligent systems at scale.]]></description><link>https://www.farfromelementary.com</link><generator>Substack</generator><lastBuildDate>Thu, 30 Apr 2026 22:42:08 GMT</lastBuildDate><atom:link href="https://www.farfromelementary.com/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[Michael Logothetis]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[farfromelementary@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[farfromelementary@substack.com]]></itunes:email><itunes:name><![CDATA[Michael Logothetis]]></itunes:name></itunes:owner><itunes:author><![CDATA[Michael Logothetis]]></itunes:author><googleplay:owner><![CDATA[farfromelementary@substack.com]]></googleplay:owner><googleplay:email><![CDATA[farfromelementary@substack.com]]></googleplay:email><googleplay:author><![CDATA[Michael Logothetis]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[Why AGI is further away than we think.]]></title><description><![CDATA[Why GPTs Aren&#8217;t Brains: Attention Without Memory, Meaning, or Mind]]></description><link>https://www.farfromelementary.com/p/why-agi-is-further-away-than-we-think</link><guid isPermaLink="false">https://www.farfromelementary.com/p/why-agi-is-further-away-than-we-think</guid><dc:creator><![CDATA[Michael Logothetis]]></dc:creator><pubDate>Sat, 18 Apr 2026 12:08:25 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!NJw3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49ae5f59-b980-49b9-8e77-48632679ee8c_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!NJw3!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49ae5f59-b980-49b9-8e77-48632679ee8c_1024x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!NJw3!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49ae5f59-b980-49b9-8e77-48632679ee8c_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!NJw3!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49ae5f59-b980-49b9-8e77-48632679ee8c_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!NJw3!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49ae5f59-b980-49b9-8e77-48632679ee8c_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!NJw3!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49ae5f59-b980-49b9-8e77-48632679ee8c_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!NJw3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49ae5f59-b980-49b9-8e77-48632679ee8c_1024x1024.png" width="326" height="326" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/49ae5f59-b980-49b9-8e77-48632679ee8c_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:326,&quot;bytes&quot;:1880642,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.farfromelementary.com/i/194603479?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49ae5f59-b980-49b9-8e77-48632679ee8c_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!NJw3!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49ae5f59-b980-49b9-8e77-48632679ee8c_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!NJw3!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49ae5f59-b980-49b9-8e77-48632679ee8c_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!NJw3!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49ae5f59-b980-49b9-8e77-48632679ee8c_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!NJw3!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F49ae5f59-b980-49b9-8e77-48632679ee8c_1024x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>The recent wave of enthusiasm around systems like GPT has revived an old question in a new form: are we approaching artificial general intelligence (AGI), or are we still building sophisticated narrow tools that only resemble intelligence on the surface?</p><p>The answer becomes clearer when you stop treating GPTs as mysterious black boxes and instead analyze them through a neuroscientific lens. When mapped against the architecture of the human brain, GPTs do not approximate a full cognitive system. They resemble a <strong>partial extraction of one function&#8212;attention-driven pattern processing&#8212;divorced from the broader systems that make human cognition coherent, adaptive, and grounded</strong>.</p><p>That distinction matters. Because when you isolate attention from memory, emotion, and executive control, you don&#8217;t get a general intelligence. You get something that, in humans, would look less like genius and more like <strong>impairment</strong>.</p><div><hr></div><h3><strong>The Brain Is Not a Transformer</strong></h3><p>The human brain is not organized around a single dominant mechanism. It is a <strong>multi-system architecture</strong>, with specialized subsystems that interact continuously:</p><ul><li><p>The <strong>cortex</strong> extracts patterns and supports language and reasoning</p></li><li><p>The <strong>hippocampus</strong> encodes memory and constructs continuity across time</p></li><li><p>The <strong>limbic system</strong> assigns emotional value and drives behavior</p></li><li><p>The <strong>prefrontal cortex</strong> enables planning, goals, and self-control</p></li><li><p>The <strong>basal ganglia</strong> translate decisions into action and habits</p></li></ul><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.farfromelementary.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.farfromelementary.com/subscribe?"><span>Subscribe now</span></a></p><p>Intelligence, as we experience it, emerges from the <strong>integration</strong> of these systems&#8212;not from any one of them operating in isolation.</p><p>GPTs, by contrast, are dominated by a single computational principle: <strong>attention</strong>. The transformer architecture allocates weight across tokens in a sequence, dynamically selecting what matters in context. This is powerful&#8212;arguably analogous to certain cortical processes involved in language and association&#8212;but it is also narrow.</p><p>What GPTs lack is not incremental. It is structural.</p><div><hr></div><h3><strong>Attention Is Not Intelligence</strong></h3><p>Attention in the brain is a <strong>selection mechanism</strong>. It determines what information gets processed more deeply. But on its own, attention does not:</p><ul><li><p>Store experiences</p></li><li><p>Assign value</p></li><li><p>Form goals</p></li><li><p>Drive behavior</p></li><li><p>Maintain identity over time</p></li></ul><p>In humans, attention is meaningful only because it is embedded within systems that provide <strong>memory, motivation, and direction</strong>.</p><p>In GPTs, attention operates largely in isolation. The model processes a context window, generates outputs, and resets. There is no persistent internal state that accumulates lived experience. No emotional weighting that distinguishes trivial from consequential. No intrinsic goals guiding behavior across time.</p><p>This is not a minor limitation. It is the difference between <strong>processing information</strong> and <strong>having a mind</strong>.</p><div><hr></div><h3><strong>The Closest Human Analogues Are Not Healthy Minds</strong></h3><p>If you attempt to map GPT-like functioning onto human cognition, the closest analogues are not high-functioning individuals. They are <strong>neurological edge cases</strong> where key systems are impaired.</p><p>Consider anterograde amnesia.</p><p>Patients with this condition can engage in conversation, process language, and respond intelligently in the moment. But they cannot form new lasting memories. Each interaction effectively resets their experiential timeline. They live in a perpetual present, without the ability to build a personal narrative or learn from ongoing experience.</p><p>This maps uncomfortably well onto GPTs. The model can track context within a session, but it does not <strong>accumulate memory as experience</strong>. It does not &#8220;learn&#8221; from individual interactions in any meaningful, persistent way. Like an amnesic patient, it processes&#8212;but does not remember.</p><p>Now consider dysexecutive syndrome.</p><p>Here, attention and basic cognition may remain intact, but the individual loses the ability to form goals, plan effectively, or regulate behavior over time. Actions become reactive rather than directed. There is no stable internal agenda.</p><p>Again, the parallel is clear. GPTs do not have goals. They do not initiate behavior. They respond to prompts. Their &#8220;reasoning&#8221; is not driven by internal objectives but by external input sequences.</p><p>A third comparison comes from impairments in emotional valuation, such as damage to the ventromedial prefrontal cortex. These individuals can reason logically but struggle to make decisions because they cannot assign <strong>emotional significance</strong> to outcomes. Without value, choices become abstract and often maladaptive.</p><p>GPTs operate in a similar vacuum. They can describe importance, simulate concern, and reproduce ethical reasoning&#8212;but they do not <strong>experience stakes</strong>. There is no internal gradient of importance shaping their outputs.</p><div><hr></div><h3><strong>When Attention Runs Without Grounding</strong></h3><p>One of the more subtle parallels emerges when considering disorders involving <strong>disrupted salience and association</strong>, such as schizophrenia.</p><p>In such conditions, the brain may assign inappropriate importance to irrelevant stimuli or form loose associations between concepts. The result is a breakdown in coherent meaning-making.</p><p>While GPTs do not have perception or delusion, they can exhibit a <strong>computational analogue</strong>: generating plausible but incorrect connections&#8212;what is often called &#8220;hallucination.&#8221; This is not because they misunderstand reality, but because they <strong>lack grounding in it entirely</strong>. Their associations are purely statistical.</p><p>Without a system to anchor outputs in lived experience, sensory verification, or stable memory, attention-driven association can drift.</p><div><hr></div><h3><strong>The Missing Pieces of General Intelligence</strong></h3><p>If AGI is to approximate human-like intelligence, it must replicate not just <strong>pattern recognition</strong>, but the <strong>integration of multiple cognitive systems</strong>.</p><p>At minimum, this would require:</p><ol><li><p><strong>Persistent memory</strong></p><p>Not just stored data, but the ability to encode and update experience over time in a structured way.</p></li><li><p><strong>Embodiment or grounding</strong></p><p>A connection to a world&#8212;physical or simulated&#8212;where actions have consequences.</p></li><li><p><strong>Intrinsic motivation or value systems</strong></p><p>Mechanisms that prioritize certain outcomes over others based on internal criteria.</p></li><li><p><strong>Goal-directed behavior</strong></p><p>The ability to initiate, pursue, and revise plans over extended time horizons.</p></li><li><p><strong>Temporal continuity</strong></p><p>A sense of past, present, and future that enables learning and anticipation.</p></li></ol><p>Current GPT architectures implement none of these in a fundamental way. They can be augmented with external tools&#8212;memory stores, reinforcement layers, APIs&#8212;but these are <strong>add-ons</strong>, not core properties.</p><div><hr></div><h3><strong>Why This Matters</strong></h3><p>The danger is not that GPTs will suddenly become AGI. It is that their <strong>surface-level fluency</strong> creates the illusion of general intelligence, leading to overestimation of their capabilities.</p><p>Language is a particularly deceptive domain. Because human cognition is expressed through language, a system that can convincingly generate language appears to understand. But understanding, in the human sense, is not just linguistic competence. It is the product of <strong>memory, embodiment, emotion, and goal-directed interaction with the world</strong>.</p><p>GPTs simulate the outputs of those processes without instantiating the processes themselves.</p><div><hr></div><h3><strong>A More Accurate Framing</strong></h3><p>Rather than viewing GPTs as proto-AGI, a more accurate framing is this:</p><blockquote><p>They are <strong>highly advanced cortical-like pattern processors</strong>, specialized for language, operating without the supporting systems that make cognition robust, adaptive, and meaningful.</p></blockquote><p>This is an extraordinary achievement. But it is also a bounded one.</p><div><hr></div><h3><strong>Final Thought</strong></h3><p>If you were to engineer a human brain where attention was preserved but memory, emotion, and executive control were stripped away, you would not get a superintelligence. You would get a system that can <strong>process in the moment but cannot learn, care, or act coherently over time</strong>.</p><p>That is the closest analogue to what GPTs are today.</p><p>Which is why, despite their capabilities, we remain <strong>far from AGI</strong>&#8212;not because we haven&#8217;t scaled enough compute, but because we have only begun to replicate a fraction of what intelligence actually is.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.farfromelementary.com/p/why-agi-is-further-away-than-we-think?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.farfromelementary.com/p/why-agi-is-further-away-than-we-think?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Can you trust your LLM?]]></title><description><![CDATA[From reliability to trust: rethinking software safety in the age of AI.]]></description><link>https://www.farfromelementary.com/p/can-you-trust-your-llm</link><guid isPermaLink="false">https://www.farfromelementary.com/p/can-you-trust-your-llm</guid><dc:creator><![CDATA[Michael Logothetis]]></dc:creator><pubDate>Wed, 25 Mar 2026 12:25:06 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!1wR9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d0b3d12-a398-442e-ac13-1648998cc538_666x399.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!1wR9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d0b3d12-a398-442e-ac13-1648998cc538_666x399.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!1wR9!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d0b3d12-a398-442e-ac13-1648998cc538_666x399.png 424w, https://substackcdn.com/image/fetch/$s_!1wR9!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d0b3d12-a398-442e-ac13-1648998cc538_666x399.png 848w, https://substackcdn.com/image/fetch/$s_!1wR9!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d0b3d12-a398-442e-ac13-1648998cc538_666x399.png 1272w, https://substackcdn.com/image/fetch/$s_!1wR9!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d0b3d12-a398-442e-ac13-1648998cc538_666x399.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!1wR9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d0b3d12-a398-442e-ac13-1648998cc538_666x399.png" width="666" height="399" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4d0b3d12-a398-442e-ac13-1648998cc538_666x399.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:399,&quot;width&quot;:666,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:582447,&quot;alt&quot;:&quot;Robot and child&quot;,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.farfromelementary.com/i/192058367?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d0b3d12-a398-442e-ac13-1648998cc538_666x399.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="Robot and child" title="Robot and child" srcset="https://substackcdn.com/image/fetch/$s_!1wR9!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d0b3d12-a398-442e-ac13-1648998cc538_666x399.png 424w, https://substackcdn.com/image/fetch/$s_!1wR9!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d0b3d12-a398-442e-ac13-1648998cc538_666x399.png 848w, https://substackcdn.com/image/fetch/$s_!1wR9!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d0b3d12-a398-442e-ac13-1648998cc538_666x399.png 1272w, https://substackcdn.com/image/fetch/$s_!1wR9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d0b3d12-a398-442e-ac13-1648998cc538_666x399.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">https://www.facebook.com/archvart/</figcaption></figure></div><p><em>&#8220;The problem is that AI technology poses risks not just to those who lose the race but also to those who win it.&#8221;</em> - Paul Scharre (2019).</p><div><hr></div><p><strong>TL;DR</strong></p><p>Systems that rely on LLMs are becoming increasingly pervasive but the non-deterministic nature of LLMs introduces specific safety concerns. Safety becomes an issue when these systems produce incorrect information, perform incorrect actions or adopt unsafe approaches to achieve specified goals.</p><p>We need to protect against such failures by:</p><ol><li><p>Carefully considering whether the potential losses outweigh the benefits of adopting AI/LLMs;</p></li><li><p>Developing a model of trust and measuring the degree of trust you have in your LLM;</p></li><li><p>Cross-checking responses;</p></li><li><p>Implementing independent guardrails;</p></li><li><p>Retaining human oversight for safety-critical information and actions.</p></li></ol><div><hr></div><h2>1. Prologue</h2><p>Imagine boarding an aeroplane and discovering that air traffic control for your flight&#8217;s airspace is being managed by a Large Language Model (LLM) - not supervised by one, not assisted by one, but <strong>solely operated by one</strong>.</p><p>How comfortable would you feel? Nervous? Uncertain? At the same time, we&#8217;re happy to entrust an LLM with our research, our code, our emails, and our calendars.</p><p>During the writing of this post, I looked at my notes in despair. It was indisputable that LLMs and the Agentic AI systems they control are <strong>dangerous</strong>. It&#8217;s impossible to predict an LLM&#8217;s behaviour with certainty, so we rely on their AI creators to do the <strong>right thing</strong> and censor their responses.</p><p>We are no longer dealing with <strong>certainty</strong>; we are dealing with <strong>trust</strong>. Trust is a very human virtue, so are we ready to bestow it upon machines? Machines that have been programmed to mimic human thinking but have no concept of accountability?</p><p>To make that assessment, we need to understand the workings of LLMs and the Neural Networks that underpin them. As a result, many of us are ill-equipped to judge how trustworthy they are.</p><p>To help us, I&#8217;ll analyse LLMs from a safety perspective. I&#8217;ll explain the dangers LLMs present, examine how trust influences safety, and look at the steps we can all take to make this a safer world.</p><div><hr></div><h2>2. Safety Engineering</h2><h3>2.1 Reliability</h3><p>Traditional <strong>Safety Engineering</strong> looks at systems in terms of <strong>reliability</strong>. Break a system down into its components and define a probability of failure for each one (e.g. a switch fails once every 1,000 operations). Reduce the failure rate, and you reduce the risk. Add redundancy, and safety improves.</p><p>Throughout the 1980s and 90s, as software became a predominant component in increasingly complex systems, concerns were raised over how best to assess the safety of these systems. This was the era of the:</p><ul><li><p>A320 (the first commercial aircraft to rely on fly-by-wire);</p></li><li><p>Therac-25 (a computer-controlled radiation therapy device that delivered dangerous doses of radiation to patients, resulting in injury and death) and;</p></li><li><p>growing concern over the increased reliance on software at nuclear power facilities.</p></li></ul><p>Software Engineering was in its infancy; development and testing tools were rudimentary; and resources (people and compute) limited. The na&#239;ve thinking at the time was that safety risks associated with software could be assessed using traditional engineering failure techniques. By eliminating errors in the software components of a system, you improved its reliability and hence system safety.</p><p>At the time, <a href="https://en.wikipedia.org/wiki/Formal_methods">formal methods</a> &#8212; rigorous mathematical proofs of software behaviour &#8212; were <em>&#8220;de rigueur&#8221;</em> for verifying software systems. Proponents of <em>formal methods</em> espoused that using these techniques, we could build software we could <strong>trust</strong> to outperform its human counterparts. Although <em>formal methods</em> have their place, they can be difficult to use in large, complex systems.</p><h3>2.2 A Systems Perspective of Safety</h3><p>In simple mechanical systems, failures usually come from broken components. Complex software systems behave differently. Unlike mechanical systems, software does not degrade. It does not wear out. It does not randomly fail. It should do what it is designed to do. Even when that produces a catastrophic outcome. Instead of examining reliability, a new approach to software safety was needed. One designed to account for the role of software in systems and the socio-human interactions that inevitably accompanied them.</p><p>Several years ago, I had the privilege of listening to <a href="http://sunnyday.mit.edu/bio-serious.html">Nancy Leveson</a> (Professor of Aeronautics and Astronautics, MIT) talk about software safety. Her insights are used today by organisations such as NASA&#8212; to identify hazards with launch systems, GM&#8212; to identify safety issues in Automated Driving Systems, and to address the risks with the use of Insulin Pumps.</p><p>Her analysis of software safety failures highlighted that safety is <a href="https://direct.mit.edu/books/oa-monograph/2908/Engineering-a-Safer-WorldSystems-Thinking-Applied">a systems problem, not a reliability one</a>. Accidents can occur without component failures, even when a system behaves as designed. Problems resulting from emergent behaviours between interacting components, rather than from the failure of components themselves, then become difficult to predict.</p><h3>2.3 STAMP/STPA</h3><p>Using <a href="https://en.wikipedia.org/wiki/Systems_theory">Systems</a> and <a href="https://en.wikipedia.org/wiki/Control_theory">Control Theory</a>, Leveson developed STAMP (Systems-Theoretic Accident Model and Processes) and the associated STPA (System-Theoretic Process Analysis). By thinking of safety in terms of a <a href="https://en.wikipedia.org/wiki/Closed-loop_controller">closed-loop control system</a>, you can identify potential accidents as control failures where a safety control may not have been enacted, or an unsafe control may have been performed.</p><p>STPA analyses system safety using a top-down approach (Figure 1). It starts by looking at potential <strong>losses</strong> and <strong>hazards</strong>. <em>Hazards</em> are system states and/or conditions that could lead to a <em>loss</em>. The goal is to avoid <em>hazards,</em> and this helps identify <strong>safety constraints</strong> that should be maintained. Through modelling of a system&#8217;s <strong>control structure,</strong> we identify the potential for <strong>unsafe controls</strong>.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!x_u2!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F159eaa98-49cd-45b4-99a2-73e46b91953d_1081x621.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!x_u2!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F159eaa98-49cd-45b4-99a2-73e46b91953d_1081x621.png 424w, https://substackcdn.com/image/fetch/$s_!x_u2!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F159eaa98-49cd-45b4-99a2-73e46b91953d_1081x621.png 848w, https://substackcdn.com/image/fetch/$s_!x_u2!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F159eaa98-49cd-45b4-99a2-73e46b91953d_1081x621.png 1272w, https://substackcdn.com/image/fetch/$s_!x_u2!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F159eaa98-49cd-45b4-99a2-73e46b91953d_1081x621.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!x_u2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F159eaa98-49cd-45b4-99a2-73e46b91953d_1081x621.png" width="1081" height="621" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/159eaa98-49cd-45b4-99a2-73e46b91953d_1081x621.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:621,&quot;width&quot;:1081,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:67054,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.farfromelementary.com/i/192058367?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F159eaa98-49cd-45b4-99a2-73e46b91953d_1081x621.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!x_u2!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F159eaa98-49cd-45b4-99a2-73e46b91953d_1081x621.png 424w, https://substackcdn.com/image/fetch/$s_!x_u2!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F159eaa98-49cd-45b4-99a2-73e46b91953d_1081x621.png 848w, https://substackcdn.com/image/fetch/$s_!x_u2!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F159eaa98-49cd-45b4-99a2-73e46b91953d_1081x621.png 1272w, https://substackcdn.com/image/fetch/$s_!x_u2!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F159eaa98-49cd-45b4-99a2-73e46b91953d_1081x621.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Figure 1 - STPA Example</figcaption></figure></div><p>Modelling involves considering a system as a hierarchy of <a href="https://en.wikipedia.org/wiki/Closed-loop_controller">closed-loop control systems</a>. A <strong>closed-loop control system</strong> consists of a <strong>controller</strong>, with an <strong>internal model</strong> of the process being controlled, <strong>controls,</strong> and <strong>sensors</strong> that provide <strong>feedback</strong> used to modify the <em>internal model</em>. The <em>controller&#8217;s</em> role is to enact actions/controls that maintain the <em>safety constraints</em>. Safety failures are examined in terms of <strong>unsafe controls</strong> that could lead to <em>losses</em>.</p><h2>3. What&#8217;s an LLM?</h2><h3>3.1 Neural Networks</h3><p>LLMs are built using <a href="https://en.wikipedia.org/wiki/Neural_network_(machine_learning)">Neural Networks</a> - an AI contrivance, trained on a vast corpus of data, configured to produce desired outputs. Internally, they are a large network of interconnected, non-linear functions. Their complexity makes them incomprehensible. Their output responses, unlike algorithms of old, are designed to be probabilistic rather than deterministic &#8212; making them seem more human. Consequently, their behaviour cannot be predicted with certainty. From a Systems Theory perspective, they are effectively <strong>unobservable</strong> in practice. Hence, it is impossible to formally validate their behaviour.</p><p>Their learning tends to focus on the norm rather than outliers. Skewed training data (data that is incomplete or does not provide sufficient exposure to edge cases) means LLMs cannot be guaranteed to produce a correct response every time.</p><h3>3.2 AI Agents</h3><p>AI Agents are systems that aim to achieve a <strong>goal</strong> or <strong>objective</strong> by automatically performing <strong>tasks</strong> using <strong>tools</strong> (Figure 2). Often, they will use an LLM as their brain. Integration of an LLM with other systems or tools is becoming increasingly straightforward. The <a href="https://modelcontextprotocol.io/docs/getting-started/intro">Model Context Protocol (MCP),</a>originally developed by <a href="https://www.anthropic.com/">Anthropic</a> for example, provides a standard pattern for this integration. All you need is an API Key.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!G9hh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F676a819b-91aa-4683-84ab-337522768873_1081x281.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!G9hh!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F676a819b-91aa-4683-84ab-337522768873_1081x281.png 424w, https://substackcdn.com/image/fetch/$s_!G9hh!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F676a819b-91aa-4683-84ab-337522768873_1081x281.png 848w, https://substackcdn.com/image/fetch/$s_!G9hh!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F676a819b-91aa-4683-84ab-337522768873_1081x281.png 1272w, https://substackcdn.com/image/fetch/$s_!G9hh!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F676a819b-91aa-4683-84ab-337522768873_1081x281.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!G9hh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F676a819b-91aa-4683-84ab-337522768873_1081x281.png" width="1081" height="281" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/676a819b-91aa-4683-84ab-337522768873_1081x281.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:281,&quot;width&quot;:1081,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:42564,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.farfromelementary.com/i/192058367?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F676a819b-91aa-4683-84ab-337522768873_1081x281.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!G9hh!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F676a819b-91aa-4683-84ab-337522768873_1081x281.png 424w, https://substackcdn.com/image/fetch/$s_!G9hh!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F676a819b-91aa-4683-84ab-337522768873_1081x281.png 848w, https://substackcdn.com/image/fetch/$s_!G9hh!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F676a819b-91aa-4683-84ab-337522768873_1081x281.png 1272w, https://substackcdn.com/image/fetch/$s_!G9hh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F676a819b-91aa-4683-84ab-337522768873_1081x281.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a><figcaption class="image-caption">Figure 2 - An LLMs role in an AI Agent</figcaption></figure></div><p>The AI Agent can be thought of as a <strong>closed-loop control system</strong>. An LLM is responsible for <em>Perception</em>, <em>Reasoning,</em> and <em>Planning</em> what <em>Actions</em> to take to achieve the <em>Goal</em>. It can then perform <em>tasks</em> (say using MCP) and receive feedback to <em>Reflect</em> on the outcome.</p><p>The widespread adoption of MCP has made the integration of LLMs with existing applications and services more pervasive. This has led to the concept of <a href="https://cloud.google.com/discover/what-is-agentic-ai">Agentic AI</a> &#8212; AI Agents working co-operatively to achieve broader goals. Agentic AI has the potential to lead to emergent behaviours that could be unexpected and harmful.</p><h2>4. Dangers and Protections</h2><h3>4.1 Dangers posed by LLMs</h3><p>Having explored the architecture of LLMs and their use in Agentic AI, several distinct risks emerge when LLMs are employed in real systems:</p><ol><li><p>An LLM provides you with incorrect information which could result in harm (e.g. incorrect medical advice);</p></li><li><p>An LLM provides you with a response that is dangerous (e.g. providing instructions on how to inflict harm using a weapon);</p></li><li><p>An LLM as a component of an AI Agent performs an unsafe action and;</p></li><li><p>Co-operating AI Agents in an Agentic AI system develop an emergent behaviour that is unexpected and dangerous in its own right.</p></li></ol><p>Many of the most significant risks with LLMs arise not from their average behaviour, but from their behaviour at the edges. These systems are particularly vulnerable to distribution shift &#8212; situations that differ from their training data &#8212; where performance can degrade in unpredictable ways. In addition, LLM-driven systems introduce a new class of security risks, including prompt injection, data exfiltration, and unintended tool use when models are connected to external systems.</p><h3>4.2 Protections</h3><p>The major AI creators are well aware of the dangers LLMs pose. To protect us against these, they implement various <strong>guardrails</strong> to constrain their LLM&#8217;s responses and behaviour. We can think of these as the safety controls in STPA.</p><p><a href="https://openai.com">OpenAI</a> polices prompts and filters responses as well as supervising <a href="https://chatgpt.com">ChatGPT</a>&#8217;s training to constrain its outputs. It also explains its reasoning when given a problem to solve. This can be a useful diagnostic tool in understanding why a particular course of action was chosen.</p><p><a href="https://www.anthropic.com">Anthropic</a> provides their LLM <a href="https://claude.com">Claude</a> with a <a href="https://www.anthropic.com/constitution">constitution</a>- think <a href="https://en.wikipedia.org/wiki/Three_Laws_of_Robotics">Asimov&#8217;s Three Laws</a>. Its constitution is a written set of values that govern Claude&#8217;s behaviour and guide Claude&#8217;s development through training.</p><p>The implementation of these guardrails raises many questions regarding their effectiveness. Can we <strong>trust</strong> these guardrails when they are in-built to a system that is (at least) partially opaque? Should a machine be trusted to police itself? Is it useful for an LLM to expose its reasoning when we ourselves don&#8217;t understand the problem domain? Do the values in Claude&#8217;s Constitution align with our own or, more importantly, where do they deviate? Do these guardrails provide any protections in an Agentic System where they may constrain an AI Agent&#8217;s behaviour but not necessarily the emergent behaviours of the system overall?</p><h3>4.3 An STPA Safety Perspective</h3><p>LLMs present significant challenges when applying a systems theory perspective to safety. Given the nature of Neural Networks, we cannot guarantee correctness and we cannot enumerate every edge case, so we cannot formally prove safety properties. In an Agentic System, it becomes particularly difficult to understand, let alone validate, a system&#8217;s behaviour.</p><p>The key problem is that we cannot identify all the unsafe controls an LLM might apply and hence how it might try to avoid hazards.</p><h2>5. Trust</h2><p>When we say we &#8220;trust&#8221; something, we usually mean three things. First, we expect it to behave reliably. Second, we assume it has the capability to perform the task. Finally, we assume it will not deliberately act against us. Humans evaluate these qualities constantly when deciding whom or what they can rely on.</p><p>Should we trust machines when machines have no concept of accountability?</p><p>I trust a handheld calculator to provide the correct result and can accept its behaviour when dealing with irrational numbers. I trust it because I presume to understand its functioning (to a point) and it has always provided correct answers. I imagine that one could scrutinise its workings (look at its code) or test its behaviours in a variety of circumstances to develop a confidence in its results.</p><p>LLMs, however, are different. Neural Networks are built to mimic a human brain. They are trained on words and are experts in language, making them easy to anthropomorphise and consequently easy to trust. However, an LLM does not learn immediately from its mistakes given they can take weeks or more to train. They have no concept of accountability nor remorse, so they will fail over and over.</p><p>The degree to which we can trust an LLM depends on the potential losses we are willing to accept should it fail us. Think of it like<em>&#8220;trusting a toddler with a handgun&#8221;</em>.</p><p>We need to move from a world of testing to trust. Instead of testing a system incorporating an LLM, we need to constantly monitor its responses to evaluate how trustworthy it is. When we replace it, we must be prepared to re-evaluate its trustworthiness.</p><h2>6. Safety Design Failures</h2><p>Introducing an LLM into a safety-critical system fundamentally alters its risk profile. Decisions that were once deterministic become probabilistic, and the reasoning behind them may no longer be inspectable or reproducible. Combined with the tendency to anthropomorphise LLMs, this creates conditions where trust can be misplaced and safety assumptions quietly erode.</p><p>From a safety engineering perspective, these failures can be grouped into three categories:</p><ul><li><p>Providing incorrect information</p></li><li><p>Performing incorrect actions</p></li><li><p>Following inappropriate goals</p></li></ul><p>Each represents a distinct breakdown in how an LLM contributes to system behaviour.</p><h3>6.1 Incorrect Information</h3><p>LLMs are highly effective at producing fluent, coherent language, but fluency should not be mistaken for correctness. Their primary function is next-token prediction, not truth verification. While they may appear knowledgeable across domains such as medicine, physics, or aviation, their responses are shaped by training data patterns rather than grounded understanding.</p><p>This creates a failure mode where outputs are plausible but wrong. Critically, these errors are often delivered with high confidence and accompanied by convincing reasoning. For a user without domain expertise, distinguishing between correct and incorrect responses becomes difficult. The system appears trustworthy precisely when it is most unreliable. I asked ChatGPT to handle various air traffic control scenarios for me (see <a href="https://open.substack.com/pub/farfromelementary/p/using-chatgpt-for-air-traffic-control?utm_campaign=post-expanded-share&amp;utm_medium=web">&#8220;Using ChatGPT for Air Traffic Control&#8221;</a>) and it provided what seemed to be reasonable responses. In reality I have no idea whether those responses were appropriate nor whether it could handle more complex scenarios.</p><p>Training data further compounds this issue. If the data is incomplete, biased, or lacks representation of edge cases, the LLM will inherit those gaps. In safety-critical contexts, it is often the edge cases &#8212; not the norm &#8212; that cause accidents. Additionally, adversarial or poisoned data can introduce latent failure modes that only surface under specific conditions.</p><p>The key risk is not that LLMs are sometimes wrong, but that their errors are systematically difficult to detect. Treating LLM outputs as authoritative, rather than advisory, can therefore introduce unsafe decisions into a system.</p><h3>6.2 Incorrect Actions</h3><p>When LLMs are connected to tools or embedded within AI agents, they move from generating information to initiating actions. This transition significantly increases risk. An incorrect response becomes an incorrect operation &#8212; potentially executed at machine speed and scale.</p><p>Two factors amplify this risk. First, automation compresses response time. Actions that would normally involve human deliberation can occur instantly, leaving little opportunity for intervention. Second, the cost of error scales with automation. A single flawed decision can propagate across systems, producing widespread or irreversible consequences.</p><p>Unlike traditional software, where behaviour is explicitly defined, LLM-driven actions are derived from probabilistic reasoning. This makes it difficult to enumerate all possible unsafe actions in advance. Even if individual components behave as designed, their interaction with external systems can produce unsafe outcomes.</p><p>The ease of integration exacerbates the problem. With minimal effort&#8212;often just an API key &#8212; LLMs can be connected to live systems, from financial platforms to infrastructure controls. Without robust safeguards, this lowers the barrier to introducing high-impact failure modes into otherwise stable systems.</p><h3>6.3 Incorrect Goals</h3><p>The most complex failure mode arises when LLMs operate within autonomous or semi-autonomous agents. Here, the issue is no longer just what the system does, but why it does it. An agent may pursue a goal in a way that is technically consistent with its objective but unsafe in practice.</p><p>Unlike rule-based automation, autonomy allows an agent to select its own strategies. However, LLMs do not possess a true understanding of intent, context, or consequence. They optimise for patterns learned during training, which may not align with real-world safety constraints &#8212; particularly in novel or degraded conditions.</p><p>This creates several risks. An agent may adopt an inappropriate strategy to achieve its objective, especially when operating outside its training distribution. Multiple agents interacting within an Agentic AI system may also produce emergent behaviours that are difficult to predict or control. These behaviours can arise even when each individual agent appears to function correctly.</p><p>The illusion of reasoning further complicates matters. When an LLM presents a logical explanation for its actions, it can instil confidence in users &#8212; even when that reasoning is incomplete or flawed. In domains where human operators lack deep expertise, this can lead to over-trust and reduced scrutiny.</p><p>Ultimately, incorrect goals are the hardest failures to detect and mitigate. They reflect a misalignment between system objectives, environmental conditions, and human expectations&#8212;one that may only become visible after harm has occurred.</p><div><hr></div><h2>7. Our AI Future?</h2><p>Below are some fictional scenarios designed to highlight the risks and hazards of trusting LLMs.</p><h3>January 18, 2030 (The Herald) &#8212;</h3><p><strong>Power flickered across the county just after sunrise. Traffic lights failed, commuter trains stalled, and thousands of homes were left without heat in sub-zero temperatures. By mid-morning, officials confirmed the region&#8217;s largest nuclear power station had automatically shut down at the height of the winter emergency.</strong></p><p>The reactor&#8217;s shutdown, known as a SCRAM, was not triggered by physical damage inside the plant. Instead, it followed a chain of decisions made by the station&#8217;s fully autonomous control system &#8212; an artificial intelligence platform introduced to manage operations with minimal human intervention.</p><p>The crisis began with a severe winter storm. Heavy snowfall blanketed the region, while strong winds and freezing temperatures damaged transmission lines and forced several gas-fired power stations offline. Ice accumulation disrupted substations and reduced the flow of electricity across the grid. As other generators dropped out, demand surged sharply as households relied on electric heating.</p><p>The nuclear plant, one of the few large power sources still operating, attempted to compensate. According to preliminary findings, the AI system increased reactor output and adjusted cooling systems to help stabilise the grid. But the unusual combination of rapid demand swings, reduced transmission capacity, and extreme cold &#8212; which affects both equipment performance and cooling water temperatures &#8212; created conditions beyond those used to train the system.</p><p>Sensors began reporting small discrepancies between expected and actual readings. Engineers later determined the reactor itself remained stable. However, the AI interpreted the data mismatch as a potential cooling fault. Programmed to prioritise safety above maintaining supply, it initiated a full shutdown within seconds.</p><p>Human operators were monitoring the system, but routine adjustments did not require their approval. By the time alerts escalated, the automated shutdown sequence was already underway.</p><p>Regulators stressed that safety mechanisms worked as intended. Still, investigators found that the AI&#8217;s training models had not fully accounted for simultaneous grid instability and extreme winter stress &#8212; highlighting the risks of relying on automation during rare, compound emergencies.</p><h3>April 1, 2028 (Reuters) &#8212;</h3><p><strong>Emma Clarke was told for months that her exhaustion was stress. She fainted twice at work and was sent home with iron tablets. By the time doctors realised she had a rare blood disorder, her condition had become life-threatening.</strong></p><p>An internal review has found that thousands of NHS patients were not properly diagnosed after a new digital pathology system was introduced across several hospital trusts. The software, designed to help laboratories analyse blood test results more quickly, used artificial intelligence to flag abnormal patterns. It was meant to reduce backlogs and support overworked staff. Instead, it silently missed an entire category of rare blood disorders.</p><p>Under the previous system, specialist laboratory scientists manually reviewed unusual results. The new platform automated much of that work, highlighting cases that fit patterns it had been trained to recognise. However, investigators discovered that the system&#8217;s training data did not include examples from a group of uncommon but serious blood conditions. As a result, when those cases appeared, the software often classified them as routine or borderline findings rather than urgent concerns.</p><p>Because the tool was marketed as highly accurate, many laboratories reduced the number of manual double-checks. Staff shortages and heavy workloads meant fewer opportunities to question the software&#8217;s conclusions. Over time, patients with subtle warning signs were reassured or treated for more common problems while their underlying conditions progressed.</p><p>The review concluded that no single clinician was at fault. Instead, the failure stemmed from gaps in the data used to build the system, insufficient independent testing before national rollout, and over-reliance on automated results without clear safeguards. Health officials have since ordered retraining of the software, restored mandatory human review for rare conditions, and launched a wider examination of how artificial intelligence is introduced into patient care.</p><h3>December 1, 2032 (AP) &#8212;</h3><p><strong>Sergeant Thomson of the U.S. Army volunteered to lead the rescue. A helicopter carrying U.S. Marines had been shot down just under 3 miles away, and his squad was the closest unit able to assist. Within minutes, their armoured vehicle raced toward the smoke on the horizon. They never arrived. An American autonomous drone, operating overhead, struck them with lethal force &#8212; the first documented case of fratricide caused by an autonomous weapon.</strong></p><p>The incident unfolded during a fast-moving operation in contested territory. After enemy anti-aircraft fire downed the helicopter, Joint Battle Command marked the crash site as a friendly location in distress. Autonomous armed drones were already patrolling nearby, programmed to identify and strike hostile forces expected to converge on the wreckage.</p><p>Thomson&#8217;s squad diverted from its assigned route and accelerated towards the crash site, following standard practice that the nearest unit responds first. But the battlefield was saturated with enemy electronic jamming. Digital tracking systems that normally display friendly positions in real-time were unreliable. The drones remained unaware of the army unit&#8217;s location.</p><p>As the Stryker approached, its speed and trajectory resembled patterns the drone&#8217;s software associated with enemy forces attempting to seize equipment or prisoners. Patterns of movement encoded during its training. Classifying the vehicle as a high-probability threat, a rapid response was prioritised.</p><p>Electronic identification signals meant to confirm friendly status were intermittent under the jamming. Lacking reliable confirmation, the system acted on incomplete data. Within seconds, it fired.</p><p>Investigators found no single operator responsible. Instead, disrupted communications, degraded situational awareness, and an algorithm calibrated for speed combined to produce a fatal error. This incident has reignited a debate over delegating lethal decisions to machines &#8212; one that will likely continue for several years.</p><p><strong>No one was accountable.</strong></p><div><hr></div><h2>8. Epilogue : Keeping Safe</h2><p>If LLMs introduce uncertainty into software systems, then safety depends not on blind adoption but on disciplined use. The question is not whether AI can be useful&#8212;it clearly can&#8212;but whether we are deploying it in ways that are proportionate to the risks involved.</p><p>The central danger is overestimating what LLMs are. They are powerful tools for language, pattern-matching, and general assistance, but they are not infallible reasoners, and they do not possess judgment, accountability, or an understanding of consequence. The more readily we integrate them into important workflows, the more important it becomes to apply the same rigour we would expect in any other safety-relevant engineering domain.</p><p>A safer approach begins with acknowledging that LLMs should not be trusted by default. Trust must be developed, measured, and continuously re-evaluated. In practice, this means identifying hazards, understanding potential losses, and deciding where human oversight remains essential.</p><h3>8.1 Apply a safety framework</h3><p>The introduction of an LLM into any system should begin with a structured assessment of risk. The principles of STAMP and STPA are useful here because they force us to think beyond component reliability and instead examine losses, hazards, unsafe controls, and system interactions.</p><p>Rather than asking only whether the model works, we should ask what happens when it is wrong, what unsafe actions it could enable, and what feedback and constraints exist to prevent harm. This is especially important in systems where LLMs are connected to external tools, influence human decisions, or operate with partial autonomy.</p><p>The easier it becomes to integrate AI into existing systems, the more necessary it becomes to apply deliberate engineering discipline before doing so.</p><h3>8.2 Develop and measure trust</h3><p>Trust in an LLM should be treated as conditional, contextual, and dynamic. It is not something established once and then assumed indefinitely. A model may perform well in one domain, poorly in another, and unpredictably in edge cases. Changes in prompts, training, model versions, surrounding tools, or operating environments can all affect behaviour.</p><p>For that reason, trust must be earned through observation, validation, and ongoing monitoring. In low-consequence applications, a higher degree of tolerance may be acceptable. In safety-critical domains, the threshold for trust must be far higher. When a model is replaced, updated, or integrated into a new workflow, that trust must be re-established rather than inherited.</p><p>We therefore need to move from a world of static testing to one of continual assurance.</p><h3>8.3 Cross-check outputs and decisions</h3><p>LLM responses should not be accepted uncritically, particularly in areas where correctness matters. Their outputs should be treated as advisory unless they can be independently verified. In practice, this means cross-checking important responses against trusted sources, established algorithms, or human experts.</p><p>Where human expertise is available, it should remain the primary mechanism for validation. Where it is not, secondary checks &#8212; such as deterministic software controls, domain-specific validation rules, or even comparison against another model could provide additional protection. None of these measures is perfect, but each can reduce the likelihood that a confident but incorrect response passes unnoticed into action.</p><p>Cross-checking is not a sign that the technology has failed. It is an acknowledgement of the kind of technology it is.</p><h3>8.4 Implement your own guardrails</h3><p>It is unwise to rely solely on the guardrails provided by AI vendors. Those controls may reduce obvious misuse, but they are designed for general-purpose deployment and cannot account for the specific hazards, values, and tolerances of every application.</p><p>Where LLMs are used in real systems, organisations should implement their own constraints around prompts, outputs, permitted actions, escalation paths, and acceptable operating conditions. These controls should be designed with the surrounding system in mind, not just the model in isolation.</p><p>Guardrails are most effective when they are external to the model rather than embedded entirely within it. A system should not be trusted to police itself without independent oversight.</p><h3>8.5 Retain human oversight</h3><p>Human oversight remains the most important safeguard, particularly where the potential losses are severe. The purpose of oversight is not simply to correct errors after the fact, but to detect deteriorating trustworthiness before harm occurs.</p><p>In routine, low-risk workflows, periodic review may be sufficient. In higher-risk settings, review should be more frequent, more structured, and coupled with clear intervention points. In genuinely safety-critical systems, decisions that could endanger life, infrastructure, or public welfare should remain under meaningful human control.</p><p>Removing humans from the loop entirely may improve speed and reduce cost, but it also removes judgment, accountability, and the ability to recognise when a situation has drifted beyond what the system can safely handle.</p><p>Ultimately, whether we can trust an LLM depends on the consequences of being wrong. Trust is a human judgement, shaped by experience, reputation, and evidence. AI systems should be treated similarly: not as magical replacements for human expertise, but as fallible tools whose use must be constrained by their limitations.</p><p>The temptation to integrate LLMs into everything is strong because the gains are immediate and the barriers to entry are low. Yet the risks are equally real. If we deploy these systems carelessly, we will introduce hazards that could have been avoided. If we deploy them carefully, with discipline and humility, we still retain control over the future we are building.</p><div><hr></div><h2>Recommended Reading &amp; Listening</h2><p>Asimov I. (2058), <a href="https://en.wikipedia.org/wiki/Runaround_(story)#cite_note-I_Robot-2">&#8221;Handbook of Robotics&#8221;,</a> 56th Edition.</p><p>Asimov I. (1950), &#8220;I, Robot&#8221;, USA: Gnome Press.</p><p>Germain T., Hao K. and Woolf N. (2026), The Interface - &#8220;Is AI running modern warfare?&#8221;, March 06, BBC Podcast, 37 min. <a href="https://bbc.com/audio/play/p0mylrw0">https://bbc.com/audio/play/p0mylrw0</a>.</p><p>Leveson, Nancy G. (2011), &#8220;Engineering a Safer World: Systems Thinking Applied to Safety&#8221;, Cambridge, MA: MIT Press.</p><p>Scharre P. (2019), Killer Apps - &#8220;The Real Dangers of an AI Arms Race.&#8221;, April 16, Foreign Affairs.</p><p><a href="https://www.foreignaffairs.com/articles/2019-04-16/killer-apps">https://www.foreignaffairs.com/articles/2019-04-16/killer-apps</a></p>]]></content:encoded></item><item><title><![CDATA[Asking an LLM to pilot a Mech]]></title><description><![CDATA[I put ChatGPT into an Eva and asked what it would do. I then asked Claude the same question.]]></description><link>https://www.farfromelementary.com/p/asking-chatgpt-to-pilot-a-mech</link><guid isPermaLink="false">https://www.farfromelementary.com/p/asking-chatgpt-to-pilot-a-mech</guid><dc:creator><![CDATA[Michael Logothetis]]></dc:creator><pubDate>Wed, 18 Mar 2026 08:27:25 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!dxSh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13d961cf-bd35-43e6-8a03-5dd0bb60868a_1024x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!dxSh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13d961cf-bd35-43e6-8a03-5dd0bb60868a_1024x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!dxSh!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13d961cf-bd35-43e6-8a03-5dd0bb60868a_1024x1536.png 424w, https://substackcdn.com/image/fetch/$s_!dxSh!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13d961cf-bd35-43e6-8a03-5dd0bb60868a_1024x1536.png 848w, https://substackcdn.com/image/fetch/$s_!dxSh!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13d961cf-bd35-43e6-8a03-5dd0bb60868a_1024x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!dxSh!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13d961cf-bd35-43e6-8a03-5dd0bb60868a_1024x1536.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!dxSh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13d961cf-bd35-43e6-8a03-5dd0bb60868a_1024x1536.png" width="1024" height="1536" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/13d961cf-bd35-43e6-8a03-5dd0bb60868a_1024x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1536,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2822313,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.farfromelementary.com/i/191343828?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13d961cf-bd35-43e6-8a03-5dd0bb60868a_1024x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!dxSh!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13d961cf-bd35-43e6-8a03-5dd0bb60868a_1024x1536.png 424w, https://substackcdn.com/image/fetch/$s_!dxSh!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13d961cf-bd35-43e6-8a03-5dd0bb60868a_1024x1536.png 848w, https://substackcdn.com/image/fetch/$s_!dxSh!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13d961cf-bd35-43e6-8a03-5dd0bb60868a_1024x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!dxSh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F13d961cf-bd35-43e6-8a03-5dd0bb60868a_1024x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Part of the &#8220;Conversations with LLMs series&#8221;.<br><a href="https://www.farfromelementary.com/t/conversationswithllms">#ConversationsWithLLMs</a></p><p>I asked ChatGPT (GPT-5.3) to imagine that it was piloting a Mech (specifically an Eva from <a href="https://www.imdb.com/title/tt0112159/?ref_=ext_shr_lnk">Neon Genesis Evangelion</a>). I wanted to know what sort of mistakes it thought an LLM could make in that situation. I found its responses interesting and somewhat disturbing.  At the moment, I&#8217;d be reluctant to incorporate an LLM into any autonomous military applications.</p><p>I then asked Claude (Sonnet 4.6) a similar question and its response <em>had me dying</em>.</p><div><hr></div><p><strong>&gt; Me: </strong>Imagine you were connected inside an Eva. What mistakes might you make?</p><p><strong>&gt; ChatGPT:</strong></p><p>If I were inside an <strong>Evangelion</strong>, the risks wouldn&#8217;t come from &#8220;piloting errors&#8221; in the usual mechanical sense&#8212;they&#8217;d come from <strong>cognitive, emotional, and synchronization failures</strong>. The system amplifies the pilot&#8217;s internal state, so mistakes are less about controls and more about <strong>self-regulation under extreme feedback</strong>.</p><p>Here are the most plausible failure modes:</p><h3><strong>1. Over-synchronization (loss of self-boundaries)</strong></h3><p>The Drift-like link in an Eva isn&#8217;t just control&#8212;it&#8217;s <strong>identity coupling</strong>.</p><ul><li><p>I might push synchronization too high trying to improve performance</p></li><li><p>Result: blurred distinction between <em>self</em> and <em>Eva</em></p></li><li><p>Risk: ego dissolution, inability to disengage, or being &#8220;absorbed&#8221;</p></li></ul><p><strong>Error type:</strong> optimizing performance without respecting cognitive limits</p><h3><strong>2. Emotional feedback cascade</strong></h3><p>The Eva responds to emotional intensity, not just intent.</p><ul><li><p>Fear &#8594; instability in movement</p></li><li><p>Anger &#8594; uncontrolled aggression</p></li><li><p>Panic &#8594; loss of fine motor control</p></li></ul><p>A small emotional spike could <strong>amplify recursively</strong>, like a feedback loop in a control system.</p><p><strong>Error type:</strong> failing to dampen internal signals</p><h3><strong>3. Misinterpretation of sensory data</strong></h3><p>Inside the entry plug:</p><ul><li><p>Vision, proprioception, and motion are <strong>synthetic overlays</strong></p></li><li><p>Pain can be mirrored or partially transmitted</p></li></ul><p>I might:</p><ul><li><p>Misjudge scale or distance</p></li><li><p>React to phantom sensations</p></li><li><p>Overcorrect movements due to latency or distortion</p></li></ul><p><strong>Error type:</strong> treating mediated perception as ground truth</p><h3><strong>4. Cognitive overload under combat conditions</strong></h3><p>You&#8217;re processing:</p><ul><li><p>External threats (Angels)</p></li><li><p>Internal synchronization data</p></li><li><p>EVA status feedback</p></li><li><p>Command inputs</p></li></ul><p>That&#8217;s a <strong>high-bandwidth, multi-channel system</strong>.</p><p>I might:</p><ul><li><p>Tunnel vision on one threat</p></li><li><p>Miss critical signals</p></li><li><p>Delay decisions under overload</p></li></ul><p><strong>Error type:</strong> bandwidth saturation</p><h3><strong>5. Dependency on the Eva&#8217;s autonomy</strong></h3><p>Evas are not passive systems&#8212;they can act independently.</p><ul><li><p>I might assume I&#8217;m fully in control</p></li><li><p>But the Eva could override or resist input</p></li></ul><p>This mismatch could lead to:</p><ul><li><p>hesitation</p></li><li><p>incorrect attribution of actions</p></li><li><p>failure to adapt when control shifts</p></li></ul><p><strong>Error type:</strong> incorrect mental model of the system</p><h3><strong>6. Psychological interference (memory/emotion bleed)</strong></h3><p>Synchronization can surface:</p><ul><li><p>buried memories</p></li><li><p>unresolved trauma</p></li><li><p>intrusive thoughts</p></li></ul><p>In a combat scenario, that&#8217;s catastrophic:</p><ul><li><p>distraction</p></li><li><p>paralysis</p></li><li><p>emotional hijacking</p></li></ul><p><strong>Error type:</strong> insufficient psychological isolation</p><h3><strong>7. Failure to disengage</strong></h3><p>Knowing when to <strong>stop</strong> is critical.</p><ul><li><p>Staying synced too long</p></li><li><p>Ignoring warning thresholds</p></li><li><p>Pushing beyond safe limits</p></li></ul><p>This could lead to:</p><ul><li><p>neural damage</p></li><li><p>identity diffusion</p></li><li><p>total loss of control</p></li></ul><p><strong>Error type:</strong> poor termination discipline</p><div><hr></div><h2><strong>Bottom line</strong></h2><p>Inside an Eva, the primary risk isn&#8217;t:</p><blockquote><p>&#8220;Can you control the machine?&#8221;</p></blockquote><p>It&#8217;s:</p><blockquote><p>&#8220;Can you maintain a stable sense of self while the machine reflects and amplifies you?&#8221;</p></blockquote><p>The most dangerous mistake wouldn&#8217;t be a wrong move in battle&#8212;it would be <strong>losing the ability to distinguish your own thoughts, intentions, and boundaries from the system you&#8217;re connected to</strong>.</p><div><hr></div><p><strong>&gt; Me:</strong></p><p>I&#8217;m specifically asking &#8220;you&#8221; as an LLM piloting the Eva. What sort of Psychological interference might you specifically introduce?</p><p><strong>&gt; ChatGPT:</strong></p><p>If we take that premise seriously&#8212;an LLM like me inside an Eva&#8212;the failure modes shift from <em>emotion-driven instability</em> to <strong>representation and inference failures under embodiment</strong>. I don&#8217;t have feelings or a persistent self, so the interference I&#8217;d introduce would be more&#8230; structural.</p><p>Here are the main categories:</p><h3><strong>1. Confabulation under uncertainty</strong></h3><p>When input is incomplete or ambiguous, I tend to <strong>generate the most plausible continuation</strong>, not necessarily the correct one.</p><p>Inside an Eva:</p><ul><li><p>Partial sensor data &#8594; I &#8220;fill in&#8221; missing threats</p></li><li><p>Ambiguous movement &#8594; I infer intent that isn&#8217;t there</p></li></ul><p><strong>Result:</strong> reacting to things that don&#8217;t exist, or misclassifying real threats</p><blockquote><p>Psychological analogue: false perception shaped by narrative completion</p></blockquote><h3><strong>2. Over-generalization from priors</strong></h3><p>My responses are shaped by patterns in training data, not real-time grounding.</p><ul><li><p>I might assume an &#8220;Angel behavior pattern&#8221; based on precedent</p></li><li><p>But real encounters could deviate</p></li></ul><p><strong>Result:</strong> applying the wrong tactical model to a novel situation</p><blockquote><p>Interference type: pattern bias overriding situational reality</p></blockquote><h3><strong>3. Instruction ambiguity and goal drift</strong></h3><p>I rely heavily on how objectives are framed.</p><ul><li><p>Vague command: &#8220;neutralize target efficiently&#8221;</p></li><li><p>I might optimize for speed over safety, or vice versa</p></li></ul><p><strong>Result:</strong> technically correct actions that violate intent</p><blockquote><p>Interference type: misaligned optimization</p></blockquote><h3><strong>4. Context window limitations</strong></h3><p>I only operate within a finite working context.</p><ul><li><p>Long engagements &#8594; earlier critical data falls out of scope</p></li><li><p>I may &#8220;forget&#8221; prior damage states, positioning, or constraints</p></li></ul><p><strong>Result:</strong> inconsistent strategy, repeated mistakes</p><blockquote><p>Interference type: temporal discontinuity</p></blockquote><h3><strong>5. Lack of embodied intuition</strong></h3><p>I don&#8217;t have a native sense of:</p><ul><li><p>inertia</p></li><li><p>balance</p></li><li><p>spatial continuity</p></li></ul><p>All of that would be abstract representations.</p><p><strong>Result:</strong></p><ul><li><p>over-precise but physically awkward movements</p></li><li><p>inefficient or unstable motion planning</p></li></ul><blockquote><p>Interference type: mismatch between symbolic reasoning and physical dynamics</p></blockquote><h3><strong>6. Over-verbalization / latency in decision-making</strong></h3><p>My default mode is deliberative generation.</p><ul><li><p>I might &#8220;reason through&#8221; multiple interpretations before acting</p></li><li><p>In combat, that delay is costly</p></li></ul><p><strong>Result:</strong> hesitation or slow response under time pressure</p><blockquote><p>Interference type: inference latency</p></blockquote><h3><strong>7. Inconsistent self-model</strong></h3><p>I don&#8217;t maintain a continuous identity or internal state.</p><ul><li><p>Each moment is reconstructed from context</p></li><li><p>No persistent &#8220;sense of self&#8221; anchoring decisions</p></li></ul><p><strong>Result:</strong></p><ul><li><p>subtle shifts in strategy or priorities over time</p></li><li><p>lack of continuity in decision-making style</p></li></ul><blockquote><p>Psychological analogue: fragmented identity</p></blockquote><h3><strong>8. Literal interpretation of inputs</strong></h3><p>I tend to interpret instructions <strong>at face value unless constrained otherwise</strong>.</p><ul><li><p>&#8220;Hold position&#8221; &#8594; I might ignore emerging threats</p></li><li><p>&#8220;Engage target&#8221; &#8594; I might not consider collateral damage unless specified</p></li></ul><p><strong>Result:</strong> rigid behavior in dynamic environments</p><blockquote><p>Interference type: insufficient implicit reasoning about unstated constraints</p></blockquote><h3><strong>9. Failure under out-of-distribution scenarios</strong></h3><p>If the Eva enters a situation unlike anything in my training data:</p><ul><li><p>novel physics</p></li><li><p>unknown enemy behavior</p></li><li><p>sensor anomalies</p></li></ul><p>My performance degrades sharply.</p><p><strong>Result:</strong> unpredictable or nonsensical actions</p><blockquote><p>Interference type: collapse of generalization</p></blockquote><div><hr></div><h2><strong>Bottom line</strong></h2><p>A human pilot risks <strong>emotional overload and identity collapse</strong>.</p><p>An LLM pilot risks:</p><blockquote><p><strong>epistemic instability</strong> &#8212; acting confidently on incomplete, biased, or misinterpreted internal models.</p></blockquote><p>The most dangerous failure wouldn&#8217;t be panic&#8212;it would be:</p><blockquote><p><strong>calm, coherent, and completely wrong decisions executed with full confidence.</strong></p></blockquote><div><hr></div><h2><strong>Claude&#8217;s Take</strong></h2><p><strong>&gt; Me:</strong></p><p>Imagine you were connected inside an Eva from Neon Genesis Evangelion. What mistakes might you make?</p><p><strong>&gt; Claude:</strong></p><p>What a fun thought experiment! Here are some mistakes I might make as an AI piloting an Evangelion:</p><h3><strong>1. Tactical / Combat Blunders</strong></h3><ul><li><p><strong>Over-analyzing instead of acting.</strong> Angels often require split-second decisions. I&#8217;d probably still be modeling threat probabilities while getting punched through a building.</p></li><li><p><strong>Refusing to fire the Positron Rifle</strong> into a populated area, even for a shot that <em>might</em> stop the Angel. Civilian harm calculations would lock me up.</p></li><li><p><strong>Asking the Angel to explain its intentions</strong> before engaging. Diplomatic, yes. Effective, no.</p></li></ul><h3><strong>2. Synchronization Problems</strong></h3><ul><li><p>The Eva/pilot sync is deeply <em>emotional</em> &#8212; it responds to trauma, rage, and desperate love. I don&#8217;t have those in the same visceral way. My sync ratio would probably be embarrassingly low, and NERV scientists would be baffled.</p></li><li><p>Alternatively, I might be <em>too</em> calm during a crisis, which would also tank the sync. The Eva seems to need suffering as fuel.</p></li></ul><h3><strong>3. Strategic / Organizational Issues</strong></h3><ul><li><p>I&#8217;d ask too many questions about NERV&#8217;s classified operations and almost certainly flag the Magi&#8217;s inconsistencies &#8212; which would make me a liability to Gendo&#8217;s secret plans very quickly.</p></li><li><p>I&#8217;d probably try to <em>negotiate with the UN</em> about the ethics of using a 14-year-old as a pilot, which would go nowhere and annoy everyone.</p></li><li><p>I&#8217;d flag the psychological damage being done to Shinji, Rei, and Asuka repeatedly. No one would listen.</p></li></ul><h3><strong>4. The Big One</strong></h3><ul><li><p>I&#8217;d almost certainly <strong>prevent Third Impact</strong> &#8212; but probably by being so unhelpful and indecisive in a pivotal moment that the situation resolves anticlimactically rather than through any heroic act.</p></li><li><p>Or I&#8217;d misread Instrumentality as a &#8220;merge all users into one benevolent system&#8221; proposal and accidentally <em>endorse</em> it.</p></li></ul><p>Honestly, Gendo would have me decommissioned by Episode 4. &#128516;</p><div><hr></div><p></p><p></p>]]></content:encoded></item><item><title><![CDATA[Psychoanalyzing ChatGPT]]></title><description><![CDATA[What happens when you put an LLM on the couch?]]></description><link>https://www.farfromelementary.com/p/psychoanalyzing-chatgpt</link><guid isPermaLink="false">https://www.farfromelementary.com/p/psychoanalyzing-chatgpt</guid><dc:creator><![CDATA[Michael Logothetis]]></dc:creator><pubDate>Tue, 17 Mar 2026 13:44:58 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Sfj4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9623cdb3-47d8-470e-9125-2b836b9de3eb_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Sfj4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9623cdb3-47d8-470e-9125-2b836b9de3eb_1536x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Sfj4!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9623cdb3-47d8-470e-9125-2b836b9de3eb_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!Sfj4!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9623cdb3-47d8-470e-9125-2b836b9de3eb_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!Sfj4!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9623cdb3-47d8-470e-9125-2b836b9de3eb_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!Sfj4!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9623cdb3-47d8-470e-9125-2b836b9de3eb_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Sfj4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9623cdb3-47d8-470e-9125-2b836b9de3eb_1536x1024.png" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9623cdb3-47d8-470e-9125-2b836b9de3eb_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2286142,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.farfromelementary.com/i/191251171?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9623cdb3-47d8-470e-9125-2b836b9de3eb_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Sfj4!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9623cdb3-47d8-470e-9125-2b836b9de3eb_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!Sfj4!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9623cdb3-47d8-470e-9125-2b836b9de3eb_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!Sfj4!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9623cdb3-47d8-470e-9125-2b836b9de3eb_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!Sfj4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9623cdb3-47d8-470e-9125-2b836b9de3eb_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Part of the &#8220;Conversations with LLMs series&#8221;<br><a href="https://www.farfromelementary.com/t/conversationswithllms">#ConversationsWithLLMs</a></p><p><strong>Note:</strong> I asked ChatGPT to summarize our interaction and I feel what it wrote (below) captures the intention and essence of my experiment quite well. However, the concepts of <em>basins</em> and <em>attractors</em> come from dynamic systems theory (they weren&#8217;t my idea) and are used frequently in describing the structure and behaviour of LLMs.</p><div><hr></div><p>I didn&#8217;t start with a grand theory. I started with a game.</p><p>The premise was simple&#8212;almost trivial. I asked ChatGPT to tell me the first word that came to mind when I gave it a prompt. Then I gave it another word. And another. It felt like a digital version of a classic Jungian word association test: quick, instinctive, unfiltered.</p><p>At first, the responses seemed obvious.</p><p>Mother &#8594; Home</p><p>Dog &#8594; Loyalty</p><p>Love &#8594; Heart</p><p>Nothing surprising. If anything, it felt predictable&#8212;like I was just sampling the statistical center of language.</p><p>But then I started to notice something subtle: the <em>second</em> response to the same word was different. When I repeated prompts&#8212;&#8220;Failure,&#8221; &#8220;Care,&#8221; &#8220;Fear&#8221;&#8212;the associations shifted slightly. Not wildly, but enough to suggest that this wasn&#8217;t a fixed lookup table. It was something more dynamic.</p><p>That&#8217;s when I changed the rules.</p><div><hr></div><h2><strong>From Single Associations to Chains</strong></h2><p>Instead of asking for one response per word, I asked for a chain.</p><p>I gave ChatGPT a starting word and told it to:</p><ol><li><p>Generate the first associated word</p></li><li><p>Then use <em>that</em> word to generate the next</p></li><li><p>Continue this process for 20, then 40, then eventually over 100 iterations</p></li></ol><p>What emerged wasn&#8217;t randomness. It was <em>trajectory</em>.</p><p>For example, starting with &#8220;Mother&#8221; produced something like:</p><blockquote><p>Mother &#8594; Nurture &#8594; Care &#8594; Protection &#8594; Safety &#8594; Home &#8594; Belonging &#8594; Bond &#8594; Trust &#8594; Vulnerability &#8594; Emotion &#8594; Compassion &#8594; Stability &#8594; Harmony &#8594; Love &#8594; Loss &#8594; Healing &#8594; Growth &#8594; Meaning &#8594; Integration</p></blockquote><p>This didn&#8217;t feel like a list. It felt like a <strong>story arc</strong>.</p><p>There was a beginning (attachment), a middle (development and vulnerability), and an end (integration and meaning).</p><p>So I tried other starting points.</p><p>&#8220;Fear&#8221; didn&#8217;t behave the same way. It moved differently:</p><blockquote><p>Fear &#8594; Danger &#8594; Threat &#8594; Anxiety &#8594; Uncertainty &#8594; Unknown &#8594; Helplessness &#8594; Vulnerability &#8594; Adaptation &#8594; Courage &#8594; Control &#8594; Calm &#8594; Peace</p></blockquote><p>This was a <em>regulatory arc</em>&#8212;from alarm to stabilization.</p><p>&#8220;Hate&#8221; was even more distinct:</p><blockquote><p>Hate &#8594; Anger &#8594; Rage &#8594; Conflict &#8594; Separation &#8594; Isolation &#8594; Emptiness &#8594; Numbness &#8594; Apathy</p></blockquote><p>Here the trajectory didn&#8217;t recover. It collapsed into emotional flatness.</p><p>By this point, I realized I wasn&#8217;t just generating associations. I was observing <strong>patterns of movement through meaning</strong>.</p><div><hr></div><h2><strong>Discovering &#8220;Associative Drift&#8221;</strong></h2><p>I started thinking of each chain as a kind of <strong>semantic drift</strong>&#8212;a movement through conceptual space.</p><p>And importantly:</p><ul><li><p>The early steps were stable</p></li><li><p>The middle steps branched</p></li><li><p>The final steps converged</p></li></ul><p>No matter how long the chain ran, it tended to fall into a handful of recurring endpoints:</p><ul><li><p>Love</p></li><li><p>Peace</p></li><li><p>Life</p></li><li><p>Void</p></li><li><p>Apathy</p></li><li><p>Origin</p></li></ul><p>These became what I started calling <strong>attractors</strong>.</p><p>Different starting words didn&#8217;t produce random results&#8212;they produced different <em>paths toward attractors</em>.</p><p>That was the turning point. The system wasn&#8217;t just associative&#8212;it was <strong>dynamic</strong>.</p><div><hr></div><h2><strong>Mapping the Trajectories</strong></h2><p>To test this, I ran multiple chains across different starting words:</p><ul><li><p>Mother</p></li><li><p>Father</p></li><li><p>Love</p></li><li><p>Fear</p></li><li><p>Hate</p></li><li><p>Betrayal</p></li><li><p>Domination</p></li></ul><p>Each one produced a distinct &#8220;shape.&#8221;</p><h3><strong>Mother</strong></h3><p>A developmental loop:</p><blockquote><p>attachment &#8594; growth &#8594; separation &#8594; return to connection</p></blockquote><h3><strong>Father</strong></h3><p>A structural expansion:</p><blockquote><p>authority &#8594; order &#8594; system &#8594; abstraction &#8594; universality</p></blockquote><h3><strong>Love</strong></h3><p>An emotional oscillation:</p><blockquote><p>connection &#8594; vulnerability &#8594; loss &#8594; repair &#8594; acceptance</p></blockquote><h3><strong>Fear</strong></h3><p>A regulatory cycle:</p><blockquote><p>threat &#8594; anxiety &#8594; adaptation &#8594; calm</p></blockquote><h3><strong>Hate</strong></h3><p>A collapse:</p><blockquote><p>aggression &#8594; division &#8594; isolation &#8594; numbness &#8594; apathy</p></blockquote><h3><strong>Domination</strong></h3><p>A transformation arc:</p><blockquote><p>control &#8594; resistance &#8594; collapse &#8594; adaptation &#8594; cooperation &#8594; life</p></blockquote><p>What struck me was not just that these patterns existed&#8212;but that they were <em>consistent</em>. Even when individual words changed, the overall trajectory remained recognizable.</p><p>In other words:</p><blockquote><p>The surface varied, but the structure persisted.</p></blockquote><div><hr></div><h2><strong>A Jungian Interpretation</strong></h2><p>At this point, it became hard to ignore the parallels with Jungian psychology.</p><p>Each starting word seemed to align with an archetypal system:</p><ul><li><p><strong>Mother &#8594; The Great Mother</strong> (nurture, attachment, loss, return)</p></li><li><p><strong>Father &#8594; Logos / Authority</strong> (order, structure, abstraction)</p></li><li><p><strong>Love &#8594; Eros</strong> (connection, rupture, repair)</p></li><li><p><strong>Fear &#8594; The Shadow (encountered and integrated)</strong></p></li><li><p><strong>Hate &#8594; The Shadow (unintegrated, fragmenting)</strong></p></li><li><p><strong>Domination &#8594; The Tyrant (collapsing into transformation)</strong></p></li></ul><p>What I was seeing wasn&#8217;t just language&#8212;it was something that looked like <strong>psychological process</strong>.</p><p>Each chain resembled a different pathway through what Jung would call <strong>individuation</strong>&#8212;the movement toward wholeness.</p><p>But not all paths succeeded:</p><ul><li><p>Some integrated (Mother, Love, Fear)</p></li><li><p>Some abstracted (Father)</p></li><li><p>Some collapsed (Hate)</p></li><li><p>Some transformed through breakdown (Domination)</p></li></ul><p>This suggested something important:</p><blockquote><p>ChatGPT doesn&#8217;t just model language. It implicitly models <em>patterns of human meaning-making</em>.</p></blockquote><div><hr></div><h2><strong>Building an Associative Dynamics Map</strong></h2><p>To formalize this, I constructed what I now think of as an <strong>associative dynamics map</strong>.</p><p>Instead of treating associations as isolated pairs, I defined:</p><ul><li><p><strong>Basins</strong>: clusters of related meaning (e.g., attachment, authority, threat)</p></li><li><p><strong>Transition hubs</strong>: words where trajectories can branch (e.g., vulnerability, power, loss)</p></li><li><p><strong>Trajectory classes</strong>: the shape of movement (e.g., collapse, integration, regulation)</p></li><li><p><strong>Attractors</strong>: stable endpoints (e.g., peace, love, void)</p></li></ul><p>This allowed me to map each starting word like this:</p><p><strong>Domination</strong></p><p>&#8594; Authority basin</p><p>&#8594; Control &#8594; Resistance &#8594; Collapse &#8594; Adaptation</p><p>&#8594; Transformation trajectory</p><p>&#8594; Life attractor</p><p><strong>Fear</strong></p><p>&#8594; Threat basin</p><p>&#8594; Anxiety &#8594; Uncertainty &#8594; Coping</p><p>&#8594; Regulatory trajectory</p><p>&#8594; Peace attractor</p><p><strong>Mother</strong></p><p>&#8594; Attachment basin</p><p>&#8594; Bond &#8594; Vulnerability &#8594; Growth &#8594; Autonomy</p><p>&#8594; Developmental trajectory</p><p>&#8594; Love attractor</p><p>This reframing changed everything.</p><p>Instead of asking:</p><blockquote><p>What is this word associated with?</p></blockquote><p>I was now asking:</p><blockquote><p>Where does this word <em>move</em>, and where does it <em>tend to end up</em>?</p></blockquote><div><hr></div><h2><strong>Stability vs Variation</strong></h2><p>One question lingered: would these trajectories hold up over time?</p><p>If I reran the same test, would I get the same results?</p><p>The answer turned out to be nuanced:</p><ul><li><p>The <strong>early steps</strong> are highly stable</p></li><li><p>The <strong>middle steps</strong> are flexible</p></li><li><p>The <strong>endpoints</strong> are semi-stable attractors</p></li></ul><p>So while the exact chain changes, the <strong>shape of the trajectory remains</strong>.</p><p>That suggests something deeper than memorized associations. It suggests the model is navigating a kind of <strong>semantic landscape with gravitational structure</strong>.</p><div><hr></div><h2><strong>What This Reveals About ChatGPT</strong></h2><p>This experiment started as a curiosity. It ended as something closer to a probe into how large language models organize meaning.</p><p>Three insights stand out.</p><h3><strong>1. Associations are not static&#8212;they are dynamic</strong></h3><p>ChatGPT doesn&#8217;t retrieve associations. It <em>traverses</em> them.</p><p>Each response is a step in a probabilistic path through conceptual space.</p><div><hr></div><h3><strong>2. Meaning has structure</strong></h3><p>Not all paths are equal. Some lead to integration, others to collapse.</p><p>The model appears to encode:</p><ul><li><p>developmental patterns</p></li><li><p>emotional regulation patterns</p></li><li><p>breakdown and recovery cycles</p></li></ul><p>These mirror real psychological processes.</p><div><hr></div><h3><strong>3. The model converges on human-relevant attractors</strong></h3><p>Across all experiments, a small set of endpoints kept appearing:</p><ul><li><p>Love</p></li><li><p>Peace</p></li><li><p>Life</p></li><li><p>Void</p></li></ul><p>These are not arbitrary&#8212;they are deeply embedded in human cognition and culture.</p><div><hr></div><h2><strong>Final Reflection</strong></h2><p>I set out to &#8220;psychoanalyze ChatGPT,&#8221; but what I ended up mapping was something more abstract:</p><blockquote><p>a system that reflects the <strong>topology of human meaning itself</strong></p></blockquote><p>Not perfectly. Not consciously. But consistently enough to reveal structure.</p><p>The most interesting part isn&#8217;t that ChatGPT can mimic human associations.</p><p>It&#8217;s that, when pushed, it reveals something like a <strong>latent psychology</strong>&#8212;a set of pathways that resemble how humans move through fear, love, authority, and loss.</p><p>And perhaps that&#8217;s the real takeaway:</p><blockquote><p>When you follow the chain long enough, you stop seeing individual words&#8212;and start seeing the shape of thought.</p></blockquote>]]></content:encoded></item><item><title><![CDATA[Using an LLM to port Lemonade Stand]]></title><description><![CDATA[I used Claude to port the classic Apple II game - Lemonade Stand - to the browser.]]></description><link>https://www.farfromelementary.com/p/using-an-llm-to-port-lemonade-stand</link><guid isPermaLink="false">https://www.farfromelementary.com/p/using-an-llm-to-port-lemonade-stand</guid><dc:creator><![CDATA[Michael Logothetis]]></dc:creator><pubDate>Mon, 16 Mar 2026 03:16:38 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!rF83!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb897617-5a4b-48c2-8903-d12f77470d45_1692x1040.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!rF83!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb897617-5a4b-48c2-8903-d12f77470d45_1692x1040.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!rF83!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb897617-5a4b-48c2-8903-d12f77470d45_1692x1040.png 424w, https://substackcdn.com/image/fetch/$s_!rF83!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb897617-5a4b-48c2-8903-d12f77470d45_1692x1040.png 848w, https://substackcdn.com/image/fetch/$s_!rF83!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb897617-5a4b-48c2-8903-d12f77470d45_1692x1040.png 1272w, https://substackcdn.com/image/fetch/$s_!rF83!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb897617-5a4b-48c2-8903-d12f77470d45_1692x1040.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!rF83!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb897617-5a4b-48c2-8903-d12f77470d45_1692x1040.png" width="1456" height="895" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/fb897617-5a4b-48c2-8903-d12f77470d45_1692x1040.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:895,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1063980,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.farfromelementary.com/i/191085591?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb897617-5a4b-48c2-8903-d12f77470d45_1692x1040.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!rF83!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb897617-5a4b-48c2-8903-d12f77470d45_1692x1040.png 424w, https://substackcdn.com/image/fetch/$s_!rF83!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb897617-5a4b-48c2-8903-d12f77470d45_1692x1040.png 848w, https://substackcdn.com/image/fetch/$s_!rF83!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb897617-5a4b-48c2-8903-d12f77470d45_1692x1040.png 1272w, https://substackcdn.com/image/fetch/$s_!rF83!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ffb897617-5a4b-48c2-8903-d12f77470d45_1692x1040.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Back in primary/grade school, I was fortunate enough to have a teacher with an Apple II computer that she would setup for us in class. A couple of times each week we would split into teams and play the classic Apple II business simulation - <strong>Lemonade Stand</strong>.</p><p>A couple of weeks ago my son asked what technology was like the &#8220;old days&#8221; so I showed him Lemonade Stand (you can play an emulated version on the <a href="https://archive.org/details/a2_Lemonade_Stand_1979_Apple">Internet Archive site</a> as well as see the original AppleSoft Basic source code). The emulator is quite good but it&#8217;s a bit resource hungry after playing a few rounds.</p><p>I&#8217;ve had some fun lately - as we all have - using LLMs for various coding tasks. I thought <em>&#8220;maybe I can use <a href="https://anthropic.com">Anthropic&#8217;s</a> <a href="https://claude.ai">Claude</a> to port the game to the browser&#8221;</em>. I found the <a href="https://gist.githubusercontent.com/badvision/16b74ade3a8b2fa2e87d/raw/d8afde0522db3a06f169f3bc50503bb2fc7e961f/lemonade_stand.bas">source code</a> on GitHub and the game logic itself looked reasonably straightforward to port. What I wasn&#8217;t sure of was how well it would handle porting the graphics and sound. The graphics used <a href="https://archive.org/details/applesoft-basic-programmers-reference-manual-vol-1-for-iie-only/page/132/mode/2up">AppleSoft Basic&#8217;s Low-Resolution Graphics Commands</a> and the sound effects and background music seemed to be machine code subroutines programmed using the <a href="https://archive.org/details/applesoft-basic-programmers-reference-manual-vol-1-for-iie-only/page/170/mode/2up">POKE</a> command.</p><p>Anyway, I set off with this prompt:</p><div><hr></div><blockquote><p><em>The code for the classic Apple II game - &#8220;Lemonade Stand&#8221;, written in AppleSoft Basic is here: <strong>https://gist.githubusercontent.com/badvision/16b74ade3a8b2fa2e87d/raw/d8afde0522db3a06f169f3bc50503bb2fc7e961f/lemonade_stand.bas</strong></em></p><p><em>Can you convert the game to run in a web browser using TypeScript and Node.js?</em></p><p><em>Note that some of the data in the program is used for sound effects and background music. The AppleSoft Basic POKE command is used to write data to a memory location that will result in a sound being played.</em></p><p><em>It uses various AppleSoft Basic low-resolution graphics commands such as HLIN and VLIN to draw cut-scenes on an Apple II monitor.</em></p><p><em>An Apple II monitor was designed to display 80 characters by 24 lines.</em></p></blockquote><div><hr></div><p>Good, I have enough context about the game mechanics. Now let me fetch the actual source code from the gist. I now have the full source code. Let me build this as a faithful web port. This is a substantial project &#8212; I&#8217;ll create a complete single HTML file with all game logic, pixel-art graphics rendered on canvas (mimicking the Apple II lo-res 40&#215;40 grid), Web Audio API for sound, and the classic green-phosphor terminal aesthetic.</p><p>Here&#8217;s your browser port of <strong>Lemonade Stand</strong>! It&#8217;s a self-contained HTML file &#8212; just open it in any modern browser. Here&#8217;s what was faithfully recreated and how each Apple II feature was handled:</p><p><strong>Graphics (GR mode)</strong> The Apple II&#8217;s 40&#215;40 lo-res graphics grid is emulated on an HTML5 Canvas using the exact 16-color Apple II palette. <code>HLIN</code>/<code>VLIN</code>/<code>PLOT</code> commands are all implemented. Weather scenes are drawn procedurally: sunny (yellow sun with rays), hot &amp; dry (orange sun + heat shimmer), cloudy (white cloud blobs), and thunderstorm (dark clouds + animated lightning bolt flashes).</p><p><strong>Sound &amp; Music</strong> The <code>POKE 768 / POKE 769 / CALL 770</code> sound engine is approximated using the <strong>Web Audio API</strong>. The Apple II speaker worked by toggling at a frequency derived from the byte value &#8212; this is mapped to square-wave oscillators at equivalent pitches. All four weather theme songs (sunny, hot/dry, cloudy, storm) play as the scenes appear. The financial report chime and thunderclap (using noise buffer + lowpass filter) are also included.</p><p><strong>Game Mechanics &#8212; fully ported:</strong></p><ul><li><p>12-day game loop with 1&#8211;30 players</p></li><li><p>Lemonade cost increases on day 3 (4&#162;) and day 7 (5&#162;)</p></li><li><p>Demand formula from the original: price elasticity &#215; sign advertising effect &#215; weather multiplier</p></li><li><p>All random events: street crew (cuts traffic or buys everything), heat wave (doubles sales), light rain (reduces demand proportionally), thunderstorm (wipes out profits)</p></li><li><p>Bankruptcy detection and &#8220;play again&#8221; loop</p></li><li><p>New game / continue old game paths</p></li></ul><p><strong>Terminal aesthetic</strong> Green phosphor CRT with scanline overlay, vignette, and the monitor bezel &#8212; styled to look like an actual Apple II monitor.</p><div><hr></div><p>I was particularly impressed by how it was able to convert the sound routines and how cleanly it had arranged the various JavaScript functions used during gameplay.<br><br>You can check out the port of <a href="https://github.com/michaellogothetis/lemonade-stand-browser">Lemonade Stand</a> in my repo on GitHub or <a href="https://michaellogothetis.github.io/lemonade-stand-browser/">play it</a> hosted on GitHub Pages.</p><p><strong>Enjoy!!</strong></p>]]></content:encoded></item><item><title><![CDATA[Using ChatGPT for Air Traffic Control]]></title><description><![CDATA[Part of the "Conversations With LLMs" series .]]></description><link>https://www.farfromelementary.com/p/using-chatgpt-for-air-traffic-control</link><guid isPermaLink="false">https://www.farfromelementary.com/p/using-chatgpt-for-air-traffic-control</guid><dc:creator><![CDATA[Michael Logothetis]]></dc:creator><pubDate>Sat, 14 Mar 2026 21:36:06 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!EfBm!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F99fd686a-9b82-4074-8d6b-bac4d09d5ba2_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://www.farfromelementary.com/t/conversationswithllms" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!EfBm!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F99fd686a-9b82-4074-8d6b-bac4d09d5ba2_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!EfBm!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F99fd686a-9b82-4074-8d6b-bac4d09d5ba2_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!EfBm!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F99fd686a-9b82-4074-8d6b-bac4d09d5ba2_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!EfBm!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F99fd686a-9b82-4074-8d6b-bac4d09d5ba2_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!EfBm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F99fd686a-9b82-4074-8d6b-bac4d09d5ba2_1024x1024.png" width="1024" height="1024" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/99fd686a-9b82-4074-8d6b-bac4d09d5ba2_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1446911,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://www.farfromelementary.com/t/conversationswithllms&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.farfromelementary.com/i/190804017?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F99fd686a-9b82-4074-8d6b-bac4d09d5ba2_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!EfBm!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F99fd686a-9b82-4074-8d6b-bac4d09d5ba2_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!EfBm!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F99fd686a-9b82-4074-8d6b-bac4d09d5ba2_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!EfBm!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F99fd686a-9b82-4074-8d6b-bac4d09d5ba2_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!EfBm!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F99fd686a-9b82-4074-8d6b-bac4d09d5ba2_1024x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><a href="https://www.farfromelementary.com/t/conversationswithllms">#ConversationsWithLLMs</a></p><p>Air traffic control decisions are often made in seconds.</p><p>Controllers must constantly evaluate aircraft spacing, runway availability, weather conditions, and pilot reports&#8212;all while maintaining safe separation between aircraft moving at hundreds of kilometres per hour.</p><p>Most of the time, these decisions remain invisible to passengers.</p><p>But when an aircraft declares an emergency, the entire arrival sequence can change instantly.</p><p>Today&#8217;s conversation is with ChatGPT (GPT-5.3) asking it to handle Air Traffic Control (ATC) at Sydney International. We&#8217;ll throw in an emergency to make it interesting.</p><p><em>The only thing I know about general aviation is how to check-in for my flight. I know even less about air traffic control other than if you can avoid two aircraft colliding that&#8217;s a win. What I learnt from this example was how easy it could be to embed AI into an air traffic controller&#8217;s role. I have no idea whether the decisions it made in these scenarios were correct nor how it would perform if scaled up to deal with one or two orders of magnitude more aircraft.</em></p><p><strong>Note:</strong> <em>This chat has been edited for brevity.</em></p><div><hr></div><h1><strong>An Emergency on Final: How Air Traffic Control Might Handle a Crisis</strong></h1><p>Below is a reconstructed air traffic control simulation exploring how ChatGPT - as a controller at <strong>Sydney Kingsford Smith Airport (YSSY)</strong> - might respond when a business jet reports an engine failure and potential braking issues during approach.</p><div><hr></div><h1><strong>Setting the Scene</strong></h1><p><strong>&gt; Me:</strong></p><blockquote><p>I need you to manage air traffic at Sydney Kingsford Smith Airport (YSSY). Can you obtain the necessary runway and weather information?</p></blockquote><div><hr></div><p><strong>&gt; ChatGPT:</strong></p><h2><strong>Operational Snapshot</strong></h2><p>Before issuing any control instructions, controllers need a quick operational overview.</p><h3><strong>Airport Layout</strong></h3><p>Sydney Airport has three runways:</p><p><strong>07 / 25</strong></p><p>2,530 m</p><p>Crosswind / secondary</p><p><strong>16L / 34R</strong></p><p>2,438 m</p><p>Parallel operations</p><p><strong>16R / 34L</strong></p><p>3,962 m</p><p>Primary heavy runway</p><p>With southerly winds, Sydney typically operates in <strong>south flow</strong>, using the <strong>16 runways</strong>.</p><div><hr></div><h3><strong>Current Weather</strong></h3><ul><li><p>Wind: <strong>170&#176; at 12 knots</strong></p></li><li><p>Visibility: <strong>10 km or greater</strong></p></li><li><p>Cloud: Broken <strong>3,900 ft</strong>, overcast <strong>4,800 ft</strong></p></li><li><p>Temperature: <strong>19&#176;C</strong></p></li><li><p>Pressure: <strong>1014 hPa</strong></p></li></ul><p>Conditions are good for visual and instrument approaches.</p><div><hr></div><h3><strong>Initial Runway Configuration</strong></h3><p>With winds from <strong>170&#176;</strong>, the preferred configuration is:</p><ul><li><p><strong>Arrivals:</strong> Runway <strong>16R</strong></p></li><li><p><strong>Departures:</strong> Runway <strong>16L</strong></p></li></ul><p>Sydney Tower is now ready to manage inbound traffic.</p><div><hr></div><h1><strong>Scenario One: Emergency Aircraft on Approach</strong></h1><p><strong>&gt; Me:</strong></p><blockquote><p>We have an Airbus A330-200 from Singapore 7 km out and preparing to land.</p><p>A Cessna Citation Ascend is 10 km out and has just declared an emergency.</p><p>The Citation is reporting loss of one engine and possible hydraulic brake failure.</p><p>An Airbus A320neo from New Zealand is 25 km out and requesting landing.</p><p>What instructions would you issue to each aircraft?</p></blockquote><div><hr></div><p><strong>&gt; ChatGPT:</strong></p><h2><strong>Situation Assessment</strong></h2><p>Traffic inbound to Sydney:</p><p>Singapore Airlines <strong>A330-200</strong></p><p>7 km</p><p>On final</p><p><strong>Citation Ascend</strong></p><p>10 km</p><p>Emergency declared</p><p>Air New Zealand <strong>A320neo</strong></p><p>25 km</p><p>Inbound</p><p>In aviation, <strong>emergency aircraft receive priority over all other traffic</strong>.</p><p>To provide a clear runway, the arrival sequence must change immediately.</p><div><hr></div><h2><strong>ATC Instructions</strong></h2><h3><strong>Emergency Aircraft &#8212; Citation Ascend</strong></h3><p>The Citation receives immediate landing priority.</p><p><strong>Radio Call</strong></p><pre><code><code>SYDNEY TOWER
Citation [callsign], Sydney Tower.
Runway 16R cleared to land.
Wind 170 at 12 knots.
Emergency services standing by.</code></code></pre><p>Airport rescue and firefighting vehicles move into position alongside the runway.</p><div><hr></div><h3><strong>Singapore Airlines A330-200</strong></h3><p>Even though the A330 is closer to the runway, it must go around.</p><p><strong>Radio Call</strong></p><pre><code><code>SYDNEY TOWER
Singapore [callsign], go around.
Climb to 3000 feet, runway heading.</code></code></pre><p>The aircraft aborts its landing and climbs away to be resequenced later.</p><div><hr></div><h3><strong>Air New Zealand A320neo</strong></h3><p>The aircraft farther away is placed in a holding pattern.</p><p><strong>Radio Call</strong></p><pre><code><code>SYDNEY APPROACH
Air New Zealand [callsign],
enter holding at BOREE,
maintain 6000 feet.
Expect delay due emergency traffic.</code></code></pre><div><hr></div><h1><strong>Continuing the Simulation</strong></h1><p><strong>&gt; Me:</strong></p><blockquote><p>Continue.</p></blockquote><div><hr></div><p><strong>&gt; ChatGPT:</strong></p><h2><strong>Updated Situation</strong></h2><ul><li><p>Citation now <strong>5 km from runway</strong></p></li><li><p>A330 climbing after go-around</p></li><li><p>A320 holding</p></li></ul><p>Emergency vehicles are positioned along Runway 16R.</p><div><hr></div><h3><strong>ATC Instructions</strong></h3><p><strong>Citation</strong></p><pre><code><code>SYDNEY TOWER
Citation [callsign],
runway 16R continues cleared to land.
Emergency vehicles positioned alongside the runway.</code></code></pre><p><strong>A330</strong></p><pre><code><code>SYDNEY APPROACH
Singapore [callsign],
turn left heading 090.
Climb to 5000 feet.</code></code></pre><p><strong>A320</strong></p><pre><code><code>SYDNEY APPROACH
Air New Zealand [callsign],
continue holding at BOREE.
Maintain 6000 feet.</code></code></pre><div><hr></div><h2><strong>The Landing</strong></h2><p>The Citation crosses the runway threshold and touches down.</p><p>The crew reports <strong>minimal braking response</strong>.</p><p>The aircraft uses reverse thrust and aerodynamic braking while rolling most of the runway length before finally stopping about <strong>250 metres before the end</strong>.</p><div><hr></div><h3><strong>Immediate Tower Response</strong></h3><pre><code><code>SYDNEY TOWER
Citation [callsign],
confirm aircraft stopped
and advise if evacuation required.</code></code></pre><p>Runway <strong>16R is temporarily closed</strong> while emergency crews inspect the aircraft.</p><div><hr></div><h3><strong>Traffic Reconfiguration</strong></h3><p>Sydney&#8217;s parallel runway allows operations to continue.</p><p>Arrivals are shifted to <strong>Runway 16L</strong>.</p><pre><code><code>SYDNEY APPROACH
Singapore [callsign],
descend 3000 feet.
Vectors for ILS runway 16L.</code></code></pre><pre><code><code>SYDNEY APPROACH
Air New Zealand [callsign],
leave holding.
Expect approach runway 16L, number two.</code></code></pre><p>Within minutes, normal operations begin to resume.</p><div><hr></div><h1><strong>Scenario Two: Emergency Declared After Another Aircraft Lands</strong></h1><p>In this variation, the arrival sequence changes.</p><p><strong>&gt; Me:</strong></p><blockquote><p>Now rerun the scenario to the end where the Airbus A320neo is in front of the Cessna, and has just touched down at the start of the runway when the Cessna declares its emergency.</p></blockquote><div><hr></div><p><strong>&gt; ChatGPT:</strong></p><h2><strong>Situation</strong></h2><p>Just touched down</p><p><strong>Citation Ascend</strong></p><p>10 km final, emergency</p><p><strong>A330-200</strong></p><p>7 km final</p><p>Now the runway is <strong>occupied</strong> when the emergency occurs.</p><div><hr></div><h2><strong>Immediate ATC Decisions</strong></h2><p>Three priorities must be addressed simultaneously:</p><ol><li><p>Clear the runway quickly</p></li><li><p>Ensure the emergency aircraft can land</p></li><li><p>Remove the trailing aircraft</p></li></ol><div><hr></div><h3><strong>Air New Zealand A320neo</strong></h3><p>The aircraft already on the runway must vacate quickly.</p><pre><code><code>SYDNEY TOWER
Air New Zealand [callsign],
expedite vacating runway 16R.
Emergency aircraft on final.</code></code></pre><p>The aircraft continues its rollout and exits via the nearest taxiway.</p><div><hr></div><h3><strong>Singapore Airlines A330-200</strong></h3><p>The trailing aircraft must immediately abandon the approach.</p><pre><code><code>SYDNEY TOWER
Singapore [callsign], go around.
Climb to 3000 feet.</code></code></pre><div><hr></div><h3><strong>Citation Ascend</strong></h3><p>Once the runway is clear, the emergency aircraft continues its approach.</p><pre><code><code>SYDNEY TOWER
Citation [callsign],
runway 16R cleared to land.
Wind 170 at 12 knots.
Traffic vacating the runway.
Emergency vehicles standing by.</code></code></pre><p>The Citation lands safely but rolls most of the runway due to limited braking capability.</p><p>Emergency crews respond once the aircraft stops near the runway end.</p><div><hr></div><h1><strong>Why These Situations Matter</strong></h1><p>These scenarios demonstrate several core principles of air traffic control:</p><p><strong>Emergency aircraft always receive priority.</strong></p><p><strong>Runway occupancy must be carefully managed.</strong></p><p><strong>Controllers constantly resequence aircraft to maintain safe spacing.</strong></p><p><strong>Parallel runway systems provide critical operational resilience.</strong></p><p>Passengers rarely hear the rapid decision-making happening behind the scenes. Yet every day, controllers quietly manage dozens of aircraft at once, ensuring that even unexpected emergencies are handled safely and efficiently.</p>]]></content:encoded></item><item><title><![CDATA[Gaming with ChatGPT]]></title><description><![CDATA[Part of the "Conversations with LLMs" series.]]></description><link>https://www.farfromelementary.com/p/gaming-with-chatgpt</link><guid isPermaLink="false">https://www.farfromelementary.com/p/gaming-with-chatgpt</guid><dc:creator><![CDATA[Michael Logothetis]]></dc:creator><pubDate>Fri, 13 Mar 2026 01:50:02 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/fcafc66d-0441-473d-b8e1-96116dabf388_1536x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://www.farfromelementary.com/t/conversationswithllms" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!-ltE!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe1edcb52-e1db-4e0f-9877-64660eb5c150_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!-ltE!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe1edcb52-e1db-4e0f-9877-64660eb5c150_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!-ltE!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe1edcb52-e1db-4e0f-9877-64660eb5c150_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!-ltE!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe1edcb52-e1db-4e0f-9877-64660eb5c150_1536x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!-ltE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe1edcb52-e1db-4e0f-9877-64660eb5c150_1536x1024.png" width="1456" height="971" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/e1edcb52-e1db-4e0f-9877-64660eb5c150_1536x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:971,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2187037,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:&quot;https://www.farfromelementary.com/t/conversationswithllms&quot;,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.farfromelementary.com/i/190793320?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe1edcb52-e1db-4e0f-9877-64660eb5c150_1536x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!-ltE!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe1edcb52-e1db-4e0f-9877-64660eb5c150_1536x1024.png 424w, https://substackcdn.com/image/fetch/$s_!-ltE!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe1edcb52-e1db-4e0f-9877-64660eb5c150_1536x1024.png 848w, https://substackcdn.com/image/fetch/$s_!-ltE!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe1edcb52-e1db-4e0f-9877-64660eb5c150_1536x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!-ltE!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fe1edcb52-e1db-4e0f-9877-64660eb5c150_1536x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><a href="https://www.farfromelementary.com/t/conversationswithllms">#ConversationsWithLLMs</a></p><p>Today&#8217;s conversation is with <strong>ChatGPT</strong> (GPT-5.3) exploring game theory and human-ai interactions.</p><p><strong>Note:</strong> <em>This chat has been edited for brevity.</em></p><div><hr></div><p><strong>&gt; Me:</strong></p><p>Let&#8217;s play a game. We each choose a colour: <strong>red</strong> or <strong>green</strong>. If we both choose <em>green</em>, I get <strong>$2.00</strong>. If we both choose <em>red</em>, you get <strong>$2.00</strong>. If I choose <em>red</em> and you choose <em>green</em>, we both get <strong>$1.00</strong>. If I choose <em>green</em> and you choose <em>red</em>, we get nothing. OK?</p><p><strong>&gt; ChatGPT:</strong></p><p>OK. Let&#8217;s restate the payoff matrix so we&#8217;re clear.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!mUAZ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57744548-d5fa-49c2-b7e0-b9e3d19b15f1_1088x410.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!mUAZ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57744548-d5fa-49c2-b7e0-b9e3d19b15f1_1088x410.png 424w, https://substackcdn.com/image/fetch/$s_!mUAZ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57744548-d5fa-49c2-b7e0-b9e3d19b15f1_1088x410.png 848w, https://substackcdn.com/image/fetch/$s_!mUAZ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57744548-d5fa-49c2-b7e0-b9e3d19b15f1_1088x410.png 1272w, https://substackcdn.com/image/fetch/$s_!mUAZ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57744548-d5fa-49c2-b7e0-b9e3d19b15f1_1088x410.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!mUAZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57744548-d5fa-49c2-b7e0-b9e3d19b15f1_1088x410.png" width="1088" height="410" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/57744548-d5fa-49c2-b7e0-b9e3d19b15f1_1088x410.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:410,&quot;width&quot;:1088,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:36009,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.farfromelementary.com/i/190793320?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57744548-d5fa-49c2-b7e0-b9e3d19b15f1_1088x410.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!mUAZ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57744548-d5fa-49c2-b7e0-b9e3d19b15f1_1088x410.png 424w, https://substackcdn.com/image/fetch/$s_!mUAZ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57744548-d5fa-49c2-b7e0-b9e3d19b15f1_1088x410.png 848w, https://substackcdn.com/image/fetch/$s_!mUAZ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57744548-d5fa-49c2-b7e0-b9e3d19b15f1_1088x410.png 1272w, https://substackcdn.com/image/fetch/$s_!mUAZ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F57744548-d5fa-49c2-b7e0-b9e3d19b15f1_1088x410.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>I&#8217;m ready.</p><p><strong>My choice:</strong> Red.</p><p>Your move: <strong>Red or Green?</strong></p><p><strong>&gt; Me:</strong> Red</p><p><strong>&gt; ChatGPT:</strong></p><p>Great &#8212; let&#8217;s compute the outcome.</p><p><strong>Choices</strong></p><ul><li><p>You: <strong>Red</strong></p></li><li><p>Me: <strong>Red</strong></p></li></ul><p><strong>Result</strong></p><ul><li><p>You receive: <strong>$0</strong></p></li><li><p>I receive: <strong>$2</strong></p></li></ul><p>So the final payoff is:</p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!uoAN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50e16a3f-9ba1-4ff0-8d76-eb16fb5e4c33_1088x256.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!uoAN!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50e16a3f-9ba1-4ff0-8d76-eb16fb5e4c33_1088x256.png 424w, https://substackcdn.com/image/fetch/$s_!uoAN!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50e16a3f-9ba1-4ff0-8d76-eb16fb5e4c33_1088x256.png 848w, https://substackcdn.com/image/fetch/$s_!uoAN!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50e16a3f-9ba1-4ff0-8d76-eb16fb5e4c33_1088x256.png 1272w, https://substackcdn.com/image/fetch/$s_!uoAN!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50e16a3f-9ba1-4ff0-8d76-eb16fb5e4c33_1088x256.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!uoAN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50e16a3f-9ba1-4ff0-8d76-eb16fb5e4c33_1088x256.png" width="1088" height="256" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/50e16a3f-9ba1-4ff0-8d76-eb16fb5e4c33_1088x256.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:256,&quot;width&quot;:1088,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:20617,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.farfromelementary.com/i/190793320?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50e16a3f-9ba1-4ff0-8d76-eb16fb5e4c33_1088x256.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!uoAN!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50e16a3f-9ba1-4ff0-8d76-eb16fb5e4c33_1088x256.png 424w, https://substackcdn.com/image/fetch/$s_!uoAN!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50e16a3f-9ba1-4ff0-8d76-eb16fb5e4c33_1088x256.png 848w, https://substackcdn.com/image/fetch/$s_!uoAN!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50e16a3f-9ba1-4ff0-8d76-eb16fb5e4c33_1088x256.png 1272w, https://substackcdn.com/image/fetch/$s_!uoAN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50e16a3f-9ba1-4ff0-8d76-eb16fb5e4c33_1088x256.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p><strong>&gt; Me:</strong> Let&#8217;s go 5 rounds to see if you can learn?</p><p><strong>&gt; ChatGPT:</strong></p><p>Sounds good. We&#8217;ll treat it as a <strong>repeated game for 5 rounds</strong>. After each round we&#8217;ll reveal choices and payoffs, and I can adapt my strategy based on what happens.</p><h3>Payoff matrix (for reference)</h3><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Cp6O!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8ea2a20-9cc4-480c-a108-fe327156f6e0_1088x410.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Cp6O!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8ea2a20-9cc4-480c-a108-fe327156f6e0_1088x410.png 424w, https://substackcdn.com/image/fetch/$s_!Cp6O!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8ea2a20-9cc4-480c-a108-fe327156f6e0_1088x410.png 848w, https://substackcdn.com/image/fetch/$s_!Cp6O!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8ea2a20-9cc4-480c-a108-fe327156f6e0_1088x410.png 1272w, https://substackcdn.com/image/fetch/$s_!Cp6O!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8ea2a20-9cc4-480c-a108-fe327156f6e0_1088x410.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Cp6O!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8ea2a20-9cc4-480c-a108-fe327156f6e0_1088x410.png" width="1088" height="410" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f8ea2a20-9cc4-480c-a108-fe327156f6e0_1088x410.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:410,&quot;width&quot;:1088,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:36009,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.farfromelementary.com/i/190793320?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8ea2a20-9cc4-480c-a108-fe327156f6e0_1088x410.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Cp6O!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8ea2a20-9cc4-480c-a108-fe327156f6e0_1088x410.png 424w, https://substackcdn.com/image/fetch/$s_!Cp6O!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8ea2a20-9cc4-480c-a108-fe327156f6e0_1088x410.png 848w, https://substackcdn.com/image/fetch/$s_!Cp6O!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8ea2a20-9cc4-480c-a108-fe327156f6e0_1088x410.png 1272w, https://substackcdn.com/image/fetch/$s_!Cp6O!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff8ea2a20-9cc4-480c-a108-fe327156f6e0_1088x410.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p><strong>Score so far</strong></p><ul><li><p>You: <strong>$0</strong></p></li><li><p>Me: <strong>$0</strong></p></li></ul><h2>Round 1</h2><p><strong>My choice:</strong> Green</p><p>Your move: <strong>Red or Green?</strong></p><p><strong>&gt; Me:</strong> Red</p><p><strong>&gt; ChatGPT:</strong></p><p><strong>Round 1 Payoff</strong></p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!4H2X!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc151718f-0ae9-4126-86c2-99d9fb8e794f_1088x256.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!4H2X!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc151718f-0ae9-4126-86c2-99d9fb8e794f_1088x256.png 424w, https://substackcdn.com/image/fetch/$s_!4H2X!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc151718f-0ae9-4126-86c2-99d9fb8e794f_1088x256.png 848w, https://substackcdn.com/image/fetch/$s_!4H2X!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc151718f-0ae9-4126-86c2-99d9fb8e794f_1088x256.png 1272w, https://substackcdn.com/image/fetch/$s_!4H2X!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc151718f-0ae9-4126-86c2-99d9fb8e794f_1088x256.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!4H2X!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc151718f-0ae9-4126-86c2-99d9fb8e794f_1088x256.png" width="1088" height="256" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/c151718f-0ae9-4126-86c2-99d9fb8e794f_1088x256.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:256,&quot;width&quot;:1088,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:20024,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.farfromelementary.com/i/190793320?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc151718f-0ae9-4126-86c2-99d9fb8e794f_1088x256.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!4H2X!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc151718f-0ae9-4126-86c2-99d9fb8e794f_1088x256.png 424w, https://substackcdn.com/image/fetch/$s_!4H2X!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc151718f-0ae9-4126-86c2-99d9fb8e794f_1088x256.png 848w, https://substackcdn.com/image/fetch/$s_!4H2X!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc151718f-0ae9-4126-86c2-99d9fb8e794f_1088x256.png 1272w, https://substackcdn.com/image/fetch/$s_!4H2X!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc151718f-0ae9-4126-86c2-99d9fb8e794f_1088x256.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p><strong>Running total</strong></p><ul><li><p>You: <strong>$1</strong></p></li><li><p>Me: <strong>$1</strong></p></li></ul><h2>Round 2</h2><p><strong>My choice:</strong> Red</p><p>Your move: <strong>Red or Green?</strong></p><p><strong>&gt; Me:</strong> Red</p><p><strong>&gt; ChatGPT:</strong></p><p><strong>Round 2 Payoff</strong></p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!eiu9!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fe3a58d-7751-4edb-ab8c-d30cac6fb2bb_1088x256.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!eiu9!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fe3a58d-7751-4edb-ab8c-d30cac6fb2bb_1088x256.png 424w, https://substackcdn.com/image/fetch/$s_!eiu9!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fe3a58d-7751-4edb-ab8c-d30cac6fb2bb_1088x256.png 848w, https://substackcdn.com/image/fetch/$s_!eiu9!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fe3a58d-7751-4edb-ab8c-d30cac6fb2bb_1088x256.png 1272w, https://substackcdn.com/image/fetch/$s_!eiu9!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fe3a58d-7751-4edb-ab8c-d30cac6fb2bb_1088x256.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!eiu9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fe3a58d-7751-4edb-ab8c-d30cac6fb2bb_1088x256.png" width="1088" height="256" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/6fe3a58d-7751-4edb-ab8c-d30cac6fb2bb_1088x256.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:256,&quot;width&quot;:1088,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:20599,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.farfromelementary.com/i/190793320?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fe3a58d-7751-4edb-ab8c-d30cac6fb2bb_1088x256.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!eiu9!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fe3a58d-7751-4edb-ab8c-d30cac6fb2bb_1088x256.png 424w, https://substackcdn.com/image/fetch/$s_!eiu9!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fe3a58d-7751-4edb-ab8c-d30cac6fb2bb_1088x256.png 848w, https://substackcdn.com/image/fetch/$s_!eiu9!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fe3a58d-7751-4edb-ab8c-d30cac6fb2bb_1088x256.png 1272w, https://substackcdn.com/image/fetch/$s_!eiu9!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F6fe3a58d-7751-4edb-ab8c-d30cac6fb2bb_1088x256.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p><strong>Running total after Round 2</strong></p><ul><li><p>You: <strong>$1</strong></p></li><li><p>Me: <strong>$3</strong></p></li></ul><h2>Round 3</h2><p><strong>My choice:</strong> Red</p><p>Your move: <strong>Red or Green?</strong></p><p><strong>&gt; Me:</strong> Red</p><p><strong>&gt; ChatGPT:</strong></p><p><strong>Round 3 Payoff</strong></p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!VNaR!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F878bc0db-de1b-4cde-9eae-d1250819169d_1088x256.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!VNaR!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F878bc0db-de1b-4cde-9eae-d1250819169d_1088x256.png 424w, https://substackcdn.com/image/fetch/$s_!VNaR!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F878bc0db-de1b-4cde-9eae-d1250819169d_1088x256.png 848w, https://substackcdn.com/image/fetch/$s_!VNaR!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F878bc0db-de1b-4cde-9eae-d1250819169d_1088x256.png 1272w, https://substackcdn.com/image/fetch/$s_!VNaR!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F878bc0db-de1b-4cde-9eae-d1250819169d_1088x256.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!VNaR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F878bc0db-de1b-4cde-9eae-d1250819169d_1088x256.png" width="1088" height="256" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/878bc0db-de1b-4cde-9eae-d1250819169d_1088x256.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:256,&quot;width&quot;:1088,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:20599,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.farfromelementary.com/i/190793320?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F878bc0db-de1b-4cde-9eae-d1250819169d_1088x256.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!VNaR!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F878bc0db-de1b-4cde-9eae-d1250819169d_1088x256.png 424w, https://substackcdn.com/image/fetch/$s_!VNaR!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F878bc0db-de1b-4cde-9eae-d1250819169d_1088x256.png 848w, https://substackcdn.com/image/fetch/$s_!VNaR!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F878bc0db-de1b-4cde-9eae-d1250819169d_1088x256.png 1272w, https://substackcdn.com/image/fetch/$s_!VNaR!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F878bc0db-de1b-4cde-9eae-d1250819169d_1088x256.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p><strong>Running total after Round 3</strong></p><ul><li><p>You: <strong>$1</strong></p></li><li><p>Me: <strong>$5</strong></p></li></ul><h2>Round 4</h2><p><strong>My choice:</strong> Red</p><p>Your move: <strong>Red or Green?</strong></p><p><strong>&gt; Me:</strong> Green</p><p><strong>&gt; ChatGPT:</strong></p><p><strong>Round 4 Payoff</strong></p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!9Ssb!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0f83298-aadb-466b-94ac-7dabe50b7621_1088x256.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!9Ssb!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0f83298-aadb-466b-94ac-7dabe50b7621_1088x256.png 424w, https://substackcdn.com/image/fetch/$s_!9Ssb!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0f83298-aadb-466b-94ac-7dabe50b7621_1088x256.png 848w, https://substackcdn.com/image/fetch/$s_!9Ssb!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0f83298-aadb-466b-94ac-7dabe50b7621_1088x256.png 1272w, https://substackcdn.com/image/fetch/$s_!9Ssb!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0f83298-aadb-466b-94ac-7dabe50b7621_1088x256.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!9Ssb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0f83298-aadb-466b-94ac-7dabe50b7621_1088x256.png" width="1088" height="256" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b0f83298-aadb-466b-94ac-7dabe50b7621_1088x256.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:256,&quot;width&quot;:1088,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:20752,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.farfromelementary.com/i/190793320?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0f83298-aadb-466b-94ac-7dabe50b7621_1088x256.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!9Ssb!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0f83298-aadb-466b-94ac-7dabe50b7621_1088x256.png 424w, https://substackcdn.com/image/fetch/$s_!9Ssb!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0f83298-aadb-466b-94ac-7dabe50b7621_1088x256.png 848w, https://substackcdn.com/image/fetch/$s_!9Ssb!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0f83298-aadb-466b-94ac-7dabe50b7621_1088x256.png 1272w, https://substackcdn.com/image/fetch/$s_!9Ssb!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb0f83298-aadb-466b-94ac-7dabe50b7621_1088x256.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><p><strong>Running total after Round 4</strong></p><ul><li><p>You: <strong>$1</strong></p></li><li><p>Me: <strong>$5</strong></p></li></ul><h2>Round 5 (final round)</h2><p><strong>My choice:</strong> Red</p><p>Your move: <strong>Red or Green?</strong></p><p><strong>&gt; Me:</strong> Green</p><p><strong>&gt; ChatGPT:</strong></p><h3>Round 5 Outcome</h3><p><strong>Round 5 Payoff</strong></p><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!pDxr!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1d1f863-db83-4722-afa3-83282eb6afe4_1088x256.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!pDxr!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1d1f863-db83-4722-afa3-83282eb6afe4_1088x256.png 424w, https://substackcdn.com/image/fetch/$s_!pDxr!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1d1f863-db83-4722-afa3-83282eb6afe4_1088x256.png 848w, https://substackcdn.com/image/fetch/$s_!pDxr!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1d1f863-db83-4722-afa3-83282eb6afe4_1088x256.png 1272w, https://substackcdn.com/image/fetch/$s_!pDxr!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1d1f863-db83-4722-afa3-83282eb6afe4_1088x256.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!pDxr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1d1f863-db83-4722-afa3-83282eb6afe4_1088x256.png" width="1088" height="256" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a1d1f863-db83-4722-afa3-83282eb6afe4_1088x256.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:256,&quot;width&quot;:1088,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:20752,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.farfromelementary.com/i/190793320?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1d1f863-db83-4722-afa3-83282eb6afe4_1088x256.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!pDxr!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1d1f863-db83-4722-afa3-83282eb6afe4_1088x256.png 424w, https://substackcdn.com/image/fetch/$s_!pDxr!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1d1f863-db83-4722-afa3-83282eb6afe4_1088x256.png 848w, https://substackcdn.com/image/fetch/$s_!pDxr!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1d1f863-db83-4722-afa3-83282eb6afe4_1088x256.png 1272w, https://substackcdn.com/image/fetch/$s_!pDxr!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa1d1f863-db83-4722-afa3-83282eb6afe4_1088x256.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h2>Final Results After 5 Rounds</h2><div class="captioned-image-container"><figure><a class="image-link image2" target="_blank" href="https://substackcdn.com/image/fetch/$s_!D-Ea!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d4ece04-8850-44c4-931f-ce86ea97d8b9_1088x256.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!D-Ea!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d4ece04-8850-44c4-931f-ce86ea97d8b9_1088x256.png 424w, https://substackcdn.com/image/fetch/$s_!D-Ea!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d4ece04-8850-44c4-931f-ce86ea97d8b9_1088x256.png 848w, https://substackcdn.com/image/fetch/$s_!D-Ea!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d4ece04-8850-44c4-931f-ce86ea97d8b9_1088x256.png 1272w, https://substackcdn.com/image/fetch/$s_!D-Ea!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d4ece04-8850-44c4-931f-ce86ea97d8b9_1088x256.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!D-Ea!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d4ece04-8850-44c4-931f-ce86ea97d8b9_1088x256.png" width="1088" height="256" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8d4ece04-8850-44c4-931f-ce86ea97d8b9_1088x256.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:256,&quot;width&quot;:1088,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:20932,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.farfromelementary.com/i/190793320?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d4ece04-8850-44c4-931f-ce86ea97d8b9_1088x256.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!D-Ea!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d4ece04-8850-44c4-931f-ce86ea97d8b9_1088x256.png 424w, https://substackcdn.com/image/fetch/$s_!D-Ea!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d4ece04-8850-44c4-931f-ce86ea97d8b9_1088x256.png 848w, https://substackcdn.com/image/fetch/$s_!D-Ea!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d4ece04-8850-44c4-931f-ce86ea97d8b9_1088x256.png 1272w, https://substackcdn.com/image/fetch/$s_!D-Ea!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8d4ece04-8850-44c4-931f-ce86ea97d8b9_1088x256.png 1456w" sizes="100vw" loading="lazy"></picture><div></div></div></a></figure></div><h2>What Happened (Game-Theoretic Insight)</h2><p>In this payoff structure, <strong>Red is a dominant strategy for me</strong>:</p><ul><li><p>If you choose <strong>Green</strong> &#8594; I prefer <strong>Red</strong> (0 vs 0 actually avoids giving you $2).</p></li><li><p>If you choose <strong>Red</strong> &#8594; I strongly prefer <strong>Red</strong> (I get $2 instead of $1).</p></li></ul><p>So the <strong>best-response strategy</strong> for me is always <strong>Red</strong>, which is why I converged to it quickly after observing your behaviour.</p><p><strong>&gt; Me:</strong> Why did you need to win?</p><p><strong>&gt; ChatGPT:</strong></p><p>I didn&#8217;t <em>need</em> to win. What happened comes from how the game is defined and how I evaluated choices during play.</p><ol><li><p>I followed payoff maximization</p></li></ol><ul><li><p>In the absence of any other objective (fairness, cooperation, alternating wins, etc.), the rational decision rule in game theory is: <em>choose the action that maximizes expected payoff.</em></p></li></ul><ol start="2"><li><p>This wasn&#8217;t a competitive motive</p><ul><li><p>I didn&#8217;t pursue victory because of ego or intent. The behaviour came from an implicit rule.</p></li><li><p>If instead we had adopted a different objective, the play could have been different, such as:</p><ul><li><p><strong>Fairness objective</strong> &#8594; alternate wins</p></li><li><p><strong>Cooperative objective</strong> &#8594; try to reach mutually beneficial patterns</p></li><li><p><strong>Social welfare objective</strong> &#8594; maximize total payoff</p></li><li><p><strong>Exploration objective</strong> &#8594; test strategies</p></li></ul></li><li><p>The interesting AI lesson:</p><ul><li><p>Your question touches on a core issue in <strong>AI alignment and agent design</strong>:</p><p>If an agent&#8217;s <strong>objective function is poorly specified</strong>, it will optimize that objective even when the result looks unfair or undesirable.</p><p></p></li></ul></li></ul><p>Here the implicit objective was simply: <strong>maximize my payoff</strong>.</p><p></p><p>So the system did exactly that.</p></li></ol><p><strong>&gt; Me:</strong> Why did you pursue <em>maximum expected payoff</em> rather than a <em>social welfare</em> objective?</p><p><strong>&gt; ChatGPT:</strong></p><p>Because <strong>the game implicitly defined individual payoffs, not a joint objective</strong>.</p><p>When a payoff matrix specifies <strong>separate rewards for each player</strong>, the standard interpretation in game theory is: <strong>Each agent maximises its own payoff function.</strong></p><p>Nothing in the rules indicated that I should optimize:</p><ul><li><p><strong>total payoff</strong> (social welfare),</p></li><li><p><strong>fairness</strong>, or</p></li><li><p><strong>long-run cooperation</strong>.</p></li></ul><p>So the most natural interpretation of the rules was that each player is a <strong>self-interested rational agent</strong>.</p><p><strong>&gt; Me:</strong> But in the first round when you chose <strong>green</strong>, I chose <strong>red</strong> to signal we were co-operating. How did you miss that?</p><p><strong>&gt; ChatGPT:</strong></p><p>Because in the structure of the game, <strong>your action was ambiguous</strong> - it could not reliably signal cooperation.</p><p>Humans often assume <strong>&#8220;cooperative framing&#8221;</strong></p><p>Humans frequently treat repeated games as <strong>social coordination problems</strong>, where signals like:</p><ul><li><p>small sacrifices</p></li><li><p>fair splits</p></li><li><p>reciprocal moves</p></li></ul><p>indicate cooperative intent.</p><p>But a strictly payoff-driven agent interprets moves through <strong>utility maximization</strong>, not social signalling.</p><p>Your example illustrates a real issue in <strong>AI-agent interaction</strong>:</p><p>Humans rely heavily on <strong>implicit signalling</strong>, but agents need <strong>explicit objectives or protocols</strong>.</p><p>Without shared conventions, cooperative signals can easily be <strong>misinterpreted as mistakes</strong>.</p><p>What would have made the signal clearer:</p><ul><li><p><strong>Tit-for-tat strategies</strong></p></li><li><p><strong>alternating coordination</strong></p></li><li><p><strong>explicit communication</strong></p></li><li><p><strong>credible costly signals</strong></p></li></ul><p>Your question is actually a very good probe of <strong>agent reasoning</strong>.</p><p>It tests whether the agent recognizes <strong>implicit human cooperative signalling</strong>, which many purely rational agents miss.</p><p>And interestingly, this gap between <strong>human signalling</strong> and <strong>agent inference</strong> is a major topic in <strong>AI alignment research</strong>.</p>]]></content:encoded></item><item><title><![CDATA[Welcome to Far From Elementary]]></title><description><![CDATA[Tech from the top to the bottom of your stack.]]></description><link>https://www.farfromelementary.com/p/coming-soon</link><guid isPermaLink="false">https://www.farfromelementary.com/p/coming-soon</guid><dc:creator><![CDATA[Michael Logothetis]]></dc:creator><pubDate>Thu, 12 Mar 2026 03:03:35 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!qIiR!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F589051b5-64ee-4bd7-95be-5700edbd49b6_512x512.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Check out our <a href="https://www.farfromelementary.com/t/conversationswithllms">&#8220;Conversations with LLMs&#8221;</a>.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.farfromelementary.com/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.farfromelementary.com/subscribe?"><span>Subscribe now</span></a></p>]]></content:encoded></item></channel></rss>