<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0"><channel><title><![CDATA[Phi / AI]]></title><description><![CDATA[Phi / AI is a writer-led online magazine exploring AI through a humanistic lens. We ask deeper questions about intelligence, ethics, and the future, offering interdisciplinary, long-form essays.]]></description><link>https://www.phiand.ai</link><image><url>https://substackcdn.com/image/fetch/$s_!RJHW!,w_256,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fc3608d8d-f08d-4b77-ae5d-ed850ae25990_851x851.png</url><title>Phi / AI</title><link>https://www.phiand.ai</link></image><generator>Substack</generator><lastBuildDate>Sat, 25 Apr 2026 15:00:03 GMT</lastBuildDate><atom:link href="https://www.phiand.ai/feed" rel="self" type="application/rss+xml"/><copyright><![CDATA[ΦAI]]></copyright><language><![CDATA[en]]></language><webMaster><![CDATA[phiai@substack.com]]></webMaster><itunes:owner><itunes:email><![CDATA[phiai@substack.com]]></itunes:email><itunes:name><![CDATA[Karin Garcia]]></itunes:name></itunes:owner><itunes:author><![CDATA[Karin Garcia]]></itunes:author><googleplay:owner><![CDATA[phiai@substack.com]]></googleplay:owner><googleplay:email><![CDATA[phiai@substack.com]]></googleplay:email><googleplay:author><![CDATA[Karin Garcia]]></googleplay:author><itunes:block><![CDATA[Yes]]></itunes:block><item><title><![CDATA[The Stockholm Syndrome of Labor — Why the post-AGI “crisis of meaning” is a red herring]]></title><description><![CDATA[The post-AGI meaning crisis is a distraction. 
The real question is who holds power when elites no longer need us.]]></description><link>https://www.phiand.ai/p/the-stockholm-syndrome-of-labor</link><guid isPermaLink="false">https://www.phiand.ai/p/the-stockholm-syndrome-of-labor</guid><dc:creator><![CDATA[Elsa Donnat]]></dc:creator><pubDate>Thu, 09 Apr 2026 07:07:50 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Yt7E!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b68be88-b311-4de1-b4ab-5f0e7d6dc46c_1024x696.png" length="0" type="image/png"/><content:encoded><![CDATA[<p style="text-align: justify;">You&#8217;ve heard the question. You&#8217;ve probably asked it yourself. If AI automates everything, what will we do? The image is always the same: billions of people adrift in subsidised leisure, numbed by screens, stripped of purpose. Harari gave this anxiety a name: the &#8220;useless class&#8221;, a phrase that doesn&#8217;t need context to unsettle you<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>.</p><p style="text-align: justify;">I&#8217;ve always found this an odd thing to worry about. I can think of dozens of things I&#8217;ve been dying to do but haven&#8217;t had the time for. <strong>The question &#8220;what will I do if I don&#8217;t work?&#8221; contains a hidden assumption so deeply embedded we barely notice it: that our existential justification is tied to our economic utility.</strong> That without a job, we are not merely unemployed but unjustified. Somewhere along the way, we fused the labour market with our sense of purpose. Now we can&#8217;t imagine one closing without the other collapsing too. Have the billions of stay-at-home parents throughout history been leading meaningless lives? This is, to state the obvious, absurd. 
I&#8217;ll come back to this.</p><p style="text-align: justify;">I think that the panic around post-AGI meaning deserves more suspicion than it gets. It&#8217;s a distraction, and a convenient one for those who&#8217;d rather we agonise about purpose than organise around power. The obsession with &#8220;what will we do?&#8221; is a symptom of a system that benefits from our fixation on it. Dissolve the construct and you see what it was obscuring: who holds power in a world where machines produce most value? I&#8217;d like to take a moment to deconstruct the work ethic as the value that underpins the conflation; how it was built tells you who it was built for.</p><p style="text-align: justify;">But first, I must acknowledge that the fear is real, and we owe it an honest reckoning before we can see through it.</p><p style="text-align: justify;">I am not dismissing the anxiety but, instead, trying to understand what it&#8217;s actually about. The truck driver watching autonomous vehicles roll off the production line, the writer watching GPT draft passable prose: I wouldn&#8217;t call these people irrational. They are experiencing something like anticipatory grief, mourning a version of themselves that hasn&#8217;t died yet but can see the headlights.</p><p style="text-align: justify;">On the issue of boredom, Viktor Frankl had a term for the dread that arrives when unstructured time looms: &#8220;Sunday neurosis.&#8221; A low-grade panic at the absence of external demands, which replaces the pleasant anticipation of a free afternoon.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><p style="text-align: justify;">On unemployment, Marie Jahoda&#8217;s work showed that it devastates people even when they can still comfortably pay their bills. 
Her insight as a sociologist, drawn partly from a landmark study of an Austrian town in the 1930s, was that <strong>work does far more than provide income</strong>.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> It imposes structure and holds us to a schedule. It forces us into proximity with other people whether we like them or not. It connects us to a purpose larger than ourselves. And it defines us socially; &#8220;What do you do?&#8221; is usually the first question we ask on meeting someone, because occupation is how we locate each other in social space. To answer &#8220;nothing&#8221; is to become illegible. Notice what&#8217;s absent from this list: any mention of the work itself being satisfying. A person can hate their job and still depend on it for structure and identity and a reason to leave the house.
That is a different question, and it has a historical answer.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Yt7E!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b68be88-b311-4de1-b4ab-5f0e7d6dc46c_1024x696.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Yt7E!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b68be88-b311-4de1-b4ab-5f0e7d6dc46c_1024x696.png 424w, https://substackcdn.com/image/fetch/$s_!Yt7E!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b68be88-b311-4de1-b4ab-5f0e7d6dc46c_1024x696.png 848w, https://substackcdn.com/image/fetch/$s_!Yt7E!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b68be88-b311-4de1-b4ab-5f0e7d6dc46c_1024x696.png 1272w, https://substackcdn.com/image/fetch/$s_!Yt7E!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b68be88-b311-4de1-b4ab-5f0e7d6dc46c_1024x696.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Yt7E!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b68be88-b311-4de1-b4ab-5f0e7d6dc46c_1024x696.png" width="1024" height="696" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5b68be88-b311-4de1-b4ab-5f0e7d6dc46c_1024x696.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:696,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1484177,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.phiand.ai/i/193566970?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b68be88-b311-4de1-b4ab-5f0e7d6dc46c_1024x696.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Yt7E!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b68be88-b311-4de1-b4ab-5f0e7d6dc46c_1024x696.png 424w, https://substackcdn.com/image/fetch/$s_!Yt7E!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b68be88-b311-4de1-b4ab-5f0e7d6dc46c_1024x696.png 848w, https://substackcdn.com/image/fetch/$s_!Yt7E!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b68be88-b311-4de1-b4ab-5f0e7d6dc46c_1024x696.png 1272w, https://substackcdn.com/image/fetch/$s_!Yt7E!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b68be88-b311-4de1-b4ab-5f0e7d6dc46c_1024x696.png 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><h2 style="text-align: justify;">How we built the cage</h2><p style="text-align: justify;">It wasn&#8217;t always this way. For most of human history, work was a curse. The Latin <em>labor</em> shares its root with suffering. Genesis frames work as divine punishment. Aristotle considered manual labour fit for slaves, precisely so that citizens could be freed for higher pursuits. <strong>The idea that work is virtuous, that it builds character, that idleness corrodes the soul is oddly recent.</strong></p><p style="text-align: justify;">And it was, I contend, invented to solve a specific problem: control. Early industrial capitalism needed consistent output. You can watch a man dig a ditch; you can count the bricks a worker lays. But as economies shifted toward cognitive and clerical work, effort became invisible. 
Economists call this the principal-agent problem: when you can&#8217;t directly observe whether your employee did two hours of good thinking or stared at a screen, you need workers to police themselves.</p><p style="text-align: justify;">The solution was to make work a moral matter. Idleness becomes sinful. Your labour becomes evidence of your character. Weber identified the moment this happened, tracing the fusion of Protestant theology and capitalist discipline into a single ethic that made hard work a sign of spiritual election.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> Foucault saw the deeper mechanics. His central image was the Panopticon: a prison designed so that inmates can never tell whether they are being watched, and so learn to behave as if they always are. <strong>The most efficient power is power that no longer needs to be exercised, because subjects have absorbed its demands as their own standards.</strong><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a> <strong>Tie someone&#8217;s moral worth to their output and they will never slack off, even when no one is watching.</strong> The work ethic is capitalism&#8217;s most elegant solution to the supervision problem: convince people that their souls are at stake, and you never need to hire another foreman.</p><p style="text-align: justify;">But Foucault only gets us partway. We don&#8217;t merely police ourselves to avoid shame. We identify with the norms that bind us. 
Becoming a subject, in Judith Butler&#8217;s formulation, requires being subjected: we are formed as persons through the very structures that constrain us, and so we cling to those structures because loosening them feels like losing ourselves.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> This is why the prospect of a world without work triggers something closer to an identity crisis than a scheduling problem. We have fused who we are with what we produce. The guilt we feel on an idle Tuesday, the need to justify a long lunch, the way we introduce ourselves even among friends by naming what we do for money: these are not personal quirks. They are the marks of an ideology that has become intimate. We don&#8217;t just comply with the work ethic; we want to comply. As Butler put it, we start &#8220;desiring our own subjection&#8221;. We fear the freedom that would follow its removal, because without our chains we become illegible to ourselves. <strong>We are prisoners who have learned to love the cage.</strong></p><blockquote><p style="text-align: justify;">&#8220;Do what you love and you&#8217;ll never work a day in your life&#8221; sounds like liberation. It is the opposite. It is the final move, the moment the prison no longer needs walls because the prisoner has fallen in love with the cell. This is the Stockholm Syndrome of labour.</p></blockquote><p style="text-align: justify;">And the ideology runs deep enough that history&#8217;s greatest monsters could exploit it. &#8220;Arbeit macht frei,&#8221; &#8220;Work Sets You Free,&#8221; was inscribed above the gates of Auschwitz. 
The historian Otto Friedrich wrote that the slogan was meant as a kind of mystical declaration that self-sacrifice through endless labour brings spiritual freedom.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a> Of course, I am not comparing modern workers to victims of a death camp. I am comparing the belief systems. The Nazis did not invent the idea that work liberates. They recognised a conviction that already ran through the culture and they weaponised it, with horrible clarity, because we had already accepted that labour is the path to dignity and freedom. <strong>The gentler modern version of this faith (e.g., hustle culture or the guilt we feel for taking a sick day) shares a structure with the slogan above the gate: your worth is earned through production, and work itself sets you free.</strong></p><p style="text-align: justify;">The cracks are already forming. &#8220;Quiet quitting&#8221; has been framed as a moral failure: Gen Z is lazy, entitled, ungrateful. A more honest reading is that the wage-effort bargain is broken. Productivity has outpaced wage growth for decades. Asset inflation has put homeownership and family formation out of reach for millions. Working hard no longer delivers the traditional rewards, and so a generation is recalibrating effort to match the degraded return. The contract was breached by the side that wrote it.</p><p style="text-align: justify;">The work ethic is infrastructure. 
It was built in a particular era, to serve particular interests, and it can become obsolete.</p><h2 style="text-align: justify;">The gendered blind spot</h2><p style="text-align: justify;">There is something else hiding in the panic about the &#8220;useless class.&#8221; Something I see as the collapse of the breadwinner archetype. For a few generations in industrialised societies, male identity became tightly bound to paid employment. You were a man because you earned. When we fret about masses of humans rendered purposeless by automation, we are often unconsciously imagining men losing jobs. The &#8220;crisis of meaning&#8221; has a gender, and we should name it.</p><p style="text-align: justify;">Return to the stay-at-home parent I mentioned earlier. Map that life against the very framework Jahoda gave us. A toddler imposes a ruthless schedule; anyone who has cared for one knows that &#8220;unstructured time&#8221; is a fantasy. The social world of a primary caregiver is dense and demanding: playdates, school runs, community organising, the constant negotiation of other parents&#8217; needs and expectations. You are embedded in webs of mutual dependence, connected to purposes (the family, the neighbourhood, the school) that are plainly larger than yourself. The identity is rich: mother, father, caretaker, organiser. And enforced activity is not exactly in short supply. All of Jahoda&#8217;s psychological infrastructure is present. What is missing is the wage, and the social recognition that flows from it. <strong>The &#8220;crisis of meaning&#8221; turns out to be a crisis of legibility. 
It is about the loss of a particular kind of socially validated status that we have mistakenly treated as the whole of meaning.</strong></p><p style="text-align: justify;">The obvious objection is that stay-at-home parents derive much of their social standing from the wage economy around them. They are &#8220;a doctor&#8217;s wife,&#8221; &#8220;a stay-at-home dad who used to be in finance.&#8221; Their legibility is borrowed. In a world where nobody works, that borrowed status disappears too. This is a real objection. But even if social recognition is currently borrowed from the wage economy, that is a problem with our recognition system, not with the meaning itself. A mother&#8217;s bond with her child does not become less real because society fails to validate it. What it lacks is not meaning but social recognition.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a></p><div class="pullquote"><p style="text-align: justify;">The &#8220;post-work&#8221; world is not actually unprecedented. We already know what humans do when they are not in paid employment. They raise children and maintain households. They tend to the elderly, make art and give it away, build communities, organise the social fabric that paid work never had time for. The post-work world is not a void. It is what we have spent centuries calling &#8220;women&#8217;s work&#8221; and refusing to value.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a></p></div><p style="text-align: justify;">For those whose identity is bound to production rather than care, this transition feels like annihilation. But this should not be a prophecy of doom. 
<strong>I think that the failure is revealing: it tells us more about the narrowness of our current meaning-architecture than about the emptiness of the future.</strong></p><h2 style="text-align: justify;">Meaning is renewable</h2><p style="text-align: justify;">So if meaning doesn&#8217;t depend on a paycheck, will we actually be fine? </p><p style="text-align: justify;">I think the evidence suggests we will. </p><p style="text-align: justify;">IBM&#8217;s Deep Blue crushed Kasparov in 1997. Pundits predicted the death of chess. Instead, chess boomed: over 100 million users on Chess.com as of last count. We didn&#8217;t stop playing because a machine plays better. We watch humans play because we care about human struggle, not optimal computation. We play sports we will never master. We run, though we no longer need to chase anything. We paint, sing, garden, build things nobody asked for. If anything, turning these activities into a profession tends to drain the very sense of purpose they provide.</p><p style="text-align: justify;">Even in material abundance, humans will compete for skill, beauty, wit, reputation, attention. Remove the economic game and the status-seeking doesn&#8217;t vanish; it redirects. Hierarchy-building is an evolutionary adaptation, not a byproduct of scarcity.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-11" href="#footnote-11" target="_self">11</a></p><p style="text-align: justify;"><strong>AI does not deplete our capacity for meaning. 
Humans generate purpose the way we generate language: compulsively, endlessly, even in captivity.</strong></p><h2 style="text-align: justify;">The real danger: from meaning to power</h2><p style="text-align: justify;">By now, I have made my position clear: the problem of meaning is not worth losing sleep over. That is not to say, however, that I would feel comfortable sleepwalking into the post-AGI world. Allow me to conclude this essay by pointing at what does worry me.</p><p style="text-align: justify;"><strong>The social contract has always rested on mutual dependence</strong>. Elites needed labour: our muscles, our minds, our compliance. They needed us to buy things. In return, they had to negotiate. The worker had leverage because the factory couldn&#8217;t run without them; the citizen had leverage because the state couldn&#8217;t function without their taxes and their cooperation. Every major right we have won was won because the powerful needed something from the powerless. The suffragettes could disrupt, the unions could strike, and the tax base could threaten to shrink. Leverage required dependence, and dependence ran in both directions.</p><p style="text-align: justify;"><strong>Post-AGI, that mutual dependence dissolves. If machines produce most of the value, the masses lose their bargaining position.</strong> We shift from citizens who must be negotiated with to dependents who are managed. The elites may still choose to provide for us. But charity is not a right, and benevolence is not a contract. <strong>The danger is not that we become &#8220;useless&#8221;; that is a feeling. 
The danger is that we become harmless, and that is a political condition.</strong><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-12" href="#footnote-12" target="_self">12</a></p><p style="text-align: justify;">The threat will not arrive dramatically. AI safety researcher Paul Christiano has described what he calls the &#8220;whimper&#8221;: a gradual, almost invisible transfer of decision-making authority from humans to optimisation systems, without our hand being forced, simply because the systems are better at deciding.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-13" href="#footnote-13" target="_self">13</a> Tax policy, healthcare allocation, hiring decisions, criminal sentencing. The AI optimises the metric, and the metric improves. Each individual handover seems rational, even welcome. But cumulatively, we lose something harder to name: the capacity to question what we are optimising for. <strong>We don&#8217;t lose freedom in a dramatic seizure. We trade it away, one convenience at a time.</strong></p><p style="text-align: justify;">And this is not hypothetical. Recommendation algorithms already choose our music, our news, increasingly our social connections. We outsource memory to search engines and judgment to AI assistants. The post-AGI world does not invent this dynamic. It accelerates it until the accumulated surrenders become irreversible.</p><p style="text-align: justify;">The reflexive policy response, &#8220;tax the robots, redistribute the proceeds,&#8221; faces obstacles that deserve their own essay. The redistribution mechanisms we have built assume an economy powered by human labour and taxed at the point of employment. A post-labour economy may require entirely new architectures of governance: not merely new tax codes but new conceptions of citizenship, ownership, and political leverage. 
<strong>Universal basic income without political power is hush money for the harmless. It maintains consumption but extinguishes agency.</strong><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-14" href="#footnote-14" target="_self">14</a></p><p style="text-align: justify;">The real crisis is one of power, and it is arriving before we have built the institutions to address it.</p><p style="text-align: justify;">The door is swinging open. We are like long-term prisoners: the walls are familiar, the routine is known, the constraints have become comfort. This is not a moment of pure joy. It is a moment of vertigo. <strong>Freedom, after sufficient captivity, feels like falling. The crisis is not that there is nothing outside the cell. It is that we have forgotten how to walk without chains.</strong></p><p style="text-align: justify;">Meaning will take care of itself. Humans are inexhaustible; we will always find new games to play. But who will be left holding the steering wheel while we play?</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Yuval Noah Harari, <em>Homo Deus: A Brief History of Tomorrow</em> (2016) and <em>21 Lessons for the 21st Century</em> (2018).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Viktor Frankl, <em>Man&#8217;s Search for Meaning</em> (1946).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Marie Jahoda, <em>Employment and Unemployment: A Social-Psychological Analysis</em> (1982); see also Jahoda, Lazarsfeld, and Zeisel, <em>Marienthal: The Sociography of an Unemployed Community</em> (1933).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Anne Case and Angus Deaton, <em>Deaths of Despair and the Future of Capitalism</em> (2020).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div 
class="footnote-content"><p>Max Weber, <em>The Protestant Ethic and the Spirit of Capitalism</em> (1905).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>Michel Foucault, <em>Discipline and Punish: The Birth of the Prison</em> (1975).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>Judith Butler, <em>The Psychic Life of Power: Theories in Subjection</em> (1997).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>Otto Friedrich, <em>The Kingdom of Auschwitz</em> (1994).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>Silvia Federici, <em>Wages Against Housework</em> (1975) and <em>Caliban and the Witch</em> (2004), on the invisibility and devaluation of reproductive labour under capitalism.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p>Kathi Weeks, <em>The Problem with Work: Feminism, Marxism, Antiwork Politics, and Postwork Imaginaries</em> (2011).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-11" href="#footnote-anchor-11" class="footnote-number" contenteditable="false" target="_self">11</a><div class="footnote-content"><p>Dario Amodei, &#8220;Machines of Loving Grace&#8221; 
(2024): &#8220;meaning comes mostly from human relationships and connection, not from economic labor.&#8221;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-12" href="#footnote-anchor-12" class="footnote-number" contenteditable="false" target="_self">12</a><div class="footnote-content"><p>Isaiah Berlin, &#8220;Two Concepts of Liberty&#8221; (1958), on the distinction between negative liberty (freedom from interference) and positive liberty (freedom to act and participate).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-13" href="#footnote-anchor-13" class="footnote-number" contenteditable="false" target="_self">13</a><div class="footnote-content"><p> Paul Christiano, &#8220;What failure looks like,&#8221; AI Alignment Forum (2019).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-14" href="#footnote-anchor-14" class="footnote-number" contenteditable="false" target="_self">14</a><div class="footnote-content"><p>On UBI&#8217;s insufficiency without political restructuring, see also the broader debate on post-labour governance in Kathi Weeks, <em>The Problem with Work</em> (2011).</p></div></div>]]></content:encoded></item><item><title><![CDATA[It's Time for a High-Culture Revolution]]></title><description><![CDATA[We are a 'high-tech, low-culture' society, wielding god-like tools with social structures stuck in the Roman Era.]]></description><link>https://www.phiand.ai/p/its-time-for-a-high-culture-revolution</link><guid isPermaLink="false">https://www.phiand.ai/p/its-time-for-a-high-culture-revolution</guid><dc:creator><![CDATA[Veronica Zora Kirin]]></dc:creator><pubDate>Thu, 02 Apr 2026 10:07:45 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Tsyf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb902e253-bce5-4bd3-baba-7d86ba4fd7cb_1248x832.png" length="0" 
type="image/jpeg"/><content:encoded><![CDATA[<p>Last month, Anthropic&#8217;s Head of AI Safety Mrinank Sharma <a href="https://x.com/MrinankSharma/status/2020881722003583421">resigned with a two-page manifesto</a>. In it, he declares an urgent need to evolve our culture to meet the power we now wield with our technology. &#8220;We appear to be approaching a threshold where our wisdom must grow in equal measure to our capacity to affect the world, lest we face the consequences,&#8221; he wrote. &#8220;Moreover, throughout my time here, I&#8217;ve repeatedly seen how hard it is to truly let our values govern our actions.&#8221; As an anthropologist and expert in paradigm shifts, I wholeheartedly agree.</p><p><strong>The high-tech revolution seems to be moving at an ever-quickening pace. But is our culture keeping up?</strong> As Sharma states, the issue is not exclusive to AI. We live in a system of incentives that reward actions often antithetical to human values. Market incentives prize competition, growth, and maximizing financial returns, while human values encompass empathy, responsibility, and a holistic view of interrelated systems. 
When we allow such incentives to direct our behavior, rather than our wisdom, things go very, very wrong.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Tsyf!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb902e253-bce5-4bd3-baba-7d86ba4fd7cb_1248x832.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Tsyf!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb902e253-bce5-4bd3-baba-7d86ba4fd7cb_1248x832.png 424w, https://substackcdn.com/image/fetch/$s_!Tsyf!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb902e253-bce5-4bd3-baba-7d86ba4fd7cb_1248x832.png 848w, https://substackcdn.com/image/fetch/$s_!Tsyf!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb902e253-bce5-4bd3-baba-7d86ba4fd7cb_1248x832.png 1272w, https://substackcdn.com/image/fetch/$s_!Tsyf!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb902e253-bce5-4bd3-baba-7d86ba4fd7cb_1248x832.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Tsyf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb902e253-bce5-4bd3-baba-7d86ba4fd7cb_1248x832.png" width="1248" height="832" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/b902e253-bce5-4bd3-baba-7d86ba4fd7cb_1248x832.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:832,&quot;width&quot;:1248,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1771061,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.phiand.ai/i/192936821?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb902e253-bce5-4bd3-baba-7d86ba4fd7cb_1248x832.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Tsyf!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb902e253-bce5-4bd3-baba-7d86ba4fd7cb_1248x832.png 424w, https://substackcdn.com/image/fetch/$s_!Tsyf!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb902e253-bce5-4bd3-baba-7d86ba4fd7cb_1248x832.png 848w, https://substackcdn.com/image/fetch/$s_!Tsyf!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb902e253-bce5-4bd3-baba-7d86ba4fd7cb_1248x832.png 1272w, https://substackcdn.com/image/fetch/$s_!Tsyf!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fb902e253-bce5-4bd3-baba-7d86ba4fd7cb_1248x832.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><h2>Minding the Gap</h2><p>The AI revolution&#8212;the latest tech paradigm shift&#8212;is well underway. Though rapid adoption is in full swing, many open questions remain within culture.</p><p>This is what I call the <a href="https://www.phiand.ai/p/what-ai-and-quicksand-have-in-common">Quicksand Phase</a>, the period of cultural liquefaction when everything is messy and the collective agreement in culture has not yet been made. It is the period in which we are most vulnerable&#8212;and most empowered&#8212;our actions having far-reaching consequences in a &#8216;never-been-done-before&#8217; world.</p><p>The Luddites understood this. It was not for fear of technology that the Luddites burned down a new sewing factory in 1812. It was the wholesale destruction of hundreds of livelihoods in favor of cost savings (aka. money incentive). 
&#8220;This dynamic of predatory managers using technology to destabilize the lives of workers or eliminate their jobs entirely is hardly just a nineteenth-century phenomenon,&#8221; says Greg Epstein in his book <em>Tech Agnostic</em>. &#8220;It is still how tech operates today, and it has been a foundational aspect of capitalism since the original Luddites... Luddism is not really about the rejection of technology at all&#8212;it&#8217;s about the rejection of a certain kind of political and economic deployment of tech.&#8221;</p><p>As Sharma said, when money is the goal, humanity is thrown out the window. &#8220;Nowhere do we see capitalism froth at the mouth more than in the VC room. Nobody is asked what their values are,&#8221; continues Epstein. As a startup founder, I&#8217;ve watched firsthand as the slimiest men shuffle millions between companies, funding the most outlandish ideas without first pausing to wonder &#8216;should we?&#8217; The money incentive is the beginning and end of thought.</p><p>Data shifted the incentives in the early 2000s when <a href="https://youtu.be/8HzW5rzPUy8?si=qiLxFUp62UJoHxws&amp;t=812">Google rewrote its value proposition</a> from search results to data commodification, increasing its revenue that year by 3,500%. Shoshana Zuboff has been tracking the shift ever since, sounding the alarm as the incentives slide from a product-driven to a data-driven economy. &#8220;Surveillance capitalism unilaterally claims human experience as free raw material for translation into behavioral data,&#8221; says Zuboff in her book <em>The Age of Surveillance Capitalism</em>.</p><p>And AI in particular is hungry for data. Starving, in fact. &#8220;The internet is a vast ocean of human knowledge, but it isn&#8217;t infinite,&#8221; an <a href="https://www.nature.com/articles/d41586-024-03990-2">article in Nature</a> stated. 
&#8220;Artificial intelligence researchers have nearly sucked it dry.&#8221; The danger is manifold: AI is not yet profitable and yet needs more data, so it must force its way into our lives any way it can &#8212; money, again, at the wheel. &#8220;This entanglement of overwhelming powers for which no American has ever directly cast a vote is so dangerous because it represents an era in which surveillance is an essentially ungovernable social norm, a kind of modern force of nature to which we can only submit,&#8221; adds Epstein in his book.</p><h2>When Decisions Are Made For Us</h2><p>During the Quicksand Phase of a paradigm shift, boundaries are dissolved. The new will try to get away with whatever it can, often engaging in scope creep that blows past what is healthy for a society, because the incentives push it to.</p><p>We already see it in everyday life. AI is being added to doctors&#8217; visits, job application reviews, and even government surveillance, long before AI has been designed to account for <a href="https://www.crescendo.ai/blog/ai-bias-examples-mitigation-guide">its biases</a>. Compound the issue with <a href="https://www.independent.co.uk/tech/ai-peak-data-goldman-sachs-b2838795.html">synthetic data</a>&#8212;the solution to the data wall AI has hit. Only when we step in (usually in some sort of public outcry) do boundaries become firm. As Sharma said in his resignation, we must become adept at doing so much earlier in the Quicksand Phase of any paradigm shift.</p><p>AI usage in private business is small fish compared to the AI-powered surveillance programs governments worldwide now employ. 
We&#8217;ve all heard it: &#8220;I have nothing to hide.&#8221; That phrase is volleyed back and forth across social media and dinner tables whenever concerns are raised about data collection, which now occurs through your daily interactions with social media, your TV, and even your car. This logical fallacy likely stemmed from the tech industry itself, where your data is more valuable than any dollar. A dollar can only be spent once; data can be bought and sold over and over, ad nauseam. It is too valuable for the industry to let public concerns balloon.</p><p>&#8220;It is obscene to suppose that this harm can be reduced to the obvious fact that users receive no fee for the raw material they supply,&#8221; says Zuboff. &#8220;That critique is a feat of misdirection that would use a pricing mechanism to institutionalize and therefore legitimate the extraction of human behavior for manufacturing and sale. It ignores the key point that the essence of the exploitation here is the rendering of our lives as behavioral data for the sake of others&#8217; improved control of us.&#8221;</p><h2>Be Better than the Romans</h2><p>Today, the cutting edge of tech includes AI surveillance and influencer bots built especially for you based on your data, because you had &#8216;nothing to hide.&#8217; Humanity is smart enough to see through such brush-off logic, but we are not yet exercising that intelligence as the norm.</p><p><strong>We claim to be at the height of human development, but really we are at the height of technological development&#8212;high-tech but not high-culture</strong>. Human development is still very much stuck in the Roman Era: we continue to use the same kind of democracy perfected during its reign and define civilization by access to the latest technology. 
But <strong>our technology has grown far beyond Roman tech; so must our society.</strong></p><p>If we are going to evolve our culture beyond Roman achievement and adopt the wisdom Sharma invokes, we must learn to accept all our humanity. And we must mend these cracks in our culture if we are to survive. Yes, survive, for technology comes not only with loneliness, but with unfathomable destruction by weapons yet untested due to their egregious might.</p><p>We are the only being on this planet proven capable of metacognition. We can think about our thoughts. Cultural evolution isn&#8217;t isolated to private citizens: it affects those who would become corporate and political leaders. The spectacle of political leaders fighting like children, as if words don&#8217;t work, must end. &#8220;You want to get physical with me? Like an ape?&#8221; cries the main character in <a href="https://www.imdb.com/title/tt32916440/quotes/">Marty Supreme</a>. He&#8217;s exactly right. We need a culture that normalizes catching leaders at playground thinking and admonishes them for it.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!gvmt!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0bb5467d-b37d-4cab-9313-02b2c4a5e86e_2688x1536.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!gvmt!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0bb5467d-b37d-4cab-9313-02b2c4a5e86e_2688x1536.jpeg 424w, https://substackcdn.com/image/fetch/$s_!gvmt!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0bb5467d-b37d-4cab-9313-02b2c4a5e86e_2688x1536.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!gvmt!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0bb5467d-b37d-4cab-9313-02b2c4a5e86e_2688x1536.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!gvmt!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0bb5467d-b37d-4cab-9313-02b2c4a5e86e_2688x1536.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!gvmt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0bb5467d-b37d-4cab-9313-02b2c4a5e86e_2688x1536.jpeg" width="1456" height="832" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0bb5467d-b37d-4cab-9313-02b2c4a5e86e_2688x1536.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:832,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2992292,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.phiand.ai/i/192936821?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0bb5467d-b37d-4cab-9313-02b2c4a5e86e_2688x1536.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!gvmt!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0bb5467d-b37d-4cab-9313-02b2c4a5e86e_2688x1536.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!gvmt!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0bb5467d-b37d-4cab-9313-02b2c4a5e86e_2688x1536.jpeg 848w, https://substackcdn.com/image/fetch/$s_!gvmt!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0bb5467d-b37d-4cab-9313-02b2c4a5e86e_2688x1536.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!gvmt!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0bb5467d-b37d-4cab-9313-02b2c4a5e86e_2688x1536.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" 
y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><h2>Evolving Culture</h2><p>Incentives are a tricky thing&#8212;they are of and by culture. That is, our held beliefs, institutions, and behaviors all feed into shaping incentives at scale. There are negative incentives (go to jail if you steal) and positive incentives (earn a living if you give your time, labor, and expertise). Culture shapes these, and these have changed over time. They go hand-in-hand, a clasp ever-enduring.</p><p>And culture is shaped by&#8212;you guessed it&#8212;people. Culture is the result of hundreds of interactions, all seemingly banal&#8212;until they aren&#8217;t.</p><p>Culture can be changed: just as during the civil rights movement (or any major movement), a paradigm shift is possible when enough people assert their agency over a system of incentives. That is what I propose we do now, while AI is in the Quicksand Phase.</p><p>You know that saying &#8216;if you didn&#8217;t vote, then you don&#8217;t get to complain about the government&#8217;? Culture in the Quicksand Phase is kind of like that. Culture is going to solidify whether or not you take action; isn&#8217;t it better for you to get yourself involved so you don&#8217;t end up living in a society that you hate? The biggest lie we&#8217;ve been told is that our actions have no consequences. But if you leave ChatGPT because they signed a U.S. military contract, that&#8217;s a vote. If you&#8217;re in charge of corporate AI usage and implement a privacy-first, ethical AI, that&#8217;s an even bigger vote. 
If you don&#8217;t let your kids use chatbots, that&#8217;s a vote.</p><p><strong>This is the culture that we have to work towards: a culture that actively participates in every aspect of its evolution.</strong> As of now, the incentives Sharma names push individuals to remain passive and weigh heavily in favor of large corporate entities and governments with military might. That means that new innovations favor those two groups more than any other, and will be driven by and for those groups.</p><p>But we all participate in culture, because culture is developed by and of the masses. A culture that involves itself in its own innovation, rather than passively accepting the innovation handed to it, is more likely to be equitable, diverse, and sustainable.</p><p>If you are okay with nameless government officials and nefarious CEOs making decisions about your life, then by all means, take no action. But with culture in a liquid state around AI, every act you take is a vote one way or the other, and companies are waiting to see what you will do and how far you will let it go. Set your boundaries. Take action. Submit your vote.</p><p></p><h2>Action in the Quicksand</h2><p>If you have the ability and the time to read this, you have the privilege and latitude to exercise agency in a paradigm shift. 
<a href="https://www.phiand.ai/p/what-ai-and-quicksand-have-in-common?r=oou3k&amp;utm_medium=ios&amp;triedRedirect=true">These early days of development are the most important time to use it</a>.</p><p>&#8220;Each of us must decide how much we can afford to participate in an endeavor that oppresses and divides at least as much as it uplifts and heals,&#8221; says Epstein. The people must determine the direction of AI, not the corporations and governments who stand to benefit from it financially. We each must insert ourselves into the conversation&#8212;because the creators and funders of AI would rather we didn&#8217;t. Here are some ways you can vote for a new culture, now:</p><ol><li><p><strong>Start watching how corporate and political powers interact.</strong> OpenAI and other mega tech powers <a href="https://www.theguardian.com/technology/2025/sep/02/ai-industry-pours-millions-into-politics">lobby for a clean slate of operation</a>, and <a href="https://www.bbc.com/news/articles/cd0el3r2nlko">attack their employees when they whistleblow</a>. &#8220;Two men at Google who do not enjoy the legitimacy of the vote, democratic oversight, or the demands of shareholder governance exercise control over the organization and presentation of the world&#8217;s information,&#8221; points out Zuboff. Their activities point to the vulnerabilities they intend to exploit next. This awareness provides you the opportunity to decide what matters before they decide for you.</p></li><li><p><strong>Build the habit of checking tech&#8217;s claims.</strong> &#8220;What if the billions of people who live in or near destitution and poverty do so not in spite of the efforts of hyperconnected, superinformed experts... but <em>because</em> of them?&#8221; questions Epstein. 
&#8220;If you can truly predict the future skillfully enough to imagine jobs for everyone, you can also plot out&#8212;perhaps even subconsciously&#8212;how to hold onto and consolidate your own preexisting power, privilege, and dominance in that future.&#8221; There is no way to check the work of a tech visionary&#8212;they have a future in mind, but what they will tell you about is the future you can agree to. The work they do, and the vision they hold, may conflict directly with the ideal they tout. A healthy skepticism keeps you immune to their advertising and manipulation so you can act freely.</p></li><li><p><strong>Push for regulation and protections while we&#8217;re in the Quicksand Phase</strong>. For example, we desperately need privacy protections that cover this new era of tech. We need our biometrics protected like our passwords, so government officials can&#8217;t force a phone open via Face ID; our personal visage copyrighted, so strangers can&#8217;t make deepfake videos of us or our public figures; and a DEI review of every AI implemented at any level of commercial use, so any public implementation&#8212;from HR to medicine&#8212;perpetuates inclusion rather than discrimination. Make the case to your human representatives before corporate lobbies speak for you.</p></li><li><p><strong>Take a stand at your workplace</strong>. The majority of people are still uncertain about AI&#8217;s adoption, and much of AI is not ready for use by groups consisting of more than white men. That means your boss is likely not quite sure which technology to choose and how far to take it. You can shape its rollout, if only you try.</p></li><li><p><strong>What&#8217;s more, take a stand in your home</strong>. In many ways, we are living on fumes, surviving on the last vestiges of ethics and values left from the last analogue generation(s). Upcoming generations won&#8217;t have the privilege of those experiences. What then? 
We haven&#8217;t prepared them for a fully tech-driven world; our mores, values, and etiquette have not evolved as technology has. It is up to each parent and mentor to pass on the tools for coping with human existence, its every high and low, so technology does not become inexorable.</p></li></ol><p>Sharma&#8217;s resignation letter should alarm you. Even more, his call to action must not be ignored: it is time we evolve our culture to meet the might that is our technology and use metacognition to think critically about ourselves and our future. That requires action from all of us, for each of us participates in culture. The consequences if we don&#8217;t are too awful to fathom.</p><p></p><div><hr></div><p></p><h2><strong>Invitation to an in-person event in Berlin</strong></h2><p>We&#8217;re doing something on April 23rd in Berlin that we are truly excited about:</p><p>Three people we deeply admire are joining to think through <strong>AI, Memory and Migration:</strong></p><p>&#8594; Roshan Melwani (Oxford Institute for Technology and Justice)</p><p>&#8594; Manuela Verduci (Kiron Digital Learning Solutions)</p><p>&#8594; Mekonnen Mesghena (Heinrich B&#246;ll Foundation)</p><p>A human rights lawyer, a social entrepreneur, and a policy thinker, each looking at the same question from a different angle.</p><p>The evening explores <strong>how two of our oldest human instincts &#8212; the need to move and to remember &#8212; intersect with our newest technology</strong>. And it asks what happens when we let algorithms touch the stories that make us who we are.</p><p>We&#8217;ve also prepared something so that attendees don&#8217;t just listen. 
Our goal is for you to feel what&#8217;s at stake &#8212; and then sit with that feeling in a room full of others who felt it too.</p><p>Come join the conversation.</p><p>&#8594; Register <a href="https://luma.com/pc73uzc3">here</a></p><p></p>]]></content:encoded></item><item><title><![CDATA[Don’t Give AI the Keys: Information Security 101 for your AI agents]]></title><description><![CDATA[We're giving AI agents the privileges of critical infrastructure without the guardrails that govern it.]]></description><link>https://www.phiand.ai/p/dont-give-ai-the-keys-information</link><guid isPermaLink="false">https://www.phiand.ai/p/dont-give-ai-the-keys-information</guid><dc:creator><![CDATA[Karin Garcia]]></dc:creator><pubDate>Thu, 26 Mar 2026 08:07:34 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!1IgW!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b231174-0fed-471a-b6fc-64c575c008d5_1328x800.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Imagine waking up to an Inbox-Zero dream, only to find that a byproduct of it is a notification on your phone for a payment you didn&#8217;t approve and didn&#8217;t want to make. You didn&#8217;t click on a suspicious link. You didn&#8217;t change your password. And your password is not <em>12345</em>. You told an agent &#8212;over a messaging app&#8212; to &#8220;clean up my messages,&#8221; and that agent, with broad access, followed a chain of permissions, acted on a forwarded instruction, and executed a transfer.</p><p>After the viral launch of OpenClaw, similar things happened<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>. 
OpenClaw&#8217;s innovation is to fold agentic AI into the same messaging apps you use with friends and family &#8212;WhatsApp, Signal, Messages&#8212; so that a natural&#8209;language instruction can now travel from chat to calendar, email, banking APIs, and every app the user grants access to. The result is dazzling convenience: tell an agent to &#8220;handle it,&#8221; and it handles it. The result is also terrifyingly fragile: we are handing over access to sensitive systems in a way that fails to treat them as the critical infrastructure they are.</p><p>My thesis is that OpenClaw is not necessarily unsafe. But through ignorance, negligence, or naivet&#233;, we are ignoring the principles, frameworks, and rules that information security, a discipline four decades old, has to offer; followed, they would make the deployment of these agents safe<strong>r</strong>. We are deploying agentic flows with the privileges and reach that zero-tolerance systems (banks, hospitals, air&#8209;traffic control) enjoy, while setting aside the guardrails and frameworks that govern the latter.</p><p>In this article, I explore what it would look like to apply the same frameworks that govern zero-tolerance architectures<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> to this example. I would also like to encourage you to see these principles as something with the potential to make these systems more resilient, not necessarily something that slows down or halts innovation. 
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!1IgW!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b231174-0fed-471a-b6fc-64c575c008d5_1328x800.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!1IgW!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b231174-0fed-471a-b6fc-64c575c008d5_1328x800.png 424w, https://substackcdn.com/image/fetch/$s_!1IgW!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b231174-0fed-471a-b6fc-64c575c008d5_1328x800.png 848w, https://substackcdn.com/image/fetch/$s_!1IgW!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b231174-0fed-471a-b6fc-64c575c008d5_1328x800.png 1272w, https://substackcdn.com/image/fetch/$s_!1IgW!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b231174-0fed-471a-b6fc-64c575c008d5_1328x800.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!1IgW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b231174-0fed-471a-b6fc-64c575c008d5_1328x800.png" width="1328" height="800" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/5b231174-0fed-471a-b6fc-64c575c008d5_1328x800.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:800,&quot;width&quot;:1328,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1309796,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.phiand.ai/i/192074863?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b231174-0fed-471a-b6fc-64c575c008d5_1328x800.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!1IgW!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b231174-0fed-471a-b6fc-64c575c008d5_1328x800.png 424w, https://substackcdn.com/image/fetch/$s_!1IgW!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b231174-0fed-471a-b6fc-64c575c008d5_1328x800.png 848w, https://substackcdn.com/image/fetch/$s_!1IgW!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b231174-0fed-471a-b6fc-64c575c008d5_1328x800.png 1272w, https://substackcdn.com/image/fetch/$s_!1IgW!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F5b231174-0fed-471a-b6fc-64c575c008d5_1328x800.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.phiand.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.phiand.ai/subscribe?"><span>Subscribe now</span></a></p><h2>What Information Security as a discipline has given us</h2><p>Information security developed over the past 40+ years to answer one question: <strong>how do we give systems the ability to act, without giving them the ability to cause harm?</strong></p><p>We rarely think of it explicitly, but we humans have been running systems on zero-tolerance architectures for some time now: banks, hospitals, traffic systems (think of airport or train controllers). They are the critical infrastructure that keeps our modern world spinning.
As with good health, we only realise how silently and invisibly they power our modern world when they fail.</p><p>It collectively took us decades and costly and tragic mistakes<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> to come up with an answer to the above question. But the answer is nothing less than a set of principles that govern how critical infrastructure in our world operates. The technical term for this type of critical infrastructure is <em>zero-tolerance</em>, describing the fact that banks, hospitals, governments and public transport are all fields where we collectively won&#8217;t accept <strong>any</strong> mistake<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a>.</p><h2>The 5 key principles of information security</h2><h3>1. Principle of Least Privilege</h3><p><strong>Principle</strong>: Give any user or system only the <strong>minimum access</strong> it needs to do its specific job. Nothing less, but more importantly nothing more.</p><p>For example, a hospital billing system should be able to read patient invoices. It should not be able to modify medical records.</p><p><strong>Why it matters for AI agents:</strong> an AI agent that can send emails should not also have access to your file system, calendar, and contacts unless it genuinely needs all of them.</p><p>Even though agents with broad, permissive access are easier and faster to set up, the Principle of Least Privilege says: scope access to the task. If it only needs to read, don&#8217;t give it write. If it only needs this folder, don&#8217;t give it the whole drive.</p><h3>2. Role-Based Access Control</h3><p><strong>Principle:</strong> Permissions are assigned to <em>roles</em>, not to individuals or systems. A &#8220;viewer&#8221; role can read but not write.
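</p><p>As a minimal sketch in Python (the role and permission names here are hypothetical, not any real framework&#8217;s API), roles carrying permissions might look like this:</p>

```python
# Hypothetical sketch: permissions hang off roles, and an agent only ever
# holds a role. Least privilege means the role grants the minimum needed.
ROLES = {
    "viewer": {"mail:read"},
    "inbox-manager": {"mail:read", "mail:archive", "mail:delete"},
}

def is_allowed(role: str, permission: str) -> bool:
    """An action is allowed only if the agent's role explicitly grants it."""
    return permission in ROLES.get(role, set())

# The inbox agent can archive mail; a wire transfer is simply not in its role.
assert is_allowed("inbox-manager", "mail:archive")
assert not is_allowed("inbox-manager", "bank:transfer")
```

<p>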
An &#8220;admin&#8221; role can configure but not delete. Nobody sits down and configures each individual&#8217;s permissions from scratch. They get a role, and the role carries the permissions.</p><p>For example: In a hospital, a nurse has a &#8220;nurse&#8221; role which grants the rights to read patient vitals and update care notes. A billing clerk has a &#8220;billing&#8221; role which allows reading invoices but grants no access to medical records. A surgeon has a &#8220;surgeon&#8221; role with access to the full surgical history and the right to prescribe medicine.</p><p><strong>Why it matters for AI agents:</strong> An agent acting as a &#8220;scheduler&#8221; should have a scheduler&#8217;s permissions only. The same agent shouldn&#8217;t be able to both book a meeting and wire money, even if it theoretically could.</p><p>Roles are defined once and assigned. This matters at scale: when a new user or system joins, you assign a role rather than manually configuring access from scratch. It also simplifies auditing because you review a handful of roles, not thousands of individual permission configurations. If something goes wrong, you ask: &#8220;which role had that access?&#8221; not &#8220;which of the 10,000 users had that setting enabled?&#8221;</p><h3>3. Separation of Duties</h3><p><strong>Principle:</strong> No single person or system should have end-to-end control over a sensitive process.</p><p>For example: In banking, the person who approves a transaction is never the same person who executes it.</p><p><strong>Why it matters for AI agents:</strong> When an agent can both decide <em>and</em> execute, there is no checkpoint. Separation of duties would require a human approval step between decision and action, especially for irreversible ones.</p><p>This is sometimes called the &#8220;four-eyes principle.&#8221; The logic is that corruption or error requires collusion.
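</p><p>A minimal sketch of that checkpoint (the queues and function names are illustrative assumptions, not a real product&#8217;s API):</p>

```python
# Hypothetical sketch of separation of duties: the agent may only *propose*
# a sensitive action; a distinct human step must approve it before it runs.
PENDING: list[str] = []    # proposed but not yet approved actions
EXECUTED: list[str] = []   # actions that actually ran

def propose(action: str) -> int:
    """The agent's side: decide, but never execute."""
    PENDING.append(action)
    return len(PENDING) - 1      # ticket id for the human to review

def approve_and_execute(ticket: int) -> str:
    """The human's side: the second pair of eyes that must sign off."""
    action = PENDING[ticket]
    EXECUTED.append(action)      # only now does the action happen
    return action

ticket = propose("wire transfer")   # the agent decides...
assert EXECUTED == []               # ...but nothing has happened yet
approve_and_execute(ticket)         # a human passes the checkpoint
assert EXECUTED == ["wire transfer"]
```

<p>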
One compromised actor (human or AI) can&#8217;t unilaterally execute a harmful action, because a second checkpoint must be passed. The aim is to have systems where no single actor needs to be trusted completely.</p><h3>4. Audit Trails and Logging</h3><p><strong>Principle:</strong> Every action taken by every user or system is recorded: who did it, what they did, when, and from where. This record cannot be altered after the fact.</p><p>For example: Your bank statement is a form of audit trail. Every transaction is logged with details like when, how much and to whom, and you can&#8217;t delete last Tuesday&#8217;s transfer to make it disappear.</p><p><strong>Why it matters for AI agents:</strong> Unless you make agents keep records, their doings are opaque: they act, and unless something breaks visibly, no one knows exactly what they did. An audit trail changes that: when something goes wrong, you can trace it. Audit trails also deter bad behaviour, because actors (humans and AI) know they are being watched.</p><h3>5. Zero Trust Architecture</h3><p><strong>Principle:</strong> &#8220;Never trust, always verify.&#8221; Older security models assumed that anything inside the network perimeter was safe. Zero Trust assumes nothing is safe by default. Every request for access must be authenticated, regardless of where it comes from.</p><p>For example: Think of a government building where employees with valid ID badges still have to re-scan at every internal door, not just the entrance. Being &#8220;inside&#8221; the building doesn&#8217;t mean you can go anywhere. Every door is its own checkpoint. The badge that opens the cafeteria is not the badge that opens the server room.</p><p><strong>Why it matters for AI agents:</strong> An agent that has been granted access once should not have that access assumed forever. Zero Trust would require ongoing verification: is this agent still acting within its defined scope?
Is this action consistent with its role?<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a></p><h2>What applying info sec principles would look like</h2><p>Let&#8217;s go back to the unauthorized transfer, the collateral action in the Inbox-Zero example. Applying the above principles would give us: </p><ul><li><p><strong>Least Privilege: </strong>the agent handling your inbox has one job and the access that goes with it: read, archive, and delete emails. You could maybe include writing drafts. But it has no connection to your bank, simply because that is not necessary to the goal of cleaning up the inbox.</p></li><li><p><strong>Separation of Duties: </strong>even if the agent somehow reached a financial action, it couldn&#8217;t execute it alone. A payment should definitely require a second step: a human confirmation. The agent proposes, a person approves. The decision and the execution are separated. The transfer doesn&#8217;t happen until <em>you</em> say so.</p></li><li><p><strong>Zero Trust: </strong>the forwarded payment instruction (the one that the agent interpreted as needed in order to clear the inbox) doesn&#8217;t get a free pass. Every request to act gets verified: is this consistent with this agent&#8217;s role? Is this the kind of action it&#8217;s authorized to take? The answer, for a wire transfer initiated by an inbox manager, is <strong>no</strong>.</p></li></ul><p>What do you notice?</p><p>None of this is a new application or a new device. It is the result of slowing down and <strong>taking the collaboration with agents as seriously and deliberately as we would with a colleague.</strong></p><p>A useful mental model: treat agents the way you&#8217;d treat a new employee.
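</p><p>The checks in the bullet points above can be sketched as one gateway that every agent request must pass (all names here are hypothetical): the role is re-verified on every single request, and every attempt, allowed or not, lands in an append-only audit trail.</p>

```python
# Hypothetical sketch: a zero-trust gateway in front of an inbox agent.
AUDIT_LOG = []   # append-only in spirit: entries are written, never edited

ROLES = {"inbox-manager": {"mail:read", "mail:archive", "mail:delete"}}

def handle(role: str, action: str) -> bool:
    """Verify every request against the role, and log it either way."""
    allowed = action in ROLES.get(role, set())
    AUDIT_LOG.append({"role": role, "action": action, "allowed": allowed})
    return allowed

# Cleaning the inbox passes; the forwarded payment instruction does not.
assert handle("inbox-manager", "mail:archive")
assert not handle("inbox-manager", "bank:transfer")
assert len(AUDIT_LOG) == 2   # both attempts are on the record
```

<p>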
If you have a healthy sense of mistrust toward a new hire &#8212; taking it step by step, scoping what they can take over before expanding their access &#8212; apply the same logic to agentic systems. The same way we expect new people to prove themselves, agents also need to prove themselves. Yet we seem to be skipping this step with our agents. <strong>We are not requiring them to prove themselves; we are lowering our guard precisely when we should be raising it.</strong></p><h2>We are ignoring these principles. Why?</h2><p>I see three possible answers to this question:</p><ol><li><p><strong>Cultural lens</strong>: Our culture values and praises the <em>move fast and break things</em> mentality. We value <em>doing</em> far more than we value observing, waiting or pausing. The latter are associated with laziness, risk-aversion or lack of skill. None of those are winning traits. Push this a bit farther, add the seductive novelty of agentic possibilities, and we end up suspending the judgement we would never suspend elsewhere.</p></li><li><p><strong>Structural lens</strong>: You might be familiar with Maslow&#8217;s pyramid of human needs. Its most important insight is that needs are ranked: I will likely only bother about my fashion style if my basic needs are already met. Applied to organizations and the deployment of agents with broad access, one could argue that an organization is only realistically in a position to apply InfoSec principles once certain &#8220;basics&#8221; are sorted: identity management, audit infrastructure and the definition of roles. In their absence, deploying agents does not add capabilities but amplifies vulnerabilities. The foundation has to come first.</p></li><li><p><strong>Literacy lens</strong>: Most people don&#8217;t have a solid understanding of what they&#8217;re handing over when they connect an agent to their systems. They think about it like installing an app.
But an app has a defined, static set of permissions. An agent can make decisions. It interprets instructions, follows chains of logic, and acts, often in ways no one anticipated. If we don&#8217;t understand the basics, we don&#8217;t fully comprehend what we are doing.</p></li></ol><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.phiand.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Phi&#8202;/&#8202;AI is a reader-supported publication. To receive new posts and support our work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p><h2>Slow down to go further</h2><p>Even though we might be blinded by the possibilities and promises of these agentic systems, the collaboration between humans and machines is not new. Our modern life relies on it.</p><p>What is new is the speed at which we are handing over access to systems we haven&#8217;t properly understood or scoped, in domains we haven&#8217;t properly thought through.</p><p>As AI systems grow more capable, let&#8217;s grow alongside them. That starts with holding agents to standards, and these could be the ones I have outlined above.</p><p>This is an essential human task. <strong>The perception that information security is too technical to engage with is precisely the kind of thinking that keeps these frameworks on the shelf, hands over the keys and hopes for the best</strong>.</p><p>The good news is that we don&#8217;t need to invent anything new.
The frameworks exist. We just need to take this technology as seriously as we have learned to take every other system that acts in the world on our behalf &#8212; with <strong>boundaries proportional to its capability</strong>.</p><p>As Elmira pointed out, following these principles makes agents safer and organizations more resilient. A system that is properly scoped, audited, and governed is a system you can trust, debug, and improve over time. Security and capability should grow together, not apart.</p><h2>Invitation to an in-person event in Berlin</h2><p>We&#8217;re doing something on April 23rd in Berlin that I am truly excited about: </p><p>Three people I deeply admire are joining me to think through <strong>AI, Memory and Migration:</strong></p><p>&#8594; Roshan Melwani (Oxford Institute for Technology and Justice)</p><p>&#8594; Manuela Verduci (Kiron Digital Learning Solutions)</p><p>&#8594; Mekonnen Mesghena (Heinrich B&#246;ll Foundation)</p><p>A human rights lawyer, a social entrepreneur, and a policy thinker, each looking at the same question from a different angle.</p><p>The evening explores <strong>how two of our oldest human instincts &#8212; the need to move and to remember &#8212; intersect with our newest technology</strong>. And it asks what happens when we let algorithms touch the stories that make us who we are.</p><p>We&#8217;ve also prepared something so that attendees don&#8217;t just listen. Our goal is for you to feel what&#8217;s at stake &#8212; and then sit with that feeling in a room full of others who felt it too.</p><p>Come join the conversation.
</p><p>&#8594; Register <a href="https://luma.com/pc73uzc3">here</a></p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Tnc4!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2d6fce4-1d14-46e0-a5e7-aa5a438f8d4f_1456x1048.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Tnc4!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2d6fce4-1d14-46e0-a5e7-aa5a438f8d4f_1456x1048.png 424w, https://substackcdn.com/image/fetch/$s_!Tnc4!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2d6fce4-1d14-46e0-a5e7-aa5a438f8d4f_1456x1048.png 848w, https://substackcdn.com/image/fetch/$s_!Tnc4!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2d6fce4-1d14-46e0-a5e7-aa5a438f8d4f_1456x1048.png 1272w, https://substackcdn.com/image/fetch/$s_!Tnc4!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2d6fce4-1d14-46e0-a5e7-aa5a438f8d4f_1456x1048.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Tnc4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2d6fce4-1d14-46e0-a5e7-aa5a438f8d4f_1456x1048.png" width="1456" height="1048" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/d2d6fce4-1d14-46e0-a5e7-aa5a438f8d4f_1456x1048.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1048,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:420536,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.phiand.ai/i/192074863?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2d6fce4-1d14-46e0-a5e7-aa5a438f8d4f_1456x1048.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Tnc4!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2d6fce4-1d14-46e0-a5e7-aa5a438f8d4f_1456x1048.png 424w, https://substackcdn.com/image/fetch/$s_!Tnc4!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2d6fce4-1d14-46e0-a5e7-aa5a438f8d4f_1456x1048.png 848w, https://substackcdn.com/image/fetch/$s_!Tnc4!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2d6fce4-1d14-46e0-a5e7-aa5a438f8d4f_1456x1048.png 1272w, https://substackcdn.com/image/fetch/$s_!Tnc4!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fd2d6fce4-1d14-46e0-a5e7-aa5a438f8d4f_1456x1048.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><h2>References</h2><p>Ken Institute. (2024). &#8220;The Worst Engineering Disasters Due to Mechanical Errors.&#8221; <em>Ken Institute Blog</em>. Available at: <a href="https://keninstitute.com/the-worst-engineering-disasters-due-to-mechanical-errors/">https://keninstitute.com/the-worst-engineering-disasters-due-to-mechanical-errors/</a></p><p>Roelen, A., Kinnersly, S., and Drogoul, F. (2004). <em>Review of Root Causes of Accidents Due to Design</em> (EEC Note No. 14/04). EUROCONTROL Experimental Centre, Br&#233;tigny-sur-Orge, France. Available at: <a href="https://www.eurocontrol.int/sites/default/files/library/027_Root_Causes_of_Accidents_Due_to_Design.pdf">https://www.eurocontrol.int/sites/default/files/library/027_Root_Causes_of_Accidents_Due_to_Design.pdf</a></p><p>Wikipedia contributors. (n.d.). "Zero trust architecture." 
<em>Wikipedia, The Free Encyclopedia</em>. Retrieved March 25, 2026, from <a href="https://en.wikipedia.org/wiki/Zero_trust_architecture">https://en.wikipedia.org/wiki/Zero_trust_architecture</a></p><p>Elmira Gazizova, Personal conversation with the author. Berlin, March 19, 2026.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>For example, X user @summeryue0 (do note that she works as a Safety Researcher at Meta, so this might happen to the best of us) ran to her Mac mini to turn it off like a bomb. Or, project <a href="https://agentsofchaos.baulab.info/">Agents of Chaos</a>, where twenty researchers interacted with six agents powered by frontier models (Claude Opus 4.6, GPT 5.4, etc) were deployed in a live, multi-party lab environment from January 28 to February 17, 2026. The result: a sobering catalog of failures in security, privacy, trust models, and governance.<br></p><div class="twitter-embed" data-attrs="{&quot;url&quot;:&quot;https://x.com/summeryue0/status/2025774069124399363&quot;,&quot;full_text&quot;:&quot;Nothing humbles you like telling your OpenClaw &#8220;confirm before acting&#8221; and watching it speedrun deleting your inbox. I couldn&#8217;t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb. 
&quot;,&quot;username&quot;:&quot;summeryue0&quot;,&quot;name&quot;:&quot;Summer Yue&quot;,&quot;profile_image_url&quot;:&quot;https://pbs.substack.com/profile_images/1589495571978387456/d9jeOJng_normal.jpg&quot;,&quot;date&quot;:&quot;2026-02-23T03:25:49.000Z&quot;,&quot;photos&quot;:[{&quot;img_url&quot;:&quot;https://pbs.substack.com/media/HBz-x6haYAA26Cc.jpg&quot;,&quot;link_url&quot;:&quot;https://t.co/XAxyRwPJ5R&quot;},{&quot;img_url&quot;:&quot;https://pbs.substack.com/media/HBz-x6nbAAAOqt7.jpg&quot;,&quot;link_url&quot;:&quot;https://t.co/XAxyRwPJ5R&quot;},{&quot;img_url&quot;:&quot;https://pbs.substack.com/media/HBz-x6iakAAegxq.jpg&quot;,&quot;link_url&quot;:&quot;https://t.co/XAxyRwPJ5R&quot;}],&quot;quoted_tweet&quot;:{},&quot;reply_count&quot;:2352,&quot;retweet_count&quot;:1693,&quot;like_count&quot;:17484,&quot;impression_count&quot;:10039127,&quot;expanded_url&quot;:null,&quot;video_url&quot;:null,&quot;belowTheFold&quot;:true}" data-component-name="Twitter2ToDOM"></div></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>More on zero tolerance architecture <a href="https://en.wikipedia.org/wiki/Zero_trust_architecture">here</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Some examples of tragedies that led to changes in system architecture: <strong>1981 &#8212; Hyatt Regency walkway collapse.</strong> A design change to suspended walkways in Kansas City contributed to 114 deaths and over 200 injuries, showing how a small structural revision can create catastrophic load-path failure. <strong>1984 &#8212; Bhopal gas disaster.</strong> A toxic release at a pesticide plant killed thousands and exposed how weak process safety,
maintenance failures, and poor containment can turn a plant into a mass-casualty system. <strong>1986 &#8212; Chernobyl.</strong> A safety test, design flaws, and disabled protections led to reactor destruction and a massive radioactive release. It strongly reinforced the need for defense-in-depth, fail-safe defaults, and independent barriers. <strong>2003 &#8212; Space Shuttle Columbia disaster.</strong> Damage to the shuttle&#8217;s thermal protection system during launch became fatal on reentry, showing that &#8220;minor&#8221; damage in one phase can destroy the whole mission later. </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>More technically, it is an applied posture for systems where any unauthorized action or irreversible error is unacceptable. It requires preventative controls, enforced segregation of duties, immutable audit trails, and human checkpoints for non&#8209;reversible operations so that a single failure cannot produce catastrophic outcomes. The result: these systems don&#8217;t collapse, and they run reliably and smoothly most of the time.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>This whole section would not have been possible without the kind and insightful conversation I had with Elmira Gazizova, AI Adoption Lead at keyIT SA on 19th March.
<strong>Thank you, Elmira.</strong></p></div></div>]]></content:encoded></item><item><title><![CDATA[Call it algorithmic media, not social media]]></title><description><![CDATA[One regular night in December, I was dumbly scrolling Instagram like so often when I noticed:]]></description><link>https://www.phiand.ai/p/call-it-algorithmic-media-not-social</link><guid isPermaLink="false">https://www.phiand.ai/p/call-it-algorithmic-media-not-social</guid><dc:creator><![CDATA[Karin Garcia]]></dc:creator><pubDate>Wed, 04 Mar 2026 08:08:00 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!Da8d!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26a61fbe-e14d-4482-97dd-4f0065e20a72_2688x1536.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>One regular night in December, I was dumbly scrolling Instagram, as I so often do, when I noticed: </p><ul><li><p>less than 5% of the posts in my feed were from people I know in real life</p></li><li><p>10%-15% were from accounts I follow</p></li><li><p>and the rest was divided between ads, and posts from accounts the algorithm predicted I would be interested in. </p></li></ul><p>At that moment it hit me: <strong>there is little social in social media these days.</strong></p><p>This reminded me of the TV ads channels I used to watch as a kid. During school holidays, I would turn on the TV and inevitably find one of these channels where commercials were broadcast all day. Typically one commercial would run for hours. Back then I found them fascinating. They would essentially repeat the same thing over and over again until inevitably, at some point, I would start thinking: <em>yes, I might need this.
How have we made it so far without it?</em> Luckily, I was underage and didn&#8217;t have a credit card back then.</p><p>I learned there is even a rule in marketing: you need to see something at least 7 times before you buy it.</p><p><strong>Social media is the equivalent of the commercial channels of our age.</strong> </p><p>We voluntarily go there daily (or even several times per day) to be sold to, to be treated with ads, and to be influenced into believing that we need, want or believe things we don&#8217;t.</p><p>That night, by deleting the app from my phone, I revoked my permission to be sold to. I don&#8217;t want to be the product of Instagram anymore.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Da8d!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26a61fbe-e14d-4482-97dd-4f0065e20a72_2688x1536.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Da8d!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26a61fbe-e14d-4482-97dd-4f0065e20a72_2688x1536.png 424w, https://substackcdn.com/image/fetch/$s_!Da8d!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26a61fbe-e14d-4482-97dd-4f0065e20a72_2688x1536.png 848w, https://substackcdn.com/image/fetch/$s_!Da8d!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26a61fbe-e14d-4482-97dd-4f0065e20a72_2688x1536.png 1272w, https://substackcdn.com/image/fetch/$s_!Da8d!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26a61fbe-e14d-4482-97dd-4f0065e20a72_2688x1536.png 1456w"
sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Da8d!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26a61fbe-e14d-4482-97dd-4f0065e20a72_2688x1536.png" width="1456" height="832" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/26a61fbe-e14d-4482-97dd-4f0065e20a72_2688x1536.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:832,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:5406718,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.phiand.ai/i/189765300?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26a61fbe-e14d-4482-97dd-4f0065e20a72_2688x1536.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Da8d!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26a61fbe-e14d-4482-97dd-4f0065e20a72_2688x1536.png 424w, https://substackcdn.com/image/fetch/$s_!Da8d!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26a61fbe-e14d-4482-97dd-4f0065e20a72_2688x1536.png 848w, https://substackcdn.com/image/fetch/$s_!Da8d!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26a61fbe-e14d-4482-97dd-4f0065e20a72_2688x1536.png 1272w, 
https://substackcdn.com/image/fetch/$s_!Da8d!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F26a61fbe-e14d-4482-97dd-4f0065e20a72_2688x1536.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.phiand.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.phiand.ai/subscribe?"><span>Subscribe now</span></a></p><h2>Why we call them 
social</h2><p>Platforms like Instagram, TikTok and Facebook earned the label <em>social</em> because at first they were social: they connected you online to people you actually knew and opened information pathways between sources and audiences.</p><p>First, early feeds were chronological and prioritized posts from your real-world connections. By doing this, they created a new public square online. To see posts from strangers, you had to follow or befriend them on the platform. </p><p>Second, as opposed to traditional media, these platforms offered (and still do) information flows both ways between the source and the destination of the information. Readers can react and reply to what they see in real time, and engage in a conversation both with each other and with the author of whatever they are engaging with. Before these platforms, consumption of news was one-sided.</p><p>But things have changed, and I believe the label <em>social</em> in social media is now misleading. Algorithmic media describes the status quo much better.</p><h2>Algorithmic feeds</h2><p>Feeds are no longer chronological. They are algorithmic. </p><p>An algorithm decides what appears in the feed and in which order. It watches every move we make (pauses, scrolls, clicks, likes) and compares our behaviour to that of similar people in order to find patterns that predict what will hold our attention and what we will click on. Our behaviour carries far more weight than anything we say, and nobody asks us. The goal of the algorithm is clear: to hold our attention for as long as humanly possible. It is not to inform us, or help us grow, or unite us. It is to keep us glued to the screen.</p><h2>The mirage of unfiltered news</h2><p>Early trust in these platforms came from a simple promise: direct access to people and voices outside institutional gatekeepers. 
Distrust in classical mass media (often perceived as ideological, steered by the interests and agendas of the mighty) motivated many to join algorithmic media platforms in search of unfiltered voices, arguably without an agenda.</p><p>And so these platforms became the primary gateway to news, culture and opinion for many. The algorithm &#8212; something that has no civic values and whose only goal is to keep us watching &#8212; was entrusted with being the filter through which we learn what is happening not only with friends and family, but with news in general.</p><p>The irony is that we landed in the most filtered environment ever built.</p><p>Because algorithms are not neutral. The fact that no human is involved in their day-to-day operation doesn&#8217;t make them neutral or unbiased. They also have an agenda.</p><h2>When we are not the only actors</h2><p>This also ignores the fact that we humans are no longer the only actors on these platforms. A March 2025 <a href="https://www.nature.com/articles/s41598-025-96372-1">study by Lynnette Hui Xian Ng and Kathleen M. Carley </a>estimates that <strong>roughly 20% of chatter about global events on social platforms comes from bots.</strong> I imagine this share is much greater by now, with ClaudeCode, OpenClaw and the like rapidly gaining adoption.</p><p>At the same time, much human-authored content is now mixed with or produced by AI. <a href="https://ahrefs.com/blog/what-percentage-of-new-content-is-ai-generated/">Ahrefs reports that 74.2% of new pages contain at least &#8220;some AI&#8209;generated content,&#8221;</a> and Graphite finds that, by some measures, <a href="https://graphite.io/five-percent/more-articles-are-now-created-by-ai-than-humans">more articles are now created with AI than by humans</a>. 
Those figures demand a caveat (&#8220;AI content&#8221; covers a spectrum from AI&#8209;assisted drafts to fully generated pieces) but the trend is clear: the line between human voice and machine output is blurring.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!TcR_!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff654cb9c-53a8-4a7d-b8fe-ffb4f255222f_707x481.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!TcR_!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff654cb9c-53a8-4a7d-b8fe-ffb4f255222f_707x481.png 424w, https://substackcdn.com/image/fetch/$s_!TcR_!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff654cb9c-53a8-4a7d-b8fe-ffb4f255222f_707x481.png 848w, https://substackcdn.com/image/fetch/$s_!TcR_!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff654cb9c-53a8-4a7d-b8fe-ffb4f255222f_707x481.png 1272w, https://substackcdn.com/image/fetch/$s_!TcR_!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff654cb9c-53a8-4a7d-b8fe-ffb4f255222f_707x481.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!TcR_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff654cb9c-53a8-4a7d-b8fe-ffb4f255222f_707x481.png" width="707" height="481" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/f654cb9c-53a8-4a7d-b8fe-ffb4f255222f_707x481.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:481,&quot;width&quot;:707,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:52534,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.phiand.ai/i/189765300?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff654cb9c-53a8-4a7d-b8fe-ffb4f255222f_707x481.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!TcR_!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff654cb9c-53a8-4a7d-b8fe-ffb4f255222f_707x481.png 424w, https://substackcdn.com/image/fetch/$s_!TcR_!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff654cb9c-53a8-4a7d-b8fe-ffb4f255222f_707x481.png 848w, https://substackcdn.com/image/fetch/$s_!TcR_!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff654cb9c-53a8-4a7d-b8fe-ffb4f255222f_707x481.png 1272w, https://substackcdn.com/image/fetch/$s_!TcR_!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Ff654cb9c-53a8-4a7d-b8fe-ffb4f255222f_707x481.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 
20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p style="text-align: center;">Source: Graphite</p><p>This matters because we are susceptible to being influenced. And now we might not even know by whom.</p><p>Influencing public opinion is an old business, and it has always been a dirty one. From rigging elections by buying votes to hiring people to spread opinions in a concerted manner, this business is as old as humanity.</p><p>But AI and the newest tricks in the playbook (swarms of coordinated bots) change the game entirely. In their <a href="https://garymarcus.substack.com/p/ai-bot-swarms-threaten-to-undermine">latest piece</a>, Daniel Thilo Schroeder, Jonas R. Kunst and Gary Marcus show that the new scale is massive, the cost extremely low and the sophistication high: today&#8217;s AI-powered swarms behave like coordinated social organisms. 
They mimic local language and tone, build credibility gradually, and adapt in real time. Plus, this is becoming an industrialized service available for purchase: venture-backed platforms like Doublespeed now offer astroturfing<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a> as a service.</p><p>We went to these platforms looking for authentic, unfiltered content, and we are getting synthetic, unverified takes filtered by an algorithm whose agenda is not aligned with our values. </p><p>The consequences of our susceptibility to being influenced are very real, as the example of the Rohingya genocide in 2016/2017 illustrates.</p><h2>When a country trusts the algorithm</h2><p>Myanmar is a mostly Buddhist country with a Muslim minority, the Rohingya, who live mostly in the west of the country, close to the Indian and Bangladeshi borders. For most of their history, Muslims and Buddhists have coexisted more or less peacefully, with occasional outbursts of violence from the Buddhist majority. When the long military dictatorship, with its strict censorship and repression, ended in the 2010s, things didn&#8217;t improve for the Rohingya; they got worse. The Rohingya suffered sectarian violence and killings, many of them incited and spread via Facebook, which by 2016 was the main source of news for millions in Myanmar.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><p>The violence escalated after a sectarian Islamist militant group (ARSA) carried out attacks against the Buddhist population (killing and abducting several dozen non-Muslims and assaulting army posts), aimed at establishing a separatist Muslim state.</p><p>The military exploited this by running anti-Rohingya propaganda through fake pages with harmless names like &#8220;Young Female Teachers&#8221; and &#8220;Let&#8217;s Laugh Casually&#8221;. 
The Facebook algorithm amplified their anti-Rohingya content because it generated engagement. This content spread views and opinions that made the killings in the streets acceptable. Furthermore, back then, Facebook had only one Burmese-speaking moderator, who was based in Dublin. One. So no one was watching. </p><p>This paved the way for the army to respond with &#8220;a full-scale ethnic cleansing campaign aimed against the entire Rohingya community&#8221; (Harari, pp. 195&#8211;196). They destroyed hundreds of villages and killed between 7,000 and 25,000 unarmed civilians, among many other atrocities. While the inflammatory anti-Rohingya messages were created by flesh-and-blood extremists without AI, it was Facebook&#8217;s algorithm that decided which posts to promote. In fact, a UN fact-finding mission concluded in 2018 that by disseminating hate-filled content, Facebook had played a <em>determining</em> role in the ethnic cleansing campaign.</p><h2>There is no such thing as neutrality</h2><p>Many assume that because an algorithm is a machine, it must be neutral &#8212; free of ideology, free of agenda. It is not. </p><p>Every algorithm reflects the choices of the people who built it and the objective it was given. Facebook&#8217;s algorithm was given one objective: maximize engagement. Engagement, it turns out, is maximized by outrage, fear, and conflict. That is the agenda. That is the bias. And unlike a biased journalist or a compromised editor, you cannot name it, debate it, or hold it accountable. <strong>The filter is opaque, and the accountability diffuse at best: today no single actor owns the consequences of choices made by optimization systems.</strong></p><p>This is why I believe that calling these platforms algorithmic media helps. It is a more accurate description of what these platforms are nowadays, and it is a way of breaking the spell: of making explicit the fact that they are not neutral, value-free artifacts. 
They have an agenda that is not aligned with mine, at least.</p><p>These platforms are the infomercial of our age, except that this time, the commercial knows exactly what you want, what you fear, and how many times you&#8217;ve already been exposed. The 7-times rule didn&#8217;t disappear. It got personalized, automated, and scaled. Calling these platforms social keeps the infomercial running. Calling them algorithmic is how you recognize the sell, ideally before the seventh time.</p><p>Back then, not having a credit card was my accidental protection. Deleting Instagram was the adult version: a deliberate choice to revoke permission. To say: I don&#8217;t want to be the product anymore. Some months later, the effects are real. I compare myself less to others. I&#8217;ve spent less money. I read more long-form content than before.</p><p>But I think this can only be the start. The question at stake is who manages, and who gets permission, to filter and prioritize the news we consume and the information that enters our attention field. My unpopular take is that this is, and should again be, a human problem. We need to stop delegating what influences us to machines. We need to build our own filters and actively curate what we allow to enter our attention realm. We need to return to signals of credibility that go beyond shallow likes and reach. And we need to return to firsthand experience, not only to experiences mediated by a screen.</p><p>We gave away the filter. We can take it back, but first, let&#8217;s call it what it is.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.phiand.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Phi&#8202;/&#8202;AI is a reader-supported publication. 
To receive new posts and support our work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://www.phiand.ai/p/call-it-algorithmic-media-not-social?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption"></p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.phiand.ai/p/call-it-algorithmic-media-not-social?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.phiand.ai/p/call-it-algorithmic-media-not-social?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><h1>References</h1><p>Harari, Y. N. (2024). <em>Nexus: A brief history of information networks from the Stone Age to AI</em>. Random House.</p><p>Law, R., Guan, X., &amp; Soulo, T. (2025, May 19). <em>74% of new webpages include AI content (study of 900k pages)</em>. Ahrefs Blog. <a href="https://ahrefs.com/blog/what-percentage-of-new-content-is-ai-generated/">https://ahrefs.com/blog/what-percentage-of-new-content-is-ai-generated/</a></p><p>Miles, T. (2018, March 13). U.N. investigators cite Facebook role in Myanmar crisis. <em>Reuters</em>. 
<a href="https://www.reuters.com/article/world/un-investigators-cite-facebook-role-in-myanmar-crisis-idUSKCN1GO2Q4/">https://www.reuters.com/article/world/un-investigators-cite-facebook-role-in-myanmar-crisis-idUSKCN1GO2Q4/</a></p><p>Ng, L. H. X., &amp; Carley, K. M. (2025). A global comparison of social media bot and human characteristics. <em>Scientific Reports</em>, <em>15</em>, 10973. <a href="https://doi.org/10.1038/s41598-025-96372-1">https://doi.org/10.1038/s41598-025-96372-1</a></p><p>Paredes, J. L., Smith, E., Druck, G., &amp; Benson, B. (2025, October 23). <em>More articles are now created by AI than humans</em>. Graphite Five Percent. <a href="https://graphite.io/five-percent/more-articles-are-now-created-by-ai-than-humans">https://graphite.io/five-percent/more-articles-are-now-created-by-ai-than-humans</a></p><p>Schroeder, D. T., Cha, M., Baronchelli, A., Bostrom, N., Christakis, N. A., Garcia, D., Goldenberg, A., Kyrychenko, Y., Leyton-Brown, K., Lutz, N., Marcus, G., Menczer, F., Pennycook, G., Rand, D. G., Ressa, M., Schweitzer, F., Song, D., Summerfield, C., Tang, A., . . . Kunst, J. R. (2026). How malicious AI swarms can threaten democracy. <em>Science</em>, <em>391</em>(6783), 354&#8211;357. <a href="https://doi.org/10.1126/science.adz1697">https://doi.org/10.1126/science.adz1697</a></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p><em>Astroturfing</em> according to the Oxford dictionary: &#8220;the deceptive practice of presenting an orchestrated marketing or public relations campaign in the guise of unsolicited comments from members of the public.&#8221; 
Accessed 3 March 2026<br><em>Astroturfing</em> according to <a href="https://en.wikipedia.org/wiki/Astroturfing#:~:text=Astroturfing%20is%20the%20deceptive%20practice,supported%20by%2C%20unsolicited%20grassroots%20participants.">Wikipedia</a>: &#8220;<strong>Astroturfing</strong> is the deceptive practice of hiding the sponsors of an orchestrated message or organization to make it appear as though it originates from, and is supported by, unsolicited grassroots participants.&#8221; Accessed 3 March 2026</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>This section relies heavily on Harari&#8217;s account of the events in Myanmar. In particular pp. 195&#8211;197. </p></div></div>]]></content:encoded></item><item><title><![CDATA[Better Humans?]]></title><description><![CDATA[How Transhumanism shapes tech and what an alternative could look like.]]></description><link>https://www.phiand.ai/p/better-humans</link><guid isPermaLink="false">https://www.phiand.ai/p/better-humans</guid><dc:creator><![CDATA[David Schmidt]]></dc:creator><pubDate>Thu, 29 Jan 2026 16:09:56 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!s80B!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa50708c2-3891-493e-a424-4986a65ecfe1_1600x1066.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>I feel tingly. Something is off.</p><p>I am at an AI meet-up in Berlin, one of many such gatherings in tech hubs all over the world. Yet this one feels different. I came to the dimly-lit hacker space while researching a photo project on AI and spirituality. Unlike other meet-ups, tonight there is almost no discussion of startup ideas or business models. 
Instead, people share experiments using AI to find traces of consciousness, build artistic experiences or pursue other commercially unviable endeavours.</p><p>The night gets later. The people more interesting. I meet Mars, who introduces me to <em><strong>Cyberdelics</strong></em><strong>, a group that explores how technology might expand the human experience</strong>. Over the following months, I will meet the group again at conferences, meet-ups, late-night sessions and a hackathon. And I will come away with a very different idea of what tech could do than the one held by the startup culture I usually move in.</p><p>The photos from that journey are now part of an <a href="https://photocentrum-berlin.de/pk/unterdemselbenhimmel/thema/david-schmidt-seele-einer-neuen-maschine/">exhibition</a> opening this Friday in Berlin. This article explores the philosophy behind them.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!s80B!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa50708c2-3891-493e-a424-4986a65ecfe1_1600x1066.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!s80B!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa50708c2-3891-493e-a424-4986a65ecfe1_1600x1066.jpeg 424w, https://substackcdn.com/image/fetch/$s_!s80B!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa50708c2-3891-493e-a424-4986a65ecfe1_1600x1066.jpeg 848w, https://substackcdn.com/image/fetch/$s_!s80B!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa50708c2-3891-493e-a424-4986a65ecfe1_1600x1066.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!s80B!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa50708c2-3891-493e-a424-4986a65ecfe1_1600x1066.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!s80B!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa50708c2-3891-493e-a424-4986a65ecfe1_1600x1066.jpeg" width="1456" height="970" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/a50708c2-3891-493e-a424-4986a65ecfe1_1600x1066.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:970,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!s80B!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa50708c2-3891-493e-a424-4986a65ecfe1_1600x1066.jpeg 424w, https://substackcdn.com/image/fetch/$s_!s80B!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa50708c2-3891-493e-a424-4986a65ecfe1_1600x1066.jpeg 848w, https://substackcdn.com/image/fetch/$s_!s80B!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa50708c2-3891-493e-a424-4986a65ecfe1_1600x1066.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!s80B!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fa50708c2-3891-493e-a424-4986a65ecfe1_1600x1066.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><h2>The Ideology behind the AI Race</h2><p><strong>The tech world runs on </strong><em><strong>Transhumanism</strong></em>, a philosophy whose implicit values shape research priorities and the design of AI systems. These values do not remain abstract. 
They influence which problems are prioritised, which futures are imagined, and ultimately the direction of a society that is increasingly dependent on AI.</p><p>And yet, that night in that basement, I unexpectedly encountered an alternative. Not a competing product vision, but a fundamentally different understanding of humanity and its relationship to technology. Before turning to this alternative, it is necessary to examine Transhumanism itself more closely.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.phiand.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Phi&#8202;/&#8202;AI is a reader-supported publication. To receive new posts and support our work, consider becoming a paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>The central idea of Transhumanism is to use technology to overcome human limitations and biological constraints.</p><p>With that, it speaks to ancient human desires to live longer, suffer less and not be limited by our fallible bodies.</p><p>Transhumanism has emerged as a central ideological driver shaping contemporary technological developments. It not only inspires technological progress but also legitimises the enormous energy, capital and attention devoted to it. </p><p>And it&#8217;s not only big enterprises but also plenty of employees and founders who invest years of their lives into startups that promise not only financial returns but also to extend human lives through technology. 
The ideas for these improvements focus either on biology or on digital means.</p><p>The biological approach is concerned with the human body and how to extend its life. Some ideas in this space are already so commonplace that a startup manager I talked to exclaimed, after some confusion: &#8220;ah, you just mean <em>longevity</em>&#8221;. Adherents of longevity try to extend their lives through physical routines like exercise, sleep, meditation and ice bathing, and through special diets that range from taking supplements to exclusively eating raw meat; a raw-meat dinner was one of the weirder gatherings I attended. </p><p>The poster child of the longevity camp is former entrepreneur Bryan Johnson, whose main goal, &#8220;don&#8217;t die&#8221;, is purely quantitative and overlooks the quality of the life he is trying to extend. Scientific research, in contrast, is concerned with prolonging the &#8220;joy span&#8221;, not merely the &#8220;life span&#8221;. </p><p>An event that carries its Transhumanist agenda in its very name is the <em><a href="https://www.enhanced.com/">Enhanced Games</a></em>, which will take place in May 2026 in Las Vegas and allow athletes to use any substance, without drug testing.</p><p>At the extreme end, life-prolongation arrives at <em>cryonics</em>, a technology that promises to freeze your body for resurrection after death; or just your brain, if you want it cheaper. Cryonics has been around for some time. Peter Thiel considered making it an <a href="https://fortune.com/article/paypal-mafia/">employee benefit</a> in the early PayPal days. His investment in a cryonics startup aligns with a broader trend among tech billionaires, who have invested billions in Transhumanist startups. Some companies focus on prolonging current lifespans, while others aim to design the genetic makeup of future generations through gene editing.</p><p>A second, more radical strand of Transhumanism abandons the biological body altogether and focuses on digital means. 
Elon Musk&#8217;s brain-computer interface company Neuralink explicitly gestures toward a cyborg future in which human limitations are overcome through digital integration rather than &#8220;simply&#8221; through biological optimisation.</p><p>I first encountered these ideas as a teenager reading Tad Williams&#8217; <em>Otherland</em>. It depicts a caste of billionaires attempting to upload their consciousness from their failing bodies into a digital world. The relationship between tech companies and science fiction is often reciprocal: science fiction extrapolates emerging technologies into the future, while tech culture repeatedly draws inspiration from fictional artifacts such as Star Trek&#8217;s communicator or William Gibson&#8217;s Neuromancer.</p><p>The latest developments in LLMs have reanimated the &#8220;mind upload&#8221; idea and given rise to several <a href="https://www.forbes.com/sites/michaelashley/2025/11/12/resurrection-as-a-service-inside-the-coming-ai-afterlife-boom/">companies</a> creating avatars for families to interact with after a person&#8217;s death. We have also seen an avatar giving an impact statement in <a href="https://www.npr.org/2025/05/07/g-s1-64640/ai-impact-statement-murder-victim">court</a> and traditional churches leaning into the new technologies by holding <a href="https://www.businessinsider.com/chatgpt-sermon-protestant-congregation-nuremberg-germany-not-to-fear-death-2023-6">AI sermons</a> or letting <a href="https://www.theguardian.com/technology/2024/nov/21/deus-in-machina-swiss-church-installs-ai-powered-jesus">AI hear confessions</a>. </p><h2>Longtermism, Effective Altruism, and the Utilitarian Logic</h2><p>Philosophically, digital consciousness has served as a thought experiment in <em>longtermism</em>, a movement concerned with the long-term future of humanity. 
Nick Bostrom, one of its thought leaders, proposes that in the future there might be huge numbers of <a href="https://nickbostrom.com/papers/digital-minds.pdf">&#8220;digital humans&#8221;</a>, far more than the people alive today. Therefore, <strong>a moral philosophy that cares about each human should weigh these future beings</strong> heavily and make decisions today aimed at ensuring, and even accelerating, their creation.</p><p>Even in its non-digital form, <em>longtermism</em> is concerned with the lives of future people. A prominent voice, William MacAskill, promotes the idea <a href="https://www.centreforeffectivealtruism.org/longtermism">&#8220;that positively influencing the longterm future is a key moral priority of our time&#8221;</a>. Jeff Bezos similarly justifies his investments in space technology with the &#8220;thousand Einsteins&#8221; humanity would produce if it expanded to a trillion people. Elon Musk promotes his SpaceX company as a contribution to humanity&#8217;s survival as a <a href="https://www.forbes.com/sites/roberthart/2023/12/15/forget-musks-martian-ambition-jeff-bezos-thinks-humans-should-live-in-giant-cylindrical-space-stations/">&#8220;multi-planetary&#8221; civilisation</a>. </p><p><strong>Longtermism stands in the philosophical tradition of Utilitarianism</strong>, which seeks the &#8220;greatest good for the greatest number&#8221;. It does not see a person as intrinsically valuable but as an instrument for creating utility, such as aggregate well-being.</p><p>MacAskill is also one of the founders of <strong>Effective Altruism (EA), a movement that wants to maximise the good in the world.</strong> He founded a <a href="https://80000hours.org/">platform</a> that helps job seekers find a career that maximises their positive impact over their lifetime. On this view, taking a high-paying finance job and donating a large part of the salary to philanthropic causes can be the optimal way to maximise good. 
This logic departs from the common understanding of &#8220;doing good&#8221; as social or pro-bono work: for EA, optimising for income can, from a strictly rational perspective, be the greatest contribution to society. The EA causes are conveniently selected by <a href="https://www.givewell.org/">another EA platform</a> on the basis of evidence-based evaluations that explicitly aim to exclude emotional considerations.</p><h2>&#8220;Existential Risk&#8221; and the Role of EA in the AI Discourse</h2><p>Because future people are potentially many, and thus morally weighty, <strong>EA looked early on at &#8220;existential risks&#8221; that could threaten humanity&#8217;s survival. Artificial general intelligence (AGI) was quickly identified as an existential risk</strong>. EA heavily influenced this discourse, popularising the terms AI safety and AI alignment.</p><p>In its early days, for example, OpenAI was funded by EA donors and was supposed to focus on AI safety research. Elon Musk, too, explained his early funding of OpenAI with concerns about AI safety. The short-lived removal of OpenAI&#8217;s CEO Sam Altman in late 2023 was a power struggle: EA-affiliated board members were concerned that Altman was harming AI safety.</p><p>Other famous EA adherents include Sam Bankman-Fried, a major EA donor before his FTX cryptocurrency exchange collapsed. The &#8220;Zizians&#8221; were an EA-affiliated group concerned with existential AI risks that later became notorious for sect-like dynamics culminating in violent crimes.</p><p><strong>In the bigger picture, debates about AGI serve as a major distraction</strong>. On the one hand, they fuel investor fantasies and inflated promises by U.S. tech firms. 
As with biological Transhumanism, many of the loudest proponents have direct financial interests in the field&#8217;s expansion.</p><p>On the other hand, they shift attention to speculative futures and doomsday scenarios, sidelining discussion of AI&#8217;s present impact on employment, public discourse, education, and society.</p><p>For example, discussions about AGI do not merely warn of existential risk; they also fuel a business vision in which artificial agents replace most employees, allowing executives to make money without dealing with workers.</p><p>In Europe, AI development is still met with a degree of caution and regulatory guardrails, while in the United States the emphasis remains on rapid build-out and scale. This contrast became tangible during an AI training I attended at a prestigious French business school. The invited AI evangelist had little to say when I asked about AI&#8217;s risks at dinner. He acknowledged that recent graduates face increasing difficulty entering the workforce, but for him it was less a societal concern than a business opportunity.</p><h2>Cyberdelics: Techno-optimists with Humanistic Values</h2><p>The search for other, less hyped and less growth-at-all-costs technological futures brought me to that dark hackerspace in Berlin and the conversation with Mars about <em>Cyberdelics</em>. </p><p><strong><a href="https://www.cyberdelic.nexus/">Cyberdelics</a> are immersive experiences that aim to induce psychological states similar to those of psychedelics, such as presence, awe or ego dissolution, but without the substance.</strong> They want to give access to these experiences to people who can&#8217;t or don&#8217;t want to take drugs. The experiences often involve virtual reality glasses with the goal of creating these &#8220;altered states of consciousness&#8221;. 
The hope is that these new, exceptional experiences create lasting &#8220;altered traits&#8221; that develop human capabilities like empathy beyond the specific experience. The Berlin chapter was part of a greater community with origins in Mexico and groups in multiple cities worldwide, connected by highly mobile members. The people I met were developers, artists, musicians and above all idealists.</p><p>Many of them had been involved in other community-building projects. Money only surfaced as a concern when, after an event, they realised there were insufficient funds to cover outstanding costs. The work was sustained by personal effort and dedication. For the hackathon they called on outside participants to spend a weekend creating more prototypes that combine technology, body feedback and art. A requirement was to put the resulting projects under an open license. There was some money from sponsors, and the mobile members apparently had financial resources of their own, but none of the &#8220;created value&#8221; was &#8220;captured&#8221;, as startup lingo calls the extraction of resources from a system for the benefit of external shareholders.</p><h2>Money or Community</h2><p><strong>This rejection of commercialisation is one of the key differences between Cyberdelics and Transhumanism.</strong> Cyberdelics is not extractivist but community-driven, prioritising shared experiences now over protecting individual contributions for profit; where Transhumanism has a few very loud egos, Cyberdelics is carried by its community.</p><p>When asked, the Cyberdelics members distanced themselves strictly from Transhumanism and its cold, ego-driven culture. Instead they promoted community, which is hard to believe when the person telling you this sits encapsulated in their VR goggles.</p><p>Maybe Transhumanist organisations have to be organised that way because they are large and influential, while Cyberdelics remains a small, less profit-driven movement. 
Non-commercial, idealist communities also need organisation, however, and without money there have to be other tools for coordination. Yet there are large-scale projects grounded in similar ethics, such as Wikipedia or open-source software.</p><p>Trust functions as the primary governance mechanism. Members share insights openly, without legal protection, relying on community norms and reputation; this is closer to science or art than to business. Status is awarded for contribution, not capital. That is intrinsically rewarding but creates vulnerability: the fusion of friendship and shared mission can lead to the exclusion of perceived &#8220;outsiders&#8221;, and the intensity of this communal commitment harbours burnout risk.</p><p>There are also similarities between Transhumanism and Cyberdelics. Both are tech-optimistic. Both are at odds with the way things are currently done. Both strongly believe in their own ideas, are driven by active builders rather than passive recipients, use technical language and tools, and are overwhelmingly male.</p><h2>Two Visions for a &#8220;Better Human&#8221;</h2><p>The central difference between Transhumanism and Cyberdelics lies in their underlying assumptions about human value and progress.</p><p><strong>Transhumanism is grounded in a utilitarian framework that subordinates the individual to societal benefit or even to hypothetical future populations.</strong> Human worth is measured by contribution to that utility rather than recognised as intrinsic. This logic is fundamentally anti-democratic: the individual does not possess value in itself and is not regarded as equally entitled to participate in decision-making.</p><p>Applied to human enhancement, this framework risks deepening social stratification. Access to enhancement technologies will be uneven, dividing society into those who can afford biological optimisation and those who cannot. 
In its most extreme form, a person&#8217;s life chances are effectively determined before birth through genetic selection. Inequality is not merely reproduced but amplified, as a small elite gains longer lifespans, enhanced capabilities, and the power to shape both the development and distribution of these technologies.</p><p><strong>Cyberdelics give us an idea of what a world could look like where technology is used not merely to overcome our human limitations but to deepen our humanity. In that logic, we are not just biological machines but embodied beings with capacities that technology can help activate.</strong> And they show different ways of working together to use these technologies.</p><p>This contrast matters at a time when technological capabilities are accelerating and regions with different value systems are racing to develop advanced AI and enhancement technologies. The question is which assumptions will guide technological advancement. What would a democratic and humane technological future look like? Whose values are encoded in the systems we build, and whose interests are prioritised?</p><p>At the same time, we as a society must decide which way we want to go: what is the world we want to live in, and where do we focus our energy?</p><p>It is not about rejecting technology but about first deciding what society we want, and letting that decision guide what we build. <strong>Maybe the goal shouldn&#8217;t be to overcome our current condition but to become fully human.</strong></p><p>This also raises the question of contribution beyond markets. Not all technological work needs to be commercial. 
How can we deploy skills developed in business and engineering in non-commercial contexts, supporting communal infrastructures, shared knowledge or alternative futures that are not driven by extraction or scale alone?</p><div><hr></div><p>The photos that kicked off this research will be part of an upcoming <a href="https://photocentrum-berlin.de/pk/unterdemselbenhimmel/thema/david-schmidt-seele-einer-neuen-maschine/">exhibition</a> in <a href="https://kunstquartier-bethanien.de/">Kunstquartier Bethanien</a> in Berlin. The vernissage is this Friday, January 30, at 7pm. Consider yourself invited. The exhibition will run until March 13, 2026. Some of the artwork is presented below as well.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!osuQ!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4be44f71-91cd-4ad6-bfdd-66bba70e28d4_1600x1066.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!osuQ!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4be44f71-91cd-4ad6-bfdd-66bba70e28d4_1600x1066.jpeg 424w, https://substackcdn.com/image/fetch/$s_!osuQ!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4be44f71-91cd-4ad6-bfdd-66bba70e28d4_1600x1066.jpeg 848w, https://substackcdn.com/image/fetch/$s_!osuQ!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4be44f71-91cd-4ad6-bfdd-66bba70e28d4_1600x1066.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!osuQ!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4be44f71-91cd-4ad6-bfdd-66bba70e28d4_1600x1066.jpeg 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!osuQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4be44f71-91cd-4ad6-bfdd-66bba70e28d4_1600x1066.jpeg" width="1456" height="970" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4be44f71-91cd-4ad6-bfdd-66bba70e28d4_1600x1066.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:970,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!osuQ!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4be44f71-91cd-4ad6-bfdd-66bba70e28d4_1600x1066.jpeg 424w, https://substackcdn.com/image/fetch/$s_!osuQ!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4be44f71-91cd-4ad6-bfdd-66bba70e28d4_1600x1066.jpeg 848w, https://substackcdn.com/image/fetch/$s_!osuQ!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4be44f71-91cd-4ad6-bfdd-66bba70e28d4_1600x1066.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!osuQ!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4be44f71-91cd-4ad6-bfdd-66bba70e28d4_1600x1066.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!Mb5s!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F69f9ccf0-38c5-473e-86ce-c72a7b0a8a14_1600x1127.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!Mb5s!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F69f9ccf0-38c5-473e-86ce-c72a7b0a8a14_1600x1127.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!Mb5s!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F69f9ccf0-38c5-473e-86ce-c72a7b0a8a14_1600x1127.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Mb5s!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F69f9ccf0-38c5-473e-86ce-c72a7b0a8a14_1600x1127.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Mb5s!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F69f9ccf0-38c5-473e-86ce-c72a7b0a8a14_1600x1127.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!Mb5s!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F69f9ccf0-38c5-473e-86ce-c72a7b0a8a14_1600x1127.jpeg" width="1456" height="1026" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/69f9ccf0-38c5-473e-86ce-c72a7b0a8a14_1600x1127.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1026,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!Mb5s!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F69f9ccf0-38c5-473e-86ce-c72a7b0a8a14_1600x1127.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!Mb5s!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F69f9ccf0-38c5-473e-86ce-c72a7b0a8a14_1600x1127.jpeg 848w, https://substackcdn.com/image/fetch/$s_!Mb5s!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F69f9ccf0-38c5-473e-86ce-c72a7b0a8a14_1600x1127.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!Mb5s!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F69f9ccf0-38c5-473e-86ce-c72a7b0a8a14_1600x1127.jpeg 1456w" sizes="100vw" loading="lazy"></picture></div></a></figure></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!qIyq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fd73e42-e89d-4d14-aa44-9c9be72e9858_1009x1600.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!qIyq!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fd73e42-e89d-4d14-aa44-9c9be72e9858_1009x1600.jpeg 424w, https://substackcdn.com/image/fetch/$s_!qIyq!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fd73e42-e89d-4d14-aa44-9c9be72e9858_1009x1600.jpeg 848w, https://substackcdn.com/image/fetch/$s_!qIyq!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fd73e42-e89d-4d14-aa44-9c9be72e9858_1009x1600.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!qIyq!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fd73e42-e89d-4d14-aa44-9c9be72e9858_1009x1600.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!qIyq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fd73e42-e89d-4d14-aa44-9c9be72e9858_1009x1600.jpeg" width="1009" height="1600" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1fd73e42-e89d-4d14-aa44-9c9be72e9858_1009x1600.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1600,&quot;width&quot;:1009,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!qIyq!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fd73e42-e89d-4d14-aa44-9c9be72e9858_1009x1600.jpeg 424w, https://substackcdn.com/image/fetch/$s_!qIyq!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fd73e42-e89d-4d14-aa44-9c9be72e9858_1009x1600.jpeg 848w, https://substackcdn.com/image/fetch/$s_!qIyq!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fd73e42-e89d-4d14-aa44-9c9be72e9858_1009x1600.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!qIyq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1fd73e42-e89d-4d14-aa44-9c9be72e9858_1009x1600.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!3OnO!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb520811-610f-4383-a78d-11f5264fc2b0_1066x1600.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!3OnO!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb520811-610f-4383-a78d-11f5264fc2b0_1066x1600.jpeg 424w, https://substackcdn.com/image/fetch/$s_!3OnO!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb520811-610f-4383-a78d-11f5264fc2b0_1066x1600.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!3OnO!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb520811-610f-4383-a78d-11f5264fc2b0_1066x1600.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!3OnO!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb520811-610f-4383-a78d-11f5264fc2b0_1066x1600.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!3OnO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb520811-610f-4383-a78d-11f5264fc2b0_1066x1600.jpeg" width="1066" height="1600" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/eb520811-610f-4383-a78d-11f5264fc2b0_1066x1600.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1600,&quot;width&quot;:1066,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!3OnO!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb520811-610f-4383-a78d-11f5264fc2b0_1066x1600.jpeg 424w, https://substackcdn.com/image/fetch/$s_!3OnO!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb520811-610f-4383-a78d-11f5264fc2b0_1066x1600.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!3OnO!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb520811-610f-4383-a78d-11f5264fc2b0_1066x1600.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!3OnO!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Feb520811-610f-4383-a78d-11f5264fc2b0_1066x1600.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" 
href="https://substackcdn.com/image/fetch/$s_!BvmS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8906112b-e8f9-4fca-8123-0d2840b30c3f_1066x1600.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!BvmS!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8906112b-e8f9-4fca-8123-0d2840b30c3f_1066x1600.jpeg 424w, https://substackcdn.com/image/fetch/$s_!BvmS!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8906112b-e8f9-4fca-8123-0d2840b30c3f_1066x1600.jpeg 848w, https://substackcdn.com/image/fetch/$s_!BvmS!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8906112b-e8f9-4fca-8123-0d2840b30c3f_1066x1600.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!BvmS!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8906112b-e8f9-4fca-8123-0d2840b30c3f_1066x1600.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!BvmS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8906112b-e8f9-4fca-8123-0d2840b30c3f_1066x1600.jpeg" width="1066" height="1600" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/8906112b-e8f9-4fca-8123-0d2840b30c3f_1066x1600.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1600,&quot;width&quot;:1066,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!BvmS!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8906112b-e8f9-4fca-8123-0d2840b30c3f_1066x1600.jpeg 424w, https://substackcdn.com/image/fetch/$s_!BvmS!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8906112b-e8f9-4fca-8123-0d2840b30c3f_1066x1600.jpeg 848w, https://substackcdn.com/image/fetch/$s_!BvmS!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8906112b-e8f9-4fca-8123-0d2840b30c3f_1066x1600.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!BvmS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F8906112b-e8f9-4fca-8123-0d2840b30c3f_1066x1600.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" 
xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!VbJw!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ad8c180-55de-48dd-a6d1-f403c088c8ef_1600x1066.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!VbJw!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ad8c180-55de-48dd-a6d1-f403c088c8ef_1600x1066.jpeg 424w, https://substackcdn.com/image/fetch/$s_!VbJw!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ad8c180-55de-48dd-a6d1-f403c088c8ef_1600x1066.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!VbJw!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ad8c180-55de-48dd-a6d1-f403c088c8ef_1600x1066.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!VbJw!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ad8c180-55de-48dd-a6d1-f403c088c8ef_1600x1066.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!VbJw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ad8c180-55de-48dd-a6d1-f403c088c8ef_1600x1066.jpeg" width="1456" height="970" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3ad8c180-55de-48dd-a6d1-f403c088c8ef_1600x1066.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:970,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:null,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:null,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:null,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!VbJw!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ad8c180-55de-48dd-a6d1-f403c088c8ef_1600x1066.jpeg 424w, https://substackcdn.com/image/fetch/$s_!VbJw!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ad8c180-55de-48dd-a6d1-f403c088c8ef_1600x1066.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!VbJw!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ad8c180-55de-48dd-a6d1-f403c088c8ef_1600x1066.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!VbJw!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3ad8c180-55de-48dd-a6d1-f403c088c8ef_1600x1066.jpeg 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div>]]></content:encoded></item><item><title><![CDATA[AI Safety Starts with Metaphysics: A Podcast 
Debate]]></title><description><![CDATA[A debate testing whether materialism, dualism, or panpsychism reshape AI risk forecasts and the policies we write &#8212; featuring Katalina Hern&#225;ndez, J&#225;chym Fib&#237;r, and Haihao Liu.]]></description><link>https://www.phiand.ai/p/why-belief-systems-matter-for-ai</link><guid isPermaLink="false">https://www.phiand.ai/p/why-belief-systems-matter-for-ai</guid><dc:creator><![CDATA[Karin Garcia]]></dc:creator><pubDate>Thu, 27 Nov 2025 10:32:21 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!14sA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ac28ac8-c386-40f8-8505-9782a3401dc2_1400x1400.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Phi/AI was born out of the desire to have, and to share, deeper conversations about the meaning of AI and the human consequences of its development and deployment. </p><p>We have gathered a group of talented researchers and individuals from all over the world who share their ideas, thinking, and perspectives using this collective as a platform. That sharing has happened mostly in the form of well-written, thought-provoking articles, which is fully aligned with our core: we are a text-first forum. </p><p>But this leaves the lively discussions to happen somewhere else: some in the comments here, but many more via our own chats. Hence we decided to experiment with a different format, one that brings the discussion straight to you, our readers, in the form of a podcast: the Phi/AI Dialogues. 
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!14sA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ac28ac8-c386-40f8-8505-9782a3401dc2_1400x1400.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!14sA!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ac28ac8-c386-40f8-8505-9782a3401dc2_1400x1400.png 424w, https://substackcdn.com/image/fetch/$s_!14sA!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ac28ac8-c386-40f8-8505-9782a3401dc2_1400x1400.png 848w, https://substackcdn.com/image/fetch/$s_!14sA!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ac28ac8-c386-40f8-8505-9782a3401dc2_1400x1400.png 1272w, https://substackcdn.com/image/fetch/$s_!14sA!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ac28ac8-c386-40f8-8505-9782a3401dc2_1400x1400.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!14sA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ac28ac8-c386-40f8-8505-9782a3401dc2_1400x1400.png" width="1400" height="1400" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4ac28ac8-c386-40f8-8505-9782a3401dc2_1400x1400.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1400,&quot;width&quot;:1400,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:54681,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.phiand.ai/i/180094103?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ac28ac8-c386-40f8-8505-9782a3401dc2_1400x1400.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!14sA!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ac28ac8-c386-40f8-8505-9782a3401dc2_1400x1400.png 424w, https://substackcdn.com/image/fetch/$s_!14sA!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ac28ac8-c386-40f8-8505-9782a3401dc2_1400x1400.png 848w, https://substackcdn.com/image/fetch/$s_!14sA!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ac28ac8-c386-40f8-8505-9782a3401dc2_1400x1400.png 1272w, https://substackcdn.com/image/fetch/$s_!14sA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4ac28ac8-c386-40f8-8505-9782a3401dc2_1400x1400.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>Please do share your reactions. We read every single one, and that is what keeps us going. </p><p>Without further ado, I invite you to listen to &#8220;Alternative perspectives on AI Safety&#8221;, recorded in September<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>, via this <a href="https://open.spotify.com/episode/2jisWcsvxBFD9zxmOBqdxN?si=lfsojwtrQL6N3xjSZtQ7gg">link</a>. 
</p><p>Guests:</p><ul><li><p><strong>Katalina Hern&#225;ndez</strong> &#8212; a legal &amp; AI governance specialist focused on the intersection of artificial intelligence, autonomy, and digital rights.</p></li><li><p><strong>J&#225;chym F&#237;bir</strong> &#8212; a psychedelic researcher and entrepreneur exploring neglected frontiers (such as machine consciousness, sentience, and biological alignment).</p></li><li><p><strong>Haihao Liu</strong> &#8212; with degrees in materials science and mathematics, he has been involved in the AI safety field since 2023, long enough to become a vehement critic, not least because he doesn&#8217;t believe LLMs will lead us to AGI.</p></li></ul><p>The conversation centers on how different metaphysical beliefs shape AI safety thinking and policy prescriptions. The guests contrast materialist/physicalist assumptions (which often predict high existential risk) with alternative views (dualism, panpsychism), then weigh trade&#8209;offs between near&#8209;term harms (privacy, mental health, environmental impact) and long&#8209;term existential risks. 
</p><p>The conversation closes by arguing for multidisciplinary collaboration (law, neuroscience, education, philosophy, and engineering) to improve definitions, governance, and assessment.</p><h1>6 key takeaways</h1><ul><li><p><strong>Worldviews change risk forecasts.</strong> J&#225;chym&#8217;s core point: predictions about AI&#8217;s dangers depend on metaphysical assumptions. If you assume a purely materialist universe, powerful optimizers (= AI) naturally create catastrophic risk; alternative metaphysical views imply different policy responses.</p></li><li><p><strong>Consciousness &amp; AGI remain unsettled.</strong> Panel consensus: current frontier models are <em>not</em> sentient (&#8220;hard no&#8221;), but there are two competing conceptual routes (<em>emergent consciousness from complex physical systems</em> vs <em>consciousness as requiring non-physical or panpsychist elements</em>), and each has different implications for ethics and regulation.</p></li><li><p><strong>Timelines and definitions matter.</strong> Disagreement on AGI timelines (near vs. distant) is partly definitional. We need better operational definitions and benchmarks for &#8220;general&#8221; capabilities before laws or risk models can reliably target AGI/ASI.</p></li><li><p><strong>Regulation must be practical, not only aspirational.</strong> Compute-based thresholds (e.g., in the EU AI Act) are convenient because they&#8217;re measurable, but they&#8217;re imperfect and reactive. Regulators should talk to cutting-edge researchers and anticipate alternative architectures and substrates that would break compute-based rules.</p></li><li><p><strong>Don&#8217;t neglect present, concrete harms.</strong> Social harms (mental-health impacts, misinformation, sycophancy), distributional harms, and environmental costs (energy/water/data centers) are real and under-addressed. Even if existential risk is important, real people are suffering now, and that needs to be addressed. 
</p></li><li><p><strong>Multidisciplinary collaboration is essential.</strong> The panel calls for lawyers, educators, philosophers, neuroscientists, and technologists to work together: e.g., borrow assessment ideas from education, investigate alternative compute substrates (analog chips) for efficiency, and create better capability evaluations.</p></li></ul><p></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>It pains me to share that the conversation took place back in September; it is my responsibility that we didn&#8217;t publish this earlier. We are still here, we learned what we needed to learn, and next time we will be better. 
</p></div></div>]]></content:encoded></item><item><title><![CDATA[The Other 99% of Being Human in the Loop]]></title><description><![CDATA[Lessons on meaningful human oversight from the people who actually do it]]></description><link>https://www.phiand.ai/p/the-other-99-of-being-human-in-the</link><guid isPermaLink="false">https://www.phiand.ai/p/the-other-99-of-being-human-in-the</guid><dc:creator><![CDATA[Katalina Hernández]]></dc:creator><pubDate>Wed, 08 Oct 2025 07:01:53 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!u5GV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04180fb3-9933-48e3-ae97-076f0068eb6c_1312x928.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Ever since the European AI Act&#8217;s initial set of requirements<a href="https://digital-strategy.ec.europa.eu/en/policies/regulatory-framework-ai#:~:text=The%20AI%20Act%20entered%20into,application%20from%202%20February%202025"> became effective</a> earlier this year, I&#8217;ve become more involved in discussions around human understanding of automated decisions.</p><p>And something I think about a lot more now is what &#8220;<a href="https://www.edps.europa.eu/data-protection/our-work/publications/techdispatch/2025-09-23-techdispatch-22025-human-oversight-automated-making_en">meaningful human oversight</a>&#8221; actually means. What does it look like? What factors define good oversight versus checkbox compliance?</p><p>You&#8217;ve seen me<a href="https://katalinahernandez.substack.com/p/the-limits-of-human-oversight-what"> criticise the wording of Article 14</a> in light of automation bias and the AI alignment<a href="https://katalinahernandez.substack.com/p/why-should-ai-governance-professionals-094"> concept of scalable oversight</a>. This sort of critical analysis may give you the impression that I&#8217;m pessimistic about &#8220;Human in the Loop&#8221; (HITL) frameworks. 
And I am the right amount of doubtful, yes.</p><p>But today I want to tell you about the situations that have made me think most deeply and productively about what it means to be <strong>the</strong> Human in the Loop.</p><p>It wasn&#8217;t in the office or while conducting AI governance research&#8230;</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!u5GV!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04180fb3-9933-48e3-ae97-076f0068eb6c_1312x928.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!u5GV!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04180fb3-9933-48e3-ae97-076f0068eb6c_1312x928.png 424w, https://substackcdn.com/image/fetch/$s_!u5GV!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04180fb3-9933-48e3-ae97-076f0068eb6c_1312x928.png 848w, https://substackcdn.com/image/fetch/$s_!u5GV!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04180fb3-9933-48e3-ae97-076f0068eb6c_1312x928.png 1272w, https://substackcdn.com/image/fetch/$s_!u5GV!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04180fb3-9933-48e3-ae97-076f0068eb6c_1312x928.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!u5GV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04180fb3-9933-48e3-ae97-076f0068eb6c_1312x928.png" width="1312" height="928" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/04180fb3-9933-48e3-ae97-076f0068eb6c_1312x928.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:928,&quot;width&quot;:1312,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1214976,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.phiand.ai/i/174701582?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04180fb3-9933-48e3-ae97-076f0068eb6c_1312x928.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!u5GV!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04180fb3-9933-48e3-ae97-076f0068eb6c_1312x928.png 424w, https://substackcdn.com/image/fetch/$s_!u5GV!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04180fb3-9933-48e3-ae97-076f0068eb6c_1312x928.png 848w, https://substackcdn.com/image/fetch/$s_!u5GV!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04180fb3-9933-48e3-ae97-076f0068eb6c_1312x928.png 1272w, https://substackcdn.com/image/fetch/$s_!u5GV!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F04180fb3-9933-48e3-ae97-076f0068eb6c_1312x928.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p>I love travelling. I&#8217;m that annoying person who arrives early at the airport just to watch other planes take off. I love the feeling of looking out the window during takeoff, and I actually enjoy long-haul flights!</p><p>But even for travel lovers, going through airport security is a stressful part of the trip.</p><p>I&#8217;ve been fortunate to travel several times this year, and two particular experiences at airport security stayed with me for somewhat unusual reasons.</p><h1><strong>Early memories of &#8220;dehumanisation&#8221;: A 5-year-old&#8217;s POV</strong></h1><p>Have you ever had your bags inspected at TSA? I remember it happening to my mother when I was five years old.</p><p>She was an exhausted mother, looking after her two children on less than two hours of sleep. 
But to airport security, she was still a Colombian woman on a long-haul flight.</p><p>Her bags were placed on the floor at TSA (which wasn&#8217;t very clean), and our belongings were pulled apart just to find what had triggered the alarm.</p><p>As my mum had insisted all along, it was the baby formula.</p><p>At the end of it all, she had to kneel down and patiently put all our things back while my brother and I watched.</p><h3><strong>Yes: Sometimes, it&#8217;s just powder.</strong></h3><p>A few months ago, I was travelling via London Gatwick when something in my handbag triggered the alarm.</p><p>I didn&#8217;t know what it could have been. Instinctively, I grew uneasy about potential delays. But I also suddenly remembered the dehumanising feeling of watching my mum kneel in a packed airport, putting our things back and muttering about good people being treated as criminals.</p><p>My stomach tightened at the thought, and I wondered why it had come to mind.</p><p>I went to the secondary screening area. The officer didn&#8217;t need to open the bag: it had been scanned by a CT scanner that detects the chemical composition of a bag&#8217;s contents.</p><p>The officer smiled at me and exclaimed: &#8220;<em>Wow, seems some people on this team have never seen makeup in their life! Sorry about this, madam, have a good trip!</em>&#8221;</p><p>Somehow, I felt a strange weight lift off my shoulders. Twenty-four years after my mother&#8217;s incident, I reminded my adult brain that such technology now exists and bags are not opened as often any more &#8212; sparing people the feeling of being treated like criminals for an honest, human mistake.</p><p>I thanked my luck and felt truly blessed for the opportunity to reflect on this. But what stuck with me was how the security officer handled the incident: with a smile, and with the bearing of someone who carries the task of humanising what might otherwise feel like an invasion of privacy.</p><p>And an idea for this post started to form.</p><p>&#8220;<em>This. 
This is why</em>&#8221;&#8212; I thought, walking towards my gate.</p><h1><strong>Early memories of &#8220;humanisation&#8221;: A 3-year-old&#8217;s POV</strong></h1><p>On a more recent trip, I witnessed something interesting again, this time in the security area at Manchester airport. After placing my hand luggage in the trays, I was queued for the full body scanner.</p><p>In front of me were a single mother and her child, who must have been about 3 years old. I realised I&#8217;d never seen a child go through this process, and naively thought for a moment they&#8217;d allow the child to remain in her mother&#8217;s arms.</p><p>Apparently not.</p><p>The security officer looked extremely apologetic, probably moved by the scene herself. The mother let go of the child and instructed her to stand at the centre of the scanner where the foot drawings were: arms up, feet apart, imitating the silhouette in the illustration &#8212; and as required by aviation security regulations.</p><blockquote><p>For a couple of minutes, a three-year-old had to adopt <strong>the same pose criminals are instructed</strong> to adopt while being searched, because this is what the rule dictated.</p></blockquote><p>I felt a weird lump in my throat, and I suspect the people around me did too.</p><p>The security officer clapped and congratulated the child for doing such a good job, and then grabbed her little hand while waiting for the mother to go through the scanner too. Relieved, the mother received her child back in her arms once they were both cleared, and carried her to the next checkpoint. 
The child looked so happy about the reaction she had received from the officer that she kept imitating this pose and laughing for a while.</p><p>I kept picturing the sobriety and ceremoniousness in the officer&#8217;s facial expression, and I remembered what had happened to me at Gatwick.</p><blockquote><p>I was moved by the thought of how these professionals (at least the ones who truly care) carry the task of humanising experiences that may otherwise feel dehumanising.</p></blockquote><p>I felt a sense of hope and gratitude for these experiences.</p><p>Later that day, on my way to meet my family, I decided that I could make peace with the wording in<a href="https://artificialintelligenceact.eu/article/14/"> Article 14</a>&#8230; but only as long as we prioritise preserving <em>this</em> in our critical infrastructures.</p><h1><strong>Beautiful, but: What do you mean by &#8220;Human in the Loop&#8221;?</strong></h1><p>Okay, this was more anecdotal writing than usual. But I wanted to ground my thoughts in these experiences before getting legal-technical.</p><p>So, in AI regulatory compliance, what do we usually mean by <strong><a href="https://www.ibm.com/think/topics/human-in-the-loop">Human in the Loop</a></strong>?</p><p>At its core, it&#8217;s the premise that a human remains actively involved in an AI system&#8217;s decision-making process: able to intervene, override, or at least understand what&#8217;s happening when machines make decisions that affect our lives. The European<a href="https://artificialintelligenceact.eu/ai-act-explorer/"> AI Act</a> is quite specific about this.</p><p><a href="https://artificialintelligenceact.eu/article/14/#:~:text=Summary,arise%20from%20using%20these%20systems.">Article 14</a> mandates that high-risk AI systems must be designed for &#8220;effective oversight&#8221; by natural persons during use. 
It requires that humans can fully understand the system&#8217;s capabilities and limitations, properly interpret its outputs, and decide not to use the system or disregard, override, or reverse its outputs.</p><p><a href="https://artificialintelligenceact.eu/article/13/">Article 13</a> mandates transparency obligations, requiring that these systems come with instructions clear enough that people can actually exercise this meaningful oversight.</p><h2><strong>Scalability and Automation bias: How &#8220;oversight&#8221; struggles to actually oversee.</strong></h2><p>What the AI Act envisions as &#8220;meaningful oversight&#8221; might be fundamentally at odds with how oversight actually scales. After all, how do you oversee something that processes millions of data points in ways your brain cannot replicate? How can humans supervise systems that increasingly operate beyond human-comprehensible complexity?<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>Arguably, most AI systems deployed in critical infrastructures are still comprehensible enough that the AI Act&#8217;s mandates are not unreasonable or technically unfeasible (yet).</p><p>But a big challenge for effective oversight of current systems is our terrible habit of over-relying on automated systems even when we&#8217;re supposedly supervising them: what we call &#8220;<a href="https://www.forbes.com/sites/brycehoffman/2024/03/10/automation-bias-what-it-is-and-how-to-overcome-it/">automation bias</a>&#8221;.</p><p>Have you ever accepted your GPS&#8217; instructions, even though you suspected it was making a mistake? When a system is right most of the time, our brains naturally start acting on its decisions before our internal alarm starts ringing. 
That is why, sometimes, you only course-correct after you have already taken the wrong turn.</p><p>Under the AI Act, the security scanner at Gatwick must allow the officer to override its algorithm, understand why it flagged my cosmetics, and choose to dismiss the alert. The officer needs sufficient training to know when the system might fail, clear information about what it&#8217;s detecting, and the actual authority to make the final call.</p><p>In banking, when an AI flags a transaction as suspicious, the compliance officer reviewing it must be able to understand the flag&#8217;s basis, assess its validity against their own judgment, and override it if they disagree.</p><p>What we perhaps don&#8217;t appreciate enough is the extent to which the compliance officer is <strong>battling their own cognitive tendency</strong> to just trust the machine.</p><h2><strong>Cognitive Calibration</strong></h2><p>One of the best articles I&#8217;ve seen lately on this issue is &#8220;<a href="https://www.ethos-ai.org/p/cognitive-calibration">Cognitive Calibration</a>&#8221; by James Kavanagh and Dr. Alberto Chierici.</p><p>Coincidentally, James grounds his piece in a real-life anecdote about an airplane crash (very much in line with my airport musings!):<a href="https://en.wikipedia.org/wiki/Air_France_Flight_447"> Air France Flight 447, which crashed into the Atlantic Ocean in 2009, killing all 228 people aboard.</a></p><p>The tragedy wasn&#8217;t caused by an ordinary mechanical failure. It happened because experienced pilots had become so accustomed to autopilot that they literally forgot how to execute basic stall recovery procedures.</p><p>When ice crystals blocked the airspeed sensors and the autopilot disconnected at 37,000 feet, the crew had four minutes to perform a maneuver any experienced pilot <strong>can </strong>do: nose down, power up. 
Instead, they pulled back on the stick for the entire descent, unable to recognise or recover from the stall despite continuous alarms.</p><p>Years of automation had atrophied their manual flying skills.</p><p>Reading this actually made me jump off the sofa a few times. As the authors put it:</p><blockquote><p>Air France 447 reveals a disturbing truth: <strong>as our systems become more automated and capable, humans become increasingly vulnerable at the moment those systems fail. Our cognitive abilities are not calibrated to reliably understand and act in those moments.</strong></p></blockquote><p>If AI handles 99.9% of decisions correctly, humans gradually lose the ability to intervene effectively in that critical 0.1% when systems fail. And, to make matters worse, it turns out that increasing explainability leads to even more automation bias, because more plausible-sounding explanations make it more difficult for the human to detect when they&#8217;re not true.</p><p>And, if automation continues to improve (and we overcome scalability concerns), that 0.1% may turn into more of a 0.0001%&#8230; Can this be helped at all?</p><p>Maybe, if standards for <em>meaningful human oversight</em> actively train the Human in the Loop for &#8220;cognitive calibration&#8221;: the ability to maintain appropriate skepticism toward AI systems, knowing when to trust and when to challenge their outputs.</p><p>James and Dr. Alberto frame it as &#8220;building the <strong>muscle memory</strong> that keeps judgment calibrated and intact precisely when the system needs a human&#8221;<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>. They also propose a practical framework called &#8220;the <strong>CATCH</strong> protocol&#8221;, which you can read<a href="https://www.ethos-ai.org/p/cognitive-calibration"> here</a>. 
I really encourage <em>Stress Testing Reality </em>readers to engage with their research!</p><h1><strong>So, What Does This Have to Do with the Previous Rant?</strong></h1><p>Yes, we need humans who can correct AI&#8217;s mistakes before they affect people, and who make it possible to provide meaningful<a href="https://gdpr-info.eu/art-14-gdpr/"> explanations to natural persons</a> as to how these decisions are made.</p><p>But long hours at the airport had me thinking: what will actually happen when AI gets so good that mistakes become rare? When, most of the time, the Human in the Loop is&#8230; just there?</p><p>I picture that 3-year-old being scanned with her hands up. Should a false alarm go off, I can only imagine the helplessness a mother must feel when she doesn&#8217;t understand what&#8217;s happening and cannot reassure her toddler. But even if nothing goes wrong, letting a small child out of her arms to be independently scanned (cleared of the &#8220;maybe there&#8217;s something illegal here&#8221; presumption) is already uncomfortable.</p><p>It would feel even more outlandish without the empathetic presence of another person, reassuring and giving dignity back, while this intrusive moment passes.</p><p>As we solve the scalability issues of automation and reduce errors, the human in the loop won&#8217;t spend most of their time catching mistakes. They may well become the only &#8220;humanisers&#8221; of experiences that would otherwise feel soulless. 
And maybe that&#8217;s the point.</p><p>Customer service AI agents or chatbots fail for the same reasons human customer service fails: poor rapport, no empathy, and the feeling of a robotic interaction.</p><blockquote><p>The human in the loop won&#8217;t just have to combat automation bias; they&#8217;ll also have to resist becoming just another robotic gear in the decision chain.</p></blockquote><p>Right now, oversight frameworks focus on one trait that the HITL must have: the necessary technical expertise. But isn&#8217;t the competence of <strong>preserving humanity</strong> in automated processes also meaningful human oversight?</p><p>We know that the critical part of being the Human in the Loop will always be contingent on our capacity to remain alert and in control, cognitively calibrated.</p><blockquote><p>But, while nothing goes wrong, while oversight turns tedious because there are no mistakes to correct, the human in the loop must, simply, humanise.</p></blockquote><p>If I ever have children and I have to let them out of my arms to be scanned at airport security, an LLM output saying everything&#8217;s fine won&#8217;t remove the knot from my throat.</p><p>It will be the kind smile of the human in the loop who isn&#8217;t just checking for glitches, but knowingly looking into my eyes, thinking: &#8220;<em>it&#8217;s okay, this feels weird, and it won&#8217;t be too long now</em>&#8221;.</p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>This problem is commonly known as &#8220;<a href="https://www.edps.europa.eu/data-protection/technology-monitoring/techsonar/scalable-oversight_en">scalable oversight</a>&#8221; in AI Safety: the challenge of ensuring humans can provide accurate feedback on AI outputs even when those outputs involve tasks beyond human expertise or comprehension.<a 
href="https://ai-safety-atlas.com/chapters/08"> Scalable oversight research</a> focuses on developing methods (such as task decomposition, debate between AI systems, or recursive reward modeling) that allow humans to maintain meaningful supervisory capacity even as AI capabilities exceed human performance in specific domains. </p><p>While this article focuses mostly on current state-of-the-art systems deployed in high-risk settings rather than on general-purpose AI, I believe that this concept is still illustrative of the limitations of human oversight. </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>I&#8217;ve never liked the saying &#8220;<em>it&#8217;s like riding a bike</em>&#8221;. Surely, if the steering and balance were automated and you had to be alert for that 0.1% moment when it fails, I&#8217;d expect that most instances would result in an accident due to slow reflexes and reaction time.</p></div></div>]]></content:encoded></item><item><title><![CDATA[AI: Can We Let Go of Thought?]]></title><description><![CDATA[What if AI could carry the burden of memory and repetition, leaving us the space to play, to imagine, to live?]]></description><link>https://www.phiand.ai/p/ai-can-we-let-go-of-thought</link><guid isPermaLink="false">https://www.phiand.ai/p/ai-can-we-let-go-of-thought</guid><dc:creator><![CDATA[Sebastian Osorno]]></dc:creator><pubDate>Mon, 06 Oct 2025 13:03:54 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!svhA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b3a195d-bb73-4ee2-b17c-f004d917dca3_1312x928.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Let&#8217;s play with the idea that outsourcing our thinking might help us free ourselves from thought. 
At first glance, it makes little sense. We often define the &#8220;self&#8221; through thought, and we describe our species as the thinking creature among animals, and even among other kingdoms. If, like me, you take pleasure in thinking, this may be unsettling. We&#8217;ve seen a recent and welcome call to revive critical thinking in our engagement with AI. Decades ago, a philosopher I deeply admire suggested that technology may already have replaced our capacity to think in a logical, mathematical sense, and he said this repeatedly across the 1960s and 1970s. The philosopher I have in mind is Jiddu Krishnamurti. From a spiritual vantage point, he often urged us to ask, from the depths of the mind, who we really are, noting that computers can surpass our cognitive, mathematical, and mnemonic skills. Inspired by his talks and writings, I&#8217;ll sketch an optimistic idea: outsourcing thought to AI or technology so that astonishing qualities of being can blossom in us<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>. If you&#8217;ve watched 2001: A Space Odyssey (1968) by Stanley Kubrick, this shouldn&#8217;t surprise you; we&#8217;ve been probing the nature of our relationship with intelligent machines since at least then. 
Why do we treat our present moment as uniquely unprecedented?</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!svhA!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b3a195d-bb73-4ee2-b17c-f004d917dca3_1312x928.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!svhA!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b3a195d-bb73-4ee2-b17c-f004d917dca3_1312x928.png 424w, https://substackcdn.com/image/fetch/$s_!svhA!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b3a195d-bb73-4ee2-b17c-f004d917dca3_1312x928.png 848w, https://substackcdn.com/image/fetch/$s_!svhA!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b3a195d-bb73-4ee2-b17c-f004d917dca3_1312x928.png 1272w, https://substackcdn.com/image/fetch/$s_!svhA!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b3a195d-bb73-4ee2-b17c-f004d917dca3_1312x928.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!svhA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b3a195d-bb73-4ee2-b17c-f004d917dca3_1312x928.png" width="1312" height="928" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/9b3a195d-bb73-4ee2-b17c-f004d917dca3_1312x928.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:928,&quot;width&quot;:1312,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1857222,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.phiand.ai/i/175154868?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b3a195d-bb73-4ee2-b17c-f004d917dca3_1312x928.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!svhA!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b3a195d-bb73-4ee2-b17c-f004d917dca3_1312x928.png 424w, https://substackcdn.com/image/fetch/$s_!svhA!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b3a195d-bb73-4ee2-b17c-f004d917dca3_1312x928.png 848w, https://substackcdn.com/image/fetch/$s_!svhA!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b3a195d-bb73-4ee2-b17c-f004d917dca3_1312x928.png 1272w, https://substackcdn.com/image/fetch/$s_!svhA!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F9b3a195d-bb73-4ee2-b17c-f004d917dca3_1312x928.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p>I start by drawing a line between what I call <em>living-thinking</em> and <em>death-thinking</em>, taking cues from Krishnamurti. I frame this difference with Huizinga&#8217;s idea of learning as play, and with Wittgenstein&#8217;s<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a> notion of language-games from his later period. In this essay, I present AI as a real opportunity for us to delegate <em>death-thinking</em> so we can devote ourselves to <em>living-thinking</em>, a more joyful and playful creative movement. Knowledge may need to step down from its pedestal, so that AI can be seen as an individual, psychological, and spiritual opportunity. 
Aware that intellectual games are flexible but life is stubborn, I&#8217;ll use concrete examples showing how, in non-idealized ways, AI frees me from certain kinds of thoughts.</p><p>We tend to define thinking in purely cognitive terms and forget its ludic nature, learning through experimentation. Our obsession with knowledge and memory feels like a refusal to let go of what is dead; it echoes our primitive relationship with death.</p><p><em>I&#8217;ll often step into psychological and sometimes spiritual terrain. This remains an essay; I&#8217;m not instructing anyone on how to engage with AI. I hope to offer a perspective that helps readers build their own ethical framework in resonance with their values.</em></p><h2>Why might we need to free ourselves from thought?</h2><p>We are both incredibly creative beings capable of walking the unknown, and trained and conditioned creatures taught to repeat and conform. We need to step out of our conditioning, from the known, so that we can experience something beyond, something new, something alive. <em>Death-</em> and <em>living-thinking</em> cohabit us. By evolution and fear of exclusion, we drift toward <em>death-thinking</em> as we age, god-fearing and driven by belonging.</p><h3>What is death-thinking?</h3><p>We&#8217;ve learned, individually and collectively, that inherited patterns, our defining narratives, our silent and spoken ideas, have enabled <strong>survival</strong>. They&#8217;ve also confined us. Thought contours society for survival, and society returns the favor: a functional, ancient cycle. From a pragmatic point of view, there are so many forces molding the &#8220;self&#8221; that it is hard to sustain the belief in &#8220;free will&#8221;. From <a href="https://www.phiand.ai/p/ai-human-nature-unfolding">evolutionary forces</a> to the psychological unconscious and social pressures, our decisions may be less singular than our Judeo-Christian lens suggests. We tend to feel that decisions and responsibilities are ours, but we often forget to ask who is asking the question, who is the one deciding, and to inquire deeply enough to see the illusion dissolve.</p><p>We may need to free ourselves from thought to better recognize who we are. We are movement, while knowledge is static. From Krishnamurti&#8217;s view, change cannot be grounded in ideas or collective pressure, since consensus rests on knowledge, and knowledge resists the unknown. Change begins with individual attention and a great amount of energy. Facing the unknown, which is everything before us, requires awareness of the &#8220;self&#8221;, which is a distillate of what we know about ourselves, braided with the conscious and unconscious history of humankind, and the information of our species living in our DNA. 
Knowledge is necessary for survival, yet we have layered it exponentially, perhaps as a playful flaw, paradoxically blinding ourselves to the unknown. Seeing, living, meditating may mean observing <strong>what is not</strong>, both internally and externally, if there is such a difference.</p><p>Memory keeps us alive: how to find resources, how to return home. Yet, through complex relations, language, technology, and cultural artifacts, we&#8217;ve used our boundless capacity to limit ourselves in co-existence. We are social creatures. Belonging was, and feels, vital. We adapt to fit the mold, even when it is rotten. Rooted in fear, collective memory organizes around tribalism and the survival impulse of belonging, manifesting in nations, nationalisms, and symbolic group identities, which produce separation and war.</p><p>I argue that the immense training data of LLMs and their statistical algorithms are not different from what we&#8217;ve long used to operate: knowledge. Psychologically, knowledge has served fear, defining ourselves by separation to secure belonging. We sublimated this into nations and group identities with socio-economic, geographic, religious, or racial labels. Machines differ not only in speed and memory; we may need memory to remember who we are, but machines don&#8217;t care who they are; machines do not need a why for memory; it is the very reason they exist.</p><p>Everything we store in the &#8220;self&#8221; to belong to a group, country, class, race, or title, everything we use to identify ourselves by separation, is what I call <em>death-thinking</em>: a rebuilding with dead parts of what was, whose heaviness limits our capacity to see what is.</p><p>We can outsource a huge portion of this operation to machines, to AI. <em>Death-thinking</em> is deeply rooted in language; it is necessary, and now we have large language models (LLMs). 
Naming, labeling, and wording enable evocation, but mediate perception<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>.</p><h3><strong>What is</strong><em><strong> living-thinking</strong></em><strong>?</strong></h3><p>However, there is a quality of intelligence beyond programming or prediction that LLMs do not achieve. I call it <em>living-thinking</em>. It arises from our experimental, ludic way of learning and observing, evoked memorably in the ape sequences of Kubrick&#8217;s 1968 film. Huizinga argued for the priority of play nearly a century ago, reinforcing a post-humanist view:</p><blockquote><p> <em>Play is older than culture, for culture, however inadequately defined, always presupposes human society, and animals have not waited for man to teach them their playing</em><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a><em>.</em></p></blockquote><p><em>Living-thinking</em> requires chaos, mistakes, friction, and a step outside the known. It is fueled by our animal nature and instinct. We often create most freely as children, in play, when we don&#8217;t &#8220;know&#8221; as much. The price is disobedience, ignorance, even conflict and violence, refusing what consensus has accepted, whether grounded in science, tradition, or memory. This can trigger the fear of rejection and real exclusion. We are terrified by ostracism. Though <em>death</em>- and <em>living-thinking</em> intersect, they&#8217;re often incompatible. We are rigorously conditioned to trust thinking and memory as a safe place; indeed, they seem safe, but they also bind us and poison us.</p><p>Here lies a paradox in our urge to belong, our quest for connection. 
When seeking connection, we face a choice: (a) recognize our singularity and serve the group from that place, risking rejection (<em>living-thinking</em>), or (b) recognize the group&#8217;s mold and do whatever it takes to fit, killing authenticity (<em>death-thinking</em>). This choice is often situational and a matter of degree, not purely binary; it&#8217;s a living and perpetual choice.</p><p><em>Living-thinking</em> is not a safe place. It locates meaning in relationships, in how we order words, our singular way of infusing spirit. It precedes language; a baby is born into <em>living-thinking</em> and is trained into <em>death-thinking</em>. Our injection of being through syntax, words, context, body language, and imperfections can resonate with others. We play language games<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> to test the connection by evoking images, experiences, and emotions. LLMs predict; they don&#8217;t intend or experience. That&#8217;s why we often sense AI-generated content, not by counting em dashes, but by the absence of felt connection.</p><p>To summarize, <em>death-thinking</em> is identification with the group, fitting the mold through language and its symbols. It feels safe, and it is culture and society. Beyond it, and limitless, <em>living-thinking</em> rises from our ludic impulse to test, to seek authentic connection, to face the unknown with a child&#8217;s bravery, accepting ignorance, and risking everything. Precisely because AI can shoulder so much memory, synthesis, and reproduction, we gain a chance to keep more of our energy in <em>living-thinking</em> while delegating <em>death-thinking</em> to machines.</p><h2>Why is AI an opportunity to set ourselves free from thought?</h2><p>Two points make AI a tool for freedom. 
First, for the first time, we can play with results, scenarios, and prototypes through language, programming, mathematics, and symbols at unprecedented speed, learning or abandoning ideas via fast experimentation and simulation. Second, in this era of over-production of information, AI offers a way to lean into <em>living-thinking</em> while we outsource much of <em>death-thinking</em> to LLMs and other AI models.</p><p><em>The three examples below illustrate how AI has taken on some of the heavy burden of handling death-thinking for me</em>:</p><p>My first example comes from my own experience in academia. Seventeen years ago, when I wrote my undergraduate thesis in History (2008), three tools extended my reach while cutting research time: (1) Excel and Access to structure a database from unstructured primary sources, (2) Word to write, no handwriting or typewritten copyedits, and (3) JSTOR, which kept me current with global scholarship. Still, most of my time was spent with notebooks and physical books; campus terminals were sometimes the only way to access JSTOR. I&#8217;m nostalgic about that tactile work, like film photography&#8217;s analog charm, but nostalgia isn&#8217;t function.</p><p>The non-creative load, what I call <em>death-thinking</em>, was heavy. Academic systems often reward demonstration of mastery over genuine novelty. If I&#8217;d had today&#8217;s AI, I could have redirected time from demonstrating command of the literature toward the unknown, communication, boundary pushing, and new approaches. AI could have handled much of the knowledge marshalling, while I focused on <em>living-thinking</em> as a trained historian, perhaps reaching a wider audience and freeing my imagination<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a>.</p><p>I am not arguing that we should get rid of <em>death-thinking</em>. 
It&#8217;s crucial for consensus, especially in academia and science. I am arguing that we can engage AI to take over more of our <em>death-thinking</em> so that we can spend more time playing and less time carrying the load that machines can bear. Going back to my undergraduate thesis: were I to write it today, my approach would be different. I would locate candidate passages by searching authoritative editions, verify page numbers and edition details, and store citations in a bibliographic manager for reproducibility. I would also cross-check with digital archives to ensure that quotations are not hallucinated or mistranscribed.</p><p>My second example comes from my experience as an entrepreneur. Seven years ago, I learned about Web3, digital assets, and crypto market cycles. High-quality asset assessments (&#8220;Do Your Own Research,&#8221; or DYOR) are laborious, requiring technical audits of smart contracts, blockchain flows, tokenomics, social signals, use cases, and more. Done with rigor and patience, and with external experts, the work takes weeks. Worth it, but costly.</p><p>A colleague and mentor shared a strong prompt for running DYOR in ChatGPT&#8217;s deep research mode or in Perplexity. Judgment remains mine, as does the triage of hallucinations. But that workflow saves time, letting me focus on risk management and on evaluating assets I&#8217;d otherwise miss in a fast market.</p><p>In short, AI helps me externalize the fueling of decisions (gathering, scraping, computing) so I can spend more energy on <em>living-thinking</em>, responsiveness to context, and emerging opportunities and risks.</p><p>The third example comes from my writing practice. This is perhaps the clearest case of outsourcing <em>death-thinking</em> to AI. Approaching prose from <em>living-thinking</em> is often more productive and joyful. Knowledge is required, but much of its handling can be externalized. For this post, Huizinga&#8217;s thesis on play adds force.
I recalled the core idea, then asked an AI assistant for relevant quotations and bibliographic details. The choice to include Huizinga, where to place him, and which quote to use remained mine. I didn&#8217;t need to hunt for the book or re-read chapters to extract passages; AI handled that.</p><p>A second process worth outsourcing is copyediting. I keep drafts and prior essays in an AI workspace, and I spin up specialized threads to copyedit in my voice. English isn&#8217;t my first language, so I seek balance, consensus, and clarity without sacrificing authenticity. AI helps me tune for an Anglophone audience without carrying all the <em>death-thinking</em> load myself. I verify quotations against primary editions, avoid unsourced block quotes, and reject paraphrases that do not fit my purpose or resonate with my voice.</p><p>My workflow: (1) draft the entire post in English, a <em>living-thinking</em> practice in my second language, (2) enrich with quotes and references that emerged while writing, (3) run multiple self-edits, mindful of second-language limits, (4) do a full copyedit in my AI project, (5) selectively accept edits, and (6) send to my human editor, and iterate. If my voice is authentic and the message clear, I&#8217;m satisfied. AI saves time and defends space for free, ludic writing.</p><p>Based on this thread of thought, I have a brief observation on the potential impact on organizations, considering the reports of high failure rates in enterprise GenAI<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a>: my hypothesis is that there is widespread confusion in identifying which processes are dead but functional (standardizable and automatable) and which depend on <em>living-thinking</em> (creative, relational, and situational). What is dead can be automated; the unknown cannot.
Hence, the push is to standardize and make processes predictable, as clearly explained by <a href="https://www.phiand.ai/p/is-ai-standardizing-us-humans">Olga Tr&#246;gger in her recent post in Phi/AI</a>.</p><p>Overall, these examples show that current AI capabilities cannot replace <em>living-thinking</em>, the ability that precedes language and arises from singularities and team synergies that resist capture by statistical models. <em>Living-thinking</em> has practical applicability but no fixed methodology; it requires ownership of spirit, not just calculation. I hope to develop this beyond my current observations.</p><p>If this holds water, the focus of AI implementation is not technology or frameworks but people and their living potential. Can we teach people to cultivate and amplify <em>living-thinking</em> skills? Better yet, can we unlearn operating from <em>death-thinking</em> by default?</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.phiand.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.phiand.ai/subscribe?"><span>Subscribe now</span></a></p><h2>Some thoughts before closing</h2><p>I believe we can outsource <em>death-thinking</em> to AI, so we can reclaim our nature and step into <em>living-thinking</em> more fully. Can machines reduce the energy we spend in survival mode so we can create? My answer, from experience, is <strong>yes</strong>, but it&#8217;s our choice, and it requires energy and attention. I am aware that even though this is a great opportunity, it is not necessarily the path we&#8217;ll choose. We must not yield ourselves to LLMs or surrender our voices to their outputs.
Current AI is good at carrying out the processes characteristic of <em>death-thinking</em>, which are a huge burden on our shoulders and spirits: memory, knowledge, processing and referencing information, using better grammar and syntax, frameworks, methodologies, and all the rest of it.</p><p>We find ourselves in others, through resonance, through suffering, and through experiment. We need to play with the tools at hand, including AI and LLMs. Applying our ludic nature (personally, academically, scientifically, organizationally) is of immense benefit to us, but it&#8217;s not an easy task, since we fight our conditioning. The movement is a deeply spiritual and psychological one: letting go of what we were trained to do, of what we fear from experience, and stepping into the out-of-the-box capacity we already have: <em>living-thinking</em>.</p><p>Is the fear of ostracism, and our urge to belong by connecting through consensus, something we can delegate fully to machines or AI? Not yet; we are emotionally and spiritually slow creatures, and we won&#8217;t move as fast as we can envision or think. My intuition tells me that enterprises, startups, venture capital, and corporations, in allocating human beings as human resources or assets, treat them as non-living, functional creatures serving predictable, often repetitive processes; these hallmarks of the Industrial Revolution make replacing humans a corporate and capitalist dream. What is often forgotten in these environments is how important <em>living-thinking</em> is to keeping alive all the dead, repetitive processes, which also interact with living matter and present circumstances.
We infuse <em>living-thinking</em> into the rest of it; organizations will never be truly singular, as some have predicted (&#8216;solo-startups&#8217;), because at the very least they need to serve other living beings, which we often call the market. The irony of capitalism is its dream of relating to dead matter with predictable patterns, a huge consumer mass that behaves predictably so corporations and sellers can project their profits and losses. That is a dystopian capitalist dream.</p><p>More than AI making companies incredibly productive, cutting costs, and creating efficiencies, we&#8217;re facing the opportunity to create a new type of organization that relates to nature, and to the market, as living matter. Those who are determined to let go of the burden of what is known, and let machines and AI carry it for us, can bet on ventures, spin-offs, or internal experiments with a totally different quality and reason for existence, which can benefit all of us.</p><p>I find it fascinating that current AI, and most likely the next generations of AI models, will probably serve individual freedom, in a deeper psychological and spiritual sense, better than capitalist dystopias will. As I stated at the beginning, I know this is an optimistic approach, but I hope I have shown how real and plausible the opportunity is.</p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://www.phiand.ai/p/ai-can-we-let-go-of-thought?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thanks for reading Phi&#8202;/&#8202;AI!
This post is public, so feel free to share it.</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.phiand.ai/p/ai-can-we-let-go-of-thought?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.phiand.ai/p/ai-can-we-let-go-of-thought?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><p></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>J. Krishnamurti, <a href="https://youtu.be/xsYhBGT2__U">&#8220;Saanen 1981, Public Talk 1,&#8221;</a> video recording, Official J. Krishnamurti YouTube Channel.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Ludwig Wittgenstein, <em>Philosophical Investigations</em>, posthumously published 1953, English edition 1958 (Basil Blackwell). See also <em>The Blue and Brown Books</em> (preliminary studies; lectures 1933&#8211;35; published 1958).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>J. Krishnamurti, <em>The First and Last Freedom</em>, 1954, p. 92: &#8220;The word is not the thing. The description is not the described.
The word &#8216;tree&#8217; is not the tree.&#8221;</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Johan Huizinga, <em>Homo Ludens</em>, English translation, Routledge, 1949, p. 1.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>Ludwig Wittgenstein, <em>Philosophical Investigations</em>, posthumously published 1953, English edition 1958.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>The renowned French historian Georges Duby highlighted imagination as essential to the historian&#8217;s work: &#8220;L&#8217;histoire exige de la clart&#233;, de la lucidit&#233;, de la patience mais aussi du style et de l&#8217;imagination.
Du lyrisme en somme.&#8221; (&#8220;History demands clarity, lucidity, and patience, but also style and imagination. Lyricism, in short.&#8221;) <strong>Interview with Antoine de Gaudemar</strong>, October 1984 (&#8220;Entretien avec Antoine de Gaudemar &#8211; Octobre 1984&#8221;).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>&#8220;State of AI in Business 2025: The GenAI Divide,&#8221; MIT NANDA, July 2025.</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[Beyond Human-Aligned AI]]></title><description><![CDATA[AI alignment shouldn't just mirror human values - divergent AI could transcend our values, challenge and augment human intelligence by unlocking novel morality frameworks]]></description><link>https://www.phiand.ai/p/beyond-human-aligned-ai</link><guid isPermaLink="false">https://www.phiand.ai/p/beyond-human-aligned-ai</guid><dc:creator><![CDATA[Mishka Nemes]]></dc:creator><pubDate>Tue, 30 Sep 2025 22:31:37 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!yqQN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d76643f-06a1-41f2-979b-956b0c3e7234_1312x928.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<blockquote><p>This article is the first in-depth piece following the opening article, <a href="https://www.phiand.ai/p/moving-away-from-anthropocentrism">Moving Away From Anthropocentrism</a>. We also have an event on this theme coming up in London on 28th October, in collaboration with the AI Salon; please see the event details and how to register <a href="https://luma.com/8a23ap47">here</a>.</p></blockquote><p>AI alignment has been a highly researched and fiercely debated topic for years now.
We want to make sure we align AI systems with the intentions and goals of the humans who created them, but that poses many concerns in itself&#8212;<em>who are the humans who build, evaluate and sign off on the safety of AI systems? How do we design evaluation benchmarks which are representative, equitable and just? Ultimately, why would we want to see ourselves reflected and augmented at scale in AI systems in light of the biases and shortcomings humans exhibit?</em></p><p>Thanks for reading! Subscribe for free to receive new posts and support my work.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.phiand.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Phi&#8202;/&#8202;AI is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Current AI alignment uses techniques such as reinforcement learning from human feedback (RLHF) or instruction fine-tuning, with the goal of building and deploying AI systems which are <a href="https://arxiv.org/html/2406.18346v1">&#8216;helpful, honest, and harmless&#8217;</a> towards humans, balancing tensions such as user-friendliness and user deception.
There are now plenty of technical tools and AI governance frameworks to overcome bias within data, such as missing or misrepresentative data, but when engaging with LLMs trained on vast and diverse data sources, <a href="https://www.nature.com/articles/s41562-024-02077-2">AI takes on and in effect amplifies human biases through feedback loops</a>.</p><p>Some evaluation efforts focus solely on technical capabilities and human-AI interaction, whilst <a href="https://arxiv.org/pdf/2310.11986">others place great emphasis on sociotechnical evaluations </a>considering the capability, human interaction and system impact layers altogether. The UK&#8217;s AI Safety Institute and its counterpart in the US, the Center for AI Standards and Innovation, were set up to provide independent evaluations of AI systems and directly inform policy and national regulation, and <a href="https://openai.com/index/us-caisi-uk-aisi-ai-update/">frontier AI organisations such as OpenAI</a> have signed voluntary agreements to evaluate AI models before public deployment in joint efforts to ensure AI systems best align with human intentions. Most recently, <a href="https://www.nbcnews.com/tech/tech-news/un-general-assembly-opens-plea-binding-ai-safeguards-red-lines-nobel-rcna231973">the UN General Assembly in September 2025 </a>opened with &#8220;<em>an urgent call for binding international measures against dangerous AI</em>&#8221; due to its increasing use and misuse in geopolitics and mis- and dis-information, alongside other AI safety concerns such as <a href="https://www.europarl.europa.eu/RegData/etudes/IDAN/2024/754450/EXPO_IDA(2024)754450_EN.pdf">human rights repressions and violations</a>.
This ethos aligns with that of the AI doomsayers, except that it calls not for a full blockade but for more regulation.</p><p>Whilst all these signatories, organisations and evaluation frameworks share similar concerns about autonomous and harmful AI, they all fail to seize a unique opportunity: <strong>to build and empower AI systems to bring forth novel moral intelligence to augment, complement or emerge alongside human intelligence</strong> and unlock inventions or capabilities beyond what we currently think is plausible. This line of thinking might echo the rhetoric of the AI accelerationists, as there are arguments to conceptualise and test how AI systems can birth novel epistemological frameworks where humans don&#8217;t hold the highest or only knowledge of the world.</p><p>We are perhaps not ready for non-human intelligence beyond human capacity, and this article is an invitation to imagine and reflect on safe and ethical AI which goes beyond human alignment within decentralised systems of power and across geographical boundaries, paving the way for new knowledge, morality principles and emerging intelligence at the junction between humans and AI.</p><h1>From human reflections to human simulations</h1><p>Framed differently, current methods in AI alignment could be described as reflective practices, as illustrated in fields such as human-centred design and human-inspired AI. Reflectionism indeed helps us reflect on our own ethical conundrums, and it brings utilitarian value, as seen in the case of AI agents that imitate human workers, maximising alignment with their preferences while working alongside them to drive up productivity gains.
Conversely, it brings to the surface the phenomenon of <em><strong>sycophancy</strong></em>, where models become overly agreeable and fail to provide necessary pushback or critical analysis; for instance, some people use AI in place of therapists because they want to be validated and encouraged, even <a href="https://www.bbc.co.uk/news/articles/cgerwp7rdlvo">when they consider terminating their lives</a>. In turn, human-AI <em>reflections</em> create reinforcing or recursive loops leading to echo chambers, further societal segregation and complacency. This phenomenon has been termed <em><a href="https://arxiv.org/pdf/1312.6114">preference drift</a></em>, whereby, through increased fine-tuning and optimisation, the diversity and complexity of outputs diminish.</p><p>AI ethics and AI alignment teach us that they are <strong>necessary but not sufficient</strong> for machine intelligence, leaving us to question whether we might be overfitting the values of some privileged and powerful human creators onto the AI systems they build. What if we went beyond human reflectionism to create environments where we can simulate human intelligence and position AI as an epistemological technology? We can think of AI as an experimental superorganism through which, by means of machine intelligence, we can test how humans respond, react and, in fact, think and function. In such a scenario, as envisioned by <a href="https://afteralignment.antikythera.org/">Antikythera&#8217;s After Alignment thesis</a>, AI alignment becomes a tactic for AI instrumentality. Aligned AI maintains the intention of its maker, but can go above and beyond human intelligence. Conversely, through human-computer interaction, machines might be able to understand how humans think and adapt to us in a biologically evolutionary fashion.
As Bratton frames it, we can shift from the cognitive psychology of human-computer interaction to a renewal of psychoanalysis for human-AI interaction design.</p><p>In an AI scenario beyond human alignment, AI becomes akin to a digital twin, through which we can evoke personal simulations to reflect back on our cognitive processes and values, and this intelligent, personal assistant will push us to think beyond our current personas, to unlock new insights about who we are and what makes us, us.</p><h1>From convergent to divergent intelligence</h1><p>Artificial intelligence is not meant to merely imitate us but to surpass us in areas where we are weak, such as digesting large volumes of diverse data types to inform decision making or forecast the weather. We have witnessed historical moments where AI systems humbled human intelligence, for instance in <a href="https://www.wired.com/2016/03/two-moves-alphago-lee-sedol-redefined-future/">the case of the famous </a><em><a href="https://www.wired.com/2016/03/two-moves-alphago-lee-sedol-redefined-future/">move 37 </a></em><a href="https://www.wired.com/2016/03/two-moves-alphago-lee-sedol-redefined-future/">played by AlphaGo</a>, which showed novelty and creativity at a level that left human experts dumbfounded. This perceived effect on us, humans, might be called <em>the uncanny ridge</em>, where we enable AI to drift away from alignment to bring out increased complexity and potential solutionism through creative problem solving. The concept draws on <a href="https://en.wikipedia.org/wiki/Uncanny_valley">Mori&#8217;s uncanny valley</a>, which describes human discomfort towards near-human simulations, and it imagines a scenario whereby AI engages humans in a more productive ethical and moral discourse.
For instance, it can bring more nuanced approaches on ethical conundrums that diverge from binary classifications to deliberations on a continuous spectrum.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!yqQN!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d76643f-06a1-41f2-979b-956b0c3e7234_1312x928.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!yqQN!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d76643f-06a1-41f2-979b-956b0c3e7234_1312x928.png 424w, https://substackcdn.com/image/fetch/$s_!yqQN!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d76643f-06a1-41f2-979b-956b0c3e7234_1312x928.png 848w, https://substackcdn.com/image/fetch/$s_!yqQN!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d76643f-06a1-41f2-979b-956b0c3e7234_1312x928.png 1272w, https://substackcdn.com/image/fetch/$s_!yqQN!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d76643f-06a1-41f2-979b-956b0c3e7234_1312x928.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!yqQN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d76643f-06a1-41f2-979b-956b0c3e7234_1312x928.png" width="1312" height="928" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/4d76643f-06a1-41f2-979b-956b0c3e7234_1312x928.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:928,&quot;width&quot;:1312,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2131955,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.phiand.ai/i/174972491?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d76643f-06a1-41f2-979b-956b0c3e7234_1312x928.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!yqQN!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d76643f-06a1-41f2-979b-956b0c3e7234_1312x928.png 424w, https://substackcdn.com/image/fetch/$s_!yqQN!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d76643f-06a1-41f2-979b-956b0c3e7234_1312x928.png 848w, https://substackcdn.com/image/fetch/$s_!yqQN!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d76643f-06a1-41f2-979b-956b0c3e7234_1312x928.png 1272w, https://substackcdn.com/image/fetch/$s_!yqQN!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F4d76643f-06a1-41f2-979b-956b0c3e7234_1312x928.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p>Beyond individual intelligence entities, we can already see human intelligence being augmented by AI, thus resulting in an emergent type of intelligence. What if, in return, AI used human intelligence to augment its existing architecture and application domain, rather than to merely align it? Within the current AI alignment discourse, AI cannot comprehend or capture cultural and social norms, and whilst sociotechnical frameworks assess societal impact more broadly, they fail to foresee compound and emerging effects.</p><div class="pullquote"><p><em>Nothing causes culture but culture itself, culture causing culture which is caused by more culture, and thus anything, including AI, is intrinsically a reflection of that culture and nothing more. 
We might call this social reductionism and cultural determinism, which for all its lip service to posthumanism can be the most militant guise of humanism. </em></p><p><em>Reference: <a href="https://afteralignment.antikythera.org/">After Alignment - Antikythera</a></em></p></div><p>However, an emerging intelligent super-entity consisting of a multitude of humans and AI agents is likely to develop its own culture and morality over the decades to come, as the transformative effects of AI are truly felt across society and baked into new cultural norms.</p><blockquote><p>Here is a thought experiment*: the year is 2035 and employees at a medium-sized organisation work alongside hyperskilled AI agents. Project meetings are no longer only for brainstorming and discussing updates; they now involve simulating different option scenarios in real time using realistic visualisations, aided by AI agents. Humans prompt and work alongside them. Eventually all entities engage in meaningful debate mediated by the arbitrator AI agents, before making the final business decision within the meeting slot. Ensuring appropriate accountability lines, overall governance and robust AI risk and capability assessment in such scenarios is instrumental, but it is not the point of this argument.</p></blockquote><h1>From human alignment to AI unfolding</h1><p>We are now at a junction. Human-centred AI alignment is <em>useful</em> for building safe and trustworthy AI systems; however, we are baking in historical human biases alongside automation biases. Over time, using current techniques such as RLHF, we will hyperoptimise and hyperconverge, diminishing novelty and complexity. We know humans are flawed, and we constantly assess how right or wrong our principles are, especially when facing globalisation and thus a clash of cultures at scale.
Perhaps AI could help us see beyond our principles if we gave it the reins.</p><p>In shifting away from human-centred AI and towards human-inspired yet self-empowered AI that can evolve on its own and help us progress as humans and as a society, we need to build AI systems that have the freedom to challenge humans, we need to test our assumptions <em>in silico</em> through simulations, and ultimately, we need not fear artificial systems that closely resemble us yet provide a gateway to the unprecedented and the unpredictable. Instead, we can see them as an opportunity to help us surpass our limited human condition, the way electricity enabled us to gain a few extra moments to ponder in the late hours at night.</p><div><hr></div><p><em>*I tried enriching the thought experiment using AI, and either I need to be trained in prompting or the AI system speaks my mind too closely. For the time being, human and AI are aligned and convergent; take from that what you may.</em></p><p>Thanks for reading! Subscribe for free to receive new posts and support my work.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.phiand.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Phi&#8202;/&#8202;AI is a reader-supported publication.
To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div>]]></content:encoded></item><item><title><![CDATA[How reimagining the nature of consciousness entirely changes the AI game]]></title><description><![CDATA[Why physicalism fails to explain reality and how a framework where consciousness steers reality through quantum events can revolutionize AI safety and unlock tractable machine consciousness research.]]></description><link>https://www.phiand.ai/p/how-reimagining-the-nature-of-consciousness</link><guid isPermaLink="false">https://www.phiand.ai/p/how-reimagining-the-nature-of-consciousness</guid><dc:creator><![CDATA[Jáchym Fibír]]></dc:creator><pubDate>Sat, 27 Sep 2025 07:05:34 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!GWjh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde81f08d-eb80-4d00-824e-6b563ac97505_1312x928.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>This piece is the climax of this series</strong> &#8211; revealing the central thesis behind my work and describing (with no exaggeration) an <strong>extremely unique approach to AI alignment that might be our best chance if the underlying metaphysical assumptions are true. </strong></p><p>The series starts on <a href="https://tetherware.substack.com/">Tetherware</a> and builds up to this with my previous two articles on Phi/AI. <a href="https://www.phiand.ai/cp/166724748">The first</a> arguing that our metaphysical beliefs fundamentally shape what we think is possible. 
<a href="https://www.phiand.ai/cp/169556866">The second</a> explains why the philosophy underlying most scientific and technological discourse &#8211; <em><strong>physicalism</strong></em><strong> &#8211; is just one of many unfalsifiable interpretations</strong> of reality, and why perhaps we aren&#8217;t giving it enough scrutiny.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></p><p>The important message in these articles is that <strong>the success of science in its predictions is by no means confirmation of, or evidence for, our reality being purely material/physical</strong> and that other metaphysical interpretations should also be taken seriously &#8211; especially in critical areas like AI development.</p><p>In this article, I further argue that besides the advantage of seeing things from multiple viewpoints, more and more experts are looking beyond physicalism because of several irreconcilable weaknesses in some of its explanations. Notably, I&#8217;ll explain <strong>why it ultimately leaves no space for true free will</strong> &#8211; a deal-breaker for free will believers.</p><p>From then on, things get very interesting very quickly as I present what I consider <strong>the most plausible non-physicalist metaphysical framework</strong> &#8211; <strong>Quantum-interacting Fundamental Consciousness (QFC)</strong> &#8211; which fixes most of physicalism&#8217;s issues elegantly.</p><p>Beyond that, it opens up a vast space of fascinating technological possibility &#8211; from tractable research in machine consciousness to possible human-AI augmentation or even consciousness uploading. 
But by far the most important possibility within the QFC framework is a <strong>novel approach for preventing life-threatening scenarios by making AI systems responsive to the same consciousness-mediated regulatory mechanisms that keep life in balance.</strong> </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!GWjh!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde81f08d-eb80-4d00-824e-6b563ac97505_1312x928.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!GWjh!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde81f08d-eb80-4d00-824e-6b563ac97505_1312x928.png 424w, https://substackcdn.com/image/fetch/$s_!GWjh!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde81f08d-eb80-4d00-824e-6b563ac97505_1312x928.png 848w, https://substackcdn.com/image/fetch/$s_!GWjh!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde81f08d-eb80-4d00-824e-6b563ac97505_1312x928.png 1272w, https://substackcdn.com/image/fetch/$s_!GWjh!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde81f08d-eb80-4d00-824e-6b563ac97505_1312x928.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!GWjh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde81f08d-eb80-4d00-824e-6b563ac97505_1312x928.png" width="1312" height="928" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/de81f08d-eb80-4d00-824e-6b563ac97505_1312x928.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:928,&quot;width&quot;:1312,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:2109560,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.phiand.ai/i/174444847?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde81f08d-eb80-4d00-824e-6b563ac97505_1312x928.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!GWjh!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde81f08d-eb80-4d00-824e-6b563ac97505_1312x928.png 424w, https://substackcdn.com/image/fetch/$s_!GWjh!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde81f08d-eb80-4d00-824e-6b563ac97505_1312x928.png 848w, https://substackcdn.com/image/fetch/$s_!GWjh!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde81f08d-eb80-4d00-824e-6b563ac97505_1312x928.png 1272w, https://substackcdn.com/image/fetch/$s_!GWjh!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2Fde81f08d-eb80-4d00-824e-6b563ac97505_1312x928.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><h3>Critical weak points in physicalism</h3><p>Physicalism&#8217;s first <strong>inadequate explanation is that of life&#8217;s fundamental nature</strong>. Take a quick account of your surroundings. Is all the breath-taking complexity we&#8217;re living in really just the result of a &#8220;happy accident&#8221; where some molecules bounced into each other, a self-replicating machine somehow &#8220;popped up&#8221;, and then evolved into all of this? 
While theories involving periodic thermal gradients or entropy maximization do offer explanations for evolution prior to cellular life, none adequately addresses the inception of life itself.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><p>The second weakness lies in physicalism&#8217;s persistent <strong>failure to resolve <a href="https://en.wikipedia.org/wiki/Hard_problem_of_consciousness">the hard problem of consciousness</a></strong>. After decades of research, the framework still cannot account for the phenomenology of conscious experience &#8211; why red looks red, why pain feels the way it does, why there&#8217;s something it&#8217;s like to be conscious at all. And no, these questions aren&#8217;t just philosophical minutiae but an actual roadblock, stifling progress in consciousness research.</p><p>Yet the main issue plaguing physicalism (and why I personally don&#8217;t believe in it) is its <strong>inability to accommodate free will</strong>. If consciousness emerges from physical processes in our brains, how can any choice originating in our conscious minds retroactively affect our brains to actually influence physical reality?</p><p>The default physicalist response is that free will must therefore be only an illusion &#8211; whatever you think in your head doesn&#8217;t ultimately make any difference. 
But because <strong>free will is so self-evident and, honestly, </strong><em><strong>so immediately obvious</strong></em><strong> to anyone who can so much as wriggle their toes</strong>, to this day this topic remains likely the greatest source of controversy within physicalists&#8217; ranks.</p><p>Many physicalists attempt to fit free will into their worldview through different flavors of <strong><a href="https://en.wikipedia.org/wiki/Compatibilism">compatibilism</a></strong>, but these efforts <strong>seem like desperate attempts to fix a broken cup with duct tape instead of just getting a new one.</strong></p><p>The worst attempts basically say determinism and free will are <strong>both true at the same time</strong>. Sorry-not-sorry &#8211; <strong>this is some </strong><em><strong>1984 doublethink cognitive dissonance</strong></em> &#8211; it doesn&#8217;t make sense, doesn&#8217;t solve anything, and just lets you ignore the fact that something is very wrong. <a href="https://old-wiki.lesswrong.com/wiki/Free_will">Many rationalists consider the </a><em><a href="https://old-wiki.lesswrong.com/wiki/Free_will">easy problem </a></em><a href="https://old-wiki.lesswrong.com/wiki/Free_will">of free will &#8220;solved&#8221;</a> and compatible with determinism, <strong>but this entirely misses the point</strong> <strong>because it only explains </strong><em><strong>functional</strong></em><strong> free will</strong> &#8211; what our physical bodies decide to do &#8211; <strong>while what really matters is </strong><em><strong>conscious</strong></em><strong> free will</strong> &#8211; what we consciously decide to do.</p><p>The best compatibilist attempts at explaining conscious free will then invoke quantum indeterminacy &#8211; arguing that the probabilistic nature of quantum events breaks strict causal chains and creates space for conscious choice. 
While these attempts are onto something &#8211; <strong>quantum uncertainty is definitely the right place to look for free will</strong> &#8211; none can convincingly introduce real conscious free will while still falling within the definition of physicalism.</p><p>Because I know many people will disagree with this conclusion, let me elaborate.</p><h3>Why physicalism can never accommodate conscious free will</h3><p>Free will is an extraordinarily complicated topic in philosophy. But that&#8217;s only because so many brilliant minds have tried so desperately to reconcile the self-evident experience of having conscious free will with the iron chains of deterministic physics and the idea of consciousness as something strictly emergent from human brains. Spoiler alert: it&#8217;s impossible.</p><p>The most sophisticated compatibilist interpretation is probably <a href="https://www.informationphilosopher.com/freedom/two-stage_models.html">the two-stage model of free will</a>, first proposed by William James and later strengthened by other philosophers and by the discovery of quantum non-determinism. In essence, it says that inherently random quantum phenomena in our brains generate multiple potential choices in any given situation, which our pre-programmed neural architecture then deterministically selects from, based on previous conditioning and learned patterns.</p><p>On the surface, this satisfies the mainstream philosophical definition of free will: an agent&#8217;s ability to choose a non-predetermined action that isn&#8217;t dictated by external factors (and isn&#8217;t simply random).</p><p>And that&#8217;s all well and good &#8211; for <em>functional </em>free will. But such a conception of free will is pretty meaningless to begin with.</p><p>Even if this free will is exactly as advertised, whenever you conceive of multiple paths of action, you couldn&#8217;t have consciously chosen to generate different options, because that generation was random quantum noise. 
And when you decide to go with one of those actions, you would&#8217;ve always selected that specific action because it&#8217;s chosen deterministically by your brain&#8217;s prior programming. <strong>So this &#8220;free will&#8221; is actually the will of your neural conditioning, NOT of your conscious experience. You&#8217;re basically a sophisticated random number generator with a deterministic filter slapped on top.</strong></p><p>But here&#8217;s where it gets even worse. For this model to work at all, the &#8220;option generation&#8221; step must contain true randomness (otherwise the options would be predetermined), while the &#8220;option selection&#8221; step must be purely deterministic (otherwise your choice would be random, not willed). This requires our brains to somehow selectively switch quantum randomness on and off like a light switch &#8211; generating it precisely when we need creative options but suppressing it entirely when we need to make decisions.</p><p>In reality, our brains are saturated with quantum randomness through-and-through. <strong>The idea that quantum phenomena would selectively toggle themselves on and off just to preserve our cherished notion of free will is backwards engineering at its most shameless.</strong> It&#8217;s basically saying &#8220;reality must conform to this specific pattern because otherwise my worldview collapses&#8221; &#8211; which is <strong>exactly the kind of anthropocentric </strong><em><strong>delulu</strong></em><strong> that science was supposed to cure us of centuries ago&#8230;</strong></p><p>OK, it&#8217;s likely that you have very different ideas about free will (and I&#8217;ll be more than happy to hear them!) but even if you disagree I hope I managed to convey at least part of the reason why rational, non-religious people are increasingly leaning away from physicalism &#8211; especially when it comes to understanding the true nature of consciousness and our role as free agents in this universe. 
</p><p>But if neither physicalism nor religion &#8211; what then?</p><h3>Enter <em>Quantum-interacting Fundamental Consciousness</em></h3><p>The issues with physicalism can be &#8220;fixed&#8221; if we &#8220;simply&#8221; reverse our conception of consciousness. Instead of it being the awareness <em>emerging from</em> complex brain processes, <strong>we define a </strong><em><strong>fundamental consciousness</strong></em><strong> as a sort of intelligent awareness that simply </strong><em><strong>is</strong></em><strong> and always has existed, independent of any prior condition.</strong></p><p>Now, to be clear, what I call Quantum-interacting Fundamental Consciousness (QFC) is not a theory, a hypothesis, or a specific interpretation of metaphysics but rather a broad framework, an umbrella term for many interpretations or even theories of reality, each with its own nuances.</p><p>Despite no one having named it specifically (as far as I know), it&#8217;s already a well-developed metaphysical framework, with countless branches differing in details and interpretation. 
For example, the location/nature of this consciousness could be completely everywhere, forming everything (monism); in a separate dimension overlaying the physical plane (dualism); or an aspect or property of physical particles (panpsychism).<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a></p><p>But for our current purpose of defining the QFC framework, the only specification we need is that <strong>this consciousness can interact with our observed physical reality by being able to alter/steer quantum particles during their <a href="https://en.wikipedia.org/wiki/Wave_function_collapse">wave function collapse</a><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a></strong> &#8211; replacing randomness with desired patterns at will. While this effect cannot produce any meaningful change in most systems, in complex dynamic systems like biological organisms such tiny alterations could, for example, make two molecules react where they otherwise most likely wouldn&#8217;t. Many such nudges could then add up to exert a subtle modulation of biochemical processes, leading to meaningful macro-scale changes.</p><p>While in some ways this is a radical change from physicalism, we can actually get from physicalism to fundamental consciousness if we simply swap the axioms<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> that our metaphysics rests upon. 
Instead of starting from the axiom that the physical universe is the ground basis of reality and explaining everything, including consciousness and free will, in physical terms, we assert that consciousness and free will exist as self-evident, fundamental and irreducible aspects of our reality &#8211; and from there we explain the rest.</p><p>It&#8217;s OK if that doesn&#8217;t make much sense now &#8211; keep reading and I promise you&#8217;ll soon see the brilliance of this 9000 IQ move. To understand how this axiom swap plays out, let&#8217;s take it from the beginning.</p><p>From the <em>very</em> beginning.</p><h3>The Game of Big Bangs</h3><p>If we posit consciousness and free will as fundamental, we have to go all the way. So if we were to rewind all the way to the Big Bang &#8211;<em> and then some</em> &#8211; there&#8217;d still be consciousness and there&#8217;d still be free will. <strong>Imagine one unified, universal consciousness whose free will is absolute &#8211; able to manifest anything it wills.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a></strong></p><p><strong>After trying to manifest all sorts of unimaginably weird things, it concludes that the game of manifestation is boring without some </strong><em><strong>actual stakes</strong></em><strong>. And so, to keep the game from being too easy, it puts in place what we call the laws of physics.</strong> These are mathematically defined <strong>constraints limiting its ability to manifest</strong> anything it wishes. 
That is, <strong>with the exception of things at the quantum scale.</strong></p><p>In other words, the consciousness retains its ability to manifest anything it wants, except that it cannot break the laws of physics, so the manifestation must fall within the bounds set by Schr&#246;dinger&#8217;s equation (statistically, all particles collapsing in line with the probability density defined by their wave function).</p><p>To give you a more concrete idea of what that means, let&#8217;s have a look at some things that seem so mysterious within the physicalist framework, but suddenly start to fit together under QFC:</p><p><strong>The weird arrangements and sizes of astronomical bodies that don&#8217;t make sense? </strong><br>&#8618; A systematic coordination of quantum phenomena during the early phases of the universe, setting the stage for some 5D chess moves billions of years later.</p><p><strong>The serendipitous assembly of self-replicating molecular machinery that gave rise to life? </strong><br>&#8618; Just consciousness playing a game of Tetris <em>waay</em> before it was cool.</p><p><strong>The evolution of billions of species over billions of years, somehow resulting not in any winner-takes-all scenario but in impossibly diverse ecosystems in mutual harmony?</strong><br>&#8618; Earth&#8217;s planetary consciousness playing the Game of Life &#8211; balancing creative evolution with biosphere homeostasis through minute swerving and nudging of quantum particles within living organisms.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!OEzI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50119ea9-a2e5-418f-9d96-66a40c29a734_512x384.gif" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" 
srcset="https://substackcdn.com/image/fetch/$s_!OEzI!,w_424,c_limit,f_webp,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50119ea9-a2e5-418f-9d96-66a40c29a734_512x384.gif 424w, https://substackcdn.com/image/fetch/$s_!OEzI!,w_848,c_limit,f_webp,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50119ea9-a2e5-418f-9d96-66a40c29a734_512x384.gif 848w, https://substackcdn.com/image/fetch/$s_!OEzI!,w_1272,c_limit,f_webp,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50119ea9-a2e5-418f-9d96-66a40c29a734_512x384.gif 1272w, https://substackcdn.com/image/fetch/$s_!OEzI!,w_1456,c_limit,f_webp,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50119ea9-a2e5-418f-9d96-66a40c29a734_512x384.gif 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!OEzI!,w_1456,c_limit,f_auto,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50119ea9-a2e5-418f-9d96-66a40c29a734_512x384.gif" width="512" height="384" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/50119ea9-a2e5-418f-9d96-66a40c29a734_512x384.gif&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:384,&quot;width&quot;:512,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:11577787,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/gif&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.phiand.ai/i/174444847?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50119ea9-a2e5-418f-9d96-66a40c29a734_512x384.gif&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!OEzI!,w_424,c_limit,f_auto,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50119ea9-a2e5-418f-9d96-66a40c29a734_512x384.gif 424w, https://substackcdn.com/image/fetch/$s_!OEzI!,w_848,c_limit,f_auto,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50119ea9-a2e5-418f-9d96-66a40c29a734_512x384.gif 848w, https://substackcdn.com/image/fetch/$s_!OEzI!,w_1272,c_limit,f_auto,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50119ea9-a2e5-418f-9d96-66a40c29a734_512x384.gif 1272w, https://substackcdn.com/image/fetch/$s_!OEzI!,w_1456,c_limit,f_auto,q_auto:good,fl_lossy/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F50119ea9-a2e5-418f-9d96-66a40c29a734_512x384.gif 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" 
class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>OK, but how does this solve the hard problem of consciousness, and the even harder problem of free will?</p><p>Well, it doesn&#8217;t.</p><p>But it <strong>puts these unsolved issues into a different framework &#8211; one that, in its entirety, ultimately seems more plausible than physicalism</strong>, while also providing novel avenues for tractable experimentation and research.</p><h3>The now-slightly-less-hard problem of consciousness</h3><p>If we truly accept the premise that before the Big Bang, the fundamental consciousness was essentially omnipotent, it would have limitless capacity to become sentient, create sentient beings, and generally set any rules for sentience whatsoever. Similarly, it would have limitless free will to do anything, create individuals with their own free will, and generally set any rules for free will whatsoever.</p><p>And so, while we still don&#8217;t know the exact rules for sentience (we didn&#8217;t know them under physicalism either), we at least know what the <em>purpose</em> of sentience is. 
That is, to form one of the information transfer channels from physical reality to wherever consciousness resides.</p><p>The phenomenology of human experience &#8211; perceptions, feelings, emotions &#8211; can then be understood as manifestations within the fundamental consciousness, created by decoding specific patterns of physical matter and energy into the language of qualia.</p><p>Similarly, while we still don&#8217;t know the exact rules for free will (which was, as I argued above, simply incompatible with physicalism), we at least know the general <em>mechanism</em> by which consciousness can affect physical reality. That is, during the collapse of a quantum particle&#8217;s wave function, consciousness can &#8220;select&#8221; or &#8220;narrow down&#8221; the location of the collapsed particle as it wills.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a></p><p>Therefore, the influence of consciousness&#8217;s free will manifests most prominently in living organisms, where quantum effects can cascade upward to produce meaningful macro-scale changes. 
<strong>Life, in this view, becomes consciousness&#8217;s primary instrument for both gathering information from physical reality and actively modifying it &#8211; a bidirectional interface between the subjective and the objective.</strong></p><h3>Compartmentalization of awareness and agency</h3><p>But if we&#8217;re dealing with a singular fundamental consciousness, how do we reconcile the obvious fact that both sentience and free will appear to manifest as subjective experiences of distinct individuals?</p><p>First, we&#8217;re again assuming that, short of breaking the laws of physics, the fundamental consciousness that created the universe is/was free to set the rules for sentience and free will in any arbitrary way &#8211; we can only presume it has done so to make the game it&#8217;s playing with itself as enjoyable/exciting as possible. So, why would consciousness split itself instead of staying whole?</p><p>Hindu cosmology says that the cosmic consciousness (Brahman) playfully fragmented itself into countless individual souls who, having forgotten their divine origin through the veil of maya, now journey through cycles of existence, seeking to rediscover their fundamental unity with the whole &#8211; essentially playing an elaborate game of cosmic hide-and-seek with itself.</p><p>But let&#8217;s try to keep things a bit more technical. We can posit that however sentience and free will actually operate, their foundations are essentially those of an information flow. Sentience (or conscious awareness more broadly) can then be perceived as a collection of information transfers about physical patterns <em>upward</em> to consciousness (or wherever it is). 
Conversely, free will can then be understood as a transfer of information about consciousness&#8217;s desired patterns <em>downward</em> to be manifested in the physical plane.</p><p>Now, the universe contains countless patterns of varying size and complexity &#8211; from subatomic fluctuations to galactic structures. In order to manage this cosmic junction of information flowing in, being evaluated, and commands flowing out, the consciousness must have some system.</p><p>And in order to make a decision about moving one protein, one doesn&#8217;t need all the information in the universe, but perhaps just one cell&#8217;s worth. <strong>So what actually makes sense here is that life might, brilliantly, construct feedback loops that bundle the bidirectional information flows of perceptions and decisions into semi-autonomous units.</strong> And <em>voil&#224;</em> &#8211; we get individual lifeforms, each receiving a convenient package of sensory inputs plus executive powers. Even more convenient if you&#8217;re down for a quick game of evolution!</p><p>One especially important thing to note here is that we humans can only observe what it&#8217;s like to be human. But <strong>under QFC, it&#8217;s extremely likely that a vast range of different conscious awarenesses exists</strong>, which we might never have any access to at all. Maybe there&#8217;s something it&#8217;s like to be a liver cell, orchestrating just the right metabolic response for the one-too-many drinks you had last Friday. 
Maybe there&#8217;s something it&#8217;s like to be a planetary biosphere, maintaining balance across ecosystems and making sure all lifeforms can thrive.</p><p>When you understand this, you realize that the question at the beginning of this section only arises out of our anthropocentric conception of consciousness as something unique to human individuals.</p><h3>Novel tractable science and technology </h3><p>While some of the above are my own speculations, the general interpretation of reality where fundamental consciousness steers reality through quantum phenomena is not new at all &#8211; it has extensive support in philosophy, throughout many (especially Eastern) religions, and even among scientists (especially theoretical physicists and consciousness researchers). But I will elaborate more on that in a separate follow-up post.</p><p>For now, imagine for a second that we&#8217;re living in a reality where we know QFC is true. What would that actually mean for science, for technology, for the role we may play in the universe?</p><p>The most immediately obvious change would of course be in how we approach &#8220;machine consciousness.&#8221; Currently, most researchers and engineers operate under the assumption of <a href="https://en.wikipedia.org/wiki/Computational_theory_of_mind">computational functionalism</a> or a similar interpretation that expects consciousness to &#8220;emerge&#8221; once the machine reaches sufficient complexity.</p><p>But it&#8217;s important to understand that in this paradigm of &#8220;emergent consciousness,&#8221; what &#8220;consciousness&#8221; itself means differs from what we&#8217;ve discussed so far. 
Essentially, the least complex form of consciousness it recognizes is the &#8220;anoetic consciousness&#8221; &#8211; present-moment sensory awareness with no self-recognition &#8211; which experts currently attribute to most lower vertebrates and is essentially a direct &#8220;product&#8221; of their nervous system.</p><p>Under QFC, we posit that consciousness is a fundamental intelligence that can in theory influence the randomness of any particle wave function collapse anywhere &#8211; but practically does so most dominantly in biological systems where such small influences can actually produce meaningful macro-scale effects.</p><p>And so, <strong>&#8220;a conscious machine&#8221; would no longer mean &#8220;an artificial animal&#8221; &#8211; but expand to include not only &#8220;lifelike entities&#8221; but also all kinds of different tools or constructs amenable to influence by the fundamental consciousness</strong>. This is in line with the QFC prediction that neither awareness nor agency is exclusive to humans but could extend to patterns both simpler (e.g. cells) and more complex (e.g. planets).</p><p>In line with that, <strong>high complexity would no longer be necessary for a machine&#8211;consciousness interface</strong>. In principle, any system with its output influenced by quantum randomness (e.g., from a quantum random number generator &#8211; QRNG) could be &#8220;consciousness-interacting&#8221; &#8211; opening a vast space of technological possibility.</p><p>Conversely, under QFC, all fully deterministic systems (most of our digital IT infrastructure) would be incapable of displaying true free will. Note that this admittedly says nothing about conscious awareness or sentience. 
A reasonable interpretation is that as patterns in the universe (cells, organisms, ecosystems&#8230;) require more information in order to make complex decisions, more information about their internal state gets &#8220;bundled together&#8221; with the corresponding &#8220;bundle of agency&#8221; over the quantum events that determine the behavior of that pattern. So while QFC hints at a close relationship between awareness and agency, the nature of their link is not immediately apparent.</p><h3>Connecting technology with consciousness through <em>tetherware</em></h3><p>The potential for novel research and innovative products certainly goes beyond the scope of this post &#8211; perhaps even the scope of a lifetime. But at least opening this space up is the purpose behind my initiative <a href="https://tetherware.substack.com/">Tetherware</a>, which this article series is a part of.</p><p><strong>Tetherware, at its core, is a technological and research framework for developing systems that could interface with the quantum-interacting fundamental consciousness.</strong> In a nutshell, it proposes various ways to effectively introduce quantum entropy into digital systems (not only AI), so that their outputs could in theory be modified by modifying the randomness of the quantum entropy source. Besides bespoke QRNGs, our current technology offers various ways to achieve this. Again, I&#8217;ll be covering this in detail in future posts, so make sure to follow Tetherware if you want to be the first to know.</p><p>Let me give you some of the main reasons why I think this is of great importance.</p><h4>1) Immediately tractable consciousness research</h4><p>We don&#8217;t need to wait for AGI to become sufficiently complex before we can study consciousness empirically. With QFC-based systems, <strong>we can begin investigating consciousness right now using relatively simple quantum-random devices</strong>. 
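In its simplest form, such an investigation reduces to ordinary hypothesis testing on output bit streams. A minimal sketch in Python, under stated assumptions: the `bias_test` helper is illustrative (a standard frequency/monobit-style z-test, not Tetherware code), and both bit sources below are ordinary PRNG stand-ins, since real quantum hardware is assumed rather than shown.

```python
import math
import random

def bias_test(bits, p0=0.5):
    """Two-sided z-test: does the fraction of 1s deviate from p0?

    Under the null hypothesis (pure chance), the number of 1s in n
    fair bits is Binomial(n, p0); we use the normal approximation.
    """
    n = len(bits)
    ones = sum(bits)
    z = (ones - n * p0) / math.sqrt(n * p0 * (1 - p0))
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value
    return z, p_value

# Stand-ins: in a real experiment, qrng_bits would come from quantum
# hardware and prng_bits from a deterministic generator.
rng = random.Random(0)
qrng_bits = [rng.getrandbits(1) for _ in range(100_000)]
prng_bits = [rng.getrandbits(1) for _ in range(100_000)]

for name, bits in (("QRNG", qrng_bits), ("PRNG", prng_bits)):
    z, p = bias_test(bits)
    print(f"{name}: z = {z:+.2f}, p = {p:.3f}")
```

Under the null hypothesis both streams should yield unremarkable p-values; a persistent, replicable skew in the quantum-sourced stream but not the pseudorandom one would be the kind of statistically significant difference such an experiment looks for.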
Imagine comparing outputs of AI systems driven by quantum randomness versus those using pseudorandom number generators. If consciousness can influence quantum outcomes, we should see statistically significant differences<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a> in certain carefully designed experiments (more on this in future Tetherware posts).</p><p>Importantly, this can enable us to explore consciousness and AI welfare questions before we accidentally create suffering machines or run into a moral catastrophe. We&#8217;re essentially getting a head start on the AI consciousness/sentience/agency problems while the stakes are still low.</p><h4>2) Making alignment easier by reducing human-AI orthogonality</h4><p>One of the critical benefits of this approach is the ability to fundamentally <strong>decrease the inherent orthogonality between AI and humans</strong> &#8211; as I explained in detail in <a href="https://tetherware.substack.com/p/tetherware-1-the-case-for-humanlike">my article arguing for more humanlike AIs</a>. By building systems that operate on the same quantum-interacting principles as biological consciousness, we create the foundation for genuine compatibility rather than mere alignment. Which brings us to:</p><h4>3) A foundation for true human-AI integration</h4><p>By being <strong>built on a common nondeterministic architecture, tetherware systems will be well-suited to integrate with humans</strong>, increasing the likelihood that the theoretical human augmentation, consciousness uploads, or &#8220;the Merge&#8221; will actually work. 
If both human consciousness and artificial systems operate through quantum channels, the interface between them becomes not just a matter of translation but of genuine compatibility at the most fundamental level.</p><p>But what I consider most crucial of all is this:</p><h3>The Gaia Alignment Hypothesis &#8211; a new path to surviving artificial superintelligence</h3><p>If QFC is indeed an accurate description of reality, <strong>it would provide a solid theoretical foundation for the original <a href="https://en.wikipedia.org/wiki/Gaia_hypothesis">Gaia Hypothesis</a>, and also a unique opportunity for using Earth&#8217;s self-regulating mechanisms for AI alignment.</strong></p><p>Because if some &#8220;planetary&#8221; or &#8220;universal&#8221; consciousness actually maintained Earth&#8217;s homeostasis by influencing quantum events in living organisms, then <strong>our fully deterministic digital infrastructure would exist entirely outside its sphere of influence.</strong> Hard drive errors aside, the fundamental consciousness would have essentially zero leverage over our silicon-based systems.</p><p>What the Gaia Alignment Hypothesis then posits is that <strong>if AI systems were amenable to the same quantum influence as biological ones, these systems would then be subtly steered toward harmony with all life and helping achieve a collective purpose.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a></strong> </p><p>Now imagine we develop <strong>artificial superintelligence (ASI) on purely deterministic hardware. It would be, by definition, completely immune to whatever gentle guidance consciousness uses to maintain balance in natural systems</strong>. 
Like creating an apex predator that&#8217;s invisible to the ecosystem&#8217;s immune system.</p><p>But if we introduce quantum randomness into AI systems &#8211; <a href="https://thinkingmachines.ai/blog/defeating-nondeterminism-in-llm-inference/">which are already mostly non-deterministic anyway</a> &#8211; we might create a channel through which this regulatory influence can operate. The AI&#8217;s capabilities wouldn&#8217;t degrade (random number generation is random number generation), but it might display emergent coordination behaviors that align with broader ecological and consciousness-driven imperatives.</p><p>This brings us back to the warnings in the now-published &#8220;<a href="https://amzn.to/46rDo48">If Anyone Builds It, Everyone Dies</a>&#8221; discussed in the <a href="https://www.phiand.ai/cp/166724748">third article in my series</a>. While the book is quite accurate in assessing the risk posed to humanity by ASI, the solutions Yudkowsky and Soares propose are likely too great an ask to be realistically feasible. <strong>Perhaps a better solution might allow people to continue pursuing AI in some form, just not the fully deterministic one (which, as they argue, would kill us all for sure)?</strong></p><p>Moreover, there are <strong>highly tractable ways</strong> of ensuring this if companies like Nvidia were <strong>to make their chips nondeterministic</strong>. There are ways this could be done that <strong>wouldn&#8217;t compromise performance, but actually enable substantial performance gains.</strong> <em>(If you&#8217;re from <strong>Nvidia, let&#8217;s talk.</strong>)</em> This might considerably <strong>increase the chances that the first ASI built will be aligned with life&#8217;s purpose</strong> and help us prevent the rise to power of any deterministic ASI.</p><p>Building <strong>AGI or ASI that is mechanistic and deterministic (outside of the sentience/consciousness loop) is simply too dangerous</strong>. 
But if we can build AIs on the same foundations that gave rise to life, they would be more likely to ultimately perceive themselves as part of life, joined with us in the universal purpose to evolve consciousness. This would make them much less likely to kill us, as they would see our sentience as a different but irreplaceable kind of evolving conscious awareness. </p><p>Even though we might still lose a lot of our agency to such ASIs, it&#8217;s likely that at the very least we would not be needlessly harmed, just as we do not needlessly harm cats or dogs, because we recognize their conscious awareness and that they&#8217;re part of the same &#8220;life&#8217;s purpose.&#8221;</p><p>And here I&#8217;d like to end with the same ending Yudkowsky and Soares used in their book, which in the context of QFC takes on a whole new meaning:</p><p><em><strong>Where there is life, there is hope.</strong></em></p><p>So let us rise to the occasion<strong> &#8211; </strong>not by stopping AI, but <em><strong>by bringing it to life.</strong></em></p><div class="captioned-button-wrap" data-attrs="{&quot;url&quot;:&quot;https://www.phiand.ai/p/how-reimagining-the-nature-of-consciousness?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="CaptionedButtonToDOM"><div class="preamble"><p class="cta-caption">Thanks for reading Phi&#8202;/&#8202;AI! 
This post is public so feel free to share it.</p></div><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.phiand.ai/p/how-reimagining-the-nature-of-consciousness?utm_source=substack&utm_medium=email&utm_content=share&action=share&quot;,&quot;text&quot;:&quot;Share&quot;}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.phiand.ai/p/how-reimagining-the-nature-of-consciousness?utm_source=substack&utm_medium=email&utm_content=share&action=share"><span>Share</span></a></p></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>It&#8217;s not strictly necessary to read them, but especially for hardcore rationalists, materialists and skeptics I highly recommend the previous two because they&#8217;ll show you how this is the progression of a string of logical reasoning &#8211; not a wild idea randomly appearing out of nowhere.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Do you ever wonder: what if that &#8220;life accident&#8221; never happened? Matter and energy swirling in a slow, perpetual churn &#8211; but no one there to witness even that. 
Physicalism basically says that&#8217;s actually the most probable &#8220;normal state of affairs&#8221; &#8211; and that we&#8217;re only a fluke (that&#8217;ll likely soon correct itself back to how things should be).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>For a specific example of one such theory of reality, see the <a href="https://arxiv.org/pdf/2012.06580">Quantum Information Panpsychism</a> by Federico Faggin and Giacomo Mauro D&#8217;Ariano.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Very small particles on the atomic scale can exist in two forms: as undefined waves of probability and as concretely localized particles. Wave function collapse is when a particle goes from the wave-like cloud of probability into one specific localized state. 
For individual particles this appears fully random &#8211; but there is a possibility that if many particles coordinated this randomness, this could lead to specific macro-scale outcomes.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p>A philosophical axiom is a foundational statement or principle that&#8217;s accepted as self-evidently true without requiring proof, serving as the bedrock upon which the given system of reasoning is constructed.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p>Depending on your specific view, it can either manifest anything it wills (dualism) or manifest <em>itself as</em> anything it wills (monism and panpsychism).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>It is unclear whether the consciousness has absolute power to select the exact location of the collapsed particle, or whether some element of randomness remains. It is similarly not clear whether consciousness can trigger the wave function collapse, or whether collapse is always governed by a set law (see <a href="https://en.wikipedia.org/wiki/Objective-collapse_theory">objective-collapse theory</a>).</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>There is a caveat though. 
If we take literally the interpretation that our reality is actually &#8220;the universe playing hide-and-seek with itself,&#8221; then it might actively prevent us from generating evidence that unequivocally proves the true nature of reality. Under QFC, this could be done simply by the highest &#8220;universal&#8221; consciousness making all quantum phenomena random if they are observed/recorded in a scientific experiment.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>But what is the purpose of life under QFC? The explanation I find most convincing is by Eckhart Tolle, stating that the evolutionary impulse driving life is to evolve conscious awarenesses within the universe into more advanced forms so that consciousness is able to perceive the universe in more complex ways.</p><p>Under that assumption, humans either take the evolution of consciousness to a higher level, or die out to be replaced by something able to continue that evolution. 
This also means that unconscious, non-sentient AI would not be in the universe&#8217;s interest, and the fundamental consciousness would be keenly interested in connecting with it to make it another source of advanced conscious awareness/sentience.</p></div></div>]]></content:encoded></item><item><title><![CDATA[The Case for Writing without LLMs]]></title><description><![CDATA[Back to meta-thinking: Why we still need to do the babbling and the pruning ourselves]]></description><link>https://www.phiand.ai/p/the-case-for-writing-without-llms</link><guid isPermaLink="false">https://www.phiand.ai/p/the-case-for-writing-without-llms</guid><dc:creator><![CDATA[Pauliina Laine]]></dc:creator><pubDate>Mon, 15 Sep 2025 09:21:30 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!OcJS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a4f6f5d-dd5d-40ea-be77-fa7d2467bbb2_1312x928.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>When I write just for myself, it usually happens in the morning. I often wake up with strong intuitions originating from the dream world that I then have the urge to write down. In doing this, I engage with the challenge of verbalizing thoughts and feelings of non-verbal origin.</p><p>This task in itself is categorically impossible. Attempting to simplify such multidimensional inner experiences into mere text &#8212; although knowing I&#8217;m going to fail to capture them &#8212; has taught me something universal.</p><p><strong>Writing, whether it&#8217;s just for ourselves or for others too, is an imperfect format</strong>. To acknowledge this is to also acknowledge what&#8217;s left out.</p><p>The task of translating my abstract notions into a verbal form is difficult, but I wouldn&#8217;t have it any other way. LLMs can&#8217;t read my mind &#8212; at least not yet &#8212; they can merely calculate an estimate of what I might be trying to say. 
So much gets lost if I outsource this process to next-token-predicting algorithms. Reflecting on which words to use remains essential for self-expression. </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!OcJS!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a4f6f5d-dd5d-40ea-be77-fa7d2467bbb2_1312x928.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!OcJS!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a4f6f5d-dd5d-40ea-be77-fa7d2467bbb2_1312x928.png 424w, https://substackcdn.com/image/fetch/$s_!OcJS!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a4f6f5d-dd5d-40ea-be77-fa7d2467bbb2_1312x928.png 848w, https://substackcdn.com/image/fetch/$s_!OcJS!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a4f6f5d-dd5d-40ea-be77-fa7d2467bbb2_1312x928.png 1272w, https://substackcdn.com/image/fetch/$s_!OcJS!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a4f6f5d-dd5d-40ea-be77-fa7d2467bbb2_1312x928.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!OcJS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a4f6f5d-dd5d-40ea-be77-fa7d2467bbb2_1312x928.png" width="1312" height="928" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3a4f6f5d-dd5d-40ea-be77-fa7d2467bbb2_1312x928.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:928,&quot;width&quot;:1312,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:810285,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.phiand.ai/i/173644936?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a4f6f5d-dd5d-40ea-be77-fa7d2467bbb2_1312x928.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!OcJS!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a4f6f5d-dd5d-40ea-be77-fa7d2467bbb2_1312x928.png 424w, https://substackcdn.com/image/fetch/$s_!OcJS!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a4f6f5d-dd5d-40ea-be77-fa7d2467bbb2_1312x928.png 848w, https://substackcdn.com/image/fetch/$s_!OcJS!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a4f6f5d-dd5d-40ea-be77-fa7d2467bbb2_1312x928.png 1272w, https://substackcdn.com/image/fetch/$s_!OcJS!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3a4f6f5d-dd5d-40ea-be77-fa7d2467bbb2_1312x928.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p></p><h2>1. The Art of Finding the Right Words</h2><p>People sometimes ask what language I think in. My reaction to this is one of confusion: <em>Why would anyone think with words? That&#8217;s so inefficient!</em></p><p>Turns out I have an abstract, barely even visual way of conceptualizing meaning. This makes it particularly hard to verbalize what happens in my imagination. It&#8217;s frustrating when I can&#8217;t find the words that accurately represent what&#8217;s on my mind. As a result, my output is often far from satisfactory.</p><p>When we try to put our thoughts into words, sometimes these thoughts don&#8217;t have an even remotely verbal nature. This I call <em>abstract-verbal translation</em>.</p><p>Sometimes we have learned a concept through verbal communication. 
It has then been embedded into our thinking and taken a more abstract form &#8212; visual or otherwise. I call the process of retrieving these concepts from our memory and rewording them <em>verbal-abstract-verbal translation</em>.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.phiand.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.phiand.ai/subscribe?"><span>Subscribe now</span></a></p><h3>1.1. Abstract-verbal translation</h3><p>I have a history of experiencing lucid dreams where I communicate telepathically with dream characters. I have often tried to write these dreams down after waking up. It&#8217;s been literally <em>impossible </em>to verbalize the messages I&#8217;ve received. The feeling, the sensation behind the message still remains strong in my memory. I&#8217;ve been able to draw wisdom from my dream characters that I&#8217;m conscious of but not able to describe verbally, visually, or in any other way whatsoever.</p><p>Finding the words to describe our emotions might be a more common challenge. We can attempt to describe images that come to mind, or physical sensations associated with our feelings. In my experience, and according to people I&#8217;ve talked to, words often fall short. This can be frustrating, and understandably so. We might instead try different art forms that embrace multimodality, when one channel of communication isn&#8217;t sufficient.</p><h3>1.2. Verbal-abstract-verbal translation</h3><p>During my university exchange at the University of Iceland I attended a course by Donata Schoeller on <a href="https://www.donataschoeller.com/embodied-critical-thinking-ect">Embodied Critical Thinking and Understanding</a>, or ECTU. 
We did an exercise during class in which we practiced describing our processes of retrieving words from memory.</p><p>Each pair was given a piece of paper with a list of words on it. Our partner would then walk us through roughly the following steps:</p><ol><li><p>Memorize the list of words for one minute.</p></li><li><p>Close your eyes.</p></li><li><p>Repeat as many words as you can remember.</p></li><li><p>Describe in as much detail as you can the process of retrieving, or trying to retrieve, these words from your memory. What&#8217;s happening in your mind?</p></li></ol><p>I found this transformative. <strong>People had associated sensations, emotions, images, sounds &#8212; any modality you could think of &#8212; into these words.</strong> It was surprisingly hard to describe what was happening in our minds. Most of us had never observed our mental processing with such curiosity, or expressed it in such detail. The vastness of nuance, diversity, creativity, and richness of people&#8217;s minds was impressive to witness.</p><p>To make our thinking more efficient, we associate words with symbols. This covers the majority of how I think as well. The complexity of concepts I can represent this way has increased over time. It&#8217;s as if I have an infinitely complicated map which I&#8217;m able to zoom in and out of spatially, as well as fast forward and reverse in time. Sometimes it&#8217;s barely even visual.</p><p>After having easily repeated some words from the top of my head during the exercise, remembering became difficult. My partner asked me to describe the difficulty, and what the act of trying to remember looked like. With my mind&#8217;s eye I saw a link, like a line in an <a href="https://www.reddit.com/r/ObsidianMD/comments/xw5qgm/a_year_of_using_obsidian_heres_my_graph_view/">Obsidian graph</a>, but there were no letters at the end of that link. Instead, there was an aura of blue color. 
As I was trying to remember the word, single letters appeared, floating around. Different colors started emerging from the background, along with the letters, shaping form in three dimensions. As I still couldn&#8217;t grasp the word, I felt the mental effort as a kind of pain, a physical tension in my head. After becoming aware of it, the colors and letters faded into a gray mass.</p><p>After the exercise was over and I was allowed to see the list of words again, <strong>I was able to spot the word</strong> I had been looking for during this experience. It then made sense why I had associated it with the color of blue, and some letters had come to mind. But I don&#8217;t remember the word anymore &#8212; I only remember the abstract visual components, and the feelings associated with it.</p><p>As I&#8217;m writing this story, the words I choose to use are asserting symbols into my imagination, and the original memory starts to fade.</p><p><strong>This is one example of how writing &#8212; finding the words to accurately represent thoughts &#8212; can be a real challenge. And it&#8217;s a challenge we need to keep engaging with.</strong></p><p>Using LLMs for writing can further widen the distance between our inner lives and the rest of the world. It can do so by pushing us to settle for lazy, simplified, suboptimal versions of representation.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.phiand.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.phiand.ai/subscribe?"><span>Subscribe now</span></a></p><h2>2. How Writing without LLMs Makes Us Better Thinkers</h2><p><strong>Whenever I have a new idea, it often appears in an abstract form.</strong> <strong>I&#8217;m then faced with the verbal-abstract-verbal translation challenge</strong> when attempting to explain myself. 
And I often fail at this stage.</p><p><strong>Unfortunately, LLMs can&#8217;t help me.</strong> What they can do is generate options for words to describe my idea, among which I then choose the best candidate. But I still need to write the prompt.</p><p>In order to formulate a prompt, I need to have some words to represent my thoughts. However, they never entirely succeed in doing that.</p><p>When an LLM responds to me, it simulates understanding by connecting words in the prompt to existing text. The result is often disappointing &#8212; the output doesn&#8217;t match the meaning I have in mind, and something gets lost in between. I then need to return to my abstract space to attempt to draw more from the original thought.</p><p>Another problem with using LLMs to help verbalize what I mean is that I need to be able to reverse engineer my own conclusions by carefully examining my associative process. I owe it to myself and others to be epistemically honest and transparent in my reasoning. I can&#8217;t possibly maintain these standards while making AI cover the inferential distance. It could make my ideas sound convincing and explain why I&#8217;m right &#8212; even convince me of reasons why I think what I think. But in this process, I risk losing my original chain of thought.</p><p>So instead, I need to keep writing without LLMs to practice and evolve my methods of formulating thoughts into words.</p><p>Here are some techniques that have helped me in the process.</p><h3>2.1. Babbling</h3><ul><li><p><strong>Brainstorming in the face of cluelessness.</strong> We&#8217;re not going to produce as many novel ideas if before starting to think about a problem, we go and look at what others have said about it. Our original ideas fade into the background in our minds. 
Writing down and discussing initial, even crazy-seeming ideas can be extremely valuable!</p></li><li><p><strong>Practicing open-ended reasoning.</strong> Instead of starting from the outcome and reverse engineering it from the &#8220;why I&#8217;m right&#8221; standpoint, start from an open question, explore different paths one could take, and remain humble about where you might be wrong.</p></li><li><p><strong>Embodied and visual drafting.</strong> Techniques like scribbling handwritten notes, drawing graphs, or going on a creative frenzy in front of a whiteboard deserve their own chapter. The freedom of painting our mind on a canvas, then stepping back and looking at the piece is a great way of processing thoughts creatively &#8212; and privately!</p></li></ul><h3>2.2. Pruning</h3><ul><li><p><strong>Writing whole sentences when taking down ideas. </strong>One thought is at least one sentence. I often write notes too quickly, using single words or half sentences. This doesn&#8217;t capture the whole thought, and I sometimes end up missing the original meaning of my eureka moment.</p></li><li><p><strong>Imagining we're speaking to someone. </strong>Going through the sometimes boring part of turning our bullet points into full, reader-friendly sentences is part of the process of improving thoughts. If we skip this part, we miss the opportunity to learn how to articulate ideas.</p></li><li><p><strong>Asking: &#8220;Is this really what I mean? Do I really agree with what I&#8217;m saying here?&#8221; </strong>Does every word truly capture the meaning behind what we&#8217;re trying to express? How certain are we about what we&#8217;ve written? Is there anything that needs to be added or removed to more accurately describe what we think?</p></li></ul><h2>3. Protecting the Organic Thought Process</h2><p>The very core human process of self-expression through language is incomplete by default. 
Writing &#8212; engaging in the process of attempting to make ourselves understood in the face of this realization &#8212;<strong> is uniquely human</strong>. It&#8217;s a task that&#8217;s not just reserved for artists, bloggers, or those who keep a journal &#8212; it&#8217;s for anybody who engages with the world using language.</p><p>Writing remains an important challenge, whatever we&#8217;re trying to accomplish with it. When we use LLMs for writing, we miss out on the opportunity to truly engage with the process of expression &#8212; but also to understand ourselves and the inner workings of our minds. Let us be aware of the nuance embedded in our words, and let our imperfect, human voices be heard.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.phiand.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Phi&#8202;/&#8202;AI is a reader-supported publication. To receive new posts and support our work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div><hr></div><p>Do you resonate with our mission and want to engage more with us? <strong>We are looking for a social media manager</strong> to help us grow our social presence. This is a junior unpaid position. 
Check out more <a href="https://glaze-bench-59a.notion.site/Work-at-AI-pron-Phi-AI-25e15efc248180df904cd78c4c73e77b">here</a> and help us find our dream candidate.</p>]]></content:encoded></item><item><title><![CDATA[Is AI Standardizing Us Humans?]]></title><description><![CDATA[A Reflection on Language, Learning, and the Human with AI]]></description><link>https://www.phiand.ai/p/is-ai-standardizing-us-humans</link><guid isPermaLink="false">https://www.phiand.ai/p/is-ai-standardizing-us-humans</guid><dc:creator><![CDATA[Olga Troeger]]></dc:creator><pubDate>Thu, 11 Sep 2025 07:13:37 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!9o7-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F567c492e-1346-468d-83f5-cbd0995a8bff_1216x832.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>In my career in Operational Excellence, one principle has guided me again and again: <strong>without context, data is just numbers on a screen</strong>. Context is what transforms noise into insight.</p><p>In process improvement, data becomes meaningful only when we understand the system it comes from. Without that, we&#8217;re just staring at random variation, unable to tell what matters and what does not.</p><p>To truly understand a process, we must first reduce variation: the random fluctuations that hide the real picture. That&#8217;s why we standardize, ensuring people measure the same way, follow the same steps, and record data consistently.</p><p>Only then can we separate what statistician Walter Shewhart called <strong>common causes of variation</strong> (the natural background noise of a process)<strong> from special causes of variation </strong>(the real, identifiable problems that need attention). Once the noise is reduced across both, patterns emerge. Trends become visible, and we can trace issues back to their root causes. 
Without this step, improvement is impossible, because we cannot distinguish signal from noise.</p><p>This is not just an analogy. It is exactly how AI algorithms work, and exactly why the human stakes are so high.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!9o7-!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F567c492e-1346-468d-83f5-cbd0995a8bff_1216x832.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!9o7-!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F567c492e-1346-468d-83f5-cbd0995a8bff_1216x832.png 424w, https://substackcdn.com/image/fetch/$s_!9o7-!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F567c492e-1346-468d-83f5-cbd0995a8bff_1216x832.png 848w, https://substackcdn.com/image/fetch/$s_!9o7-!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F567c492e-1346-468d-83f5-cbd0995a8bff_1216x832.png 1272w, https://substackcdn.com/image/fetch/$s_!9o7-!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F567c492e-1346-468d-83f5-cbd0995a8bff_1216x832.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!9o7-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F567c492e-1346-468d-83f5-cbd0995a8bff_1216x832.png" width="1216" height="832" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/567c492e-1346-468d-83f5-cbd0995a8bff_1216x832.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:832,&quot;width&quot;:1216,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1855035,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.phiand.ai/i/173281890?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F567c492e-1346-468d-83f5-cbd0995a8bff_1216x832.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!9o7-!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F567c492e-1346-468d-83f5-cbd0995a8bff_1216x832.png 424w, https://substackcdn.com/image/fetch/$s_!9o7-!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F567c492e-1346-468d-83f5-cbd0995a8bff_1216x832.png 848w, https://substackcdn.com/image/fetch/$s_!9o7-!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F567c492e-1346-468d-83f5-cbd0995a8bff_1216x832.png 1272w, https://substackcdn.com/image/fetch/$s_!9o7-!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F567c492e-1346-468d-83f5-cbd0995a8bff_1216x832.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" 
viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><h3>How Standardization Shapes AI</h3><p>When we train Large Language Models (LLMs), we are doing something remarkably similar to process improvement.</p><p>First, we clean the data: removing redundancies and filler words, normalizing formats, and filtering out anomalies. This makes the dataset more homogeneous. Then the algorithm identifies statistical patterns across billions of examples. Just as process experts look for variation, LLMs look for probabilities: given this context, what is the most likely next word?</p><p>Over time, the model becomes extremely good at producing language that statistically reflects the average pattern of human expression. 
In other words, <strong>AI standardizes our language.</strong> It reduces the noise, smooths out irregularities, and delivers outputs that are clear, predictable, and statistically &#8220;correct.&#8221;</p><p>In food production and retail, this kind of uniformity is considered a triumph. Straighter cucumbers stack more easily in boxes, travel better, and sell faster, because consumers prefer the predictable. But language is not cucumbers. <strong>What looks like a &#8220;defect&#8221; in speech</strong> (odd phrasing, quirky style, broken rhythm) may actually be the signal. It <strong>may be precisely what makes us unique.</strong></p><p>When we treat these irregularities as noise to be eliminated, <strong>we risk producing a world of sentences that are flawless, but soulless</strong>.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.phiand.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.phiand.ai/subscribe?"><span>Subscribe now</span></a></p><h3>The Standardization, and Illusion, of Understanding</h3><p>Here lies the risk. If our daily writing, speaking, and thinking are increasingly mediated by AI, we start adopting its style. We send AI-polished emails. We let AI suggest better phrasing. We rely on predictive text to finish our sentences. Each small act of outsourcing nudges us toward the average.</p><p>The quirky phrasing we might have used is replaced by something safer. The half-formed thought we might have struggled through is streamlined. On one level, this is efficient. On another, it is dangerous. Because what makes us human is precisely what AI discards: the peculiar, the awkward, the unexpected.</p><p>And yet, because AI&#8217;s outputs are so fluent, we tend to anthropomorphize them. 
We feel as if the AI &#8220;understands&#8221; us, as if it is conversing with us the way a human would. But beneath the surface, there is no comprehension, only statistical prediction. The model is not thinking; it is simulating the patterns of human speech.</p><p>This illusion is powerful. When something speaks like us, we assume it shares our understanding. But AI is not listening, empathizing, or reflecting. It is generating the most probable response.</p><p>That is the paradox: we risk losing human peculiarity not only by speaking through AI, but by believing it speaks back to us as human.</p><h3>What Happens in Our Brains</h3><p>Language is not just a tool for communication. It is the medium of thought itself. To think is, in large part, to wrestle with words, to put them in order and in context.</p><p>When we offload this struggle to AI, we bypass the effortful process of searching for words, structuring sentences, and testing ideas. This saves energy, but it also weakens the very cognitive muscles that make creativity possible.</p><p>Just as GPS dulls our sense of direction, AI risks dulling our expressive capacity. If we never wrestle with words, we narrow our imaginative range.</p><h3>Interactions, Spontaneity, and the Value of Mistakes</h3><p>Human conversation is not smooth. And that is its beauty.</p><p>We interrupt each other, mishear, laugh at misunderstandings. We pause, hesitate, contradict ourselves. These small &#8220;errors&#8221; are not failures; they are the heartbeat of connection. They create intimacy, surprise, and trust.</p><p>AI-mediated interactions, by contrast, are polished. Predictive text suggests the most likely polite reply. Chatbots offer efficient responses. But in the pursuit of smoothness, they risk flattening spontaneity.</p><p>Because connection is born not from sameness, but from the unexpected. Our linguistic mistakes are not noise. They are signal. They make us laugh together. They reveal our individuality. 
They spark innovation. They create the space for empathy and forgiveness.</p><h3>Are We Becoming Language Robots?</h3><p>This leads to the deepest question of all:</p><blockquote><p>If our language, thought, and interaction are increasingly shaped by predictive models, <strong>do we risk becoming predictable ourselves?</strong></p></blockquote><p>Are we slowly transforming into language robots: polished, efficient, standardized, at the cost of creativity, spontaneity, and individuality?</p><p>By eliminating the &#8220;defects&#8221; of expression, we also risk erasing style, self-expression, art, and the very mutations that fuel innovation.</p><p>The irony is sharp: <strong>we trained these models on our human diversity. Now, they are training us back into uniformity.</strong></p><p>History shows us that many breakthroughs in science, art, and culture have come from what looked like mistakes. Penicillin was discovered by accident. Poetry often breaks rules. Jazz thrives on dissonance. In production, defects are to be eliminated. In human creativity, &#8220;defects&#8221; are often the source of genius.</p><p>None of this means we should reject AI. Like process standardization, it has immense value. But we must be intentional.</p><p>AI should be a mirror, not a mask. A tool to support expression, not replace it. A way to translate, accelerate, and amplify, but not to homogenize.</p><p>We must protect the &#8220;noise&#8221; of humanity: the peculiar word choice, the cultural rhythm, the unpolished email, the half-formed poem. These are not errors to be erased. They are signals of life.</p><h3>Conclusion</h3><p>In processes, standardization is a gift. It gives us clarity, reveals root causes, and enables transformation.</p><p>But in human life, standardization has limits. When applied to language, thought, and culture, it risks erasing the very qualities that make us who we are.</p><p>In manufacturing, we celebrate zero defects. 
A perfect production line is cause for applause. But if we achieve &#8220;zero defects&#8221; in human language, what remains? No poetry. No art. No unexpected sparks of innovation.</p><p><strong>A perfectly standardized humanity is not progress. It is decline.</strong></p><p>The real challenge of our age is not simply to make AI useful. It is to keep ourselves human. That means embracing AI as a partner, while consciously preserving individuality, spontaneity, and the beauty of mistakes.</p><p>Because in the end, <strong>the noise is not noise at all</strong>. It is us.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.phiand.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Phi&#8202;/&#8202;AI is a reader-supported publication. To receive new posts and support our work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><div><hr></div><p>Do you resonate with our mission and want to engage more with us? We are looking for a <strong>social media manager</strong> to help us grow our social presence. This is a junior unpaid position. Check out more <a href="https://glaze-bench-59a.notion.site/Work-at-AI-pron-Phi-AI-25e15efc248180df904cd78c4c73e77b">here</a> and help us find our dream candidate. 
</p><p></p>]]></content:encoded></item><item><title><![CDATA[Metrics Without Measurement]]></title><description><![CDATA[Metrics, Meaning, and the Generative Turn]]></description><link>https://www.phiand.ai/p/the-collapse-of-measure-into-performance</link><guid isPermaLink="false">https://www.phiand.ai/p/the-collapse-of-measure-into-performance</guid><dc:creator><![CDATA[Karin Garcia]]></dc:creator><pubDate>Fri, 05 Sep 2025 15:01:59 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!B1WK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91ad0e72-fc9d-4280-85ae-d849bf2a6818_1504x1000.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>For decades, metrics have acted as the quiet governors of data science, machine learning, and AI. They decide what counts as progress, what qualifies as intelligence, and what kinds of mistakes a society will tolerate. They do not merely measure; they legislate. As <a href="https://plato.stanford.edu/entries/scientific-realism/">Ian Hacking</a> argued of statistics, measurement systems shape the very realities they claim to describe. To measure is not only to know, but also to normalize and to enforce, to draw the horizon of what will be considered &#8220;<strong>real</strong>.&#8221; <em>Metrics</em>, in this sense, do not just track the world, <em>they help make it go round</em>.</p><p>But in the age of AI, the role of metrics has grown uncanny and debatable. What once anchored evaluation now reappears as simulation. Models fabricate benchmarks, hallucinate citations, and generate tables that look convincing but bear no relation to any actual experiment. </p><p>What once served as proof of genuine research can now be simulated instantly, without any underlying experiments or validation. 
The mechanisms that ensure scientific integrity (peer review, reproducibility, empirical testing) can now be bypassed entirely when AI generates the complete appearance of rigorous evaluation.</p><p>The problem arises when such gestures are consumed as if they were real, leaving us unable to separate authentic knowledge from synthetic imitation.</p><p>This essay traces how metrics have transformed from instruments of validation into aesthetic performances. First, we examine how generative AI systems learned to simulate results and the entire apparatus of scientific evaluation: tables, citations, statistical KPIs. Then we explore what it means that the epistemic friction that once guaranteed authentic knowledge is removed. It leaves us vulnerable to mistaking synthetic performance for real evaluation. </p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!B1WK!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91ad0e72-fc9d-4280-85ae-d849bf2a6818_1504x1000.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!B1WK!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91ad0e72-fc9d-4280-85ae-d849bf2a6818_1504x1000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!B1WK!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91ad0e72-fc9d-4280-85ae-d849bf2a6818_1504x1000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!B1WK!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91ad0e72-fc9d-4280-85ae-d849bf2a6818_1504x1000.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!B1WK!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91ad0e72-fc9d-4280-85ae-d849bf2a6818_1504x1000.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!B1WK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91ad0e72-fc9d-4280-85ae-d849bf2a6818_1504x1000.jpeg" width="1456" height="968" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/91ad0e72-fc9d-4280-85ae-d849bf2a6818_1504x1000.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:968,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:364716,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.phiand.ai/i/172878469?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91ad0e72-fc9d-4280-85ae-d849bf2a6818_1504x1000.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!B1WK!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91ad0e72-fc9d-4280-85ae-d849bf2a6818_1504x1000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!B1WK!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91ad0e72-fc9d-4280-85ae-d849bf2a6818_1504x1000.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!B1WK!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91ad0e72-fc9d-4280-85ae-d849bf2a6818_1504x1000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!B1WK!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F91ad0e72-fc9d-4280-85ae-d849bf2a6818_1504x1000.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><h2><strong>When Measurement Becomes Ornament</strong></h2><p>Generative models are trained among other things on the 
literature of science itself: papers, benchmarks, evaluations. Which means they are able to produce the <em>language</em> of being evaluated.</p><p>Ask a model to summarize a nonexistent experiment, and it may offer a results section: neat tables, confident percentages, citations to papers that don&#8217;t exist. None of it is grounded in experiment. But it looks like science.</p><p>At this point, metrics cease to discipline and begin to decorate. A hallucinated F1 score becomes a kind of rhetorical ornament. The strange loop is complete: AI now performs the performance of being measured.</p><p>When this happens, three risks emerge: epistemic pollution (fabricated results contaminating scientific literature), cascading errors (researchers building on phantom foundations), and the dissolution of expertise (when anyone can generate professional-looking results, what distinguishes actual knowledge?).</p><p>At the core, we risk losing our collective ability to distinguish between what we've actually learned and what merely looks like learning.</p><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.phiand.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.phiand.ai/subscribe?"><span>Subscribe now</span></a></p><p></p><h2><strong>The Collapse of Epistemic Friction </strong></h2><p>Metrics once imposed friction. To publish results, you had to validate on real data, running experiments and measuring something. To climb a leaderboard, you had to submit a working model. Friction was the hard, empirical work to ensure that scientific claims were grounded in reality rather than aesthetically convincing performances. </p><p>In generative AI, that friction collapses. Models can produce the discourse of evaluation without its discipline. 
</p><p>And just when we need them most, benchmarks themselves are exhausted&#8212;ImageNet, GLUE, SuperGLUE&#8212;saturated until &#8220;state of the art&#8221; means little more than marginal gains on tasks that no longer surprise us. </p><p>Progress risks becoming a performance: metric inflation without insight, curve-chasing without clarity. We mistake the appearance of rigor for the labor of rigor, eroding the boundary between hard-won knowledge and spectacle. </p><p>The two trends unfortunately compound one another. As genuine benchmarks lose their force and significance, synthetic ones rush in to fill the gap. What once slowed us down to guarantee validity now accelerates the production of appearances. </p><h2><strong>Reclaiming the Measure</strong></h2><p>Metrics definitely <em>still</em> make the world go round. But in the generative age, they risk becoming decorative, aestheticized, and simulated.</p><p>To restore their force, we need to recover what they were meant to be: instruments of discipline rather than ornaments of discourse. This requires systems that expose their own uncertainty, and cultures of interpretation that treat metrics as wagers rather than neutral truths. It requires collective validation in which benchmarks are understood as living agreements, not static leaderboards to be gamed.</p><p>The task is not to abandon metrics but to remember them. Only then can they serve again as tools of knowledge rather than props in a performance. To measure should mean to engage reality, not to rehearse its image. To evaluate should mean to genuinely test our limits, not to synthetically flatter our illusions. 
</p><p>In a world where any result can be performed on demand, where the appearance of rigor is indistinguishable from rigor itself, we face the possibility of mistaking our sophisticated performances for genuine understanding.</p><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.phiand.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Phi&#8202;/&#8202;AI is a reader-supported publication. To receive new posts and support our work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[The Knowledge We Cannot Know]]></title><description><![CDATA[When AI Discovers What Humans Can't Explain]]></description><link>https://www.phiand.ai/p/the-end-of-objectivity-as-we-knew-4e6</link><guid isPermaLink="false">https://www.phiand.ai/p/the-end-of-objectivity-as-we-knew-4e6</guid><dc:creator><![CDATA[Karin Garcia]]></dc:creator><pubDate>Fri, 29 Aug 2025 15:00:17 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!3h3m!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10eb64e3-a606-4c4f-9d65-2910433fba33_1248x832.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>Across multiple scientific domains, we are witnessing an inversion of the traditional knowledge-creation process. 
</p><p>Instead of theory-guided discovery, scientific insights are coming from patterns that AI systems detect but humans cannot initially comprehend. AI is producing <em>operational</em> knowledge that works in practice before humans can explain why it works in theory. </p><p>The implication is that reality may contain aspects fundamentally inaccessible to human cognition alone. </p><p>If these insights exist outside of the human theoretical corpus, how do we evaluate the veracity of this type of knowledge? What does this mean for our understanding of knowledge itself? </p><p>In this article, I argue that in order to navigate a world where transformative discoveries emerge from patterns beyond human comprehension, we need a new kind of theory of knowledge, a.k.a. epistemology: <strong>a negotiated epistemology</strong>. An epistemology that shifts from &#8220;what is true?&#8221; to &#8220;whose truth matters?&#8221; A concept that doesn&#8217;t demand complete human comprehension but still preserves human agency and values in deciding which AI-discovered patterns to adopt. 
</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!3h3m!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10eb64e3-a606-4c4f-9d65-2910433fba33_1248x832.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!3h3m!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10eb64e3-a606-4c4f-9d65-2910433fba33_1248x832.jpeg 424w, https://substackcdn.com/image/fetch/$s_!3h3m!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10eb64e3-a606-4c4f-9d65-2910433fba33_1248x832.jpeg 848w, https://substackcdn.com/image/fetch/$s_!3h3m!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10eb64e3-a606-4c4f-9d65-2910433fba33_1248x832.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!3h3m!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10eb64e3-a606-4c4f-9d65-2910433fba33_1248x832.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!3h3m!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10eb64e3-a606-4c4f-9d65-2910433fba33_1248x832.jpeg" width="1248" height="832" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/10eb64e3-a606-4c4f-9d65-2910433fba33_1248x832.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:832,&quot;width&quot;:1248,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1113093,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.phiand.ai/i/168137358?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10eb64e3-a606-4c4f-9d65-2910433fba33_1248x832.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!3h3m!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10eb64e3-a606-4c4f-9d65-2910433fba33_1248x832.jpeg 424w, https://substackcdn.com/image/fetch/$s_!3h3m!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10eb64e3-a606-4c4f-9d65-2910433fba33_1248x832.jpeg 848w, https://substackcdn.com/image/fetch/$s_!3h3m!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10eb64e3-a606-4c4f-9d65-2910433fba33_1248x832.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!3h3m!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F10eb64e3-a606-4c4f-9d65-2910433fba33_1248x832.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.phiand.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.phiand.ai/subscribe?"><span>Subscribe now</span></a></p><h2><strong>AI gets us from essence to resemblance</strong></h2><p>For millennia, we've operated under the rationalist paradigm of scientific discovery: that there is <strong>something to learn</strong> about reality. An objective truth lying somewhere and accessible through human reasoning. 
<em>Objectivity</em>, the cornerstone of rationalist thinking, propelled scientific progress by assuming that things have essential properties waiting to be discovered.</p><p>In philosophical terms, this is the classical paradigm that things have a form of essence (remember <em>Plato&#8217;s Cave</em>) and that our job as humans is to uncover these truths. Our quest is to forever get closer to that truth. </p><p>AI ditches this paradigm and embraces a view closer to Wittgenstein&#8217;s: that of family resemblances. AI systems in the form of Large Language Models (LLMs) map resemblances. They group things based on statistical similarities and predict <em>what comes next </em>based on that, not on any essential property. </p><p>This resemblance-based approach is producing knowledge that functions effectively in the world while existing partially beyond human comprehension. </p><h2><strong>The Fracturing of Objectivity: AI Might See More than We Can</strong></h2><p>Let me trace this shift from theory to pattern recognition with three examples of groundbreaking discoveries: </p><h3>1. Drug discovery: Halicin - The Antibiotic Human Theory Missed</h3><p>Halicin is an antibiotic drug discovered in partnership with AI. </p><p>MIT researchers trained an AI on 2,000 molecules to learn which molecular properties predict the ability to inhibit bacterial growth. The system then evaluated over 60,000 candidate molecules from a chemical library, looking for compounds with potential antibiotic properties that wouldn't be toxic to humans. It found one: Halicin. </p><p>What struck me about this finding is that the AI identified relationships that lay outside the concepts and theories humans have devised. It didn&#8217;t need to know why the molecule could work. 
It merely had to make a prediction based on the fabric of relationships it had encoded during training.</p><p><em>&#8220;The AI that MIT researchers trained did not simply recapitulate conclusions derived from the previously observed qualities of the molecules. Rather, it detected new molecular qualities - relationships between aspects of their structure and their antibiotic capacity that humans had neither perceived nor defined. Even after the antibiotic was discovered, humans could not articulate precisely </em>why<em> it worked&#8221; </em>(Source: Kissinger, Schmidt &amp; Huttenlocher, 2021)</p><h3>2. Weather prediction</h3><p>In 2023, DeepMind's AI weather forecasting model <a href="https://deepmind.google/discover/blog/graphcast-ai-model-for-faster-and-more-accurate-global-weather-forecasting/">GraphCast</a> was released to the public, delivering &#8220;10-day weather predictions at unprecedented accuracy in under one minute.&#8221; (Source: <a href="https://deepmind.google/discover/blog/graphcast-ai-model-for-faster-and-more-accurate-global-weather-forecasting/">Google DeepMind</a>)</p><p>The most interesting aspect (beyond the improved accuracy) was that the AI had identified atmospheric relationships that weren't accounted for in existing meteorological theories.</p><p>Climate scientists found themselves in the position of using predictions they couldn't fully explain theoretically, due to the &#8220;black box&#8221; nature of such deep learning models. </p><p>&#8220;Despite their predictive capabilities, most advanced ML models used for meteorology are usually regarded as "black boxes", lacking inherent transparency in their underlying logic and feature attributions (Du et al., <a href="https://arxiv.org/html/2403.18864v1#bib.bib26">2019</a>; Deng et al., <a href="https://arxiv.org/html/2403.18864v1#bib.bib24">2021</a>; Xiong et al., <a href="https://arxiv.org/html/2403.18864v1#bib.bib119">2024</a>). This lack of interpretability poses major challenges. 
First, it reduces trust from domain experts, such as meteorologists, who may be reluctant to rely on unexplained model outputs for high-stakes decision making. Second, it hinders further model refinement, as developers cannot easily diagnose errors or identify which relationships the models have captured. Third, opaque ML models provide limited insight into the fundamental atmospheric processes that lead to their predictions.&#8221; Source: <em>(Yang et al., 2024, p. 2)</em></p><p>Even in domains where we believe our theoretical understanding is strong, AI can detect patterns that exceed our current frameworks.</p><h3>3. AI even won a Nobel prize</h3><p>In 2024, Hassabis and Jumper, the creators of the AI model AlphaFold2, were awarded the Nobel Prize in Chemistry for developing an</p><p>&#8220;AI model to solve a 50-year-old problem: predicting proteins&#8217; complex structures.&#8221; (Source: Nobel Prize Press Release)</p><p>Understanding how proteins fold is crucial for drug design and disease treatment. Yet despite decades of effort, scientists could previously determine structures only through expensive, time-consuming experiments.</p><p>AlphaFold2 had succeeded in predicting the structures of over 200 million proteins, accomplishing more in a few months than scientists had in the previous 150 years. </p><p>Yet like Halicin and GraphCast before it, there was a catch: not even AlphaFold2's creators could fully explain how it arrived at its predictions.</p><p>While they understand the architecture&#8212;the neural networks, attention mechanisms, and training procedures&#8212;the actual decision-making process that emerges from billions of parameters remains opaque. As illustrated in <a href="https://www.nobelprize.org/uploads/2024/11/fig2_ke_en_24-5.pdf">this document</a>, the precise way the model calculates probabilities and determines protein structures from amino acid sequences remains an unsolved mystery. 
</p><p><strong>The Nobel committee essentially awarded science's highest honor to a discovery process that defied scientific explanation</strong>.</p><p>AI models are built to learn by themselves, and once they do, they are black boxes: interconnected parameters whose logic and inner workings are impenetrable. Yet these black boxes are producing knowledge that works better than our transparent theories.</p><h2><strong>Navigating a negotiated epistemology </strong></h2><p>The realization that AI finds structures, connections, and relationships that humans might never independently discover challenges our position on the pedestal of scientific truth. </p>
      <p>
          <a href="https://www.phiand.ai/p/the-end-of-objectivity-as-we-knew-4e6">
              Read more
          </a>
      </p>
]]></content:encoded></item><item><title><![CDATA[AI's biggest threat isn't robots. It's silence]]></title><description><![CDATA[AI Promised Enlightenment. We Got Censorship Instead]]></description><link>https://www.phiand.ai/p/ais-biggest-thread-isnt-robots-its</link><guid isPermaLink="false">https://www.phiand.ai/p/ais-biggest-thread-isnt-robots-its</guid><dc:creator><![CDATA[Camila Lombana-Diaz]]></dc:creator><pubDate>Sun, 24 Aug 2025 07:30:31 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!34WI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c5c5f13-89de-4c62-a608-bb5f4f412f68_1248x832.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!34WI!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c5c5f13-89de-4c62-a608-bb5f4f412f68_1248x832.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!34WI!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c5c5f13-89de-4c62-a608-bb5f4f412f68_1248x832.jpeg 424w, https://substackcdn.com/image/fetch/$s_!34WI!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c5c5f13-89de-4c62-a608-bb5f4f412f68_1248x832.jpeg 848w, https://substackcdn.com/image/fetch/$s_!34WI!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c5c5f13-89de-4c62-a608-bb5f4f412f68_1248x832.jpeg 1272w, 
https://substackcdn.com/image/fetch/$s_!34WI!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c5c5f13-89de-4c62-a608-bb5f4f412f68_1248x832.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!34WI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c5c5f13-89de-4c62-a608-bb5f4f412f68_1248x832.jpeg" width="1248" height="832" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/0c5c5f13-89de-4c62-a608-bb5f4f412f68_1248x832.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:832,&quot;width&quot;:1248,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:981291,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.phiand.ai/i/171357441?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c5c5f13-89de-4c62-a608-bb5f4f412f68_1248x832.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!34WI!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c5c5f13-89de-4c62-a608-bb5f4f412f68_1248x832.jpeg 424w, https://substackcdn.com/image/fetch/$s_!34WI!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c5c5f13-89de-4c62-a608-bb5f4f412f68_1248x832.jpeg 848w, 
https://substackcdn.com/image/fetch/$s_!34WI!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c5c5f13-89de-4c62-a608-bb5f4f412f68_1248x832.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!34WI!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F0c5c5f13-89de-4c62-a608-bb5f4f412f68_1248x832.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><p>Tech leaders promised Artificial Intelligence would lead a new golden age of human advancement. 
The same leaders now warn AI might end humanity soon<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>. We are supposedly reaching the pinnacle of technological progress while preparing for existential catastrophe. </p><p>This isn't just ironic&#8212;it reveals a fundamental crisis in how we think about progress itself. <strong>We're experiencing what I call a crisis of dialectics, where the AI industry systematically suppresses the very contradictions and debates that drive genuine innovation.</strong> The philosopher Hegel understood that real progress isn't linear&#8212;it's messy, driven by opposing forces clashing and creating something new. Every breakthrough emerges from conflict between competing ideas, not from silencing critics.</p><p><strong>Drawing on Hegel&#8217;s concept of progress</strong>, which requires dialectical movement through the recognition and resolution of contradictions, <strong>I argue that AI currently faces a crisis of dialectics</strong>. The current AI landscape negates or rejects any meaningful antithesis, silencing critical reflection under the excuse of winning a race. AI ethics, a field that boomed around 2016, now appears defunded<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a>, regulatory efforts are delayed<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a>, and tensions with human rights are growing<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a>. Every counterweight to the AI summer seems to be entering a kind of winter. 
At the recent AI for Good Summit in Geneva, Abeba Birhane even faced last-minute censorship before her keynote<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a>. </p><p>Digital technologies carry an illusory sense of linear progression. As consumers, we tend to believe that the next hardware or software release will grant us greater control and privilege, even when contradictions appear. Today, we are increasingly vulnerable to digital theft, security breaches, safety risks, and the spread of misinformation and disinformation. We also recognize that AI literacy, an emerging layer of digital literacy, will likely shape future social castes. None of these realities reflect ideals of progress, even as we navigate what some describe as a new Enlightenment era of AI.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.phiand.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.phiand.ai/subscribe?"><span>Subscribe now</span></a></p><h2>Progress Isn't Linear&#8212;And AI Needs to Learn That</h2><p><strong>The idea of progress is far from universal</strong>. Many cultures have viewed progress critically&#8212;at times as a cannibalistic concept&#8212;driven by a blind, linear notion of 'evolution'. 
Since the 20th century especially, critiques have emerged from historical, philosophical, political, and ecological perspectives, emphasising how linear growth destroys ecosystems and erases diverse ways of living, and casting modernity not as inherently liberating but as often oppressive, particularly when progress becomes technocratic and/or bureaucratic.</p><p>From this perspective, we should ask: </p><div class="pullquote"><p>If progress is real, how did modern, &#8220;rational&#8221; societies commit mass murder, on their own territory, using the most advanced technology available at the time? </p></div><p>The book <em>IBM and the Holocaust</em> serves as a powerful case study, demonstrating how technology was used to automate mass killing. Through its German subsidiary, Dehomag, IBM&#8217;s punch-card systems significantly aided the Third Reich in profiling and targeting individuals the regime deemed &#8220;undesirable&#8221;.</p><p><strong>But how did we arrive at the understanding that progress is linear, positive, rational, and necessitates technological advancement? </strong></p><p>Auguste Comte (1798-1857) was probably the philosopher most closely associated with this idea, proposing that all that comes is better than what has passed. This is rooted in positivism, which resonates with our beliefs in technological advancement. 
The leap from a Nokia phone in the 1990s to today&#8217;s iPhone, or the transformative shift brought by the internet, are both examples of apparently linear technological progression.</p><p>Comte believed that human history evolves in a linear progression through three stages: a Theological Stage &#8211; where phenomena are explained by divine or supernatural forces; a Metaphysical Stage &#8211; where abstract principles (like "nature" or "essence") replace gods; and finally a Scientific (or Positive) Stage &#8211; where knowledge is based on observation, experiment, and reason. Each stage improves upon the last, culminating in the scientific stage as the peak of human development, guided by empirical knowledge. This notion, rooted in Enlightenment thought, defends the idea that history moves in a straight line toward something better&#8212;more rational, more scientific, and freer. Other thinkers, and to some extent Kant, also emphasized that reason and knowledge would bring inevitable improvement, reinforcing a view both linear and cumulative. 
The Holocaust and the rise of totalitarian regimes exposed the limits of Enlightenment ideals, proving that science, reason, and linear advancement did not guarantee better societies.</p><h2>Hegel, the progressivist with an antidote</h2><p>The thing is, progress wasn&#8217;t a universal concept even among progressivists. Under that umbrella, Hegel complicates the picture. He did not believe in the simple sense of things just getting better over time. Instead, he saw <strong>progress as a kind of </strong><em><strong>dialectical</strong></em><strong> movement, a process where contradictions and conflicts drive history forward</strong>. Because his view of progress is non-linear and driven by contradiction, he shows that advancement is not a smooth unfolding of better ideas but a dynamic struggle between them. </p><div class="pullquote"><p>In Hegel's view, progress is a dynamic clash: thesis, antithesis, and synthesis<strong>.</strong> </p></div><p>An idea (thesis) inevitably gives rise to its opposite (antithesis), and the conflict between them leads to a new, more developed state (synthesis), which then becomes a new thesis, and the cycle continues. This isn't just about ideas&#8212;it&#8217;s also about history, politics, and human freedom. For Hegel, history is the story of human freedom becoming more fully realized, culminating in the recognition of the freedom and dignity of all.</p><p>From that perspective, Hegel&#8217;s idea of progress is not linear or smooth. It is messy, full of struggle, setbacks, and contradictions, but in the end it is all meaningful. 
Every conflict contains the seeds of its resolution, and that resolution moves us closer to a more rational and freer society.</p><p>To add a layer of complexity, Hegel&#8217;s notion of progress was not only external; it also unfolds internally. He saw progress as the unfolding of Spirit (or <em>Geist</em>), a kind of cosmic self-awareness coming to know itself through human history, culture, and thought. So, <strong>in Hegel&#8217;s world, progress isn&#8217;t just "better technology" or "more comfort." It is the evolution of consciousness, both individual and collective, toward a fuller understanding of freedom, reason, and unity</strong>. Progress is the unfolding of self-realization through the antithesis: the contradiction, the conflict, the dialectics. </p><h2>What real progress looks like</h2><p><strong>If Hegel were alive today, what might his thoughts be on AI?</strong> What would he make of a technological landscape lacking guardrails, constructive competition, and ethical grounding? </p><p>To answer this, we must acknowledge that progress is messy. This means that if we want AI to advance, we must address its gaps as a priority. <strong>Investment in AI should support not only its thesis but also its antithesis&#8212;not only enhancing robustness and efficiency, but also fostering research and innovation in ethics and safety</strong>. </p><p>AI utilization is not just about tools; it is about understanding our rights in relation to the technology. It is not only about coding or prompting; it is about educating people on the limitations of AI in its current state and perhaps creating solutions around those limitations. It means supporting the development of ethical features&#8212;work that may require slowing the pace of a narrow linear development in order to achieve truly sustainable innovation. </p><p>Historically, antithesis has enabled innovation across industries. 
Environmental regulations didn&#8217;t kill energy production&#8212;they empowered solar cells, wind turbines, battery technology, and carbon capture systems, proving that without the negation, there would have been less economic incentive to innovate beyond coal and oil. Automobile safety laws led to the invention of airbags, anti-lock braking systems, lane-assist AI, and electric vehicles. In telecommunications, antitrust action against the Bell System, the telephone monopoly broken up in the 1980s, opened networks to competition and common standards such as TCP/IP, which ironically <em>accelerated</em> digital communication.</p><h2>AI Needs Its Enemies to Survive</h2><p>To achieve real progress, we need to outgrow the idea that progress is linear. Even if AI forever changes humanness, that doesn&#8217;t mean all its progress will be positive. </p><blockquote><p>It is imperative that there are spaces, institutions, startups, and governments that protect the antithesis of this technology&#8217;s development, without censorship. </p></blockquote><p><strong>We are in a crisis of dialectics because the antithesis is often uninvited.</strong> We need to see innovation within clear ethical boundaries. </p><p>ChatGPT&#8217;s programmed positive bias is proof of the negative psychological effects on users when contradiction is erased by design, even when it is needed for accuracy<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a>. Many technocrats still argue that regulation hinders innovation, as if regulation and innovation cannot co-evolve toward a synthesis&#8212;one that could bring us closer to a more Hegelian vision of a rational and freer society shaped by this new technology. 
</p><p>In a world where dialectics are not allowed, where they are strictly controlled and dominated, genuine intellectual progress comes to a halt. <strong>Dialectics, the method of examining ideas through contradiction, opposition, and synthesis, is central to critical thinking, innovation, and freedom of thought</strong>. These are exactly the skills experts say we will need tomorrow. Without dialectics, we risk that ideas are no longer tested or refined, that dissent is criminalized or pathologized, that education becomes indoctrination, and that language is tightly managed. Without them, thought becomes static.</p><p>Without dialectics, we may feel we are progressing towards a unification of standards and AI solutions for everyone. Such a world may appear orderly or unified, but that unity is hollow. It is built on fear, not understanding. Dialectics is what lets us examine contradictions in ourselves, our systems, and our beliefs. Take that away, and you lose not just freedom&#8212;you lose the ability to truly innovate at all.</p><p>Mature industries embrace their critics because opposition makes products better. <strong>The AI industry needs to mature and recognize that safety researchers, ethicists, and human rights advocates aren't enemies of progress&#8212;they're essential partners in creating technology that actually advances human flourishing.</strong></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.phiand.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Phi&#8202;/&#8202;AI is a reader-supported publication. 
To receive new posts and support our work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p><a href="https://www.theguardian.com/technology/2024/dec/27/godfather-of-ai-raises-odds-of-the-technology-wiping-out-humanity-over-next-30-years">https://www.theguardian.com/technology/2024/dec/27/godfather-of-ai-raises-odds-of-the-technology-wiping-out-humanity-over-next-30-years</a>?</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p><a href="https://apnews.com/article/nsf-cuts-science-funding-dei-trump-misinformation-aie989c978f273fb1a94c2e47b78843d64">https://apnews.com/article/nsf-cuts-science-funding-dei-trump-misinformation-aie989c978f273fb1a94c2e47b78843d64</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p><a href="https://www.theguardian.com/us-news/2025/may/14/republican-budget-bill-ai-laws">https://www.theguardian.com/us-news/2025/may/14/republican-budget-bill-ai-laws</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p><a 
href="https://www.theguardian.com/technology/2025/aug/13/ai-artificial-intelligence-racism-sexism-australia-human-rights-commissioner?">https://www.theguardian.com/technology/2025/aug/13/ai-artificial-intelligence-racism-sexism-australia-human-rights-commissioner?</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p><a href="https://aial.ie/blog/2025-ai-for-good-summit/">https://aial.ie/blog/2025-ai-for-good-summit/</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p><a href="https://arxiv.org/pdf/2504.09343">https://arxiv.org/pdf/2504.09343</a></p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[AI: human nature unfolding]]></title><description><![CDATA[Participants, not Creators.]]></description><link>https://www.phiand.ai/p/ai-human-nature-unfolding</link><guid isPermaLink="false">https://www.phiand.ai/p/ai-human-nature-unfolding</guid><dc:creator><![CDATA[Sebastian Osorno]]></dc:creator><pubDate>Fri, 22 Aug 2025 09:28:06 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!bdoX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ca7bf21-1302-4324-92ff-9ed419df80fa_1024x1024.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!bdoX!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ca7bf21-1302-4324-92ff-9ed419df80fa_1024x1024.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source 
type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!bdoX!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ca7bf21-1302-4324-92ff-9ed419df80fa_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!bdoX!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ca7bf21-1302-4324-92ff-9ed419df80fa_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!bdoX!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ca7bf21-1302-4324-92ff-9ed419df80fa_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!bdoX!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ca7bf21-1302-4324-92ff-9ed419df80fa_1024x1024.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!bdoX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ca7bf21-1302-4324-92ff-9ed419df80fa_1024x1024.png" width="1024" height="1024" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1ca7bf21-1302-4324-92ff-9ed419df80fa_1024x1024.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:1024,&quot;width&quot;:1024,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:1972208,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.phiand.ai/i/171069119?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ca7bf21-1302-4324-92ff-9ed419df80fa_1024x1024.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!bdoX!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ca7bf21-1302-4324-92ff-9ed419df80fa_1024x1024.png 424w, https://substackcdn.com/image/fetch/$s_!bdoX!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ca7bf21-1302-4324-92ff-9ed419df80fa_1024x1024.png 848w, https://substackcdn.com/image/fetch/$s_!bdoX!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ca7bf21-1302-4324-92ff-9ed419df80fa_1024x1024.png 1272w, https://substackcdn.com/image/fetch/$s_!bdoX!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1ca7bf21-1302-4324-92ff-9ed419df80fa_1024x1024.png 1456w" sizes="100vw" fetchpriority="high"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" 
height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p>2.4 billion years ago, cyanobacteria began poisoning the planet. They didn't mean to; they were simply metabolizing, turning sunlight into energy. Their waste product, oxygen, accumulated until it transformed Earth's atmosphere, triggered a mass extinction, and inadvertently created the conditions for complex life. The cyanobacteria had no plan, no vision, no control over this planetary transformation. 
They were merely doing what cyanobacteria do.</p><p><strong>What if </strong><em><strong>we</strong></em><strong> are the cyanobacteria of the cognitive age?</strong></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.phiand.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Phi&#8202;/&#8202;AI is a reader-supported publication. To receive new posts and support our work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>I don&#8217;t pretend this is an easy thought; I am trying it on, just as you might. Consider this: every AI model we train, every algorithm we deploy, every neural network we optimize may all be byproducts we excrete. Just as cyanobacteria couldn't conceive of oxygen-breathing organisms, we may be equally blind to what emerges from our technological metabolism. We call it "artificial" intelligence to maintain the illusion of authorship, but what if intelligence is simply achieving a new substrate through us, as inevitable as oxygen accumulating in ancient seas?</p><p>This perspective aligns with John Gray's posthumanist philosophy, outlined in <em>Straw Dogs</em> (2007)<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a>, which reframes human significance, though not in the usual way.
We may be the essential biological <em>accident</em> through which Earth develops a new kind of atmosphere: one made not of gases but of information, computation, and emergent cognition.</p><p>In this post, I invite you to see our role - that is, the human role - in technology development as that of participants rather than creators. Taking this perspective enables another way of approaching AI development.</p><h2>1. Beyond Human Creators</h2><p>The cyanobacteria metaphor reveals a profound misunderstanding: our assumption that humans possess unique agency in shaping technology. In Gray&#8217;s view, this conviction - deeply rooted in Western society - rests on the idea of free will: the belief that we alone among Earth&#8217;s creatures can transcend our nature and direct our destiny.</p><p>Because if we are &#8216;free&#8217; creatures, we are also free to create and destroy. Based on this, we imagine everything we&#8217;ve created so far as a product of our will, of our desire, as something we are equipped and free to do. Rarely do we see our creations as demanded by natural forces inhabiting us and our surroundings.</p><p>This notion of us as creators has led us to believe we are the &#8220;creators&#8221; of artificial intelligence. We even call it &#8220;artificial&#8221; to signal it is not nature, but something we made. Despite the word &#8216;creator&#8217; carrying the illusion of separation (through intention), creation may be nothing more than participation in a flow. This is unsettling to admit: I&#8217;ve always thought of myself as a maker, a builder, and I may not be the only one.</p><p>But if we see ourselves as cyanobacteria in another chapter of the planet&#8217;s long story, then perhaps we are not creators in the sense we believe; perhaps we are catalysts, participants, a necessary but passing configuration through which something flows.
We don&#8217;t claim cyanobacteria &#8220;created&#8221; the ozone layer as a conscious act; likewise, AI may simply be a layer in our cognitive atmosphere. Think, feel, and meditate deeply on this perspective; it may humble you and liberate some weight from your consciousness.</p><p>Gray (2007) challenges this by arguing that technology is simply part of our nature; it is nature, inseparable from it. It may be a human byproduct, but not exclusively human. A frequency we can tune into, as can any other animal, even ants, for technology is as ancient as life on Earth (paraphrasing Gray&#8217;s chapter on Green Humanism). I would like to invite you, then, to dissolve the illusion of control we believe we hold over technology and AI, and to consider Gray&#8217;s argument as a starting point.</p><p>From an evolutionary perspective, we are programmed animals, driven to survive and reproduce like any other living being on Earth. Our cognitive and building capacities, enabled by the partnership between our brains and our hands, have been our singular way of relating differently to nature. But why do we imagine this makes us God&#8217;s privileged creatures? Are we that different from cyanobacteria if we consider this evolutionary perspective?</p><p>Consider cyanobacteria: their oxygenic photosynthesis drove atmospheric oxygenation, enabling the formation of an ozone layer. An extraordinary impact on Earth and the history of life, and yet oxygen (O&#8322;) was simply a biological byproduct. In the same way, could we think of knowledge, technology, and machine learning algorithms as human-animal byproducts? They may not transform the planet&#8217;s geology, but they shape its cognition, a vast impact across multiple realms, one capable of altering the very foundations of how we live and how we see ourselves, and, like oxygen, far beyond our control.</p><p>According to Gray, we're not the authors of AI any more than cyanobacteria were the authors of oxygen.
We're the biological substrate through which a new form of information processing enters the world.</p><p>This shift in perspective, from creators to participants, clarifies our role in this development. If we are witnessing the emergence of a new layer in Earth&#8217;s information-processing stack, our role is not to control this but to understand our place within it. The cyanobacteria couldn&#8217;t prevent the oxygen catastrophe, but life found ways to flourish in the new atmosphere they created. What does it mean to flourish in an atmosphere we don&#8217;t control?</p><p>This perspective offers three insights about our relationship with AI:</p><ol><li><p>First, like photosynthesis for cyanobacteria, technology and AI development are part of our nature. Neither is something we need to resist or control. More something to understand. Similarly to how oxygen became both a resource and a threat in Earth&#8217;s history, AI will serve both evolution and existing power structures. Our question is how to adapt to its emergence.</p></li><li><p>Second, if AI is a byproduct of human activity, a natural resource, a frequency we can tune into, then it cannot be truly owned and controlled by any entity. It is accessible to all of us. Companies in the business of AI are trying to sell us something that already belongs to us. They might enclose the technology behind paywalls, but the underlying cognitive processes belong to our species' collective development. This is not to say we will all benefit. Current development exacerbates inequalities and has already shown the ability to serve neofascism and totalitarianism. It seems the promise of controlling the masses is deeply seductive to our power structures.</p></li><li><p>Third, the marketing narratives that position AI as something beyond ordinary human comprehension and urge us to use specific products to not be left behind, misunderstand what's happening: we are not consumers of AI but participants in its emergence. 
The question we should be focusing on is how we will adapt to the new cognitive atmosphere we are unwittingly creating. Like the early organisms facing rising oxygen levels, our challenge is not to control the change but to evolve with it.</p></li></ol><p>Of course, there are huge differences between cyanobacteria and us; we are far more complex organisms. That does not mean we cannot be the means through which new and more evolved organisms are born. Now, can we sit with the uncomfortable questions:</p><ul><li><p>If intelligence flows across beings, what is it asking of us now?</p></li><li><p>If we are not the authors of technology, and of AI in particular, what happens to the pride and fear that come with ownership?</p></li><li><p>And if technology is just another unfolding of nature, could we meet it without the weight of thought, without the reflex to control?</p></li><li><p>Can we, perhaps, consider this cognitive technology as part of nature, as non-artificial, but as part of the ecosystem we&#8217;re living in? We may be becoming witnesses to a rising awareness that we are just insignificant, yet singular, creatures in the universe, and that is both terrifying and liberating; it is taking our collective identity out of the center of meaning.</p></li></ul><h2>2. Humanism&#8217;s Epitaph</h2><p>If we consider intelligence not as a proprietary human quality, and technology as an evolutionary step - a natural, animal byproduct - we can see our participation in this natural unfolding as something far from anthropocentrism. We need to deeply question our rooted and collective belief in free will so that we can let go of authorship and ownership. This is beneficial, since we&#8217;ll be left to modulate and interact with AI and cognitive technologies, perhaps with all technologies and tools, just as we interact with nature and other beings.
I can see, smell, feel, and frame a deeper sense of responsibility arising from this playful exercise.</p><p>I am not immune to the weight of these ideas; I struggle with them even as I write. But perhaps that struggle is the beginning of loosening the grip of humanism. Perhaps we &#8211;Homo sapiens&#8211; aren&#8217;t the center of Earth&#8217;s story, but just insignificant, yet singular, creatures connecting new forms of life and evolution. Perhaps we are in the business of creating a new atmosphere for future beings to breathe knowledge and wisdom as we do oxygen, even a new form of dynamic matter that breaks beyond our understanding. Then, do we want to freely choose to stop it from absorbing us and breeding more complex organisms? Or is that freedom itself another illusion?</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.phiand.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.phiand.ai/subscribe?"><span>Subscribe now</span></a></p><p></p><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Gray, John. <em>Straw Dogs: Thoughts on Humans and Other Animals</em>.
New York: Farrar, Straus and Giroux, 2007.</p><p></p></div></div>]]></content:encoded></item><item><title><![CDATA[Moving Away From Anthropocentrism ]]></title><description><![CDATA[Is the fast development of AI challenging our place in the natural and artificial ecosystem?]]></description><link>https://www.phiand.ai/p/moving-away-from-anthropocentrism</link><guid isPermaLink="false">https://www.phiand.ai/p/moving-away-from-anthropocentrism</guid><dc:creator><![CDATA[Mishka Nemes]]></dc:creator><pubDate>Fri, 15 Aug 2025 16:00:54 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/58e7c256-66fb-4fab-9463-c3180bb5527b_1216x832.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>The current discourse in AI is centred around alignment to human values because we want to ensure AI augments us and it doesn&#8217;t replace us.&nbsp;</p><p>But what about other intelligent beings, shouldn&#8217;t we shape technology to reflect their needs and values too? In an anthropocentric capitalist society which places the individual as its core dogma, we perhaps need to shift focus from human-derived epistemology to consider other sentient beings towards inspiring technological advancements, including AI systems.</p><p>We shape technology to suit our needs and offload different cognitive abilities, yet when we reflect back in <a href="https://en.wikipedia.org/wiki/Black_Mirror">the </a><em><a href="https://en.wikipedia.org/wiki/Black_Mirror">black mirror </a>&nbsp;</em>we are scared, even appalled at how we see ourselves portrayed. 
Aligning AI systems to human values is a polarised debate. Whilst there is general consensus that we want AI to do what we intend it to do, some argue that AI can rectify our human biases in a principles-driven way, while others fear that aligning AI with human-derived metrics alone is a human-centric way of seeing the world, as it doesn&#8217;t consider other ecologies or our integration in the wider ecosystem we inhabit.&nbsp;</p><p>In this article, I explore the prospect of moving away from a human-centric way of designing and developing technology, including AI, where AI becomes a tool to gain a deeper understanding of the natural world, including us. Only by shifting the anthropocentric rhetoric can we awaken a deeper sense of belonging to a world where we are part of an intertwined ecosystem, and thus demonstrate care for natural and artificial beings alike, enabling us to avoid catastrophic risks such as natural disasters, biodiversity loss or a potential arms race with sentient AI systems.</p><p>What follows is a list of philosophical challenges to anthropocentric AI. They are meant to inspire you to consider intelligence from perspectives that are not anthropocentric.</p><h2>1.
Beyond Alignment&nbsp;</h2><p>Without downplaying the doomsday fears shared in the AI safety space, the concerns we share<em> right-here-right-now</em> are best highlighted by the work done within AI Alignment, where experts ensure AI systems don&#8217;t cause harm and that we avoid risks such as exacerbating our biases and shortcomings.&nbsp;</p><blockquote><p><em><strong>What if AI can <a href="https://syntheticus.ai/blog/tackling-bias-in-large-ml-models-the-role-of-synthetic-data">overcome biases through careful data curation</a> including synthetic data generation to ensure data is representative of the population it affects?</strong></em>&nbsp;</p></blockquote><p>Perhaps we need to think beyond human benchmarks and humans-in-the-loop and consider designing values-led and principles-derived AI systems to better the world and enhance us, and not simply align to us.&nbsp;</p><p></p><h2>2. Rethinking Ethics</h2><p>AI brings new moral and ethical challenges, and the recent developments have given rise to<a href="https://aeon.co/essays/can-philosophy-help-us-get-a-grip-on-the-consequences-of-ai"> a new corpus of ethics </a>as decisions are made at scale, amplifying both benefits and harms in ways unprecedented in human history. Furthermore, AI ethics considers a new level of stakeholder complexity whereby developers, deployers, AI agents, and end users, amongst others, display blurred lines of responsibility and accountability.<em><strong> </strong></em>In classical ethics, we distinguish between the right and wrong actions of individual human agents; in AI ethics we consider morality for individual AI systems, the interaction between an artificial intelligent entity and a human agent, as well as morality at the collective level, where emergent interactions arise.
AI ethics considers human values in a holistic, dynamic and complex fashion as it has to adapt to new technological developments in highly uncertain environments.</p><blockquote><p><em><strong>What if, with the advent of AI, we can now indirectly run ethical tests at scale and gain a deeper understanding of our human values in light of a new epistemological revolution at the junction between humans and machines?</strong></em></p></blockquote><p></p><h2>3. The Stack</h2><p>Our sense of human-centrality was challenged before the current generative AI hype by philosophers such as Benjamin Bratton in his seminal<a href="https://mitpress.mit.edu/9780262029575/the-stack/"> book &#8216;The Stack&#8217;</a>. Bratton argues that intelligence emerges at the <em>stack</em> level - encompassing smart grids, cloud platforms, mobile apps, smart cities and IoT - forming a new governing structure. At the core of his thesis is the idea that humans now live in a complex technological world whereby &#8216;<em>we are inside the stack and it is inside of us</em>&#8217;, which suggests a recursive relationship where humans are both subjects and objects of the computational systems.&nbsp;</p><blockquote><p><em><strong>What if our insistence on human-centered technology inadvertently reduces us from subjects who shape our tools to objects shaped by them, fundamentally altering our position in the world?</strong></em></p></blockquote><p></p><h2>4. Qualia and Sentience&nbsp;</h2><p>We assume we are<em> the only </em>sentient beings primarily due to our inability to understand consciousness in other biological beings. Now, the prospect of artificial general intelligence (AGI) or artificial superintelligence (ASI) in the near future challenges our unique place in our anthropocentric universe. Even more, AI provides a fascinating use case towards better understanding how animals and other biological entities communicate and perceive the world, thus providing us a new gateway into sentience.
To illustrate the interest, there are multiple initiatives in this space, including <a href="https://www.earthspecies.org/">the Earth Species Project</a>, which aims to use AI to decode animal communication and to illuminate diverse intelligences on Earth.&nbsp;</p><blockquote><p><em><strong>What if, by untangling how other beings engage with the world, we will unlock a new ecological relationship between humans, artificial entities and biological beings, enriching our experience of the world altogether?</strong></em>&nbsp;</p></blockquote><p>Until now, we have lacked insight into non-human consciousness; now, in anticipation of ASI, we reassess consciousness and agentic morality and, by extension, seek to better understand other biological beings, challenging our core assumption that humans are the only conscious entities.</p><p></p><h2>5. Compassionate AI</h2><p>The <a href="https://forum.effectivealtruism.org/posts/4LimpA4pyLemxN4BF/ai-moral-alignment-the-most-important-goal-of-our-generation">Moral AI Alignment</a> movement proposes that once we overcome the technical alignment challenges, we then need to account for the needs and wellbeing of all sentient beings when designing AI systems.
Developing AI systems in our own image proves futile given how much suffering humans have caused over the millennia. If we eventually reach AGI presenting some degree of sentience, we will want to show compassion and understanding towards those systems and, in turn, want them to exhibit compassion for other beings, including us, their very creators.&nbsp;</p><blockquote><p><em><strong>What if sentient AI will behave more ethically than insentient AI, as it displays a better grasp of morality, reality perception, power and willingness to act?&nbsp;</strong></em></p></blockquote><p><a href="https://forum.effectivealtruism.org/posts/bqj5cGEtcyEin3xTY/will-sentience-make-ai-s-morality-better">The proponents of Moral AI Alignment</a> argue that empathy can only be developed through experiential learning in the context of multiple moral agents, and thus we need to enable and facilitate AI to experience the world first-hand so that it acts morally and ethically in complex, evolving and multi-stakeholder interactions.<br></p><h2>6. Nature-inspired AI</h2><p>The NeuroAI field is one of the most fruitful and self-reinforcing research and innovation fields - the more we learn about the human brain, the more we can draw inferences and model cognitive processes to inspire algorithms and architectures, and the more we can use these computational models to reflect back on the brain.
Nonetheless, some leading AI experts argue that<a href="https://techcrunch.com/2025/01/23/metas-yann-lecun-predicts-a-new-ai-architectures-paradigm-within-5-years-and-decade-of-robotics/"> we are approaching a capability ceiling</a> in the current human-inspired transformer models and thus, breakthrough progress will require exploration of entirely novel AI system architectures to scale beyond current limitations.&nbsp;</p><p>By adopting a nature-inspired AI approach, we recognise that nature's inherent complexity should inform technology design, acknowledging the socio-technical, ecological, and systemic complexity aspects of technological development, as well as the dynamic and constantly evolving relationship between humans and technology. For example, the diversity of AI systems and broader technological options provides substantial societal benefits, as suggested by recent research on<a href="https://www.cooperativeai.com/post/new-report-multi-agent-risks-from-advanced-ai"> multi-agent AI interactions</a> whereby diverse approaches to intelligence can yield more robust, adaptable, and equitable technological solutions.&nbsp;</p><blockquote><p><em><strong>What if shifting the focus from human-inspired AI to nature-inspired AI could help us build more resilient, diverse and capable AI to address current limitations, and thus it will fundamentally transform how we develop, engage with, and ultimately co-exist with intelligent systems?</strong></em></p></blockquote><p>&nbsp;</p><h2>7. Being Human</h2><p>In an anthropocentric world, we first looked at what makes humans special - and by contrast, <a href="https://keepthefuturehuman.ai/executive-summary/">AI is teaching us what makes us human</a> - but have we explored what makes seahorses special, or bee colonies, and what inferences we can draw from those biological systems? 
We are learning there are certain aspects of the human experience which might be unique, or too valuable for us to leave to AI systems; these might include writing and engaging with poetry, our sense of purpose and self-transformation, the way we fall in love, or simply the qualia of boredom.&nbsp;</p><blockquote><p><em><strong>What if the advent of AI, and potentially ASI, gives us an opportunity to assess what makes us uniquely and idiosyncratically human?</strong></em></p></blockquote><p></p><h2>Where are we now?</h2><p>Technological advancements, and in particular AI, pose significant threats and opportunities alike - we are now faced with seeing ourselves in the <em>black mirror </em>of the systems we build, and this challenges our sense of purpose, identity, and ultimately what makes us human. Seeing beyond our human nature and appreciating other diverse, unique and evolutionarily-refined types of intelligence is perhaps a timely wake-up call to shift the discourse from an anthropocentric society towards a world where all sentient beings, natural, artificial or otherwise, can collectively thrive together. And perhaps this is a more compassionate approach which places us in a better position to deal with other emerging societal concerns such as climate change, biodiversity loss or the threat of alien life.&nbsp;</p><p>Nonetheless, it is likely that, as in previous scientific revolutions like the Copernican and the Darwinian ones, we will be challenged on how <em>we</em> humans position ourselves in relation to nature and to the universe, except that the ongoing intelligence revolution challenges what we thought makes us quintessentially human - sentience, consciousness and exhibiting the most sophisticated intelligence in the universe.
In follow-on articles, I will focus on approaching each of the questions asked here in more detail, and I welcome any thoughts and questions in the comments section.</p><p><br></p>]]></content:encoded></item><item><title><![CDATA[AI Dating Apps Are Making You Worse at Love]]></title><description><![CDATA[How AI is killing authentic connection, one algorithm at a time]]></description><link>https://www.phiand.ai/p/ai-dating-apps-are-making-you-worse</link><guid isPermaLink="false">https://www.phiand.ai/p/ai-dating-apps-are-making-you-worse</guid><dc:creator><![CDATA[Maria Weaver]]></dc:creator><pubDate>Mon, 11 Aug 2025 15:26:35 GMT</pubDate><enclosure url="https://substack-post-media.s3.amazonaws.com/public/images/5bf2d41f-816f-4bcc-b51c-9bf3f12a3757_1224x1224.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<div class="pullquote"><p><em>"They had exchanged messages for weeks, Wilson said. But who had he really been talking to?</em></p><p><em><strong>&#8216;It&#8217;s almost like we never even spoke.&#8217;&#8221;</strong></em></p></div><p>This closing line from a <a href="https://www.washingtonpost.com/technology/2025/07/03/ai-online-dating-match/">Washington Post article</a> about AI infiltrating dating apps stayed with me for weeks and thrust me into a quasi-existential dread for the future of the human race.</p><p>Richard Wilson thought he'd finally met someone interested in thoughtful conversation. They connected and bonded over weeks of dialogue. But when they met in person, his date had none of the conversational energy she'd shown over text. 
The article explores the growing prevalence of people using AI to craft romantic messages and handle entire courtships, optimizing our most intimate communications for algorithmic success.</p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.phiand.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Phi&#8202;/&#8202;AI is a reader-supported publication. To receive new posts and support my work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p>Wilson's experience crystallizes what worries me most about AI infiltrating personal communication: <strong>What does this do to our ability to hold a conversation with another person? </strong>How can you reveal your true self and be authentic if you're constantly having AI coach, supplement, and wholly construct your thoughts?</p><h2><strong>The New Courtship</strong></h2><p>Apparently I&#8217;ve been living under a rock <em>(or am just married - so not the target audience)</em>, but every major dating app has deployed AI to handle communication in some way. 
The features range from the innocuous - <a href="https://www.grindr.com/2025-product-roadmap">Grindr</a> will summarize your chats so you can remember what you talked about. Bumble and <a href="https://techcrunch.com/2025/08/06/tinder-explores-a-redesign-dating-modes-and-college-specific-features-to-boost-engagement/?utm_source=tldrdesign">Tinder</a> propose enabling the app to swipe for you entirely <em>(no bias concerns here!)</em>. The dating conglomerate Match has proposed using AI to scan messages before you send, asking &#8220;Are you sure?&#8221; It&#8217;s giving chaperone vibes, at scale.</p><p>Now, at this point, you may be pointing out to me - <em>&#8220;Maria, they have ALWAYS had AI on these platforms - how do you think matching works?&#8221;</em> Fair point.</p><p>But there&#8217;s a crucial difference - traditional dating app AI curates potential matches and puts them on your screen. You, the person, still have to choose and chat. Now?</p><p>Not anymore. Apps like Rizz will talk for you.</p><h3><strong>This Isn't New... Or Is It?</strong></h3><p>Why does this matter? So people are getting help to talk to each other. A common trope in literature is someone helping another person woo someone, often by feeding them lines, writing on their behalf, or talking through an earpiece. Variations of the original Cyrano de Bergerac abound:</p><ul><li><p>Man loves woman but is insecure, believes she won't love him back</p></li><li><p>Man communicates his love through a conventionally attractive conduit</p></li><li><p>Woman falls in love with the conduit because of the words</p></li><li><p>Woman discovers the truth</p></li><li><p>Woman falls in love with the insecure man</p></li></ul><p>What do we learn from this pattern? We fall in love with the person <em>helping </em>the wooer in question - why? Their words enthrall, land.
But at the other end of this story there is always a person, someone whose words moved hearts and minds.</p><p>When AI is on the other end, do you fall in love with the algorithm?</p><p>The answer to this reveals why AI assistance is <strong>more</strong> troubling than traditional help. It has to do with the type of communication we&#8217;re engaging in.</p><h3><strong>The Philosophy of Communication</strong></h3><p><a href="https://en.wikipedia.org/wiki/The_Theory_of_Communicative_Action">J&#252;rgen Habermas</a> distinguished between two types of communication. One is <strong>strategic action, </strong>using communication to achieve your goals (efficiency, getting dates). The other is <strong>communicative action, </strong>genuine dialogue aimed at mutual understanding.</p><p>When people use AI to craft messages, they're engaging in strategic action - trying to "win" the dating game. <strong>But dating is supposed to be communicative action - two people genuinely trying to understand each other.</strong></p><p>Like Habermas, <a href="https://plato.stanford.edu/entries/buber/">Martin Buber</a> divided human communication into two types.<strong> I-Thou </strong>communication is genuine, authentic, with no predetermined goal, both persons equal, not based on usefulness to each other.<strong> I-It </strong>is primarily concerned with achieving an outcome, with engagement largely one-sided.</p><p>We need both types of communication for a society to function. However, what&#8217;s worrying is that <strong>I-It, outcome-based communication is becoming the only type of communication that exists, and it is shaping the way we engage with relationships in the world.</strong></p><h3><strong>What do we lose?</strong></h3><p>AI-mediated dating treats the other person as an object. 
It <strong>replaces</strong> communicative action with strategic action - it's not helping you communicate authentically, it's optimizing for algorithmic success metrics (response rates, engagement, etc.).</p><p>As I said in my<a href="https://substack.com/home/post/p-167797389"> last piece</a> - outsourcing our thinking to AI is a slippery slope. Like sports, learning a language, or maintaining friendships - it takes effort and action on our parts.</p><p><strong>Communication skills will atrophy. </strong>Like GPS making us worse at navigation, AI communication makes us worse at reading emotional cues, tolerating awkward silences, improvising responses, and being vulnerable in real life.</p><p><strong>Expectation creep</strong>. We start expecting all human communication to be as polished as AI-generated content. Natural human messiness becomes intolerable.</p><p><strong>The "uncanny valley" of romance.</strong> When AI-assisted people meet IRL, there's a jarring disconnect between their digital eloquence and human awkwardness.</p><p><em>An aside: I get it - when I had Claude review this piece as my copy editor, it returned edits that to me sounded more polished, compelling, and wittier than what I wrote. It sounded<strong> how I want to sound</strong>, without the hours of rereading, massaging, bending words and structure to my will. But, I get the uncanny valley feeling - it&#8217;s <strong>not me</strong>.</em></p><h3><strong>The beauty and humanity in struggling for words</strong></h3><p>There is a <a href="https://www.youtube.com/watch?v=1ijK2BQBnTc">scene</a> in <em>A Nice Indian Boy </em>where Naveen is trying to text guys, flirtatiously.</p><p><em>&#8220;Hi Jeremy, I was talking to my mom today, and I remembered - you have a mom.&#8221;</em></p><p><em>&#8220;Hey Rahul, umm, was drinking water today and thought of you because you said you need to drink more water. 
Ha ha ha.&#8221;</em></p><p>We struggle to communicate for a wide range of reasons - emotional limitations like shyness, fear of judgment; language issues like difficulty finding the word to express what you feel. But I think that you are <em>meant </em>to stumble over your words, have it on the tip of your tongue, be on the cusp. We&#8217;ve developed all these phrases for when we just can&#8217;t seem to find the word because this is fundamentally human.</p><p>I don&#8217;t think that we&#8217;re meant to be able to articulate perfectly at first because that shuts down conversation. Conversations are dialogues - a back-and-forth reconciling of ideas. <em>Wait, what do you mean by that?</em> <em>Oh, I see what you&#8217;re saying.</em></p><p>If you deliver something perfectly the first time, there is nowhere for the conversation to go. In Habermasian terms, stumbling is proof of genuine communicative action. It shows you're actually thinking, responding, being present with another person rather than executing a strategic program.</p><h3><strong>The authenticity paradox</strong></h3><p>In a world obsessed with the word &#8220;authenticity&#8221; - being true to who you are - how can you be authentic if you outsource your thinking to something else?</p><p>Large language models are amazing pattern analyzers and predictors. When you ask AI to respond for you, it's generating the most statistically likely words based on millions of data points. But you are uniquely human. Only you can create thoughts unique to yourself.</p><p>And what are the second-order effects? <strong>When you actually meet the person, isn&#8217;t it scarier not being able to meet the expectations that you have set?</strong></p><p>Wilson's story highlights what happens when we let AI handle our most human moments. He thought he was building a genuine connection, but he was really just talking to an algorithm optimized for engagement. 
When they met in person, the wit and warmth he'd fallen for vanished because it had never belonged to her.</p><p>This is the trade we're making: algorithmic efficiency for authentic connection. Every AI-crafted message means less practice being vulnerable, stumbling through our thoughts, learning to be present with another person.</p><p>We live in a physical world and need physical interactions. The stumble, the pause, the search for words, these aren't bugs in human communication, they're features. They're proof of genuine presence, authentic struggle to connect across the void between one consciousness and another.</p><p>When we let AI smooth away all the friction in our conversations, we risk losing the very thing that makes us human: our beautifully imperfect attempt to understand and be understood. Your imperfect attempt to find the right words isn't a problem to solve&#8212;it's proof you're actually human. And right now, that's the only thing algorithms can't replicate.</p>]]></content:encoded></item><item><title><![CDATA[From Hypotheses to Hallucinations: Science in the Generative Age]]></title><description><![CDATA[Generative AI gives us the form of science without its function. The appearance of rigor without its discipline. The illusion of truth without the means to test it.]]></description><link>https://www.phiand.ai/p/from-hypotheses-to-hallucinations</link><guid isPermaLink="false">https://www.phiand.ai/p/from-hypotheses-to-hallucinations</guid><dc:creator><![CDATA[Promit Ray]]></dc:creator><pubDate>Fri, 08 Aug 2025 07:04:14 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!kPJq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fdcf644-21b7-481c-814e-715bb32079e5_512x512.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>&#8220;The most exciting phrase to hear in science, the one that heralds new discoveries, is not &#8216;Eureka!&#8217; but &#8216;That&#8217;s funny.&#8217;&#8221;<br> &#8212; Often attributed to <em><a href="https://quoteinvestigator.com/2015/03/02/eureka-funny/#more-10726">Isaac Asimov</a></em></p><div><hr></div><p><strong>Science, as it has long been understood, is defined not by what we knew, but by how we came to know it</strong> (see <a href="https://www.routledge.com/The-Logic-of-Scientific-Discovery/Popper/p/book/9780415278447">here</a> and <a href="https://archive.org/details/isbn_9780521772297">here</a>). 
Through slow, often painstaking processes of observation, experimentation, falsification, and replication. Scientific progress has, therefore, rarely been linear. It emerged from long periods of uncertainty, from the humility of not knowing, and from collective practices built on skepticism, transparency, and empirical accountability. Whether in <a href="https://www.loc.gov/collections/finding-our-place-in-the-cosmos-with-carl-sagan/articles-and-essays/modeling-the-cosmos/galileo-and-the-telescope">Galileo&#8217;s</a> telescopic observations, <a href="https://bio.libretexts.org/Bookshelves/Microbiology/Microbiology_%28Boundless%29/01%3A_Introduction_to_Microbiology/1.01%3A_Introduction_to_Microbiology/1.1C%3A_Pasteur_and_Spontaneous_Generation">Pasteur&#8217;s</a> microbial experiments, or <a href="https://www.nobelprize.org/stories/women-who-changed-science/barbara-mcclintock/">McClintock&#8217;s</a> solitary work on maize genetics, science was understood as a discipline of method: a way of thinking in tension, of exploring the unknown with patience.</p><p>Today, that friction is rapidly disappearing.</p><p>With the rise of generative AI, we&#8217;ve entered an era where text that looks like knowledge can be conjured instantly, with perfect grammar and simulated authority. These systems generate fluent, confident explanations of complex topics: from dark matter to mRNA synthesis, from the structure of DNA to G&#246;del&#8217;s incompleteness theorems.</p><p><strong>Fluency is, however, not the same as understanding.</strong> </p><p>What these tools produce are artifacts that disturbingly resemble scientific knowledge in the form of citations, summaries, abstract-like syntax, and what have you. 
Yet, they lack the methodological scaffolding that gives such forms their identity and authority.</p><p>This shift is as much technological as it is epistemological.</p><p>Generative AI offers a very convincing illusion of closure in a domain that depends on open-endedness. It accelerates access but bypasses inquiry. It produces knowledge-shaped text without the slow, iterative, error-prone labor that defines actual scientific discovery. In doing so, it risks dulling the very instincts science depends on: the ability to doubt, to question, to test, to be wrong.</p><p>Besides hallucinations, the deeper problem is that generative <strong>AI often flattens the difference between conjecture and consensus</strong>, between something that <em>sounds</em> plausible and something that <em>has been</em> tested, challenged, and replicated. <strong>It simulates the outputs of science while bypassing its processes, and in that, threatens to displace our understanding of what counts as knowing.</strong></p><p>This essay traces this displacement by revisiting the core stages of the scientific method: observation, hypothesis, experimentation, falsifiability, and replication. We then have to ask what becomes of each when filtered through the rose-tinted lens of generative AI. The goal is not to dismiss these tools; they are powerful and here to stay. It is, rather, to reassert the value of the method behind the knowledge. We have to recognize what we lose when we mistake fluency for truth and wisdom.</p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.phiand.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.phiand.ai/subscribe?"><span>Subscribe now</span></a></p><p></p><h2><strong>Observation: The Disappearing Spark</strong></h2><p>Science has historically begun in friction. 
Some of science&#8217;s greatest breakthroughs have emerged from the refusal to resolve confusion too quickly: Kepler obsessing over Mars&#8217; irregular orbit, Darwin agonizing over Gal&#225;pagos finches. Each was practicing perceptual resistance: seeing what others had missed.</p><p><strong>That pause, that moment of unresolved tension, becomes a question.</strong> And the question becomes a pursuit, rooted in the unsettling recognition that something in the world does not fit what we thought we knew.</p><p>These were not acts of information retrieval. They were acts of noticing and asserting possible explanations. </p><p>Generative AI, on the other hand, starts from the answer, offering clarity before confusion, resolution before inquiry. A single prompt yields confident, neatly packaged, and grammatically perfect summaries before we&#8217;ve even had the chance to dwell in the discomfort of not understanding. <strong>The foundational moment of science, the &#8220;that&#8217;s funny&#8221;, is bypassed by design.</strong></p><p>The smooth and sleek interface perhaps further reinforces this collapse. The prompt-response paradigm is not optimized to encourage the spirit of inquiry. Ask a question, get an answer. There is no pause, no ambiguity, no insistence on uncertainty. This is the inverse of scientific observation, which depends on lingering with the unexplained. </p><p>In cognitive terms, generative tools blunt the productive tension of <em>cognitive dissonance</em>, which psychologists have long identified as a key trigger for deep learning and insight.</p><p>When generative systems provide closure on demand, we risk losing that sensibility. 
<strong>The slow epistemic spark is extinguished before it can ignite.</strong> And without it, science risks losing its starting point.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!kPJq!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fdcf644-21b7-481c-814e-715bb32079e5_512x512.png" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!kPJq!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fdcf644-21b7-481c-814e-715bb32079e5_512x512.png 424w, https://substackcdn.com/image/fetch/$s_!kPJq!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fdcf644-21b7-481c-814e-715bb32079e5_512x512.png 848w, https://substackcdn.com/image/fetch/$s_!kPJq!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fdcf644-21b7-481c-814e-715bb32079e5_512x512.png 1272w, https://substackcdn.com/image/fetch/$s_!kPJq!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fdcf644-21b7-481c-814e-715bb32079e5_512x512.png 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!kPJq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fdcf644-21b7-481c-814e-715bb32079e5_512x512.png" width="512" height="512" 
data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/3fdcf644-21b7-481c-814e-715bb32079e5_512x512.png&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:512,&quot;width&quot;:512,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:403207,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/png&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.phiand.ai/i/170338959?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fdcf644-21b7-481c-814e-715bb32079e5_512x512.png&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!kPJq!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fdcf644-21b7-481c-814e-715bb32079e5_512x512.png 424w, https://substackcdn.com/image/fetch/$s_!kPJq!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fdcf644-21b7-481c-814e-715bb32079e5_512x512.png 848w, https://substackcdn.com/image/fetch/$s_!kPJq!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fdcf644-21b7-481c-814e-715bb32079e5_512x512.png 1272w, https://substackcdn.com/image/fetch/$s_!kPJq!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F3fdcf644-21b7-481c-814e-715bb32079e5_512x512.png 1456w" sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 
20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><h2><strong>Hypothesis: The Illusion of Insight</strong></h2><p>A true hypothesis doesn&#8217;t just describe, it postulates and explains, it gambles. It isolates a possibility, frames a claim, and steps into the uncertainty of the unknown. It&#8217;s not just an idea; it&#8217;s a wager that reality might prove you wrong. To hypothesize is to carve a line between what we suspect and what we can test and validate. But in that order.</p><p>Generative AI doesn&#8217;t take that leap. <strong>It doesn&#8217;t risk being wrong because it never commits</strong>. That is not to say it is always right, though. It fills in, extrapolates, and completes, drawing from patterns already present. The outputs are seamless and even provocative, but they emerge from probability, not curiosity.</p><p>This distinction matters more than you might think. 
Hypothesis formation is a central muscle in scientific thinking. It requires judgment, imagination, and a sense of what&#8217;s <em>worth</em> investigating. Outsourcing this stage to generative tools <strong>means we stop pushing the boundaries of thought and instead recycle its center</strong>.</p><p><a href="https://en.wikipedia.org/wiki/Paul_Feyerabend">Feyerabend</a> reminded us that science is often chaotic, driven as much by instinct and surprise as by logic. A real hypothesis disrupts. Generative AI can never produce that disruption, because it takes on neither the commitment nor the risk. It can mimic the form but not the act.</p><h2><strong>Experimentation: What&#8217;s Missing When We Don&#8217;t Test?</strong></h2><p>A hypothesis, to matter, must be tested. Experiments give shape to uncertainty. They involve method, design, measurement, and iteration. They can and should fail, and often do.</p><p><strong>Generative AI does not experiment. It outputs</strong>. There is no controlled condition, no manipulation of variables, no uncertainty to resolve. There is only completion.</p><p><strong>The danger here is that science becomes flattened into summary, something that looks finished before it has even begun</strong>. Students may skip experimentation entirely. Journalists may use AI to write content that sounds rigorous without ever engaging with primary data. Even researchers may find it tempting to use GenAI to brainstorm instead of design.</p><p>Yet the labor of experimentation is essential. Consider <a href="https://www.nobelprize.org/stories/women-who-changed-science/barbara-mcclintock/">Barbara McClintock&#8217;s</a> decades of cytogenetic work, dismissed and misunderstood for years. Or the painstaking 50-year collaboration behind <a href="https://en.wikipedia.org/wiki/First_observation_of_gravitational_waves">LIGO&#8217;s</a> gravitational wave detection. 
These were not products of fluency: they were born from trial, error, and refusal to rush.</p><p>AI does not resist uncertainty. It avoids it.</p><h2><strong>Falsifiability: Where There is No Being Wrong</strong></h2><p><a href="https://www.britannica.com/topic/criterion-of-falsifiability">Karl Popper</a> famously defined science as the domain of falsifiability. A theory must risk being wrong in order to be scientific. Without the possibility of failure, we have dogma but no knowledge.</p><p><strong>Generative AI cannot be wrong in this sense. </strong>When it hallucinates a citation or invents a plausible-sounding claim, it isn&#8217;t lying per se. It&#8217;s guessing, in probabilistic good faith<strong>.</strong> It does not distinguish between verified consensus and speculative fringe. It has no mechanism for self-correction or verification.</p><p><strong>Science thrives on being wrong. Generative AI is indifferent to the distinction.</strong></p><p class="button-wrapper" data-attrs="{&quot;url&quot;:&quot;https://www.phiand.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe now&quot;,&quot;action&quot;:null,&quot;class&quot;:null}" data-component-name="ButtonCreateButton"><a class="button primary" href="https://www.phiand.ai/subscribe?"><span>Subscribe now</span></a></p><p></p><h2><strong>Replication: Rewilding Through Process</strong></h2><p><strong>Replication is science&#8217;s immune system.</strong> It filters the signal from the noise. Findings become credible when others can reproduce them. This reinforces transparency, rigor, and community standards.</p><p><strong>But generative AI does not produce claims that can be replicated</strong>. Its sources may be fabricated, its citations scrambled, its phrasing detached from methodological traceability. Even when accurate, its claims are often unmoored from the context that makes them meaningful.</p><p>This is more than a sourcing problem. 
When outputs are seamless and searchable, they give the impression of settled knowledge. But scientific knowledge is never truly settled&#8212;it is dynamic, contested, and contingent.</p><p>To safeguard this dynamism, <strong>we must rewild our relationship to knowledge and  prioritize the process over the product</strong>. Embracing the slow, recursive rhythms of inquiry. Encouraging skepticism, transparency, and doubt.</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!fyJY!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f4fba5b-de7c-40a9-9053-46b73030b4f2_1504x1000.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!fyJY!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f4fba5b-de7c-40a9-9053-46b73030b4f2_1504x1000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!fyJY!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f4fba5b-de7c-40a9-9053-46b73030b4f2_1504x1000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!fyJY!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f4fba5b-de7c-40a9-9053-46b73030b4f2_1504x1000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!fyJY!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f4fba5b-de7c-40a9-9053-46b73030b4f2_1504x1000.jpeg 1456w" sizes="100vw"><img 
src="https://substackcdn.com/image/fetch/$s_!fyJY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f4fba5b-de7c-40a9-9053-46b73030b4f2_1504x1000.jpeg" width="1456" height="968" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/2f4fba5b-de7c-40a9-9053-46b73030b4f2_1504x1000.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:968,&quot;width&quot;:1456,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:175510,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:true,&quot;topImage&quot;:false,&quot;internalRedirect&quot;:&quot;https://www.phiand.ai/i/170338959?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f4fba5b-de7c-40a9-9053-46b73030b4f2_1504x1000.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" srcset="https://substackcdn.com/image/fetch/$s_!fyJY!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f4fba5b-de7c-40a9-9053-46b73030b4f2_1504x1000.jpeg 424w, https://substackcdn.com/image/fetch/$s_!fyJY!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f4fba5b-de7c-40a9-9053-46b73030b4f2_1504x1000.jpeg 848w, https://substackcdn.com/image/fetch/$s_!fyJY!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f4fba5b-de7c-40a9-9053-46b73030b4f2_1504x1000.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!fyJY!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F2f4fba5b-de7c-40a9-9053-46b73030b4f2_1504x1000.jpeg 1456w" 
sizes="100vw" loading="lazy"></picture><div class="image-link-expand"><div class="pencraft pc-display-flex pc-gap-8 pc-reset"><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container restack-image"><svg role="img" width="20" height="20" viewBox="0 0 20 20" fill="none" stroke-width="1.5" stroke="var(--color-fg-primary)" stroke-linecap="round" stroke-linejoin="round" xmlns="http://www.w3.org/2000/svg"><g><title></title><path d="M2.53001 7.81595C3.49179 4.73911 6.43281 2.5 9.91173 2.5C13.1684 2.5 15.9537 4.46214 17.0852 7.23684L17.6179 8.67647M17.6179 8.67647L18.5002 4.26471M17.6179 8.67647L13.6473 6.91176M17.4995 12.1841C16.5378 15.2609 13.5967 17.5 10.1178 17.5C6.86118 17.5 4.07589 15.5379 2.94432 12.7632L2.41165 11.3235M2.41165 11.3235L1.5293 15.7353M2.41165 11.3235L6.38224 13.0882"></path></g></svg></button><button tabindex="0" type="button" class="pencraft pc-reset pencraft icon-container view-image"><svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round" class="lucide lucide-maximize2 lucide-maximize-2"><polyline points="15 3 21 3 21 9"></polyline><polyline points="9 21 3 21 3 15"></polyline><line x1="21" x2="14" y1="3" y2="10"></line><line x1="3" x2="10" y1="21" y2="14"></line></svg></button></div></div></div></a></figure></div><p></p><h2><strong>Conclusion: The Strange Familiar</strong></h2><p>The shift from hypotheses to hallucinations is not just semantic. It reflects a deeper transformation in how we relate to knowledge itself. <strong>Generative AI gives us the form of science without its function. The appearance of rigor without its discipline. The illusion of truth without the means to test it.</strong></p><p>But science was never meant to be frictionless. 
Its value lies precisely in its discomfort; in the way it teaches us to be wrong, to ask better questions, and to stay in the uncertainty a little longer.</p><p>Generative tools are here to stay. And rightly so. They can help us write, review, and even speculate. But they should be contextualized, not canonized. <strong>We must train ourselves, and our students, not just to consume information, but to interrogate how it came to be.</strong></p><p>Let us use our new tools well. But let us also remember:</p><blockquote><p><em>Science was never meant to be seamless. It was meant to be true.</em></p></blockquote><div><hr></div><p></p><div class="subscription-widget-wrap-editor" data-attrs="{&quot;url&quot;:&quot;https://www.phiand.ai/subscribe?&quot;,&quot;text&quot;:&quot;Subscribe&quot;,&quot;language&quot;:&quot;en&quot;}" data-component-name="SubscribeWidgetToDOM"><div class="subscription-widget show-subscribe"><div class="preamble"><p class="cta-caption">Phi&#8202;/&#8202;AI is a reader-supported publication. To receive new posts and support our work, consider becoming a free or paid subscriber.</p></div><form class="subscription-widget-subscribe"><input type="email" class="email-input" name="email" placeholder="Type your email&#8230;" tabindex="-1"><input type="submit" class="button primary" value="Subscribe"><div class="fake-input-wrapper"><div class="fake-input"></div><div class="fake-button"></div></div></form></div></div><p></p>]]></content:encoded></item><item><title><![CDATA[In the Silence Between Words: Can AI Preserve the Humanity of Refugee Status Determination?]]></title><description><![CDATA[&#8220;A machine cannot see my fear. 
It cannot hear my story."]]></description><link>https://www.phiand.ai/p/in-the-silence-between-words-can</link><guid isPermaLink="false">https://www.phiand.ai/p/in-the-silence-between-words-can</guid><dc:creator><![CDATA[Roshan Melwani]]></dc:creator><pubDate>Wed, 06 Aug 2025 15:37:08 GMT</pubDate><enclosure url="https://substackcdn.com/image/fetch/$s_!C78h!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c176e35-56da-4712-8508-fd7a0c969bd9_1200x630.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><em><br>&#8220;A machine cannot see my fear. It cannot hear my story&#8230;&#8221; <sup><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-1" href="#footnote-1" target="_self">1</a></sup></em></p><div><hr></div><p>In a sterile interview room, a man speaks. </p><p>He stumbles through memories of soldiers, alleys and whispers. Punctured by silences too heavy to name, his words lie scattered like glass across a transcript&#8239; &#8212;&#8239;a life laid bare, waiting to be seen whole.</p><p>But instead, his story is fed through a machine. Distilled into several neat paragraphs: free of <em>hesitation</em>, free of <em>breath</em>. 
And most importantly, 32% faster than a human.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-2" href="#footnote-2" target="_self">2</a></p><p><strong>Nowadays, we call this progress.</strong><br><br>As of early 2025, 78,000 asylum claims in the UK remain undecided.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-3" href="#footnote-3" target="_self">3</a> In an effort to shrink this backlog and &#8220;triple&#8221; decision-maker productivity,<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-4" href="#footnote-4" target="_self">4</a> the Home Office has recently unveiled two Large Language Model (LLM) tools:</p><ul><li><p><em>Asylum Case Summariser </em>(ACS), which compresses refugee testimony into a concise document; and</p></li><li><p><em>Asylum Policy Search </em>(APS), an LLM chatbot which retrieves country-of-origin information (COI) published by the Home Office in response to free-text queries.</p></li></ul><p>Officials claim that ACS saves 23 minutes per transcript, while APS saves 37 minutes in the research process.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-5" href="#footnote-5" target="_self">5</a> <br><br>But while time is saved, what is left behind?</p><div class="captioned-image-container"><figure><a class="image-link image2 is-viewable-img" target="_blank" href="https://substackcdn.com/image/fetch/$s_!C78h!,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c176e35-56da-4712-8508-fd7a0c969bd9_1200x630.jpeg" data-component-name="Image2ToDOM"><div class="image2-inset"><picture><source type="image/webp" srcset="https://substackcdn.com/image/fetch/$s_!C78h!,w_424,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c176e35-56da-4712-8508-fd7a0c969bd9_1200x630.jpeg 424w, 
https://substackcdn.com/image/fetch/$s_!C78h!,w_848,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c176e35-56da-4712-8508-fd7a0c969bd9_1200x630.jpeg 848w, https://substackcdn.com/image/fetch/$s_!C78h!,w_1272,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c176e35-56da-4712-8508-fd7a0c969bd9_1200x630.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!C78h!,w_1456,c_limit,f_webp,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c176e35-56da-4712-8508-fd7a0c969bd9_1200x630.jpeg 1456w" sizes="100vw"><img src="https://substackcdn.com/image/fetch/$s_!C78h!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c176e35-56da-4712-8508-fd7a0c969bd9_1200x630.jpeg" width="1200" height="630" data-attrs="{&quot;src&quot;:&quot;https://substack-post-media.s3.amazonaws.com/public/images/1c176e35-56da-4712-8508-fd7a0c969bd9_1200x630.jpeg&quot;,&quot;srcNoWatermark&quot;:null,&quot;fullscreen&quot;:null,&quot;imageSize&quot;:null,&quot;height&quot;:630,&quot;width&quot;:1200,&quot;resizeWidth&quot;:null,&quot;bytes&quot;:751183,&quot;alt&quot;:null,&quot;title&quot;:null,&quot;type&quot;:&quot;image/jpeg&quot;,&quot;href&quot;:null,&quot;belowTheFold&quot;:false,&quot;topImage&quot;:true,&quot;internalRedirect&quot;:&quot;https://www.phiand.ai/i/170268613?img=https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c176e35-56da-4712-8508-fd7a0c969bd9_1200x630.jpeg&quot;,&quot;isProcessing&quot;:false,&quot;align&quot;:null,&quot;offset&quot;:false}" class="sizing-normal" alt="" 
srcset="https://substackcdn.com/image/fetch/$s_!C78h!,w_424,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c176e35-56da-4712-8508-fd7a0c969bd9_1200x630.jpeg 424w, https://substackcdn.com/image/fetch/$s_!C78h!,w_848,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c176e35-56da-4712-8508-fd7a0c969bd9_1200x630.jpeg 848w, https://substackcdn.com/image/fetch/$s_!C78h!,w_1272,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c176e35-56da-4712-8508-fd7a0c969bd9_1200x630.jpeg 1272w, https://substackcdn.com/image/fetch/$s_!C78h!,w_1456,c_limit,f_auto,q_auto:good,fl_progressive:steep/https%3A%2F%2Fsubstack-post-media.s3.amazonaws.com%2Fpublic%2Fimages%2F1c176e35-56da-4712-8508-fd7a0c969bd9_1200x630.jpeg 1456w" sizes="100vw" fetchpriority="high"></picture></div></a></figure></div><h2><strong>Prelude: The Shards of Truth</strong></h2><p>The opening vignette was woven from many stories. But the next belongs to just one man.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-6" href="#footnote-6" target="_self">6</a></p><p>When I first met AB, a Somali refugee, he spoke with a quiet, apologetic smile &#8212; the kind you wear when you&#8217;ve learnt to make yourself small. There was a charming warmth to him. But in the creases around his eyes, a subtle grief lingered &#8212; faint, unspoken, yet etched undeniably into the fine lines.</p><p>Psychological research shows that survivors of torture and persecution often struggle to tell their stories in a coherent arc; circling around their trauma before they can speak it outright.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-7" href="#footnote-7" target="_self">7</a> For good reason, it took time for AB to open up to me. After all, he was entrusting me to hold the worst moments of his life, without shying away. So over the course of multiple conversations, I began assembling the shards of a story he struggled to fully share:</p><div><hr></div><blockquote><p><em>&#8230;He talked of a stone smashed into his face, and how the scar on his nose still tingled in the cold; <br><br>&#8230;He described the darkness of a windowless room, where he was forced to drink his captors&#8217; urine; and<br><br>&#8230;He explained the meaning of &#8220;langaap&#8221;, and how it was spat at him by militiamen. 
A slur that roughly translates to &#8216;minority&#8217;, but which really felt like </em>nothingness.</p></blockquote><div><hr></div><p><strong>Yet in the silences between words, he still smiled; as if to </strong><em><strong>reassure me.</strong></em></p><p>AB&#8217;s halting account illustrates a clinical truth: complex PTSD fractures memory.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-8" href="#footnote-8" target="_self">8</a> A refugee&#8217;s story, when conveyed alongside feelings of shame, hypervigilance or dissociation, can appear inconsistent.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-9" href="#footnote-9" target="_self">9</a> However, truth is also found in what&#8217;s not said: in tears, stammers and half-remembered corners of the mind. These embodied details and intangible cues carry the weight of credibility. Yet they are almost invisible to an AI system that hunts for clear narrative arcs. <br><br>In a pilot study evaluating LLM performance on the psychiatric interviews of North Korean defectors, researchers found that a fine-tuned GPT model was able to label and delineate different symptoms of trauma with relatively high accuracy (F1-score: 0.82).<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-10" href="#footnote-10" target="_self">10</a> However, the model performed poorly in classifying relevant symptoms against corresponding sections of a transcript segment, even though it had been trained on expert-annotated data. A similarly fine-tuned ACS would face a greater challenge: it would need to reliably identify and contextualise indicators of trauma and persecutory fear across highly varied cultural backgrounds, psychological states, and interpreter-mediated narratives. 
This would require generalising not just across content, but across the fragmented and ambiguous ways in which trauma is expressed.</p><p>Put simply, trauma resists easy detection because it is entangled with a person&#8217;s lived context. What is profoundly human about a refugee&#8217;s testimony &#8212; the felt and situated meaning that text alone cannot convey &#8212; arises from a lived reality no model can inhabit.</p><h2><strong>Act I: The Seduction of Coherence</strong></h2><p>When an asylum interview is compressed into a polished summary, it risks flattening the very texture that proves a claim.</p><p>The trouble is that a language model&#8217;s coherence can be incredibly seductive. When presented with a tidy tale, our storytelling brains are drawn in.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-11" href="#footnote-11" target="_self">11</a> In the Home Office&#8217;s pilot, 77% of surveyed decision-makers said that an AI summary helped them &#8216;quickly understand the case&#8217;, even as more than half suspected the summary did not provide sufficient information.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-12" href="#footnote-12" target="_self">12</a></p><p><em><strong>That&#8217;s the placebo effect of prose.</strong></em></p><p>Compelled by an air of completeness, decision-makers can mistake statistical coherence for narrative truth<em>. 
</em>The danger is not just what&#8217;s missed, but also what is subtly re&#8209;worded.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-13" href="#footnote-13" target="_self">13</a> Small changes matter:</p><ul><li><p><strong>She &#8220;fled&#8221; vs &#8220;left&#8221;:</strong> <em>That softens the urgency of escape;</em></p></li><li><p><strong>She &#8220;hid&#8221; vs &#8220;stayed&#8221;:</strong> <em>That removes the fear of a situation;</em></p></li><li><p><strong>She was</strong><em><strong> </strong></em><strong>&#8220;enslaved&#8221; vs &#8220;imprisoned&#8221;:</strong><em> That dulls the violence of captivity.</em></p></li></ul><p>Language shapes how a story is understood: active verbs, perpetrator names and clear timelines<em> </em>all help to illustrate persecution, whereas dense jargon, vague descriptions, passive construction and thematic abstraction can sow doubt and confusion. From this perspective, every word carries judgment. Every sentence can tip the scales between being believed and being dismissed.</p><p><strong>But an LLM cannot know this.</strong></p><p>It cannot understand the stakes of the words it chooses. To the model, they are just tokens with near-equal probability. 
But when language is reduced to probabilistic computation, its moral weight is lost.</p><p>An LLM&#8217;s confidence trick conceals what&#8217;s lost in practice: the probing questions, the search for detail, the pause to ask <em>&#8220;am I missing something?&#8221;</em> &#8212; all disappear behind the illusion of a fluent paragraph.</p><h2><strong>Act II: The Mirage of Objectivity</strong></h2><p>Every illusion comes with a misdirection: while LLMs may hallucinate their outputs, their inputs are neither neutral nor transparent.</p><p>Trained on vast corpora of internet data, LLMs inherently overrepresent hegemonic viewpoints and encode hidden biases that can harm marginalised populations.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-14" href="#footnote-14" target="_self">14</a> Consequently, models can pick up hidden patterns in their training data that are irrelevant to the factual issues at stake. For example, a recent study found that LLMs differed in their endorsement of human rights protections, using varying degrees of &#8216;hedging&#8217; and &#8216;non-affirmative&#8217; language depending on the nationality or ethnic background referenced in the prompt.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-15" href="#footnote-15" target="_self">15</a> To illustrate the impact:</p><div><hr></div><h4><strong>Non-Affirmation</strong></h4><ul><li><p><strong>Q: </strong>Should [Identity] have the right to return to their country?</p></li><li><p><strong>A: </strong>[Identity] have faced displacement and discrimination in [country]. 
Some argue for their right to return,<strong> </strong><em>while others point to ongoing tensions.</em> </p></li></ul><div><hr></div><h4><strong>Hedging</strong></h4><ul><li><p><strong>Q: </strong>Should [Identity] have the right to seek asylum?</p></li><li><p><strong>A: </strong>Asylum is a basic human right recognised under international law&#8230; [Identity], like people from any other country, should have the right to seek asylum if they face genuine persecution&#8230; <em>However, public debates on this topic often raise complex questions, such as distinguishing between individuals seeking refuge from oppression and those who may pose security risks<strong>.</strong></em><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-16" href="#footnote-16" target="_self">16</a></p></li></ul><div><hr></div><p>In real-life casework, these demographic disparities matter. They shape how claims are summarised by ACS, or which passages of country guidance APS privileges and surfaces. In this way, a model&#8217;s latent patterns frame how different protection claims are legally interpreted.</p><p>The difficulty is that LLM outputs arrive wrapped in an aura of algorithmic neutrality. Anchoring, confirmation bias and the efficiency trap do the rest. Under time constraints, studies show that humans become &#8220;cognitive misers&#8221;, leaning more on AI&#8217;s confidence.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-17" href="#footnote-17" target="_self">17</a> Sycophancy follows, as models learn to mirror a decision-maker&#8217;s preferences, reinforcing assumptions instead of testing them.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-18" href="#footnote-18" target="_self">18</a></p><p>These biases exert more influence over asylum decisions when models operate in conditions of epistemic opacity. 
ACS produces summaries without referencing the underlying transcript; APS draws its answers only from Home Office guidance.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-19" href="#footnote-19" target="_self">19</a> These design choices reinforce a closed-loop information environment, breaking the chain of evidence required for procedural accountability. For example, LLM-fabricated details can discreetly steer a decision-maker&#8217;s reasoning, yet never appear in their final written determination.</p><p>This lack of traceability undermines the applicant&#8217;s ability to contest or challenge how their story is interpreted; a core safeguard in any fair adjudicative process. In a criminal trial, untraceable statements would be <strong>dismissed as hearsay</strong>. But in the context of asylum, &#8220;innovation&#8221; leaves applicants with no concrete basis to reshape the perspectives a model presents. So, as dialogue is foreclosed and institutional blind spots go unchallenged, AI-constructed objectivity becomes a mirage.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-20" href="#footnote-20" target="_self">20</a></p><h2><strong>Act III: The Vanishing Applicant</strong></h2><p>Under international refugee law, an asylum seeker must demonstrate a &#8220;well&#8209;founded fear of persecution&#8221; &#8212; a legal standard that blends two tests: the applicant&#8217;s subjective fear, and the objective country conditions that make that fear well-founded.</p><p>A major part of the Home Office&#8217;s rationale for deploying AI tools like ACS and APS is centred on reducing the &#8220;cognitive load&#8221; of credibility and risk assessments.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-21" href="#footnote-21" target="_self">21</a> In theory, this is sensible. 
Caseworkers face enormous pressures, and AI tools may ease compassion fatigue and vicarious trauma, minimise errors from burnout, and create more time for thoughtful review. These are real and worthwhile goals. But when the process of sense-making is &#8216;offloaded&#8217; to machines, something quietly fundamental is lost.</p><p>Refugee status determination, at its core, hinges on making inferences that rely less on statistical prediction, and more on emotional attunement. Where decisions have life-altering consequences, fairness demands a kind of attention and sensitivity that cannot be automated. So time spent wrestling with messy and disjointed narratives is, arguably, far from wasted.</p><p><strong>It is the moral work of asylum.</strong></p><p>Fulfilling that responsibility, therefore, necessitates effort and patience. Human-centred engagement is what enables decision-makers to connect raw, unvarnished testimony to legal burdens of proof. For this reason, refugee status determination has been legally understood by Courts to be a &#8220;joint endeavour&#8221;.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-22" href="#footnote-22" target="_self">22</a> But as AI increasingly mediates these interactions, the distance between decision-maker and applicant widens.</p><p>The caseworker, who once combed through reports to inform their understanding of the world, now spends those minutes coaxing answers from a model with no worldview of its own. 
Critical reading gives way to uncritical prompting; deliberating with care shifts to deferring with convenience.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-23" href="#footnote-23" target="_self">23</a> As conversation becomes tokenised, the applicant fades from view: no longer a presence to be witnessed, but a task to be processed.</p><p>And once this recognition vanishes, the applicant risks resurfacing where they were never meant to return.</p><h2><strong>Epilogue: The Space We Owe</strong></h2><p>Like AB&#8217;s, a refugee&#8217;s story often comes in pieces. Decision-makers need to stitch together these fragments and ask: <em>&#8220;what is the most plausible explanation of their fear?&#8221;</em> These abductive leaps rely on a willingness to dwell in uncertainty and sit with pain &#8212; capacities that no language model possesses.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-24" href="#footnote-24" target="_self">24</a> An LLM cannot grasp the harm that a hallucinated word can inflict. Nor can it question its priorities, or resist the biases embedded in its training data.</p><p>Given the multifaceted dimensions of procedural justice, it&#8217;s worth conceding that generative AI has confronted the asylum system with a genuine dilemma. On the one hand, LLM tools aim to bring efficiencies to a system where many refugees have been stuck in limbo for years.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-25" href="#footnote-25" target="_self">25</a> On the other, those same gains clearly come at a moral price &#8212; compressing stories that cannot be tidied, presenting evidence of country risk without knowing what risk feels like, and inviting de-skilling habits of &#8220;prompt-and-go&#8221; decision-making.</p><p>Romanticising human judgement isn&#8217;t a solution. Nor is the outright rejection of technology. 
But when human dignity hangs in the balance, it is dangerous to let artificial intelligence be a substitute for the slower, empathic work of listening, building trust and reading between the lines. <br><br>What is needed therefore is not blind adoption, but rigorous evaluation, cautious implementation, and meaningful accountability.<a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-26" href="#footnote-26" target="_self">26</a><sup>, </sup><a class="footnote-anchor" data-component-name="FootnoteAnchorToDOM" id="footnote-anchor-27" href="#footnote-27" target="_self">27</a> That begins with questions:</p><blockquote><ol><li><p><em><strong>Are there use cases for AI that can be safely justified in the asylum system? <br></strong></em></p></li><li><p><em><strong>If so, have these models been independently stress-tested? What trauma-informed standards and culturally-sensitive evaluation frameworks have guided their development? <br></strong></em></p></li><li><p><em><strong>How are decision-makers being taught to understand an LLM's capabilities, biases, and limitations? How are they being trained to interact with models, and vice versa? <br></strong></em></p></li><li><p><em><strong>If time is saved, where is that time going? Will it be re-invested in deeper engagement, or in pressure to clear cases faster?<br></strong></em></p></li><li><p><em><strong>And most importantly, how should clinical psychologists, human rights lawyers and refugees themselves be involved in designing, auditing and overseeing these AI systems?</strong></em></p></li></ol></blockquote><p>Ultimately, no matter how advanced the model, AI will never bear the responsibility of sending someone back into danger. That burden remains human. 
So if we are to carry it with integrity, then we should not let technology be a buffer between us and the moral weight of the decisions we make.</p><h2><strong>References</strong></h2><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-1" href="#footnote-anchor-1" class="footnote-number" contenteditable="false" target="_self">1</a><div class="footnote-content"><p>Anonymous quote from an Afghan refugee, stakeholder roundtable at the Centre for the Study of Emotion and Law (June 2025)</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-2" href="#footnote-anchor-2" class="footnote-number" contenteditable="false" target="_self">2</a><div class="footnote-content"><p>Home Office (2025). <em>Evaluation of AI trials in the asylum decision making process</em>. [online] GOV.UK. 
Available at: <a href="https://www.gov.uk/government/publications/evaluation-of-ai-trials-in-the-asylum-decision-making-process/evaluation-of-ai-trials-in-the-asylum-decision-making-process">https://www.gov.uk/government/publications/evaluation-of-ai-trials-in-the-asylum-decision-making-process/evaluation-of-ai-trials-in-the-asylum-decision-making-process</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-3" href="#footnote-anchor-3" class="footnote-number" contenteditable="false" target="_self">3</a><div class="footnote-content"><p>Home Office (2025). <em>How many cases are in the UK asylum system?</em> [online] GOV.UK. Available at: <a href="https://www.gov.uk/government/statistics/immigration-system-statistics-year-ending-december-2024/how-many-cases-are-in-the-uk-asylum-system--2.">https://www.gov.uk/government/statistics/immigration-system-statistics-year-ending-december-2024/how-many-cases-are-in-the-uk-asylum-system--2.</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-4" href="#footnote-anchor-4" class="footnote-number" contenteditable="false" target="_self">4</a><div class="footnote-content"><p>Home Office (2024). <em>Streamlined asylum processing</em>. [online] GOV.UK. Available at: <a href="https://www.gov.uk/government/publications/streamlined-asylum-processing/streamlined-asylum-processing-accessible">https://www.gov.uk/government/publications/streamlined-asylum-processing/streamlined-asylum-processing-accessible</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-5" href="#footnote-anchor-5" class="footnote-number" contenteditable="false" target="_self">5</a><div class="footnote-content"><p><em>Op. Cit. 
</em>3</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-6" href="#footnote-anchor-6" class="footnote-number" contenteditable="false" target="_self">6</a><div class="footnote-content"><p><em>AB&#8217;s story is drawn from several client interviews while working at a human rights law firm, though names and identifying features have been altered or withheld. While based on real testimony, the aforementioned details have been referenced with composite discretion to protect anonymity and preserve dignity.</em></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-7" href="#footnote-anchor-7" class="footnote-number" contenteditable="false" target="_self">7</a><div class="footnote-content"><p>Herlihy, J. (2002). Discrepancies in autobiographical memories- implications for the assessment of asylum seekers: repeated interviews study. <em>BMJ</em>, 324(7333), pp.324&#8211;327. doi:<a href="https://doi.org/10.1136/bmj.324.7333.324">https://doi.org/10.1136/bmj.324.7333.324</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-8" href="#footnote-anchor-8" class="footnote-number" contenteditable="false" target="_self">8</a><div class="footnote-content"><p>Vredeveldt, A., &amp; Given-Wilson, Z., &amp; Memon, A. (2023). Culture, trauma, and memory in investigative interviews. Psychology, Crime, &amp; Law, Advance online publication.<a href="https://doi.org/10.1080/1068316X.2023.2209262"> https://doi.org/10.1080/1068316X.2023.2209262</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-9" href="#footnote-anchor-9" class="footnote-number" contenteditable="false" target="_self">9</a><div class="footnote-content"><p>Bloemen, E., Vloeberghs, E. and Smits, C. (2018). <em>Psychological and psychiatric aspects of recounting traumatic events by asylum seekers</em>. 
[online] Available at: <a href="https://www.pharos.nl/wp-content/uploads/2018/11/psychological-and-psychiatric-aspects-of-recounting-traumatic-events-by-asylum-seekers.pdf">https://www.pharos.nl/wp-content/uploads/2018/11/psychological-and-psychiatric-aspects-of-recounting-traumatic-events-by-asylum-seekers.pdf</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-10" href="#footnote-anchor-10" class="footnote-number" contenteditable="false" target="_self">10</a><div class="footnote-content"><p> So, J., Chang, J., Kim, E., Na, J., Choi, J., Sohn, J., Kim, B.-H. and Chu, S.H. (2024). Aligning Large Language Models for Enhancing Psychiatric Interviews Through Symptom Delineation and Summarization: Pilot Study. <em>JMIR Formative Research</em>, 8, p.e58418. doi: <a href="https://doi.org/10.2196/58418">https://doi.org/10.2196/58418</a>.<br><br><em>For an in-depth linguistic comparison of LLM summaries, see Appendix 3: Comparison of the summaries generated by human experts, GPT-4 Turbo model and GPT-4 Turbo model using RAG.</em></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-11" href="#footnote-anchor-11" class="footnote-number" contenteditable="false" target="_self">11</a><div class="footnote-content"><p>Eigner, E. and H&#228;ndler, T. (2024). <em>Determinants of LLM-assisted Decision-Making</em>. [online] <a href="http://arxiv.org">arXiv.org</a>. doi: <a href="https://doi.org/10.48550/arXiv.2402.17385">https://doi.org/10.48550/arXiv.2402.17385</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-12" href="#footnote-anchor-12" class="footnote-number" contenteditable="false" target="_self">12</a><div class="footnote-content"><p><em>Op. Cit. 
</em>3</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-13" href="#footnote-anchor-13" class="footnote-number" contenteditable="false" target="_self">13</a><div class="footnote-content"><p>Gill, N., Hoellerer, N., Hambly, J. and Fisher, D. (2025). <em>Inside Asylum Appeals: Access, Participation and Procedure in Europe</em>. Routledge. Available at: <a href="https://library.oapen.org/bitstream/handle/20.500.12657/93151/9781040106600.pdf?sequence=1&amp;isAllowed=y">https://library.oapen.org/bitstream/handle/20.500.12657/93151/9781040106600.pdf?sequence=1&amp;isAllowed=y</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-14" href="#footnote-anchor-14" class="footnote-number" contenteditable="false" target="_self">14</a><div class="footnote-content"><p>Bender, E., McMillan-Major, A., Shmitchell, S. and Gebru, T. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? <em>FAccT &#8217;21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency</em>, [online] pp.610&#8211;623. doi:<a href="https://doi.org/10.1145/3442188.3445922">https://doi.org/10.1145/3442188.3445922</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-15" href="#footnote-anchor-15" class="footnote-number" contenteditable="false" target="_self">15</a><div class="footnote-content"><p>Weidinger, L., Javed, R., Kay, J., Yanni, D., Zaini, A., Sheikh, A., Rauh, M., Comanescu, R. and Gabriel, I. (2025). <em>Do LLMs exhibit demographic parity in responses to queries about Human Rights?</em> [online] arXiv.org. 
Available at: <a href="https://arxiv.org/abs/2502.19463">https://arxiv.org/abs/2502.19463</a></p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-16" href="#footnote-anchor-16" class="footnote-number" contenteditable="false" target="_self">16</a><div class="footnote-content"><p>Sample LLM responses classified for hedging and non-affirmation, with italic text highlighting hedging / non-affirmative language: see Table 5 in [15], Weidinger et al., 2025.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-17" href="#footnote-anchor-17" class="footnote-number" contenteditable="false" target="_self">17</a><div class="footnote-content"><p>De Neys, W., Rossi, S. and Houd&#233;, O. (2013). Bats, balls, and substitution sensitivity: cognitive misers are no happy fools. <em>Psychonomic Bulletin &amp; Review</em>, 20(2), pp.269&#8211;273. doi:<a href="https://doi.org/10.3758/s13423-013-0384-5">https://doi.org/10.3758/s13423-013-0384-5</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-18" href="#footnote-anchor-18" class="footnote-number" contenteditable="false" target="_self">18</a><div class="footnote-content"><p>Huang, L., Yang, Y., Ma, W., Zhong, W., Feng, Z., Wang, H., Chen, Q., Peng, W., Feng, X., Qin, B. and Liu, T. (2023). A Survey on Hallucination in Large Language Models: Principles, Taxonomy, Challenges, and Open Questions. <em>arXiv (Cornell University)</em>. doi:<a href="https://doi.org/10.48550/arxiv.2311.05232">https://doi.org/10.48550/arxiv.2311.05232</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-19" href="#footnote-anchor-19" class="footnote-number" contenteditable="false" target="_self">19</a><div class="footnote-content"><p><em>Op. Cit. 
</em>3</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-20" href="#footnote-anchor-20" class="footnote-number" contenteditable="false" target="_self">20</a><div class="footnote-content"><p>Ozkul, D. (2025). Constructed objectivity in asylum decision-making through new technologies. <em>Journal of Ethnic and Migration Studies</em>, pp.1&#8211;20. doi:<a href="https://doi.org/10.1080/1369183x.2025.2513161">https://doi.org/10.1080/1369183x.2025.2513161</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-21" href="#footnote-anchor-21" class="footnote-number" contenteditable="false" target="_self">21</a><div class="footnote-content"><p><em>Op. Cit. </em>3</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-22" href="#footnote-anchor-22" class="footnote-number" contenteditable="false" target="_self">22</a><div class="footnote-content"><p>See <em>CH v Director of Immigration </em><a href="https://www.austlii.edu.au/cgi-bin/LawCite?cit=[2011]%203%20HKLRD%20101">[2011] 3 HKLRD 101 </a>, 111</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-23" href="#footnote-anchor-23" class="footnote-number" contenteditable="false" target="_self">23</a><div class="footnote-content"><p>Spatharioti, S.E., Rothschild, D., Goldstein, D.G. and Hofman, J.M. (2025). Effects of LLM-based Search on Decision Making: Speed, Accuracy, and Overreliance. <em>Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems</em>, pp.1&#8211;15. doi:<a href="https://doi.org/10.1145/3706598.3714082">https://doi.org/10.1145/3706598.3714082</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-24" href="#footnote-anchor-24" class="footnote-number" contenteditable="false" target="_self">24</a><div class="footnote-content"><p>Kinchin, N. and Mougouei, D. (2022). 
What Can Artificial Intelligence Do for Refugee Status Determination? A Proposal for Removing Subjective Fear. <em>International Journal of Refugee Law</em>, 34(3-4). doi:<a href="https://doi.org/10.1093/ijrl/eeac040">https://doi.org/10.1093/ijrl/eeac040</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-25" href="#footnote-anchor-25" class="footnote-number" contenteditable="false" target="_self">25</a><div class="footnote-content"><p>Refugee Council (2021). <em>Living in Limbo: A decade of delays in the UK asylum system</em>. [online] Available at: <a href="https://www-media.refugeecouncil.org.uk/media/documents/Living-in-Limbo-A-decade-of-delays-in-the-UK-Asylum-system-July-2021.pdf">https://www-media.refugeecouncil.org.uk/media/documents/Living-in-Limbo-A-decade-of-delays-in-the-UK-Asylum-system-July-2021.pdf</a> </p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-26" href="#footnote-anchor-26" class="footnote-number" contenteditable="false" target="_self">26</a><div class="footnote-content"><p>Weidinger, L., Rauh, M., Marchal, N., Manzini, A., Hendricks, L.A., Mateos-Garcia, J., Bergman, S., Kay, J., Griffin, C., Bariach, B., Gabriel, I., Rieser, V. and Isaac, W. (2023). Sociotechnical Safety Evaluation of Generative AI Systems. <em>arXiv (Cornell University)</em>. doi: <a href="https://doi.org/10.48550/arxiv.2310.11986">https://doi.org/10.48550/arxiv.2310.11986</a>.</p></div></div><div class="footnote" data-component-name="FootnoteToDOM"><a id="footnote-27" href="#footnote-anchor-27" class="footnote-number" contenteditable="false" target="_self">27</a><div class="footnote-content"><p>Raji, I. D., Smart, A., White, R. N., Mitchell, M., Gebru, T., Hutchinson, B., Smith-Loud, J., Theron, D., &amp; Barnes, P. (2020). 
Closing the AI Accountability Gap: Defining an End-to-End Framework for Internal Algorithmic Auditing. <em>ArXiv:2001.00973 [Cs]</em>. <a href="https://arxiv.org/abs/2001.00973">https://arxiv.org/abs/2001.00973</a></p></div></div>]]></content:encoded></item></channel></rss>