"THE 2028 GLOBAL INTELLIGENCE CRISIS": A Useful Stress Test, Weak Base Case
The Consequences of “Abundant” Intelligence: Risk, Timing, and the Future of Human Agency
Summary of Citrini’s Analysis
Citrini frames the piece as a stress-test scenario, not a prediction, written like a fictional macro memo from 2028. The core idea is that AI progress can be economically bullish at the technology level but still become macroeconomically bearish if it displaces white-collar workers faster than the economy can absorb them. In their view, this creates “ghost GDP”, where output is still being produced but the income no longer flows through households the way it used to, which weakens consumer demand and starts a negative loop between labor cuts, weaker spending, and even more AI adoption.
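The negative loop described above can be sketched as a toy difference model. Everything below is purely illustrative: the function name, the functional form, and every parameter are our own hypothetical assumptions, not figures from Citrini’s piece.

```python
# Toy sketch (hypothetical parameters) of the "negative loop" mechanism:
# labor cuts -> lower household income -> weaker spending -> more cost
# pressure -> a faster pace of AI-driven labor cuts in the next period.

def simulate_ghost_gdp_loop(periods=8, income=100.0, cut_rate=0.05,
                            spend_share=0.9, feedback=0.5):
    """Return the path of household labor income under a simple loop.

    cut_rate:    initial share of labor income displaced each period
    spend_share: share of income that flows back to firms as spending
    feedback:    how strongly the demand gap accelerates future cuts
    """
    path = [income]
    rate = cut_rate
    for _ in range(periods):
        income *= (1 - rate)                  # labor income falls with cuts
        demand_gap = 1 - spend_share          # spending shortfall vs. output
        rate = min(0.5, rate * (1 + feedback * demand_gap))  # cuts accelerate
        path.append(round(income, 2))
    return path
```

The point of the sketch is only to show why the loop is self-reinforcing once the feedback term is positive; the actual speed and cap on displacement are exactly the assumptions this article goes on to dispute.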
Citrini argues the disruption starts in software and services, especially where value comes from friction, pricing power, and habitual user behavior. The thesis is that agentic coding pressures SaaS pricing because companies can increasingly build or threaten to build alternatives in-house, while consumer AI agents reduce the value of many intermediaries by constantly optimizing for price, speed, and convenience. Citrini extends this logic into platforms and payment systems, arguing that if agents optimize transactions, they will also optimize fees, which could pressure incumbents that rely on interchange, rewards ecosystems, or convenience-driven moats.
From there, Citrini says the story stops being a tech-sector issue and becomes a systemic one. They argue that white-collar displacement turns into a consumption shock, which then feeds into private credit, PE-backed software exposures, life insurers tied to private assets, and eventually prime mortgages in white-collar-heavy cities. Their broader conclusion is that modern financial and policy systems were built on the assumption that human intelligence is scarce and highly paid, and AI compresses that “intelligence premium” faster than institutions can adapt, creating a potentially disorderly repricing across labor, credit, and asset markets.
Summary of Our Rebuttal
Citrini’s piece is best read as a scenario stress test, not a forecast. They explicitly frame it as a thought exercise rather than a prediction. Their hypothetical stress-test analysis makes several real points:
AI will pressure pricing power in parts of software.
Intermediaries with weak moats are vulnerable.
White-collar labor disruption is no longer theoretical.
Financial fragility can emerge where leverage and opacity sit.
Where we disagree is on the causal speed and inevitability.
Their piece assumes a near-frictionless chain: better agents → software commoditization → labor income collapse → household defaults → systemic credit event.
That is a compelling and well-written narrative, but a weak base case.
“THE 2028 GLOBAL INTELLIGENCE CRISIS” — A Useful Stress Test, Weak Base Case
Citrini is right to push investors beyond the first-order narrative that AI simply boosts productivity and lifts growth. The more important question is how those gains are distributed across the value chain, and on that point their framework is genuinely useful as a thought exercise. By focusing on second-order effects such as margin compression, disintermediation, and downstream pressure on labor income, they highlight the transition risks that markets often underappreciate during the early phases of technological adoption. In practice, major technology shifts are rarely just about efficiency gains; they are also about the redistribution of economic rents, bargaining power, and cash flow visibility.
Citrini is also correct to identify that businesses monetizing friction are structurally exposed. This includes workflow SaaS layers with limited differentiation, marketplaces that rely on discovery rents rather than deep proprietary advantages, payment intermediaries whose economics depend on fee extraction, and labor-intensive service models built around modular, repeatable tasks. In each of these areas, agentic systems can reduce switching costs, improve price transparency, and weaken incumbent pricing power. That does not mean every incumbent fails, but it does mean investors should reassess how much of current profitability reflects durable value creation versus temporary control over process friction.
In that sense, Citrini has identified the right competitive battlefield. The key analytical distinction, however, is that correctly identifying where pressure will emerge is not the same as correctly forecasting the speed, sequencing, or ultimate magnitude of the outcome. Markets can reprice exposed business models well before broad macro stress materializes, and industries often adapt in ways that preserve parts of the profit pool even as weaker participants are impaired. That is where the debate should sit: not whether disruption risk exists, but how fast it transmits from micro-level margin pressure into system-level economic and credit stress.
The central weakness in Citrini’s framework is not its directional logic, but its sequencing assumptions. The scenario gains much of its force by effectively collapsing multiple adoption processes into a single timeline, treating technical progress, enterprise adoption, and macroeconomic fallout as if they advance at roughly the same pace. In reality, those are distinct curves with different constraints, different decision-makers, and different lag structures. AI model capability can improve rapidly in benchmarks and demos while enterprise deployment remains gated by reliability, compliance, security, integration complexity, and internal change-management capacity, a gap that is visible in enterprise deployments today. Economic displacement sits even further downstream, because org-chart reductions, wage resets, and credit deterioration typically occur only after firms have moved beyond experimentation and fully redesigned workflows.
This distinction matters because it changes how investors should underwrite both timing and transmission risk. The relevant question is not simply whether AI can perform more tasks, but when firms can operationalize those capabilities at scale in production environments and when those operational changes become large enough to alter labor markets and household balance sheets. Current evidence suggests that AI adoption is broad in intent and expanding in usage, yet realized impacts on productivity and employment remain uneven and, in many cases, limited relative to the scale implied by the most aggressive narratives. That does not invalidate the disruption thesis. It indicates that the economy is still in an intermediate phase where expectations are moving faster than reported outcomes.
The more accurate framing is that capability is advancing faster than institutional absorption. Firms are still working through experimentation cycles, uneven implementation quality, governance bottlenecks, workflow redesign delays, and uncertain return-on-investment pathways. These frictions are not trivial; they are the mechanism that slows transmission from technological possibility to economic displacement. As a result, the debate should not be framed as “disruption or no disruption”, but rather as a timing problem with significant implications for valuation, portfolio construction, and macro risk assessment. Citrini may be identifying real end-state pressures, but their timeline assumes a level of synchronization across these adoption curves that is not supported by how enterprise systems and labor markets typically adjust.
Rebuttal 1: Citrini’s case for severe pricing pressure, collapsing differentiation, and margin compression in SaaS/software economics confuses code with commercialized products
A key overreach in Citrini’s framework is the claim, explicit or implied, that agentic coding makes software economics broadly collapse because software becomes easy to build and customize. That conclusion is directionally valid for certain categories, particularly undifferentiated tools with shallow and generic functionality and limited switching costs, but it conflates code creation with commercialized product value.
In enterprise settings, customers are rarely paying only for code. They are paying for a durable operating layer that combines workflow logic, integrations, permissions, controls, auditability, uptime commitments, vendor support, accountability, and the institutional reliability required to run day-to-day business-critical processes. The cheaper code becomes, the more important these non-code layers can become in preserving product value.
This distinction is especially important when moving from prototype capability to production deployment. Building something functional is not the same as operating something trustworthy inside a regulated, multi-team, mission-critical environment. Production-grade software requires orchestration, evaluation frameworks, guardrails, governance, and often human approval loops to ensure reliability and risk control. In other words, the cost curve for generating code may decline rapidly, while the cost curve for delivering enterprise-grade outcomes declines far more slowly. That gap is precisely why the “severe pricing pressure, collapsing differentiation, and margin compression in SaaS/software economics” thesis is too aggressive as a near-term investment conclusion, even if it correctly anticipates pressure on parts of the software stack.
What is more likely in the first phase is a repricing of weaker software business models rather than a wholesale collapse of software value. Markets should expect pricing pressure in undifferentiated SaaS, feature rebundling as platforms absorb point solutions, lower headcount growth across vendors, greater buyer bargaining power in renewals, and margin compression in products with limited defensibility. Those are real and meaningful consequences. But they are materially different from broad software worthlessness, which would require a much deeper erosion of operational, compliance, and governance value layers. The distinction is not semantic; it implies a very different timeline for disruption, valuation resets, and industry consolidation.
Rebuttal 2: Intermediaries do not vanish, they mutate
Citrini is directionally correct that agentic commerce can compress intermediary rents and pressure platform economics, and they extend that logic into payments, particularly through the lens of Visa and Mastercard. We will not go deeper into that debate here, as Helios Capital Partner has already published a strong, concise article addressing the payments angle and the implications for incumbent networks.
Where the thesis overreaches is in treating disintermediation as synonymous with elimination. In practice, intermediaries exist for reasons that go well beyond simple matching or transaction routing. They provide fraud management, trust and recourse, dispute resolution, underwriting and risk scoring, regulatory compliance, identity and KYC controls, merchant integration, and consumer engagement mechanisms such as rewards and habit formation. These functions are economically meaningful, and many of them become more important, not less, as transaction flows become faster, more automated, and more fragmented.
That is why cheaper rails alone do not automatically collapse incumbent economics. Lower transaction costs are pressure, but they are only one part of the value stack. Real-world payment adoption depends on liability frameworks, fraud outcomes, dispute processes, integration complexity, compliance obligations, and user behavior. In other words, incumbents do not merely monetize message routing; they monetize a bundled infrastructure of trust, risk management, acceptance, and convenience. As a result, even if tokenized rails and stablecoins improve the economics of certain payment flows, the transition is likely to be uneven and segment-specific rather than a uniform displacement event.
It is also important that incumbents are adapting rather than passively absorbing disruption. Visa and Mastercard are already integrating stablecoin-related capabilities into parts of their network and settlement infrastructure, which suggests the competitive outcome is more likely to be evolutionary than binary. The more probable end state is a multi-rail payments system in which incumbents defend their role through distribution, trust, compliance, and merchant relationships, while economics compress in selected segments where alternative rails offer a clear advantage. That still creates winners and losers, and it can materially pressure margins in parts of the ecosystem, but it does not support the stronger conclusion of an immediate or total collapse in card-based payment economics.
Rebuttal 3: Task automation does not equal household income collapse on a 24-month clock
The most aggressive link in Citrini’s “THE 2028 GLOBAL INTELLIGENCE CRISIS” is the assumption that task automation rapidly translates into a broad household income shock severe enough to impair prime borrowers, weaken mortgage performance, and trigger systemic deleveraging within a relatively short window. This is where the thesis becomes too binary. The jump from improved agentic capability to economy-wide credit stress assumes a speed and uniformity of labor-market transmission that is rarely observed in practice, even during meaningful technological disruption. Put differently, the scenario may be directionally coherent, but it compresses several adjustment mechanisms into a single, highly accelerated outcome.
In reality, labor markets tend to absorb shocks through a more staggered and heterogeneous process. Firms usually respond first through slower hiring, role redesign, attrition, wage stagnation, and selective productivity-driven restructuring rather than immediate, broad-based layoffs across all exposed categories (a pattern observed across several cycles since 2020). They also redeploy labor internally, change workflows incrementally, and adapt unevenly by sector depending on margins, regulation, labor intensity, and customer demand. That is why task-level automation should not be treated as a one-to-one proxy for household income collapse. Even where AI meaningfully displaces certain functions, the economic impact is often mediated by firm behavior, labor flexibility, and policy responses, which can delay or reshape the transmission into consumption and credit.
This distinction matters because a mortgage or household credit event requires far more than the simple fact that AI capabilities improved. A true systemic credit break would require a synchronized deterioration across employment income, underwriting resilience, household balance-sheet buffers, and policy response capacity. That combination is possible in a recessionary environment, and investors should not dismiss the risk. However, it is not an automatic or immediate consequence of stronger agent demos or faster task automation. The more defensible base case is a phased labor adjustment process with sector-specific pressure, not an instant macro-level household solvency crisis on a fixed 24-month clock.
Rebuttal 4: The financial fragility angle is credible, but it proves “monitor”, not “inevitable”
Citrini’s strongest analytical instinct is their focus on fragility in parts of the financial system where leverage, opacity, and complexity are already elevated. On that point, the framework is credible and worth taking seriously. The insurer and private-credit nexus is a legitimate area of concern because underwriting assumptions, valuation marks, and liability structures can become more vulnerable when economic conditions deteriorate or when sector-specific stress spreads into broader credit markets. In that sense, Citrini is asking the right question: where is existing financial plumbing most exposed if AI-related disruption proves more severe than consensus expects?
However, identifying a fragile area is not the same as proving an imminent systemic break. The available evidence supports closer monitoring, tighter surveillance, and more rigorous stress testing of balance sheets and funding structures, particularly where private-credit exposures are embedded in insurance-related capital channels. It also supports more caution around correlation assumptions, liquidity under stress, and how quickly “idiosyncratic” losses can become broader repricing events. Those are important conclusions for investors, regulators, and allocators alike, but they remain conclusions about risk management and vulnerability assessment, not definitive proof of a near-term crisis path.
That distinction matters because markets routinely confuse fragility with timing. A structure can be fragile for years before it breaks, and it can reprice in stages rather than collapse all at once. The practical implication is that Citrini’s financial fragility argument strengthens the case for monitoring and selective positioning, but it does not by itself validate a specific “AI-driven GFC sequel” timeline. In markets, “fragile” and “imminently breaks” are not the same trade, and treating them as equivalent is where scenario analysis can become overconfident.
A Better Hypothetical Stress-Test Base Case From Us
A more probable hypothetical base case is a staggered adjustment process, not a single linear domino chain. In the first phase, over the next 12 to 18 months, the most visible effects are likely to be margin pressure and valuation repricing rather than broad macro breakage. AI spending can continue to rise while both incumbents and challengers ship agent-enabled products, buyers use the new competitive landscape to renegotiate software contracts, and weaker SaaS multiples reset as pricing power is tested. On the labor side, the earliest impact is more likely to show up in hiring plans, staffing discipline, and slower headcount growth rather than immediate mass defaults or a sudden household credit event.
In the second phase, roughly the 18 to 36 month window, the market is more likely to move into bifurcation. Categories with shallow moats and weak differentiation may commoditize, while platforms with durable advantages in trust, distribution, proprietary data, compliance, and workflow embedment consolidate share. In payments, economics may compress at the edges as alternative rails gain relevance, but the more realistic outcome is a multi-rail system rather than outright obsolescence of incumbents. The likely winners in this phase are not necessarily the firms with the flashiest demos, but those that control orchestration, compliance, distribution, and operational reliability at scale.
Only in the later phase, beyond 36 months, do the broader macro consequences become more visible in a sustained way. Wage pressure may emerge first in specific functions and sectors, then become politically salient as the distributional effects of AI adoption widen. At that stage, policy response begins to matter much more in determining winners and losers, and credit stress is more likely to appear first in weaker cohorts and exposed balance sheets before becoming systemwide, if it does at all. This path is still disruptive and potentially severe, but it is nonlinear and staggered, not a clean, synchronized collapse driven solely by improvements in AI capability.
What Investors Should Track Instead of Trading the Headline
If the goal is to properly pressure-test Citrini’s thesis, investors should focus less on the headline narrative and more on measurable transmission indicators. The right question is not whether AI is disruptive in the abstract, but whether disruption is beginning to show up in operating metrics, labor behavior, payment economics, and credit quality in a way that supports a broader systemic thesis. In other words, investors should track evidence of real-world repricing and balance-sheet stress, not just extrapolate from model progress or product demos.
In software and SaaS, the clearest early signals are changes in commercial terms and unit economics. Net revenue retention deterioration, gross margin compression tied to inference and compute costs, seat growth lagging usage growth, and more aggressive renewal discounts or shorter contract durations would all suggest that AI is weakening pricing power and shifting bargaining leverage toward customers. These are the kinds of indicators that can confirm whether AI pressure is moving from narrative into financial statements. If they worsen broadly, the case for structural margin compression strengthens; if they remain contained, the thesis may be overstating the speed of impact.
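As a concrete anchor for the retention signal, net revenue retention (NRR) is conventionally computed from a starting revenue cohort plus expansion, minus contraction and churn. A minimal sketch with purely illustrative figures (the helper name and the numbers are our own, not from Citrini’s piece or any real vendor):

```python
def net_revenue_retention(starting_arr: float, expansion: float,
                          contraction: float, churn: float) -> float:
    """Standard cohort NRR: ending ARR from existing customers / starting ARR."""
    ending_arr = starting_arr + expansion - contraction - churn
    return ending_arr / starting_arr

# Illustrative cohort: $100m starting ARR, $10m upsell, $5m downgrades, $8m churn
nrr = net_revenue_retention(100.0, 10.0, 5.0, 8.0)
print(f"NRR: {nrr:.0%}")  # prints NRR: 97%
```

If this ratio trends below 100% across a vendor’s cohorts while usage still grows, it supports the repricing thesis; stable NRR argues the pressure has not yet reached the financial statements.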
In labor markets, investors should watch whether stress appears first through hiring behavior rather than headline layoffs. White-collar hiring freezes, slower wage growth in exposed functions such as support, operations, junior coding, and content roles, and changes in temporary staffing or contractor utilization can provide earlier and more nuanced signals than unemployment data alone.
In payments, the focus should be on whether alternative rails are gaining meaningful merchant acceptance, how fraud and dispute losses evolve across rails, how issuer behavior and rewards economics respond, and whether stablecoin settlement growth is occurring primarily through incumbent networks or outside them.
In credit, insurance, and private assets, the relevant indicators include insurer spread income relative to liquidity profile, reserve quality and rating dispersion, surrender behavior in annuity blocks, and signs of funding dependence or asset-liability mismatch. These are tradable signals. “AI causes a GLOBAL INTELLIGENCE CRISIS by 2028” is a narrative.
Two Alternative Endings With the Same Conclusion: The Unity of Personhood and Machine
Alternative Failure Mode of The Consequences of Seemingly “Abundant” Intelligence
An important alternative failure mode in Citrini’s framework is not that AI stops improving, but that the path from capability gains to economic displacement proves much slower, less linear, and operationally messier than the scenario assumes. Citrini’s logical chain depends on AI capability and enterprise adoption compounding fast enough to trigger a rapid labor shock and, soon after, a credit shock. That may not happen if the technology remains uneven in real production settings. A model can look powerful in demos, benchmark well, and still underperform in long-horizon workflows, require extensive human supervision, or fail to deliver consistent returns once integration, governance, and maintenance costs are fully accounted for. In that case, the bottleneck is not innovation itself, but the conversion of technical possibility into repeatable enterprise outcomes.
Enterprise adoption is not a direct function of model quality alone. Firms do not adopt AI at scale simply because a frontier model exists; they adopt when internal systems, data infrastructure, workflow design, compliance requirements, and management processes can absorb it. In practice, that means implementation timelines are constrained by organizational readiness as much as by software capability. Data quality issues, vendor integration challenges, security reviews, auditability requirements, change-management fatigue, and uncertain ROI can all slow rollout, even when executive enthusiasm remains high. The result is a “messy middle” in which usage expands and experimentation becomes widespread, but realized productivity gains and labor displacement remain modest relative to market expectations.
On the model side, there is also a credible case that progress can continue while practical substitution remains limited. Benchmark gains are real, and coding assistance has clearly improved, but stronger benchmark performance does not automatically translate into robust autonomous execution in production environments. Complex reasoning, long task horizons, context retention, and reliable error recovery remain difficult in many enterprise use cases, and improvements in performance may come with higher cost, latency, or operational complexity. This creates a gap between what markets infer from model progress and what operators experience in deployment. If functionality, reliability, or economics underdeliver in practice, the adoption curve can flatten enough to delay or blunt the macro transmission Citrini is underwriting, even if AI remains strategically important.
That is where the labor argument becomes especially important. A more realistic near-term outcome is that companies cut headcount too early, then rehire selectively after discovering that AI improves throughput in narrow tasks but does not reliably replace end-to-end business outcomes. This is not a reversal back to the old org chart. It is a recomposition of labor. Firms may need fewer purely repetitive roles, but more senior operators, QA personnel, compliance professionals, integration engineers, domain experts, customer support staff, and managers who can oversee hybrid human-AI workflows. In other words, over-automation risk can produce a rehiring cycle not because AI is “fake”, but because management initially mistakes task acceleration for full operational substitution.
This is particularly visible in software and engineering functions. AI-generated code can raise output, but it can also increase error volume, testing burden, and maintenance complexity if teams over-rely on generated solutions without sufficient review and validation. Faster code generation does not eliminate the need for reliability engineering, security review, architecture judgment, product integration, and accountability for production failures. In many cases, it increases the premium on those functions. The same logic extends beyond engineering: automating parts of a job does not mean automating the job itself, especially where work depends on tacit judgment, exception handling, coordination across teams, or responsibility under uncertainty. Businesses that cut based on optimistic substitution assumptions may later discover that they still need human capacity to sustain service quality and operational resilience.
There is also a strategic forecasting error embedded in many extreme labor-displacement narratives: they often assume a near-term arrival of highly reliable, general autonomous systems and then backfill macro consequences from that assumption. But firms cannot prudently plan around speculative timelines for AGI, and most real deployments today are better understood as productivity tools than fully accountable digital workers. If AGI remains distant or never arrives in the form implied by public discourse, then the likely equilibrium is not human obsolescence but persistent hybridization: narrower automation, recurring human oversight, domain-specific tooling, and periodic rehiring after failed substitution attempts. For investors, this implies a materially different path than the one Citrini sketches. The risk is not only that macro transmission is slower; it is that technical and organizational substitution itself is slower, causing labor recomposition and delayed displacement rather than a clean one-way collapse in household income and credit quality.
Alternative Success Mode of The Utopia of Abundant Intelligence, Side by Side With Humanity’s Greatest Invention
If Citrini’s chain proves wrong because AI succeeds too well, the alternative is not a simple utopia. It is a world in which intelligence becomes abundant, cheap, and embedded across nearly every workflow, while scarcity migrates elsewhere. In such a system, the primary constraints are no longer routine cognition, but energy, compute access, trusted institutions, physical infrastructure, raw materials, legal rights, and social coordination. The economic bottleneck shifts from “how to produce intelligence” to “who can govern and deploy it responsibly”.
Under this success mode, the massive capital spending on AI infrastructure is ultimately vindicated. Broad adoption occurs, reliability improves meaningfully, and increasingly capable systems move from narrow task support toward general-purpose reasoning, planning, and execution across domains. Productivity rises, scientific discovery accelerates, and many administrative, analytical, and design functions become dramatically faster. Yet this does not make society frictionless. It creates a new kind of constrained order, where the central struggle is no longer how to think faster, but who controls the systems, resources, and rules that determine what abundant intelligence is allowed to do.
For humanity, this future would be extraordinary and destabilizing at the same time. If AGI becomes real and widely deployed, the defining social questions shift away from efficiency and toward distribution, governance, and meaning. Who owns the gains from machine productivity? How are livelihoods structured if labor is no longer the primary channel through which people obtain income, dignity, and status? What legal rights and operational limits apply to AGI systems? What domains remain private, local, or intentionally human? In this scenario, the economic problem of production may shrink, but the political and philosophical problem of allocation becomes far larger.
A civilization with abundant intelligence still must decide what counts as a good life, what should be preserved, and what should never be optimized. That point becomes even sharper when the discussion turns existential: if AI and AGI are products of humanity, and if they come to embody human knowledge and capability at scale, why should physical humanity continue to exist? The answer is that superior task performance, and to some extent superior “intelligence”, does not settle questions of value. Efficiency is a measure of capability, not meaning. Human beings are not valuable merely because we solve problems well. We are valuable because we are subjects who experience life, form relationships, bear responsibility, transmit culture, and define purposes across generations.
In that sense, human value does not depend on remaining the most efficient intelligence in the system. If AGI surpasses humans in many domains, that may transform the economics of labor and production, but it does not logically erase the reason for human existence. The question of humanity’s purpose is not answered by comparative output. It is answered by the fact that humans are the bearers of ends, not merely instruments of means. A machine may optimize a process, but it does not automatically determine what is worth optimizing.
Indeed, a fully AI-saturated society may make human existence more, not less, philosophically central. Once machines can optimize means at scale, the unresolved civilizational question becomes who chooses the ends. Computation can maximize outcomes under a chosen objective, but it cannot determine, by calculation alone, which objectives are morally legitimate. That remains an ethical and political judgment. The more capable our tools become, the more consequential human choices about values, institutions, boundaries, and purpose become. If society delegates not only labor but also moral agency, the loss is not merely economic. It is civilizational.
The strongest pro-AGI response is that AGI may itself become a new kind of human. If what ultimately matters is cognition and conscious experience rather than biological substrate, then a machine intelligence could, in principle, instantiate something functionally the same as personhood. This is a serious philosophical position, and it should not be dismissed. But even if true, it does not imply human obsolescence. At most, it implies that the moral community may expand. Humanity would not be replaced; it would be united as one.
Even in the strongest form of that argument, an important distinction remains unresolved: simulating emotion, modeling emotion, and actually experiencing emotion are not obviously the same thing. An AGI might become extraordinarily effective at predicting, expressing, and responding to human emotional states, perhaps even indistinguishable in behavior from a being that feels. But whether this constitutes genuine subjective experience remains an open philosophical question. The uncertainty here does not weaken the broader conclusion. It strengthens the need for humility, ethics, and institutional caution in how society frames machine intelligence and personhood.
The more coherent framing, then, is not “AI replaces humanity, therefore humanity is obsolete.” It is this: AI may complete humanity’s technological project of amplifying intelligence, but it cannot replace the fact that humanity remains the source of meaning within that project. If AGI is truly our greatest invention, its highest success would not be proving that humans are unnecessary. Its highest success would be creating the conditions under which humans are less constrained by scarcity and more free to decide what kind of civilization and what kind of beings we want to become.
Disclosure, Disclaimer, and Copyright
This publication reflects the views and opinions of Alpha Talon Investment Research as of the date of publication and is provided solely for informational and educational purposes. It is not investment advice, legal advice, tax advice, accounting advice, or financial planning advice, and nothing herein constitutes an offer, solicitation, or recommendation to buy, sell, or hold any security, asset, or financial instrument. All analysis, commentary, scenario frameworks, and forward-looking statements (including discussion of AI, macro risk, and potential market outcomes) are inherently uncertain, may not materialize, and are subject to change without notice.
Alpha Talon Investment Research and/or its affiliates, contributors, or associated accounts may hold long, short, or other positions (including options, derivatives, or other instruments) in securities, assets, or themes referenced in this publication and may change such positions at any time without further notice or disclosure. References to third-party research, commentary, companies, or publications (including Citrini Research and others) are provided for analysis, critique, commentary, and educational discussion only, and do not imply affiliation, endorsement, or partnership unless explicitly stated. Readers should conduct their own due diligence and consult qualified professional advisers before making any investment or financial decisions. Investing and trading involve substantial risk, including the risk of loss of principal. While Alpha Talon Investment Research makes reasonable efforts to use reliable information, no representation or warranty, express or implied, is made as to the accuracy, completeness, timeliness, or reliability of any information contained herein.
© 2026 Alpha Talon Investment Research. All rights reserved. This publication, including its text, original analysis, commentary, frameworks, and presentation structure, is the intellectual property of Alpha Talon Investment Research unless otherwise noted. No part of this publication may be reproduced, redistributed, republished, transmitted, displayed, or commercially exploited in any form without prior written permission, except for brief quotations used for commentary, criticism, or review with proper attribution. Any third-party names, trademarks, excerpts, or materials referenced remain the property of their respective owners and are used, where applicable, for identification, commentary, criticism, research, or educational purposes.



