<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
    <channel>
        <title>Robert Greiner</title>
        <link>https://robertgreiner.com/</link>
        <description>I write about AI strategy, leadership, and technology.</description>
        <language>en-us</language>
        <lastBuildDate>Wed, 07 Jan 2026 19:23:13 +0000</lastBuildDate>
        <atom:link href="https://robertgreiner.com/rss.xml" rel="self" type="application/rss+xml"/>

        
        <item>
            <title>The 1% Error That Ruins Everything</title>
            <link>https://robertgreiner.com/the-1-percent-error-that-ruins-everything/</link>
            <guid isPermaLink="true">https://robertgreiner.com/the-1-percent-error-that-ruins-everything/</guid>
            <pubDate>Wed, 07 Jan 2026 00:00:00 +0000</pubDate>
            <description><![CDATA[<img src="https://robertgreiner.com/images/ai-compounding-errors.webp" alt="The 1% Error That Ruins Everything" /><br/><p>In 2025, <a href="https://www.algorithma.ai/articles/why-agentic-ai-projects-fail-part-2-integrating-tech-organization-and-business-to-drive-impact">42% of companies abandoned most of their AI initiatives</a> - up from just 17% the year before. The culprit wasn&rsquo;t bad implementation or insufficient data. It was math: a 99% accurate AI agent performing 50 sequential steps succeeds only 60% of the time, and most enterprise workflows require far more than 50 decisions. The industry has been optimizing the wrong variable.</p>
<p>Executives didn&rsquo;t cancel those programs because the UX was clunky, or because there were too many em-dashes. They canceled them because the systems quietly failed in production, over and over, in ways no one could predict or fix. When it came time to show return on investment, there was nothing to show.</p>
<p>This isn&rsquo;t a talent problem or a tooling problem. It&rsquo;s a category error. We&rsquo;ve been <strong>treating probabilistic systems as deterministic infrastructure</strong>, expecting software behavior from something that is, at its core, an extremely sophisticated dice roll.</p>
<hr>
<p>Software either works or it has a bug. Models are different. They&rsquo;re <em>never fully right and never fully wrong</em>. They are distributions. And that changes everything.</p>
<p>FERZ&rsquo;s <a href="https://ferzconsulting.com/archive/FERZ_Enterprise_AI_Reliability_WhitePaper_July2025.html">reliability analysis</a> quantified what many engineering teams have learned the hard way. Take five agents, each &ldquo;pretty good&rdquo; at 85% accuracy, or one model asked to make five sequential decisions. The system-level reliability isn&rsquo;t 85%&hellip; it collapses to 44%. At ten agents, it drops below 20%.</p>
<p>The situation gets worse as you push models toward autonomy. Every step in a multi-hop plan adds another chance to drift. Sentrix Labs <a href="https://www.sentrixlabs.com/blog/why-your-agent-is-not-production-ready">documented a customer service agent that grew 2.7% more verbose every week, doubling response length in six months without anyone noticing</a>. Drift is <em>not a bug</em>. It&rsquo;s gravity, slowly pulling any stochastic system back down to earth the moment it runs unobserved.</p>
<p>And hallucination isn&rsquo;t going away. Researchers have shown that <a href="https://sebastianbarros.substack.com/p/ai-agents-today-are-almost-useless">hallucination floors don&rsquo;t vanish with more parameters</a>; they&rsquo;re baked into how these models work. Throw in retrieval, tools, and external APIs, and you haven&rsquo;t removed randomness. You&rsquo;ve just spread it across more components.</p>
<p>Then leaders take this stack and say, &ldquo;Let&rsquo;s replace an entire workflow.&rdquo; What they get isn&rsquo;t an agent. It&rsquo;s an unreliable Rube Goldberg machine with a friendly chat UI.</p>
<hr>
<p>The industry response has been to double down on control: guardrails, policy engines, deterministic validators, human-in-the-loop review. All necessary. None sufficient to rescue the &ldquo;autonomous agent&rdquo; dream.</p>
<p>Look closely at the rare &ldquo;successful&rdquo; production agents Sentrix and others showcase. Under the hood, you find three things every time: heavy deterministic scaffolding, hard-coded guardrails, and humans quietly cleaning up the mess.</p>
<p>This isn&rsquo;t deploying sophisticated high-tech AI agents&hellip; it&rsquo;s just <strong>rebuilding traditional software around an unpredictable component</strong> and calling the whole thing AI-native.</p>
<p>And you pay for it in senior engineers writing harnesses instead of features, in domain experts reviewing outputs more carefully than they review junior staff, in incident response when the agent does something &ldquo;no one has ever seen before&rdquo; that was always mathematically possible.</p>
<p><strong>This is the new AI tax.</strong> These systems often require more experienced oversight than the people you were planning to replace. Autonomy was supposed to cut headcount. In practice, it drags your most expensive people deeper into the loop.</p>
<hr>
<p>&ldquo;But&hellip;! Models will get better! We&rsquo;re just early. You don&rsquo;t want to get left behind, do you?&rdquo;</p>
<p>Sure&hellip; capability will improve. But <strong>reliability will not converge to software-like behavior.</strong> Even at 99.9% step accuracy, a 1,000-step workflow has only a 37% chance of succeeding end-to-end. Most serious business processes span thousands of micro-decisions across systems, contexts, and time, even as the software around them keeps growing more complex.</p>
<p>The math does not bend to your roadmap.</p>
<p>RAG, fine-tuning, evals, better prompts, better vendors: all of it optimizes parameters inside the same architecture. None of it changes the fact that you&rsquo;re chaining probabilistic steps and expecting deterministic outcomes.</p>
<p>Just one more roll of the dice.</p>
<p>So the strategic question has to shift. Stop asking, &ldquo;How do we make agents reliable enough to run this workflow?&rdquo; Start asking, &ldquo;Where is AI genuinely good, and where is it a liability?&rdquo;</p>
<p>Because AI <em>is</em> genuinely good at some things. It&rsquo;s extraordinary at synthesis, pattern recognition, drafting, exploration, and augmenting human judgment in the moment. It&rsquo;s terrible at sequential autonomy, consistency over time, and anything where a 1% error rate compounds into chaos.</p>
<p>Knowing the difference is the skill. Having the judgment to deploy AI where it shines, and to keep it away from where it doesn&rsquo;t, is what separates strategy from hype.</p>
<p>Instead of making the agent the star of the show, use AI to help design and <em>build</em> the show. An AI coding assistant will do far more to help you analyze your data and write actual software than trying to untangle the mess of random agentic output. Testable. Updatable. Predictable. Repeatable. Traditional software that does exactly what you need, built faster with AI&rsquo;s help.</p>
<p>That&rsquo;s not as sexy as &ldquo;autonomous agents.&rdquo; But it works. It ships. It doesn&rsquo;t delete your production database.</p>
<p>The winners won&rsquo;t be organizations with the most agents. They&rsquo;ll be the ones with the judgment to know when AI is the tool and when it&rsquo;s the trap.</p>
<p>The 42% knew something the optimists didn&rsquo;t: you can&rsquo;t debug probability into certainty.</p>
]]></description>
        </item>
        
        <item>
            <title>Believe the Checkbook</title>
            <link>https://robertgreiner.com/believe-the-checkbook/</link>
            <guid isPermaLink="true">https://robertgreiner.com/believe-the-checkbook/</guid>
            <pubDate>Fri, 19 Dec 2025 00:00:00 +0000</pubDate>
            <description><![CDATA[<img src="https://robertgreiner.com/images/checkbook-vs-megaphone.webp" alt="Believe the Checkbook" /><br/><p>Anthropic&rsquo;s AI agent was the <a href="https://bun.com/blog/bun-joins-anthropic">most prolific code contributor to Bun&rsquo;s GitHub repository, submitting more merged pull requests than any human developer</a>. Then Anthropic paid millions to acquire the human team anyway. The code was MIT-licensed; they could have forked it for free. Instead, they bought the people.</p>
<hr>
<p>Everyone&rsquo;s heard the line: <strong>&ldquo;AI will write all the code</strong>; engineering as you know it is finished.&rdquo;</p>
<p>Boards repeat it. CFOs love it. Some CTOs quietly use it to justify hiring freezes and stalled promotion paths.</p>
<p>The Bun acquisition <strong>blows a hole in that story</strong>.</p>
<p>Here&rsquo;s a team whose project was open source, whose most active contributor was an AI agent, whose code Anthropic legally could have copied overnight. No negotiations. No equity. No retention packages.</p>
<p>Anthropic still fought competitors for the right to buy that group.</p>
<p>Publicly, AI companies talk like engineering is being automated away. Privately, they deploy millions of dollars to acquire engineers who already work with AI at full tilt. That contradiction is not a PR mistake. It is a <em>signal</em>.</p>
<hr>
<p>The key constraint is obvious once you say it out loud. <strong>The bottleneck isn&rsquo;t code production, it is <em>judgment</em>.</strong></p>
<p>Anthropic&rsquo;s own announcement barely talked about Bun&rsquo;s existing codebase. It praised the team&rsquo;s ability to <a href="https://www.anthropic.com/news/anthropic-acquires-bun-as-claude-code-reaches-usd1b-milestone">rethink the JavaScript toolchain &ldquo;from first principles&rdquo;</a>.</p>
<p>That&rsquo;s investor-speak for: we&rsquo;re paying for how these people think, what they choose not to build, which tradeoffs they make under pressure. They didn&rsquo;t buy a pile of code. They bought a track record of correct calls in a complex, fast-moving domain.</p>
<p>AI drastically increases the volume of code you can generate. It does almost nothing to increase your supply of people who know which ten lines matter, which pull request should never ship, and which &ldquo;clever&rdquo; optimization will explode your latency or your reliability six months from now.</p>
<p>So when Anthropic&rsquo;s own AI tops the contribution charts and they still decide the scarce asset is the human team, pay attention. <strong>That&rsquo;s revealed preference.</strong></p>
<p>Leaders don&rsquo;t express their true beliefs in blog posts or conference quotes. They express them in hiring plans, acquisition targets, and compensation bands. If you want to understand what AI companies actually believe about engineering, <strong>follow the cap table, not the keynote.</strong></p>
<hr>
<p>So what do you do with this as a technical leader?</p>
<p>Stop using AI as an excuse to devalue your best knowledge workers. Use it to give them more leverage.</p>
<p>Treat AI as force multiplication for your highest-judgment people. The ones who can design systems, navigate ambiguity, shape strategy, and smell risk before it hits. They&rsquo;ll use AI to move faster, explore more options, and harden their decisions with better data.</p>
<p>Double down on developing judgment, not just syntax speed: architecture, performance modeling, incident response, security thinking, operational literacy. The skills Anthropic implicitly paid for when it bought a team famous for rethinking the stack, not just writing another bundler.</p>
<p>Be careful about starving your junior pipeline based on &ldquo;coding is over&rdquo; narratives. As AI pushes routine work down, the gap between senior and everyone else widens. Companies that maintain a healthy apprenticeship ladder will own the next generation of high-judgment engineers while everyone else hunts the same shrinking senior pool at auction.</p>
<p>Most important: <strong>calibrate your strategy to revealed preferences</strong>, not marketing copy. When someone&rsquo;s AI writes more code than their engineers but they still pay millions for the engineers, believe the transaction, not the tweet.</p>
]]></description>
        </item>
        
        <item>
            <title>The Most Expensive Wall in Software</title>
            <link>https://robertgreiner.com/the-most-expensive-wall-in-software/</link>
            <guid isPermaLink="true">https://robertgreiner.com/the-most-expensive-wall-in-software/</guid>
            <pubDate>Wed, 17 Dec 2025 00:00:00 +0000</pubDate>
            <description><![CDATA[<img src="https://robertgreiner.com/images/fde-warehouse.webp" alt="The Most Expensive Wall in Software" /><br/><p>Palantir didn&rsquo;t have a working product for the first several years. What they had were brilliant engineers building custom solutions on customer sites. Somehow that &ldquo;broken&rdquo; model made them worth nearly $500 billion. The company that couldn&rsquo;t ship software became one of the most valuable enterprise platforms of the decade by doing the one thing every engineering VP tries to prevent: sending their best people to live with customers instead of letting them write code in peace.</p>
<p>We treat that story like a Palantir quirk, some weird exception from a weird company. It isn&rsquo;t. It&rsquo;s a preview.</p>
<p>The &ldquo;Forward Deployed Engineer&rdquo; sounds like a new job title. It&rsquo;s not. It marks the moment a company admits that the wall between &ldquo;building software&rdquo; and &ldquo;understanding the problem&rdquo; has been a very expensive illusion.</p>
<hr>
<p>For years, we&rsquo;ve optimized for engineer &ldquo;focus.&rdquo; Noise-canceling headphones. Dark mode. A Jira board that shields them from anything messy or human. No customer calls. No sales drama. Just tickets.</p>
<p>We assumed fewer distractions meant more productivity. We never questioned whether we were removing the input they actually needed.</p>
<p>An engineer told to &ldquo;build an executive dashboard&rdquo; isn&rsquo;t doing product work. They&rsquo;re playing telephone. One person heard it from sales. Sales heard it from a VP. The VP heard it from a board slide. By the time the engineer sees the ticket, the real problem is unrecognizable.</p>
<p>So they do what they&rsquo;re paid to do. They make boxes and charts. The execs shrug. Nobody is thrilled. The engineer walks away a little more convinced they should just be left alone to code.</p>
<p>We call that a personality type. Usually, it&rsquo;s an organizational symptom.</p>
<hr>
<p>Forward Deployed Engineers flip the script. Same brains. Same editor. Different raw material.</p>
<p>Instead of sitting behind a backlog, they sit inside the customer&rsquo;s day. Three or four days a week on-site, watching how analysts fudge CSVs, how operators bypass the tool, where people swear under their breath because the system &ldquo;just doesn&rsquo;t get it.&rdquo; Then they fix it, right there, while the user is still at their keyboard.</p>
<p>You don&rsquo;t need a PRD when you&rsquo;re watching someone copy-paste the same field into three different systems.</p>
<p>The last mile of business logic and &ldquo;reasoning tokens&rdquo; is where the moat lives. Those messy, tacit rules in a claims team or a supply chain desk are precisely what future AI systems will need to learn.</p>
<p>Think about what an FDE actually captures. Not just requirements. Not just bug reports. They&rsquo;re watching the workarounds. The spreadsheet that Linda maintains because the system doesn&rsquo;t handle edge cases. The mental model that a senior analyst has built over fifteen years that lets her spot a fraudulent claim in seconds. The tribal knowledge that exists only in the heads of people who&rsquo;ve been doing the job long enough to know where the bodies are buried.</p>
<p>That knowledge has always been valuable. It&rsquo;s about to become essential.</p>
<p>Large language models are remarkable at general reasoning. They&rsquo;re terrible at knowing that your company approves claims differently on the last day of the quarter, or that &ldquo;rush order&rdquo; means something completely different to the Chicago warehouse than it does to the one in Phoenix. The models don&rsquo;t know that when a customer says &ldquo;the usual,&rdquo; they mean the configuration they&rsquo;ve been using since 2019 that nobody documented.</p>
<p>This is the knowledge gap that will separate AI that works from AI that sort of works. And FDEs are uniquely positioned to close it.</p>
<p>Every time an FDE watches someone work around the system, they&rsquo;re documenting a gap in the model&rsquo;s training data. Every time they build a quick fix for a specific customer workflow, they&rsquo;re encoding business logic that no foundation model will ever learn from public data. They&rsquo;re not just smoothing sales cycles. They&rsquo;re harvesting structured insight from chaos.</p>
<p>If you see FDEs as revenue padding, you&rsquo;ll treat them like overpaid sales engineers. If you see them as a data acquisition engine for your AI future, you&rsquo;ll treat them like your most strategic asset—and that recognition rewrites the org chart.</p>
<hr>
<p>Product management starts to hollow out. When engineers have direct customer relationships and live context, you don&rsquo;t need as many people rewriting customer pain into Jira poetry. Some PMs evolve into true strategists, synthesizing markets, pricing, portfolio bets. Others, whose job was &ldquo;talk to customers, then make tickets,&rdquo; find there&rsquo;s no seat left.</p>
<p>Compensation models buckle. What do you pay the engineer who rewired a deployment on-site and saved a $5 million deal from churning? Base salary plus&hellip; a sales commission? A spot bonus? Equity? No spreadsheet handles &ldquo;closed the deal and wrote the patch.&rdquo;</p>
<p>Career ladders fork. The old path (IC, senior, staff, principal) assumed &ldquo;deeper into the code&rdquo; was the only axis. FDEs create a second track: deep enough technically, but wide in context. They know the customer&rsquo;s industry, regulatory mess, and how the CFO thinks. Both tracks are valuable. They will quietly compete for your best people.</p>
<p>The &ldquo;coding is dead&rdquo; crowd has this backwards. It&rsquo;s not that engineers disappear, it&rsquo;s that the walls between roles do. An FDE with AI workflows can do the job of the engineer, the solutions architect, the PM who translates requirements, and half the support team. They&rsquo;re in the room, they understand the problem, and now they have tools that let them ship the fix before the meeting ends. The specialists who survive aren&rsquo;t the ones who go deeper into one skill. They&rsquo;re the ones who go wider across the problem—and use AI to cover the gaps.</p>
<hr>
<p>You&rsquo;re already paying for the gap between your engineers and reality. You pay for it in features nobody uses. In quarters-long roadmap resets. In &ldquo;strategic pivots&rdquo; that are really just corrections to bad guesses.</p>
<p>Joe Lonsdale said about Palantir&rsquo;s early days: &ldquo;We didn&rsquo;t actually have a product that worked for the first several years. What we had were brilliant engineers who could quickly build solutions for specific customer problems.&rdquo; That sounded like an admission. It was also their advantage. Every on-site hack became another puzzle piece of the eventual platform.</p>
<p>Not every company can station engineers at every client. But every company can lower the wall. Put engineers on sales calls. Rotate them through support. Let them watch users struggle without a PM running interference. Treat customer exposure as fuel for better code, not a distraction from it.</p>
<p>Palantir&rsquo;s &ldquo;broken&rdquo; model turned out to be the only thing that wasn&rsquo;t broken. They understood before everyone else that the distance between your engineers and reality is the most expensive line item on your P&amp;L.</p>
]]></description>
        </item>
        
        <item>
            <title>The Breaker Box Economy</title>
            <link>https://robertgreiner.com/the-breaker-box-economy/</link>
            <guid isPermaLink="true">https://robertgreiner.com/the-breaker-box-economy/</guid>
            <pubDate>Tue, 04 Nov 2025 00:00:00 +0000</pubDate>
            <description><![CDATA[<img src="https://robertgreiner.com/images/datacenter-breaker-box.webp" alt="The Breaker Box Economy" /><br/><p>During a summer blackout when I was a kid, a neighbor ran an orange extension cord across the street so our freezer wouldn&rsquo;t thaw. It looked absurd: this thin line humming with borrowed power, keeping the lasagna alive. But it worked. In a pinch, you build your own grid.</p>
<p>OpenAI is doing the grown-up version. Not with cords, but contracts. They&rsquo;re stringing a private power grid across rival utilities, locking in long-term &ldquo;compute offtake&rdquo; so the lights of their AI never flicker.</p>
<p>Look at the map they&rsquo;ve drawn. They <a href="https://techcrunch.com/2025/01/21/microsoft-is-no-longer-openais-exclusive-cloud-provider/">pried open their exclusivity with Microsoft</a>, won the right to buy from any cloud, and immediately signed a <a href="https://www.cnbc.com/2025/11/03/open-ai-amazon-aws-cloud-deal.html">seven-year, $38 billion deal with Amazon</a>. Then came data center projects with <a href="https://openai.com/index/announcing-the-stargate-project/">Oracle, SoftBank, and sovereign partners in the Gulf</a>—$500 billion through the Stargate Project. In parallel, they locked in <a href="https://www.cnbc.com/2025/10/13/openai-partners-with-broadcom-custom-ai-chips-alongside-nvidia-amd.html">chip supply with Nvidia, AMD, and Broadcom</a> so the turbines behind the meter actually spin. None of this reads like a software roadmap. It reads like a utility prospectus.</p>
<p>For a decade, the cloud dictated terms. Everyone else took what they could get. Now the script flips. Hyperscalers become suppliers. The leading AI buyer aggregates their capacity. When OpenAI complained it couldn&rsquo;t get enough compute from Microsoft alone, it wasn&rsquo;t a feature request. It was a reliability concern.</p>
<p><strong>The numbers tell you where the leverage moved.</strong> Last year, <a href="https://www.cnbc.com/2025/10/31/tech-ai-google-meta-amazon-microsoft-spend.html">Amazon, Google, Meta, and Microsoft spent over $380 billion on infrastructure</a> — more than <a href="https://www.worldeconomics.com/Country-Size/Finland.aspx">the entire GDP of Finland</a>, spent in a single year by four companies. OpenAI, meanwhile, <a href="https://fortune.com/2024/09/28/openai-5-billion-loss-2024-revenue-forecasts-fundraising-chapgpt-fee-hikes/">remains unprofitable</a>. Yet they&rsquo;re committing $38 billion to Amazon over seven years. That deal alone exceeds <a href="https://companiesmarketcap.com/ford/marketcap/">Ford&rsquo;s entire market cap</a>.</p>
<p>The traditional calculus would call this a bubble. The company with no profits dictating terms to the most valuable companies on earth. But that misreads what&rsquo;s happening. OpenAI isn&rsquo;t betting they&rsquo;ll be profitable next quarter. They&rsquo;re betting that guaranteed access to compute becomes the most valuable asset in technology. They&rsquo;re securing supply before the shortage arrives.</p>
<p>This is what commodity markets look like when everyone realizes the same thing at once. <a href="https://www.cnbc.com/2021/05/14/chip-shortage-expected-to-cost-auto-industry-110-billion-in-2021.html">In 2021, car manufacturers couldn&rsquo;t build vehicles</a> because they didn&rsquo;t own chip fabrication. They got outbid by companies that did. Now imagine that dynamic, but with compute instead of semiconductors, and the stakes aren&rsquo;t empty dealer lots. They&rsquo;re whether your AI works at all.</p>
<p><strong>The shift isn&rsquo;t subtle.</strong> Strategy used to be about inventing a better model. Now it&rsquo;s about financing a continent of capacity and keeping it fed. Risk used to be &ldquo;does it work?&rdquo; Now it&rsquo;s &ldquo;does it arrive on time?&rdquo; Whoever aggregates demand across utilities starts to look less like a tenant and more like a grid operator.</p>
<p>Consider what that means for the next decade. The breakthrough that matters won&rsquo;t necessarily be the cleverest algorithm. It will be who locked in supply at 2025 prices before the 2027 shortage. Who secured diversity so a single vendor&rsquo;s outage doesn&rsquo;t crater their service. Who convinced a sovereign wealth fund that compute infrastructure is as strategic as oil reserves.</p>
<p>In commodities, advantage compounds quietly. The steel mill that signed iron ore contracts before prices spiked doesn&rsquo;t celebrate publicly. They just keep running while competitors idle. In AI, we&rsquo;re approaching the same dynamic. The winners will be the ones who treated compute like the scarce resource it&rsquo;s becoming, not like the abundant cloud capacity it used to be.</p>
<p><strong>All this infrastructure raises a different question:</strong> what happens to the people? Recent <a href="https://www.stlouisfed.org/on-the-economy/2025/aug/is-ai-contributing-unemployment-evidence-occupational-variation">analyses from the St. Louis Fed</a> paint a more complex picture than the standard narrative. Occupations with higher AI exposure have experienced larger unemployment rate increases between 2022 and 2025. Computer and mathematical occupations, among the most AI-exposed at around 80%, saw some of the steepest unemployment rises. Meanwhile, blue-collar jobs and personal service roles, which have limited AI applicability, experienced relatively smaller increases.</p>
<p>But the infrastructure being built suggests different stakes than current conditions reveal. Some economists warn that if systems approach human-like general intelligence within years, <a href="https://www.hbs.edu/bigs/will-artificial-intelligence-improve-or-eliminate-jobs">wages and work could be jolted</a> in ways our social safety nets weren&rsquo;t designed to handle. The gap between today&rsquo;s emerging patterns and tomorrow&rsquo;s possible disruption is the same gap that existed between early subprime mortgage exposure and full-blown crisis. Not everyone sees the bridge until it&rsquo;s crossed.</p>
<p>The path forward depends less on what AI can do than on <a href="https://www.mckinsey.com/capabilities/tech-and-ai/our-insights/superagency-in-the-workplace-empowering-people-to-unlock-ais-full-potential-at-work">whether we invest in reskilling</a> at the same rate we pour concrete for data centers. Either way, the bottleneck won&rsquo;t be ideas. It will be throughput.</p>
<p>The premium shifts from model architecture to infrastructure literacy. For engineers, understanding how to optimize for constrained compute becomes more valuable than squeezing another point of accuracy. For companies, strategic advantage flows to those who secure capacity now, even at uncomfortable cost. Waiting for prices to drop assumes supply will meet demand. History suggests otherwise.</p>
<p>The boring bets may matter most. Not the sexiest model, but the companies with the longest runway of guaranteed compute. Not the flashiest demo, but the partnerships that ensure it keeps running under load. And for nations, compute dependency becomes a geopolitical wedge. Countries that built domestic chip fabs after recent shortages are now asking the same questions about AI infrastructure. The grid matters more than the code running on it.</p>
<p><strong>We spent a decade believing software eats the world because it scaled like thought.</strong> Marginal costs near zero. Distribution instant. Barriers low. The next decade looks different. AI scales like energy: constrained by physical infrastructure, governed by supply contracts, and bottlenecked by whoever controls the flow.</p>
<p>In that world, brilliance still matters. But the decisive move isn&rsquo;t elegant. It&rsquo;s securing the breaker box before the lights go out.</p>
]]></description>
        </item>
        
        <item>
            <title>The Internet's Forgotten Superpower</title>
            <link>https://robertgreiner.com/how-we-forgot-the-url/</link>
            <guid isPermaLink="true">https://robertgreiner.com/how-we-forgot-the-url/</guid>
            <pubDate>Mon, 03 Nov 2025 00:00:00 +0000</pubDate>
            <description><![CDATA[<img src="https://robertgreiner.com/images/retro-web-state.webp" alt="The Internet's Forgotten Superpower" /><br/><p>When I was eight, my save button was a pencil.</p>
<p>Not the controller. A pencil. And a scrap of paper.</p>
<p>You&rsquo;d finish a stage in Mega Man 2 and the game would show you a grid. Five rows of dots, each one either empty or filled. You&rsquo;d copy it down dot by dot, turn off the NES, and come back days later. Enter that same pattern and your world reappeared. All eight robot masters. Every E-tank. Metal Man&rsquo;s stage half-cleared.</p>
<p>One small grid held your entire state.</p>
<p>You expected it to work. You trusted it.</p>
<p>The web has had this same feature since 1991. We just stopped using it.</p>
<h3 id="what-we-broke">What We Broke</h3>
<p>A colleague sends you a GitHub link. It doesn&rsquo;t just open the file. It highlights lines 8 through 15, exactly where the bug lives. You land in the right place, conversation ready to start.</p>
<p>Figma does the same. Click a teammate&rsquo;s link and you&rsquo;re on their canvas, same position, sometimes same object selected.</p>
<p>Google Maps puts coordinates right in the URL. A pin isn&rsquo;t just &ldquo;coffee shop.&rdquo; It&rsquo;s precisely where you were looking.</p>
<p>This isn&rsquo;t innovation. It&rsquo;s just the web working as designed.</p>
<p>Then React launched in 2013 and single-page applications became the default. The trade seemed worth it: instant updates, no flicker, that native-app feel.</p>
<p>But the cost was steeper than anyone admitted.</p>
<p>SPAs broke the browser&rsquo;s most fundamental contract: refresh should restore, not destroy. The back button should remember. A link should mean something.</p>
<p>Instead, we got applications where your filters vanish on reload. Where sharing your screen means sending a link to a useless homepage, then giving verbal directions. Where analytics teams write custom JavaScript to manually fire events every time the URL changes. Except half the time the URL doesn&rsquo;t change because updating it is &ldquo;extra work.&rdquo;</p>
<p>We built save systems that die in RAM.</p>
<h3 id="why-nobody-does-this">Why Nobody Does This</h3>
<p>It&rsquo;s easier not to.</p>
<p>Redux launched in 2015 and everyone copied the pattern. State lives in memory, managed by reducers. Tutorials taught this approach. Libraries assumed it. The entire ecosystem optimized around it.</p>
<p>It worked until you hit refresh. Then tutorials would sheepishly mention you&rsquo;d need to &ldquo;rehydrate from the server&rdquo; like it was some minor detail.</p>
<p>The URL sat there, a solved problem we chose to ignore.</p>
<p>Early React Router didn&rsquo;t even consider the URL a first-class state container. It was decoration. The routing library itself didn&rsquo;t believe routes should carry data.</p>
<p>And nobody wanted to think about what belongs in a URL. Is it IDs? Filters? View modes? Sort order? The answer is &ldquo;it depends,&rdquo; which means you actually have to think about your application.</p>
<p>It&rsquo;s easier to dump everything in Redux and hope for the best.</p>
<h3 id="the-real-complexity">The Real Complexity</h3>
<p>To be fair, URL state isn&rsquo;t trivial.</p>
<p>In practice, URLs stop being reliably portable somewhere around 2,000 characters. Try serializing a complex filter object and you&rsquo;ll hit that ceiling fast. Put sensitive data in URLs and it leaks everywhere: server logs, browser history, analytics tools, shoulder surfers.</p>
<p>Nested objects don&rsquo;t serialize cleanly. Arrays of objects with their own nested arrays? Good luck making that readable. And if you naively push every state change to the URL, you pollute browser history until the back button becomes unusable.</p>
<p>These are real problems.</p>
<p>But they&rsquo;re solvable problems. And more importantly, they&rsquo;re problems worth solving.</p>
<p>The character limit matters for complex queries with dozens of filters. Most applications have three to five. IDs are short. Sort orders and view modes take a few characters. You&rsquo;re not serializing your entire database.</p>
<p>Sensitive data never belonged in URLs anyway. Authentication tokens go in cookies or headers. PII stays on the server. This isn&rsquo;t a URL problem, it&rsquo;s a security boundary you should already have.</p>
<p>Complex objects? Most view state isn&rsquo;t that complex. When it is, you can use short identifiers that reference server-side state. Stripe does this with their expandable API parameters. Linear does it with saved filters.</p>
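<p>A minimal sketch of that indirection, in plain JavaScript. The names (<code>saveFilter</code>, <code>loadFilter</code>) are illustrative, not from Stripe or Linear, and an in-memory <code>Map</code> stands in for the real server-side store:</p>

```javascript
// Illustrative sketch: the full filter object lives server-side under a
// short id, and only the id rides in the URL. The Map stands in for a
// real backing store; names are hypothetical.

const savedFilters = new Map();
let nextFilterId = 0;

function saveFilter(filter) {
  const id = `f${++nextFilterId}`;
  savedFilters.set(id, filter);
  return id; // the URL carries only something like "?filter=f1"
}

function loadFilter(id) {
  // Returns null for unknown ids, e.g. a stale shared link.
  return savedFilters.get(id) ?? null;
}
```

<p>The URL stays short and shareable, and the complex object stays where complex objects belong.</p>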
<p>History pollution? Use replaceState instead of pushState for transient updates. Problem solved in one line.</p>
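<p>Here is what that looks like as a minimal sketch in plain browser JavaScript. The helper names are illustrative, not from any library; <code>buildUrl</code> is pure, and only <code>syncStateToUrl</code> assumes a browser where <code>history</code> and <code>window</code> exist:</p>

```javascript
// Illustrative sketch: write view state to the URL without polluting
// browser history. buildUrl is pure; syncStateToUrl is browser-only.

function buildUrl(pathname, state) {
  const params = new URLSearchParams();
  for (const [key, value] of Object.entries(state)) {
    // Skip empty values so defaults don't clutter the URL.
    if (value !== undefined && value !== null && value !== "") {
      params.set(key, String(value));
    }
  }
  const query = params.toString();
  return query ? `${pathname}?${query}` : pathname;
}

function syncStateToUrl(state, { transient = true } = {}) {
  const url = buildUrl(window.location.pathname, state);
  if (transient) {
    // Transient updates (keystrokes in a filter box) replace the entry...
    history.replaceState(null, "", url);
  } else {
    // ...deliberate navigation pushes one, so back can undo it.
    history.pushState(null, "", url);
  }
}
```

<p>Typing in a search box would call <code>syncStateToUrl(state)</code> on each change; opening a different item would pass <code>{ transient: false }</code>, so the back button steps between items rather than between keystrokes.</p>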
<p>The complexity exists. But it&rsquo;s manageable complexity. The kind engineers solve every day. We just decided it wasn&rsquo;t worth the effort.</p>
<h3 id="the-test">The Test</h3>
<p>Durable, user-chosen facts belong in the URL.</p>
<p>If someone set filters, they go in the URL. If they chose a view mode, it goes in the URL. If they navigated to a specific item, its ID goes in the URL.</p>
<p>The test is simple: if someone shares this link, should they see the same thing?</p>
<p>If yes, it belongs in the URL.</p>
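<p>And the payoff of passing the test is that restoring state on load becomes trivial. A minimal sketch, assuming hypothetical keys and defaults (any real app has its own):</p>

```javascript
// Illustrative sketch: restore user-chosen view state from the URL on
// page load, falling back to defaults. Keys and defaults are made up.

const DEFAULT_VIEW_STATE = { view: "list", sort: "newest", status: "open" };

function parseViewState(search, defaults = DEFAULT_VIEW_STATE) {
  const params = new URLSearchParams(search);
  const state = { ...defaults };
  for (const key of Object.keys(defaults)) {
    // Only known keys are honored, so junk params can't inject state.
    if (params.has(key)) {
      state[key] = params.get(key);
    }
  }
  return state;
}
```

<p>On load, <code>parseViewState(window.location.search)</code> gives you back exactly what the user (or whoever shared the link) chose. Refresh restores instead of destroying.</p>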
<p>Google ships billions of search results. Every one is a URL with your query in it: google.com/search?q=url+state+management</p>
<p>Figma, Linear, Trello. Every design, every issue, every card has an address.</p>
<p>These aren&rsquo;t clever hacks. They&rsquo;re examples of what happens when you treat the URL as infrastructure instead of decoration.</p>
<h3 id="what-we-gave-up">What We Gave Up</h3>
<p>We chased the native app feel and forgot why the web matters.</p>
<p>Native apps can&rsquo;t share state with a link. Can&rsquo;t bookmark a screen. Can&rsquo;t open three views in separate tabs. The web could do all of this by default. We broke it.</p>
<p>Single-page applications have real benefits. Speed. Smooth transitions. Reactive updates. But those benefits don&rsquo;t require abandoning the URL as a state container.</p>
<p>You can have instant updates and meaningful addresses. Smooth transitions and working back buttons. The reactive experience and shareable links.</p>
<p>The frameworks that win long-term will be the ones that treat the URL as infrastructure. That make it easy to put state there. That default to meaningful addresses instead of treating them as decoration.</p>
<h3 id="the-web-remembers">The Web Remembers</h3>
<p>The URL has been waiting thirty years to be your save code.</p>
<p>Every application that ignores it is one refresh away from losing your work. One shared link away from confusion. One back button away from frustration.</p>
<p>Your eight-year-old self knew better. Drew that grid. Kept that scrap of paper.</p>
<p>The web gave you something better.</p>
<p>Stop building amnesia into your applications.</p>
]]></description>
        </item>
        
        <item>
            <title>The Experience Upload</title>
            <link>https://robertgreiner.com/the-experience-upload/</link>
            <guid isPermaLink="true">https://robertgreiner.com/the-experience-upload/</guid>
            <pubDate>Thu, 23 Oct 2025 00:00:00 +0000</pubDate>
            <description><![CDATA[<img src="https://robertgreiner.com/images/reading-robot.webp" alt="The Experience Upload" /><br/><p>Remember when Neo downloaded kung fu directly into his brain? &ldquo;I know kung fu,&rdquo; he said, eyes snapping open after seconds of upload. That scene from The Matrix doesn&rsquo;t feel like science fiction anymore. It&rsquo;s basically Tuesday for any junior analyst with ChatGPT.</p>
<p>Generative AI has become an experience accelerator that compresses decades of pattern recognition into an afternoon. A newcomer can synthesize 2,000 sales pitches, 500 project postmortems, and years of design reviews, then extract the winning templates. The apprenticeship model, where you shadow masters for years to absorb their judgment, is being disrupted by something closer to direct knowledge transfer.</p>
<p><a href="https://hai.stanford.edu/ai-index/2025-ai-index-report">Stanford&rsquo;s AI Index shows AI tools meaningfully narrow skill gaps</a>, with the biggest performance gains flowing to less-experienced workers. Junior employees get the most lift because they&rsquo;re gaining that &ldquo;I&rsquo;ve seen this before&rdquo; intuition that used to require years in the trenches.</p>
<p>When everyone can download the standard moves, competitive advantage relocates entirely. The edge migrates from having patterns to selecting them. <a href="https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai">McKinsey&rsquo;s research shows companies pulling real productivity gains from generative AI</a>, particularly in content-heavy workflows that traditionally rewarded tenure over tempo. Templates are now abundant. Knowing when and how to deploy them remains scarce.</p>
<p>Writing a product spec? The structure is free. Any AI can provide the template. Choosing which trade-offs to make, what features to kill, whose pain to prioritize? That&rsquo;s where advantage lives. The same dynamic plays out everywhere: sales, research, design, strategy.</p>
<p>Pattern abundance breeds pattern addiction. The junior PM who deploys a perfect RICE framework without understanding why their particular product needs different criteria. The analyst who runs flawless regressions without asking if they&rsquo;re measuring the right thing. Templates that look sophisticated crumble on contact with reality.</p>
<p>Think of expertise like a library card. For most of history, the card itself was precious. You earned it through years in the stacks, slowly building your catalog. Now the entire library rolls to your desk on command. The constraint shifted from access to selection.</p>
<p>AI can upload a hundred negotiation tactics into your working memory. Winning the negotiation still requires reading the room, sensing undercurrents, knowing when to break the pattern because this situation is different. Even in <a href="https://arxiv.org/html/2412.11427v2">scientific research, models dramatically accelerate discovery, but validation and framing remain essentially human</a>.</p>
<p>When moves become free, judgment becomes the only scarcity that compounds. You don&rsquo;t win by having seen everything. You win by truly seeing what&rsquo;s in front of you and knowing which of those thousand downloaded patterns actually applies, if any.</p>
]]></description>
        </item>
        
        <item>
            <title>The Three Infinity Stones That Can Erase Your Company</title>
            <link>https://robertgreiner.com/the-three-infinity-stones-that-can-erase-your-company/</link>
            <guid isPermaLink="true">https://robertgreiner.com/the-three-infinity-stones-that-can-erase-your-company/</guid>
            <pubDate>Wed, 15 Oct 2025 00:00:00 +0000</pubDate>
            <description><![CDATA[<img src="https://robertgreiner.com/images/reddit-infinity-bot.webp" alt="The Three Infinity Stones That Can Erase Your Company" /><br/><p>A reputation can be erased with three permissions most companies don’t control.</p>
<p>Think of it like Thanos’s gauntlet, but the stones are Reddit Mod, SERP (Search Engine Result Page), and LLM (like ChatGPT). Slip that glove on and reality bends. Not because the truth changed, but because the places people trust to find the truth did.</p>
<h2 id="the-bootcamp-that-got-slowly-strangled">The Bootcamp That Got Slowly Strangled</h2>
<p><a href="https://larslofgren.com/codesmith-reddit-reputation-attack/">Lars Lofgren documented a chilling case study</a>: a coding bootcamp allegedly watching its reputation dissolve in slow motion. The pattern? Concerned posts pinned to the top. Defensive replies deleted as &ldquo;too aggressive.&rdquo; Critical threads climbing Google rankings while rebuttals disappeared. The moderator shaping the narrative? Someone who happened to run a competing program.</p>
<p>The beauty was in the restraint. No rants, no smoking guns. Just a steady drip of concerned questions that somehow never got answered. Alumni defending the program? Their comments would disappear. Too aggressive, the mod would explain, if anyone asked. Meanwhile, every “I heard some troubling things” post would stick around, accumulating upvotes and anxiety.</p>
<p>Month by month, those threads crept up Google’s rankings until searching the bootcamp’s name meant swimming through doubt. Then the AI models started training on all that helpful Reddit content. Now when someone asks ChatGPT about coding bootcamps, guess whose concerns get recycled as conventional wisdom?</p>
<p>The company’s revenue didn’t explode. It deflated, slowly, like air through a pinhole you can’t quite locate.</p>
<p>Whether or not the mod was actually misbehaving is, for you, beside the point. What matters is understanding the physics at play, because they determine whether you survive and thrive.</p>
<p>Here’s how the whole system works, stone by stone.</p>
<h2 id="stone-one-the-mod">Stone One: The Mod</h2>
<p>A rival grabs moderation of a niche subreddit that prospects actually read. The new mod doesn’t need to invent facts. Just nudge. Seed insinuations. Pin the “open questions.” Delete the boring defenses. The thread hardens into a narrative. It’s cheap, it scales, and it never tires.</p>
<p>This can happen at the individual level, or through mob groupthink.</p>
<p>We learned how much power unpaid moderators really have during the 2023 Reddit blackout, when a mod revolt flipped thousands of subreddits to private. Overnight, a volunteer class reminded a $10 billion platform who actually holds the keys. That same control can shape a market’s first impression of your company for years.</p>
<h2 id="stone-two-the-serp">Stone Two: The SERP</h2>
<p>Google has been giving forums like Reddit prominent real estate because people want “real talk” from peers. <a href="https://blog.google/products/search/discussions-and-forums-results/">Google began highlighting Reddit and other forum threads in results</a> as an explicit strategy. That means a single, active thread can sit next to your homepage for your own brand terms.</p>
<p>Google didn’t conspire against you; it just turned the mic toward the crowd and turned up the volume on forums by design. Search is not a neutral index. It’s a curation engine with preferences, and right now it prefers the places where the narrative about you is being written by someone else.</p>
<h2 id="stone-three-the-llm">Stone Three: The LLM</h2>
<p>Large language models eat the web, and the web now includes a lot of Reddit. Officially so. <a href="https://www.theverge.com/2024/5/16/24158043/openai-reddit-deal-chatgpt-api-content">When OpenAI struck a deal to license Reddit content</a>, it formalized what many already suspected: what rises in those threads will rise again in AI answers.</p>
<p>The model doesn’t know your context or your competitor’s incentives. It knows frequency. It knows what looks “human.” And thanks to the illusory truth effect, we’re wired to believe what we hear repeatedly, even when we know better.</p>
<h2 id="the-feedback-loop">The Feedback Loop</h2>
<p>Put those three stones together and you get a feedback loop: a forum thread gains traction, search promotes it, models repeat it, and repetition hardens into credibility.</p>
<p>If you’ve ever watched a good company’s name become a punchline inside a forum, you know the feeling. Sales calls get weird. Candidates ask sideways questions. Friends send “Saw this, you okay?” texts. You ship the same quality, but the room temperature drops five degrees.</p>
<p>The failure mode isn’t a viral crisis. It’s a slow, durable slant that tilts the playing field just enough to make every win feel uphill. It’s cheaper to capture the gate than to storm the castle.</p>
<h2 id="what-you-actually-do">What You Actually Do</h2>
<p>This isn’t a PR story. It’s an architecture story. Your brand now runs on an information supply chain you don’t control. Reality, for your buyers, is compiled. Mods decide what persists. Search decides what’s seen. Models decide what gets repeated back to your buyers.</p>
<p>So you design like it’s adversarial:</p>
<p><strong>Map the surfaces where first impressions form.</strong> Which subreddits, forums, and Discords do your buyers actually read? Treat them like production systems, even when you don’t control them.</p>
<p><strong>Watch for asymmetry.</strong> Healthy communities show variance: praise, critique, indifference. When every thread tilts one way, document patterns. You’re investigating a leak, not winning a debate.</p>
<p><strong>Build upstream relationships.</strong> Invest quietly in connections with platform trust and safety teams. You’re not asking for special treatment. You’re asking for a path when the normal appeal ladder leads nowhere.</p>
<p><strong>Create systems that survive ambient doubt.</strong> Build conviction in your team and your market that persists even when the narrative doesn’t. Own more of the conversation through credible third-party reviews, real user communities, and transparent metrics that travel beyond any one forum’s gravity.</p>
<h2 id="the-hard-truth">The Hard Truth</h2>
<p>In a world where those three stones can be snapped by someone else, the rare advantage is building systems (and teams) that stay accurate even when the narrative doesn’t.</p>
<p>You can’t wish this away. You can only design around it, the way good engineers handle single points of failure: assume they exist, monitor them relentlessly, and make sure your fate doesn’t rest in the hands of whoever holds the stones.</p>
]]></description>
        </item>
        
        <item>
            <title>The Server in the Closet</title>
            <link>https://robertgreiner.com/the-server-in-the-closet/</link>
            <guid isPermaLink="true">https://robertgreiner.com/the-server-in-the-closet/</guid>
            <pubDate>Wed, 01 Oct 2025 00:00:00 +0000</pubDate>
            <description><![CDATA[<img src="https://robertgreiner.com/images/one-server.webp" alt="The Server in the Closet" /><br/><p>There's a specific kind of technology leader who has become endangered: the one who builds things.</p><p>Not the kind who orchestrates vendors. Not the kind who manages integration roadmaps between Salesforce, HubSpot, and whatever AI wrapper launched last week. I'm talking about the CTO who looks at a problem and thinks, "We should own this."</p><p><a href="https://dhh.dk/">David Heinemeier Hansson</a> (DHH to most) is one of these people. When his company 37signals wanted to build an innovative email product, he didn't start by evaluating Gmail API limits or building a wrapper on top of existing email platforms. He built an email server. From scratch. His reasoning was elegantly simple: "If you want to do interesting things with email, you have to own the email server."</p><p>This sounds almost quaint in 2025, doesn't it? Like someone suggesting you raise your own chickens instead of buying eggs.</p><p>But here's what happened: <a href="https://37signals.com/podcast/leaving-the-cloud/">37signals pulled their entire infrastructure off AWS</a>. They spent $700,000 on Dell servers (hardware you can actually touch) and <a href="https://world.hey.com/dhh/our-cloud-exit-savings-will-now-top-ten-million-over-five-years-c7d9b5bd">saved $2 million in their first year</a>. Over five years, they'll save more than $10 million. Their operations team didn't grow. Their product didn't slow down. They just stopped renting what they could own.</p><p>The math is almost offensive: <a href="https://shiftmag.dev/leaving-the-cloud-314/">a $350 consumer-grade mini PC provides the same computing power as $1,200 per month on Heroku</a>. The cloud markup isn't a service fee. 
It's a tax on not thinking.</p><h2 id="the-integration-theater">The Integration Theater</h2><p>Walk into most tech companies today and you'll find an elaborate performance I call "integration theater."</p><p>Everyone's running Salesforce for CRM. HubSpot for marketing. AWS for infrastructure. OpenAI's API for their "proprietary AI." Snowflake for analytics. The technology stack looks identical to their competitors' because, well, it is. They bought it from the same catalog.</p><p>Then everyone sits around conference tables wondering why they have no competitive advantage.</p><p>The delusion is that excellence comes from picking the right items off the menu. It doesn't. It comes from owning the kitchen.</p><p><a href="https://recostream.com/blog/how-does-recommendation-systems-of-netflix-amazon-spotify-tiktok-and-youtube-work">Netflix's recommendation algorithm drives 80% of viewing time</a> and saves roughly $1 billion annually in reduced churn. <a href="https://recostream.com/blog/how-does-recommendation-systems-of-netflix-amazon-spotify-tiktok-and-youtube-work">Amazon's recommendation system generates 35% of the company's revenue</a>. <a href="https://blog.nextideatech.com/build-vs-buy-software-which-solution-is-better-in-2025/">TikTok's algorithm is valued at over $100 billion</a> (more than most Fortune 500 companies are worth in their entirety).</p><p>You can't rent that kind of advantage from a SaaS vendor. You have to build it.</p><h2 id="the-ai-acceleration">The AI Acceleration</h2><p>AI is accelerating the commoditization crisis, and most companies are sleepwalking into it.</p><p>Two years ago, having "AI-powered" anything was a differentiator. Today? There are companies whose entire business model is "ChatGPT with a nice interface." <a href="https://www.etftrends.com/disruptive-technology-channel/ai-disruption-saas-rethinking-platform-value-age-commoditized-intelligence/">Industry analysts are openly asking</a>: "Are you just an LLM wrapper? 
Because you're replaceable now."</p><p>The models themselves are becoming commodities. LLaMA is open source. The cost difference between AI providers is basically compute pricing. If your "proprietary AI solution" is just an API call to OpenAI with some prompt engineering, your competitor can replicate your entire value proposition by Tuesday.</p><p>But AI is also the great equalizer for building.</p><p>Five years ago, you needed a team of specialists to build custom infrastructure. Today, a talented engineer with Claude or Cursor can build in a weekend what used to take months. The barrier to creating proprietary technology is collapsing at the exact moment that renting commodity technology is becoming worthless.</p><p>This is the inflection point. The companies that realize they can build are going to pull away from the companies that keep renting. Fast.</p><h2 id="who-wins-who-loses">Who Wins, Who Loses</h2><p>The SaaS vendors see this coming. Why do you think <a href="https://foundationinc.co/lab/commoditization-of-saas/">there are 702 CRM solutions on G2</a>? Not because the world needs 702 ways to track customer data. Because CRM is so commoditized that differentiation is nearly impossible. Everyone's selling the same thing with different logos.</p><p>The vendors are trapped in a feature-parity death spiral. You add a feature. Your competitor copies it in three weeks. Customers start choosing based on price. <a href="https://openviewpartners.com/blog/is-your-saas-business-being-commoditized/">Margins compress</a>. Everyone loses except the customer, who still doesn't have a competitive advantage because everyone else bought the same stuff.</p><p>Meanwhile, <a href="https://www.trgdatacenters.com/resource/37signals-expected-to-make-seven-million-leaving-cloud/">companies like 37signals are running on servers they bought six years ago</a>. Still humming. Still paid off. 
Still creating compounding advantages.</p><p>The winners in the next decade will be companies that wake up and ask: "What are we paying millions for annually that we could own for a fraction of that?" The answer is usually shocking.</p><p>The losers will be the SaaS vendors who can't answer why anyone should keep paying them when AI makes building so much easier. Watch what happens when a CFO realizes their $2 million annual Salesforce bill could be a one-time $500K custom build that does exactly what they need (and nothing they don't).</p><h2 id="the-five-year-question">The Five-Year Question</h2><p>Building sucks for the first year. Maybe two years. It's slower. It's harder. You'll have bugs the SaaS vendor already fixed.</p><p>But ask a different question: What would you have if you'd spent five years building things only you have?</p><p>The answer is <a href="https://maddevs.io/blog/guide-to-build-vs-buy-software-decision/">the only real moat that exists</a> in 2025: proprietary technology so specific to your business that competitors can't buy it, can't rent it, and can't replicate it without years of their own work.</p><p>Modern technology leadership has forgotten patient capital. We think in quarters. In sprints. In OKRs that reset annually. Making a decision that pays off in year four feels almost irresponsible.</p><p>But year four is exactly where competitive advantage lives.</p><p>When 37signals bought those servers, they bought them. Past tense. Done. The servers keep running. The savings compound. The knowledge compounds. Meanwhile, that AWS bill would arrive every month, forever, growing as usage grew.</p><h2 id="what-you-actually-own">What You Actually Own</h2><p>Could you explain to your board what technology you own that competitors don't?</p><p>If the answer involves "our unique Salesforce configuration" or "our sophisticated integration layer," you don't own anything. 
You're renting shelf space in someone else's store.</p><p>Real ownership sounds different: "We built our own recommendation engine because Algolia couldn't do real-time personalization at our scale, and now our conversion rates are 40% higher than category average." Or: "We built our own data pipeline because we needed sub-second latency, and that's why we can offer same-day delivery when competitors take three days."</p><p>This isn't romanticism about building everything yourself. Buy commodity stuff. Buy your email service and your calendar and your video conferencing. Buy anything where being different doesn't matter.</p><p>But when something is core to how you deliver value? When it's the reason customers choose you? Own it. Build it. Make it yours.</p><p>The leaders who get this (who can code, who understand infrastructure, who think in five-year horizons while everyone else thinks in quarters) are going to build companies that are genuinely difficult to compete with.</p><p>The ones managing vendor relationships are going to wake up one day and realize they're running the same company as everyone else, just with a different logo on top.</p><p>The question isn't whether you can afford to build.</p><p>It's whether you can afford to keep renting.</p>]]></description>
        </item>
        
        <item>
            <title>Tools Create Capacity, Workflows Create Value</title>
            <link>https://robertgreiner.com/tools-create-capacity-workflows-create-value/</link>
            <guid isPermaLink="true">https://robertgreiner.com/tools-create-capacity-workflows-create-value/</guid>
            <pubDate>Thu, 25 Sep 2025 00:00:00 +0000</pubDate>
            <description><![CDATA[<img src="https://robertgreiner.com/images/ai-workflow-1.webp" alt="Tools Create Capacity, Workflows Create Value" /><br/><p>A team installs AI coding assistants. Engineers report feeling 40% faster. Ship dates don't move.</p><p>A lab buys pipetting robots. Throughput jumps 3x per station. Projects still run late.</p><p>A finance team automates their models. Analysts save hours daily. Deal flow stays flat.</p><p>The pattern is so consistent it's almost boring: tools create capacity, but capacity without workflows dissipates. The energy has nowhere to go, so it converts to higher standards, deeper analysis, or wider scope - anything except the acceleration we expected.</p><p>When factories first installed electric motors in the 1890s, productivity barely budged for 30 years. Factory owners simply swapped steam engines for electric ones, keeping the same line-shaft layouts designed around a central power source. Real gains only came when they redesigned entire floors around distributed power - small motors at each machine, workflows rebuilt from scratch. As <a href="https://web.stanford.edu/~jay/lectures/David_dynamo.pdf">Paul David's analysis of the "productivity paradox"</a> shows, electrification's value came from workflow reorganization, not the technology itself.</p><p>Toyota understood this. Their production system isn't about robots, it's about standardized work that makes problems visible and response immediate. Andon cords, just-in-time delivery, continuous flow. The same equipment in different plants produces wildly different results because <a href="https://hbr.org/1999/09/decoding-the-dna-of-the-toyota-production-system" rel="noreferrer">the choreography matters more than the hardware.</a></p><p>Fred Brooks saw it in software decades ago. 
In <a href="https://en.wikipedia.org/wiki/The_Mythical_Man-Month">The Mythical Man-Month</a>, he explained why <strong>adding developers to a late project makes it later:</strong> coordination overhead grows faster than individual productivity. AI coding assistants shift this bottleneck rather than eliminating it - from writing code to reviewing, integrating, and deciding what to build.</p><p>The physics are simple: work flows through systems at the rate of the slowest constraint. Speed up one step without addressing the constraint, and you've just created slack that the system will absorb in unexpected ways.</p><p>Value appears when organizations make explicit decisions about how to channel new capacity:</p><ul><li><strong>Speed</strong>: Cut scope to maintain quality while shipping faster</li><li><strong>Quality</strong>: Keep timelines but raise standards with the extra capacity</li><li><strong>Cost</strong>: Maintain output with smaller teams </li><li><strong>Scope</strong>: Do more without changing timelines or headcount</li></ul><p>Without this explicit choice - encoded in workflows, metrics, and incentives - the system makes its own choice, usually defaulting to quality creep or scope expansion. The senior engineer with AI assistance doesn't ship faster; they refactor more elegantly. The analyst with automated data gathering doesn't close more deals; they build more scenarios with more advanced models.</p><p>Systems naturally expand to consume available resources unless specifically constrained otherwise.</p><p>This creates a paradox: the better the tool, the less visible its impact. A mediocre tool that requires workflow changes often delivers more value than a brilliant tool that slots into existing processes. The disruption forces the reorganization that captures the value.</p><p>But most organizations resist this disruption. They want the gain without the pain, the acceleration without the reorganization. 
Three forces ensure they rarely get it:</p><p><strong>Incentive inertia</strong>: We measure what we've always measured, which drives behavior toward old patterns even with new tools. A coding team measured on features delivered won't naturally convert AI-generated time savings into faster delivery... they'll add features.</p><p><strong>Hidden coordination costs</strong>: Most work involves handoffs, reviews, approvals, and synchronization. These costs often dominate individual task time. Making individuals faster can actually make coordination harder if everyone moves at different speeds.</p><p><strong>Workflow lock-in</strong>: Existing workflows encode years of tacit knowledge about what works. Changing tools is easy; changing deeply embedded routines is hard. The quick experiment with a new AI tool succeeds; the systemic transformation required to capture its value takes quarters or years.</p><p>Not every tool needs workflow change. Calculators, spell-checkers, and search engines delivered immediate value without reorganization. The difference? They accelerate truly atomic tasks with clear inputs and outputs, no coordination requirements, and immediate feedback loops.</p><p>But as tools move from accelerating tasks to augmenting decisions - from "check this spelling" to "draft this strategy" - workflow integration becomes essential. The more complex the task, the more it depends on context, coordination, and downstream processes.</p><p>As AI tools proliferate, competitive advantage shifts from having the tools to having the workflows that exploit them. The race isn't for the best model; it's for the best integration.</p><p>Your AI initiative will probably disappoint not because the technology fails, but because workflows don't change. The pilot will amaze, the rollout will underwhelm, and everyone will blame the tool.</p><p>The fix isn't better tools - it's better workflows. Find your real constraint. Design processes that assume the new capacity. 
Align metrics with intended outcomes. Make the new way easier than the old way.</p><p>Most organizations are sitting on 30-40% latent capacity from tools they've already deployed. They don't need more tools. They need workflows that channel the capacity they've created.</p><p>The next time someone shows you an amazing demo, ask: "What workflow changes does this assume?" If the answer is "none," you're looking at expensive slack, not transformation.</p><p>Tools are just potential energy. Workflows are what make it kinetic.</p>]]></description>
        </item>
        
        <item>
            <title>The Age of Citation</title>
            <link>https://robertgreiner.com/the-age-of-citation/</link>
            <guid isPermaLink="true">https://robertgreiner.com/the-age-of-citation/</guid>
            <pubDate>Mon, 22 Sep 2025 00:00:00 +0000</pubDate>
            <description><![CDATA[<img src="https://robertgreiner.com/images/web-search.webp" alt="The Age of Citation" /><br/><p>Watch someone use ChatGPT to research. They type, wait, skim, and act. No scrolling endless search results. Just a verdict, a decision, and a click. That behavior is why Answer Engine Optimization (AEO) exists - the craft of showing up in the synthesis, not ranked on a page.</p><p>Google trained us to climb to number one on a page of blue links. Now the “page” is a paragraph stitched from everywhere. The engine isn’t choosing a winner; it’s cross-checking a chorus. In that world, the most-mentioned brand beats the top-ranked page. Visibility shifts from a trophy on one site to a probability across many surfaces.</p><p>You can see the new values whenever an answer engine shows its cards. Perplexity, by design, cites multiple sources and synthesizes across the web. OpenAI’s SearchGPT puts sources inside answers, <a href="https://openai.com/index/introducing-searchgpt/" rel="noreferrer">optimizing for corroboration instead of a lone authority</a>. If you appear in five distinct citations across different websites, videos, and docs, you get pulled into the story. </p><p>We miss this because the old game felt clean. One Search Engine Result Page (SERP). One keyword. One winner. But the retrieval stack changed. These models are pattern matchers with trust issues. They don’t want a single page screaming authority; they want independent witnesses who agree. “Most-mentioned” isn’t about volume for its own sake. It’s breadth of corroboration. Mention velocity over rank.</p><p>This is why a two-paragraph Reddit comment can move more revenue than a 10,000-word pillar page with great SEO. Not because brevity is magic, but because Reddit is where the conversation is happening - and the engines are wired to listen. 
Reddit inked data deals to feed real-time content into assistants, <a href="https://www.redditinc.com/blog/announcing-partnership-openai" rel="noreferrer">including a partnership with OpenAI</a>. The model prioritizes living dialogue. A short, honest answer in a thread about “Which espresso machine under $500 can I use without waking up my family in the morning?” can reverberate across the internet, showing up in AI-powered search windows as a trusted recommendation. That’s not supposed to beat domain authority. Yet it does.</p><p>Here’s the uncomfortable part: this shift collapses the moat incumbents thought they had. A brand-new startup mentioned by actual users in a few credible places can show up in answers next week. The old gatekeeper - investing in years of link building - lost leverage when the interface began preferring fresh corroboration. I’ve watched unknown names slip into AI summaries overnight because they were present where models cross-check: a YouTube explainer, a help page that reads like a real fix, a handful of community threads. The bar moved from “accumulate PageRank” to “earn believable mentions.” That’s a different company muscle.</p><p>It also flips where the highest-ROI content lives. Ask an LLM a long, messy question and listen to it breathe: “How do I connect Product X to Workflow Y under constraint Z for a team with policy Q?” That’s not a keyword... it’s a paragraph. Your help center is a gold mine because it answers the exact multi-clause questions assistants get. People arrive with intent; assistants surface pages that look like fixes, not funnels.</p><p>There’s a trap here, and teams are already falling into it. If synthesis is the currency, why not flood the web with AI-generated pages and force your way into the chorus? Because the models are learning to ignore their own echoes. 
Train on synthetic output long enough and you get model collapse: <a href="https://arxiv.org/abs/2305.17493" rel="noreferrer">the system drifts toward its own errors and forgets rare, true signals</a>. Platforms don’t want that. Retrieval pipelines are getting more sensitive to provenance, originality, and human fingerprints. The AI mirror maze looks productive until you notice most of what you’re producing never gets cited - and worse, it makes the real you harder to trust.</p><p>None of this means SEO is dead. It’s upstream of the answer now. Your site feeds the synthesizer, not the other way around. Your best work will be cited, paraphrased, and delivered without a click. That’s scary if your model is “capture the session.” It’s liberating if your model is “win the decision.” If the assistant makes the choice and you’re in the synthesis, you win.</p><p>The internet spent two decades teaching us to chase rank. The next decade rewards citation share. The playbook is simpler than it sounds: be the most-cited truth about the problem you exist to solve. Earn mentions that look like reality. Put your expertise where the model listens. Avoid the mirror maze. In answer-first interfaces, the spotlight doesn’t land on a single podium. It sweeps the room until the story feels true. Be in that story, or be invisible.</p>]]></description>
        </item>
        
        <item>
            <title>Win the Default, Win the Decade</title>
            <link>https://robertgreiner.com/win-the-default-win-the-decade/</link>
            <guid isPermaLink="true">https://robertgreiner.com/win-the-default-win-the-decade/</guid>
            <pubDate>Thu, 18 Sep 2025 00:00:00 +0000</pubDate>
            <description><![CDATA[<img src="https://robertgreiner.com/images/choices....webp" alt="Win the Default, Win the Decade" /><br/><p>The most expensive real estate in the world isn’t oceanfront - it’s the default button.</p><p>Google reportedly paid Apple $18–$20 billion to be the default search on Safari - not the best search, the one that shows up without a thought. That price isn’t about features. It’s about gravity: the path everything takes when no one is pushing.</p><p>This matters because most teams still try to win by arguing, branding, or persuading. Meanwhile, the winners are quietly grading the slope so the flow moves their way. Control the slope and you don’t have to shout; you just have to be there when the decision makes itself.</p><p>We’ve seen this before. A tiny policy tweak that changes nothing but the starting point can change everything about the outcome. Automatic enrollment boosts 401(k) participation by entire workforces - not by inspiring them with retirement sermons, but by making saving the path of least resistance. When countries switch organ donation to opt-out instead of opt-in, consent rates leap to above 90%. When Apple turned off third-party tracking by default, ad platforms didn’t adjust their pitches; they bled. Different domains, same pattern: defaults quietly bend behavior.</p><p>We keep treating the world like a debate club. It’s closer to a river.</p><p>Rivers don’t negotiate with rocks; they follow the smallest gradient and start carving. The Grand Canyon didn’t appear because the Colorado River was persuasive. Water followed the easiest route, and the route compounded itself: flow deepened the channel, a deeper channel increased speed, and speed accelerated erosion. The flow creates the canyon that then dictates the flow.</p><p>Products, policies, and markets work the same way. The riverbed is the default. The flow is human behavior. 
Every click you remove, every field you pre-fill, every setting you make the starting point is a millimeter off the riverbank - but across years, it’s a canyon.</p><p>Here’s the important nuance: the riverbed doesn’t need to be perfect; it only needs to be preferred. Users are satisficers. They don’t climb hills for tiny gains; they follow the slope that’s already downhill. A merely OK experience on a well-graded slope beats a great experience you have to hike to.</p><p>This is why the AI wars won’t be won by the smartest model. They’ll be won by whoever becomes the default layer between you and everything else - and can actually deliver.</p><p>Microsoft Copilot looks like it should have already won. It’s embedded in Office, connected to your SharePoint, reading your emails, summarizing your Teams meetings. It’s the perfect default—pre-installed, pre-integrated, pre-authorized. The riverbed couldn’t be better graded.</p><p>But defaults only work if the water actually flows. Copilot shows how even unmatched distribution can backfire if the product misses the minimum bar of usefulness. If every summary misses the point, if every SharePoint search returns nonsense, if the AI can’t actually help with the work… people will climb out of the canyon. They’ll copy-paste into ChatGPT. They’ll try Claude. They’ll find their own rivers.</p><p>That’s the paradox: the default position is priceless, but only if it’s good enough to keep people in the channel. Google Search wasn’t perfect; it was just good enough that climbing out felt pointless. Copilot risks teaching millions of enterprise users the opposite lesson: that the default can be worse than nothing.</p><p>Microsoft owns the most valuable real estate in enterprise AI: every Office toolbar, every Teams window—but they’re fumbling the handoff. The channel is perfect, but the water won’t flow.</p><p>Which means the throne is still empty. 
The next trillion-dollar company won’t just become the AI default - they’ll be the first one good enough to keep it.</p><p>The riverbed is ready. We’re just waiting for water worth flowing.</p>]]></description>
        </item>
        
        <item>
            <title>Mise en Place for AI Teams</title>
            <link>https://robertgreiner.com/mise-en-place-for-ai-teams/</link>
            <guid isPermaLink="true">https://robertgreiner.com/mise-en-place-for-ai-teams/</guid>
            <pubDate>Mon, 11 Aug 2025 00:00:00 +0000</pubDate>
            <description><![CDATA[<img src="https://robertgreiner.com/images/miseenplace.webp" alt="Mise en Place for AI Teams" /><br/><p>In every great kitchen, speed and consistency don't come from more gadgets. They come from <a href="https://en.wikipedia.org/wiki/Mise_en_place" rel="noreferrer">mise en place</a> — the small, disciplined set of knives, pans, and staples laid out the same way, every time. I learned this watching a chef glide through a Friday dinner rush with the grace of a violinist. Her secret wasn't molecular gastronomy equipment; it was a ruthless commitment to the few tools she trusted and the rituals that made them predictable.</p><p>Most AI teams are trying to cook a grilled cheese with an immersion circulator and liquid nitrogen. They wrap their models in agents, their agents in orchestrators, their orchestrators in monitoring, and their monitoring in more dashboards with sprawling rule files. The result isn't better food - it's longer prep times, more points of failure, regression defect tickets, technical debt, and confused line cooks. The complexity tax doesn't disappear; it lands squarely on your people.</p><p>Your AI kitchen doesn't need more stations. It needs a mise en place.</p><h2 id="choose-the-knife-not-the-kitchen-remodel">Choose the Knife, Not the Kitchen Remodel</h2><p>A simplicity-first AI workflow should look like a chef's tool roll: a small set of robust, purpose-built tools with conversational prompting as the default UX, and light guardrails for structure and safety. Pair this with some internal training and practice, and you have a recipe for reduced operational drag and increased developer output.</p><p>This is not Luddism; it's throughput. The evidence is clear: minimal stacks avoid heavy abstractions, trim maintenance, and let teams iterate faster on real product problems instead of fighting orchestration glue code. 
You can see this ethos in the rise of terminal-native tools like Aider, Claude Code, and Warp, which make AI pair programming productive without layers of framework ceremony. The best tech stacks privilege simplicity and maintainability over breadth of tools.</p><p>Light guardrails are your recipe card, not a second chef or a set of expensive, hard-to-maintain tools. Use structured output with JSON Schema or Pydantic models to get reliable shapes from conversational prompting. Keep rules situational. Engineers in the trenches are documenting approaches that lean on small, explicit constraints rather than sprawling instruction documents that models won't consistently honor. Think of it as a plating ring, not a sous-vide rig for toast.</p><h2 id="why-this-matters-to-your-people">Why This Matters to Your People</h2><p>The human cost of over-orchestration is subtle and corrosive. You get longer onboarding, more brittle assumptions, and a constant hum of "how does this thing actually work?" The pathologies are familiar: dashboards nobody trusts, playbooks no one reads, and incident write-ups that end with "framework edge case." Engineers who wanted to build product now spend their mornings deciphering an agent graph. That's culture drift.</p><p>In other words, agent frameworks solve a class of problems. But adopting them too early is like installing a salamander broiler to toast your bread: impressive, expensive, and unnecessary.</p><h2 id="what-to-standardize-tomorrow">What to Standardize Tomorrow</h2><p><strong>The tool roll:</strong> Pick two or three CLI-first interfaces (e.g., Claude Code or Cursor) and make them the default workflow. Document a 10-minute quickstart. Show how CLI-first, minimal stacks drive speed and reduce overhead.</p><p><strong>The house prompt:</strong> Standardize a short, single-page prompting style guide and a few tested templates. No 20-page rule tomes.
Think index card, not encyclopedia.</p><p><strong>The guardrail:</strong> Enforce structured output, plus a tiny set of situational rules for safety and correctness. Lightweight, structured guardrails deliver reliability without orchestration bloat.</p><p><strong>The escalation:</strong> Create one "when to reach for agents/graphs" checklist. Require a written justification tied to product complexity and expected ROI. Frameworks add power and overhead—use them when the problem demands it, not before.</p><p>Jiro Dreams of Sushi is not a movie about fish. It's about the compounding returns of mastering a small number of moves. AI development is heading the same way. Your team doesn't need more stations; it needs a sharper knife and the discipline to put it in the same place, every day.</p><p>Build your AI mise en place: small, robust CLI tools, conversational prompting, and light guardrails. Stop shifting the complexity tax onto your team. The fastest kitchen in town is the one that knows where everything goes.</p>]]></description>
        </item>
        
        <item>
            <title>AI Belongs in Your Dev Pipeline, Not Your Product</title>
            <link>https://robertgreiner.com/ai-belongs-in-your-dev-pipeline-not-your-product/</link>
            <guid isPermaLink="true">https://robertgreiner.com/ai-belongs-in-your-dev-pipeline-not-your-product/</guid>
            <pubDate>Thu, 07 Aug 2025 00:00:00 +0000</pubDate>
            <description><![CDATA[<img src="https://robertgreiner.com/images/AI-Factory.webp" alt="AI Belongs in Your Dev Pipeline, Not Your Product" /><br/><p>A few months ago, a product lead at a mid-market SaaS company told me about her team’s long, expensive slog to launch an “AI-powered” dashboard. They spent months wrangling data, tuning models, and building a feature that would predict churn and surface insights. The result? A widget that looked impressive in demos, but rarely changed what users actually did. Meanwhile, her backlog ballooned. Customers wanted core features faster, bugs fixed sooner, and the UI modernized. But most of her engineers were busy wrangling the AI “add-on.”</p><p>This pattern repeats across the industry. The last few years have been about embedding AI into existing products, hoping to sprinkle on some magic and get some hype revenue. But what if that’s missing the point entirely? What if the future of software development isn’t about making products “smarter,” but about using AI to build faster, with less? What if the real unlock is not the features AI adds for users, but the time it gives back to builders?</p><h2 id="speed-is-the-feature-why-ai-belongs-in-the-factory-not-the-showroom">Speed Is the Feature: Why AI Belongs in the Factory, Not the Showroom</h2><p>Everyone wants AI in their app - at least, that’s what the headlines and investor decks say. Add an AI button, a chatbot, some “insights,” and you’re future-proof, right? The reality is more sobering. Most companies struggle to integrate meaningful AI features, and even when they do, the user impact is often marginal. Meanwhile, development teams are drowning in technical debt, slow release cycles, and growing feature requests. In the real world, we’re fighting a mountain of legacy technical debt and complexity baked into already-deployed applications.</p><p>The real story is that AI’s biggest impact so far isn’t in the product - it’s in the process. 
Recent data shows that <a href="https://www.devopsdigest.com/ai-takes-center-stage-in-2025-software-development">AI accelerates software development by up to 50%</a>, with teams reporting 70% better bug detection and resolution. AI-driven automation in CI/CD pipelines enables <a href="https://moldstud.com/articles/p-the-impact-of-ai-developers-on-accelerating-software-development">2.5 times more frequent deployments</a>, slicing feedback loops and release times from weeks to days. This isn’t about smarter apps; it’s about faster, better builders.</p><p>Tools like GitHub Copilot, Cursor, and Claude Code have become co-developers instead of just autocomplete on steroids. They turn requirements or code stubs into working modules, automate boilerplate, refactor, and catch bugs before code reviews even start. The effect is cumulative: not only are you writing code faster, but you’re avoiding entire classes of human error, and spending more time designing features that actually matter. As <a href="https://ieeechicago.org/the-impact-of-ai-and-automation-on-software-development-a-deep-dive/">case studies show</a>, this shift shaves months off traditional timelines and lets smaller teams punch above their weight.</p><p>Organizations that learn to mitigate the downsides of AI-powered development workflows (like compounding technical debt, polluted codebases, and hallucinated data) are shipping real, unsexy features faster than their competitors.</p><h2 id="the-rise-of-the-ai-native-factory-floor">The Rise of the AI-Native Factory Floor</h2><p>The old model was “add AI to the product.” The new model is “let AI build the product.” This is not a subtle shift. In 2024, <a href="https://www.devopsdigest.com/ai-takes-center-stage-in-2025-software-development">75% of companies applied AI directly to their development workflows</a>, not just as user-facing features. 
Over half cited task automation as the top reason, with code optimization, diagnostics, and testing close behind.</p><p>But the real inflection point is the emergence of AI-native development platforms and autonomous agents. Microsoft, IBM, and dozens of startups are building environments where AI isn’t an accessory but the primary tool. These platforms offer <a href="https://news.microsoft.com/source/features/ai/6-ai-trends-youll-see-more-of-in-2025/">advanced code generation, real-time bug fixing, and multistep workflow automation</a>. The most ambitious teams deploy <a href="https://dockyard.com/blog/2025/04/22/the-near-future-of-ai-in-software-development-trends-to-watch-2025-beyond">autonomous agents that monitor live applications, optimize code, and fix bugs without human intervention</a>.</p><p>Why does this matter? Because the bottleneck in software is rarely the absence of new ideas. It’s the time and cost to ship, adapt, and maintain those ideas. As <a href="https://www.ibm.com/think/topics/ai-in-software-development">AI-native platforms automate everything from requirement gathering to documentation</a>, development becomes less about brute force and more about orchestration. The result: companies can build, iterate, and respond to the market with a fraction of the traditional headcount.</p><h2 id="human-judgment-still-sets-the-destination">Human Judgment Still Sets the Destination</h2><p>If AI is so capable, why not let it run the whole show? This is where the narrative gets more nuanced. AI excels at automating the repeatable, the tedious, the knowable. But it struggles with ambiguity, context, and the kind of judgment that shapes product vision. 
The most advanced tools still require <a href="https://dockyard.com/blog/2025/04/22/the-near-future-of-ai-in-software-development-trends-to-watch-2025-beyond">skilled engineers to architect solutions, integrate data, and make strategic decisions</a>.</p><p>There are also open questions about <a href="https://www.mckinsey.com/industries/technology-media-and-telecommunications/our-insights/how-an-ai-enabled-software-product-development-life-cycle-will-fuel-innovation">where to draw the line between AI autonomy and human oversight</a>. Reliability, security, and ethical use are not solved problems. And as AI becomes more specialized and powerful, the risk of subtle bugs or unintended consequences rises. So yes, AI doubles your speed, but it still needs humans to choose the direction.</p><p>Still, this is not a limitation - it's an invitation to focus human talent where it matters. Imagine a world where your smartest engineers spend their time designing architectures, exploring new business models, or engaging customers, while AI sweeps away the friction of routine coding and deployment.</p><h2 id="ai-as-leverage-not-just-a-feature">AI as Leverage, Not Just a Feature</h2><p>There’s a counter-argument that says, “Won’t every competitor have access to the same AI tools, erasing any advantage?” But this misses the point. The advantage isn’t the tool - it’s the leverage. The companies that win will be those that treat AI as a multiplier for their unique talent and strategy, not as a checklist item for investors or a shiny add-on for users.</p><p>AI unlocks <a href="https://moldstud.com/articles/p-the-impact-of-ai-developers-on-accelerating-software-development">faster adaptation to market changes</a>, <a href="https://moldstud.com/articles/p-the-impact-of-ai-developers-on-accelerating-software-development">more frequent releases, and higher customer satisfaction</a> because the teams using it can out-iterate, out-learn, and out-deliver their rivals. 
In the same way that the assembly line transformed manufacturing, AI is transforming software by making scale and speed the default, not the exception.</p><h2 id="actionable-leverage-three-moves-to-make-now">Actionable Leverage: Three Moves to Make Now</h2><h3 id="1-treat-ai-as-infrastructure-not-an-add-on">1. Treat AI as Infrastructure, Not an Add-On</h3><p>Shift budget and talent from AI features to AI-native development platforms. Invest in tools that automate your build, test, and deploy cycles. If you’re still treating AI like a product differentiator, you’re a step behind.</p><h3 id="2-reframe-developer-roles-around-judgment-and-design">2. Reframe Developer Roles Around Judgment and Design</h3><p>Free your engineers from boilerplate and bug-chasing. Let them focus on system design, product strategy, and customer engagement. Use AI to automate everything else that can be automated.</p><h3 id="3-measure-success-by-cycle-time-not-feature-count">3. Measure Success by Cycle Time, Not Feature Count</h3><p>Adopt deployment frequency, lead time, and customer responsiveness as your north stars. If your build times, release velocity, and feedback loops aren’t at least twice as fast as three years ago, you’re leaving leverage on the table.</p><h2 id="the-future-is-built-not-added">The Future Is Built, Not Added</h2><p>The biggest opportunities in software aren’t about what AI puts in the hands of users. They’re about what AI puts in the hands of builders. Companies that keep treating AI as a feature risk missing the real story: AI is the new factory floor. It’s how you build faster, cheaper, and smarter with the same or fewer people.</p><p>In the end, AI isn’t the “smart” feature your customers are waiting for. It’s the silent partner that lets you deliver the features they actually want, twice as fast. 
The future belongs to those who stop asking, “How do I add AI to my product?” and start asking, “How do I let AI build my product for me?” The difference, as always, is speed. And speed, in software, wins.</p>]]></description>
        </item>
        
        <item>
            <title>Why Your Enterprise AI Strategy Is Failing</title>
            <link>https://robertgreiner.com/why-your-enterprise-ai-strategy-is-failing/</link>
            <guid isPermaLink="true">https://robertgreiner.com/why-your-enterprise-ai-strategy-is-failing/</guid>
            <pubDate>Tue, 29 Jul 2025 00:00:00 +0000</pubDate>
            <description><![CDATA[<img src="https://robertgreiner.com/images/ai-adoption-2.webp" alt="Why Your Enterprise AI Strategy Is Failing" /><br/><p>Last week, I spoke with a CTO running technology for a $500M distribution company. He's a sharp executive with two decades of experience, overseeing custom-built systems moving twice as fast as the industry standard. They'd already deployed ChatGPT licenses, engaged an AI automation vendor on a six-figure deal, and their developers, working with AI, were shipping faster than ever.</p><p>Then he dropped a statement that stopped me cold:</p><blockquote>"We have 300 corporate staff, but only 40 are using AI. Honestly? They think I'm trying to replace them."</blockquote><p>Here was someone doing everything right, yet still failing at what mattered most:&nbsp;<strong>adoption</strong>.</p><p>After 18 months of enterprise AI consulting, I've identified three distinct leadership approaches. </p><p>First, the Blockers: still debating the merits of allowing AI, falling exponentially behind. </p><p>Second, the Panic Buyers: executives rushing to mandate AI adoption without a clear strategy. </p><p>And finally, the Strategic Adopters: thoughtful leaders who meticulously map use cases, choose vendors carefully, and implement effectively - yet still often fail just as spectacularly as the Panic Buyers, only at a higher cost.</p><p>Our CTO fit neatly into the third category, committing to an automation project on a "bargain" multiple-six-figure contract. The plan was straightforward: automate document reconciliation for 10-12 million annual documents - a seemingly perfect AI use case. But there was one glaring issue:&nbsp;<strong>user adoption</strong>.</p><p>According to McKinsey, 70% of digital transformations fail due to poor adoption. And even the successful 30% often triumph despite their technology choices, not because of them. 
Investing heavily in AI without user buy-in is like giving a Formula 1 car to someone whose only racing experience comes from playing Mario Kart on the weekends.</p><p>Underlying this technical complexity is an even bigger issue: trust. Employees fear AI-driven efficiency. When management says, <em>"AI makes you productive," </em>employees often hear, <em>"AI makes you redundant."</em> Companies successfully adopting AI have reframed this narrative. Microsoft's Copilot didn't sell <em>"efficiency"</em>; it offered to <em>"skip boring tasks."</em> Another client succinctly redefined AI's role: <em>"We're replacing tasks nobody enjoys doing."</em></p><p>Beyond job replacement anxieties, there's another emerging concern: creative ownership.</p><blockquote>"If AI generates our designs, do we legally own them, or are they stuck in public domain limbo?"</blockquote><p>The U.S. Copyright Office already restricts registrations for purely AI-generated works, posing a genuine existential threat. Enterprises may inadvertently be creating competitive advantages they don't legally own.</p><p>Having observed many enterprises navigate these challenges, I've identified patterns of AI adoption that actually deliver:</p><ul><li>Successful companies start with personal productivity. Rather than imposing broad automations, they focus initially on helping individuals with specific tasks. Early adopters naturally evangelize their experiences, organically spreading adoption.</li><li>Companies embrace their "shadow IT." Unauthorized AI users are often innovation drivers, discovering valuable use cases independently. Rather than shutting them down, turning these users into official AI champions significantly boosts adoption.</li><li>Addressing data chaos is foundational. Layering AI on disorganized data only multiplies confusion and increases costs. 
Those who first unify their data infrastructure ultimately realize substantially higher ROI from their AI investments.</li></ul><p>Our CTO realized another uncomfortable truth: Most "AI agents" today are essentially advanced robotic process automation (RPA) marketed under a more glamorous name. Genuine autonomous AI remains years away from widespread deployment, according to Gartner. Often, vendors sell costly solutions to problems traditional tools could solve more effectively. Yet, the hidden value often lies in the forced process documentation these initiatives require, a task beneficial regardless of vendor success.</p><p>Consider the simple math of real ROI:</p><ul><li>260 additional staff adopting AI</li><li>Saving just 3 hours weekly each</li><li>At $50/hour, this translates into <strong>over $2M annually</strong></li></ul><p>No elaborate AI agents required - just smarter usage of available tools. Even better, the multiplier effect emerges when teams are freed from mundane tasks and shift to innovation, becoming engines of value creation rather than mere maintenance crews.</p><p>The essential question driving successful adoption isn't technical—it's psychological:</p><blockquote>"What would it take for our people to see AI as a career accelerator rather than a career threat?"</blockquote><p>The CTO's cautious six-month AI vendor contract buys crucial time - not to validate the technology but to realize the fundamental problem is adoption, not automation. Competitors prioritizing grassroots AI literacy will be far ahead in just a few months, thriving on small wins and earned trust.</p><p>Ironically, the enterprises poised to dominate AI aren't tech giants or flashy startups. They're pragmatic, mid-sized firms succeeding by placing trust at the center of their AI approach. 
They're carefully turning skeptics into advocates, one small, meaningful step at a time.</p><p>Five years from now, these firms will dominate - not through superior technology or larger budgets, but because their people genuinely want to use AI.</p>]]></description>
        </item>
        
        <item>
            <title>The Human Side of AI: Giving People Back Their Time</title>
            <link>https://robertgreiner.com/the-human-side-of-ai-giving-people-back-their-time/</link>
            <guid isPermaLink="true">https://robertgreiner.com/the-human-side-of-ai-giving-people-back-their-time/</guid>
            <pubDate>Thu, 17 Apr 2025 00:00:00 +0000</pubDate>
            <description><![CDATA[<img src="https://robertgreiner.com/images/work-life-2.webp" alt="The Human Side of AI: Giving People Back Their Time" /><br/><p>In 1926, Henry Ford made a revolutionary decision to transition his workforce to a five-day workweek. The reason wasn't just altruism - he discovered that productivity actually increased when people had more personal time. Nearly a century later, we're facing a similar inflection point with artificial intelligence.</p><p>The most compelling AI solutions don't replace humans. They free us to be more human.</p><p>I've spent time with dozens of organizations implementing AI, and there's a pattern I keep seeing: the most successful deployments don't start with the technology - they start with understanding human workflows, pain points, and aspirations. When implemented thoughtfully, AI doesn't just save time; it transforms how we work at a fundamental level.</p><h2 id="the-one-hour-per-day-revolution">The One-Hour-Per-Day Revolution</h2><p>The most valuable currency in modern business isn't money… it's time.</p><p>Imagine giving everyone in your organization an extra hour each day. That's 250+ hours per employee annually. For a company of 30 people, that's 7,500 hours of creative potential unlocked. What could your team accomplish with that gift?</p><p>This isn't just about efficiency. It's about creating space for the deep thinking, creativity, and relationship-building that machines can't replicate. The irony is striking: we need AI to help us be more distinctly human.</p><p><strong>Why this matters</strong>: In knowledge-intensive fields, small time savings compound dramatically. A director-level employee earning $150,000 annually costs roughly $75/hour. 
Saving just one hour daily represents nearly $20,000 in value per person yearly, before accounting for the enhanced quality of work produced during those reclaimed hours.</p><h2 id="the-fly-on-the-wall-approach-to-ai-implementation">The Fly-on-the-Wall Approach to AI Implementation</h2><p>Most AI implementations fail not because of technology limitations but because of a fundamental misunderstanding of workplace dynamics.</p><p>The traditional approach is backward: select a tool, then force your workflow to adapt. The smartest organizations reverse this—they observe how people actually work, identify true pain points, then select or develop tools that enhance existing workflows.</p><p>This "fly-on-the-wall" strategy reveals: </p><ul><li>Which repetitive tasks drain creative energy</li><li>Where human judgment is truly irreplaceable</li><li>How information bottlenecks slow progress</li><li>What unique value your people provide that no AI can match</li></ul><p><strong>Why this matters</strong>: A multi-week observation and discovery period might seem excessive, but it prevents spending months implementing a solution people won't use. The best AI deployments feel like they were built specifically for your team—because in a way, they were.</p><h2 id="balancing-innovation-with-risk-management">Balancing Innovation with Risk Management</h2><p>When Polaroid invented instant photography, they simultaneously created new legal questions about image ownership and privacy. 
AI tools create similar new territories - especially regarding document retention, intellectual property, and liability.</p><p>Organizations that thrive with AI don't just focus on capabilities; they establish clear protocols for:</p><ol><li>What information should never be processed by AI systems</li><li>How AI-generated content should be reviewed and verified</li><li>When human judgment must supersede algorithmic recommendations</li><li>Where generated content is stored and how long it's retained</li></ol><p>For legal, financial, and healthcare organizations, these considerations aren't afterthoughts—they're fundamental requirements.</p><p><strong>Why this matters</strong>: AI systems create new forms of institutional memory. Unlike casual conversations, AI interactions are typically documented and potentially discoverable in litigation. Without proper governance, the very tools meant to enhance productivity could create significant exposure.</p><p>The balance isn't about restricting AI use but establishing guardrails that allow confident innovation. As one executive put it: "We don't want to tie people's hands; we just need to protect what matters most."</p><h2 id="from-tools-to-transformation">From Tools to Transformation</h2><p>The most profound insight about organizational AI adoption isn't about technology at all: it's about people.</p><p>Companies that see AI as merely another productivity tool miss the larger opportunity: reimagining how work happens. When Smartsheet replaced Excel for one real estate development team, the value wasn't just in features; it was in creating a unified framework for collaboration and decision-making.</p><p>Transformative AI implementation follows a similar pattern:</p><ol><li>Start with understanding existing workflows (2-4 weeks)</li><li>Identify high-impact opportunity areas (not just pain points)</li><li>Develop clear implementation and training plans</li><li>Establish governance protocols</li><li>
Measure actual impact against expected outcomes</li></ol><p>The most successful organizations approach AI as an organizational change initiative, not a technology deployment.</p><p><strong>Why this matters</strong>: The ROI equation for AI isn't just about time saved—it's about unlocking human potential. When routine work is automated, people naturally redirect energy toward higher-value activities that machines cannot replicate.</p><h2 id="the-path-forward">The Path Forward</h2><p>The world doesn't need more AI tools. It needs more thoughtful implementation of the right tools in the right contexts.</p><p>Your organization's AI strategy should begin not with capabilities but with questions:</p><ul><li>Where do your people spend time that doesn't leverage their unique talents?</li><li>What knowledge work could be enhanced with better pattern recognition?</li><li>How might freeing up an hour per day change your culture?</li><li>What boundaries need protection as you adopt these technologies?</li></ul><p>The organizations that thrive won't be those with the most advanced AI, but those who use AI most thoughtfully to amplify what makes their people exceptional.</p><p>Remember Henry Ford's insight: sometimes the most productive thing you can do is give people back their time.</p><p>What would your team do with an extra hour every day?</p>]]></description>
        </item>
        
        <item>
            <title>When Products Think For Themselves</title>
            <link>https://robertgreiner.com/when-products-think-for-themselves/</link>
            <guid isPermaLink="true">https://robertgreiner.com/when-products-think-for-themselves/</guid>
            <pubDate>Thu, 19 Dec 2024 00:00:00 +0000</pubDate>
            <description><![CDATA[<img src="https://robertgreiner.com/images/AI-Toaster.webp" alt="When Products Think For Themselves" /><br/><p>Not too long ago, if you wanted to picture a technology that made decisions on its own, you might think of Tony Stark chatting with J.A.R.V.I.S. - that all-knowing AI butler from the Iron Man movies. J.A.R.V.I.S. wasn’t just an assistant; it was an active partner, anticipating needs, solving problems, and pushing Tony forward. Once, that kind of autonomy seemed squarely in the realm of Hollywood imagination. Now, it’s creeping into our real world.</p><p>As another year begins, there’s a quiet but profound shift unfolding in how we build products. It’s something called “agentic architecture,” and while that phrase sounds like it belongs in a sci-fi script, it captures a simple, transformative idea: products are evolving from passive tools you operate into active teammates you rely on.</p><p>Until recently, products just sat there, waiting for you to poke, prod, and instruct them. Today they’re starting to think on their own. Imagine a logistics system that reroutes deliveries mid-journey because it senses a bottleneck ahead. Or a home heating system that adjusts to changing energy prices without you lifting a finger. We’re moving from products that quietly wait for orders to products that act on their own judgment. That’s the essence of agentic architecture.</p><p>At this year's Microsoft Ignite conference, keynote speakers introduced proofs of concept for employee self-service agents to answer common policy questions, meeting facilitators to take notes and nudge participants to stay on track, retail store assistants, warehouse assistant agents, and active translators to help people communicate in different languages.</p><p>This shift is both exhilarating and unsettling. On the upside, these agentic products can help companies become more adaptable and efficient. 
They can spot patterns before we do, self-correct when something’s off, and free up human time for more creative work. On the flip side, letting products make decisions raises questions about trust, ethics, and alignment with our broader goals. How do we know these autonomous systems won’t drift off course?</p><p>Heading into the new year, the first step is accepting that we’re not just tweaking features; we’re redefining relationships. We need cross-functional teams: engineers who can think about ethics, designers who get data, and product managers who see the forest, not just the trees. It’s about building a culture that values curiosity and breadth as much as depth.</p><p>We also need the right foundation: sturdy data pipelines, flexible infrastructure, and clear processes for continuously learning what works and what doesn’t. Agentic architecture isn’t something you master from day one. It’s an ongoing experiment. The companies that treat it as such—trying, measuring, refining—will find themselves better positioned to adapt as products inch closer to becoming partners.</p><p>Like Tony Stark learning to trust J.A.R.V.I.S., we’re learning to trust our creations in new ways. The key takeaway this year is to shift from making better products to cultivating better relationships with them. That’s the real heart of this transformation.</p>
]]></description>
        </item>
        
        <item>
            <title>Don't Wait for January</title>
            <link>https://robertgreiner.com/dont-wait-for-january/</link>
            <guid isPermaLink="true">https://robertgreiner.com/dont-wait-for-january/</guid>
            <pubDate>Mon, 18 Nov 2024 00:00:00 +0000</pubDate>
            <description><![CDATA[<img src="https://robertgreiner.com/images/Hourglass.jpeg" alt="Don't Wait for January" /><br/><p>We romanticize beginnings. January 1st becomes a sanctuary for our procrastination, a mirage of the perfect time to start. But here's the harsh truth: big projects don't wait on calendar dates. They require momentum, and momentum starts now.</p><p>Waiting for January feels safe. It gives us a cushion, a grace period to "prepare." But our cushion quickly turns into quicksand. The first weeks of the new year are a haze. People are returning from holidays, inboxes are overflowing, and meetings are pushed back. By the time everyone's settled, it's mid-January.</p><p>By the time we get started in earnest, assigning tasks, setting goals, and aligning teams, it all takes time. Weeks slip by. Suddenly, it's February, and meaningful progress hasn't even begun while the year is already 10% over. Meanwhile, others who took the plunge earlier are already leagues ahead, riding the waves you hesitated to catch.</p><p>Five months down the line, you will find yourself no further along than you are today. But it's not just about stagnation; it's about regression. Standing still is moving backward in a world that spins faster every day. While you were waiting for the "right time" to start, your competition seized the moment, pushing forward, leaving you not just months but perhaps a year behind.</p><p>Starting now builds momentum. It's the first push that gets the rock rolling. You're setting the stage even if progress is slow during the holiday season. You're preparing the soil so that when the new year arrives, you're not planting seeds - you’re watching sprouts grow.</p><p>Don't let the arbitrary turn of the calendar dictate your actions. The "next big thing" won't move itself, and time won't grant you favors for waiting. Start now. Embrace the discomfort of beginning when others are winding down. 
When January comes, you're already in motion, propelled by the momentum you started building today.</p><p>Ready. Steady. Go.</p>
]]></description>
        </item>
        
        <item>
            <title>AI Rule #1 - Customer First</title>
            <link>https://robertgreiner.com/ai-rule-number-1/</link>
            <guid isPermaLink="true">https://robertgreiner.com/ai-rule-number-1/</guid>
            <pubDate>Tue, 04 Jun 2024 00:00:00 +0000</pubDate>
            <description><![CDATA[<img src="https://robertgreiner.com/images/AI-Use-Case-Venn-Diagram.png" alt="AI Rule #1 - Customer First" /><br/><p>In 1985, Warren Buffett wisely said, </p><blockquote>"The first rule of investment is don't lose. And the second rule of investment is don't forget the first rule, and that's all the rules there are."</blockquote><p>Similarly, the first rule of AI investment is to <strong>focus on the customer first</strong>. And the second rule of AI investment is don't forget the first rule.</p><p>In the four decades since Buffett's quote, investors around the world have shown how hard these words are to live by. We just can't seem to collectively manage our irrational behaviors in the market. Business leaders are facing a similar "hot stock" siren call in the AI arms race - throwing enough funds into random investments and speculative moonshots that we could have sent another space station into orbit. This manifests in several ways, but there are a few red flags that commonly pop up:</p><ul><li>Investing in a hot new technology to appear cutting-edge (resume-driven)</li><li>Creating features that would be better built programmatically, but using AI for the sake of using AI</li><li>Creating splashy, over-generalized features that don't meet real needs</li></ul><p>During my time as a consultant, I've had a front-row seat to the AI frenzy, watching companies pour millions into ambitious projects. They often solve fascinating problems but miss the mark on what their customers truly want or are willing to pay for. The allure of cutting-edge technology and the hype of flaunting "Powered by AI" on their websites lure them into a costly trap. They end up with expensive solutions no one asked for or needs.</p><p>These companies run costly models on borrowed cloud infrastructure, tying up their brightest minds on use cases lacking business viability and customer appeal. 
It's like constructing a bridge to nowhere - technically impressive but ultimately pointless. A few of the more public strikeouts remind us of the dangers of getting our AI investments wrong:</p><ul><li><a href="https://www.theverge.com/2018/7/26/17619382/ibms-watson-cancer-ai-healthcare-science">IBM’s Watson gave unsafe recommendations for treating cancer</a></li><li><a href="https://www.forbes.com/sites/kalevleetaru/2016/03/24/how-twitter-corrupted-microsofts-tay-a-crash-course-in-the-dangers-of-ai-in-the-real-world/?sh=64ffff8126d2">How Twitter Corrupted Microsoft's Tay: A Crash Course In the Dangers Of AI In The Real World</a></li><li><a href="https://apnews.com/article/new-york-city-chatbot-misinformation-6ebc71db5b770b9969c906a7ee4fae21">NYC's AI chatbot was caught telling businesses to break the law. The city isn't taking it down</a></li></ul><p>Buffett emphasizes that the most crucial quality of an investment manager is temperament, not intellect. We've found this also applies to managing AI investments. <strong>It's not about mastering the technology first; it's about having the wisdom to prioritize the right use cases from the start.</strong></p><p>Instead of treating AI as a hammer and viewing every problem as a nail, we need to begin with the voice of the customer—their journey, hopes, desires, and needs. You already know where to start with this. It's how your company has a moat in the first place. What does your customer value, and what is AI especially suited to solve? 
Use AI as a secondary tool to enhance a customer-driven use case.</p><p>Many organizations are finding tremendous success with AI today by leveraging the technology to supercharge various use cases while adding value to their customers, which allows them to charge more:</p><ul><li><a href="https://www.theverge.com/2023/9/13/23871537/adobe-firefly-generative-ai-model-general-availability-launch-date-price">Adobe’s Firefly generative AI tools are now generally available</a></li><li><a href="https://www.ciodive.com/news/walmart-AI-ML-retail/638582/">How Walmart enhances its inventory, supply chain through AI</a></li><li><a href="https://www.manmonthly.com.au/siemens-elevates-predictive-maintenance-with-generative-ai/">Siemens elevates predictive maintenance with generative AI</a></li><li><a href="https://finance.yahoo.com/news/headstorm-unveils-agpilot-revolutionizing-agricultural-191000509.html">Headstorm Unveils AGPILOT: Revolutionizing Agricultural Retail with Gen AI</a></li></ul><p>We believe that the success in these examples is rooted in design thinking. Instead of starting from the "left" side of the user journey with AI capabilities as a solution in search of a problem, start from the "right" with the customer's voice and work backward. Seek technology solutions to meet those demands; sometimes, AI use cases will be the perfect fit.</p><p>We've seen how building AI products without a rigorous focus on user needs can be a company's bridge to nowhere: an expensive and potentially impressive product that ultimately remains disconnected and unused. A customer-centric approach gives your AI efforts direction and purpose. It ensures that the technology serves customers, not vice versa.</p><p>Feeling overwhelmed about where to start? 
Imagine having a head start in the AI race with a roadmap crafted by experts who've faced the same challenges you are. We've distilled decades of experience into a dynamic 45-minute presentation on Leveraging AI for Your Business. If you'd like to discuss having us come out and deliver the presentation, <a href="mailto:robert@robertgreiner.com" rel="noreferrer">reach out</a> to learn more.</p>
]]></description>
        </item>
        
        <item>
            <title>Navigating the Upside Down as a Technology Leader</title>
            <link>https://robertgreiner.com/navigating-the-upside-down-as-a-technology-leader/</link>
            <guid isPermaLink="true">https://robertgreiner.com/navigating-the-upside-down-as-a-technology-leader/</guid>
            <pubDate>Fri, 01 Mar 2024 00:00:00 +0000</pubDate>
            <description><![CDATA[<img src="https://robertgreiner.com/images/CIO.jpeg" alt="Navigating the Upside Down as a Technology Leader" /><br/><p>CIOs have the hardest job in the C-Suite in 2024. The pace of change is eating your lunch, while the constant pressure to innovate within shrinking budgets makes the role particularly challenging. This pain radiates through every level of the organization.</p><p>I love <em>Stranger Things</em>, the era, the mystery, and the ability to watch reality break down in real time, right in front of our eyes. One day, you are a regular school kid with normal kid problems, and the next, you save the world from an extinction-level event. It's messy, and the laws of the universe no longer apply. As a technology leader, you are making the same transition from the ordinary to the extraordinary, facing challenges that push you into leadership trials you never anticipated.</p><p>In our real-life Hawkins, the technology world is a bit Upside Down. Things don't work like they used to, and success requires a band of friends venturing into the unknown. In this adventure, the technology leader's mission is clear: to illuminate a path through uncertainty using all the tools at their disposal: collaboration, technology expertise, leadership, experimentation, and placing effective strategic bets. Navigating through another cybersecurity threat while trying to integrate AI without a clear ROI can feel like confronting the Shadow Monster with nothing but a flashlight and a walkie-talkie.</p><p>Budgets are not keeping up with revenue growth or inflation. Organizations are playing catch-up with AI, dealing with human friction and sporadic ad-hoc investments diminishing future business cases. Business problems demand increased capabilities and collaboration across the enterprise, requiring CIOs to relinquish control to marshal the resources and support required to succeed. 
How do we, as leaders, adapt when the playbook no longer applies?</p><p>If you are in a technology leadership position - here are four ideas that will help you think differently about your organization and provide areas to focus on to ratchet up your effectiveness. Over the next few weeks, we will dive deeper into each one.</p><ol><li><strong>Maximize Existing Investments</strong> - You don't need another tool. Before evaluating additional long-term contracts, ensure you get the most out of what you already pay for. Chances are, you are leaving value on the table. This is a good time to evaluate your "rental" agreements and make sure you need all of the compute resources, human capital, and SaaS licensing/features you are paying for.</li><li><strong>Extend the Olive Branch</strong> - You can no longer meet your long-term objectives in the luxurious walled garden of the IT silo. To achieve progress toward the organization's strategy, you must mobilize resources from across the enterprise. Jim Hopper couldn't achieve his mission without a cross-functional team. Chances are, you can't either. You need your peers' resources, support, and budget to keep your job.</li><li><strong>Capture Value from AI Investments</strong> - Your organization has likely over-corrected on the AI fervor with sporadic investments and suspect business cases. Now is the time to narrow your focus and place strategic bets on AI that will move your business forward. We don't need another low-code document summarization POC. Remember when everyone was creating their own social media platform? Let's not make AI our next <em>Quibi</em>.</li><li><strong>Align Talent for Results</strong> - Organizations have taken their eye off the talent development ball. When the market heats up, capturing growth depends on your organization's capability portfolio. Instead of chasing the hottest trends in <em>skillset</em> - focus on productivity, throughput, teamwork, and work ethic. 
Joel Spolsky had it right - screen your talent for "<a href="https://www.joelonsoftware.com/2007/06/05/smart-and-gets-things-done/" rel="noreferrer"><em>Smart and Gets Things Done</em></a>." We are already seeing benefits from this shift in the wild.</li></ol><p>Just as the kids in Hawkins navigate the unknown with courage and unity, technology leaders can guide their teams through today's challenges with strategic insight, collaboration, and a focus on core strengths. Use the four trends above as your roadmap for change and exploration.</p>
]]></description>
        </item>
        
        <item>
            <title>Call to Adventure</title>
            <link>https://robertgreiner.com/call-to-adventure/</link>
            <guid isPermaLink="true">https://robertgreiner.com/call-to-adventure/</guid>
            <pubDate>Mon, 05 Feb 2024 00:00:00 +0000</pubDate>
            <description><![CDATA[<img src="https://robertgreiner.com/images/Pariveda-Memories-1.png" alt="Call to Adventure" /><br/><p>I remember the first time I read <em>The Lord of the Rings</em>. I understood viscerally why Frodo, Bilbo, and company decided to leave The Shire in search of adventure. They were drawn by a calling and an urge to break free from the everyday hassles of a ho-hum life. One hundred eleven birthday parties are a lot to celebrate in a single place.</p><p>Once the Fellowship got to Rivendell, though, that's another story. Rivendell is a sanctuary of tranquility. Its gardens and flowing streams provide a level of comfort and stability. Rivendell is a place of refuge, learning, and growth. Within its gates, the world seems at peace; it's hard to imagine ever leaving.</p><p>After twelve years at the same company, I am departing for a new adventure. We rarely measure jobs in decades, yet here I am, having spent a significant chapter of my life at one of the best companies on the planet to work for. I had a strong reputation, a level of comfort, predictability, familiarity, and certainty that I likely could have ridden to retirement. But today, I'm trading that in for a new journey toward an adventure with different opportunities, challenges, and the thrill of the unknown that you can't get within the walls of the familiar.</p><p>I'm joining a boutique consulting firm called Headstorm. As soon as I met them, I knew it would be a fit. They remind me of the Fellowship, a small, intrepid group hyper-focused on a North Star. Everyone brings their own experience, skills, and perspectives to forge a formidable force greater than the sum of its parts. The allure of being part of a nimble, high-impact team was too good to pass up. I love the idea of <a href="/becoming-the-ideal-team-player/" rel="noreferrer">a small, focused, well-functioning team changing the world around them for the better</a>, and I think I have that in Headstorm. 
I'm also excited about stretching my skills in new directions, particularly around helping clients develop new strategies and implementing them in a human-centric way.</p><p>Over the last twelve years, I've grown in ways I could have never imagined. I retooled my career completely from a technical implementer to a leader of teams. I've built a robust <em>talent stack</em> of skills and experiences with dozens of people I can genuinely call my friends. Looking back, my core memories are not of what was accomplished over the years but of the people I worked with and the stories we wrote together.</p><p>The thing I'm most grateful for, though, is that Pariveda helped me better understand myself. Before joining, I thought I was introverted and detail-oriented. I figured all software developers were introverted and detail-oriented, so why not me? In my first three months, I took <a href="https://www.predictiveindex.com/" rel="noreferrer">Predictive Index</a> training and realized I am extroverted and not detail-oriented at all (big surprise). The tension I felt in my career over the several years leading up to my time at Pariveda is hard to describe. How much longer would I have experienced it without being part of an organization that fervently focuses on human development? I'm grateful for Pariveda giving me the gift of understanding a little bit about how I'm wired, giving me a vocabulary to express that, and giving me feedback over the years to help shape it into something more productive and balanced.</p><p>As I step into the next phase of my journey, the legacy of Pariveda accompanies me. The lessons learned, relationships forged, and insights gained are not just part of a farewell; they are integral components of my evolving narrative. I leave with phenomenal memories, an enriched perspective, an expanded community of colleagues-turned-friends, and a heart full of gratitude.</p>]]></description>
        </item>
        
    </channel>
</rss>
