The One Big Benefit Of NOT Going AI …

You don’t have to worry about your AI vendor going toes-up when power costs go through the roof, its compute costs rapidly climb from pennies to dollars, and it can’t pass those costs on due to contractual commitments to existing clients (or to new clients who won’t pay dollars for computations that might return hallucinations).

The new generation of AI tech (Gen-AI LLMs / AGI) requires far more compute power than the last generation: 100 to 10,000 times more, on average, for most requests. Grids are stretched and beginning to break. We’re at the point where only nuclear can power the data centres needed for a modern Gen-AI/AGI offering. And, as per Koray Köse’s recent article on how AI leadership is about who controls the power, U.S. nuclear plants operated at 92.3% capacity last year. OUCH!

THERE IS NO ENERGY LEFT!

You can’t build a new nuclear plant overnight, if you can even build one at all anymore! Last year, DOGE’s Firing Fiasco at the NNSA stretched an already stretched organization even more. Many of those let go returned to work, but not all, and budget cuts likely left the agency without the capacity to properly monitor existing aging nuclear infrastructure, let alone approve more plants.

And it’s not even clear how much know-how is left in the US to build new plants. Vogtle Units 3 and 4 in Georgia were the first units built from scratch in over three decades. The experience and expertise aren’t there to safely build these plants en masse.

And the last thing the US wants to risk is another meltdown. Three Mile Island wasn’t a Chernobyl, but all it takes is a rushed private-sector job, a lack of proper oversight and testing, and one small mistake to trigger the next meltdown on US soil.

In other words, the power isn’t there for more AI.

So the organizations that can do without modern AI, that can use classic solutions with fit-for-purpose, last-generation AI that requires a fraction of the power and can run on already strained, non-nuclear grids, will be the big winners when the power squeeze hits and the Big AI players start dropping like flies.

AI is Exacerbating the Need for Global Data Centres NOT Controlled By US Firms!

A recent post by Joël Collin-Demers on why Your LLM Doesn’t Need a US Passport pointed out two very important facts that you’re probably not aware of but should be:

1. Your company is feeding sensitive data to US-based LLMs every single day.

2. The US CLOUD Act lets American authorities demand data from any US-based provider REGARDLESS of where their servers sit in the world!

In other words, you’re giving the USA full access to all of your proprietary and confidential data anytime it wants it, in full breach of your data localization laws if you’re NOT in the US and are in a country with such laws. (And if you’re not in the US and don’t yet have data localization laws to adhere to, you soon will, as a result of the US global over-reach for your data to feed its AI.)

This is not just an AI problem (which, if you think you really need it, you have non-US options if you are not a US company, as per Joël’s extensive list), it’s an overall SaaS/SaS problem. If you’re not a US company, you need to make sure that not only your data, but all of your applications (including, but not limited to, AI), are hosted in non-US-owned data centres off of US soil, without safe harbour agreements.

The Best Article Xavier Olivera Has Ever Written!

In what “good” looks like today, and what it enables next, Xavier writes:


The next phase of P2P evolution will not be defined by who adds the most AI features fastest. It will be defined by who builds systems that make better decisions easier, safer and more repeatable, without losing the discipline that P2P was designed to enforce in the first place.

Truer words have never been spoken, especially in the Age of AI hype where the A.S.S.H.O.L.E. floods us with AI BS faster than we’ve ever been flooded with tech propaganda before!

Gen-AI LLMs (which are now powering the AGI craze, because if the first offering flops, just tweak it, relaunch it with a few new buzzwords, and claim it just needed more time, processing power, and tuning) are not intelligent. They’re not even reliable. Hallucinations are a core function. Predictions are based on the data available, even if that data is incomplete, incorrect, or indicative of actions known to be wrong for the situation in question, which is typically an exception to the rule (or pattern). And many actions that these systems can take automatically can’t be reversed: not only is there no mechanism to do so, but when they trigger an external event, the ability to reverse an incorrect action is completely out of your control.

Given this harsh reality, while they can monitor and make suggestions on how to govern, they cannot govern and they do not count as governance. Governance is the only way to get to better, safer, and repeatable decisions. In reality, these Gen-AI/AGIs count as risk. Any error made with respect to a commitment (transaction, obligation, contract, large financial transfer) is an error that increases organizational jeopardy!

Governance is predictability, determinism, explainability, and traceability. That is not a modern LLM-based Gen-AI/AGI system, but a traditional RPA or modern ARPA system (where all suggested rule and workflow changes, and adaptations to prevent a future exception from occurring, must be approved by a human), where all actions are governed by unbreakable rules, all exceptions are approved by a human, and all actions are completely traceable and 100% explainable, with no lies.
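
For illustration, here’s a minimal sketch of what one such unbreakable rule looks like in an ARPA-style workflow, assuming a hypothetical invoice-approval process; the names (Invoice, process_invoice, the 2% tolerance) are illustrative assumptions, not any vendor’s actual product:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Invoice:
    supplier_id: str
    po_number: str
    amount: float
    po_amount: float

audit_log: list[dict] = []   # every action recorded: fully traceable

def log(action: str, inv: Invoice, reason: str) -> None:
    audit_log.append({
        "time": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "supplier": inv.supplier_id,
        "po": inv.po_number,
        "reason": reason,   # the rule that fired, in plain language: explainable
    })

def process_invoice(inv: Invoice) -> str:
    # Unbreakable rule: any mismatch beyond tolerance goes to a human, always.
    if abs(inv.amount - inv.po_amount) > 0.02 * inv.po_amount:
        log("escalate", inv, "amount deviates more than 2% from PO")
        return "pending_human_approval"
    log("auto_approve", inv, "amount within the 2% PO tolerance rule")
    return "approved"
```

Note there is no probability anywhere in that path: the same invoice always produces the same decision and the same audit trail.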

Remember that when you’re looking for your next Procurement solution, or you’ll end up with one that is worse, more dangerous, and less repeatable than the last generation solution you have now.

For example, let’s say you implement an agent that monitors the inbound email channel for supplier communications regarding payment instructions and invoices. A communication comes in requesting a change of banking details for a supplier. The IPs and source domain look good, and the new account is at another bank local to the supplier (one it did business with in the past), so the update is sent to the AP system. The next day, an invoice comes in from that supplier for 10 times the number of units on the last PO. It’s from a supplier whose shipment quantities never match the PO and whose discrepancies the buyer always approves, so the invoice is automatically paid. The day after that, another request comes in to change the bank account back to the original. It also passes the AI’s sniff test, so it happens.

No one notices that a multi-million dollar payment was made to a fake supplier on a fake invoice until the real invoice comes in a few days later and gets rejected because the PO has already been matched. The supplier flags an issue two weeks later when its AR team finally gets around to processing the exception. The AP team investigates, tells the supplier an invoice was paid, a back and forth occurs, and when the supplier finally gets the “proof”, it informs the buyer that is NOT its bank account. By now, over three weeks have passed, and the funds are unrecoverable: the thieves transferred the money out of the country and closed the fake account the day the fake invoice was paid.

This is the “governance” you’ll get from an unintelligent agentic solution (masquerading as an AI employee) that does everything on probabilities.
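
Contrast that with the deterministic guardrails a rules-based system would enforce. A minimal sketch, with hypothetical rule names and an illustrative five-day hold period:

```python
from datetime import date, timedelta

HOLD_PERIOD = timedelta(days=5)              # no payments during the hold window
pending_bank_changes: dict[str, date] = {}   # supplier_id -> date of the request

def request_bank_change(supplier_id: str) -> str:
    # Rule 1: every bank-detail change is held for out-of-band verification
    # (a call to the contact already on file, never to one named in the email).
    pending_bank_changes[supplier_id] = date.today()
    return "held_for_out_of_band_verification"

def can_auto_pay(supplier_id: str, invoice_qty: int, po_qty: int) -> bool:
    # Rule 2: no automatic payment while a bank change is inside the hold period.
    changed = pending_bank_changes.get(supplier_id)
    if changed is not None and date.today() - changed < HOLD_PERIOD:
        return False
    # Rule 3: quantity deviations above a hard threshold always need a human,
    # no matter how often this supplier's past discrepancies were approved.
    return invoice_qty <= 1.1 * po_qty
```

Rules like these aren’t smart, but they’re predictable, and predictable beats probabilistic when millions of dollars are on the line.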

When a Conflict Starts, It’s Already Too Late For Procurement To Pay Attention!

Supply Chains are not only hurting, they are breaking, and they have been since the US and Israel renewed the conflict with Iran and more-or-less closed the Strait of Hormuz to pretty much every western country associated with the US.

A Strait that is critical not only for

  • global energy (as it normally sees 20% to 25% of global oil passing through it daily)

but also for

  • natural gas (up to 25%, though at least the shortage will further delay the AI data centres)
  • fertilizer (as it saw up to 50% of urea, ammonia, and sulphur supply passing through it daily, with urea a key fertilizer component)
  • methanol (but at least bootleggers will have to use real grain alcohol now) and petrochemicals
  • etc.

In other words, the Strait being closed off is not just a logistics nightmare for the shipments you were expecting that needed to pass through the Strait on time, it’s a nightmare across your entire supply chain, as all of your suppliers dependent on the oil, natural gas, chemicals, gases, etc. that normally pass through the Strait daily are also suffering their own nightmares. Delays will compound through the chain for the lucky ones, and the rest will see shipments just stop.

And articles that tell you this is a leadership moment are missing the point.

Where it was critical, you should already have known your exposure, had monitoring in place, and been alerted the day the conflict started that an issue was coming your way.

Where supplier Force Majeure was unacceptable, you should already have had the flexibility in your contract to shift, pause, or end the contract immediately upon supplier failure.

Where supply was critical, you should have been dual- or tri-sourcing geographically, with order escalation clauses built into the contracts so you can quickly secure supply when potential shortages are detected.

Where margins are tight or costs can vary widely based upon external events, your cost models should already be taking this into account, should be monitoring for market price changes, and should be updated upon such changes with immediate alerts if prices shift beyond typical market fluctuations.
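
For illustration, a minimal sketch of such a market-price alert, assuming a simple rolling-statistics model; the 3-sigma threshold is a stand-in for whatever “typical market fluctuation” means in your category:

```python
import statistics

def price_alert(history: list[float], new_price: float, sigmas: float = 3.0) -> bool:
    """True if new_price deviates beyond typical market fluctuation."""
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    return abs(new_price - mean) > sigmas * stdev

# e.g. daily commodity quotes feeding a should-cost model
if price_alert([102.0, 101.5, 103.2, 102.8, 101.9], 118.4):
    print("ALERT: input price outside the normal band; refresh the cost model")
```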

And strategic and critical suppliers should already be treated as such: given fair margins, given access to buyer expertise that will help them with efficiency and with negotiating their own raw material contracts, and placed in a financial position where they too can dual- or tri-source and explore optionality in their own supply chains.

Because, as Paul Martyn commented on one of the many articles claiming that the conflict is the time to pay attention and step up (even though, as we stated in our opening, it’s already too late):

If you:

  • defer supplier investment → you pay in disruption
  • squeeze supplier margin → you pay in resilience loss
  • ignore (supply chain) optionality → you pay in constrained decisions and lack of supply

The answer, of course, is to be paying attention to any high risk or high impact category from the day you identify it to the day you end the last product line that uses it. And to use the Busch-Lamoureux Exact Purchasing model to properly place your category, determine which cost factors and risks you need to track, how often, when alerts should be triggered, what mitigations can be taken up front, and what actions need to be taken when an issue likely to cause a disruption arises.
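
For illustration, a minimal sketch of what encoding such a category plan might look like; the field names and the risk/value/complexity placement axes are illustrative assumptions based on this article’s framing, not the published model:

```python
from dataclasses import dataclass

@dataclass
class CategoryPlan:
    category: str
    risk: str                     # low / moderate / high
    value: str
    complexity: str
    cost_factors: list[str]       # which cost drivers and risks to track
    check_frequency_days: int     # how often to refresh the data
    alert_threshold_pct: float    # price/risk shift that triggers an alert
    upfront_mitigations: list[str]
    disruption_actions: list[str]

resins = CategoryPlan(
    category="engineering resins",
    risk="high", value="high", complexity="moderate",
    cost_factors=["crude oil", "natural gas", "ocean freight"],
    check_frequency_days=1,
    alert_threshold_pct=5.0,
    upfront_mitigations=["geographic dual-sourcing", "escalation clauses"],
    disruption_actions=["trigger order escalation", "release safety stock"],
)
```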

Analytics Must Drive Source-to-Pay, But Not Necessarily Gen-AI

Xavier recently penned another great piece on Analytics in P2P: From visibility to actionability, where he highlighted the failings of analytics in traditional P2P:

  • static, backward-looking metrics: spend by category, invoice cycle time, approval rates, compliance rates
  • insights only after transactions are processed, payments are made, and cycles completed
  • late payments multiplying, exceptions accelerating, and supplier risk accumulating
  • lack of operational insight

According to Xavier, P2P can only be modernized if the embedded analytics shift from descriptive to diagnostic.

  • don’t report KPIs, explain the root causes (which approval paths contributed the most to approval time)
  • don’t report exception rates, identify the suppliers that consistently cause them
  • don’t report spend anomalies, break them down and identify root causes (a minimal sketch of this shift follows below)
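
For illustration, a minimal sketch of that descriptive-to-diagnostic shift, assuming invoice events land in a (hypothetical) table with these columns:

```python
import pandas as pd

events = pd.DataFrame({
    "approval_path": ["mgr_only", "mgr_finance", "mgr_finance_legal", "mgr_only"],
    "approval_days": [1.2, 6.5, 14.0, 0.8],
    "supplier":      ["A", "B", "B", "C"],
    "exception":     [False, True, True, False],
})

# Diagnostic, not descriptive: which approval paths drive cycle time...
path_drivers = (events.groupby("approval_path")["approval_days"]
                      .mean().sort_values(ascending=False))

# ...and which suppliers consistently cause the exceptions.
exception_rates = (events.groupby("supplier")["exception"]
                         .mean().sort_values(ascending=False))

print(path_drivers.head(3))
print(exception_rates.head(3))
```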

It’s a great start, but where it needs to get to is actionability. Xavier begins to address this point by stating that the next step is “predictive awareness”, where the system anticipates likely outcomes within active processes, such as predicting which invoices are likely to miss payment terms, which requisitions are likely to stall in approval, or which suppliers are likely to generate disputes based on current patterns, as that allows a Procurement professional to intervene before issues arise.
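
A minimal sketch of such predictive awareness, with hypothetical features and training data; any classifier that produces calibrated probabilities would do:

```python
from sklearn.linear_model import LogisticRegression

# features per invoice: [days_to_due, open_exceptions, supplier_late_rate]
X_train = [[20, 0, 0.05], [3, 2, 0.40], [10, 1, 0.25], [30, 0, 0.02]]
y_train = [0, 1, 1, 0]   # 1 = missed payment terms

model = LogisticRegression().fit(X_train, y_train)

risk = model.predict_proba([[4, 1, 0.35]])[0][1]
if risk > 0.7:
    print(f"intervene now: {risk:.0%} chance this invoice pays late")
```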

Finally, Xavier gets to the main point — the real inflection point comes when analytics begin to recommend actions and influence execution paths. Prescriptive analytics in P2P requires tight coupling between insight and control. If analytics identify a high-risk transaction, the system must be able to route it differently, apply additional validation or prompt a specific decision. If analytics detect a low-risk, repetitive transaction, the system must be able to reduce friction without manual intervention.

But it needs to go one step further. It must not only route differently and apply more controls, it must do so automatically, based on the diagnostic and predictive analytics. It can’t just apply a “one-size-fits-all” approach to automation and kick every exception out for human processing. You can’t always make the default path smarter, because there should be different paths depending on the cost of the purchase, the risk associated with the purchase, and the discrepancy between the invoice, goods receipt, PO, and/or contract terms and conditions. You need multiple streams that are auto-selected by predictive analytics and that support the right actions given the assessment of the conditions, as sketched below.
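
A minimal sketch of that insight-to-control coupling, where the predicted risk score (from a model like the one above) auto-selects the execution stream; the thresholds and stream names are illustrative assumptions:

```python
def route_transaction(risk: float, amount: float) -> str:
    if risk < 0.1 and amount < 5_000:
        return "touchless"            # reduce friction: straight-through
    if risk < 0.4:
        return "extra_validation"     # automated secondary checks, still no human
    if amount > 250_000 or risk >= 0.8:
        return "human_approval"       # high stakes: prompt a specific decision
    return "enhanced_review_queue"    # diagnostic detail attached for fast triage

assert route_transaction(0.05, 1_200) == "touchless"
assert route_transaction(0.85, 40_000) == "human_approval"
```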

The reality is this: except for truly exceptional situations, once you’ve made the decision on what to purchase, procurement should be 100% automated. It’s all e-document exchange, analysis, authorizations, and (payment) transactions. Unless something is really off, a buyer should never be involved once all the workflows, rules, and authorizations are set up.

But this automation should extend back into, and through, source-to-contract. Building on the Busch-Lamoureux Exact Purchasing pocket-cube framework, there are categories that are low risk, low value, and low complexity; you should NOT be buying these manually. “Agentic” automation should be taking care of these for you, considering that even a worst-case screw-up will be of little impact. Then there are categories of moderate risk, value, and/or complexity, which can be fully automated if all of the necessary data is available, there is a cost and supply history to build on, there are no special situations that need to be taken into account, and a worst-case analysis indicates that even a statistically unlikely “bad buy” will be of minimal impact. These should be 90%+ automated from the decision to buy to the recommended award, with extensive analytics and augmented intelligence for human review. And if the buyer likes the default recommendation, it should take just one click for the process to go from award to e-signed contract. (A minimal sketch of this tiering follows.)
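
A minimal sketch of that tiering, where a category’s placement auto-selects its automation level; the tier rules are illustrative assumptions based on this article’s description, not the published framework:

```python
LEVELS = {"low": 0, "moderate": 1, "high": 2}

def automation_tier(risk: str, value: str, complexity: str) -> str:
    worst = max(LEVELS[risk], LEVELS[value], LEVELS[complexity])
    if worst == 0:
        return "fully_agentic"        # worst-case screw-up is of little impact
    if worst == 1:
        return "auto_with_review"     # 90%+ automated, one-click award-to-contract
    return "human_led_augmented"      # strategic: analytics assist, humans decide

assert automation_tier("low", "low", "low") == "fully_agentic"
assert automation_tier("moderate", "low", "moderate") == "auto_with_review"
assert automation_tier("high", "moderate", "low") == "human_led_augmented"
```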

All of this requires very extensive descriptive, diagnostic, predictive, and actionable analytics and intelligence, with extensive, adaptive, robotic process automation ([A]RPA) that can automate everything that should be automated. The reality is that while everything should be sourced (or exactly purchased), once you have all of the (market) intelligence, the standard processes, and the organizational goals encoded, there’s no reason the systems shouldn’t do the majority (or the entirety) of the work for you.

While buyers won’t be replaced by agentic systems (despite the over-hyped BS claims of AI Employees), they will be heavily augmented by them, since most categories aren’t complex, risky, or strategic enough to require human review or intervention.