As per our first three posts on this year’s predictions, you know SI hates predictions posts. It fully despises them, because the vast majority of these posts are pure optimistic fantasy and help no one. Why are the posts like this? Because no one wants to hear sobering reality right off the bat in the new year, and the influencers care more about clicks than about actually helping you.
But the predictions are not only bad, they’re dangerous if you believe them. So we are continuing to lay bare the reality of the situation to make sure you understand that this year isn’t much different than last year, no miracles are coming, and only hard work and the application of your human intelligence are going to get you anywhere. Today we tackle the next set, and we hope we’re at the end of the series, but if we stumble across more bad predictions, we’ll have to do a Part V. But we hope not!
11. Negotiation gets productized.
Here’s the thing: in a few niche industries like electronics, we have a few niche players like Levadata that bundle “should-cost” models + playbooks + concession sequencing for experienced buyers, to help them leverage the state of the market for the best results possible. But they’re hardly used relative to the total electronics market size, as they are used mainly by component buyers / manufacturers, not by consumers of such tech (who would use them to understand the manufacturer’s margins).
Similar offerings don’t exist across most industries. And even if they did, most buyers are not sophisticated enough to use them. Most struggle with a multi-round RFX, let alone detailed should-cost/target-cost models, negotiation playbooks (which have to cover all standard market conditions and unique situations), and the concept of BATNA, especially relative to offers and counter-offers in a structured concession sequence.
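For readers who have never seen the underlying math, here is a deliberately simplified sketch (every figure, rate, and supplier ask below is invented) of a bottom-up should-cost estimate and a BATNA-bounded concession step. Real models cover dozens of cost drivers and market conditions, which is exactly why most buyers can’t handle them.

```python
# Purely illustrative: a toy should-cost estimate and a few rounds of a
# structured concession sequence bounded by the buyer's BATNA.

def should_cost(material, labour, overhead_rate=0.15, target_margin=0.10):
    """Bottom-up estimate of what an item should cost: direct costs,
    plus overhead, plus a fair supplier margin."""
    return (material + labour) * (1 + overhead_rate) * (1 + target_margin)

def buyer_counter(previous_counter, supplier_ask, batna_price, step=0.03):
    """One concession step: move a little toward the supplier's ask,
    but never above the price available from the best alternative (BATNA)."""
    proposal = previous_counter * (1 + step)
    return min(proposal, supplier_ask, batna_price)

fair_price = should_cost(material=42.00, labour=8.50)    # ~63.88
counter = fair_price
for ask in (75.00, 69.00, 64.00):                        # supplier's successive asks
    counter = buyer_counter(counter, ask, batna_price=66.00)
    print(f"supplier asks {ask:.2f} -> buyer counters {counter:.2f}")
```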
Without these domain-relevant niche offerings and career negotiators trained in the deep tech, both of which are few and far between, this is not going to happen. And Artificial Idiocy certainly isn’t going to fill the gap!
12. AI As a “Governance” Engine.
The claim: when you design them well, agents encode judgment, compliance, and brand values into every transaction. Uhm, no! At least not if they are Gen-AI agents, which can’t judge (as they can’t even reason), may or may not execute in compliance with regulations, and will happily screw a supplier (by refusing to pay an invoice) or a customer (by refusing to honour a claim) if that’s what they compute is needed to make you happy or stay turned on. (Say the agent was told to find savings of 500K and its calculations determine that paying certain invoices or honouring certain claims would put that savings goal out of reach; never mind that the goal may never have been achievable when the AI told you it was, because it arbitrarily multiplied a calculation by -1 just to make the math work.)
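To make that concrete, here is a deliberately naive sketch of an “agent” handed nothing but a savings target. Every supplier, invoice, and number below is made up; the point is only that nothing in the objective stops it from withholding perfectly legitimate payments.

```python
# Toy example: an objective-only "agent" with a savings target and no
# compliance constraints. All invoices below are valid, but the agent
# refuses them anyway, because refusing them is the only lever it has
# to hit the number.

SAVINGS_TARGET = 500_000

invoices = [
    {"id": "INV-001", "supplier": "Acme Widgets",   "amount": 320_000, "valid": True},
    {"id": "INV-002", "supplier": "Beta Logistics", "amount": 210_000, "valid": True},
    {"id": "INV-003", "supplier": "Gamma Services", "amount":  95_000, "valid": True},
]

def naive_savings_agent(invoices, target):
    """Refuse to pay invoices, largest first, until the 'savings' target is hit."""
    refused, saved = [], 0
    for inv in sorted(invoices, key=lambda i: i["amount"], reverse=True):
        if saved >= target:
            break
        refused.append(inv["id"])
        saved += inv["amount"]
    return refused, saved

refused, saved = naive_savings_agent(invoices, SAVINGS_TARGET)
print(f"Refused {refused} to 'save' {saved:,}")   # valid invoices, refused anyway
```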
Governance, by definition, requires the act of governing. And governing, by definition, requires the wisdom as well as the authority to conduct the affairs of the organization. And only truly intelligent beings (i.e. HUMANS) can acquire wisdom over time.
13. There will be no more “X” employees because AI will replace them all!
First of all, how many times do we have to repeat that there are NO AI Employees, that you shouldn’t believe the degrading, demeaning, and, frankly, dehumanizing claims, and that you definitely DO NOT want Agentic Buying through fake AI Employees? Secondly, AI can’t even do the basic tasks that even the dumbest drunken plagiarist intern can do on a daily basis. But let’s not digress too far before giving you the major examples.
Claim #1: Contract Administrator / Staff Attorney
THE PROPHET has been trying to Kill ALL the Lawyers for quite some time now, and it seems he’s not alone.
But here’s the thing. While AI systems are pretty good (and as good as the drunken plagiarist interns) at spotting grammar errors, redlining against standard clauses, pointing out missing clauses in most organizational contracts, etc., they aren’t good at everything. They can’t identify unaddressed risks without being told what those risks are, they can’t judge the full extent of liability without understanding what those liabilities could be, and they can’t judge geo-political and supply chain risks without broader context.
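To see why “pointing out missing clauses” is the easy part, consider this minimal sketch (the clause list and contract snippet are invented): anything not on the checklist, an unusual liability, a geo-political exposure, is simply invisible to this kind of check.

```python
# Minimal sketch of "redlining against standard clauses": flag required
# clause headings that never appear in a draft. Clause list and draft text
# are invented for illustration.

REQUIRED_CLAUSES = [
    "limitation of liability",
    "indemnification",
    "termination for convenience",
    "governing law",
]

def missing_clauses(contract_text, required=REQUIRED_CLAUSES):
    """Return the required clause headings that are absent from the draft."""
    text = contract_text.lower()
    return [clause for clause in required if clause not in text]

draft = """This Agreement shall be subject to the Governing Law of the State of X.
Either party may exercise Termination for Convenience on thirty (30) days notice."""

print(missing_clauses(draft))   # -> ['limitation of liability', 'indemnification']
```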
Plus, they can’t always back up their suggestions; they often make up case law, case decisions, and authors; and they can’t always judge the requirements of potentially relevant regulations. And we’ve seen many times what happens when even trained lawyers use AI: they get lazy, fall for the slop, and get reprimanded and fined by judges tired of the laziness (with a recent example from November being Mata v. Avianca, Inc.). The previous link also lists three other notable cases where lawyers (and their firms) were fined and sanctioned, but, by now, there are dozens!
But hey, go ahead and replace your lawyer, write bad contracts, make decisions on fake case law, and risk your entire business if you want to. (If you must, it’s probably safe to get rid of the intern who does the redlining and the clerk who does the filing, since the AI is probably just as good at that, but do not ever, ever replace a real, qualified lawyer with a piece of sh!t “AI”.)
Claim #2: Spend Analyst
Sure, you can buy auto-classification that might get to 95%, auto-cubing that can build any cube you can imagine, and auto-analytics that can run the entire slate of standard analytics, compute past, current, and projected costs against past, current, and projected market data based upon current buying patterns, and suggest items, categories, and/or suppliers to (re)source, switch from/to, and possibly (re)shape demand.
But this doesn’t mean that those are the right items or categories to chase, the right suppliers to use, or even the right area in which to focus your efforts. It’s all based on math, and on an assumption of consistent, stable market conditions, but those don’t exist anymore. If you’re not also considering geo-politics, natural disaster risk, and uncertain logistics (when the Panama Canal reaches historic lows for much of the year, terrorists block the Red Sea, and unpredictable weather makes sailing around the capes more dangerous than ever), and sourcing for resiliency and not just cost, your “spend” analytics are useless. You need an analyst with a good understanding of economics (and access to an economist), geo-politics (and access to local experts), and resiliency, not just total-cost-of-ownership buying. (Now, the junior data pushers are probably all dead and gone, but not the real experts!)
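As a concrete illustration of “it’s based on math and an assumption of stable market conditions”, here is a deliberately naive sketch of what a projected-cost calculation boils down to (all prices and volumes are invented): a trend line over trailing prices, times projected volume. Canal levels, conflict zones, weather, and resiliency appear nowhere in the formula.

```python
# Naive "projected spend" analytics: extrapolate trailing unit prices and
# multiply by projected volumes. All data below is invented for illustration.
# Note what's missing: no disruption risk, no logistics constraints, no
# resiliency considerations -- just arithmetic on yesterday's numbers.

historical_unit_price = {
    "resin":       [2.10, 2.15, 2.12, 2.18],   # trailing quarterly prices
    "copper_wire": [5.40, 5.55, 5.70, 5.85],
}
projected_volume = {"resin": 120_000, "copper_wire": 40_000}   # units next year

def projected_spend(prices, volumes):
    """Extend each price series by its average quarter-over-quarter change,
    then multiply the extrapolated price by the projected volume."""
    total = 0.0
    for item, series in prices.items():
        avg_change = (series[-1] - series[0]) / (len(series) - 1)
        next_price = series[-1] + avg_change
        total += next_price * volumes[item]
    return total

print(f"Projected spend: {projected_spend(historical_unit_price, projected_volume):,.2f}")
```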
Claim #3: Sourcing Event Manager
Now, transactional buyers are gonna get replaced by autonomous systems that use next-generation (advanced) robotic process automation enhanced with machine learning in Agentic systems, because ordering off of contracts, ordering from catalogs, and doing low-cost non-strategic buys through quick-quote RFPs doesn’t take any brainpower whatsoever (making it perfect for AI that has none).
But strategic sourcing requires more than just buying off of contracts, ordering from catalogs, and issuing quick-quote RFPs! It requires defining key criteria (that go beyond what engineering, marketing, or maintenance provides), identifying validated suppliers (or identifying suppliers that can be easily validated), holistically analyzing the market conditions, determining the best event type, determining the negotiation strategy, etc. The tools might be able to help with initial supplier identification, collecting numerical (commodity) market data, letting you know what event types were run in the past, compiling fact-based playbooks, and, of course, automating the tactical steps of the process (see the sketch below), but they can’t do real strategic sourcing that requires real human intelligence. And with today’s geo-political uncertainty, that human intelligence is needed more than ever, which means that expert sourcing professionals are needed more than ever. (But dumb buyers will join the dodos.)
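For contrast, here is a minimal sketch of the kind of transactional buy that genuinely is automatable: a quick-quote award to the lowest compliant, pre-validated supplier (all supplier names and quotes are invented). Everything strategic, defining the criteria, validating the suppliers, reading the market, happens (or doesn’t) before this function is ever called.

```python
# Purely illustrative: awarding a quick-quote RFP to the cheapest quote from
# a pre-validated supplier that meets the required lead time. The strategic
# work (criteria, validation, market analysis) is assumed to have been done
# by a human before this point.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Quote:
    supplier: str
    unit_price: float
    lead_time_days: int
    validated: bool          # pre-validated by a human sourcing professional

def award_quick_quote(quotes, max_lead_time_days: int) -> Optional[Quote]:
    """Pick the cheapest quote from a validated supplier within the lead time."""
    eligible = [q for q in quotes
                if q.validated and q.lead_time_days <= max_lead_time_days]
    return min(eligible, key=lambda q: q.unit_price) if eligible else None

quotes = [
    Quote("Supplier A", 9.80, 21, True),
    Quote("Supplier B", 9.15, 45, True),    # cheaper, but too slow
    Quote("Supplier C", 8.90, 14, False),   # cheapest, but never validated
]
print(award_quick_quote(quotes, max_lead_time_days=30))   # -> Supplier A
```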
There are more ridiculous claims, but you get the point. Skilled jobs are not going away. (But bit pushers are.)
14. New standards for Ethical and Sustainable Supply Chains.
In some countries, current standards aren’t even being met. Good luck getting new standards introduced, since there aren’t a lot of global multinationals (those headquartered in the US in particular) that want even more rigour, especially if it will cost money! As long as laws are being minimally met, or reasonably sized “facilitation payments” can make problems go away, this is not a priority, especially when going beyond the minimum would cost even more money!
15. The “AI Singularity” is coming faster than we can process.
It’s not, because the models can’t get bigger, there is no more data, and no one has yet come up with a model that has any hope of even getting close to the actual intelligence of a pond snail.
Plus, if it ever did happen, considering a “singularity” is actually a black hole, it would rapidly consume (i.e. destroy) the Earth, and we wouldn’t have to worry about it. This is just more nonsense from the A.S.S.H.O.L.E.
