FinOps

Finabeo Team
Dec 2, 2025
What you'll learn
In this post, we’re going to look at the recent announcements from October 28th regarding the Microsoft and NVIDIA partnership and translate them into plain English for your finance strategy. You’ll learn:
The Basics: What the new "Blackwell" chips and "Agentic AI" actually are (without the headache).
The Change: Why moving from chatbots to "agents" changes how you need to think about cloud spending.
Assessing Risk: How autonomous AI can accidentally drive up your bill if you aren't careful.
The Solution: Simple steps to govern these new tools and keep your cloud costs optimised.
The News: Microsoft & NVIDIA collaborating
On October 28th, 2025, at the GTC conference in Washington D.C., Microsoft and NVIDIA announced a deeper partnership and unveiled a new class of infrastructure designed to support the next generation of artificial intelligence. Specifically, Microsoft Azure is integrating NVIDIA's new "Blackwell" processors.
To put this in perspective, they aren't just adding a few new computers. They are deploying massive clusters, some featuring over 4,600 of these ultra-powerful GPUs linked together. Think of it as upgrading from a standard family car to a Formula 1 car. It is incredibly fast and powerful, but it burns fuel at a much higher rate.
For a business executive looking at a monthly cloud bill that’s already exceeding £50,000, this signals a change. The infrastructure is becoming more capable, but also potentially more expensive if we don't keep an eye on it.
Enter "Agentic AI": The new digital workforce
The main driver behind this massive upgrade is something called "Agentic AI". Until recently, most AI was passive; you typed a prompt into ChatGPT, and it gave you an answer. It waited for you.
Agentic AI is active. It acts as an autonomous agent. You might give it a broad goal, such as "Improve our supply chain logistics," and the AI will independently analyse data, run simulations, and perhaps even execute orders. It works in the background, often 24/7.
This sounds brilliant for productivity, but here is the financial reality: every minute that digital agent spends "thinking" or working costs you money in cloud computing fees. Unlike a human employee on a fixed salary, an AI agent is paid by the hour—or even by the second—for the resources it consumes.
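To make that concrete, here is a minimal back-of-the-envelope sketch of what an always-on agent might cost. The hourly rate, instance count and hours of activity are purely illustrative assumptions, not Azure list prices.

```python
# Back-of-the-envelope cost of an always-on AI agent.
# All figures below are illustrative assumptions, not Azure list prices.

HOURLY_RATE_GBP = 75.0      # assumed on-demand price for one GPU instance
INSTANCES = 2               # assumed number of instances the agent keeps busy
HOURS_PER_DAY = 24          # the agent never clocks off
DAYS_PER_MONTH = 30

monthly_cost = HOURLY_RATE_GBP * INSTANCES * HOURS_PER_DAY * DAYS_PER_MONTH
print(f"Estimated monthly agent cost: £{monthly_cost:,.0f}")
# Estimated monthly agent cost: £108,000
```

The point is not the exact number; it is that an agent working around the clock multiplies whatever hourly rate you are paying by every hour in the month.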
Handling Scale Without Waste
We already know that cloud waste is a significant issue. Industry reports often estimate that businesses waste around 30% to 32% of their total cloud spend on resources they pay for but don't actually use. If your bill is £100,000 a month, that is £30,000 effectively thrown away.
With Agentic AI, the risk of waste increases. Because these agents can scale up their own resources to solve problems, you could theoretically wake up to a bill that has spiked overnight because an agent decided it needed more power to finish a task. The announcement on October 28th highlighted that these new systems operate at "gigawatt scale"—a term usually reserved for power plants, not IT departments. That alone should tell us that energy and cost efficiency need to be top of mind.
Cost Governance: How to stay in control
So, how do we embrace this innovation without bankrupting the company? The answer lies in governance. Microsoft's announcement included updates to Azure AI Foundry, which is essentially a management platform. I’d suggest looking at this as your "Control Tower."
We need to apply the same rigour to these digital agents as we do to human staff. You wouldn't let a new employee sign off on a £10,000 expense without approval, and you shouldn't let an AI do it either.
It’s probably time to ask your technical leads to implement "Policy-Based Governance". This means setting hard limits: budget caps that alert or cut off spending, and restrictions on which machine sizes an agent can provision without a human sign-off. A simple sketch of that kind of guardrail follows below.
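As a rough illustration of what "setting limits" looks like in practice, here is a minimal sketch of a budget-cap check an engineering team might place in front of any agent-initiated provisioning. The function name, budget figure and spend values are hypothetical; in a real Azure environment this logic would normally live in Azure Policy or Cost Management budgets rather than hand-rolled code.

```python
# Minimal sketch of a spend guardrail for agent-initiated provisioning.
# The budget figures and the request structure are hypothetical examples.

MONTHLY_BUDGET_GBP = 10_000          # hard cap agreed with finance
APPROVAL_THRESHOLD_GBP = 1_000       # single requests above this need a human

def approve_provisioning(current_month_spend: float, request_cost: float) -> bool:
    """Return True only if the request stays within policy."""
    if request_cost > APPROVAL_THRESHOLD_GBP:
        print("Blocked: request exceeds single-approval threshold, escalate to a human.")
        return False
    if current_month_spend + request_cost > MONTHLY_BUDGET_GBP:
        print("Blocked: request would breach the monthly budget cap.")
        return False
    return True

# Example: an agent asks for more GPU time late in the month.
print(approve_provisioning(current_month_spend=9_600, request_cost=800))  # False
print(approve_provisioning(current_month_spend=4_200, request_cost=300))  # True
```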
How to keep cloud costs low: A few tips
If you want to reduce your current cloud spend to make room for this new AI budget, there are two very effective levers you can pull right now.
The first is Tagging. This is simply the practice of labelling every digital asset with a department name or project code. It sounds basic, but it is powerful. Companies that implement strict tagging policies often see cost reductions of 15% to 25% within the first few months. Why? Because when a Department Head sees a bill with their name on it, they suddenly become very interested in turning off unused machines.
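To show why tags matter, here is a minimal sketch of the per-department breakdown that tagging makes possible. The resource names and cost figures are made up for illustration; in practice this data would come from your cloud provider's cost export.

```python
# Minimal sketch of a per-department cost breakdown driven by tags.
# The resources and costs below are made-up illustrative data.
from collections import defaultdict

billed_resources = [
    {"name": "vm-train-01", "monthly_cost_gbp": 12_400, "tags": {"department": "Data Science"}},
    {"name": "vm-web-02",   "monthly_cost_gbp": 1_900,  "tags": {"department": "Marketing"}},
    {"name": "vm-old-07",   "monthly_cost_gbp": 3_100,  "tags": {}},  # untagged: nobody owns it
]

spend_by_department = defaultdict(float)
for resource in billed_resources:
    owner = resource["tags"].get("department", "UNTAGGED")
    spend_by_department[owner] += resource["monthly_cost_gbp"]

for department, cost in sorted(spend_by_department.items(), key=lambda kv: -kv[1]):
    print(f"{department:<15} £{cost:>10,.0f}")
```

The "UNTAGGED" line is usually the most revealing one: it is spend that no department has to answer for, which is exactly where waste hides.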
The second is Commitment. The "pay-as-you-go" model is the most expensive way to buy cloud. If you know you will be using these new NVIDIA Blackwell chips for the next one or three years, you should commit to that upfront. By using "Savings Plans" or "Reserved Instances", you can typically lower the unit cost by around 40% to 60% compared to the on-demand price, as the worked example below shows.
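As a worked example of what a 40% to 60% discount means in cash terms, here is a minimal sketch. The hourly price and the discount are illustrative assumptions, not published Azure pricing for Blackwell-based instances.

```python
# Illustrative comparison of pay-as-you-go versus a committed rate.
# The hourly price and discount are assumptions, not published Azure pricing.

ON_DEMAND_GBP_PER_HOUR = 100.0   # assumed pay-as-you-go rate for a GPU instance
COMMITMENT_DISCOUNT = 0.50       # mid-point of the typical 40%-60% range
HOURS_PER_YEAR = 24 * 365

on_demand_annual = ON_DEMAND_GBP_PER_HOUR * HOURS_PER_YEAR
committed_annual = on_demand_annual * (1 - COMMITMENT_DISCOUNT)

print(f"On-demand for a year: £{on_demand_annual:,.0f}")    # £876,000
print(f"With a 1-year commit: £{committed_annual:,.0f}")    # £438,000
print(f"Annual saving:        £{on_demand_annual - committed_annual:,.0f}")
```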
Ending thoughts
The October 28th announcement is a clear signal that businesses need to prepare for the age of AI, and that this new infrastructure can genuinely accelerate what enterprises are able to do. However, speed has a price. As we move from simple chatbots to complex, autonomous agents, our bills will grow in both size and complexity.
For the finance executive, the goal isn't to stop the innovation; it is to build the guardrails. By focusing on visibility, setting strict budget caps, and optimising the resources you already have, you can ensure that your journey into AI drives profit, rather than just driving up costs.