Measuring IT value-for-money: Am I overpaying for software engineers…?


Salespeople have sales targets: revenue, ARR, profit, lead conversion, you name it. These help executives see whether a particular person delivers on their objectives and helps the business grow. Accountants have operational targets: number of invoices properly booked, days to close the books, accounting cost as a % of revenue, etc. These help executives see whether the revenue generated by salespeople isn’t being eaten up by operations.

So… why don’t software engineers usually have productivity / value-for-money metrics? Maybe they should. It wasn’t an issue a few years back: money was cheap, returns from software projects were satisfactory, and there was a shortage of engineers on the market, so we were happy to find anyone to deliver business applications. But those good old days are over, and today we hear more about layoffs and AI replacing engineers than about new salary records in IT. What I hear more and more often as a consultant is: are my engineers worth the money they earn? How can I evaluate whether this highly paid senior engineer really delivers twice as much, twice as well, as a mid-level engineer paid half as much? Do I have a well-organised, best-value-for-money IT organisation, or… a bunch of duchies and small kingdoms, simulating work and over-engineering my systems just to stay irreplaceable?

Can I measure developer / team / whole-IT value-for-money just like I measure the value delivered by sales departments?

DORA Metrics & why they’re not enough

Let’s first clarify the key thought of this article versus the DORA Metrics concept. The DORA Metrics authors didn’t ask themselves “is IT worth the money we spend on it?” – instead they asked “how do we check whether our team productivity is progressing or regressing?”. Those are completely different questions: business case vs continuous improvement. Allow me to decompose that claim first.

DORA Metrics are a set of four key indicators developed by the DevOps Research and Assessment (DORA) team to measure the performance of software delivery and operational practices. The selection process was data-driven, based on statistical analysis of surveys from thousands of organizations worldwide. Here are the four metrics:

  • Deployment Frequency – how often your organization deploys code to production;
  • Lead Time for Changes – time from code committed to code successfully running in production;
  • Change Failure Rate – percentage of deployments that cause failures in production;
  • Mean Time to Restore Service (MTTR) – how long it takes to restore service when an incident occurs in production.

These metrics are widely used as industry standards for assessing DevOps maturity and SDLC (Software Delivery Lifecycle) efficiency.
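To make the definitions concrete, here is a minimal sketch of how the four metrics could be computed from deployment and incident records. The data model and field names are my own illustration, not part of any official DORA tooling:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Deployment:
    committed_at: datetime   # first commit in the change set
    deployed_at: datetime    # when the change landed in production
    failed: bool             # did the deployment cause a production failure?

@dataclass
class Incident:
    detected_at: datetime
    restored_at: datetime

def dora_metrics(deployments: list, incidents: list, period_days: int) -> dict:
    """Compute the four DORA metrics over a reporting period."""
    lead_times = sorted(d.deployed_at - d.committed_at for d in deployments)
    restore_times = [i.restored_at - i.detected_at for i in incidents]
    return {
        "deployment_frequency_per_day": len(deployments) / period_days,
        # median lead time, reported in hours
        "median_lead_time_hours": lead_times[len(lead_times) // 2].total_seconds() / 3600,
        # share of deployments that caused a production failure
        "change_failure_rate": sum(d.failed for d in deployments) / len(deployments),
        # mean time to restore, reported in hours
        "mttr_hours": sum(r.total_seconds() for r in restore_times) / len(restore_times) / 3600,
    }
```

In practice the inputs would come from your CI/CD and incident-management tooling rather than hand-built records, but the arithmetic stays this simple.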

But Efficiency is not the same as Value-for-Money. Why? Because:

  • Deployment Frequency – we can deploy features every day that bring the business no value: changes the business does not even need, or yet more polish on an already working feature;
  • Lead Time for Changes – we can have fully automated, AI-driven code review and test automation… for a non-critical business application used by 10 business users;
  • Change Failure Rate – we can have zero, just because we hired 1000 offshore testers and ask them to rerun all the tests every time we want to deploy to production;
  • Mean Time to Restore Service (MTTR) – we can keep it close to zero, just because we hired 1000 offshore software engineers for our small app to fix issues immediately after detection.

Simply speaking – DORA Metrics validate that the oars are not breaking and that we row fast while keeping our direction. They do not validate whether the ship is likely to reach its destination – or whether the crew is too big, or too expensive, for what we transport. Under the assumption that IT is correctly sized and delivering what is truly needed, DORA Metrics are great. But that is usually a really big assumption. Don’t get me wrong – I am not attacking DORA Metrics as insufficient for their purpose – I am claiming that they are simply not enough if we want to measure IT value added, and definitely the wrong way to measure individual developer productivity (Marcin explained such antipatterns well in this article on LinkedIn).

IT Metrics Cascade

I recently had a customer who tasked me with building a Cascade Model on top of which we could define metrics to track the progress/regress of IT organization efficiency and team & engineer performance. We built the metrics on three pillars, each anchored by one of these statements:

  • We pay Engineers for the job done. Not for beautiful code, not for introducing the newest frameworks, not for skills – for taking responsibility for part of the solution and delivering / operating it. I prefer an engineer who has some knowledge gaps in the newest frameworks, but on whom I can rely: when I ask “will this be done on time, and will it be done well enough?”, the person takes it as a personal goal. We really don’t need perfectionists with a PhD in every technology area – we need reliable business partners who understand what pays their salaries: the problems they solve, not the technical perfection of their solutions.
  • We pay Teams for solving business problems – on time, and with the capability to keep solving those problems in the future. We need a reliable unit that acts as a business partner and improves over time. We need teams that can find a balance between speed and quality – and that balance is linked to the business purpose of the application they build.
  • We pay for IT because the solutions it creates are cheaper than doing the work manually / without the tools it provides. Each delivery should have a financial business case (operational savings, de-risking, or growth potential). We pay IT management to organise IT in a way that makes those business cases keep improving.

The model we built has multiple metrics at each level – linked to business capabilities & objectives and architecture complexity, and tracked over time. The idea behind it was to draw a picture, not a single dot – each metric should be analysed together with its corresponding ones (for example, speed metrics appear in the same dashboards as quality metrics; individual-contribution metrics are presented alongside cooperation metrics, etc.). We didn’t want to build a KPI model (every single metric can be gamed); our goal was to analyse how the metrics change over time and to evaluate the whole picture (whether, in general, IT is progressing or regressing).
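As a sketch of that “picture, not a dot” idea – pairing every speed metric with a balancing quality or cooperation metric, and reading trends rather than absolute values – something like this could flag suspicious combinations. All metric names here are illustrative, not the customer’s actual model:

```python
# Each speed/output metric is read together with a balancing metric.
# The boolean says whether a RISE in the balancing metric is a bad sign
# (failure rate rising = bad; review participation falling = bad).
PAIRS = [
    ("deployments_per_week", "change_failure_rate", True),
    ("commits_per_engineer", "review_participation", False),
]

def trend(series: list) -> int:
    """Crude direction of a metric over time: +1 rising, -1 falling, 0 flat."""
    delta = series[-1] - series[0]
    return (delta > 0) - (delta < 0)

def evaluate(history: dict) -> list:
    """history: metric name -> values per period (oldest first).
    Flags pairs where speed improves while its balancing metric degrades –
    the 'cheated metric' pattern a single-KPI view would miss."""
    findings = []
    for fast, balance, rise_is_bad in PAIRS:
        balance_degrading = (
            trend(history[balance]) > 0 if rise_is_bad else trend(history[balance]) < 0
        )
        if trend(history[fast]) > 0 and balance_degrading:
            findings.append(f"{fast} up, but {balance} degrading")
    return findings
```

The point is the shape, not the arithmetic: no single number triggers an alert, only a combination of trends does.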

None of the metrics we created is an infallible oracle or a clear signal on its own, but together they allow us to assess whether we, as IT, are heading in the right direction. And all of the metrics I’ve shared can be tracked automatically… if you have a standardised IT-for-IT landscape and a similar delivery process across the company 🙂 So once again, Platform Engineering proves its value as a concept here – with distributed, diversified DevSecOps tooling, gathering all that data and calculating the metrics will be… super tricky.

If you see value in building such a model – and in organising your IT-for-IT landscape so that the data can be gathered automatically, helping you track your IT value-for-money in real time – I can help you build such an IT organization. Do not hesitate to contact me here!