You already know that comparing headline rent across a multi-country portfolio is meaningless. The challenge isn’t understanding the metrics — it’s building a normalisation framework that holds up when you’re comparing a Mietfläche-quoted office in Munich with a NIA-measured site in London and a BOMA-rentable space in Dublin, each with different service charge inclusions, different tax treatments, and different lease structures. This is where most corporate portfolios break down — not because the RE team doesn’t understand cost benchmarking, but because the data doesn’t allow it.

The normalisation problem is structural, not analytical

Most RE teams can build a benchmarking spreadsheet. The problem is that the spreadsheet breaks every time a new site is added, a lease rolls, or a currency moves. The structural issues are well-known but rarely solved systematically:

  • Area measurement divergence — GIF Mietfläche includes a common area factor that NIA excludes. BOMA rentable area includes load factors that vary by building. A 5-15% variance in quoted area translates directly into a 5-15% error in your cost/sqm. Without mapping every lease to a common area standard (ideally NIA or equivalent), your entire benchmarking exercise is built on inconsistent denominators.
  • Service charge opacity — In some markets, the service charge is a transparent pass-through with annual reconciliation. In others, it’s a bundled fixed fee that includes items the next market charges separately. The gap between a “fully inclusive” rent in Amsterdam and a “triple net” lease in Frankfurt isn’t obvious from the headline numbers. You need to decompose both to compare.
  • FX noise — Spot rates create false movements in cost performance. A site in Stockholm that looks 8% more expensive this quarter may have had zero cost change — the SEK simply moved. Quarterly average rates dampen this, but for portfolios spanning 5+ currencies, you need to distinguish real cost changes from FX artefacts in every reporting cycle.
  • Mixed contract types — A managed agreement, a serviced desk, and a conventional lease represent fundamentally different cost structures. The managed agreement bundles FM, utilities, and sometimes furniture. The serviced desk includes IT. Unless you either decompose all-in costs or gross up conventional leases to a true total occupancy cost, your comparison is apples-to-oranges by design.
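The area-mapping step above can be sketched as follows. The conversion ratios and the service-charge split are illustrative assumptions, not published standards — in practice every lease must be mapped to the common area basis individually.

```python
# Sketch: normalise quoted rents to a comparable net cost per sqm NIA.
# The ratios below are ASSUMED for illustration; real conversion factors
# vary by building and must come from the measured lease documentation.

AREA_TO_NIA = {
    "NIA": 1.00,           # already net internal area
    "GIF_MF": 0.92,        # Mietfläche includes a common-area share (assumed)
    "BOMA_RENTABLE": 0.88  # rentable area includes a load factor (assumed)
}

def comparable_cost_per_sqm(quoted_rent_per_sqm, area_standard,
                            bundled_service_charge_per_sqm=0.0):
    """Strip any bundled service charge, then restate on an NIA denominator."""
    net_rent = quoted_rent_per_sqm - bundled_service_charge_per_sqm
    # A smaller NIA denominator raises cost/sqm, so divide by the ratio.
    return net_rent / AREA_TO_NIA[area_standard]

# A Munich office quoted at EUR 300/sqm Mietfläche, inclusive of an assumed
# EUR 40/sqm service charge, restated as net rent per sqm NIA:
print(round(comparable_cost_per_sqm(300, "GIF_MF", 40), 2))  # -> 282.61
```

The same function applied across the portfolio gives every site the same denominator and the same rent definition — the precondition for any of the ranking that follows.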

None of this is new information. What’s new is the expectation that you solve it continuously, not in a quarterly fire drill.
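The FX decomposition described above — separating real cost change from currency artefact — is a one-line split once costs are held in local currency. The rates and figures here are illustrative assumptions:

```python
def real_cost_change(local_cost_prev, local_cost_now, fx_prev, fx_now):
    """Split a reported cost movement into real change and FX artefact.
    fx_* are quarterly-average rates: reporting currency per unit of local."""
    reported = local_cost_now * fx_now - local_cost_prev * fx_prev
    real = (local_cost_now - local_cost_prev) * fx_prev  # constant-currency view
    return real, reported - real

# The Stockholm example from the text: local cost unchanged, SEK moves ~8%
# (rates assumed for illustration).
real, fx_effect = real_cost_change(1_000_000, 1_000_000, 0.088, 0.095)
print(real, round(fx_effect))  # -> 0.0 7000: the entire movement is FX
```

Running this split every reporting cycle is what lets the dashboard say "Stockholm is flat in local terms" instead of flagging a phantom 8% increase.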

Beyond cost/sqm: the metrics that actually drive decisions

Cost/sqm is the foundation, but it’s a blunt instrument for a 50-site portfolio. The metrics that surface actionable outliers are the ones that connect space cost to business context:

Effective cost per occupied desk

In a hybrid world, nominal cost per desk is nearly useless. What matters is the effective cost — total occupancy cost divided by the number of desks that are actually used on an average day. A site with 300 desks at 35% average utilisation has an effective cost per occupied desk that’s nearly 3x the nominal figure. This is the number that makes leadership pay attention, because it quantifies the cost of empty space in terms they can act on.
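The arithmetic behind that "nearly 3x" claim is simple division, with the annual cost figure here assumed for illustration:

```python
def effective_cost_per_occupied_desk(total_occupancy_cost, desks, avg_utilisation):
    """Total occupancy cost divided by desks actually used on an average day."""
    occupied_desks = desks * avg_utilisation
    return total_occupancy_cost / occupied_desks

# The example from the text: 300 desks at 35% average utilisation,
# with an assumed EUR 1.8m annual occupancy cost.
nominal = 1_800_000 / 300
effective = effective_cost_per_occupied_desk(1_800_000, 300, 0.35)
print(round(effective / nominal, 2))  # -> 2.86, i.e. nearly 3x the nominal figure
```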

Cost variance to market

Ranking sites by internal cost/sqm tells you which sites are expensive relative to each other. It doesn’t tell you whether that’s because of the market or because of the deal. A site at €650/sqm in Stockholm CBD might be well-positioned against a prime market benchmark of €680. A site at €520/sqm in a secondary German city might be 20% above comparable space. The variance to market — your cost vs. current market benchmark for equivalent space in that submarket — is what separates portfolio insight from portfolio data.

This requires maintaining market benchmarks at the submarket level, updated at least quarterly. City-level averages are too coarse. An office in Kungsholmen and an office in Norrmalm are in the same city but different rent tiers.
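The variance-to-market metric itself is a simple ratio against the submarket benchmark; the benchmark figures below are assumed to reproduce the two examples from the text:

```python
def variance_to_market(site_cost_per_sqm, benchmark_per_sqm):
    """Fractional variance: positive = above market, negative = below."""
    return (site_cost_per_sqm - benchmark_per_sqm) / benchmark_per_sqm

# The two examples from the text (benchmark figures assumed):
print(f"{variance_to_market(650, 680):+.1%}")  # Stockholm CBD: -4.4%
print(f"{variance_to_market(520, 433):+.1%}")  # secondary German city: +20.1%
```

The ranking that matters is by this variance, not by raw cost/sqm — it is the number that survives the "but that market is just expensive" objection.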

Occupancy cost intensity

For sites that can be tied to headcount or revenue — which in most corporates is all of them — the intensity ratio (cost/employee or cost as % of local revenue) reveals strategic misalignment. A regional sales office running at €9,200 per employee when the portfolio average is €5,800 isn’t just an outlier — it’s a signal that the space allocation, the location, or the contract terms are disconnected from the business need at that site.
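Flagging intensity outliers is a threshold test against the portfolio average. The 25% threshold and the site figures are illustrative assumptions:

```python
def intensity_outliers(sites, portfolio_avg, threshold=0.25):
    """Flag sites whose cost/employee deviates more than `threshold`
    (as a fraction) from the portfolio average."""
    return [name for name, cost_per_emp in sites.items()
            if abs(cost_per_emp - portfolio_avg) / portfolio_avg > threshold]

# Illustrative figures, including the regional sales office from the text:
sites = {"regional_sales": 9_200, "hq": 6_100, "back_office": 5_400}
print(intensity_outliers(sites, portfolio_avg=5_800))  # -> ['regional_sales']
```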

Total cost of remaining lease obligation

Benchmarking point-in-time costs is useful. But for portfolio strategy, the forward-looking metric matters more: what’s the total remaining cash commitment on each lease, including rent, service charge, and dilapidations? A site that looks cheap per sqm but has 7 years of unexpired term and a full repairing obligation represents a very different risk profile than a site at the same cost with a 12-month break clause.
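That risk difference can be made concrete by computing the committed cash to lease end (or to the next break, if exercised). All figures below are assumed for illustration:

```python
def remaining_obligation(monthly_rent, monthly_service_charge, months_remaining,
                         dilapidations_estimate=0.0, months_to_break=None):
    """Total remaining cash commitment: rent + service charge to the horizon,
    plus an estimated dilapidations liability at exit."""
    horizon = (min(months_remaining, months_to_break)
               if months_to_break is not None else months_remaining)
    return (monthly_rent + monthly_service_charge) * horizon + dilapidations_estimate

# Two sites at the same monthly cost, very different commitments (figures assumed):
full_term = remaining_obligation(40_000, 8_000, months_remaining=84,
                                 dilapidations_estimate=250_000)  # 7yr FRI lease
flexible = remaining_obligation(40_000, 8_000, months_remaining=84,
                                months_to_break=12)               # 12-month break
print(full_term, flexible)  # -> 4282000 576000
```

Same cost per sqm, a 7x difference in committed cash — which is why the forward metric, not the point-in-time one, should drive exit and renegotiation strategy.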

The data architecture problem

The real blocker isn’t methodology — it’s data architecture. In most 50-site portfolios, the cost components needed for true total occupancy cost are scattered across:

  • The lease abstract (base rent, break dates, indexation)
  • The FM contract (service charge, maintenance, cleaning)
  • The finance system (actual payments, accruals, recharges)
  • The property tax statement (rates, local levies)
  • The utilities provider (energy, water — separately metered or not)

Assembling these into a single cost stack per site is the actual work. And it needs to happen continuously, not once a year for the board pack. The portfolios that benchmark effectively are the ones where this data flows into a single system automatically — through API integrations, structured file imports, or AI-assisted extraction from lease documents.
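In data terms, the "single cost stack per site" is a merge keyed on site ID across those five sources. The source shapes and field names here are assumptions for illustration — real feeds will need mapping and reconciliation logic on top:

```python
from collections import defaultdict

def build_cost_stack(*sources):
    """Merge per-site cost components from multiple source systems into one
    record per site, then total them into a single occupancy cost figure."""
    stack = defaultdict(dict)
    for source in sources:
        for site_id, components in source.items():
            stack[site_id].update(components)
    for components in stack.values():
        components["total_occupancy_cost"] = sum(
            v for k, v in components.items() if k != "total_occupancy_cost")
    return dict(stack)

# One site, five sources (all field names and figures assumed):
stack = build_cost_stack(
    {"MUC-01": {"base_rent": 1_200_000}},      # lease abstract
    {"MUC-01": {"service_charge": 180_000}},   # FM contract
    {"MUC-01": {"recharges": -40_000}},        # finance system
    {"MUC-01": {"property_tax": 95_000}},      # tax statement
    {"MUC-01": {"energy": 60_000}},            # utilities provider
)
print(stack["MUC-01"]["total_occupancy_cost"])  # -> 1495000
```

The hard part in production is not this merge but keeping the five feeds current — which is exactly why it needs to be a pipeline, not a spreadsheet exercise.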

Segmentation: comparing like with like

A flat portfolio ranking by cost/sqm or cost/employee is a starting point, but it conceals more than it reveals. Meaningful benchmarking requires segmentation:

  • By function — HQ offices, regional hubs, client-facing sites, back-office operations, labs, and warehouses have structurally different cost profiles. Comparing an HQ with a warehouse is uninformative.
  • By market tier — Prime CBD, secondary CBD, business park, suburban. Group sites by comparable market contexts before ranking.
  • By lease maturity — A lease signed in 2019 at pre-pandemic rates and a 2024 renewal in the same building represent different market conditions. Indexation, rent review mechanisms, and remaining incentive periods all affect the effective cost.
  • By space efficiency — A site at 8 sqm/employee and a site at 14 sqm/employee shouldn’t be compared on cost/sqm alone. The denser site is delivering more capacity per unit of cost, even if its nominal rent is higher.
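Mechanically, segmentation means grouping before ranking. A minimal sketch, with site records and segment keys assumed for illustration:

```python
from collections import defaultdict

def rank_within_segments(sites):
    """Group sites by (function, market tier), then rank by cost/sqm
    inside each segment -- never across segments."""
    segments = defaultdict(list)
    for site in sites:
        segments[(site["function"], site["tier"])].append(site)
    return {seg: sorted(group, key=lambda s: s["cost_per_sqm"], reverse=True)
            for seg, group in segments.items()}

# Illustrative records (names and figures assumed):
sites = [
    {"name": "LON-HQ", "function": "hq",  "tier": "prime",     "cost_per_sqm": 910},
    {"name": "DUB-02", "function": "hub", "tier": "secondary", "cost_per_sqm": 420},
    {"name": "MUC-01", "function": "hub", "tier": "secondary", "cost_per_sqm": 510},
]
ranked = rank_within_segments(sites)
print([s["name"] for s in ranked[("hub", "secondary")]])  # -> ['MUC-01', 'DUB-02']
```

LON-HQ never appears in the same ranking as the two hubs — which is the point: the HQ has no peer in this portfolio slice, so ranking it against them would mislead.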

The segmented view is where real portfolio intelligence lives. It answers: given the type of site, the market it’s in, and the business it serves, is this cost justified?

From benchmarking to action

A benchmark without a decision framework is a dashboard nobody uses. The output of a benchmarking exercise should feed directly into three workstreams:

  • Renegotiation pipeline — Sites significantly above market with upcoming break or expiry dates should be flagged for renegotiation or market testing. The variance-to-market metric gives you the business case.
  • Consolidation candidates — Sites with high cost/occupied desk and low utilisation are candidates for consolidation into neighbouring locations. The effective cost metric quantifies the prize.
  • Budget forecasting — The total remaining obligation and indexation schedules across the portfolio feed directly into finance’s forward cost model. When this is automated, the CFO gets a continuously updated view instead of a quarterly snapshot that’s already stale.
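The forward cost model in the last point is, at its core, compounding each lease under its indexation schedule. A minimal sketch assuming a fixed annual uplift (real CPI-linked clauses need the actual index series):

```python
def indexed_rent_forecast(current_annual_rent, annual_indexation, years):
    """Project annual rent forward under a fixed indexation assumption."""
    return [round(current_annual_rent * (1 + annual_indexation) ** y, 2)
            for y in range(1, years + 1)]

# A lease at an assumed EUR 500k with an assumed 3% annual uplift, next 3 years:
print(indexed_rent_forecast(500_000, 0.03, 3))  # -> [515000.0, 530450.0, 546363.5]
```

Summing these projections across every live lease gives finance the continuously updated forward view the text describes.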

Benchmarking 50 locations isn’t an analytical challenge — the maths is simple. It’s a data engineering challenge. The teams that solve it are the ones who treat cost normalisation as infrastructure, not as a quarterly project — and who connect the output directly to the decisions it should inform.