
Your double materiality questions answered: Insights from our expert webinar

12 min read

Double materiality assessments are often described as a structured process: identify ESRS topics, assess impacts, risks, and opportunities, and determine what is material.

In practice, however, things quickly become more complex. Questions arise around scope, scoring IROs consistently, linking dependencies to risks, and translating all of this into clear, defensible outcomes. It’s no surprise that many organizations are still figuring out how to approach DMA in a practical way in 2026. 

But if there’s one thing we’ve learned, it’s that double materiality becomes much more manageable when you see how it works in practice.

That’s why, for the eighth edition of our Making Sustainability Work webinar series, we hosted an interactive session in which, with the attendees’ help, one of Dazzle’s double materiality experts, Kathrin Jansen, conducted a live DMA using a realistic company example.

As always, the session was hands-on, with attendees raising practical questions throughout, as well as helping to build the DMA itself. 

In this blog, Kathrin has provided her own answers to all the questions asked during the webinar, organized into relevant categories for your convenience! So whether you’re starting your first DMA or refining your approach, we hope these answers bring clarity to your next steps.

You asked, we answered: Insights from our recent double materiality Q&A 

DMA process year-on-year

Q: I assume you don’t need to perform a full DMA every year. Do you have suggestions for how to refresh a DMA in the following years? For example, what should be done in the first year after the full DMA, and what should be done in the second and third years?

A: You’re right that a full DMA doesn’t need to be repeated every year. Here’s a practical approach:

Year 1: Conduct a light review. Verify that your stakeholder list is still valid. Check for significant changes in your business model or context (e.g., new markets, mergers and acquisitions, regulatory shifts). Validate that your top 5–10 IROs haven’t shifted dramatically.

Year 2: Conduct a mid-cycle review with targeted stakeholder consultations on IROs close to or right under the materiality threshold. Update the scoring if the context has changed. 

Year 3: A full refresh with new stakeholder engagement, updated scoring, and a full review of the IRO long list.

Reassess earlier if there are major business changes (e.g., acquisitions, new geographies, or new product lines), significant new regulations, or a sector-specific shock.

IRO scoring, methodology, and subjectivity

Q: Isn’t there a standard rule or theory on how to assign numbers to IROs or quantify any topics?

A: There is no universal standard for evaluating IROs. The ESRS provides a conceptual framework, including scale, scope, and irremediability for impact materiality, and likelihood and magnitude for financial materiality, but leaves the scoring methodology and threshold setting to each company.

The most important thing is internal consistency. Clearly define your scale (e.g., 1–5, 1–10, or qualitative tiers), document your rationale, and apply it consistently across all IROs. Calibration workshops, in which multiple experts independently score the same IRO, help reduce subjectivity.

Q: Is it not very subjective to assign these numbers? What is the concept?

A: Subjectivity is inherent and acknowledged in the ESRS framework. This is why the process and documentation are as important as the numbers. The idea is that you are making a structured, evidence-based judgment about relative significance. You can manage subjectivity by using multiple experts, calibration workshops, external data anchors (ILO, IPCC, and sector benchmarks), and stakeholder validation. Document your reasoning for key scores.

Q: Is there no globally accepted standard that can be used to calculate the risk?

A: There is no universally accepted formula for calculating risk magnitude in a DMA. In practice, most companies use one of the following approaches:

1) A simple probability × impact matrix: score the likelihood and magnitude on a defined scale (e.g., 1–5) and multiply them.

2) Scenario-based financial modeling: estimate financial exposure under different scenarios (e.g., what if this regulation applies, or this raw material becomes unavailable?).

3) Benchmarking against peers or sector data: use industry reports or ratings to anchor your estimates.

The key is not the formula itself, but rather, applying it consistently across all your IROs and documenting your reasoning. Ideally, you should define what “high magnitude” means for your company upfront, anchoring it in your own financial data (e.g., as a percentage of revenue or EBITDA).
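To make the first approach concrete, here is a minimal sketch of a probability × impact score. The 1–5 scale and the threshold of 10 are purely illustrative, company-specific choices, not an ESRS-prescribed formula:

```python
# Illustrative sketch only: a simple probability x impact risk score on an
# assumed 1-5 scale for both factors. Scale and threshold are company choices.

def risk_score(likelihood: int, magnitude: int) -> int:
    """Multiply likelihood (1-5) by financial magnitude (1-5)."""
    for value in (likelihood, magnitude):
        if not 1 <= value <= 5:
            raise ValueError("scores must be on the defined 1-5 scale")
    return likelihood * magnitude

# Hypothetical threshold: treat scores of 10 or more as material.
MATERIALITY_THRESHOLD = 10

score = risk_score(likelihood=4, magnitude=3)   # 4 x 3 = 12
is_material = score >= MATERIALITY_THRESHOLD    # True under this threshold
```

Whatever scale you pick, the point is that the same function and the same threshold are applied to every IRO, so the relative ranking is consistent and auditable.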

Q: Have you also applied the new option (according to the amended ESRS 2.0) to use a single “severity” factor (including scale, scope, and irremediability) multiplied by probability? If so, was it significantly more efficient?

A: Yes, EFRAG’s simplification allows you to combine scale, scope, and irremediability into a single severity assessment, which is significantly more efficient when dealing with many IROs. The trade-off is less granularity. I suggest using the single-factor approach for the initial screening of a large IRO longlist and applying the three-factor approach to shortlisted material topics where more detail is needed.

Q: Why are E4, E1, and E2 treated differently? For example, why are scale, scope, and irremediability used for E4, whereas E2 uses scale, scope, irremediability, and likelihood?

A: The difference isn’t between the topics E4 and E2; both are assessed using the same ESRS framework. The type of IRO drives the difference. Actual negative impacts are assessed based on severity alone (scale, scope, and irremediability) because the impact is already occurring, so likelihood is irrelevant. Likelihood is added to the severity assessment for potential negative impacts. In the example shown, the E4 IRO was an actual impact, while the E2 IRO was a potential one, which is why the formulas looked different.

Q: How do you know which scoring approach to use for each topic? Or are these just four different examples that you could apply?

A: Depending on the type of IRO, the ESRS prescribes a methodology. Actual negative impacts are assessed based on severity alone (scale, scope, and irremediability). Potential negative impacts are assessed based on severity and likelihood. Financial IROs use financial scale multiplied by likelihood. You develop the scoring scale that you apply consistently across all topics. For example, you decide whether to use a scale of 1–5 or 1–10, and how to calibrate it internally. 

Financial materiality and finance inputs

Q: How do you define, determine, calculate, or justify the financial impact amount during stages 3–4?

A: Although there is no required formula, here are the approaches I most commonly use:

  • Revenue at risk: What percentage of revenue could be affected if this risk materializes?
  • Cost exposure: Regulatory fines, remediation costs, litigation costs, and insurance premiums.
  • Cost of capital/financing: Could this topic affect your ESG rating or access to financing?
  • Operational disruption costs: Supply chain interruption and production losses.
  • Stranded asset risk: This is especially relevant for climate topics.

Work with the finance team to base estimates on real business data. Aim for a well-reasoned order-of-magnitude estimate rather than an unsubstantiated exact number.
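One way to anchor “high magnitude” in your own financial data, as suggested above, is to map estimated exposure as a share of revenue onto your magnitude scale. The band boundaries below are hypothetical and should be calibrated with your finance team:

```python
# Illustrative only: mapping estimated financial exposure, as a share of
# revenue, onto a 1-5 magnitude score. Band boundaries are hypothetical.

BANDS = [
    (0.0001, 1),   # < 0.01% of revenue -> negligible
    (0.001, 2),    # < 0.1%             -> low
    (0.01, 3),     # < 1%               -> moderate
    (0.05, 4),     # < 5%               -> high
]

def magnitude_band(exposure: float, revenue: float) -> int:
    """Map an order-of-magnitude exposure estimate to a 1-5 score."""
    ratio = exposure / revenue
    for ceiling, band in BANDS:
        if ratio < ceiling:
            return band
    return 5  # >= 5% of revenue -> very high

# e.g. a 2m exposure against 500m revenue is 0.4% of revenue -> band 3
```

Note that this only needs order-of-magnitude inputs: whether the exposure is 0.4% or 0.6% of revenue rarely changes the band, which is consistent with aiming for a well-reasoned estimate rather than a false-precision number.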

Q: Which figures do you usually need from the finance team?

A: The most important financial inputs for a DMA are:

  • Your key products/services and revenue breakdown: How does your company actually make money? This information is critical for the assessment.
  • Raw material costs and purchasing volumes (procurement may have these numbers): What do you buy, where do you buy it from, and how much?
  • Significant fluctuations in raw material prices, which can signal supply chain vulnerability.
  • Revenue by segment, geography, and product line to estimate revenue at risk.
  • Existing provisions, such as environmental liabilities, legal disputes, and remediation costs.
  • Capex and opex plans, especially for transition costs.
  • Financing conditions, including any sustainability-linked loan terms where ESG performance affects your interest rate.

The goal is to understand where the business is most exposed. A risk that affects your best-selling product or your most critical raw material has a far greater financial impact than one that affects something on the periphery.

Stakeholder engagement and governance

Q: In this approach, where do you include the inputs from external stakeholders?

A: Stakeholder input can be incorporated into every stage of the DMA, not just one fixed point.

For example, stakeholders can help with IRO identification by surfacing impacts or risks that internal teams might overlook. This can occur through formal surveys, workshops, or informal conversations with customers, suppliers, employees, and community representatives.

For scoring and assessment, their perspective on severity is important: How significantly are people actually affected? For example, a procurement manager may score a supplier risk differently than the workers in that supply chain.

Once material topics are identified, stakeholder input is invaluable for developing realistic and effective responses.

The format is flexible and can include structured interviews, online surveys, focus groups, or documented conversations. The important thing is that the input is captured, traceable, and visibly reflected in your conclusions. ESRS 2 requires you to disclose who you engaged with, how you engaged with them, and what changed as a result.

Q: When determining the scores, do you consult with the client? Who has the final say? Does anyone higher up ever override the score, given that the process can sometimes be somewhat subjective?

A: In practice, I usually develop the IROs and scoring either collaboratively from the start or by making proposals that the company then reviews and refines. Subject matter experts in procurement, operations, or finance often provide crucial knowledge that significantly improves the assessment. Their input is an essential contribution, not an ‘override.’

I recommend that every IRO be assessed by at least two people to reduce subjectivity and improve quality.

Ultimately, the company owns the DMA. They make the final decisions and accept responsibility, even in the face of an auditor. My role is to guide the process and provide methodological expertise. However, the scores must reflect the company’s own informed judgment.

Q: What happens if many of the decisions lead to new suppliers, but this negatively impacts the P&L and risks making the company financially unsustainable? These situations can be difficult to explain to boards.

A: The DMA is, at its core, an analysis tool. It makes IROs visible and helps prioritize them. It doesn’t automatically require immediate action.

When a human rights issue is identified, the first step is not to switch suppliers, but rather to define concrete improvement measures with the supplier. What needs to change? By when? And how will progress be measured? These targets are set and monitored. Disengagement only becomes a consideration if those measures consistently fail to deliver results.

In practice, this process involves identifying, prioritizing, defining measures and targets, monitoring, and reassessing. Supplier relationships aren’t ended lightly, not least because doing so can cause harm to the workers involved.

The P&L concern is real but usually only becomes acute if you skip the improvement phase and switch suppliers immediately, which is rarely the right initial response anyway.

Data collection and supply chain engagement

Q: How do you suggest we should improve engagement when suppliers within the supply chain are slow to reply, or provide evidence that is not robust enough?

A: Here are a few tactics that help:

  • Reach out in person where possible. A personal call can be very effective, and it makes requests harder to ignore.
  • Set clear expectations upfront. Include response requirements in supplier contracts or codes of conduct.
  • Simplify the request by providing pre-filled templates that show exactly what evidence is needed.
  • Take a tiered approach: focus intensive engagement on high-risk suppliers.
  • Offer support. Workshops or templates can significantly improve response quality.
  • Use industry platforms. Tools like EcoVadis and SEDEX distribute the questionnaire workload among buyers.

Q: In cases where a company has a long value chain, does not purchase raw materials directly (only components), or does not engage with end users (e.g. its product is integrated into another product), how should impacts be assessed? There is a general understanding that issues such as human rights, working conditions, and environmental impacts (e.g. pollution affecting local communities near extraction sites) are highly relevant in the value chain. However, companies often have limited direct influence over these areas. In such cases, should value chain impacts be assessed at the same level as those directly linked to the company’s own operations?

A: Value chain impacts must be assessed, and companies are often better placed to do so than they realise. If you purchase components, you know what they contain, how they are manufactured, and where they come from. This is the starting point. From there, you should ask your direct suppliers for data about their own supply chains.

The key principle is that limited direct influence does not mean zero responsibility. The UK High Court’s judgment on the Dyson case in January 2026 is a powerful illustration of this principle. Dyson was held civilly liable for human rights violations in their supply chain, even though the affected workers were not directly employed by Dyson or a first-tier supplier. ‘We didn’t have a direct relationship’ was not accepted as a defence.

Practically speaking:

  • Map your components and their origins.
  • Ask your direct suppliers for data and certifications.
  • Use sector and country risk data to fill in the gaps.
  • Assess indirect impacts proportionally, but don’t ignore them.
  • The assessment does not need to be as thorough as that for your own operations, but it must exist and be documented.

Tools, templates, and DMA infrastructure

Q: Do you have experience with specialized DMA software solutions such as the Materiality Master or others?

A: Yes, I have experience working with different tools and solutions. Key things to evaluate:

  • Scoring system: What kind of scoring system is used? Is it flexible? Can you adapt it? Does the tool reflect the ESRS structure?
  • Audit trail: Can you document your methodology to support assurance?
  • Stakeholder engagement: Does the tool support survey distribution and response tracking?

In my opinion, for a first DMA, many companies are better off with a well-structured Excel setup, considering that many tools come with hefty price tags and unnecessary functions. 

Q: Is there a template available to download to complete a DMA online?

A: There isn’t a free, ready-to-use template out there. Most practical templates are available through a paid tool or consultancy. EFRAG publishes methodological guidance on its website, which can serve as a helpful reference.

If you need a template to get started, just reach out to Dazzle. I’ll be happy to help you build one that fits your company.

Visualization and reporting

Q: We have an issue where a few IROs rank far ahead of the others, which pushes many still-important risks far behind and distorts the overall picture. Could a logarithmic scale be a viable option, or are there other methods to better align the impact and financial ratings?

A: Logarithmic scales are a valid option for improving readability when there are a few dominant outliers. Other approaches include:

  • Threshold/zone-based visualization: Define zones (“high,” “very high,” and “critical”) and present IROs within bands.
  • Separate matrices: Create one matrix for the top 10 material IROs and another for all others.

The key principle is that the underlying methodology should remain consistent. The log scale is merely a display choice and does not change your scoring.
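A quick sketch of why a log scale is safe as a display choice: the transform compresses the gap between a few dominant IROs and the rest, but it never changes the underlying scores or their ranking. The topic names and scores below are made up for illustration:

```python
# Display-only transform: a log scale compresses dominant outliers without
# changing the underlying scores or their ordering. Scores are hypothetical.
import math

scores = {
    "Climate change": 95.0,   # dominant outlier
    "Pollution": 12.0,
    "Own workforce": 9.0,
    "Biodiversity": 3.0,
}

# Log-transformed values used only for plotting; the scoring is untouched.
display = {topic: math.log10(s) for topic, s in scores.items()}

ranked_raw = sorted(scores, key=scores.get, reverse=True)
ranked_log = sorted(display, key=display.get, reverse=True)
assert ranked_raw == ranked_log  # ordering preserved; only spacing changes
```

On the raw scale, climate change sits roughly eight times further out than pollution; on the log scale the gap shrinks to well under a factor of two, so the mid-ranked IROs become readable again without touching the methodology.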

Making double materiality work in practice

Conducting a double materiality assessment often raises practical questions, from scoring methodologies and financial quantification to supplier engagement and value chain impacts. As Kathrin’s demonstration showed, these challenges are common, and they are manageable with the right structure.

The ESRS framework provides guidance, but many methodological choices remain company-specific. What ultimately makes a DMA robust is not a single formula, but a consistent approach, clear assumptions, and well-documented decisions.

It is also important to remember that a DMA is not static. Regular reviews between full refreshes help ensure the assessment remains aligned with changes in the business, the value chain, and the wider regulatory landscape.

When approached this way, a DMA becomes more than a compliance requirement. It provides a structured way to understand where the most significant impacts, risks, and opportunities lie, and where action will matter most.

If you’re currently working through your own DMA and would value more hands-on, personalized guidance, Kathrin and Dazzle’s other double materiality experts are ready to support you.

Reach out today, and within 48 hours we can connect you with a specialist who can help you move forward with confidence.
