Evaluating Interreg: t33's methodological approach across four transnational programmes


Written by Pietro Celotti
Published 20 March 2026

Transnational cooperation programmes are complex instruments in EU cohesion policy. They span multiple countries, governance layers, and thematic priorities – and evaluating them requires methods which are both analytically rigorous and contextually sensitive. Over the past year, t33 has been engaged together with Spatial Foresight in evaluation assignments covering four Interreg transnational programmes: Interreg CENTRAL EUROPE, Interreg Baltic Sea Region, Interreg North-West Europe, and the Interreg NEXT Black Sea Basin Programme. Together, these contracts reflect a consistent and evolving approach to programme evaluation – one that combines methodological breadth with a practical focus on what programmes and managing authorities actually need to know.

A portfolio built on recurring methodological questions

Across these four assignments, certain analytical questions recur as genuine challenges for transnational Interreg programmes in the 2021–2027 period – questions that t33 has explicitly placed at the centre of its evaluation approach. How are Simplified Cost Options (SCOs) changing the experience of beneficiaries and the workload of programme bodies? Do project partners genuinely understand the new Interreg output and result indicators, or are they applying them mechanically? What factors enable or hinder the involvement of newcomer organisations? And how effectively do programmes communicate – not just for visibility, but as a tool for implementation support and knowledge transfer?

These questions are not merely evaluative: they carry direct implications for programme management decisions, for the design of future calls, and for the post-2027 policy debate. Embedding them at the core of evaluation frameworks ensures that findings are actionable, not only descriptive.

Mixed methods as standard practice

All four evaluations employ mixed-method designs combining quantitative and qualitative tools. In transnational programme evaluation, neither approach is sufficient on its own. Quantitative methods, such as indicator data and financial implementation statistics, provide an objective baseline, but they rarely explain why results have or have not been achieved. Qualitative methods – structured interviews, focus groups, case studies – are essential for capturing the perspectives of managing authorities, joint secretariats, monitoring committee members, national contact points, and beneficiaries, and for explaining the institutional dynamics that shape programme performance. Ultimately, triangulation is what allows evaluators to move from describing what happened to understanding why.

The operational evaluation of Interreg CENTRAL EUROPE, coordinated by t33 and co-led by Pietro Celotti and Dea Hrelja, offers a clear illustration of how triangulation works in practice. Beneficiary survey data showed that 91% of applicants found SCOs clear and 80% confirmed they had simplified budget preparation – a strong quantitative signal of successful simplification. Yet interviews with National Controllers revealed that some auditors at national level continued to expect documentary evidence as if real costs were being verified, creating friction in the transition to the new system. The qualitative layer did not contradict the quantitative findings – it explained their limits and identified a structural tension that survey data alone could not have surfaced.

Evaluating what is hard to measure

Some of the most policy-relevant questions in Interreg evaluation concern dimensions where standard programme monitoring data provides only partial answers – and where the combination of methods becomes most consequential.

The involvement of newcomers is a case in point. Tracking whether an organisation is new to a programme is straightforward; understanding why newcomers engage, where they are geographically concentrated, and what structural factors enable or prevent their participation requires a more layered approach. In the CENTRAL EUROPE evaluation, combining geographical analysis of newcomer distribution at NUTS 3 level with beneficiary interviews revealed that metropolitan institutional ecosystems play a decisive role in channelling new actors into the programme, and that the small-scale project format introduced in Call 3 can partially offset this concentration bias – enabling organisations from less represented and rural regions to participate for the first time.

A second example concerns the Interreg indicator framework. Quantitative monitoring can track whether output targets are being met – but it cannot detect whether beneficiaries fully understand what they are measuring. In the CENTRAL EUROPE evaluation, the beneficiary survey and interviews revealed that jointly developed solutions (RCO 116) were often understood by project partners as summaries of pilot results. Similarly, beneficiaries frequently treated ‘uptake’ and ‘upscaling’ as interchangeable when reporting on RCR 104. These conceptual gaps are invisible in the monitoring data – they only emerge through qualitative inquiry, and they have direct consequences for the quality and evaluability of reported results.

Communication presents another example. Programme monitoring typically captures outputs – website traffic, event attendance, social media reach. Assessing whether communication functions strategically – supporting applicants in developing stronger proposals, attracting newcomers, enabling capitalisation of results, or strengthening the capacity of national contact points – requires connecting these outputs to implementation processes and stakeholder perceptions, through a combination of user feedback analysis, interviews, and case-based assessment.

Scope and coverage of the current portfolio

The four assignments cover different geographies, governance structures, and evaluation purposes.

The operational evaluation of Interreg CENTRAL EUROPE – completed in December 2025, coordinated by t33 in collaboration with Spatial Foresight, and co-led by Pietro Celotti and Dea Hrelja – covered the full spectrum of programme management and implementation, from application and selection processes to partner involvement, monitoring, communication, and coordination with other programmes.

The mid-term performance evaluation of Interreg Baltic Sea Region – led by Spatial Foresight in consortium with t33, with Pietro Celotti leading t33's efforts – focuses on the relevance of thematic priorities, the long-term effects of 2014–2020 interventions, and the programme's new features for projects, with a strong emphasis on the relationship between programme management and the EU Strategy for the Baltic Sea Region.

The evaluation of Interreg North-West Europe (Phase 1), awarded to a Spatial Foresight–t33 consortium with Dea Hrelja coordinating t33's contribution, concentrated on the application process and programme relevance, drawing on stakeholder interviews and desk and geographical analysis.

The operational evaluation of the Interreg NEXT Black Sea Basin Programme, coordinated by t33 with Rebeca Nistor as evaluation coordinator, extends the methodological framework to a neighbourhood cooperation context, adding dimensions of administrative capacity, external relations, and cross-border governance that are specific to NEXT programmes.

What this means in practice

Running concurrent evaluations across programmes with different regulatory frameworks, partner country compositions, and thematic focuses is methodologically demanding. It requires the ability to adapt core evaluation frameworks to specific programme logics while maintaining comparability where relevant. It also creates opportunities: insights from one programme can sharpen the analytical questions posed in another, and patterns that emerge across multiple evaluations carry stronger evidential weight than findings from a single assignment.

For t33, this portfolio represents both a commitment and a body of practice – one that continues to develop with each evaluation cycle.
