Article

Evaluating ‘Health in All Policies’ in Norwegian Municipalities

by
Ellen Strøm Synnevåg
Faculty of Social Science and History, Volda University College, 6101 Volda, Norway
Submission received: 11 April 2022 / Revised: 5 June 2022 / Accepted: 6 June 2022 / Published: 10 June 2022
(This article belongs to the Special Issue The Role of Municipalities in Health Promotion)

Abstract
The Health in All Policies (HiAP) approach has emerged as a central strategy for promoting health at local, regional, and state levels in different countries. HiAP represents a complex and complicated strategy, yet evaluations of HiAP at the local level are scarce, and scholars call for more knowledge and critical discussions regarding how to evaluate at this level. In this conceptual paper, I discuss how summative and formative evaluation approaches might supplement each other when evaluating HiAP at the local level. First, I discuss the possibilities of using summative and formative evaluation of HiAP. Further, I discuss how formative-dialogue research might open possibilities for evaluation by combining the two approaches. Finally, I ask whether there has been a shift in the Norwegian evaluation discourse, from the promotion of summative evaluation alone to a combination of summative and formative methods.

1. Introduction

Mindsets for addressing public health have shifted from stressing the health sector’s contribution alone to promoting collaboration between policy fields. With collaboration stated as the new imperative for health and wellbeing [1], the Health in All Policies (HiAP) approach has emerged as a central strategy for promoting health at local, regional, and state levels [2]. HiAP systematically considers the health implications of policy decisions made across sectors at all levels. This acknowledges public health and health equity as whole-of-government responsibilities, recognizing public health as a concern for, and responsibility of, all municipal sectors or departments [3].
In Norway, the HiAP approach is one of five principles guiding the Norwegian Public Health Act (NPHA) introduced in 2012 [4]. Planning has been highlighted as an important tool for implementing HiAP. During the last decade, planning has been central to implementing public-health policies in Norway, and the NPHA is closely connected to the Planning and Building Act [5], which represents a central government tool for regional development. Using planning structures and processes that already exist within the municipalities, the NPHA links public-health concerns to the routines for knowledge-based practice and to overall administrative and political decision-making processes across departments [6]. When using planning as a tool for implementing HiAP, Norwegian municipalities are required to follow the steps of the planning circle shown in Figure 1 [7]. They should produce health overviews (§ 5) containing insights into the status of local health and the local determinants of health. This overview should form the basis for planning strategies (§ 6) and the development of public-health goals in strategic and operative plans (§ 6), which in turn guide the implementation of public-health measures (§§ 4 & 7). Measures are to be evaluated (§§ 30 & 5), and these evaluations would, in turn, establish a foundation for further rounds of health overviews (see Figure 1 and its English translation).
Internationally, there is a growing amount of research on the implementation of HiAP in general (see for example [8,9,10,11,12]). However, even though HiAP is a much-used strategy, evaluations of HiAP at the local level are scarce, and scholars are calling for more knowledge and critical discussions regarding how to evaluate at this level [13,14,15,16,17]. Several scholars focus on the challenges of evaluating HiAP. Based on Rogers’ [18] classification, Baum et al. [14] claim that the Health in All Policies approach is both complicated and complex. It is complicated because of its many different actors at different levels and in different sectors with a variety of working methods [13], and because of its many alternative causal relationships [14]. Furthermore, Health in All Policies is complex because cause-and-effect relationships are not linear; that is, the objectives and methods change and arise during the process of implementation. In addition, this takes place in a context that is in a constant state of flux [14]. According to Rogers [18], this is the most challenging form of evaluation, one that aims to evaluate measures or programs that are both complex and complicated.
In this conceptual paper, I discuss the evaluation of HiAP at the local level, using the methodological approaches of ‘summative evaluation’ and ‘formative evaluation’. I ask the following question: How can summative and formative evaluation approaches supplement each other when evaluating HiAP at the local level? This overall question breaks down into the following three sub-questions: 1. What are the possibilities of using summative and formative evaluation? 2. How can formative-dialogue research contribute to the evaluation of HiAP? 3. Has there been a shift in the Norwegian discourse regarding methods of evaluating HiAP? I discuss these questions based on the HiAP literature in general and the results from two research projects in particular. One study investigates how Norwegian municipalities use planning as a tool for implementing HiAP [19,20,21]. The other evaluates ‘The Local Environment Project (LEP)’, run by the Norwegian Directorate of Health in the period 2015–2019 [22].

2. What Are the Possibilities of Summative and Formative Evaluation of HiAP?

2.1. Evaluation of Output and Process

Summative and formative evaluations [23] are often used to categorize different methodological perspectives on evaluation. Summative evaluations aim to assess the quality and value of a program or measure, based on its achievement. One aim is to measure whether the strategy has had any effect [24]. In summative evaluations, a critical distance is expected [25]. The aim is to observe and assess more objectively and passively. The point of departure is often the result, the product, or the output. Summative evaluations are often carried out at the end, when the measure, program, or process has been completed [25,26,27].
Scholars call for more knowledge linking intersectoral collaborations to population health outcomes [28,29]. The complexity and long-term aspects of Health in All Policies make it difficult to evaluate the strategy’s effect on outcomes; that is, to what extent the HiAP approach does in fact promote health, which after all is the ultimate objective [13,14,30,31]. Furthermore, some believe that it is problematic to evaluate HiAP as a sub-goal because we still lack sufficient knowledge about what contributes to a successful implementation. For example, Lawless et al. [31] recommend a theory-based evaluation to tackle this uncertainty. In this type of evaluation, goals and sub-goals are established, each in turn representing a theory about cause and effect. If a sub-goal (output) is achieved, this forms a basis for evaluating whether the main objective (outcome) has been accomplished [32]. For example, Baum et al. [33] used a program theory model to demonstrate how HiAP created conditions for progress towards health improvements and to identify why little progress was made on equity goals. Furthermore, Such et al. [29] present the causal loop diagram as a guide to constructing a better picture of how collaboration on health may be linked to health outcomes. By identifying internal indicators of progress for HiAP in organizations such as municipalities, these determinants can be established as indicators of further progress towards the outcomes of improved health and reduced social inequality [31]. In Norwegian municipalities using planning as a tool for implementing HiAP, outputs available for summative evaluation might include, for example, the production of health-overview documents or the writing of public-health plans. Outputs could also take the form of specific public-health measures, such as constructing bike lanes, introducing social meeting points, or controlling water quality [7]. The evaluation of such outputs gives valuable information about sub-goals, forming a basis for evaluating outcomes [28]. An overview of Norwegian research [34] (p. 21) shows that much of the earlier research on the implementation of the NPHA has measured such outputs, for example plans, health overviews, the establishment of cross-sectoral groups, etc. However, less research has investigated the more relational or processual aspects of HiAP implementation. Given the complexity of HiAP, scholars stress the need for evaluations that provide information about the ongoing processes, not only about outputs and products [13,14,16,30,31,35].
Formative evaluations intend to improve or strengthen a programme, a strategy, or a measure [23]. Formative evaluations are to a large extent concerned with evaluating the process and with continually evaluating this process as it unfolds [26,27,36]. Unknown factors can be important for understanding why the objective has been achieved, or not [26,27,37]. In our earlier research on how Norwegian municipalities use planning as a tool for implementing HiAP, we found that implementing HiAP was a complex process in several ways throughout the different steps of the planning circle (Figure 1) [19,20,21]. This complexity complicates evaluation. For example, when making health overviews, informants experienced a bias towards quantitative knowledge forms and against qualitative knowledge, which influenced their knowledge foundation for planning [19]. Further, when producing public-health goals, informants experienced that plans do not always govern actions [19] and do not necessarily represent a common understanding of their policies [20]. As another example, our informants experienced a risk of health imperialism when implementing HiAP, which could result in distrust of and opposition to the implementation process [20,21].
Imagining a more instrumental and summative measurement [23] of the planning circle (Figure 1) allows each of the legal requirements to be seen and evaluated as an output or planning product. By using summative evaluation, one might reveal that a municipality has completed an overview document providing knowledge about health conditions and determinants (§ 5), that it has integrated this knowledge in its planning strategy (§ 6), and that it has drafted a public-health plan including public-health goals (§ 6). Finally, a summative evaluation might reveal that public-health measures are conducted in various sectors (§§ 4, 7) and that an internal control system has been established (§§ 30, 5) [7]. By measuring outputs, one might conclude that a successful implementation of HiAP has been achieved in accordance with the requirements laid down by law. However, what if the knowledge in the overview does not include valuable qualitative knowledge from different sectors [19]? Or what if the public-health plan ends up in an office drawer, unused, lacking ownership and legitimacy [19]? And what if the planning processes lead to opposition to intersectoral collaboration because of HiAP’s tendency towards ‘health imperialism’ [20,21]? Experiences like these could be important and useful in an evaluation of the implementation of HiAP. They might, however, be difficult to register in an assessment that evaluates only products.
According to Baklien [26,37], the distinction between product and process is crucial, and evaluations should not focus on products alone. However, she claims that a poor process can still result in a good product, and vice versa. Our research on the use of planning as a tool for implementing HiAP in Norwegian municipalities does not only show the complexity of the implementation processes. Our results further show that regulating products such as planning documents, health overviews, reporting systems, etc., are essential for gaining legitimacy for HiAP in the municipal institutions [19]. In line with a theory-based evaluation [31] measuring sub-goals, these outputs or products would therefore be essential to evaluate, reflecting the need to evaluate both product and process in order to find a complete answer.

2.2. Internal and External Evaluators

Summative evaluations often use external evaluators, such as persons or organizations with professional expertise in evaluation. External evaluation is often synonymous with evaluation research [32,37]. In summative evaluations, a critical distance to the measure and to what is being evaluated is expected [25]. One should observe and assess more objectively, without influencing the measure or the evaluation along the way. The use of external evaluators makes knowledge more transferable to other situations [37]. Formative evaluations, on the other hand, often value internal evaluators: someone participating in the process or programme that is being evaluated [26,27,36,37]. Formative evaluation aims to shape what is being evaluated by facilitating learning and self-development amongst the actors involved. According to Patton [27], learning is the main purpose of evaluation. In formative evaluation, the evaluator has a participatory or active role and can act as instructor, supervisor, supporter, advisor, and resource person [24].
Internal evaluators may possess important internal knowledge needed to understand what is happening in the specific context and why [36,38]. According to Baklien [26,37] and Weiss [32], municipalities that conduct evaluations themselves benefit by learning from doing so. It provides them with methodological knowledge and initiates valuable reflection processes that can lead to self-development and change.
With the characteristics of Health in All Policies in mind, scholars point to the difficulty of evaluating HiAP with the help of more instrumental, objective, and linear models only [30,35]. Further, scholars argue that it is also necessary to have evaluations that provide information about the ongoing processes and that use internal evaluators with clear insight into the specific context [13,14,16,30,31]. However, internal evaluators can be ‘near-sighted’ and may overlook structural grounds or explanations for what is happening [26]. Among the great advantages of external evaluation are therefore the outside perspective it offers and the access to methodological knowledge that researchers possess [36]. Several scholars argue that internal and external assessments complement each other, suggesting a combination of the two [24,26,36,37].

3. How Can Formative-Dialogue Research Contribute When Evaluating HiAP?

Formative-dialogue research, or follow-up research, is a method of evaluation that combines summative and formative approaches [37,38]. Formative-dialogue research is often used when the aim is to develop best practice, and especially when the purpose is to promote welfare. A researcher is then recruited to help further develop a measure or programme and to maximize the chances of achieving its anticipated values and benefits. According to Sverdrup [24], the formative-dialogue researcher has both proximity and distance to what is being evaluated, in the twin role of participant and observer. As an external researcher, one gains a critical distance and an outside perspective. However, by being present and participating actively in constructive dialogue and guidance, the researcher is close to the actors and their work. These discussions and reflections together with the researcher can also give the participants a renewed understanding of their own situation, and thus create change and development [39]. However, the use of external evaluators in formative-dialogue research might be time consuming and expensive, and therefore hard for municipalities to prioritize [37]. According to Baum et al. [33], research partnerships between health-promotion actors and researchers can be challenging; however, if done well, they might enable a critical examination of practice. In particular, by using program theory, researchers might extrapolate from short-term achievements or outputs to outcomes.
One example of formative-dialogue research is our evaluation of ‘The Local Environment Project (LEP)’, run by the Norwegian Directorate of Health in the period 2015–2019 [22]. We evaluated the project by following it as it progressed, and we supported the municipalities in their efforts to implement HiAP by producing health overviews, focusing on the collection of qualitative information through participatory processes. Eight county authorities, 41 municipalities, and 8 research and teaching institutions participated in the project. Although we evaluated the overall project, the research and teaching institutions filled the role of regional evaluators, each for their own county and its respective municipalities [22]. By following the process, we observed the municipalities’ and counties’ challenges and resources as the project progressed. At the same time, we measured and counted the different participation methods used in the project, evaluating the outputs. The evaluation of the outputs supplemented the evaluation of the processes and was important for evaluating the results of the project [22].
By holding presentations at project conferences, discussing with participants, and writing reports, we gave feedback and suggestions to the municipalities and counties, which helped them reflect on and adjust their actions during the project period. For example, early in the project period we observed a tendency amongst municipalities to pay consultants to conduct participation processes instead of carrying them out themselves [22]. By discussing the use of consultants at conferences and in reports during the project period, we addressed both possibilities and challenges, stressing the need to debate the aim of their actions and to discuss the arguments for their choices. As a result of such discussions, some municipalities changed their use of consultants dramatically. By being close to the municipalities’ processes, we could easily facilitate dialogue and reflection processes, which promoted self-development and change [22]. At the same time, by being external and not taking part in the implementation in practice, we maintained a certain distance as researchers. As external evaluators in the Local Environment Project, we had the perspectives necessary to supplement the participants’ views, in addition to the methodological knowledge needed [22].
One of the aims of the LEP was to build expertise in collaboration with teaching and research environments. To begin with, the cooperation was dominated by contributions from the mentoring teaching institutions. However, as the project period progressed, the municipalities, county authorities, and teaching institutions became more conscious of the fact that the expertise-building, learning, and development were mutual. The teaching institutions also increased their expertise in several ways [22]. For example, they developed new perspectives on their role as evaluators, as well as on public health and local environments. The formative-dialogue research project thus produced a «bonus prize» whereby learning and development became mutual benefits of this collaboration [22].
According to Sverdrup [24], formative-dialogue research is particularly well suited to programmes that are elaborate and complex, or where the implementation does not develop as anticipated. Due to the complex and complicated processes of HiAP [14], formative-dialogue research could therefore offer possibilities when evaluating these policies. This method helps to secure an outside perspective and methodological expertise, while at the same time conserving internal knowledge by promoting self-development and change and by closely following the participants and processes over time.

4. Is There a Shift in the Evaluation Discourse?

Since the implementation of the Norwegian Public Health Act [4], municipalities have been expected to evaluate their public-health policies and measures. However, I argue that since the implementation started, there seems to have been a shift in how the national government conveys evaluation and how it guides municipalities in performing evaluations. The Public Health Act, and the guidelines and reports guiding the municipalities’ implementation and evaluation of HiAP, seemed, until recently, to promote a rather instrumental and summative understanding of evaluation. In the Public Health Act from 2012 [4], evaluation is referred to as documentation and internal control in paragraphs § 30 and § 5. In the circle diagram (Figure 1), presented by the Norwegian Directorate of Health, evaluation is presented as something that generally takes place at the end [7]. Furthermore, it mainly emphasizes the evaluation of public-health measures, not the implementation of the circle as a whole, the implementation of HiAP, or the planning process in general [7]. Similarly, a report from the Office of the Auditor General [40] focused on the municipalities’ evaluation of public-health measures and their use of measures with a documented effect. In addition, to follow up on the implementation of the Public Health Act, a Centre for the evaluation of public-health measures was established at the Norwegian Institute of Public Health in 2018 to safeguard a knowledge-based practice of public health. Its priorities for evaluations are 1. «effect evaluations of specific measures» and 2. «studies based on natural experiments» [41]. In line with the discussion in this paper, these priorities align with summative methods of evaluation [14,18]. According to Green, Cross, Woodall and Tones [42], evaluation research within the field of public health has long been characterized by evidence-based practice, where knowledge about the effect of public-health measures has been considered important and perhaps the correct way to view evaluation. However, reading the more recent publications of the Norwegian Directorate of Health, we find a different discourse on evaluation. On the website presenting its guidelines on systematic public-health work, it now presents evaluation as «learning and improvement» [43] instead of documentation and internal control. In addition, KS (the Norwegian Association of Local and Regional Authorities), in cooperation with the Norwegian Directorate of Health, published a handbook on evaluation in 2018 [37]. Here, formative evaluation is clearly the dominant discourse, supplemented by summative aspects.
It seems to me that the Directorate of Health has revised its standpoint on evaluation. When it introduced the Public Health Act in 2012, its ambitions for evaluation seemed to be set within a somewhat summative-evaluation discourse. However, this appears to have become more nuanced. The Directorate of Health now uses more formative-evaluation perspectives in its guidelines, promoting a combination of summative and formative evaluation rather than summative evaluation alone.

5. Conclusions

In line with the continual development of and shifts in views of health and public health, methods for planning, implementation, and evaluation need to be debated. In this paper, I aim to contribute to these academic discussions by focusing on the complexity of implementing public-health policies and by highlighting how this complexity influences methods for evaluation. I argue that summative or instrumental methods of evaluation seem to have dominated the evaluation discourse regarding HiAP implementation in the Norwegian context for some time. However, both regulations from the national level and research now seem to be more nuanced. In light of the discussions in this paper, I argue that there is a risk of missing significant information about the implementation process when using summative evaluation methods alone to evaluate complex policies such as HiAP. Furthermore, I argue that formative-dialogue research might present several possibilities for evaluation: first, by offering a method suitable for understanding the HiAP implementation process, supplementing summative evaluations of outputs; and second, by occupying an intermediary position between internal and external evaluation, providing outside perspectives, and facilitating mutual processes of learning, development, and change.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Kickbusch, I.; Gleicher, D. Governance for Health in the 21st Century; WHO Regional Office for Europe: Copenhagen, Denmark, 2012. [Google Scholar]
  2. Ståhl, T.; Wismar, M.; Ollila, E.; Osterberg, E.; Perttila, K.; Sihto, M.; Kauppinen, T. Health in All Policies: Prospects and Potentials; Ministry of Social Affairs and Health: Helsinki, Finland, 2006. [Google Scholar]
  3. World Health Organization. The Helsinki Statement on Health in All Policies; WHO: Helsinki, Finland, 2013. [Google Scholar]
  4. Public Health Act. LOV-2011-06-24-29. 2012. Available online: https://lovdata.no/dokument/NL/lov/2011-06-24-29 (accessed on 10 March 2022).
  5. Planning and Building Act. LOV-2008-06-27-71. 2008. Available online: https://lovdata.no/dokument/NL/lov/2008-06-27-71 (accessed on 10 March 2022).
  6. Paulssen, E.; Moltumyr, A. Hvorfor folkehelse i kommunal planlegging? [Why public health in municipal planning?]. In Folkehelse og Kommunal Planlegging; Helsedirektoratet: Oslo, Norway, 2013. [Google Scholar]
  7. Norwegian Directorate of Health. Systematic Public Health Work. Available online: https://www.helsedirektoratet.no/veiledere/systematisk-folkehelsearbeid/metode-og-prosess (accessed on 8 April 2022).
  8. Guglielmin, M.; Muntaner, C.; O’Campo, P.; Shankardass, K. A scoping review of the implementation of health in all policies at the local level. Health Policy 2018, 122, 284–292. [Google Scholar] [CrossRef] [PubMed]
  9. Shankardass, K.; Solar, O.; Murphy, K.; Greaves, L.; O’Campo, P. A scoping review of intersectoral action for health equity involving governments. Int. J. Public Health 2012, 57, 25–33. [Google Scholar] [CrossRef] [PubMed]
  10. E Van Vliet-Brown, C.; Shahram, S.; Oelke, N.D. Health in All Policies utilization by municipal governments: Scoping review. Health Promot. Int. 2017, 33, 713–722. [Google Scholar] [CrossRef] [PubMed]
  11. Weiss, D.; Lillefjell, M.; Magnus, E. Facilitators for the development and implementation of health promoting policy and programs—A scoping review at the local community level. BMC Public Health 2016, 16, 140. [Google Scholar] [CrossRef] [PubMed]
  12. Cairney, P.; St Denny, E.; Mitchell, H. The future of public health policymaking after COVID-19: A qualitative systematic review of lessons from Health in All Policies. Open Res. Eur. 2021, 1, 23. [Google Scholar] [CrossRef]
  13. Gase, L.N.; Schooley, T.; Lee, M.; Rotakhina, S.; Vick, J.; Caplan, J. A Practice-Grounded Approach for Evaluating Health in All Policies Initiatives in the United States. J. Public Health Manag. Pract. 2017, 23, 339–347. [Google Scholar] [CrossRef]
  14. Baum, F.; Lawless, A.; Delany, T.; MacDougall, C.; Williams, C.; Broderick, D.; Wildgoose, D.; Harris, E.; McDermott, D.; Kickbusch, I.; et al. Evaluation of Health in All Policies: Concept, theory and application. Health Promot. Int. 2014, 29, i130–i142. [Google Scholar] [CrossRef] [PubMed]
  15. Storm, I.; Harting, J.; Stronks, K.; Schuit, A.J. Measuring stages of health in all policies on a local level: The applicability of a maturity model. Health Policy 2014, 114, 183–191. [Google Scholar] [CrossRef]
  16. Shankardass, K.; Renahy, E.; Muntaner, C.; O’Campo, P. Strengthening the implementation of Health in All Policies: A methodology for realist explanatory case studies. Health Policy Plan. 2015, 30, 462–473. [Google Scholar] [CrossRef]
  17. Guglielmin, M.; Shankardass, K.; Bayoumi, A.; O’Campo, P.; Kokkinen, L.; Muntaner, C. A Realist Explanatory Case Study Investigating How Common Goals, Leadership, and Committed Staff Facilitate Health in All Policies Implementation in the Municipality of Kuopio, Finland. Int. J. Health Policy Manag. 2022. [Google Scholar] [CrossRef]
  18. Rogers, P.J. Using Programme Theory to Evaluate Complicated and Complex Aspects of Interventions. Evaluation 2008, 14, 29–48. [Google Scholar] [CrossRef]
  19. Synnevåg, E.S.; Amdam, R.; Fosse, E. Intersectoral Planning for Public Health: Dilemmas and Challenges. Int. J. Health Policy Manag. 2018, 7, 982–992. [Google Scholar] [CrossRef] [PubMed]
  20. Synnevåg, E.S.; Amdam, R.; Fosse, E. Public health terminology: Hindrance to a Health in All Policies approach? Scand. J. Public Health 2018, 46, 68–73. [Google Scholar] [CrossRef] [PubMed]
  21. Synnevåg, E.S.; Amdam, R.; Fosse, E. Legitimising Inter-Sectoral Public Health Policies: A Challenge for Professional Identities? Int. J. Integr. Care 2019, 19, 1–10. [Google Scholar] [CrossRef]
  22. Bergem, R.; Dahl, S.L.; Olsen, G.M.; Synnevåg, E.S. Nærmiljø og Lokalsamfunn for Folkehelsa. Sluttrapport frå Evaluering av Prosjektet Kartlegging og Utviklingsarbeid om Nærmiljø og Lokalsamfunn som Fremmer Folkehelse [Local Environments and Communities for Public Health. Final Report from the Evaluation of the Project on Mapping and Development Work on Local Environments and Communities that Promote Public Health]. 2019. Available online: https://bravo.hivolda.no/hivolda-xmlui/bitstream/handle/11250/2617619/Rapport%20nr%2095%20N%c3%a6rmilj%c3%b8%20.pdf?sequence=8&isAllowed=y (accessed on 10 March 2022).
  23. Scriven, M. Goal-free evaluation. In School Evaluation: The Politics and the Process; House, R., Ed.; McCutchan: Berkeley, CA, USA, 1973. [Google Scholar]
  24. Sverdrup, S. Følgeforskning som en nyere tendens i norsk evaluering: Hva er det, og hvordan kan det gjennomføres? [Follow-up research as a newer trend in Norwegian evaluation: What is it, and how can it be carried out?]. In Evaluering. Tradisjoner, Praksis og Mangfold [Evaluation. Traditions, Practice and Diversity]; Halvorsen, A., Madsen, E.L., Jentoft, N., Eds.; Fagbokforlaget: Bergen, Norway, 2013. [Google Scholar]
  25. Askim, J.; Døving, E.; Johnsen, Å. Evalueringspraksis i den norske statsforvaltningen 2005–2011 [Evaluation practice in the Norwegian central government administration 2005–2011]. In Evaluering. Tradisjoner, Praksis og Mangfold [Evaluation. Traditions, Practice and Diversity]; Halvorsen, A., Madsen, E.L., Jentoft, N., Eds.; Fagbokforlaget: Bergen, Norway, 2013. [Google Scholar]
  26. Baklien, B. Evaluering i Praksis [Evaluation in Practice]. 2007. Available online: http://www.forebygging.no/Artikler/2007-1998/Evaluering-i-praksis/ (accessed on 15 March 2022).
  27. Patton, M.C. Evaluation for the Way We Work. Nonprofit Q. 2006, 13, 28–33. [Google Scholar]
  28. Hagen, S. “Helse i alt Kommunen Gjør?…” En Undersøkelse av Samvariasjoner Mellom Kommunale Faktorer og Norske Kommuners Bruk av Folkehelsekoordinator, Fokus på Levekår og Prioritering av Fordelingshensyn Blant Sosioøkonomiske Grupper [Health in All the Municipality Does? An Investigation of Covariations between Municipal Factors and Norwegian Municipalities’ Use of Public-Health Coordinators, Focus on Living Conditions, and Prioritization of Distributional Concerns among Socioeconomic Groups]. Ph.D. Thesis, University of Bergen, Bergen, Norway, 2020. [Google Scholar]
  29. Such, E.; Smith, K.; Woods, H.B.; Meier, P. Governance of Intersectoral Collaborations for Population Health and to Reduce Health Inequalities in High-Income Countries: A Complexity-Informed Systematic Review. Int. J. Health Policy Manag. 2022. [CrossRef]
  30. Holt, D.H.; Ahlmark, N. How Do We Evaluate Health in All Policies? Comment on “Developing a Framework for a Program Theory-Based Approach to Evaluating Policy Processes and Outcomes: Health in All Policies in South Australia”. Int. J. Health Policy Manag. 2018, 7, 758–760. [Google Scholar] [CrossRef] [PubMed]
  31. Lawless, F.B.; Delany-Crowe, T.; MacDougall, C.; Williams, C.; McDermott, D.; van Eyk, H. Developing a Framework for a Program Theory-Based Approach to Evaluating Policy Processes and Outcomes: Health in All Policies in South Australia. Int. J. Health Policy Manag. 2018, 7, 510–521. [Google Scholar] [CrossRef] [PubMed]
  32. Weiss, C.H. Have We Learned Anything New About the Use of Evaluation? Am. J. Eval. 1998, 19, 21–33. [Google Scholar] [CrossRef]
  33. Baum, F.; van Eyk, H.; MacDougall, C.; Williams, C. Researching Health for All in South Australia: Reflections on Sustainability and Partnership. In Global Handbook of Health Promotion Research; Potvin, L., Jourdan, D., Eds.; Springer Nature: Cham, Switzerland, 2022; pp. 759–780. [Google Scholar] [CrossRef]
  34. Synnevåg, E.S. Planning for Public Health. Balancing Top-Down and Bottom-Up Approaches in Norwegian Municipalities. Ph.D. Thesis, University of Bergen, Bergen, Norway, 2019. [Google Scholar]
  35. De Leeuw, E.; Clavier, C.; Breton, E. Health policy—Why research it and how: Health political science. Health Res. Policy Syst. 2014, 12, 55. [Google Scholar] [CrossRef]
  36. Weiss, C.H. Evaluation Research: Methods for Assessing Program Effectiveness; Prentice-Hall: Englewood Cliffs, NJ, USA, 1972. [Google Scholar]
  37. Baklien, B. Veileder i Egenevaluering [Guide to Self-Evaluation]; Kommunesektorens Organisasjon (KS): Oslo, Norway, 2018. [Google Scholar]
  38. Conley-Tyler, M. A Fundamental Choice: Internal or External Evaluation? Eval. J. Australas. 2005, 4, 3–11. [Google Scholar] [CrossRef]
  39. Baklien, B. Evalueringsforskning for og om forvaltningen. In Evaluering av Offentlig Virksomhet; Foss, O.M.J., Ed.; NIBR: Oslo, Norway, 2000; Volume 4. [Google Scholar]
  40. The Office of the Auditor General. Riksrevisjonens Undersøkelse av Offentlig Folkehelsearbeid; [The Office of the Auditor General’s Investigation of Public Health Work]; 3:11; The Office of the Auditor General: Oslo, Norway, 2015. [Google Scholar]
  41. The Norwegian Institute of Public Health. Om Senter for Evaluering av Folkehelsetiltak [About the Centre for Evaluating Public Health Measures]. Available online: https://www.fhi.no/div/forskningssentre/senter-for-evaluering-av-folkehelsetiltak/om-senter-for-evaluering-av-folkehelsetiltak/ (accessed on 11 April 2022).
  42. Green, J.; Cross, R.; Woodall, J.; Tones, K. Health Promotion: Planning & Strategies, 4th ed.; SAGE Publications Ltd.: Thousand Oaks, CA, USA, 2019. [Google Scholar]
  43. The Norwegian Directorate of Health. Læring og Forbedring av det Systematiske Folkehelsearbeidet [Learning and Improvement of the Systematic Public Health Work]. Available online: https://www.helsedirektoratet.no/veiledere/systematisk-folkehelsearbeid/evalueringinternkontroll#vurdering-og-forbedring-av-kommunens-folkehelsearbeid (accessed on 12 February 2022).
Figure 1. Illustration of the systematic public-health work. Reprinted with permission from [7] (Norwegian Directorate of Health, 2018). [Translation to English: § 5 Overview (Oversikt); § 6, 1st paragraph, Planning strategy (Planstrategi); § 6, 2nd paragraph, Determine goals in plans (Fastsette mål i plan); §§ 4 and 7 Measures/Action (Tiltak); §§ 30 and 5 Evaluation (Evaluering)].
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
