Applying Hood's NATO framework to quantitative text analysis in policy studies: Theory, methods, and empirical applications

Abstract

Hood’s “tools of government” framework conceptualizes governing as the deployment of four fundamental resource types: nodality (information and network position), authority (legal power), treasure (financial resources), and organization (direct provision and administrative capacity). Originating as a parsimonious typology for describing how governments act, the NATO framework has become influential in policy design and policy instrument research because it offers a stable “instrument language” that is portable across sectors and political systems. Meanwhile, the “text-as-data” turn in political science and public policy has enabled researchers to analyze policy documents at scale, shifting instrument research from small-N interpretive coding toward replicable measurement of instrument presence, design features, and policy mixes. This review synthesizes how NATO has been operationalized for quantitative text analysis and situates those practices within wider advances in policy instrumentation theory and computational methods. Building on systematic review principles, it organizes contributions along three linked research programs: (1) conceptual refinement of NATO and related policy tool taxonomies in policy design and policy mixes; (2) methodological foundations of automated content analysis, including dictionary methods, supervised learning, topic models, scaling models, and span-level annotation; and (3) empirical applications that code policy instruments from legal and policy texts, including emerging annotated corpora that enable model training and benchmarking. Across these streams, recurring methodological challenges include construct validity (“instrument” as intent versus effect), unit-of-analysis selection (document, section, sentence, clause, span), multi-label coding (co-occurring instruments), calibration and intensity measurement, cross-jurisdiction comparability, and reliability of human labels. 
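To make the multi-label coding challenge concrete, a minimal dictionary-based sketch is shown below. The seed terms and the `code_sentence` helper are purely illustrative assumptions, not a validated codebook; terms are matched as word-initial stems, and a sentence may receive several NATO labels at once.

```python
import re

# Hypothetical seed stems for each NATO resource type (illustrative only;
# a real codebook would be far richer and validated against human labels).
NATO_DICTIONARY = {
    "nodality": ["inform", "publish", "campaign", "advise", "disclose"],
    "authority": ["prohibit", "require", "license", "mandate", "penalt"],
    "treasure": ["subsidy", "grant", "tax credit", "fee", "funding"],
    "organization": ["agency", "provide", "administer", "staff", "operate"],
}

def code_sentence(sentence: str) -> list[str]:
    """Return every NATO category whose stems appear: multi-label by design,
    since one clause can combine, e.g., authority and nodality resources."""
    text = sentence.lower()
    return sorted(
        label
        for label, terms in NATO_DICTIONARY.items()
        if any(re.search(r"\b" + re.escape(term), text) for term in terms)
    )

# One sentence, two co-occurring instrument types:
code_sentence("The ministry shall require licenses and publish guidance.")
```

Dictionary methods like this trade recall and nuance for transparency, which is why the literature reviewed here increasingly pairs them with supervised classifiers trained on annotated corpora.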
The review concludes with a research agenda for NATO-based text measurement, emphasizing transparent codebooks, hierarchical and multi-task models, evaluation against human annotation with appropriate agreement metrics, and integration of instrument coding with causal and design-oriented policy evaluation.
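The call for evaluation against human annotation with appropriate agreement metrics can be illustrated with a chance-corrected coefficient. Cohen's kappa for two coders is sketched here because it is compact; Krippendorff's alpha, which generalizes to multiple coders and missing data, would typically be preferred in practice.

```python
from collections import Counter

def cohens_kappa(coder_a: list[str], coder_b: list[str]) -> float:
    """Chance-corrected agreement for two coders labeling the same units.
    (Krippendorff's alpha extends this idea to >2 coders and missing data.)"""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    # Expected agreement if both coders labeled at random according to
    # their observed marginal category frequencies.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical example: two coders agree on 3 of 4 sentence-level NATO labels,
# but kappa is lower than the raw 0.75 agreement after chance correction.
kappa = cohens_kappa(
    ["authority", "treasure", "authority", "nodality"],
    ["authority", "treasure", "nodality", "nodality"],
)
```

Reporting a chance-corrected statistic alongside raw percent agreement is what distinguishes a defensible reliability check from an inflated one, especially when one NATO category dominates the corpus.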

Keywords

policy instruments, NATO framework, tools of government, text as data, automated content analysis, policy design, policy mixes, supervised learning, annotation, policy measurement

References

  1. Bali, A. S., Capano, G., & Ramesh, M. (2019). Anticipating and designing for policy effectiveness. Policy and Society, 38(1), 1–13. https://doi.org/10.1080/14494035.2019.1579502
  2. Bali, A. S., Howlett, M., Lewis, J. M., & Ramesh, M. (2021). Procedural policy tools in theory and practice. Policy and Society, 40(3), 295–311. https://doi.org/10.1080/14494035.2021.1965379
  3. Benoit, K., & Laver, M. (2007). Estimating party policy positions: Comparing expert surveys and hand-coded content analysis. Electoral Studies, 26(1), 90–107. https://doi.org/10.1016/j.electstud.2006.04.008
  4. Bruinsma, B., & Gemenis, K. (2019). Validating Wordscores: The promises and pitfalls of computational text scaling. Communication Methods and Measures. Advance online publication. https://doi.org/10.1080/19312458.2019.1594741
  5. Capano, G. (2020). The knowns and unknowns of policy instrument analysis. SAGE Open. https://doi.org/10.1177/2158244019900568
  6. Capano, G., Pritoni, A., & Vicentini, G. (2020). Do policy instruments matter? Governments’ choice of policy mix and higher education performance in Western Europe. Journal of Public Policy, 40(3), 375–401. https://doi.org/10.1017/S0143814X19000047
  7. Devlin, J., Chang, M.-W., Lee, K., & Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding (arXiv:1810.04805). arXiv. https://doi.org/10.48550/arXiv.1810.04805
  8. Fernández-i-Marín, X., Knill, C., & Steinebach, Y. (2021). Studying policy design quality in comparative perspective. American Political Science Review, 115(3), 931–947. https://doi.org/10.1017/S0003055421000186
  9. Grimmer, J., & Stewart, B. M. (2013). Text as data: The promise and pitfalls of automatic content analysis methods for political texts. Political Analysis, 21(3), 267–297. https://doi.org/10.1093/pan/mps028
  10. Hannah, A. (2021). Procedural tools and pension reform in the long run: The case of Sweden. Policy and Society, 40(3), 362–378. https://doi.org/10.1080/14494035.2021.1955487
  11. Hase, V. (2022). Automated content analysis. In Standardisierte Inhaltsanalyse in der Kommunikationswissenschaft – Standardized content analysis in communication research. Springer VS. https://doi.org/10.1007/978-3-658-36179-2_3
  12. Hayes, A. F., & Krippendorff, K. (2007). Answering the call for a standard reliability measure for coding data. Communication Methods and Measures, 1(1), 77–89. https://doi.org/10.1080/19312450709336664
  13. Hjorth, F. (2015). Comparing automated methods for estimating party positions. Research & Politics. https://doi.org/10.1177/2053168015580476
  14. Hood, C. C. (1983). The tools of government. Palgrave Macmillan. https://doi.org/10.1007/978-1-349-17169-9
  15. Howlett, M. (2018). The criteria for effective policy design: Character and context in policy instrument choice. Journal of Asian Public Policy, 11(3), 245–266. https://doi.org/10.1080/17516234.2017.1412284
  16. Howlett, M. (2019). Procedural policy tools and the temporal dimensions of policy design: Resilience, robustness and the sequencing of policy mixes. International Review of Public Policy. https://doi.org/10.4000/irpp.310
  17. Howlett, M., & Mukherjee, I. (2014). Policy design and non-design: Towards a spectrum of policy formulation types. Politics and Governance, 2(2), 57–71. https://doi.org/10.17645/pag.v2i2.149
  18. Howlett, M., & Rayner, J. (2007). Design principles for policy mixes: Cohesion and coherence in “new governance arrangements.” Policy and Society, 26(4), 1–18. https://doi.org/10.1016/S1449-4035(07)70118-2
  19. Howlett, M., Giest, S., Mukherjee, I., & Taeihagh, A. (2025). New policy tools and traditional policy models: Better understanding behavioural, digital and collaborative instruments. Policy Design and Practice. https://doi.org/10.1080/25741292.2025.2495373
  20. Isoaho, K., Gritsenko, D., et al. (2021). Topic modeling and text analysis for qualitative policy research. Policy Studies Journal. https://doi.org/10.1111/psj.12343
  21. Jin, Z., & Mihalcea, R. (2023). Natural language processing for policymaking. In Handbook of computational social science for policy (pp. 141–162). Springer. https://doi.org/10.1007/978-3-031-16624-2_7
  22. John, P. (2013). All tools are informational now: How information and persuasion define the tools of government. Policy & Politics, 41(4), 605–620. https://doi.org/10.1332/030557312X655729
  23. Klemmensen, R., Binzer Hobolt, S., & Hansen, M. E. (2007). Estimating policy positions using political texts: An evaluation of the Wordscores approach. Electoral Studies, 26(4), 746–755. https://doi.org/10.1016/j.electstud.2007.07.006
  24. Lascoumes, P., & Le Galès, P. (2007). Understanding public policy through its instruments: From the nature of instruments to the sociology of public policy instrumentation. Governance. https://doi.org/10.1111/j.1468-0491.2007.00342.x
  25. Laver, M., Benoit, K., & Garry, J. (2003). Extracting policy positions from political texts using words as data. American Political Science Review, 97(2), 311–331. https://doi.org/10.1017/S0003055403000698
  26. Linder, S. H., & Peters, B. G. (1989). Instruments of government: Perceptions and contexts. Journal of Public Policy, 9(1), 35–58. https://doi.org/10.1017/S0143814X00007960
  27. Lowe, W. (2008). Understanding Wordscores. Political Analysis, 16(4), 356–371. https://doi.org/10.1093/pan/mpn004
  28. Margetts, H. (2024). How rediscovering nodality can improve democratic governance in a digital world. Public Administration. https://doi.org/10.1111/padm.12960
  29. Mukherjee, I., Çoban, M. K., & Bali, A. S. (2021). Policy capacities and effective policy design: A review. Policy Sciences, 54(2), 243–268. https://doi.org/10.1007/s11077-021-09420-8
  30. Pacheco-Vega, R. (2020). Environmental regulation, governance, and policy instruments: Where are we now, 20 years after the stick, carrot and sermon typology? Journal of Environmental Policy & Planning. https://doi.org/10.1080/1523908X.2020.1792862
  31. Page, M. J., McKenzie, J. E., Bossuyt, P. M., et al. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ, 372, Article n71. https://doi.org/10.1136/bmj.n71
  32. Restemeyer, B., van den Brink, M., & Arts, J. (2024). A policy instruments palette for spatial quality: Lessons from Dutch flood risk management. Journal of Environmental Policy & Planning, 26(3), 249–263. https://doi.org/10.1080/1523908X.2024.2328072
  33. Rice, D., Siddiki, S., Frey, S., Kwon, J. H., & Sawyer, A. (2021). Machine coding of policy texts with the Institutional Grammar. Public Administration, 99(2), 248–262. https://doi.org/10.1111/padm.12711
  34. Roberts, M. E., Stewart, B. M., & Airoldi, E. M. (2016). A model of text for experimentation in the social sciences. Journal of the American Statistical Association, 111(515), 988–1003. https://doi.org/10.1080/01621459.2016.1141684
  35. Roberts, M. E., Stewart, B. M., & Tingley, D. (2019). stm: An R package for structural topic models. Journal of Statistical Software, 91(2), 1–40. https://doi.org/10.18637/jss.v091.i02
  36. Roberts, M. E., Stewart, B. M., Tingley, D., et al. (2014). Structural topic models for open-ended survey responses. American Journal of Political Science, 58(4), 1064–1082. https://doi.org/10.1111/ajps.12103
  37. Rogge, K. S., & Reichardt, K. (2016). Policy mixes for sustainability transitions: An extended concept and framework for analysis. Research Policy, 45(8), 1620–1635. https://doi.org/10.1016/j.respol.2016.04.004
  38. Ruedin, D. (2013). The role of language in the automatic coding of political texts. Swiss Political Science Review. https://doi.org/10.1111/spsr.12050
  39. Schneider, A. L., & Ingram, H. (1990). The behavioral assumptions of policy tools. The Journal of Politics. https://doi.org/10.2307/2131904
  40. Schoenefeld, J. J., Schulze, K., Hildén, M., & Jordan, A. J. (2021). The challenging paths to net-zero emissions: Insights from the monitoring of national policy mixes. The International Spectator, 56(3), 24–40. https://doi.org/10.1080/03932729.2021.1956827
  41. Slapin, J. B., & Proksch, S.-O. (2008). A scaling model for estimating time-series party positions from texts. American Journal of Political Science, 52(3), 705–722. https://doi.org/10.1111/j.1540-5907.2008.00338.x
  42. Stead, D. (2021). Conceptualizing the policy tools of spatial planning. Journal of Planning Literature, 36(3), 297–311. https://doi.org/10.1177/0885412221992283
  43. Steinebach, Y., Hinterleitner, M., Knill, C., & Fernández-i-Marín, X. (2024). A review of national climate policies via existing databases. npj Climate Action, 3, Article 80. https://doi.org/10.1038/s44168-024-00160-y
  44. Uyarra, E., Flanagan, K., & Laranja, M. (2011). Reconceptualising the “policy mix” for innovation. Research Policy, 40(5), 702–713. https://doi.org/10.1016/j.respol.2011.02.005
  45. Vabo, S. I., & Røiseland, A. (2012). Conceptualizing the tools of government in urban network governance. International Journal of Public Administration, 35(14), 934–946. https://doi.org/10.1080/01900692.2012.691243
  46. Viehmann, C., Beck, T., Maurer, M., Quiring, O., & Gurevych, I. (2023). Investigating opinions on public policies in digital media: Setting up a supervised machine learning tool for stance classification. Communication Methods and Measures, 17(2), 150–184. https://doi.org/10.1080/19312458.2022.2151579
  47. Xie, M., Liao, X., & Yaguchi, T. (2025). The policy spatial footprint: Causal identification of land value capitalization using network-time exposure. Land, 14(11), 2240. https://doi.org/10.3390/land14112240
  48. Young, L., & Soroka, S. (2012). Affective news: The automated coding of sentiment in political texts. Political Communication, 29(2), 205–231. https://doi.org/10.1080/10584609.2012.671234
  49. Zhao, J., & Li, C. (2022). Research on the classification of policy instruments based on BERT model. Discrete Dynamics in Nature and Society, 2022, Article 6123348. https://doi.org/10.1155/2022/6123348