The History and Purpose of Risk Assessments

October 9, 2025
Brian Glas

Risk assessments have evolved from ancient intuitive practices into relatively sophisticated modern frameworks that serve as foundational tools for business operations and cybersecurity programs. We’ll explore how risk assessment methodologies have adapted to changing threats and technological advances while maintaining the core principles of identifying, analyzing, and mitigating potential risks. Risk assessments shifted from “gut feeling” and learned experience to a more calculated process when Pascal and Fermat introduced probability theory in the 17th century [1]. The Industrial Revolution accelerated risk management as new technologies and larger-scale operations introduced risks at a scale that demanded more deliberate handling. The late 1800s saw the establishment of the first credit bureaus, which provided lenders with detailed information about borrowers. In 1909, the first mathematical model for non-life insurance was developed, laying the groundwork for modern insurance risk theory, which emerged in the 1930s.

The 20th century marked the formalization of risk management as a discipline, driven by the advent of computing during and after World War II. The introduction of the Monte Carlo method in the 1940s by Ulam and von Neumann revolutionized risk assessment by enabling the simulation of probabilistic outcomes. The 1950s saw risk management emerge as a recognized profession, and universities began offering specialized risk management degrees in the 1970s, further solidifying the field’s professional status. Enterprise Risk Management (ERM) is a natural evolution from transaction-based decisions to comprehensive risk event management backed by sophisticated analytics. Businesses have been developing and quantifying risk management and risk assessments for many decades; cybersecurity’s focus on risk assessments is relatively young by comparison.
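The Monte Carlo idea is easy to see in a few lines of code: instead of picking a single number for likelihood or impact, you sample uncertain inputs many times and look at the distribution of outcomes. Below is a minimal sketch in Python; the distributions and dollar figures are invented purely for illustration and are not drawn from any standard or dataset.

```python
import random
import statistics

def simulate_annual_loss(trials=100_000):
    """Monte Carlo sketch: sample uncertain frequency and impact many times."""
    results = []
    for _ in range(trials):
        # Events per year: triangular distribution between 0 and 4, most likely 1 (illustrative).
        events = random.triangular(0, 4, 1)
        # Loss per event: lognormal, median roughly $50k (illustrative).
        loss_per_event = random.lognormvariate(10.8, 0.8)
        results.append(events * loss_per_event)
    return results

losses = simulate_annual_loss()
print(f"Median annual loss: ${statistics.median(losses):,.0f}")
print(f"95th percentile:    ${statistics.quantiles(losses, n=20)[-1]:,.0f}")
```

The point is not the specific numbers but the shape of the output: a range of plausible annual losses rather than a single guess.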

Some of the earliest documented risk analysis guidance came from the National Bureau of Standards, now the National Institute of Standards and Technology (NIST). In 1978, OMB Circular A-71 on the Security of Federal Automated Information Systems required a risk analysis; however, no standard method for conducting that analysis had been developed. In 1979, NIST published FIPS 65: Guidelines for Automatic Data Processing Risk Analysis, which suggested an “order of magnitude approach” [2]. This approach is very similar to what we see today in the Factor Analysis of Information Risk (FAIR). NIST continued to develop a standard risk-based approach to system security. In 1995, NIST published SP 800-12 An Introduction to Computer Security: The NIST Handbook, based on the British Standard 7799: A Code of Practice for Information Security Management (1993), which also became the basis for ISO/IEC 27001. In 2002, NIST published SP 800-30 Risk Management Guide for Information Technology Systems (now titled Guide for Conducting Risk Assessments), which included methodologies for risk assessments. This led to the NIST Risk Management Framework (RMF), defined in SP 800-37 Risk Management Framework for Information Systems and Organizations: A System Life Cycle Approach for Security and Privacy. The RMF, coupled with SP 800-53 Security and Privacy Controls for Information Systems and Organizations, now provides a more complete system for classifying, categorizing, assessing, and implementing controls.
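To give a feel for what an “order of magnitude approach” looks like in practice, here is a deliberately simplified sketch (my own illustration, not the actual FIPS 65 tables): impact and frequency are each estimated only to the nearest power of ten, and their product gives a rough annualized loss figure.

```python
# Simplified illustration of an order-of-magnitude estimate (not the FIPS 65 tables):
# impact and annual frequency are each rated as a power of ten, then multiplied.
impact_exponent = 5       # event cost ~ $10^5 = $100,000
frequency_exponent = -1   # occurs ~ 10^-1 times per year (about once a decade)

annualized_loss = 10 ** (impact_exponent + frequency_exponent)
print(f"Rough annualized loss estimate: ${annualized_loss:,}")  # $10,000
```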

Other standards and frameworks have been developed to help organizations manage and assess risk.

  • ISO/IEC 27005 provides a comprehensive approach that complements ISO/IEC 27001, following six steps to establish context, identify, analyze, evaluate, treat, and accept risks.
  • FAIR (Factor Analysis of Information Risk) enables quantitative analysis by expressing risks in financial terms rather than subjective ratings, allowing organizations to prioritize cybersecurity investments based on potential financial impact (a rough sketch of this kind of calculation follows this list).
  • OCTAVE (Operationally Critical Threat, Asset, and Vulnerability Evaluation) emphasizes a business-driven approach focusing on critical assets, threat analysis, and business impact assessment.
  • COBIT (Control Objectives for Information and Related Technologies), published by ISACA (formerly the Information Systems Audit and Control Association), is an IT governance framework that integrates security, risk management, and compliance. Unlike some other frameworks focused on cybersecurity, COBIT provides a broader perspective on enterprise IT management. (Side note: ISACA bought the CMMI Institute, which manages the Capability Maturity Model Integration (CMMI), which we’ll get to shortly.)
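To show what expressing risk in financial terms can look like, here is a small FAIR-flavored sketch. It only touches the top of the FAIR taxonomy (loss event frequency times loss magnitude), and every range below is an invented placeholder rather than a calibrated estimate.

```python
# FAIR-flavored sketch: express risk as a dollar range instead of "High/Medium/Low".
# The taxonomy is simplified to its top level (loss event frequency x loss magnitude)
# and every number below is an illustrative guess, not a calibrated estimate.

# Loss event frequency: loss events per year (min, most likely, max).
lef = {"min": 0.1, "likely": 0.5, "max": 2.0}
# Loss magnitude: dollars lost per event (min, most likely, max).
magnitude = {"min": 20_000, "likely": 80_000, "max": 500_000}

annualized = {k: lef[k] * magnitude[k] for k in ("min", "likely", "max")}
print("Annualized loss exposure:")
for k, v in annualized.items():
    print(f"  {k:>6}: ${v:,.0f}")
```

A dollar range like this can be weighed directly against the cost of a control, which is exactly the kind of comparison a qualitative rating cannot support.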

To perform security risk assessments, you need a process that employs both qualitative and quantitative methodologies, and the results need to be measured against a standard or maturity model. Historically, information security risk assessments, in the form of threat models and penetration tests, would produce risks with qualitative values such as “High”, “Medium”, or “Low”. I had an undergraduate student ask me a couple of weeks ago, “So what? What am I supposed to do with that?” I told her that was an excellent question. A purely qualitative result of “High” doesn’t give the business enough information to make an informed prioritization choice. We also discussed the most common risk formula, Likelihood × Impact = Risk. Again, we are typically looking at H/M/L qualitative scoring for likelihood and impact, which doesn’t help prioritize anything outside of the assessment report itself. Likelihood is usually missing the element of time and is rarely clear about “likelihood of exploit” vs. “likelihood of discovery”. Impact is greatly simplified and rarely includes both primary and secondary impacts to systems and the business.
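To make the student’s “so what?” concrete, here is a short sketch contrasting the two styles. Both hypothetical findings below would land in the same “High” bucket on an H/M/L matrix, but once likelihood is expressed per year and impact includes both primary and secondary losses (all numbers invented for illustration), the prioritization question answers itself.

```python
# Two hypothetical findings that a typical H/M/L matrix would both score "High".
# All figures are illustrative only.
findings = [
    {
        "name": "Internet-facing RCE on legacy app",
        "annual_likelihood": 0.6,       # chance of exploitation within a year
        "primary_impact": 150_000,      # response, recovery, downtime
        "secondary_impact": 400_000,    # fines, customer churn, reputation
    },
    {
        "name": "Weak cipher on internal admin portal",
        "annual_likelihood": 0.05,
        "primary_impact": 40_000,
        "secondary_impact": 10_000,
    },
]

for f in findings:
    exposure = f["annual_likelihood"] * (f["primary_impact"] + f["secondary_impact"])
    print(f"{f['name']}: annualized exposure ~ ${exposure:,.0f}")

# "High" vs "High" gives no ordering; ~$330,000/yr vs ~$2,500/yr clearly does.
```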

Cybersecurity risk assessments are essential for identifying, evaluating, and prioritizing potential threats, allowing organizations to allocate resources effectively and make informed decisions to manage risk. The ultimate purpose is to safeguard critical information and systems, ensure operational continuity, meet legal or regulatory obligations, and support strategic business objectives. They can help clarify specific vulnerabilities and threats to assets, moving security decisions away from guesswork and toward evidence-based practices. By understanding the risk levels, organizations can balance risk exposure against available resources, aligning security with business goals and avoiding unnecessary spending or dangerous risk acceptance. One of the biggest challenges facing risk assessments is ensuring they are an accurate analysis of risk, with a solid understanding of both likelihood and impact, to enable the business to make well-informed decisions on what controls need to be put in place to manage the risk.

Ultimately, this series is working toward performing risk assessments with a focus on cloud deployments. Next, we will look at maturity models in a similar manner. Then we will work through how well risk assessments in the cloud can be performed using existing methodologies and maturity models, and potentially explore new ways of looking at things. Hopefully, this will be a helpful series that builds a common understanding of risk assessments, maturity models, and what we can do to improve them for the future.

 

References

[1] https://risksciences.com/history-of-risk/

[2] https://csrc.nist.gov/nist-cyber-history/risk-management/chapter
