

A couple of weeks ago, I participated in a panel discussion at the 2025 State Certification Testing of Election Systems National Conference (SCTSNEC) in Washington, D.C. The conference, created in 2011 by Ball State University’s Voting System Technical Oversight Program (VSTOP), was hosted this year by the U.S. Election Assistance Commission (EAC). It is typically attended by election administrators, voting system certification specialists, testing laboratory professionals, and members of the EAC, according to VSTOP (https://sites.bsu.edu/vstop/conferences/).
The panel I joined covered the topic “Verifying Non-Voting Election Technology: The RABET-V Model”. I was one of four panelists, representing Cloud Security Partners, which has participated in the RABET-V process as an assessor. The Rapid Architecture-Based Election Technology Verification (RABET-V) program is a rapid, reliable, and cost-effective approach to verifying both vendor-provided and homegrown non-voting election systems (https://rabetv.org). Non-voting election systems include electronic pollbooks, voter registration systems, election night reporting systems, ballot delivery, and other supporting systems.
RABET-V takes a unique approach to verification, combining three types of assessments. It starts with an organizational assessment, based on the OWASP Software Assurance Maturity Model (SAMM), that evaluates the maturity and reliability of the technology provider’s secure development processes, including management and supply chain oversight. The results provide insight into the organization’s capability to consistently produce secure, high-quality systems and to manage the risk of change. Next comes an architecture assessment. The architecture is evaluated at both the system level, through logical diagrams and the relationships between components, and the software level, through tooling analysis of the software architecture. Ten security control categories are assessed, and a high-level threat model is completed. As with the organizational assessment, the results are quantified into a scoring model that measures the maturity and security of the architecture. Finally, product verification is conducted, leveraging the outputs of the organizational and architecture assessments: 153 security controls are assessed for each primary component of the product through a combination of attestation, configuration review, and penetration testing. The end result is a score in each area, measured against a benchmark, plus recommendations for improvement at the organizational and architecture levels based on the findings; this last output is unique to RABET-V and is intended to help advance the quality and security of election-supporting technology.
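To make the scoring idea concrete, here is a minimal Python sketch of how a maturity-style scoring model might aggregate assessed control categories and compare them against a benchmark. The category names, weights, scores, and benchmark value are invented for illustration; they are not RABET-V’s actual controls or scoring model.

```python
from dataclasses import dataclass

# Hypothetical illustration of a maturity-style scoring model: each security
# control category receives a 0-3 maturity score, and the weighted average
# is compared against a benchmark. Everything below is invented for
# illustration; it is NOT RABET-V's actual scoring model.

@dataclass
class ControlCategory:
    name: str
    score: float   # assessed maturity, 0.0 (absent) to 3.0 (optimized)
    weight: float  # relative importance of the category

def weighted_maturity(categories: list[ControlCategory]) -> float:
    """Return the weighted-average maturity across all categories."""
    total_weight = sum(c.weight for c in categories)
    return sum(c.score * c.weight for c in categories) / total_weight

categories = [
    ControlCategory("Authentication", score=2.5, weight=1.5),
    ControlCategory("Data Protection", score=2.0, weight=1.5),
    ControlCategory("Logging & Monitoring", score=1.5, weight=1.0),
]

BENCHMARK = 2.0  # illustrative threshold, not an official figure
overall = weighted_maturity(categories)
print(f"Overall maturity: {overall:.2f} "
      f"({'meets' if overall >= BENCHMARK else 'below'} benchmark {BENCHMARK})")
```

A model like this makes re-verification cheap to reason about: rescoring only the categories a change touches, rather than repeating the full assessment, is the kind of efficiency a risk-based approach aims for.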
On the panel, we talked about the current state of election-supporting technology and how rapidly it is expanding. While voting machines are typically offline or air-gapped, election-supporting technology is more commonly connected to the internet to interact with APIs and other services, and, as elsewhere, cloud-based services are becoming increasingly common. For instance, an electronic pollbook may need to communicate with a central API to help ensure that someone doesn’t check in to vote multiple times across locations. A voter registration database may need to synchronize changes with a system of record in the cloud. Election night reporting has to be accessible from the internet so the public can follow the current state of tabulation. In these ways, election-supporting technology resembles other modern systems, so RABET-V was developed to provide a robust assessment methodology for systems that update and change frequently. The initial testing is holistic, but if the measured maturity levels are sufficiently high, future updates may require less testing; this risk-based approach keeps verification efficient.
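To illustrate the electronic pollbook example above, here is a toy Python sketch of a central check-in service that accepts only the first check-in per voter. The class and field names are hypothetical, and a real deployment would use an authenticated, networked API with durable storage rather than this in-memory model.

```python
import threading

# Toy model of the duplicate-check-in scenario: every pollbook records
# check-ins through one central service, which atomically accepts only the
# first check-in per voter. Names and structure are invented for
# illustration; this is not any vendor's actual protocol.

class CheckInService:
    """Central service that records at most one check-in per voter."""

    def __init__(self) -> None:
        self._checked_in: dict[str, str] = {}  # voter_id -> location
        self._lock = threading.Lock()

    def check_in(self, voter_id: str, location: str) -> bool:
        """Return True for a voter's first check-in, False for a duplicate."""
        with self._lock:  # make check-and-record atomic across pollbooks
            if voter_id in self._checked_in:
                return False  # already checked in elsewhere; flag for review
            self._checked_in[voter_id] = location
            return True

service = CheckInService()
print(service.check_in("V-1001", "Precinct 4"))  # True: first check-in
print(service.check_in("V-1001", "Precinct 9"))  # False: duplicate attempt
```

The essential design point is that the check-and-record step is atomic at the central service, so two pollbooks racing to check in the same voter cannot both succeed.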
Here at Cloud Security Partners, we have conducted all three types of assessments for technology providers going through the RABET-V process. If you have products going through RABET-V, we may see you there. If not, and you would like to gain some of the benefits of these types of assessments, please let us know. We can conduct a secure development process assessment using the OWASP SAMM framework, providing a comprehensive picture of your current maturity along with tailored recommendations for improving it and the security of your development processes. We can assess the architecture of your systems, whether they are hosted in your own data center or in the cloud, and we have a wealth of experience and tooling built specifically for cloud-based assessments. We can also provide more traditional penetration testing for system verification.
Brian Glas is a Fellow at Cloud Security Partners and has worked in IT for 25 years, including the last two decades in information and application security. He started as an enterprise Java developer, then helped build an application security program as both tech lead and manager. Brian spent several years as a consultant helping clients build AppSec programs, create and update SDLCs, and drive related initiatives. He has worked on the Trustworthy Computing team at Microsoft and is now the chair and an assistant professor of Computer Science at Union University, where he builds and reimagines the Computer Science and Cybersecurity programs. At Cloud Security Partners, he focuses on programs and policies, organizational assessments using SAMM, threat modeling, and architecture reviews. He has been a co-lead of SAMM v1.1-2.0+ and of the OWASP Top 10 since 2017, and he helped develop the RABET-V program for assessing non-voting election technology.