Las Vegas 2023

Architecture-As-Code: DevOps for System Engineering


Jondavid "JD" Black

Director of Digital Transformation, Northrop Grumman





To introduce our next speaker: one of the highlights of working with Dr. Spear on Wiring the Winning Organization was learning more about the birth of systems engineering at NASA. Over the years, many speakers have shared the challenges of integrating the systems engineering discipline into DevOps, and the ever-growing successes of doing so for mission-critical systems. Think about cyber-industrial systems: that means rockets, the electric motors in battery-powered cars, MRI and CT scanners, pacemakers, and so much more. Our next speaker is JD Black. He's the Director of Digital Transformation for Missile Defense Solutions at Northrop Grumman. He's going to teach us what systems engineering is all about, the challenges of integrating that discipline into modern software methods, and some exciting tools he built to enable systems engineering as code and help bridge the gap between Dev, Ops, and now systems engineers. Here's JD.


Thank you.


Thanks, Gene.


All right. Good morning. It's really great to be here. First off, a huge thanks to Gene and Anne and the IT Revolution team for the opportunity to be here with you, and a huge thanks to the teams and individuals who've helped us learn and explore and innovate on some of the concepts I'm going to share with you today. So I'm JD Black. I work in Northrop Grumman's space sector. This week I've had a few people come up and say, "That's a familiar name. What is it that you guys do?" You probably recognize some of our more public work: we fly resupply missions to the International Space Station, and we built the James Webb Space Telescope. But we do a lot more than that. Today I'd like to talk about some of the challenges and opportunities of applying the DevOps principles we discuss here to the practices of systems engineering in a complex, mission-critical world.


I've been supporting government customer missions for about the past 25 years. I've also been fortunate to have the opportunity to come to this conference for the past few years, and every year something hits a little too close to home. That moment for me yesterday was when Paul Gaffney said PowerPoint is the second most destructive force on the planet <laugh>. It made me realize this talk is really all about a strategy to defend against that destructive force in engineering: when you hear people mention PowerPoint engineering, they're usually talking about systems engineering. Then I realized I'm actually someone who has spent the past 18 years dedicated to defending against the first most destructive force on the planet. We literally spend our days in partnership with our government customers discovering new and better ways to hit a bullet with a bullet in space from the other side of the planet. So I'd like to share some of the stories from that time with you today.


But let's get oriented first. What is systems engineering anyway? In Gene and Steve's new book, they discuss systems engineering in the context of the US space program, and they rightly assert that where other nations failed to achieve their goals in space, it was sound systems engineering that allowed the US to put boots on regolith on July 20th, 1969. To describe systems engineering simply: it tends to be all the physics, math, architecture, simulation, analysis, and other engineering activities that help us establish high confidence in the systems we deliver and operate. Unlike so many of you, we build systems we hope never get used for their intended purpose; but if they're ever needed, they absolutely have to work perfectly the first time. And when real-world system tests cost hundreds of millions of dollars, continuous testing in a truly representative environment is not feasible. For the operations people in the room, imagine a world where an unplanned system outage literally results in a call to the White House. How would that impact your culture? So we rely on rigorous systems engineering practices to help us achieve the high confidence necessary for our mission, we rely on systems engineering to help us succeed in this world, and we take pride in our systems engineering work.


But we face a dilemma. Highly successful, experienced systems engineers look at all these DevOps ideas and can't see how to apply them to systems engineering work. Systems engineering products have historically been document-oriented, not code-oriented. If we were to deliver 50 pounds of software, there would probably be 250 pounds of engineering documentation that goes with it. And there are no compilers, scanners, or automated tests that can comprehend and provide rapid feedback on those artifacts. Much of the work tends to be orchestrated through the CAB or ERB or ARB, depending on your team's terminology. So how can we tailor DevOps practices to help us accelerate and succeed in the world of systems engineering?


Well, the government provides extensive guidance on what good systems engineering processes should look like. We usually refer to this as the systems engineering V. For context, just getting down to the first milestone of that V can take anywhere from months to over a year, depending on the scale and complexity of your system. But real-world mission needs are rapidly changing and accelerating. So how can we do all of this critical work at mission speed, without sacrificing rigor? We see in the commercial world how DevOps practices have helped overcome the wall of confusion between development teams incentivized for rapid feature delivery and operations teams incentivized for maintaining system availability. We see how unified cross-functional baselines and automation bring speed as well as quality. We eagerly learn from all of you and tailor your knowledge and learning to our work to continuously improve.


But how do you work in a systems engineering world when you're incentivized for completeness, measured at gate reviews? Are all the requirements defined, derived, allocated, and traced, including all the regulatory constraints, in time for the systems requirements review? Are all the trade studies, analyses, interface documents, and high-level design documents completed, reviewed, approved, and ready for the preliminary design review? These incentives often create divides between systems engineering teams and product development teams, who don't have the budget or schedule to wait on complete, approved documentation if they're going to meet their milestones. So a few years back, I had the privilege to work under one of our partners on a complex global system. Shortly after arriving on the team, one of the lead architects sat down with me and we spent a good half day just walking through the system architecture that was developed in a model-based systems engineering tool.


It was nothing short of impressive. After seeing all this complex, detailed work, I wanted to follow it through the flow of our value stream. I went over to one of the product managers responsible for building and delivering the working system, and I asked, "How do you use that architecture model to improve the work of your team?" The immediate response was, "What architecture model?" Product delivery had so much to accomplish in such a short amount of time that they started early, worked as fast as they could, and delivered a product that met all their milestones. But at the end of the day, when it was time to release the system, the details in the architecture models and the documents didn't align with the system we actually built, tested, and delivered. It turns out the product teams didn't have access to, or the budget for licenses to use, the systems engineering tools. The real-world interface between systems engineering and product development was mostly an export of the requirements database to Excel, sent to the product teams as an email.


So as a result, we had to embrace some unplanned mandatory investment to rework our architecture models to align with the as-built system. With your value stream mapping hats on, you can probably immediately see the opportunity for improvement here. So we understand the critical value of rigorous systems engineering, and we've seen technical teams employ DevOps with amazing results. On this same program, we saw our operations team create an infrastructure-as-code baseline that automated the deployment and configuration of complex, highly secure systems. We asked our level-two operations lead to form a cross-functional team with developers and cybersecurity to shift that work left in our process. This new team, in close partnership with a fabulous cybersecurity lead, transformed our bare-metal deployment, configuration, and accreditation from a three-to-nine-month activity into a process that achieved authorization to operate, with all the required scans, reports, and documentation, in less than an hour. The key enabler for us was the infrastructure-as-code unified baseline that we could configuration-manage and collaborate around to transform manual, sometimes very interpretive work into something unambiguous and automated. Perhaps similar strategies could improve the integration of systems engineering into the development and delivery flow of our products. So our idea was to create an architecture-as-code capability that enabled DevOps practices for systems engineering work products.
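
To make the "unified baseline" idea concrete, here is a toy sketch (not the team's actual tooling; the package and service names are illustrative): desired state is declared as data, and a small engine computes the actions needed to converge a machine onto the baseline, turning interpretive manual work into an unambiguous, repeatable plan.

```python
# Desired state declared as data -- the collaborative, configuration-managed
# baseline. Names here are purely illustrative.
DESIRED = {
    "packages": {"openssh", "auditd"},
    "services": {"auditd": "enabled"},
}

def plan(current, desired):
    """Diff observed state against the declared baseline; return actions."""
    actions = [("install", pkg)
               for pkg in sorted(desired["packages"] - current["packages"])]
    for svc, state in sorted(desired["services"].items()):
        if current["services"].get(svc) != state:
            actions.append(("set_service", svc, state))
    return actions

# A host missing the audit daemon yields the same plan every time it is run.
current = {"packages": {"openssh"}, "services": {}}
print(plan(current, DESIRED))
# -> [('install', 'auditd'), ('set_service', 'auditd', 'enabled')]
```

Because the plan is computed from data rather than from a runbook someone interprets, the same baseline can also drive the scans and reports needed for accreditation.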


Our vision was to create a minimum viable process and product: a domain-specific language that allowed our systems engineering teams to declaratively define the architecture of their system in the language of the system domain, in a way that would provide transparent and direct value to product development and operations teams. The architects could define the top-down system constraints, interactions, design budgets, and test specifications, and then release that as a product to the teams, who could incrementally refine it for their product architectures to incorporate capability enhancements through their planning and release cycles.
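
The shape of that idea can be sketched in plain Python (the class names, fields, and budget rule below are assumptions for illustration, not the actual product): architects release a top-level element with a design budget, and product teams refine it incrementally without being able to silently violate the released constraint.

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    power_budget_w: float                    # design budget released top-down
    interfaces: list = field(default_factory=list)
    children: list = field(default_factory=list)

    def refine(self, child):
        """Product teams add detail without exceeding the released budget."""
        allocated = sum(c.power_budget_w for c in self.children)
        if allocated + child.power_budget_w > self.power_budget_w:
            raise ValueError(f"{self.name}: refinement exceeds power budget")
        self.children.append(child)
        return child

# Architects release the top-level constraint; a product team refines it
# incrementally through its planning and release cycles.
bus = Component("avionics-bus", power_budget_w=40.0, interfaces=["MIL-STD-1553"])
bus.refine(Component("flight-computer", power_budget_w=25.0))
bus.refine(Component("gps-receiver", power_budget_w=10.0))
print([c.name for c in bus.children])  # -> ['flight-computer', 'gps-receiver']
```

The point of the design is that the constraint travels with the artifact: a refinement that busts the budget fails immediately, in the teams' own workflow, rather than at a gate review months later.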


And we had some success with this MVP. We had a small team using these practices who discovered a flawed architectural assumption late in the lifecycle as they were building capability; they were able to rapidly re-architect, auto-generate, and deliver a working system. A product team from the same program I was discussing earlier adopted this architecture-as-code strategy. They completely ignored code generation, instead focusing on tailored systems engineering artifact generation to reduce the lead time for a mission-critical, safety-critical product from six-to-eight weeks to under an hour. They also transitioned their safety-critical testing from a probability-based sampling of manual test cases to an exhaustive automated test of every possible safety permutation that product could ever face in the real world. They created fully automated pipelines that tested and analyzed every requirement and used the information in the architecture-as-code model to provide fully traceable documentation and generate detailed acceptance documents, with data on every test case for every requirement for every module, and a full report of objective evidence demonstrating success.
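
An illustrative sketch of what exhaustive safety-permutation testing looks like (the interlock rule and signal names below are invented for illustration): enumerate every combination of the discrete inputs and record a traceable result for each case, rather than sampling a subset manually.

```python
from itertools import product

def fire_permitted(armed, target_valid, self_test_ok):
    # Example safety rule: action requires every interlock to be satisfied.
    return armed and target_valid and self_test_ok

# Every possible permutation of the three boolean inputs (2**3 = 8 cases).
cases = list(product([False, True], repeat=3))
report = [{"case": i, "inputs": c, "permitted": fire_permitted(*c)}
          for i, c in enumerate(cases)]

# Objective evidence: only the fully-satisfied permutation permits the action.
assert sum(r["permitted"] for r in report) == 1
print(len(report), "permutations tested")  # -> 8 permutations tested
```

With a handful of discrete inputs the full permutation space stays small enough to test exhaustively in a pipeline, and the per-case records become the data behind the generated acceptance documents.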


So we learned a lot from these teams, but we also learned there was no way a broader systems engineering culture would ever adopt and use a custom domain-specific language littered with curly braces and semicolons. So we're currently pivoting to the next viable product, one that focuses on approachability, usability, and extensibility. We're now using a much simpler structured data representation for architectures in YAML, and we've transitioned to an extensible Python implementation that's consistent with what systems engineering analysts are accustomed to using on a daily basis. We're currently exploring this on another complex system to declare an architecture with full vertical integration from system to component to module, with all the horizontal and vertical integration documented and traced.
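
A sketch of what that structured representation might enable, shown as the Python dict a YAML loader would produce (the schema and key names here are assumptions, not the published format): a simple walk can check vertical traceability by verifying that every module traces to a system-level requirement.

```python
# The architecture as a YAML loader would hand it to Python. In practice this
# would come from a .yaml file; the structure here is purely illustrative.
architecture = {
    "system": "example-system",
    "requirements": ["REQ-001", "REQ-002"],
    "components": [
        {"name": "sensor",
         "modules": [{"name": "tracker", "satisfies": ["REQ-001"]}]},
        {"name": "guidance",
         "modules": [{"name": "autopilot", "satisfies": ["REQ-002"]}]},
    ],
}

def untraced_modules(arch):
    """Return component.module names whose trace links point at unknown requirements."""
    known = set(arch["requirements"])
    return [f'{comp["name"]}.{mod["name"]}'
            for comp in arch["components"]
            for mod in comp["modules"]
            if not set(mod["satisfies"]) <= known]

print(untraced_modules(architecture))  # -> [] (fully traced)
```

Because the representation is plain structured data, checks like this can run in a pipeline on every change, and analysts can extend them in the Python they already use daily.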


So Gene asked, how could I get help from you? Don't worry, I have no interest in, or ability to, sell you a product, and the vast majority of you will never be our clients. But in a sense, you're all my customers, and I hope you have some interest in helping drive improvement in this domain. So my request is this: if you share our passion for systems engineering and DevOps, we'd love to collaborate with you to learn and improve together. And if you're an engineering tool vendor, feel free to steal our ideas, make improvements, and run with them. We see value in establishing model-based systems engineering artifacts that we can effectively define, branch, modify, diff, and merge in a collaborative, unambiguous baseline. Help us overcome the legacy of PowerPoint engineering by inventing new declarative ways to define system architectures and engineering products suitable for automated quality analysis, artifact generation, and CI/CD pipelines. We're committed to transparency on this journey. It took over 14 months to get approval, but we have published our work as open source under an MIT license. Reach out via our GitHub repo and collaborate with us in our online discussions to help integrate the worlds of systems engineering and DevOps. Thank you.