Las Vegas 2018

How Common Processes and Communication Helped AGCO Grow

Growth by acquisition has long been a strategy for AGCO.


While successful for the business as a whole, that growth created challenges for its digital transformation. A dozen globally dispersed groups were used to doing things their own way and were resistant to change. They lacked a common language and process for doing software releases, starting at the build stage.


This talk illustrates how AGCO used communication and visibility to break down silos and share best practices, and how to get disparate groups to want to come on board with common processes.


Erik Maziarz is an IT System Administrator and Technical Lead at AGCO Corporation. He is responsible for managing the common software development environment, custom development efforts, and business process integration for a suite of applications supporting AGCO's global electronics engineering and software development teams. He started out as an enlisted member of the United States Air Force before transitioning to software development at UPS and eventually landing at AGCO.


Erik has been the project manager of a multi-phase project to introduce a common continuous integration and deployment environment to AGCO's global engineering sites.


Erik Maziarz

IT Systems Administrator II, AGCO

Transcript

00:00:05

My name is Erik Maziarz, and I'm coming from AGCO Corporation. It's a somewhat generic-sounding name, but it's an agriculture company that builds different kinds of farm machinery: tractors, combines, implements, dairy devices, feeding implements, and a lot of other things. A little bit about me: aside from some odd jobs here and there, I started off in the Air Force, where I worked in the packing and crating section shipping out supplies, and did a lot of work in the wood shop. From there I moved to UPS, where I worked as an application developer on the website and an email communication application, so a lot of straight-up development work. From there I went to AGCO, where it's been a mixture of development, project management, and overall system design.

00:01:18

It's not a mess, but it's a lot of different things. About a year and a half to two years ago, we started a project to bring a lot of our engineering teams into a kind of global system, because AGCO is a company that has so far grown by acquisition. They buy different companies, Massey Ferguson, Gleaner, GSI, Fendt, Valtra, all of them globally situated, and integrate them into their setup. One of the problems with that is these companies do not start from the ground up as part of AGCO. They were all independent companies, and being independent companies, they have their own ways of doing things, different cultures, different methods of basically doing stuff. As you would expect, it's not a startup where you've got five people who all share the same vision and plan on doing things the same way.

00:02:34

So since the first couple of acquisitions, around 1990 or 1991, they've been doing things their own way, and those processes have been established for a couple of decades now. Not breaking them apart, but trying to integrate them into a global environment where the same standards and processes are followed, that's what I've been working on. So maybe you know this, maybe not: tractors, combines, those kinds of vehicles are pretty complicated nowadays. They're not just vehicles that pull stuff around the field. They have GPS systems in them. They have autonomous systems that can help plant crops and seeds. We've developed swarm vehicles that you send out into the field and they basically plant and sow a field on their own.

00:03:40

You have air conditioning and heating in some of the cabs that's pretty advanced. You have automatic milking, automatic feeding systems, morale-enhancing systems for the farms. All of those require software, so part of what we've been doing is trying to standardize. There are a couple of pictures here of some of the advanced-looking things in our fleet, vehicles and other things that we work on and sell. So my group is the electronics functional group, and we're basically in charge of the overall global direction that our software and electronics engineering groups take. We manage the overall application lifecycle management, the source code traceability portion of things, the overall way the teams work from a software development standpoint.

00:05:01

Like I said, previously a lot of these teams did things independently. Code, requirements, testing, it was all standalone, site by site. They would communicate requirements and design specifications via email and Excel spreadsheets, and that worked when you could walk to the next building, or to a cubicle over, and talk to your manager about this stuff. But given the direction we've been going, trying to develop a platform strategy where the same software can run on the same machines, you have to have some sort of interconnectivity between our sites. You've got teams in the US, Europe, and South America; most of the engineering teams are in Europe or the US. But communicating between them about new software builds being ready to test or to implement, or about the requirements for what is going into a new product.

00:06:11

That's difficult. And it does seem like something a company that's been doing business this long should be doing by now. They do it sometimes well, sometimes not, and that is where my group is working. So a couple of years ago, like I said, a project came to us to start migrating our different sites into a global platform, a global environment where everyone did their development, their resource tracking, their whole software lifecycle in the same place. We needed that because, as far as software builds go, a lot of the time it was someone's laptop, or a desktop under a desk, that was running those builds. Once a week, once every two weeks, whenever a sprint cycle was done, someone would check out the code, run a build, and publish it, and that site would have access to that build. Then maybe they would remember to send it out to other groups, and if those groups had dependencies, hopefully they got what they needed to start integrating it and be able to build their own software against those builds. There are multiple computers that go onto each of these machines, each of these devices, and it's a highly dependent system: a lot of people depend on something up or down the chain being handed to them.
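The dependency chain he describes is essentially a graph problem: a component's build should only start once the builds it consumes are published. A minimal sketch of that ordering in Python (the component names here are invented for illustration, not AGCO's actual modules):

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical component graph: each key depends on the listed components.
DEPENDS_ON = {
    "terminal-ui": ["comms-stack"],
    "combine-app": ["terminal-ui", "engine-ctrl"],
    "engine-ctrl": [],
    "comms-stack": [],
}

# static_order() yields components with their dependencies first, so every
# build kicked off in this order finds its inputs already published.
build_order = list(TopologicalSorter(DEPENDS_ON).static_order())
print(build_order)
```

In the manual world he describes, this ordering lived in people's heads and email threads; a CI server makes it explicit and enforceable.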

00:08:03

One of the goals we had was to spread information that was known in one group but not in others, such as different testing methods. Embedded systems, which is what we use for vehicles, are a bit different from ordinary software. Like I said earlier, I came from UPS, where we did web applications: you would build it, deploy it, it would go to an environment, and that was basically that. It was pretty simple, a pretty standard process depending on your application. Embedded systems, from what I've been able to learn so far, are a lot different and a lot more complex than web applications. One script may not be able to do what you want it to do. We have terminals built for these machines now; the terminals are basically part of the armrest apparatus in these machines, and there's a lot of very detailed information in there.

00:09:27

Depending on the site, they were doing those terminal builds drastically differently. I'll get to it in a minute, but a couple of the first things we migrated were these terminal applications, and we were basically trying to cross-pollinate ideas about good migration processes: what worked and what didn't.

00:09:58

Okay, so as I mentioned earlier, we have multiple sites situated around the globe, most of them in Europe and a couple in the US. We've got some non-engineering sites in other parts of the world, but they don't really contribute as far as engineering goes. On the next slide here we have, it's not an everyday situation, but some of the processes we had would lead to this situation with builds: a team in <inaudible>, Finland would have to build and publish, that would go to France, and everyone would just wait for a build to be completed, sent, and distributed. It was a bit of a mess. It does seem like something that should have been taken care of before, or done better than it was.

00:11:07

But that's how it was. It was not clean, it was not efficient, and it relied a lot on manual intervention by teams. That's what we were trying to get rid of. Some sites had independent contracts with external teams and had their own setups that worked for them, but as far as the company went, there was no standard in place. Like I said earlier, part of my team's responsibility was to determine and establish a standard process that we would follow. Okay, so some of the concerns we had were that we were working with multiple groups, multiple cultures, multiple sites.

00:12:13

It wasn't just one site with one project; they have multiple projects, with anywhere between 10 and 50 engineers, and maybe five to 10 at the smaller sites. That's a lot of people working together in a way they're just not used to. If any of you are familiar with Little League baseball, it's like the all-star team at the end of the year: you take the best kids from all the teams, and all of a sudden they have to work together. You have conflicting personalities, conflicting ideas on how to work together, and they're all good independently, but getting them to work together is an effort, and you have to convince them of the best way to do things given their new situation.

00:13:04

And that's a lot of what we had to do here. Different sites, like I said, had their own processes for doing things. If a team had a build done via one gigantic script with a lot of hard-coded dependencies, part of my job during this project was to migrate them away from that and break it apart into modular pieces. We used ElectricFlow as our continuous integration environment, and we broke builds apart into modules, or procedures, which helped reduce the impact that changes would make. It also made it so that other sites could consume those pieces, consume the knowledge that team had gained about completing their builds, and incorporate it into their own.
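The monolith-to-procedures break-up he describes can be sketched in plain Python. ElectricFlow models these as named procedures with parameters; here each function stands in for one procedure, and every path, URL, and name is made up for illustration rather than taken from AGCO's setup:

```python
# Before: one giant build.bat with hard-coded checkout paths, toolchains,
# and output locations. After: small, parameterized steps that other sites
# can reuse or swap out individually.

def checkout(repo_url, revision="HEAD"):
    """Fetch sources; the repo URL is a parameter, not a hard-coded path."""
    return {"workspace": f"{repo_url}@{revision}"}

def compile_sources(workspace, toolchain="gcc-arm"):
    """Compile with a named toolchain so sites targeting different hardware
    can substitute their own without touching the rest of the chain."""
    return {"binary": f"{workspace['workspace']}::{toolchain}.bin"}

def package(binary, version):
    """Package the binary; the version arrives as a parameter from the CI trigger."""
    return {"artifact": f"terminal-{version}.tar.gz", "contents": binary["binary"]}

def run_build(repo_url, version, toolchain="gcc-arm"):
    """The top-level 'procedure' that chains the reusable steps."""
    ws = checkout(repo_url)
    binary = compile_sources(ws, toolchain)
    return package(binary, version)

artifact = run_build("https://svn.example.com/terminal/trunk", "1.4.2")
print(artifact["artifact"])
```

The point is the shape, not the stubs: a change to the packaging step no longer risks the checkout logic, and a second site can reuse `checkout` and `package` while supplying its own `compile_sources`.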

00:14:13

Because, believe it or not, I think I forgot to mention it earlier, some teams had their own home-brew setups. They would use Jenkins, or just manual execution of scripts; they would use SVN or Git, check out the code, and run scripts. Talking with one of the teams recently about a Linux-based build for a terminal: every two weeks it would take upwards of half a day just to get the environment set up for the build, check everything out, copy everything, a lot of manual steps. That's not something you want to have. You want to be able to click a button, or give a couple of predefined parameters, and tell it to go. This was a lot of human interaction, which, like I said, is something you want to avoid.

00:15:06

With some of the earlier sites we worked with, we were trying to decide which ones were different enough from the others that an initial foothold there would be visible to other people, so we could at least get them to buy into the idea that this global ecosystem was going to be useful. Some of the biggest issues we had were in getting them to buy in. They had existing setups that worked for them; it was how they had been doing it for years. How do you go to a team and tell them: the way you're doing things, I know it works for you, I know you have steps you can follow and you know how to do it, but we need to change that and have you do the same thing in a different way?

00:16:03

There won't really be any immediate results, but it's something you've just got to do. Unfortunately for my side, I can tell they see someone from corporate coming to tell them: this is the new standard you're going to follow; you might not see why it's beneficial right now, but that's what we're going to do. So until somewhat recently there was a lot of pushback and resistance to that. But the more momentum we've gained, getting additional teams and sites on board with more projects, the more receptive teams have been to getting into this ecosystem. I say ecosystem because we've been trying to get teams and sites on board with the whole picture: application lifecycle management, builds, continuous integration, source control management, a bunch of different things.

00:17:08

So it's a whole environment, really. I've probably mentioned it already, but we started off by going to a couple of pilot sites. It was a multi-phase project, and the first pilot sites were ones that were either complicated or diverse in terms of the environment, the type of build, the number of builds, the complexity. The first site we went to was for one of our flagship new developments, a new combine that is the first to be developed completely in-house, from the ground up, as far as AGCO goes. All the other software builds we had were for existing models, so this is something that actually isn't released yet.

00:18:09

So we've been catching it somewhat early in the process. They've spent the past few years defining requirements for it, but the build process and builds for it have really only been ongoing for the past year and a half at this point. It was a very complex thing that took a couple of weeks to get down fairly well. There were some unique challenges, given all the different types of applications used in the build. I think MATLAB caused an issue: invoking MATLAB would bring up a user interface, which we had some problems with. I think the ElectricFlow agent we were using had to run as a process instead of a service, just Windows issues with that.

00:19:12

Aside from that, it was just a complex thing to do: a lot of steps involved, a lot of different variations in the build. And I don't know if I go over it later, but one of the challenges was that the people involved were not super intimate with the existing build processes. They knew how to follow the steps, they had a checklist of things to do, but they could not explain it very well; they were not experts on it. So trying to migrate a build based on the knowledge of someone who is merely okay with it, and get it implemented completely, that was a challenge. For my side, working with these people to take their build from a somewhat documented process to something that was actually working took a lot of time, helping them understand what they were trying to do and getting it working in the system.

00:20:25

Part of what we were trying to do, as I mentioned earlier, was improve communication between the teams. They would send a build via email; sometimes, really with the external teams, they had a site set up where they would publish a build. So part of what we set up was a common file server for builds to be published to, so that teams with dependencies on those builds would be notified of new changes and could incorporate them for testing, or just for development purposes in general. And going back to testing: because these builds had previously been done manually, separate from the common environment, once we incorporated them we were able to have a common step invoked that would trigger a set of testing.
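The publish-and-notify step he describes can be sketched as a small Python model. The server path, component names, and team subscriptions below are hypothetical stand-ins, not AGCO's actual layout:

```python
# Hypothetical: dependent teams declare which components they consume, so a
# publish to the common file server can fan out notifications automatically
# instead of relying on someone remembering to send an email.

SUBSCRIPTIONS = {
    "terminal-app": ["combine-team", "tractor-team"],
    "engine-ctrl": ["combine-team"],
}

server_index = {}  # component -> list of published versions

def publish(server, component, version):
    """Record the artifact at its well-known path on the global file server."""
    server_index.setdefault(component, []).append(version)
    return f"{server}/{component}/{version}"

def notify_dependents(component, version):
    """Build one notification per subscribed team."""
    return [f"{team}: new {component} build {version} is available"
            for team in SUBSCRIPTIONS.get(component, [])]

path = publish("//global-builds", "terminal-app", "2.0.1")
messages = notify_dependents("terminal-app", "2.0.1")
```

The design choice worth noting is that the dependency list lives with the system, not in any one engineer's head, which is exactly what replaces the "maybe they would remember to send it out" step.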

00:21:33

Some of the testing for embedded systems is HIL, hardware-in-the-loop, where you need some powerful computing machines to run the tests. We were able to trigger these after our builds were done, and then link them with our application lifecycle management system, which is Polarion. So at the end of the day, a daily build would run, and that would trigger the test run in Polarion to be executed. The test system would connect to Polarion, pull down all of the test cases in a test run, execute those test cases based on a testing framework the team had set up, and import the test results. That might seem like a fairly trivial thing, a basic thing that should be done.
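That post-build hook, pull the test run's cases from the ALM system, execute them on the HIL rig, import the results, can be sketched like this. The ALM client and HIL runner here are dictionary stand-ins, not a real Polarion API, and all identifiers are invented:

```python
# Sketch of the nightly-build hook: fetch test cases, run them against the
# fresh build on the HIL hardware, then push the results back to the ALM run.

def fetch_test_cases(alm, run_id):
    """Pull the list of test cases attached to a test run."""
    return alm["runs"][run_id]

def execute_on_hil(test_cases, build):
    """A real HIL framework would flash the build and drive the hardware;
    here each case trivially 'passes' when a build artifact is present."""
    return {case: ("pass" if build else "fail") for case in test_cases}

def import_results(alm, run_id, results):
    """Attach the results to the run and report how many were imported."""
    alm["results"][run_id] = results
    return len(results)

alm = {"runs": {"nightly-42": ["tc-boot", "tc-gps", "tc-can-bus"]},
       "results": {}}

cases = fetch_test_cases(alm, "nightly-42")
results = execute_on_hil(cases, build="combine-nightly.bin")
imported = import_results(alm, "nightly-42", results)
```

The value is the wiring, not the stubs: once the build and the test run live in the same system, the trigger-fetch-execute-import loop needs no human in the middle.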

00:22:34

It simply wasn't, for us. We had one of our sites, ATE, for Valtra, that was ahead of the curve on this as far as our company goes. And because those steps were now visible in this global system, other teams and sites could see that this was something ATE was invoking with their builds. Hesston and Jackson, our sites in Kansas and Minnesota, and our site in the Netherlands started to incorporate it too, because they all use the same ALM system now and can use that same testing framework. They're able to have their testing done automatically off nightly builds, which is something that had not been done before; it was all manual.

00:23:29

I can't say exactly how much time that saved, but it happens automatically now, and from what I've heard it's been a boon to their development process. It's been helpful just by virtue of having what ATE was doing visible to everyone else in the company. And with everything pointing to the global file server, builds are immediately available and immediately visible to the other teams. That's just one more advantage of this kind of globally visible system. I've probably touched on this a little already, but we started with the pilot sites by implementing a couple of the complex or unusual builds. Like I said, we have some of these terminal builds done on Linux systems that only a couple of the people managing the applications are familiar with.

00:24:43

Some of the more complicated Windows-based builds, like that flagship combine build I mentioned, were also done at the first couple of sites. Then we moved on to additional sites. Nothing really extra there, just incorporating more of our engineering sites. From there, people have been able to, I won't say evangelize, but talk to others at those sites and get them to bring their builds into the system. It's been a slow process getting people to buy in, just because, like I mentioned earlier, they have something that works; why would you want to change? And it's taken a couple of years, really, because the people we had working on it were me and, well, me; that was about it.

00:25:45

We worked with Electric Cloud; we had some of their professional services people working with us to help get the teams migrated and do training. I would say that if you can devote people and resources to this kind of training, it helps get people to buy into the setup and transition from one system to the next. But that's where we're at now. We've had additional teams at these sites come into the system, and they've been approaching us, my team, to help get their builds implemented, which has been nice. On one of the earlier slides I had the number of builds per site: the number of builds we had planned at the first migration, and where we're at now.

00:26:41

Some of the sites still have a long way to go to be fully implemented, but it's getting better. We've done a couple of informal surveys on the time spent compiling or setting up builds. Also, I mentioned that builds used to be done on someone's local desktop or laptop; we now have dedicated infrastructure that's part of our actual network. It's more powerful and things run more quickly, so that's something people like. Basically, these are process improvements they've enjoyed once things get implemented. So where we're at now: still working on getting people off their existing setups, which takes time given the resources on it, and training.

00:27:45

Some of the complaints we had initially were that, the way things were done before, they could isolate things via Windows batch files or scripts located in source control or on servers. Getting those extracted into procedures was a bit of a process, so that's what we've been training teams on. We have a couple of build experts at each of these sites, and they've been helping the people working on different projects get their projects implemented. The biggest draw, though, is the automated testing that occurs at the end of the builds. It's not strictly necessary for the build to be done in this ecosystem to have the testing done, but they like the whole thing being one automated process.

00:28:42

So we just get them in for the testing, and that sets everything else up to get them wanting to start using it. The deployment pipeline is something else we wanted to start working on. Since we're not just a web application, the deployment process for embedded systems is a bit different. We've got some other projects in place for that, which may or may not use our continuous integration environment, but it's something I've been trying to figure out how to incorporate. It's an ongoing process; there's just no good way of doing it with the way our setup is. It's something I want to be able to use, but I haven't been able to justify or find a way to get it implemented yet. So the deployment pipeline may or may not be something we can implement in the future, but as far as testing and the actual build process go, that's been working really well for us. So that's basically it: that's how we were able to go from no shared environment to about a 25% integrated environment. It's not all the way there yet, but that's where we are. So that's it.