The Art of Network Engineering

Ep 134 - Delving into the Intricacies of Network Automation with Ansible Expert Tony Dubiel

A.J., Andy, Dan, Tim, and Lexie Episode 134

Send us a text

Ready to unravel the intricacies of network automation? We've got an insightful conversation lined up for you that promises to enlighten you on this complex topic. We kick off this episode by chatting about our new diverse content, our exciting podcast merchandise, and my upcoming autumn adventures in Vermont. We also welcome our special guest, Tony Dubiel, the Product Solutions Architect for Ansible Network Automation at Red Hat, who offers an insider's perspective on Ansible Network Automation.

Our discussion unfolds on a journey from route switching to data center and collaboration, emphasizing the worth of a networking background when stepping into collaboration. We delve into the vast array of automation tools, and debate whether network engineers should stick to one tool or embrace different tools for different scenarios. We also tackle the challenges of network automation, the role of abstraction and orchestration tools, and how ITSM and IPAM can make automation more accessible.

We then turn the spotlight on the Ansible Automation Platform, discussing its varied uses and how it's been revamped to be more cloud-native and scalable. We also touch upon the ongoing debate about using a tool like Terraform for immutable and disposable infrastructure, or different tools depending on the situation. Tony shares his thoughts on the evolution of network automation and the advantages of using validated content to lower the barrier to entry. So buckle up and join us on this enlightening journey into the world of network automation with Ansible.

Links to references made throughout the show:
Ansible4Networking Online Meetup Group (join for free)
https://www.meetup.com/ansible4networking/
Self service labs: https://www.redhat.com/en/interactive-labs/ansible
Network Use cases: https://www.ansible.com/use-cases/network-automation
Network Automation Ebook: https://www.redhat.com/en/engage/network-automation-guide-20221202
Network Blogs: https://www.ansible.com/blog/topic/network-automation
Getting Started with Network Automation:
https://docs.ansible.com/ansible/latest/network/getting_started/index.html
https://ansible.com/network-automation

More from Tony:
https://twitter.com/tdubiel

Find everything AONE right here: https://linktr.ee/artofneteng

Speaker 1:

This is the Art of Network Engineering podcast. In this podcast we'll explore tools, technologies and talented people. We aim to bring you information that will expand your skill sets and toolbox and share the stories of fellow network engineers. Welcome to the Art of Network Engineering. I am AJ Murray and I am joined this evening by Tim Bertino. He is at Tim Bertino on Twitter. Tim, I love the hat you got there. Look at that. You too, man, got to love the new swag. We recently got some new swag hats for both podcasts. We got some shirts, got some jackets and, of course, the hats. So nice to rock the logo. It's so cool. We built the podcast up. We started Cables to Clouds earlier this year. We see the logos all over the place. But when I picked up that embroidered gear from the shop, I was like this is too cool.

Speaker 2:

And they did a fantastic job. This is a place local to you there in Vermont, and it's very, very well done stuff.

Speaker 1:

Yeah, yeah. So what's new Tim? How are you doing?

Speaker 2:

So what's new? Actually a lot. With the podcast, I've been putting in a decent amount of time. We've been fortunate enough to have some more sponsorships lately and we're doing some different kind of content than what we're used to. We usually do either straight up podcast episodes or YouTube videos and we've been getting into the short form, been having to do a little bit of the TikTok and the like and it's been interesting. I've been doing some more behind the camera time and luckily we got to do a decent amount with a bulk of the team when we were out there in Knoxville recently. So it's been a lot of fun and exciting stuff that we can't wait to share with everybody coming up here pretty soon.

Speaker 1:

Yeah, yeah, Tim, you're a genius. Like the stuff that you have come up with and written, it's great stuff. I cannot wait to share it with you all. I just want to explode and tell everybody, like, what we're doing. You know what, we probably could, because by the time we edit this and get it out there, I'm sure there will be a lot of content out there.

Speaker 1:

But no, it was really fun to record that stuff in person. And then you know, I had no idea what you had done for the second video and you had uploaded all your clips and I started pulling them down.

Speaker 2:

I will say my favorite part is the outtakes. Yeah, I hold nothing back in the outtakes because I know, and actually I should say I hope and pray, that AJ doesn't release most of those publicly, just for the team.

Speaker 1:

Yeah.

Speaker 2:

So what's new with you, AJ? At the time of recording here, we're nearing the end of summer and getting into what I call the best season of the year, that is, fall slash autumn. So what's new out in Vermont?

Speaker 1:

Well, the leaves, they are a-changin'. We've gotten a ton of rain this year and all estimates say that because of the additional rain, we're going to see some really popping color this fall. You know, last year the leaves changed. They change every year regardless, but last year it was a really dry summer and all of the tones in the fall were rather muted, right? Like they still got the oranges and reds and stuff like that, but they didn't pop like they normally do.

Speaker 1:

So I'm really hoping for a really beautiful fall. I'm trying to time it, I'm trying to figure out when peak fall is going to hit, because I'm going to take a couple days off from work, charge up my camera batteries, charge up my drone batteries and I'm just going to set out across the state to see what I can get for my photography stuff. So it's always a fun time of year. And then you know, like you said, we've been working on the extra content. So I've been, you know, burning a little midnight oil, editing that stuff. So for our listeners who aren't aware yet, we are on TikTok and you can find us, just like you find us anywhere on our social media.

Speaker 1:

We are at Art of NetEng and you know we're putting out content. We found some nice little AI driven tools that help us take our long form episodes and dice them up into some nice short form content. So that helps as far as the content creation goes on those platforms. And you know it's actually taken off. We're starting to get some good followers there and I think we're starting to build the listenership. You've slowly seen the downloads start to tick up here, as we're reaching new audience members on these platforms, so it's pretty exciting stuff.

Speaker 2:

And I got to give you some props on that too, because you do all of the, and not just for the short form. I mean, for everything we do with the podcast and YouTube. You do a bulk of the editing, and with this short form content, I've really just been handing you rough video, and what you're turning it into is really cool. Man, we really appreciate what you're doing.

Speaker 1:

It's fun, it's nice to have a creative outlet that has nothing, absolutely nothing to do with IT, right?

Speaker 2:

And it's, I mean, at the end of the day, it's a skill set, right? That you can use in the future.

Speaker 1:

Yep, absolutely. Well, let's dive in, shall we? I'm very excited to welcome our guest to the show this evening. His name is Tony Dubiel and he is the Product Solutions Architect for Ansible Network Automation at Red Hat. Thank you so much for joining us, Tony, and welcome to the show.

Speaker 4:

Hey, thanks for having me, it's awesome.

Speaker 1:

What exactly is a product solutions architect for Ansible Network Automation?

Speaker 4:

Okay, so Ansible is a product and an open source initiative, right? So I work for Red Hat and we have account teams that cover what we call North America public sector, so that's all the federal and state and local government types of customers. So I'm an overlay that supports our customers for Ansible, but my focus is network automation. So anything to do with Ansible and network automation, I help them with. In terms of sometimes just demonstrations, proof of concepts, enablement, workshops, all the things to help them move forward and make sure that they can adopt the product and have success with whatever use cases they're trying to drive through the network space.

Speaker 1:

Okay, very cool. So when I hear a product solutions architect, that could be a couple of things. Right, you could be a person that has a marketing sales background, that's maybe learning how to do some of this network automation stuff to show off. You might have a networking background and you've got really good people skills and that's why you're doing the talks and stuff like that. So what is it? Can you give us a background on your career? How did you end up here as a product solutions architect?

Speaker 4:

So I go pretty far back as a traditional network engineer in terms of my career. So in the earliest days I was in the Air Force as a tech controller and I was working with old school Cisco routers like 2500s, 7500s, stuff like that. So I'm pretty far back, like dating myself. And when I got out, for a while I worked in the federal space with contractors, primarily gravitating around Cisco stuff, and over time I got the Route/Switch CCIE and that opened a lot of doors for me. And moving forward I started liking the presale side of things because it was a little bit less of cutovers and uncertainty with my work-life balance and all that kind of stuff. So I moved into a couple of partners, Cisco partners, and then eventually I was in Cisco.

Speaker 4:

So in terms of that type of role, supporting products, it was primarily around within an architecture you have certain products. So in Cisco I worked in the service provider space. I was in enterprise for a while, commercial, and some of their public sector, covering different products but doing a lot of the same things, where you're helping the customer adopt the solution. And in the presales capacity you're usually giving away some free consulting. There's opportunities for you to continue to have your hands-on skill set and maintain it, because you're helping them with proof of concepts and things in their own environment, like they're doing bake-offs and different trials against solutions. So not as hands-on as you would be post sales, but you still have the opportunities. So I went into collaboration. I was a Voice CCIE, so I did that for a while in Cisco, and then data center. Data center is where I was introduced into automation, but it actually wasn't... well, it was kind of because of the network, but it was mainly because of OpenStack, if you're familiar with that particular technology.

Speaker 4:

So that was kind of like a private cloud alternative to VMware and other virtualization types of solutions, and kind of before all the public cloud stuff really came to light and before a lot of adoption even of AWS. So those APIs, a lot of times they were perfect candidates to use Ansible. So I'd learned Ansible because of OpenStack. But Neutron was like the service in OpenStack for networking, so I was using that to configure... even, like, Cisco devices could provide gateway capabilities for OpenStack and things like that. But folks started really looking at traditional networking and automation. So that opened my eyes to, like, there's so many possibilities to avoid all these GUIs that we have to deal with. So death by a thousand clicks, where it takes forever to provision anything, or you want to make things repeatable, right, so you don't sit there and spend the whole weekend.

Speaker 4:

So I was like, there's a lot of opportunity to improve that experience. So in Cisco I was able, I was fortunate, to move into an automation role. So for a while I was in enterprise and I was focused on programmability and automation for our customers in Enterprise South. So that was like customers that were from the Carolinas down to Florida and then over to Texas, and it included everything like Walmart and Exxon and Home Depot, pretty much anything you think of, banks and things. It was just, depending on the size of the customer, they would fall into that bucket, and they had so many different, you know, possible use cases for automation, because a lot of them would have tons of different Cisco solutions and they were all different, especially earlier on, before all of the unification of APIs, where a lot of this stuff, even within a vendor, would be so, so different in how you would authenticate and use and consume their APIs. So it was really difficult for me, because just like understanding all the Cisco solutions and how to configure them manually, okay, then how do you translate that, you know, into interacting with the APIs or, like, you know, the command lines if we're talking about routers and switches? But being able to use multiple tools, because, you know, not everybody likes Ansible. Some people, you know, gravitate more towards Python types of tools, like now with, like, Nornir and NAPALM and Netmiko and those types of things. So I had to know a little bit about that.

Speaker 4:

For more of the cloud networking stuff, Terraform was really popular, and for folks that were dealing with a lot of REST APIs, they might be using something like Postman, and really, like, Postman is easy, but understanding the APIs, you know, those raw API calls, is kind of difficult.

Speaker 4:

So trying to do all that and sustain everything, it's really tough. So I think that's the challenge that a lot of people have as they started to, you know, figure out what tools they're going to commit to, and then really starting to double down on them and really build the skill sets. So I think now, like, it's taken a lot of years to take off, and DevNet, you know, from Cisco, really helped things along, and Hank Preston. I got, you know, opportunities to work with him and he's awesome, and he really helped build a lot of the whole notion of NetDevOps and getting, you know, networking people excited about automating. But it's really taken like five years for people to really start to... you know, now I think people are really starting to implement. It's becoming more mainstream, which, you know, there's pros and cons of that too, because now there's a lot of people that are looking for help.

Speaker 4:

Yeah, there's no shortage of people that need it, and a lot of people, you know, just need that help to get started. And once they get far enough along, then they're going to be okay. You know, there's just enablement and those types of things. So sorry, I kind of went off, and that's kind of, you know, that transition type of thing.

Speaker 1:

That was fantastic. So did I count three CCIEs? Route Switch, Voice, and DC? Geez.

Speaker 2:

And I do. Before we jump into automation and tools, I want to step back to the base network engineering days. So you went from route switch into DC and collaboration. I mean, really the only thing you didn't mess with, at least that you're calling out, is wireless. So was that on purpose, that you really specialized in those three tracks, or did it just kind of happen along the way?

Speaker 4:

Well, actually I did pursue the Security CCIE at one point. I was doing, like, vulnerability assessments and things like that, and it was different because it was, like, security devices and architecture versus, you know, incident response and assessments kind of thing. So I kind of didn't know which way to take that. But then that's when collaboration came along. And then I really liked collaboration because it was really cool when a lot of people were cutting over from their PBXs, you know, into CallManager, and that whole process was a lot of planning and, you know, a lot of work and enablement to help the people understand this whole new paradigm of voice over IP. And then the call centers were like their own beast, because they would have the IVR systems and that was like a programming type of thing too.

Speaker 4:

So I really enjoyed, like, call center and call center express, you know, helping create those scripts and things like that. So that was like another area. I guess I could probably point to that as being like the first automation type of thing before the OpenStack, but it's just, you know, a little bit different, because that was more like, you know, that Java scripting interface that you would use to create those scripts.

Speaker 2:

Well, to me, being a network engineer is like the perfect background for somebody getting into collaboration, because it's all, like you said, before it was the analog or the PBX systems, and then it was voice and collaboration over IP. So somebody that really understands the, for lack of a better term, meat and potatoes of networking is, to me, perfect for collaboration.

Speaker 4:

Yeah, and there's a lot of, like you know, ISDN and SIP trunks and things that fell more on the network side.

Speaker 4:

So it was like you really need network people to adopt that technology, where I think they would have a better chance to succeed than like a lot of the traditional PBX folks trying to adopt, you know, all this new technology around it.

Speaker 4:

So it was really like an inflection point for network engineers like to get in that area and I kind of feel that way around network automation too, where I think you know, having like the experience on the network side and then learning to adopt automation is a little bit easier for folks that may, you know, be more of a developer and then trying to understand all these different aspects of networking, because you're talking about so many different vendors, so many different types of devices and even like within a certain category of device, like routers and switches, as an example, it's so different in the data center versus the campus, versus the WAN and then versus like Colo facilities or like if you're trying to do networking between public clouds or hybrid cloud, you know on-prem, off-prem, it's like there's so much to know on the back end and the intent of what you're trying to build.

Speaker 4:

But, like, for networking folks, you know, sometimes it's kind of like, is the juice worth the squeeze? Because if you can use the portals and you're comfortable there, and the command lines and the GUI to some extent, do you always have to automate? So it really just depends on, like, how often you have to make changes and how many touch points you have, and you might make that choice on, do I really need to adopt the technology or not. But I'm all in, and it's taken about four or five years, and now I can automate just about anything. You know, anything that comes up, I can figure it out. It's just a matter of time to figure out how do you do it manually, you know, then how can I take that and transition it into an automation and make sure it manifests itself with the same outcomes that it did originally?

Speaker 2:

So all right, I've got a super loaded question. You mentioned earlier that a lot of folks I mean really myself included don't have a lot of experience around automation and they don't know where to start, and you also mentioned that there's a large number of different tools out there. Do you feel like network engineers need to settle on that tool of Python or Ansible or XYZ, or do you see there being value in having different tools in an environment because you may have different use cases?

Speaker 4:

I think different tools for different use cases, but for the individual, I think it's always a good idea to explore Python, even if you're not planning on becoming a full developer and building, like, applications with Python. I think it's a good idea to understand the basics, because just about every vendor has, like, a Python SDK for their APIs. So if you really wanted to, and if you learn how to build Python scripts up to the point where you can use functions and then call these classes that they create, they do all the heavy lifting on the back end with their SDKs, and then they take care of all that communication, all the REST APIs, and turn those into parameters, and then you're just using those parameters in your Python script. If you can get to that level, that's fine, and that may work for you. But then how do you work with, like, a team of other folks that may not have the time or be able to sustain a skill set enough to interact at that level with Python? So that's where I was. I was like, hey, I can do all these things, but then I would show it to a customer and they would be, you know, impressed by it, or maybe they wouldn't want to use it, and then maybe a couple of folks would be able to take it and run with it, like, in their environment. But then what happens when they leave? Were they able to enable the other folks around them? So that's kind of back to the abstraction thing too: can you hand it off to people in an easier method, so that they can use it without having to know all of the underpinnings of how it works?

Speaker 4:

So the reason why I still think it's valuable: if you can learn Python, you have the time, do that, because what you can do is... like, Ansible is a Python application. You know, it's built on Python, and when you run a module in Ansible, you're interacting with a playbook, and all you need to know is YAML, and all you're doing is defining parameters in a YAML playbook, right? So normally you don't hard code those. You'll variable-ize them, and it's just a module, and it'll have parameters, and certain parameters are required for, like, the configuration, some are optional, and if you don't know what they are, you can look at the documentation and it has all the parameters listed out. So most people can figure out how to do that in a basic playbook, but behind the scenes it's a Python script that's running for that module, and these days the way that it works is the vendor creates those, so they kind of do the heavy lifting.
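
To make that concrete, here is a minimal sketch of the kind of playbook Tony is describing: a YAML file that calls a vendor module and fills in its parameters from variables. The cisco.ios collection is used purely as an example; the group name, interface, and values are illustrative.

```yaml
---
# Minimal playbook sketch: one vendor module, parameters defined in YAML,
# values pulled from variables rather than hard coded.
- name: Set an interface description with a vendor collection module
  hosts: ios_routers          # illustrative inventory group
  gather_facts: false
  vars:
    intf_name: GigabitEthernet0/1
    intf_desc: Uplink to core
  tasks:
    - name: Configure the interface (the cisco.ios collection does the heavy lifting)
      cisco.ios.ios_interfaces:
        config:
          - name: "{{ intf_name }}"
            description: "{{ intf_desc }}"
            enabled: true
        state: merged
```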

Speaker 4:

But if you know Python, you can go and create your own Python scripts, and then you just go through and use certain libraries for Ansible so you can turn it into an Ansible module, and then you can abstract for other folks, where it was kind of hard to do that using just Python. Maybe you have to do, like, a Django front end or something like that, and it's a lot more coding on your part. But this way, Ansible already has it set up, so you can kind of hide your Python in the background and other people can use it and consume it. And it's like, either consume it through a playbook, or you can even dumb it down further with self-service using a survey in the Ansible controller, or using something like ServiceNow or another ITSM sitting in front of it, and then they just would have, like, drop-down forms and things like that to populate data, or something fancier like an Infoblox or NetBox or something that's an IPAM.

Speaker 4:

Then they're filling out all that information and it just gets pushed into Ansible. But the idea is...

Speaker 2:

Okay, so you're already kind of blowing my mind here, and before we get any deeper, I want to have a little bit of a vocabulary lesson. So when we talk about things like this, I've heard multiple ways to frame this up that I think are really different. So I want to take a minute and talk about some of the different things that you've said. Within automation, there are concepts of abstraction, orchestration and automation, and I'm told those are not synonymous things. Can you kind of define what those three concepts are?

Speaker 4:

Yeah, so automation... this is just my viewpoint of it, but for the most part, automation is just taking tasks, right? Because a human could complete a task in a certain order. So there's a process, so there's a step one, step two, step three, and then you either succeed or fail. So automation is simply just taking that and then automating it, and you can think of that as kind of like the scripting or, in this case, a playbook in Ansible, and that would be automation. Orchestration is where you may have tasks that have different purposes, that may need to occur at a different time, and maybe the output of a certain task is a requirement or dependency for the input of another task. So you might have these conditions in between.

Speaker 4:

So a lot of times orchestration tools will have, you know, different conditions, different, like, succeed, fail, or always states, like, what do you want to do if it passes or fails? It's more complex, so normally it's stitching together multiple scripts or, in this case, in Ansible, it'll be stitching together multiple playbooks or multiple roles, or kind of like a fancier playbook. And orchestration would give you integrations into things outside of automation, like role-based access, as an example, or other services for credentials and things like that, or integrations into other tools too, that typically you wouldn't have a playbook interacting with directly. It would be, you know, a consequence of having orchestration. So our automation controller is an orchestrator, so you could think of it as more, like, the platform is kind of where the orchestration capabilities are more advanced, you know, in that sense.
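
Controller workflows express this visually, but as a rough playbook-level illustration of the same idea, one task's output gating the next plus a success/failure branch, here is a minimal sketch. The cisco.ios collection is assumed, and the version threshold and config line are placeholders.

```yaml
---
# Sketch: register one task's output, use it as a condition for the next,
# and branch on success/failure with block/rescue.
- name: Chain tasks on each other's results
  hosts: ios_routers
  gather_facts: false
  tasks:
    - name: Gather the current OS version
      cisco.ios.ios_facts:
        gather_subset: min
      register: facts_result

    - name: Only continue if the version is below the target
      ansible.builtin.debug:
        msg: "Would hand off to the upgrade workflow here"
      when: facts_result.ansible_facts.ansible_net_version is version('17.3', '<')

    - name: Success/failure branch
      block:
        - name: Push a change
          cisco.ios.ios_config:
            lines:
              - snmp-server location lab
      rescue:
        - name: React to the failure instead of silently stopping
          ansible.builtin.debug:
            msg: "Change failed on {{ inventory_hostname }}, open a ticket"
```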

Speaker 2:

Okay, so you're stitching multiple tasks or multiple playbooks together, and that's orchestration: things that could be dependent upon each other. Okay.

Speaker 4:

But it could be multiple things outside of automation too, like your ITSM, or something like Cisco ISE for, like, policy. Yeah, so with orchestration, there's tons of things you can do. It's very powerful.

Speaker 1:

So, Tony, you said a few minutes ago that, you know, now there's a lot of people that are doing network automation, as opposed to, you know, four or five, six years ago, and I think there's a really big factor in why that is. You know, I remember five or six years ago when everyone was just like, you got to learn Python. You got to learn Python. Network automation is going to be this great thing. You got to learn Python.

Speaker 1:

So you go on to Pluralsight or CBT or wherever, and you learn Python, but you don't learn it in a network automation context. You're just learning Python, and it's like, great, I learned Python. Now what? But over the years, you know, Cisco DevNet, with developer advocates like, you know, in a past life, you and Hank Preston, and then lots of other content creators out there now, have created this network automation context for us to go and consume this material. And you know, I always refer to Ansible as the gateway drug for network automation. Because you're right, you don't need to know Python, you don't need to know any crazy scripting languages, you just need to understand YAML. And if you can understand YAML, you're golden. And you know the command line syntax for the platform that you're working with, be it Cisco, Juniper or Arista or whatever, right? Those are the only two things you really need to be successful with Ansible.

Speaker 1:

Now, all that being said, let's define what is Ansible. What are some of the parts to it? I know that there's an open source product in there. There's, like, a paid-for product. I've done a lot of work with the open source piece. I've put a few how-to blog articles together to get network engineers started on their network automation journey. So let's kind of define what is Ansible, what are some of the parts and pieces to it. You mentioned playbooks. What are some of the other things that we need to put Ansible together to get it working for us?

Speaker 4:

Okay. So, like, the Ansible Core Engine is the same, regardless if we're talking about open source or if we're talking about the product, or really any of the tooling that uses Ansible. Even if you're running Ansible in, like, GitLab or something on a runner, the Ansible Core Engine is the same thing, so that core software has the automation capabilities of Ansible. So when you execute automation, that's the Ansible Core Engine. Where it's a bit different is, like, when I mentioned the Ansible Automation Platform. There's that, right. The Ansible Core Engine, how we use it, has changed. It's evolved a little bit, because a lot of people are familiar with Tower as being the product from Ansible. But currently, in Ansible Automation Platform, what used to be Tower is now called the controller, and there are additional components as well that are a part of the platform that didn't exist in the days of Tower. So what we've done is re-architected it and decoupled some of the responsibilities that used to be in Tower, so that it would be more cloud native and be able to scale better in terms of high availability and performance. So that's changed as well. So the Ansible Core Engine today lives in a container, because we're moving more into this containerized world of everything.

Speaker 4:

So if you used the Ansible Core Engine in the past, you would have to install it somewhere. Normally, the best practice would be to install it in a Python virtual environment, so that way all the dependencies have a wrapper around it. So you're using the right version of Python and all the other modules that you need and whatever else Ansible requires. You can install it, but you have to keep installing it everywhere you're using it. So there's always that chance that if you're working with somebody else, there might be a small nuance, a different version of something that they're using, or a different collection, that type of thing, and it's hard to police that, because if you're troubleshooting something that doesn't work, it could be a bug, but it might just be the version, and it's hard for teams to work that way. With it being in a container, you can version control these execution environment containers, and they'll have the specific versions of the collections. The collections are how we package the modules today, and you can go to Galaxy, the open source community, and you can use the collections from various vendors there. But the product has certified collections. So certified collections are effectively the same thing, but the vendor goes through additional vetting, and it's supported through the subscription, through the product, so you can open up a ticket and then Red Hat will troubleshoot with the vendor for you, as opposed to the open source side of things. If you have a problem, you find the contributor, whatever vendor or individuals are responsible for that particular module in that collection, and you go to GitHub and you log an issue, and then they'll try to help you. But there's no agreement or service contract or anything like that.
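
As a rough sketch of what an execution environment definition can look like with ansible-builder (the base image, collection names, and dependencies below are placeholders), the point is that collections and Python/system dependencies get pinned in one version-controlled artifact instead of in everyone's local virtual environment.

```yaml
---
# execution-environment.yml (ansible-builder, version 3 schema) - illustrative values
version: 3
images:
  base_image:
    # placeholder registry/image; in practice this would be a supported EE base image
    name: registry.example.com/ansible/ee-minimal:latest
dependencies:
  galaxy:
    collections:
      - name: cisco.ios
      - name: arista.eos
  python:
    - netaddr                          # extra Python libraries some modules/filters need
  system:
    - openssh-clients [platform:rpm]   # bindep-style system package
```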

Speaker 4:

So it's that same paradigm, but in the Ansible product, the controller is, like, managing automation. Tower did the same thing. It's like, I have all these playbooks. Do I want to run them from my laptop, or from a server in the cloud or whatever, or do I want to run them from, in this case, the controller, which has additional capabilities to manage those jobs or those playbook runs? So who has access to them, with role-based access, and the jobs themselves, because I may want to tune them based off of how many devices I'm connecting to. So there's additional capabilities in that regard, like if I want to do job slicing, or I can do forks and serial and those types of things in my playbook to make them run better, like parallelism, like how many playbooks do you want to run or how many endpoints do you want to touch at the same time? So there's a lot of levers that I have for better performance of the automation.
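
For reference, the batching and parallelism knobs Tony lists show up like this in a playbook; the numbers and the banner task are purely illustrative, and forks and job slicing are set outside the playbook (ansible.cfg, the command line, or the job template in the controller).

```yaml
---
# serial controls batch size per play; forks (ansible.cfg [defaults] forks = 50,
# or ansible-playbook -f 50) controls how many hosts run in parallel within a batch.
# In the controller, forks and job slicing are exposed on the job template itself.
- name: Roll a standard banner out in batches
  hosts: wan_routers           # illustrative group
  gather_facts: false
  serial: 10                   # touch 10 devices per batch
  tasks:
    - name: Apply the login banner
      cisco.ios.ios_banner:
        banner: login
        text: "Authorized access only"
        state: present
```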

Speaker 4:

And then additional management, because there's additional logging and auditing and, like I mentioned, the role-based access and integrations into third-party tools that are common within IT, so things like the ServiceNow and Infoblox and that kind of stuff. So that's like managing your automation. And then, in addition, there are other things too now. So there's a private automation hub, so that way you can manage all of your collections and your execution environments in your own environment, and if you create custom content, you can store it there. So it's like a container registry, and it does all these things that allow you to work locally. And then we have the console for the customer, so they can go to that and they can pull down the certified content as a customer, or they can have it synced up with their private environment. Or, if they're air-gapped, there are ways of updating the private repository. So that's another capability.

Speaker 4:

And then the automation mesh is around how we're able to scale out, in this new architecture, the cluster of controllers, and we have this notion of these execution nodes. So with the execution nodes, instead of, like, installing all the capabilities of the controller on them...

Speaker 4:

It's a very light install. So you can have a smaller VM or, in some cases, a container that you're running this execution node on, and all it's doing is executing the playbook locally, so you can position them near where the target of the automation is. So maybe you have a lot of latency, like sites that are way out wherever and you're geographically separated so far. Like in the old days, you'd have to just work the best that you could with SSH, and there's some timers you can adjust for connections and things like that. But now you can position the execution node right there, so there's no latency. You're not bound by latency with launching that job on the execution node. You just have to have connectivity through this automation mesh, which is like a layer four TLS connection, just a layer four mesh network back to the cluster, and you'll have better performance that way.

Speaker 4:

Sometimes you might have, like, a boundary, like segmentation or something like that, so there's a hop node you can use in that case, where it just has outbound connectivity back to the cluster, and that way it can have an execution node sit behind it. And then if there's a firewall or some other security boundary, it will allow that outbound connection. It'll open up a receptor port and then it'll invoke the automation on the execution node that's sitting behind a segmented zone, on SD-WAN, or in a DMZ or something like that, and it can work that way too. So it opens up flexibility, because I may have a controller that sits on-prem that's managing things in cloud over a VPN or a Direct Connect or something, or vice versa. I can spin up a cluster of the Ansible controller and the other components in AWS or in Google Cloud or in Azure, and they could manage on-prem endpoints too. So it just opens up all these different possibilities. Or you can run a cluster on top of OpenShift too, which is our Kubernetes solution.

Speaker 2:

Yeah, so it's pretty much anywhere. That's a really interesting concept. I mean, the whole concept of edge computing and content delivery networks is really big right now for things like media and big data and I wouldn't have thought of that type of distributed use case for something like automation. But it makes a lot of sense where you can centrally manage all of your automation but have a way to sync that to edge nodes that are actually managing the gear, get those jobs to run as close to the endpoints as possible. It makes a lot of sense, just not something I would have thought of from the lens of network automation before.

Speaker 4:

And in our Edge, Red Hat's Edge solution, they've already created all of the wrapper of different Ansible validated content around managing our Edge, but it will also include other vendor stuff too, like if you're using Meraki wireless at edge sites and things like that. So it makes it really easy to spin those up, and then again, you don't have to have your controllers co-located with it.

Speaker 2:

I want to talk about that validated content and the collections from vendors. Now, is that something where vendors go through the process of validating their collections to be used in Ansible? Is that something that came as, like, a value add when Red Hat kind of took over operations or operationalized Ansible? Is that when that piece happened?

Speaker 4:

Yeah, it happened shortly after they decoupled the core engine from the modules.

Speaker 4:

So, that was kind of like the first step needed, because that's why it took so long for Ansible updates, right? Because to have, like, a major release they would have to have all the vendor updates, you know, and they would time that with updating Ansible. So it really made it hard to be agile. So when they split off, now it's decoupled, right? So the Ansible core engine has no dependency on any of these modules that are now packaged into collections. So when they started the collections, they started the certified collections. But it's taken time. Not all the vendors adopted it right away, but most of the best-of-breed ones out there that have already been working on the community side of things have gone through the process. So, as an example, PAN-OS became certified, I think, like two or three months ago, and that was like the last major one that I recall that a lot of folks were waiting on. Like, when are we going to have the certified collection? So now we have that. There's one other piece too. So there's validated content, and that actually has a linkage back to when you guys were asking earlier about abstraction. So that one I can kind of define, like, how we make things easier because of that. Another thing that's new is the event-driven automation, and we call it Event-Driven Ansible. So what that's doing is, it's a new, another component, right, and it can have a source plug-in for, like, a message broker. So, like, Kafka. There's a bunch of different ones. I'm not an expert in a lot of them, but on the networking side of things, I've been using Kafka already for that, and Kafka has integrations into things like Telegraf. So if you're familiar with streaming telemetry and being able to pull that in and convert it from, like, protobuf, where normally that would go into a database, like an InfluxDB database, but you can also convert it into JSON structured data and then have that managed in a topic in Kafka, and that can feed our EDA. So effectively, we can listen to events on devices in real time, as opposed to, like...

Speaker 4:

The way that Ansible normally works is, when you run a playbook, right, that's like a push model. You're invoking that playbook, it'll go and, like, you can gather facts or whatever the playbook's doing, it can learn state or glean operational information from that device. With streaming telemetry, you set that up with what's called an XPath, so it points to an object that's either a configuration object or it's an operational state object, so something like an interface. If it were up or down, you could periodically check that. Or, if it changes state, it'll send that out, that'll trigger the telemetry. So you can receive these things in real time, as opposed to, like, in a push model, you have to schedule them. So in the controller, that's another capability it has, a schedule. So I might go out periodically and do compliance checks and check certain operational state, to populate a dashboard or something like that.

Speaker 4:

Now with EDA, as soon as something happens in the network, that particular streaming telemetry can become an event for EDA, and that could create a trigger that launches a playbook in the controller, or sends a notification, or does something in a workflow to react to it.
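
A minimal ansible-rulebook sketch of that flow might look like the following: a Kafka topic carrying interface state as JSON (for example, telemetry converted by Telegraf) feeds a rule that launches a controller job template. The topic name, the field in the condition, and the job template name are assumptions for illustration.

```yaml
---
# Event-Driven Ansible rulebook sketch (ansible.eda collection assumed).
- name: React to interface state changes
  hosts: all
  sources:
    - ansible.eda.kafka:
        host: kafka.example.com
        port: 9092
        topic: network-telemetry       # illustrative topic name
  rules:
    - name: Interface went down
      # assumes the JSON payload lands under event.body with an oper_status field
      condition: event.body.oper_status == "down"
      action:
        run_job_template:
          name: "Collect interface triage data"   # illustrative job template
          organization: "Default"
```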

Speaker 4:

But I'm not suggesting... like, I tell people, even though you can automate remediation for a lot of these things, you probably don't want to, or not yet for a lot of things, because you just want to be sure, and you normally have, like, a downtime window and change management and all that kind of stuff anyway. But you can at least learn about it, and everybody's aware, and you can go out and then automate troubleshooting triage to collect more data. So then when you do open up a ticket, like you'd open up a ticket in ServiceNow, you can have all this data ready. So when somebody else is ready to make that decision, to approve it, you can have an approval step and then, bam, push the change out to the device. But it was all like a feedback loop, so very proactive in that regard. So that was the last of the component thing. So I'm sorry, I kind of went off track.

Speaker 1:

No, no, no.

Speaker 4:

From where you were going.

Speaker 1:

There's a whole lot more to Ansible now than the last time I looked at it. Holy smokes, yeah. It was just Tower.

Speaker 4:

That's pretty cool. Yeah, it's finally. It's grown up a little bit.

Speaker 1:

Yeah, a little bit.

Speaker 2:

So we are very biased here, right? Because this is the Art of Network Engineering and we are getting familiar in this chat with the different network automation components of Ansible. But what are some of the other use cases that people can use Ansible for, outside of just straight up network automation?

Speaker 4:

Yeah. So, like, I work with a team of folks that have different skill sets, so they focus on different domains in IT. So one of our, like, value props is, hey, you can automate pretty much anything in IT with Ansible. So it's really just a matter of, if someone hasn't already done it before, somebody going through the steps of creating a process and building automation around it. Where I think the bulk of the content, and where most of the history of Ansible automation is, even today, is really around servers, because Red Hat is RHEL, so, like, RHEL automation to do a lot of things like compliance checks and patching, and to integrate into other tools like Satellite that help in that regard as well. That's another Red Hat product, so they integrate really nicely to do, like, the, you know, patch updates and remediations and those types of checks together.

Speaker 4:

So a lot of our customers, you know, use Ansible for that particular use case. Another one that has a lot of traction too is just public cloud. So you know there's always that debate, like, well, hey, do I want to use something more like a Terraform for things that are immutable, and can I just use, like, a Terraform in cloud? So we work well with Terraform. So we have collections where, like, Terraform is northbound of Ansible and vice versa, like Ansible controller is kind of managing the workflow and Terraform is, you know, a component of that.

Speaker 4:

It just depends again, like you mentioned earlier, like, does it make sense to use different tools, or how you would use the tools in different scenarios. So in cloud, a lot of the things that are containerized, right, you're going to blow them away. You know, they're disposable, so those are completely immutable. So something like Terraform makes sense because it maintains a state, right? Things like networking in cloud, though: if you're going to have, like, you know, transit gateways between, you know, AWS VPCs, or if you have ExpressRoutes and Direct Connects, like with your on-prem, into the public cloud or into a colo facility, you don't blow them away. Those devices stay up and running and they need to be configuration managed, right? So that's where the tool for that would be Ansible. So they kind of go together hand in hand, because it's two different purposes. It's like the immutable stuff is really more around delivering application stacks temporarily and then having the benefit of being able to tear them down, so you're not paying for them when you're not using them. But the infrastructure has to be there, because it's a part of your infrastructure, because even though it's going across clouds, it's your infrastructure. If you have AWS and you network it to Azure, then you have connections into your environment, and maybe you have on and off ramps with SD-WAN into those things. They're going to be there all the time. That would be more configuration management use cases. There's a lot around that.
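
A rough sketch of that split, Terraform for the disposable application stack and Ansible for the long-lived network pieces, could look like this. The Terraform project path, inventory group, and NTP line are placeholders, and the Terraform module shown is the community one rather than any specific certified collection.

```yaml
---
# Play 1: drive a Terraform project for the disposable/immutable stack.
- name: Build the ephemeral app stack with Terraform
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Apply the Terraform project
      community.general.terraform:
        project_path: ./app-stack     # placeholder path to the .tf files
        state: present

# Play 2: configuration-manage the long-lived cloud edge routers with Ansible.
- name: Enforce standing configuration on persistent network devices
  hosts: cloud_edge_routers           # illustrative group
  gather_facts: false
  tasks:
    - name: Enforce the standard NTP server
      cisco.ios.ios_config:
        lines:
          - ntp server 192.0.2.10
```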

Speaker 4:

You mentioned Edge, so that's more emerging. We see a lot of traction there and security. So I'm trying to think, yeah, probably networking is probably like lower on the totem pole right now in terms of Ansible, just because some of those other use cases just people have been automating them longer so they just had more time for adoption. But now I think that I think it's really going to ramp up on the network side of things. So we're a little bit later to the party, but now that we're starting to understand it, it's like well, hey, I got 10,000 devices sitting here, I can catch up to you real quick and I've got like 100 different use cases, so it won't be long to eclipse that stuff.

Speaker 2:

I do see how things like servers and compute have been around a long time as far as automation, because to me you're probably spinning those kinds of services up and down much more often than you are switches and routers. When you do spin up, for instance, 10 servers to be part of this compute cluster, they are more often than not, other than host names and IPs, going to have a lot of the same configurations. And it's a little bit different when it comes to networking. But, like you said, I agree with you that it's starting to become more mainstream to want to be able to manage and automate networks. And I do think a lot of it is around what you mentioned a minute ago: with networking, where you're not necessarily spinning up and spinning down networking services often, the biggest value proposition there is to have standardized configuration management, and that's where things like Ansible, I think, are really able to deliver.

Speaker 4:

Yeah, and there are some things that kind of cross domains, like compliance. You know, that impacts everybody, and sometimes it's the same configuration, because perhaps you have the same NTP servers or DNS configurations, things that are a part of your compliance that span all those different devices. So it's nice having a platform that you can use to touch all those different types of devices, whether it's a server, cloud or networking or security, and you really don't have to worry about all the nuances. You can create a workflow that just checks NTP for you across all of them, and then it either passes or fails, and then you either react to it, or it's a notification, or you create a list, or you populate a report or whatever. Those are becoming more common too, where it's like a cross-domain thing. But in terms of networking, there are a lot of very unique nuances to networking that are different than servers and containers and things like that, not just that they're permanent and that you want, like, a common configuration. It's also difficult to manage the configurations, because really, in an ideal scenario, it would be, like, automate everything. So if you have, like, a network operations team and a design team, automate first. So all of our baseline configurations, we always have automation in mind, and the operations team is always going to make changes and do any testing, you know, based off of automation. Then it would be really easy to have a single source of truth and to ensure that all the configurations and state of the devices are in compliance at any given time. But in reality, you have folks automating, but then you also have people that are going in out of band to troubleshoot and making changes, and it's not always reconciled.
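
The NTP example translates into a simple read-only drift check: run the approved lines in check mode so nothing gets pushed, and flag any device whose running config differs. The group name and server addresses below are placeholders.

```yaml
---
- name: NTP compliance / drift check (read-only)
  hosts: all_network_devices          # illustrative group
  gather_facts: false
  tasks:
    - name: Compare running config against the approved NTP servers
      cisco.ios.ios_config:
        lines:
          - ntp server 192.0.2.10
          - ntp server 192.0.2.11
      check_mode: true                # report what would change, push nothing
      register: ntp_audit

    - name: Report devices that have drifted from the standard
      ansible.builtin.debug:
        msg: "{{ inventory_hostname }} does not match the NTP standard"
      when: ntp_audit.changed
```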

Speaker 4:

So the next iteration of something you know when you go through, like a scheduling of a you know compliance check or you're making your next configuration change, then you should go and check those devices first and do a drift check to make sure that what you think is a configuration is actually the configuration, because otherwise you may overwrite it. And it was something that somebody like spent hours on troubleshooting and figure something out but it just didn't make it back to the automation team for them to fix it on the front end, to go and revise, like the variable files that they're using for playbooks and things and to incorporate that into the automation. So a lot of times you'll kind of see that impact where you know it's just like hey, it used to work and now it doesn't, and then somehow the automation broke it. But in fact it was just like automation wasn't updated. So as folks become more familiar with like a true DevOps or net DevOps process, they start using repositories and they start incorporating you know, drift checks and you know just making it a lot easier if you do need to make a change.

Speaker 4:

You know, as a result of troubleshooting, or if you're just, you know, making new changes, that you can enforce those policies pretty easily. And that kind of gets back to not just running a bunch of playbooks but having, like, an automation platform and being able to manage that and to integrate it into, like, Git repositories and things like that, or other sources of truth for your configuration, like CMDB types of databases in your ITSMs and the like, so that way you can check against them too, before you make a change and post change. And, you know, a lot of state can change unintentionally. You could be pulling your hair out, and I think that's a big challenge: just adopting a process around that, being all in and not just dipping the toe.

Speaker 2:

I think all in is the right term, because you bring up a good point there, and it's more than just automation, because the network state, to the automation engine, is only as good as what it knows about, and if it doesn't have the updated state, the updated configuration, that's when it can cause a lot of problems. So something I want to discuss is, you know, we've been talking about a lot of in-depth use cases around network automation. So let's step back a little bit and think about this from the perspective of somebody who wants to get into Ansible. We'll make it simple. We'll say, you know, a medium-size enterprise customer wants to start getting into network automation, wants to see what Ansible is all about. Where would they start?

Speaker 4:

Like, normally we would want to kind of look at their current state and the things that they're doing, and, in terms of the goals, you want to make it very attainable for them, kind of like the low-hanging types of things, so that they can get a quick win as they're learning. So we would tend to stay more on the read-only side of things. You can pretty much go into any environment and you can help them. You know, if they need a compliance type of check, those are pretty easy to set up, so you can go and check, like, you know, versions of what's running on each model, you know, which ones need to be updated. Backups, right, those are safe to do. It's kind of cool with Ansible, you can back up to a Git repository, to a server locally. There are all these different scenarios of where you can back up, like, router configs or really any type of config. Whether it's ACI or a Palo Alto firewall, anything that has any type of config, you can easily back that up. So those are a good starting point too, because they may have something already, but normally whatever they're using is kind of bound to a domain. So if you're using, like, DNA Center from Cisco, as an example, it's more around the campus and Wi-Fi, you know, like the Catalyst products. So it does a great job of that, backing up the configs is already baked into it, and it has SWIM for, like, updates and things of that nature. But it's not going to work with some of the other solutions, even within Cisco. With Ansible, the backups effectively work the same across different types of devices, firewalls, routers, switches. So it's, you know, another consistent way. So that's a good starting point. So really, a lot of the network management types of things.
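
As one example of that read-only first win, a config backup playbook can be a handful of lines; the directory, filename pattern, and the idea of committing the result to Git afterward are illustrative.

```yaml
---
- name: Back up device configurations
  hosts: all_network_devices          # illustrative group
  gather_facts: false
  tasks:
    - name: Pull the running config and write it to a backup directory
      cisco.ios.ios_config:
        backup: true
        backup_options:
          dir_path: ./config-backups
          filename: "{{ inventory_hostname }}.cfg"
      # The ./config-backups directory can then be committed to a Git repository,
      # manually or with a follow-on task, so every change is versioned.
```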

Speaker 4:

But again, the issue is they're probably using a tool or multiple tools for that. So sometimes the starting point may have a little bit of overlap with some of the tools they're already using. But, you know, again, that's one approach. Another approach is if they have a lab environment, because nine times out of 10, if they have a lab environment, they probably haven't automated the lab itself. So you can go to town with a lab environment and you can show all the best practices, and because it's a lab environment, there's no issue with downtime and things like that. Or if you need to break things, it's a, you know, a great environment to learn. And then, by them automating their lab environment, usually it makes it a lot easier, because they can spin up different scenarios a lot more easily and make the lab more disposable in itself.

Speaker 4:

So I've seen a lot of people, you know, turn their lab environment into something really cool that they can use without having to build it manually to make changes for different scenarios. Or it's like, someone's using the lab environment, so we have to wait until they're done with it, you know, so it's going to add some delay into this particular project we're working on, or we don't have a lab environment. So, like, you could make it more multi-tenant in nature and things like that. So lab environments are great, and usually those are where we would introduce the Git repositories and things like CI/CD pipelines, so kind of introducing some of the other, you know, cooler DevOps types of tools in there too. Because it's like, hey, in application development we have the notion of development branches, staging branches and then production. So a development branch would kind of be more like you're messing around on your laptop, or maybe you're using a lot of simulations or emulations, like CML from the Cisco side of things, or, like, GNS3 or EVE-NG or whatever.

Speaker 4:

You know, it's not real gear, but that's okay, because you're really just trying to vet out your configurations or make sure that your automation works, you know, with configurations. But it doesn't really have, like, a great data plane. You know, it's definitely not something you can stress test, right? So your staging environment should be like a real lab, where you don't have all the devices, or maybe not even all the same models, but it's close enough, but it's some...

Speaker 4:

You know, actual, real gear. It has the actual controller or whatever the solution is, so you can closely represent what you're going to do. That way you can cover about 80% of the things that would surface there. So that way, when you go into production, you know, you never really find everything, but you should have less surface area, you know, for something to go wrong. So that whole blast radius, right? So when you go into production, maybe I don't need as big of a downtime window, right? I don't have to sit there all weekend, I only need a couple hours, and I know that it's only going to be, like, a few things we'll have to troubleshoot. We won't have that risk of, like, oh damn, we're going to have to roll back, you know, and then we're going to have to ask for more...

Speaker 4:

You know, kiss the ring of the change management people and ask for another downtime window. So yeah, you can cover your ass pretty well. I don't know.

Speaker 1:

Yeah, you're good. Thanks. So you mentioned earlier that things have moved toward more of a container focus, right? But can you still install the Ansible Core product on any old Linux box, or is that no longer an option?

Speaker 3:

Oh yeah.

Speaker 1:

Okay, cool. So, like I said, I've always referred to Ansible as the gateway drug to automation, right? You can do the container thing, or you can install it on a vanilla Linux box, and learning YAML is extremely, extremely easy. So get your hands dirty and start playing with Ansible. And you made some great points: a lot of people, when they start their network automation journey, think they have to jump in all the way and start automating changes. But if they've never done that before, it's like, well, what am I supposed to use this thing for? Well, validating whether your network configuration meets your company standards. Are you checking the versions on all your devices to make sure everything's up to date? And then, if you have to, you can automate the software updates themselves, which is pretty cool. I've seen a lot of people do that with various playbooks.
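As a small example of that read-only validation idea, here is a minimal sketch that gathers facts from IOS devices and flags anything not on an approved software version. The group name and version string are placeholders.

```yaml
---
# Minimal read-only sketch: flag devices not running the approved image.
- name: Check software versions
  hosts: ios_devices            # placeholder group name
  gather_facts: false
  connection: ansible.netcommon.network_cli

  vars:
    approved_version: "17.9.4a"  # placeholder version string

  tasks:
    - name: Collect basic device facts (read only)
      cisco.ios.ios_facts:
        gather_subset: min

    - name: Report any device that is off the approved version
      ansible.builtin.assert:
        that:
          - ansible_net_version == approved_version
        fail_msg: "{{ inventory_hostname }} runs {{ ansible_net_version }}, expected {{ approved_version }}"
```

The same pattern extends naturally to config compliance checks before anyone automates an actual change.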

Speaker 4:

Yeah, working on some of that right now.

Speaker 1:

Yeah. And then once you trust how the platform works, you can start gathering information, making smaller changes, and building up from there. And just from what you've discussed this past hour about everything the larger Ansible product does, it's crazy. I'm still thinking about that event-driven thing: an interface goes down and you can start pulling troubleshooting information, just running show commands and gathering that back in some automated fashion for somebody to triage later. I'm thinking about capturing logs as close to when the event happens as possible, so you have that information stored safely in a text file or attached to a ticket, or whatever the case is. Just some really cool stuff going on over there.

Speaker 4:

You can use syslogs for that too, instead of streaming telemetry.

Speaker 4:

Yeah, that could be another source for it. Or EEM can trigger on events, so you could use that in conjunction with it, but that was always a little harder because you have to do a bit of Tcl scripting and that kind of stuff. All of that works; there are tons of different ways. For me, though, streaming telemetry covers everything the syslogs do anyway.

Speaker 4:

The one challenge is XPath. What that is: instead of XML, which gives me a headache to read because you have the open and closing tags and you're trying to find the values in the middle while the whole thing zigzags everywhere, XPath is a very direct path to whatever that object is. They're kind of hard to remember, but there are tools to figure them out, and there are some really good GitHub repositories with a bunch of examples. Cisco also has the YANG Suite tool, a little Python application you can run, and you can search for the XPath for pretty much anything you'd normally get from a command.

Speaker 4:

You look up what the XPath would be for it, it shows you, and then you just drop that in. It could be the state of the interfaces, or I want to know when the configuration changes because someone made a change on the device and I want to be notified of it, and then maybe run a config drift check right after it happens to make sure nothing important changed. That's another use case.
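To give a feel for how Event-Driven Ansible wires a trigger like that to a response, here is a minimal rulebook sketch. It assumes some collector (a telemetry pipeline or syslog forwarder) posts events to a webhook; the payload field names and the playbook path are invented for illustration.

```yaml
---
# Hypothetical EDA rulebook: run a drift check when a device reports a
# configuration change. Payload fields and playbook path are placeholders.
- name: React to device config changes
  hosts: all
  sources:
    - ansible.eda.webhook:
        host: 0.0.0.0
        port: 5000

  rules:
    - name: Config changed, check for drift
      condition: event.payload.event_type == "config_change"
      action:
        run_playbook:
          name: playbooks/config_drift_check.yml
```

The same shape works for the interface-down example: a different condition, and a playbook that gathers show-command output and attaches it to a ticket.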

Speaker 1:

Yeah.

Speaker 4:

But I'm happy to demo EDA for you guys sometime, if you want to see it. There's a lot to it.

Speaker 1:

Yeah, I know I do. That sounds really cool. Well, time flies when you're having fun, Tony. So if people want to learn more, either about the open source project or the larger enterprise product, where can they go to learn more about Ansible?

Speaker 4:

Yeah, there are a lot of different sites. There's a whole category of self-paced labs; normally I'd just show people that link. Can I paste links in here, or...?

Speaker 1:

If you want, paste them in the chat, or send them to us via email, or you can even send them to me in a LinkedIn message, and I can add them to the show notes. For right now, just explain what the links are, and I'll have all of them available in the show notes.

Speaker 4:

Okay, cool. Yeah, because I have different ones for different purposes. The self-paced labs are really cool because you get a working AAP controller. You can do the scripted labs that are part of it, or, because you have admin access, you can create your own job templates and playbooks and run them there, and use it for testing and things like that. You can think of it like a DevNet sandbox: it's curated, but you can use it however you want. It does have a time limit, so once that time's up you'd have to schedule another one. It's a great way to learn.

Speaker 4:

Another tool I didn't talk about that's important is the command line. In the older days, like I mentioned, you'd install Ansible Core locally and typically use the ansible-playbook command, or ansible if you want to talk directly to a module, or ansible-vault if you want to encrypt your usernames and passwords, your credentials rather. All of that still exists, but there's a newer command line tool called Ansible Navigator, and Navigator uses execution environments too. So if you're working in what I'd call your test or dev environment, you can use Navigator to work out all your playbooks and run them locally, and when you're done, commit them into your repository and pull them into the controller. That way you don't have to toggle between the GUI and what you're doing. You work locally, and it's the exact same container you use locally as the one that runs as the execution environment for that project in the controller. So there's that.
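For a sense of how that looks locally, here is a minimal ansible-navigator settings sketch. The execution environment image name is a placeholder for whatever image the project and controller actually share, and the settings schema can vary between navigator versions.

```yaml
---
# Minimal ansible-navigator.yml sketch in the project root.
# The image reference below is a placeholder; check your navigator
# version's documented settings schema.
ansible-navigator:
  execution-environment:
    image: registry.example.com/network-ee:latest
    pull:
      policy: missing
  mode: stdout

# With this in place, a local run looks something like:
#   ansible-navigator run playbooks/site.yml -i inventories/dev
```

Because the run happens inside the same container image the controller uses, "works on my laptop" and "works in the controller" stop being different questions.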

Speaker 4:

Another thing: we kept talking about abstraction and the validated content. The validated content sort of intersects with the question of where you should start. I still recommend the same thing: do more of the read-only work and stay in the lab environment to build your skills and get the right chops before you go out and make changes in your actual production environment.

Speaker 4:

When you do go into production, though, in the old days it was a little more difficult, because you really needed to know how you were configuring those devices from the command line. When people started out with Ansible, and it really wasn't that long ago, say four years back, most of that automation used Jinja2 templates, which is a Python thing too. Those templates are very sensitive to the syntax of the command line: if you create a template and you're off on the spacing, especially for a router config, you're going to get an error because it won't be valid input to the device.
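For readers who have not seen that older approach, here is a tiny Jinja2 template sketch of the kind being described. The variable names are invented, and the leading space on each interface sub-command is exactly the sort of detail that breaks things if you get it wrong.

```jinja
{# Hypothetical fragment of an interface template rendered per device. #}
{% for intf in interfaces %}
interface {{ intf.name }}
 description {{ intf.description }}
 ip address {{ intf.ipv4 }} {{ intf.mask }}
 no shutdown
{% endfor %}
```

The rendered text is pushed to the device as raw CLI, so the template author has to know the CLI syntax cold.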

Speaker 4:

So you really had to know the command line; you couldn't get away from it in that regard. The next evolution was the resource modules. The way they work is they interact with the device, and there's a notion of state. Once Ansible makes the connection, through SSH or through a REST API, it looks at the configuration on the device and at your intended configuration. That intended configuration lives in your variable files. Variables can live in the inventory itself, but that's not best practice; that's only for really high-level stuff. Group vars are for how you organize devices: you put types of devices into groups and attach variables to those groups. Host vars are specific to an individual device's configuration, because devices normally have different IP addresses, hostnames, access-list entries and all that kind of stuff. So you'd have those.

Speaker 4:

The state tells the module what to do. If I use the state of merged in a resource module, it looks at the config that's on the device and at your config, wherever you set those variables (there are more places you could set them, but for simplicity, say it's in one of the locations I mentioned), and it compares the two. If it sees the config is already there and the state is merged, it doesn't change anything; it only merges in what's new, which is sort of like how a merge in a Git repo works. If it sees something that doesn't exist yet, it merges that in and leaves everything else alone. Then there are other states, like replaced.
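Here is a minimal sketch of the merged behaviour using a VLAN resource module. The group name and VLAN values are placeholders; the point is that intent lives in structured variables rather than CLI text.

```yaml
---
# Minimal sketch: merge intended VLANs onto IOS switches.
# Existing VLANs not listed here are left alone.
- name: Merge intended VLANs
  hosts: ios_switches           # placeholder group name
  gather_facts: false
  connection: ansible.netcommon.network_cli

  vars:
    vlans_intended:             # would normally live in group_vars/host_vars
      - vlan_id: 110
        name: USERS
      - vlan_id: 120
        name: VOICE

  tasks:
    - name: Apply the merged state
      cisco.ios.ios_vlans:
        config: "{{ vlans_intended }}"
        state: merged
```

In practice the vlans_intended list would sit in group_vars or host_vars, as described above, rather than inline in the play.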

Speaker 4:

Replaced does something similar: if it sees something in your config that isn't on the device, it adds it, but if it sees something on the device that isn't in your config, it removes it. That one gets used a lot for compliance checks when you want to do remediation. To take that a step further, you can run a resource module in check mode, where it just tells you what it would change or replace, or you can run it for real and it actually makes the change on the device. So usually it's a two-step process: you go out, learn, and see what you need to change, then you execute it later and actually make the change.
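That audit-then-remediate flow can be expressed directly in a task, as a sketch: the same resource module run first in check mode with diff output, then for real once the report looks right. This reuses the placeholder variables from the VLAN example above.

```yaml
    # Step 1: audit only. Reports what replaced would change, changes nothing.
    - name: Compliance check for VLANs
      cisco.ios.ios_vlans:
        config: "{{ vlans_intended }}"
        state: replaced
      check_mode: true
      diff: true

    # Step 2 (run later, once the diff is reviewed): remove check_mode
    # so the same task actually remediates the device.
```

Running the whole playbook with ansible-playbook --check --diff achieves the same first step without editing the tasks.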

Speaker 4:

And you can use the exact same playbooks; you just run them in check mode versus run mode. Overridden is similar, but it removes the entire config for that resource wholesale and then replaces it with whatever your config is. That doesn't work well for everything; you never want to use it with things like access lists and VLANs. There you'd always use replaced, so it's very precise about what it takes out. And then there's deleted.

Speaker 4:

You would always want to use the replaced, so it'll be very precise in what it's taking out. And then there's delete it. So delete. It's really cool because in the old days of Genja 2, what normally would happen is you'd have a Genja template and that would represent a configuration. Best practice would be to have a backup of that config and then if you made a mistake or something and you needed to fail back, then you would replace it with that entire configuration. But then all this reconvergence happens right, because the whole config needs to come back up again, so your routing protocols and your ARP tables and CAM tables and all that stuff. You have to wait for all that In a resource module.

Speaker 4:

The deleted state only deletes the config for that one resource module. A resource module is very scoped and specific to a protocol or a type of configuration on the router, switch or whatever the device is. It could be OSPF or BGP or SNMP; there are a bunch of different ones for different types of configuration on a device. So it only deletes that config, the one you sent. It's much less intrusive, it's quick, the blast radius is very small, and you can make that change very fast. That's what deleted does. So you have all these states, and the other part is that you're not even dealing with command lines, you're dealing with structured data. It's a YAML dictionary or a list; the documentation tells you whether the config is a dictionary or a list of dictionaries, which is just normal programming stuff, the structure of the data. When these resource modules look at what config is on the device, they show you the before, which is the current configuration. Then they show you the command lines, or whatever the native construct is: REST API calls, or on Juniper it's NETCONF, so you see the XML or whatever it is. And then they show you the after, either the config it applied because you ran it for real, or in check mode the after doesn't change, but you still see the config it would need to push. It's like a built-in diff, and that's very useful because you don't have to do any programming to make it work; it's baked into the resource modules.
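A quick way to see that structured data is the gathered state, which reads the device and hands back the config as YAML-shaped data instead of CLI text. A minimal read-only sketch, with a placeholder group name:

```yaml
---
# Minimal sketch: pull the current OSPFv2 config as structured data.
- name: Gather OSPF as structured data
  hosts: ios_routers            # placeholder group name
  gather_facts: false
  connection: ansible.netcommon.network_cli

  tasks:
    - name: Read the running OSPFv2 config (read only)
      cisco.ios.ios_ospfv2:
        state: gathered
      register: ospf_now

    - name: Show the dictionary the module returned
      ansible.builtin.debug:
        var: ospf_now.gathered
```

That gathered output is the same shape you would feed back in under config with merged, replaced or deleted.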

Speaker 4:

Resource modules are great, but to use them you still need to know how to build roles and purpose-built playbooks around them, so there's some effort in that. That's where the validated content comes in, at least in this first iteration from Red Hat. We have network.base, which covers the basic network use cases. There's a role that sits at a higher layer than the roles focused on specific resource modules, and you don't really have to write much into the playbook. You just list which resource modules you want to interact with, and it goes out to a device that hasn't been automated yet, ingests its configuration, and turns that configuration into all the YAML files representing the resource modules that apply to that device. Then if you make changes to those configurations, you can push them back out again, and if you use replaced as the state, it removes the things that shouldn't be there and replaces them with what the config should be. With that one you really don't have to know much at all. It's not quite low-code or no-code; it's more of an easy-code paradigm. Beyond the base we also have validated content for BGP, OSPF, firewalls and some other things, and that list will continue to build. In the meantime, the base one covers a lot, but you have to do a little more in the playbook.
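The sketch below tries to capture the shape of that workflow: point a higher-level role at a brownfield device, name the resources you care about, and let it persist the config as per-resource YAML. The role name, variable names and values here are best-effort placeholders written from memory and not checked against the current network.base documentation, so treat the whole block as hypothetical and consult the collection README for the real interface.

```yaml
---
# Hypothetical sketch only: names below are assumptions, verify against
# the network.base collection docs before using anything like this.
- name: Onboard a brownfield device with validated content
  hosts: ios_routers            # placeholder group name
  gather_facts: false
  connection: ansible.netcommon.network_cli

  tasks:
    - name: Persist existing config as per-resource YAML
      ansible.builtin.include_role:
        name: network.base.resource_manager   # assumed role name
      vars:
        operation: persist                    # assumed option name
        resources:
          - interfaces
          - vlans
```

Once the persisted YAML exists in the repository, later runs push changes back out through the same resource modules described above.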

Speaker 4:

They're getting to the point where it's basically a single task using a resource module that does all these different things for you. You may only have a little bit of code that isn't represented in a resource module, a corner case like a little one-off Jinja2 template, or maybe the config module with the lines parameter, just for those little bits and pieces. But the idea is that you're up and running, and it's very, very simple to get started using the validated content.

Speaker 4:

The hard part is for the people who have been doing it the older way for a long time. In some ways they now have to refactor some of their playbooks to take advantage of the newer paradigm, because you want everything to fall into that framework. There's still some work involved, but a lot of it is done for you, so you can get a long way with a very low barrier to entry using the validated content. And vendors are getting involved too; this is multi-vendor, what I'm talking about. Vendors are providing their own validated content for their use cases, for things like Arista's CVP or Cisco's ACI. Those kinds of things will become validated content at some point, and customers will just go grab it.

Speaker 4:

I need the Cisco validated content, boom, just a little bit of work and I'm up and running. In the old days it was, oh, I've got to write playbooks for two months to get everything the way I need it.

Speaker 1:

That's awesome, that's great. All right, and again, you'll get those links over to me and we'll put them in the show notes for people to enjoy.

Speaker 4:

Yeah, I'll give you the links for the validated content and EDA. I also have a network automation meetup that I run at least once a month. It's a meetup where we do hands-on labs, and it's all network automation; it's always a network use case. So I'll give you guys that link too.

Speaker 1:

And then I found an Ansible-specific YouTube channel.

Speaker 4:

Well, yeah, there are actually a couple of different ones. There's the product one and the community one.

Speaker 1:

Cool, we'll include those in the show notes as well. Tony, how about yourself? Is there a way people can follow you? Are you on Twitter, LinkedIn, anywhere we can share with our listeners?

Speaker 4:

On Twitter it's @tdubiel, T-D-U-B-I-E-L, though I'm not as active on there as I used to be. On LinkedIn, what the heck is my handle... it's codingdubiel.

Speaker 4:

C-O-D-I-N-G, D-U-B-I-E-L, so codingdubiel after the front part of the LinkedIn URL. And I do have a meetup site, I'll give you the link for that too, that has the events listed. It's free; people can just join it, they'll get a notification when there's an event, and they'll have the link for the conference call for it.

Speaker 1:

Tony, this has been great. I've learned a ton about what Ansible has been up to over the last three years since I last really dug into it. Tim, do you have any last-minute questions for Tony?

Speaker 2:

No questions, no. I just think it was really interesting to hear about that evolution, because, AJ, you hit the nail on the head: you've gone through blog posts in the past using core Ansible and Ansible Tower to do some of the network automation, and to see what Red Hat has taken and done with Ansible on top of the core that's been there forever, it was really cool to step through time and see how Ansible has evolved. So, Tony, thank you for joining us and teaching us; we really appreciate it.

Speaker 4:

Yeah, thanks for having me and for keeping me on track. There's so much to talk about, it's kind of hard; I feel like I'm throwing everything at you all at once.

Speaker 1:

I didn't realize there was so much to it. I definitely think there are a couple of things we could follow up on with some YouTube videos. I'm personally really interested in the event-driven Ansible stuff, so that might be a good follow-up to do. But anyway, thank you so much for joining us, and we'll see you next time on another episode of the Art of Network Engineering podcast.

Speaker 3:

Hey, dear friends, we hope you enjoyed listening to that episode just as much as we did recording it. If you want to hear more, make sure you subscribe to the show in your favorite podcatcher. You can also give that little bell rascal a ring-a-ding so you know when we release new episodes. If you're social like we are, you can follow us on Twitter and Instagram; we are @artofneteng, that's Art of N-E-T-E-N-G. You can also find us on that weaving web that is the Internet at artofnetworkengineering.com. There you'll find our show notes and some blog articles from the hosts, guests and other friends who just like getting their thoughts down on that virtual paper. Until next time, friends, thanks for listening.
