The Art of Network Engineering

Ep 127 - Observability with Phil Gervasi and Kentik!

A.J., Andy, Dan, Tim, and Lexie Episode 127


This episode is sponsored by Kentik!

Ever felt baffled by your network's performance? It's time to join us on a journey into the world of network engineering with our esteemed guest, Phil Gervasi. We'll explore the intricate world of network visibility and discuss the importance of diagnosing network issues.

The episode takes an exciting turn as we delve into the intriguing concept of network observability. Extending beyond mere data gathering, observability is the key to answering any question about your network. Phil Gervasi will guide us through the maze of network observability, its historical roots, the challenges of modern networks, and Kentik's unique approach to network observability. Discover how it empowers customers to pinpoint potential causes of network changes, breathing fresh life into network operations.

Are you ready to supercharge your network operations? We'll show you how with synthetic testing, a form of active testing that simulates a wide range of conditions. We'll discuss how Kentik uses automated insights and troubleshooting tools to lower customers' Mean Time To Repair (MTTR). And that's not all; we'll also explore DDoS mitigation and network security solutions, demonstrating how Kentik's platform can discern different types of attacks and help manage false positives and threat feeds. By the end of our journey, you'll have a broad understanding of the impact of cloud technology, containerization, and AI on enterprise workloads. Don't miss out on this networking masterclass!

Follow Phil for his awesome Networking memes and more great content!
Twitter: https://twitter.com/network_phil

More from Kentik
Get Started! - https://www.kentik.com/get-started/
Website: https://www.kentik.com/

Find everything AONE right here: https://linktr.ee/artofneteng

Speaker 1:

This is the Art of Network Engineering podcast.

Speaker 2:

In this podcast, we'll explore tools, technologies, and the information that will expand your skill sets and toolbox, and share the stories of fellow network engineers. Hey, what you looking at? That seems interesting. I'm not really sure, but it looks angry. Somebody said it's called the network. Oh, okay. I hear it gets blamed for stuff a lot. Do you think that's why it's mad? I guess it could be. The issue is that I can't see what's wrong. I have no visibility into how it feels or what its hopes and dreams are. It just keeps growling and throwing ones and zeros at me. Hmm, there's got to be a way we can see what the problem is, fix it, and get out in front of similar ones in the future. Any idea how we do that? Not a clue, but I'm going to try to find out on this episode of the Art of Network Engineering.

Speaker 3:

Welcome to the Art of Network Engineering. My name is Andy Lapteff. You can find all my fun stuff at Permit IP Andy. And there you heard Mr. Tim Bertino dropping visibility bombs. What's up, Tim?

Speaker 2:

Not much, Andy. It's good to see you. I actually haven't recorded one of these with you in, I'm not sure how long.

Speaker 3:

Yeah, it's been way too long. It's been summertime, a lot of vacation, you know. We just got back from Turks and Caicos. It was glorious. Spending time down the shore, you know, something fun. And now you're back here with me. Hey, but this is awesome.

Speaker 2:

You did tease it, there it is. At the time of this recording, it is summertime, and we actually have a live event coming up, so I'll get to see you in person here pretty soon, Andy.

Speaker 3:

I'm super excited, Tim. Can you believe we've known each other all these years and this is only the second time we're getting together? Yeah, K Tech Connect, right. If you Google that you can get the site, and it's our second meetup. We did Asheville last year, and the fine folks at the Knoxville Technology Council are inviting us down this year. It's going to be a three-day event. You know, food, fun, beers, a live... well, not live, you know, a recorded podcast in this really cool TV studio, and we're gonna have a happy hour on the roof afterwards. It's going to be awesome, yeah.

Speaker 2:

I looked up that place. It looks pretty wild.

Speaker 3:

Yeah, yeah.

Speaker 2:

So I'm going to put you on the spot real quick. What was the highlight of Turks and Caicos?

Speaker 3:

You know, we haven't been on an adults-only vacation since we started a family 10 years ago. We bring the kids everywhere all the time, and it's wonderful, right? We're super involved parents and we want the kids everywhere. But people have been telling us for years, man, you two just got to get away, you got to reconnect, you got to, you know, make sure you two stay connected. So we finally did it, and you know, we missed the kids most of the time. I mean, it's ridiculous.

Speaker 2:

Right, you finally get away and it's like, I wish the kids were here. But did you find yourself just talking about the kids the entire time?

Speaker 3:

Yeah, like the whole time. You know... yeah, it's a thing, right? You become a parent, they become your life, and then it's like, who are you without them? But no, to directly answer your question, I think the highlight was I took sailing lessons, which is pretty awesome. I've never sailed before, and by about the second day I took it out on my own. I was completely vertical, like 30 feet in the air, and almost flipped it. I was like, all right, I'm good, I'm out, and went back to the beach and read my book. And I also read fiction for the first time since getting into tech 14 years ago. Every spare moment I'm reading, you know, a cert guide or watching a video or trying to learn something, and it was just nice to unplug from all that and, you know, enjoy something cool.

Speaker 2:

What'd you pick up?

Speaker 3:

Stephen King. I was a big Stephen King head as a kid. So, yeah, I picked up this 900-page novel about a time traveler who goes back and tries to stop the assassination of JFK. So that's what I was up to on the beach. But no, I mean, it was good. Thanks for asking. Has your summer been good? I saw you were golfing, which makes me mad every time I see it, because I'm so bad at golf. When I see people golf I have a visceral reaction. Like, if I was out there I'd be in the woods.

Speaker 2:

Andy, it is a terrible game. Terrible.

Speaker 3:

You do it a lot for a guy who hates it. You must enjoy it.

Speaker 2:

I enjoy what I'm drinking when I do it, and being outside is good, but it's one of those things where I'm absolutely terrible at the game, but it's a blast. It's just enough fun to keep you going back and doing it, at least for me. But again, probably that liquid courage. But anyway, let's get into the meat and potatoes of the show. We actually have a sponsored episode for you all today, sponsored by Kentik, the network observability platform: everything you've ever wanted to know about the network, from data center to container to cloud. And with us we have someone who really doesn't need an introduction. I really mean that. We have Network Phil, Phil Gervasi. How are you, Phil? Thanks for joining us.

Speaker 1:

I am doing great, tim, and you flatter me so much that I need no introduction. But thank you, I'm doing great. I'm so excited that Phil's here?

Speaker 3:

Yeah, we were talking before the recording. I remember listening to Phil on Network Collective, and it was my favorite podcast in networking, and I've wanted to meet him for years. And here we are, like, I'm fanboying on Phil.

Speaker 1:

That's really cool, andy, thank you. I mean, that was a lot of fun doing network collective some years ago. You know, the idea back then was to was to do a podcast. That just was like engineer to engineer, like heart to heart. You know, no, no sales, no, no marketing. Just let's talk about packets, the. I guess you could say like those corridor conversations that you have like when you go to a big networking event, right, so you listen to the session and then everybody's kind of like nerding out in the hallway. That's what we wanted, and then you know, just recorded. So that was a lot of fun. I was part of that for about a year, year and a half, and then moved on. But yeah, yeah, I'm really flattered that you remembered that as well, andy, thanks Now were you still a net edge at the time?

Speaker 3:

Because, yeah, you know, selfishly, as a guy who went from net eng to a vendor, I've been following you, you know, through your journey. So you were a net eng and doing Network Collective. What got you into it? When did you jump to the vendor side? How did that happen?

Speaker 1:

So my experience is almost entirely with VARs, whether it's like a regional VAR or a national VAR, you know, like a household-name kind of a VAR. And then I got into being like a field engineer, a traditional implementation engineer, which I loved, by the way. I mean, I love it but hate it. It's one of those things where, like, I love the fulfillment of fixing and building, but I don't miss, like, a cutover at 2 a.m. on a Saturday. So there is a love-hate there.

Speaker 1:

But then what happened was I had an opportunity to get into pre-sales, and I was a solutions architect for a large VAR, and I absolutely loved it. That was phenomenal, because I had to stay very, very close to the technology, very close to still being a regular engineer, quote-unquote regular, whatever. But then also, you know, I got to meet with customers, do presentations. We did podcasts back then as well. See, the thing is that prior to being in tech, I was actually a high school teacher for five years. Now, that was 20 years ago, but that's what I did. So I have a master's degree in education, and I was an English teacher.

Speaker 2:

Oh, I didn't know that.

Speaker 1:

Yeah, yeah. And so, you know, I changed careers for a variety of reasons, mostly money. But I changed careers, got into tech, went through that whole process, and when I got into pre-sales I got to, like, marry those two worlds: being a nerd, you know, I'm totally a Trekkie and I love fixing and building things, even if it's just, like, working on my lawnmower at home, right, but then also just interacting with people, whether they be customers or, like, a large audience, whatever it happened to be, and just talking tech and, you know, getting to the heart of the matter. So that was a really cool few years for me. And a really good friend of mine...

Speaker 1:

You might also know Brandon Carroll. He was a good friend of mine from, you know, going to Networking Field Days and Tech Field Days for a long time. I hooked up with him about an opportunity over at Riverbed as a technical evangelist, which I wasn't quite sure what that meant, but we hashed it out. So I was with Riverbed for about a year, and now I'm doing the same as the director of technical evangelism with Kentik. So the opportunity came when I was chatting with Brandon about it, and then ultimately deciding to do this as kind of the next few years of my career, at least a few years. It's like this awesome combination of being an engineer but being able to talk to people. And here's the cool part: I don't have, like, a revenue quota attached like I did in pre-sales, so I get the best of all these worlds.

Speaker 1:

I have a lab at home. I work with our SEs at Kentik every day. In fact, I think I annoy some of them with how many questions I ask. I'm always trying to learn, but then I get to turn around and create videos, do podcasts, and talk to interesting people. Recently I did a podcast on submarine cables and then did a huge research paper on it, and it was so interesting. So I get to do all that stuff now. That was the transition from being a network engineer into what I do now as a technical evangelist. And, by the way, here's what that means to me, so your audience knows; it's a little different than technical marketing. What I do is I go forth and I tell the good news of what Kentik does, right? That's what an evangelist is. And so the idea is engineer to engineer, heart to heart. I go out and I tell people about what problems we solve from a technical perspective. So what's the... is it Missouri? Is that the Show-Me State? Or is that Montana? I don't know. I think it's Missouri.

Speaker 1:

But it's the idea of, like, I want to go out there and kind of prove the technology, I want to show you. So I remember being on the receiving end of the conference room table and hearing from vendors and hearing from sales folks, usually, right, and they're saying, hey, our solution does XYZ. And I'm sitting there as an engineer, just having finished studying for the latest and greatest certification, and I'm like, yeah, yeah, prove it. I kind of smell marketing. I smell that you're blowing smoke in my face. Prove it to me.

Speaker 1:

So my goal in life right now is to be intimately familiar with what we're doing at Kentik, of course, but then going forth and proving it, showing, like, hey, this is what we actually solve and how, and then stopping there. I don't go beyond that into, this is what we can hypothetically do, you know, if all the stars were aligned, and things like that. So that was my whole foray from being a traditional engineer into pre-sales and then onto the vendor side, and, you know, getting specifically into visibility and now what we do with observability. It's not necessarily something I planned, but I mean, Kentik is such a network engineering heavy company that so many of the folks that I work with there are network engineers or used to be network engineers. So that's kind of the core of who we are there.

Speaker 2:

So, phil, let's, let's talk about some problems that that enterprise engineers may have. So a big fear that I always have is that you get that call of hey, we've got all these problems with slowness, latency, applications aren't working, and you have to just ask tons of questions about what's happening and you just don't have any. Any data even point you in a in the right direction. So I kind of see that as having a lack of visibility or observability. So how would you in Kentic describe what network observability is and means to you?

Speaker 3:

Yeah, and is it different than visibility?

Speaker 1:

Yeah, it is. In fact, I'm glad you said that, Andy, because I was going to start with that. It's not just that it's different, like some replacement. It's not a new technology, necessarily. It's an evolution; it's building upon what I'm going to call traditional visibility. I've even used the term legacy visibility, and listen, if the whole marketing thing around network observability bothers some folks, you can call it advanced analytics or next-gen visibility. I don't care what you call it, but it's really this idea... Careful, you're going to invent a Gartner term here very quick.

Speaker 1:

Oh yeah, and then can I get paid on that? That would be pretty cool.

Speaker 1:

But, ultimately, we're building upon a foundation of traditional visibility, and we'll talk about that, to go beyond it. And so it's very data-driven, and there's a data analysis approach to finding meaning in the data. So I like to say that the difference between visibility and observability is like the difference between seeing and understanding. It's the difference between seeing more data on really cool graphs, and they might be awesome graphs, useful and needful, and understanding how all that stuff works together. So it's going beyond just gathering more data and seeing it. Like I said, data is important; it's the foundation of observability. But network observability in particular is about that data analysis that we do to get the insight into the data that we collect.

Speaker 1:

So Avi Freedman is our CEO and our founder. He likes to say that network observability is the ability to answer any question about your network. So what does that mean? Basically, you've got all this data now. So what does it mean? What can I learn from it? You should be able to find out the answers to complex questions by looking at that data, and legacy visibility sort of can't. So you get another tool. Maybe you have many tools, so you have all these point solutions and they're disparate, and maybe if you had a team of PhDs from MIT they could sit there and, like, figure it all out for you. Observability is going to do that programmatically. So, yeah, it's much more service and application oriented, rather than just the network.

Speaker 1:

So the term observability is probably very, very familiar to the APM folks, the application performance monitoring folks; they've used it for years, so it's not like a new thing. But it actually finds its roots, even before that, in control theory. If you're not familiar with that, control theory is a field of control engineering and applied math that deals with dynamic systems. It doesn't have to be technology in the sense of IT, what we do. It could be, like, a manufacturing facility, and what you're doing is you're gathering telemetry without affecting the system in order to determine the state of that system. So if you can imagine some kind of a big production facility, there's going to be all sorts of devices and all sorts of different types of data, and you're going to collect that without affecting anything, right? Because therein lies the paradox: if you're observing it, are you actually then changing its state? But as much as you can, you're going to observe that data and then be able to determine what the health of that system is.

Speaker 1:

Our system, Andy and Tim, is the internet and the network, so it's a very complex system. And in 2023, one of the reasons that Kentik is doing what it's doing is because of how applications are delivered today. So the network, as you all know, is not like my computer, an IDF switch, an MDF switch, a router, and a firewall all in a line in my hallway in a three-tier design. That still exists out there, but now we're talking about containerized services, ephemeral interfaces and CNIs and clouds, public cloud, you know, CASBs and SD-WAN and overlays. We rely completely on the public internet to deliver both mundane and mission-critical applications. I mean, I'm talking to you guys now, right, and I'm looking at my computer, and, I don't know, probably like 80 or 90% of the applications that I use are SaaS apps. They're not on my computer locally. Well, a few are, that's true; I use, like, Premiere and Camtasia and stuff, GarageBand. But by and large, they're SaaS applications.

Speaker 1:

So let's start thinking about, all right, that's the system, right? We're concerned about application delivery. That's what we want to figure out: what's its health and, of course, how can we improve it and troubleshoot it when there's a problem? How do I do that? Like, I don't own half of that. I don't own the path between my New York City branch and Office 365. I don't own what's going on in between Azure and AWS. So there's a lot of gaps there, not to mention the fact that we are relying on all sorts of different network-adjacent services and components in the path as well. So that's where we are in 2023, and that's where network observability comes in. It's the idea of, how do we now go beyond collecting just NetFlow and SNMP? Right, those are really important. By the way, we really rely on NetFlow and SNMP.

Speaker 3:

Don't go bustin' on NetFlow now.

Speaker 1:

I'm not. I think NetFlow is very useful.

Speaker 3:

It's all I had, man. It's all I had.

Speaker 1:

It's all you had. But you know what I think? Here's my theory... not theory, here's my thought on NetFlow. I think that it's been underutilized by a lot of engineers over the years. It's an extensible resource, so you can add fields to it, so now you can start to use it to collect much more than just, like, whatever, your source and destination. Right? And if you think about it, today there's a tremendous amount of data that we can't look at anyway, whether it's encrypted data or because it's PII. We can't look at payloads, necessarily. Sometimes we need to, and so we'll have packet captures to do an audit or some kind of a security forensics analysis, whatever. That's still useful. But by and large, NetFlow is going to give you a tremendous amount of information. I get it that it is technically more coarse than a PCAP, but you can sample down to literally create one flow record per packet if you want to. Now, that's kind of crazy, but you could do that.
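
To make that concrete, here's a minimal sketch, in Python, of what an extended flow record can carry: the classic 5-tuple plus added fields. The field names are illustrative, not Kentik's actual schema or specific IPFIX information elements.

```python
from dataclasses import dataclass

@dataclass
class FlowRecord:
    # The classic flow 5-tuple plus counters every export carries
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: int                      # e.g. 6 = TCP, 17 = UDP
    bytes: int
    packets: int
    # "Extended" fields a template-based export (NetFlow v9 / IPFIX)
    # can add beyond the basics
    src_asn: int | None = None
    dst_asn: int | None = None
    application_tag: str | None = None
    user_id: str | None = None

record = FlowRecord("10.0.0.5", "52.96.0.1", 51514, 443, 6,
                    bytes=48_213, packets=61,
                    application_tag="office365")
print(record)
```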

Speaker 1:

So it is, I think, underutilized, underappreciated, and it is a big part of what we use for telemetry at Kentik. And one of the reasons that we use it, among other things, is because it's everywhere. We want to collect data. We don't want to wait 20 years for something to replace NetFlow and SNMP. Like, that's great that that's happening, and yes, we also ingest streaming telemetry and things like that, but we want what we can get today, because we want to know what the state of the network is today. So we collect information like NetFlow, traditional NetFlow, sFlow, any kind of flow data, right, J-Flow, also IPFIX. We're collecting SNMP information. We're collecting streaming telemetry. We're collecting information from, like, your IPAM, your CRM. We're going to also ingest global routing tables, your own routing tables. We want to find out what you're using for user IDs, application tags, security tags. And that's where the data analysis starts to come in.
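
A rough sketch of that enrichment step: joining each raw flow record against outside context, such as routing data and tags, before analysis. The lookup tables and values here are invented for illustration, not Kentik's pipeline.

```python
# Made-up lookup tables standing in for ingested BGP data and CRM tags
ip_to_asn = {"52.96.0.1": 8075}
asn_names = {8075: "MICROSOFT-CORP"}
app_tags  = {443: "https"}

def enrich(flow: dict) -> dict:
    """Return the flow record with ASN and application context added."""
    asn = ip_to_asn.get(flow["dst_ip"])
    return {
        **flow,
        "dst_asn": asn,
        "dst_asn_name": asn_names.get(asn, "unknown"),
        "app": app_tags.get(flow["dst_port"], "other"),
    }

raw = {"src_ip": "10.0.0.5", "dst_ip": "52.96.0.1",
       "dst_port": 443, "bytes": 48213}
print(enrich(raw))
```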

Speaker 1:

So we saw this kind of activity happen with NetFlow. Well, so what? Or we see all these CRC errors on an interface using SNMP. Well, so what? How does that all relate to everything? And ultimately, remember, network observability is concerned with the service that we're delivering, which usually is an application, right? So it has its roots in that control theory. And what we're doing, then, is providing, just like control theory and the broader scope of observability, deeper insight, more than just graphs: feedback, feedback control, incident remediation, being able to be more efficient with your resources, cost control, and ultimately sending a human being out to fix a problem. One of our data scientists at Kentik is a really cool guy, and he's got his PhD in something, computer science, I guess, I don't know. But anyway, he explained to me that, coming to the networking space, to him it's just math; it's math and data. Before this, he was working on a project where they were collecting information from aircraft systems, and all of this telemetry was coming in, they were doing some sort of cool data analysis, all with the goal of sending out a tech to fix a problem. And that's what we're doing at Kentik: ultimately, we are trying to augment an engineer. We're trying to solve a network operations problem.

Speaker 1:

So, Tim, you mentioned, like, oh, we've got this problem with an application being slow, with a latency problem. Well, first of all, I don't think you're ever going to get a ticket from an end user saying, I think I have a latency problem. It's probably going to be, like, my application sucks, logging in is taking forever, Salesforce is not even logging in in the first place, whatever it happens to be. And then, lo and behold, it is a latency problem. But then, when we start to talk about latency, the key here is not that you discovered it was a latency problem. In 2023, it's, is the latency between AWS and Azure? Is the latency because I have some CASB firewall that I don't even manage, it's a light-touch management through a third party, and they're doing DPI when they shouldn't be? Where is the latency coming from? Where am I dropping packets? Or is there a problem with the page rendering slowly because one particular file is taking a long time to resolve in DNS?

Speaker 1:

There are so many components; that's the issue that we're trying to solve at Kentik, this operational component. And we use the term network observability because we're very network-centric. But if you think about it, we're crossing all the streams here. We're talking about application stuff, we're talking about container stuff, we're talking about cloud stuff. We wanna look at code, we wanna look at all those things, so we can understand application delivery, again, from a network-centric perspective, because that's who we are and that is the delivery mechanism, that's the substrate that's used. And if you think about it, therefore, a tremendous amount of application performance information is embedded right in the network itself.

Speaker 1:

So what better place, if you're in the network space, to figure out why an application that's delivered over the network isn't working, than by looking at these metrics that we can gather?

Speaker 1:

Sometimes it's easy, because you just configure your router or your switch or whatever to send flow to our portal, and sometimes it's a little bit harder, because you're trying to figure out what's going on with your intermediate providers, one or two hops up from your last mile, and you're trying to look at that. And that's one of the things that we do.

Speaker 1:

We'll augment our customers' data with traceroutes, using traceroute and Paris traceroute, if you're familiar with that, to look at what the path looks like between your branch office in Chicago, or whatever, and your data center, or to a SaaS application where you don't really have any view on the other side. That's a really long-winded answer to your question, Tim, but I'm really trying to lay the foundation for what we're doing. It's not a new thing, it's not magic. It's math, it's Python and Jupyter notebooks and databases. There's nothing crazy going on. It's just applying that workflow and that data analysis mindset to a tremendous variety and volume of data. And that, by the way, lends itself to an entire host of problems that I'm sure we'll get into, because this always ends up going toward machine learning at some point, and we get the collective eye roll. But it is an interesting topic.

Speaker 3:

I've got a question. When I was managing data center WAN, we'd get a ticket, and customers are pissed, and they can't get to an application. Great, what data do I have? You said telemetry, you said a lot of things, and I just remember there was all this information to look at, but you weren't sure where to look or what to look for. And it's the art of network engineering, so it seemed like troubleshooting was an art.

Speaker 3:

Some people would start here, some would start somewhere else. I'd usually look at logs first, and then, okay, you mentioned NetFlow; you know, like, hey, we can't get to an application, all right, let me see if it's getting through my transit network. But I always wished I had a tool that would tell me at least where to look. Like, hey, we've been paying attention, and you might want to start over here.

Speaker 3:

So I guess, I don't know if this is what I'm hoping you're saying, but it sounds like, if I had Kentik's solution, I guess it's software I'm installing, or maybe I'm consuming it as SaaS, then I direct all my telemetry, all my NetFlow, all my logs, everything that I can collect, to you, your software, and your magical math, and all that stuff consumes it all and generates meaning, which I guess is the observability, like you said, that's the understanding, right? And then it tells me, when I get paged at two in the morning and people are pissed, what is Kentik giving me? So I'm hoping, you know, like, it sounds to me like you're the missing piece that I never had when I was managing prod. You're telling me either what's broken or where it might be broken, or, hey, look over here. Because we had a big global network with a bunch of data centers and cloud connectivity and containerization, like, it was so complex. It's like, where the hell do I start, right? So is that what you guys are doing?

Speaker 1:

That's the road that we're on, and I say it that way because our customers are all global. We have very large e-commerce companies and enterprises and service providers, you know, some household names around the world. But ultimately we have this two-pronged approach, where we give you the ability, the unbound ability, to explore all this data, because there is a human element, and it's difficult to embed that human element in a Python script or in an algorithm. Let me give you an example, Andy. So you're managing your data centers, right, you mentioned. And this is a stupid example, but let's say you've got a pair of 100-gig links, right, an active/standby pair, which I would never configure, because you wanna utilize both of them, but whatever.

Speaker 1:

So you've got an active and a standby link, and your one link is operating normally or whatever, and the other link is just trickling along at one meg a day, because it is a standby. No problem. And then it goes from one meg to two megs. That's a 100% increase, right? So, statistically significant. Does it affect end users at all? Application traffic? No, not at all. It's statistically significant, maybe it's interesting, but do you send out the fire brigade to figure it out? So there's a human judgment involved there. We're like, well, I don't know. But let's say that starts to trend, and it goes to three megs and then four megs, and it does that for a week. Technically it's still affecting nothing, because it's just a dribble on the interface, but there is a trend, and so maybe you get an alert.

Speaker 1:

So embedding that subjective component is difficult. So we give you this unbound ability for you as an engineer to explore. We wanna augment you, augment you as an engineer, to explore the NetFlow and compare NetFlow with the results of a synthetic test, with the SNMP information, and you can literally look at it all together and see, ah, look at this in a time series, this is related. And you can do that and get as deep as you want. But we're also going to provide programmatic, automated insights to say, hey, we're seeing a trend here, and this is likely the cause, over here in this side of your network. Now, that's a matter of probability, it's a matter of confidence, and it's something that we're always working on; our data scientists are always adjusting and looking at how we can make it more accurate. We're looking for feedback from our customers as well on how we can incorporate some of that subjective component, that human element.
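
A toy version of the standby-link example, to show what "statistically significant but maybe not actionable" looks like in code. The numbers and the alert threshold are arbitrary; real trend detection would be far more careful.

```python
from statistics import linear_regression   # Python 3.10+

# A week of daily traffic samples on the standby link, in Mbps:
# a 1 -> 2 Mbps jump is a 100% increase, but the sustained upward
# trend is what's actually interesting.
daily_mbps = [1.0, 1.1, 2.0, 2.1, 3.0, 3.2, 4.1]
days = list(range(len(daily_mbps)))

slope, intercept = linear_regression(days, daily_mbps)
if slope > 0.25:        # arbitrary "sustained growth" cut-off
    print(f"insight: standby link trending up ~{slope:.2f} Mbps/day")
```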

Speaker 1:

So let me give you an example that I like to give in all my presentations. I think it's funny, but it's nicer with the picture; I'll just describe it. Imagine a line graph where you're comparing ice cream sales over the course of a calendar year to shark attacks over the course of a calendar year. You're going to see a spike where both of them kind of peak in the middle of the summer, and so a computer might say, oh no, these are correlated and there's a problem there. But I think, Tim and Andy, you guys know... are they correlated?

Speaker 2:

I'm pausing... that's a pretty tough call. I've seen Jaws, man. I don't know.

Speaker 1:

No, but you're right, they are correlated. They are correlated; it's just not a causal relationship. So we now have a computer that's finding correlations all over the place, sometimes spurious and sometimes not causal. So adding that human component to a machine is difficult, and that's the process that we're going through. A lot of vendors are doing that, it's not just us. And so, ultimately, what does that do?
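
Here's the ice cream and sharks example worked in a few lines of Python. The monthly numbers are invented, but they make the point: the correlation coefficient comes out high even though neither series causes the other; summer is the hidden common driver.

```python
from statistics import correlation   # Python 3.10+

# Twelve months of made-up data, both peaking mid-summer
ice_cream_sales = [20, 22, 30, 45, 60, 85, 95, 90, 70, 50, 30, 22]
shark_attacks   = [ 1,  1,  2,  3,  5,  8, 10,  9,  6,  3,  1,  1]

r = correlation(ice_cream_sales, shark_attacks)
print(f"Pearson r = {r:.2f}")   # strongly correlated, yet not causal
```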

Speaker 1:

So you have that unbound ability to explore and go down that troubleshooting path that you know in your heart, because it is an art, right, and you can take it wherever you wanna go. But you also get these automated insights that say, we're noticing a trend in your ingress, or rather egress, cost in AWS; your costs are trending in this direction unexpectedly, and there is an 80% probability that it's related to this change where you swung peers from this data center to this data center. So that's a good place to start, and then you end up finding out why things changed. And we're not doing auto-remediation; that's not something that we're pushing right now. But that's what we're doing: trying to solve that network operations problem, where we give you the ability to troubleshoot the way that you know how, but also provide you those automated insights to expedite the remediation process and get you to the answer faster and sooner.

Speaker 3:

It's so valuable. That's what I was just thinking: you're reducing MTTR, right? Yeah, that's exactly right, absolutely. Which is huge. That's what it's all about. All right, Phil...

Speaker 2:

I can almost hear the listeners thinking, okay, this sounds great, but how do I do it? What do customers have to do to get the Kentik solution up and running?

Speaker 1:

So we are a SaaS company, but we run our own private cloud, and one of the reasons is we have performance requirements for what we're doing on the data analysis side, and also security reasons, as far as our customers are concerned and how we handle that data. So we have made specific choices about what kind of databases we use and how we ingest our customers' information. But how can folks start? Let me answer that question. First, you can actually sign up for a trial where you just use your personal Gmail account or whatever, and you can start sending flows into your own private portal. Not just flows; I say flows, but you can send information to your own private portal, stand up synthetic test agents if you like, spin them up on your own devices at home, and send that information into the portal, and then just start playing around with it. It's really cool. I do that: I obviously have a work account, but I have my own private portal with a personal account, so I can literally just break everything and experiment. So, yeah, folks can do that. Maybe we can link to it. And that's something that we just do for the community. One cool thing about Kentik is that we are very open. There's no, like, secret sauce and proprietary this and that; we're pretty open with what we do. We contribute to the community. We have a KB that basically shows, like, everything that we do. So, ultimately, the way we do it is, we're gonna ingest information from your organization however we can get our hands on it. Usually it's just a TLS connection over the internet where we're ingesting; you can use a proxy, whatever you need to do. And we're gonna ingest that into our platform. That platform has changed names over the years; I don't know what we call it today. For a time it was the Kentik observability cloud; now I think it's the Kentik observability platform. But whatever it is, we're taking it in, and we're going to start doing cool stuff with it immediately.

Speaker 1:

Now, one thing is that we're going to enrich that data, or color that data, everybody uses a different term in data science, but we're basically adding more information that's not necessarily from you, to give more context to your telemetry. So we're gonna add things like the global routing table, threat feeds. You know, if you have a CRM, we can ingest information about customer IDs, user IDs, application tags, all that stuff as well. Information about costs: how much things cost with AWS, transit providers, peering relationships, how much those things cost. We're actually gonna be able to treat them all as objects in a database that we can relate to each other. Now, we don't use a graph database. We found that a columnar database is better for us, because it's faster to query, and we're able to isolate individual customer information, from a high level down to the very bottom. So we're able to do that, as far as regulatory concerns go.

Speaker 1:

And once that information is ingested into the system, we're gonna start doing stuff right away, things like clustering, classifying, labeling data. Because, remember, a lot of information coming in from the network is unlabeled, and if your audience is familiar with machine learning, you know, you start talking about labeled and unlabeled data and training the machine, supervised and unsupervised learning, semi-supervised learning. But I want to make it very clear that we don't believe, and when I say we, I mean Kentik, we don't believe that machine learning is, like, a magic bullet or a silver bullet. It is just another tool in our toolbox. We have found that just having all the information in a database that we can query very fast, that in and of itself is a big deal. Sometimes, just doing the machine learning pre-processing stuff, what you would call statistical analysis, we can stop there and still have a tremendous benefit. Now, what I mean by that is, that information comes into our system, right, we ingest it in there, and now we have all sorts of very different formats of data.

Speaker 1:

So, if you can imagine, you have information that tells you how much of the traffic on your network is HTTPS, and it's a percentage: 67%, out of a hundred. Okay, that's a number. And over here you have millions of packets per second. That's a completely different number, a completely different scale, and a completely different range. So here's a scale of zero to a hundred, and here's a scale of zero to, I guess, infinity, right? And the range is different as well. So how do you start comparing those in, literally, a math formula that's embedded in Python? Well, you've got to do stuff with it. That's when we start to do things like standardization, normalization, and scaling. We plug those into formulas so that they get transformed. This is called transformation in ML pre-processing, where you convert those values into values all in the same range, usually zero to one, sometimes negative one to one, and you retain their proportionality. That way, the millions of packets per second and the 67% retain their proportionality to each other, but you can plug them in and do something with them.
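
What Phil is describing here is classic min-max scaling. A minimal sketch with invented sample values, just to show the transformation; in practice you'd likely reach for a library such as scikit-learn rather than rolling your own.

```python
def min_max_scale(values: list[float]) -> list[float]:
    """Rescale values into [0, 1] while preserving their proportions."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

https_pct   = [61.0, 64.5, 67.0, 72.0]        # scale: 0 to 100
packets_sec = [1.2e6, 1.9e6, 2.4e6, 3.1e6]    # scale: millions

# Both series now live in [0, 1] and can feed the same formula or model
print(min_max_scale(https_pct))
print(min_max_scale(packets_sec))
```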

Speaker 1:

Now, we have to do all that when we ingest this information, and then we can start to do things like find patterns in the traffic, for example. An example of that would be, you know, seeing that there's a botnet brewing or a potential DDoS attack brewing, right? And we see, all right, this is reminiscent of what we're seeing over here, and so we can throw up an insight and an alert that there's a potential for a DDoS attack in this ASN or in this region, this geo, whatever it is. And, you know, here's a suggestion for a mitigation, right? So that's an example of just looking for patterns in the traffic.

Speaker 1:

We can look at seasonality and say, well, over time, we're seeing that this kind of behavior happens in this kind of a cyclical pattern; that's what seasonality is. We can look at trends: we see that we have this, like I mentioned, that one meg to two megs to three megs, that's a trend. We can also dynamically create baselines and benchmarks. So now we have a rolling standard deviation to say, this is what a normal login time is for Microsoft 365 from your New York City office, and once you go, say, two standard deviations above that, it will throw an alert and say, you have something to look at, and here's where the latency is occurring: it's happening with your upstream provider, which happens to be Verizon. Here's their phone number. Well, we don't give you their phone number; you can figure that part out. But that's the idea: we're able to look at the data and put some of the pieces together for you and say, this is probably what you should be looking at. Take a look.
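
A bare-bones version of that dynamic baseline idea: keep a rolling window of recent samples and alert when a new one lands more than two standard deviations above the rolling mean. The window size and the two-sigma rule are illustrative choices, not Kentik's internals.

```python
from collections import deque
from statistics import mean, stdev

window = deque(maxlen=20)      # the most recent samples define "normal"

def check(sample_ms: float) -> None:
    if len(window) >= 5:       # need a little history before judging
        mu, sigma = mean(window), stdev(window)
        if sample_ms > mu + 2 * sigma:
            print(f"alert: {sample_ms:.0f} ms vs baseline ~{mu:.0f} ms")
    window.append(sample_ms)

# Simulated login times in milliseconds; the last one is the anomaly
for sample in [210, 190, 205, 220, 200, 215, 198, 640]:
    check(sample)
```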

Speaker 1:

And the reason we do that is because, well, so many of those components aren't yours. I mean, I don't own the network between my office, where I'm sitting right now, and, I don't know, wherever the servers are that run this program, right? So I can't troubleshoot anything. I just had a blip a few minutes ago, I don't know. So we're gonna capture that data over time as well. But that does pose problems, because now you're looking at short-term and long-term dependencies. So I have applications that depend on, you know, particular BGP prefixes that are advertised in a particular ASN in a particular geo of the world, and then the application behavior changes when I advertise it somewhere else. But that's short-term, see; it depends on that. It doesn't depend on a static thing, like a router that's running somewhere where that's not changing. So, you know, you have these things that we're still trying to solve, but ultimately, again, it's solving that network operations problem.

Speaker 2:

So, Phil, monitoring alone gives us visibility into networks and applications that are in use at the time. But what about proactive, continuous tracking of service performance, to try to get ahead of those potential issues? You mentioned a few minutes ago that Kentik offers synthetic testing. Can you describe what exactly that is from a Kentik perspective and how it works?

Speaker 1:

Sure. So there's a distinction between passive and proactive, or active, monitoring, right? The passive monitoring that we do, and that most organizations do, is collecting the information about what's happening on the network right now. So it's real time, and, well, what happened; it's kind of that metaphysical or existential thing, right? There is no such thing as the present, because it just happened. But that's the idea, and it's end-user traffic, right? We're observing what's going on. That's pure observability. So this is kind of where the whole thing about observability breaks down a little bit. Synthetic testing is literally artificial traffic that we're putting on the network, test traffic. So it's technically not observability for, like, the purist, so maybe you'll get a nasty email saying, how dare he. But the idea is that we're able to use both of these, and we combine them in the database and look at the relationship between what's actually happening and our test traffic. Kentik uses a variety of agents that are deployed throughout the world. So we have public global agents, and we're testing, on our own, various SaaS providers, public clouds, other types of things, for our service provider customers as well. We're looking at transits and global backbone information as well. We're looking at all this stuff, and we actually provide that to all of our customers; it's called the State of the Internet, so all of our customers can log into it, and there it is, and you can see the loss, latency, jitter, HTTP latency, you know, DOM processing time for Dropbox or whatever. That's a really cool place to start. But then what you can also do is deploy these test agents, which are very lightweight Linux packages, by the way. You can deploy them as private agents and use those, whether you put them at your branch offices, in your data centers, wherever you want. If you're a service provider, you put them in all your POPs, all over the place, your points of presence, and you can start to build out synthetic tests, monitoring over time whatever you want to see. That can be as simple as traceroute, ping, loss, latency, jitter. You can do things that are more complex, like sending out sample GET requests to see if I'm gonna get that 200 OK and, when I don't, throw an error. And you can get even more complex.
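
In that spirit, here's a tiny synthetic HTTP probe using only Python's standard library: time a GET, check for the 200, and report failures. A real test agent does far more (scheduling, jitter, traceroute, reporting results), so treat this as a sketch of the concept.

```python
import time
import urllib.request

def probe(url: str, timeout: float = 5.0) -> None:
    """Active test: time an HTTP GET and check for a 200 OK."""
    start = time.monotonic()
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            elapsed_ms = (time.monotonic() - start) * 1000
            status = "ok" if resp.status == 200 else f"http {resp.status}"
            print(f"{url}: {status}, {elapsed_ms:.0f} ms")
    except Exception as exc:      # timeouts, DNS failures, resets, etc.
        print(f"{url}: FAILED ({exc})")

probe("https://www.kentik.com/")
```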

Speaker 1:

Something that I am experimenting with is a combination of what we call page load tests and synthetic transaction monitoring. So what I have are agents that are deployed at my house; I have a lab at home. You can deploy them, you know, they're lightweight, and I just have them running in ESXi, because I have an old PowerEdge server, and I'm testing against all sorts of SaaS applications, just to experiment with page loads and synthetic transaction monitoring and to see how granular I can get. And it's astounding how I can use the synthetic transaction test to simulate an end user logging into Microsoft 365. I use my personal Gmail account for that and then, you know, mess around with PowerPoint. Or you can do that with an e-commerce site.

Speaker 1:

Put something in a shopping cart and try to make a purchase. Capture the total transaction time; capture how long each element and component along the way took, including resolution of files, the movement of this and that. Take screenshots along the way, which you can store as well. And it's amazing how you can see, like, wow, check it out, this one particular file, and I don't even own it, right, it's out in the cloud, this one particular file is taking forever to resolve its hostname, and therefore that activity can't occur. Page load is similar: you're measuring how long it takes for a page to actually load for the agent, but then also all of its components, in a waterfall. And because we're testing against a remote destination, well, we're gonna also do traceroutes to look at the path between here and there. And so now, not only do I have the results of that waterfall, to see all the different files and elements that are loading and what the DNS resolution times look like for all those pieces, all that stuff, I also have an idea of what path I'm taking over the public internet, which is not that difficult to do with Paris traceroute, where you can see your multiple paths going over the internet beyond your first mile.

Speaker 1:

There are some gaps in there, you know, when you're not going to be able to get visibility from certain providers in a particular path. But a lot of the time we're able to augment that with our own data that we have at Kentik. So that's what synthetic tests are all about. It's artificial traffic that you generate to literally test whatever you want to test.
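
To isolate just the DNS piece of that waterfall, here's a small sketch that times how long each hostname takes to resolve. The hostnames are examples; a real page-load test would pull them from the page's actual elements.

```python
import socket
import time

hosts = ["www.kentik.com", "fonts.googleapis.com", "cdn.example.com"]

for host in hosts:
    start = time.monotonic()
    try:
        addr = socket.getaddrinfo(host, 443)[0][4][0]
        ms = (time.monotonic() - start) * 1000
        print(f"{host:<24} -> {addr:<16} {ms:6.1f} ms")
    except socket.gaierror:
        # A slow or failing lookup here is exactly the kind of thing
        # that shows up as one stalled element in the waterfall
        print(f"{host:<24} -> resolution failed")
```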

Speaker 1:

If it's on a network, we can monitor it, and we want to do that programmatically over time and then use those results in the same database as all the other stuff that we've been talking about for the past half hour. Right? We want to be able to correlate all of it. And so those tests, again, can be very mundane, they can be very sophisticated, and they can be to internal resources. I'm actually using tests to monitor a Cisco wireless controller, a router gateway, a bunch of SaaS applications as well, and, you know, looking at VMs. Whatever it happens to be, it doesn't really matter; if you can get to it over the network, we're gonna test it. So, again, it's an active form of testing, and though technically not observability, we still find it a very, very valuable tool, even for things like the time it takes to resolve hostnames.

Speaker 2:

You know, I was wondering how long it was going to take for you to blame DNS here. But yeah, yeah, I just put a demo together the other day for work.

Speaker 1:

So one of the things I do as a tech evangelist is I do demos, and I was creating, in my demo environment, yet another scenario in which an application was running like crap and I was gonna blame DNS. And it's not that hard to do that. I can, like, hose an A record, or, well, we don't always use A records the same way anymore, but you can kind of mess around with DNS so that a particular file takes a long time to load, and then, lo and behold, you have a cool demo, right? And I was like, I don't want to do this again. I don't want to blame DNS again.

Speaker 1:

So I did something else, where I simulated one of my upstream providers... it took a long time to figure this out, but I simulated one of my upstream providers causing network latency. And in the demo, several times, way too much, I guess because I was just worried that I was gonna seem like I was blaming DNS yet again, I explicitly said, like, hey, look, it's not DNS. And then again, look over here, look at these DNS metrics, it's not...

Speaker 4:

DNS.

Speaker 1:

But sometimes it is, and so we do monitor that. In fact, we monitor all of the public DNS services, you know, the ones that we use for our personal use, that a lot of companies use as well. So we'll monitor the resolution time there as well, to see what's going on, because it does play a role in application delivery. And really, Tim, like, think about all the stuff that plays a role in application delivery; the list is tremendous. It's not just routers, switches, and firewalls. I mean, there's wireless controllers and access points and containers and CASBs, and, you know, you mentioned DNS, and, like, do you even own that DNS server, and what's the path between me and that DNS server? There's a lot of stuff going on, and therein lies the problem, and the solution that we're trying to provide here.

Speaker 2:

So the full-path visibility, and observability, I guess I should say: it is really interesting to me to be able to visually see, okay, along the path from my users to this application, there is an issue here, what looks to be in this provider. Is there a way for your customers to export that information in a format that, if it's, like, a direct provider that you work with, you can take to that provider and say, hey, here's some data that I have that shows it looks like you have an issue here? Is there any way to do that?

Speaker 1:

Yeah, that's a big, very important functionality of the platform, just being able to share the data. So you can share a link to the portal, you can share that out, and you can, you know, protect your data. You can send it out as a PDF, you can send it out in any kind of visualization that you want, if a Sankey makes sense, or a line graph, or a bar chart. Because, remember, really, when it comes down to it, we're a data analysis platform under the hood. We're collecting a bunch of information, we'll present it to you any way you want, and then you can share it in any way you want. It's something that a lot of our customers do on a very regular basis, being in network operations at service providers and very large enterprises. So that's table stakes, to be able to share that data and democratize it among teams.

Speaker 2:

So you mentioned earlier that service providers are a big customer base of yours, and service providers, I think, have a unique challenge in that they have so much data from different customers going through their networks, and, of course, some of that traffic can be malicious, and there can be denial-of-service and distributed denial-of-service attacks. Is there anything in the Kentik portfolio that your service provider customers leverage to handle some of that DDoS mitigation?

Speaker 1:

Well, I'll address the first part, the DDoS mitigation. For sure, that's actually a very compelling part of our platform with many of our service provider customers. In fact, we have some customers that, I'm pretty sure, utilize us specifically for that. And the reason is that we're able to see it, because we have all of that data, and flow data is a great place to be able to see that and recognize those patterns. And then what we do is, we will assign a probability to that: this is a likely DDoS attack, of this type. Remember, there's different types of DDoS attacks as well; it's not just generic distributed denial of service, you can have what they call recursive attacks, whatever, there's a few. And then we do partner with several third parties to automate remediation, so that it's literally a click of a button, and then you are bouncing your prefixes to your third-party vendor; they'll advertise them for you and scrub the traffic before releasing it back to you to advertise. So that's a very compelling part of our platform, utilized by a lot of our service provider customers, but also some of our very large enterprise customers, especially when you start to get into, like, streaming services, content delivery networks, which I guess is, like, service provider, I don't know, but CDNs. You know, for a long time they were co-locating services down at individual POPs and COs and locations, and now, with live streaming, they don't really do that so much. So there's just tentacles out there everywhere. And gaming providers; you think about, like, I don't know, what are the different games out there, like Crowd Strike? Is that a... no? I don't know all the different huge video games where there's, like, a million kids playing at the same time. Those actually are organizations that are susceptible to DDoS attacks. So it's not just service providers that are concerned about that.
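
For flavor, here's a crude sketch of the kind of pattern-matching that flow data enables: counting distinct sources per destination and flagging a target suddenly hit by a swarm as a possible volumetric DDoS. Real detection weighs rates, baselines, and known attack signatures; the records and the threshold here are invented.

```python
from collections import defaultdict

# Invented flow records: many sources hammering one destination,
# plus one ordinary flow for contrast
flows = [{"src": f"198.51.100.{i}", "dst": "203.0.113.10"}
         for i in range(250)]
flows.append({"src": "10.0.0.8", "dst": "203.0.113.20"})

sources_per_dst = defaultdict(set)
for f in flows:
    sources_per_dst[f["dst"]].add(f["src"])

for dst, srcs in sources_per_dst.items():
    if len(srcs) > 100:     # arbitrary cut-off for this sketch
        print(f"possible DDoS: {dst} hit by {len(srcs)} distinct sources")
```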

Speaker 1:

Now, as far as looking at that from a security perspective, you know, we're also ingesting threat feeds and looking at potential botnets and really unusual traffic. The problem with the security side, and I know this from both experience and also from some really close friends of mine that work in the cyber space, is that a lot of the time, you know, like a SOC, a security operations center, they're spending like 80% of their time dealing with false positives. So this is always a matter of probability, right? Anomaly detection, and just anomalous behavior in general, not necessarily from a user perspective, but from a more broad perspective, like a global perspective, is not as easy to do. And so you have a SOC that's using some SIEM or some tool, and there might be some ML component to it, and they're still troubleshooting false positives. They're still looking at this and saying, is this real or not? So what we're doing is always trying to improve that, by analyzing the data and then comparing that to what actual threat feeds are, and keeping that as up to date as possible. And so, moving forward, it really is...

Speaker 1:

You remember Johnny Five in Short Circuit? You remember that movie?

Speaker 2:

Oh, Johnny Five. Five is alive!

Speaker 1:

Yeah, exactly. He was like, more input, right? That's us: more data. And that poses a problem, of course. I mean, it's not like big data is a problem, necessarily; that's kind of a solved problem. More data is just a problem because you have more types of data and you want to be able to query it in real time. It's not like, I have all this data and I'll get back to you in a week with the answer. It's like, no, I'm troubleshooting my application right now, it stinks now, and I've got a CEO on my back yelling at me.

Speaker 1:

So those are kind of the problems that we deal with, and there is some ML that becomes valuable, where we can use time series models, linear regression and nonlinear regression, those model families as well, to do forecasting and seasonality and predictive analyses, like I mentioned earlier. So we can start to say that we see this abnormal behavior, and it's growing in this geo, and this is something that is a security concern; this is something that is reminiscent of this type of attack. So that's really the crux of our security focus: just more input, more data, and being able to analyze it.

Speaker 2:

Now, back to the troubleshooting component of this. I don't want to speak for all networking folks, but me personally, I don't want to spend my entire day just in a system looking for problems. So, from a Kentik perspective, what are my options? Can I set thresholds to trigger alerts, and if so, what do those alerts look like? Can I get emails? Can I integrate with an ITSM to generate tickets? What are my options?

Speaker 1:

Yeah, we're going to integrate with whatever ticketing systems are out there. You know, I don't have an exhaustive list in front of me, but, you know, if you're using ServiceNow, or you want to do Slack, whatever, email, obviously; whatever ticketing systems, we'll integrate with them. And as far as alerting, first, by default, a lot of it is going to be dynamically created, which is cool, right? That sounds really cool. But remember my example that I gave earlier: maybe I'm seeing six or seven seconds as my normal time to log into Microsoft 365. If that's normal, that's great. But if you set a threshold that says three seconds, because you don't want it to take longer than that, well, that's not normal. Regardless of what you want, it's always going to take twice as long or three times as long; that is the norm, whether you like it or not. So the system will dynamically create baselines, it will dynamically analyze trends, and it will actually point things out to you in our insights function.

Speaker 1:

By the way, Insights is not like a separate thing. We don't have an ML menu; it's an underlying function of the entire platform. There's one screen that you log into, and it's all working together; it's not like you're logging into several different screens. And so you get this alert that you see this trend, this increase of flow, or this lack of flow where you normally see flow, things like that. So it's not just an up-down, right? Here I have a red alert because a router is down. Well, what if the router's not down, but you're not seeing as much flow as you used to see? So there's a lot of interesting stuff you can do there that's a little bit more advanced, that requires analyzing the data in a new way. I lost my train of thought, Tim. Where were we going with that?

Speaker 2:

You asked me about the alerting. Well, we do have a question from the chat, from the Sutharion. You mentioned understanding what's normal. So the question is, how long does it take for Kentik to establish a baseline of what's normal?

Speaker 1:

This happens dynamically and very quickly, but it's going to change over time as you accumulate data. Within, as a random number, let's say 15 minutes or half an hour of actual ingest of constant data, and us looking at that data, you're going to see some initial rolling standard deviation, some rolling baseline of what is normal for that half hour. Then, to use the example of synthetic tests where you're looking at latency results, what is normal latency from here to here, over time that rolling standard deviation is going to change as you get a new normal and more data. And you might have that rolling standard deviation change for a short time, where for this day the rolling standard deviation was between 100 milliseconds and 300 milliseconds. That's a kind of crappy connection, but that's what normal was for that day. So you have a short-term and a long-term baseline as well. That's all the dynamically created baselining, benchmarking, and trending, and it's very important; it's inherent to our system. But the reality is, that's not how you run a lot of your network and a lot of your application delivery.
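
Here is a small sketch of the dual-window rolling-baseline idea Phil describes. The window lengths (30 minutes short-term, one day long-term) and the three-sigma band are illustrative assumptions, not Kentik's actual parameters.

```python
# Sketch: short-term vs long-term rolling baselines over a latency series.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
idx = pd.date_range("2023-07-01", periods=7 * 24 * 60, freq="min")
latency = pd.Series(150 + rng.normal(0, 25, len(idx)), index=idx)  # ms

short_mean = latency.rolling("30min").mean()  # fast-moving "new normal"
long_mean = latency.rolling("1D").mean()      # slower long-term baseline
long_std = latency.rolling("1D").std()

# Flag minutes where short-term behavior drifts outside the long-term band.
drift = (short_mean - long_mean).abs() > 3 * long_std
print(f"{int(drift.sum())} minutes outside the long-term rolling baseline")
```

Keeping both windows is what lets a system say "this is today's normal" without silently forgetting what normal looked like last week.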

Speaker 1:

When I was with VARs, I helped customers design and build networks, and a lot of the time there were very strict requirements for latency and jitter and things like that. Take MRI imaging, for example. There was a large medical, or rather healthcare, system in the Midwest US with, I don't know, six or eight data centers, tons of hospitals, and then of course the, I don't know, thousand doctor offices and urgent cares and all that stuff, plus the locations that had MRI machines, which wasn't all of them, just the larger locations. They were hub and spoke, because this was before SD-WAN was very ubiquitous; a partial mesh, I should say. But anyway, they were backhauling that traffic to one of the three or four data centers, to the software that ran those MRI imaging programs, whatever it was. I don't really know the healthcare part of it, but I do know that they had very strict requirements: it had to be under 30 milliseconds, or 15 milliseconds, whatever it was, I don't remember.

Speaker 1:

And so you don't necessarily want a rolling standard deviation; you don't want an automatically generated baseline in that case. So you can go in and configure whatever you want, whether it's in milliseconds or it's how much packet loss, like, on this link I can only tolerate 0.5% packet loss. Or you can leave it for the tool to do for you, and I'm not going to say that's a danger.

Speaker 1:

But the consideration you have to make is that if 300 milliseconds becomes the norm, you're actually going to stop getting alerts, because it's now normal. After a while the system says, yeah, that's normal. So you're going to want to address those alerts and hard-code thresholds where you need to. But then remember that it's not just latency. You might have 300 milliseconds of latency over time, but you're also going to have a high page load time. That's not going to change, and a high page load time is going to affect user experience. So the alerts are dynamic and configurable, across the board, and the alerting is of course integrated with whatever systems people use. Again, that's the whole point of what we're doing: to present you with the data that you need to fix the problem right now.
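
A toy sketch of mixing hard-coded SLA thresholds with a dynamic baseline might look like the following. The policy fields and function are invented for illustration and are not Kentik's configuration schema.

```python
# Hypothetical alert policy: static SLA rules plus a dynamic-baseline check.
policy = {
    "latency_ms_max": 30.0,      # strict SLA, like the MRI-imaging case
    "packet_loss_pct_max": 0.5,  # hard-coded loss tolerance
}

def evaluate(sample, baseline_ms, policy):
    """Return alert reasons for one measurement against static and dynamic rules."""
    reasons = []
    if sample["latency_ms"] > policy["latency_ms_max"]:
        reasons.append("latency above SLA threshold")
    if sample["packet_loss_pct"] > policy["packet_loss_pct_max"]:
        reasons.append("packet loss above tolerance")
    if sample["latency_ms"] > 3 * baseline_ms:
        reasons.append("latency far above dynamic baseline")
    return reasons

print(evaluate({"latency_ms": 42.0, "packet_loss_pct": 0.1},
               baseline_ms=12.0, policy=policy))
```

The static rules keep alerting honest when "normal" drifts somewhere unacceptable, which is exactly the trap Phil warns about above.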

Speaker 2:

Yeah, for sure, that does make a lot of sense, Phil. One of the last things that I wanted to talk about was the impact of cloud. A lot of enterprises are starting to, or have been for a while, I should say, taking workloads that were in an on-premises data center into the cloud. Are they leveraging Kentik's solution for cloud workloads any differently than they would for on-premises networks and accessing SaaS applications?

Speaker 1:

Well, I'm not going to say they're using it any differently, but yes, Kentik Cloud, which, again, is not another screen you're logging into. It's just part of the platform, and if somebody creates their own personal test account, you can see that everything is there in one menu, and all that data is in the same database underlying all of this. So we're capturing VPC flow logs, Google Cloud logs, whatever it is, and we're looking at the traffic in and among your VPCs, and back to on-prem if you're a hybrid environment, which many people are. Right now I'm working with an organization of 30,000 employees across the US, 1,500 locations, and they have a small footprint in both Azure and AWS. They are very much hybrid, and they're looking at bringing some stuff back on-prem, which I'm sure you guys hear about all the time, right? So it's not only, how is my traffic moving among my VPCs, and among my containers within the cloud itself, and we didn't bring up containers, but we can talk about that as well, but also, how is it traversing back on-prem, and what path is it taking? Remember, it's all related. So that's very much a part of what we do.
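
For context on what "capturing VPC flow logs" means, here is a sketch that parses one record in AWS's default version-2 flow log format. The record itself is synthetic; any real pipeline would also handle custom formats and other clouds.

```python
# Sketch: parsing an AWS VPC flow log record (default version-2 format).
FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

# Synthetic example record, not real customer data.
record = ("2 123456789012 eni-0a1b2c3d 10.0.1.5 10.0.2.9 443 49152 6 "
          "10 8400 1690000000 1690000060 ACCEPT OK")

flow = dict(zip(FIELDS, record.split()))
print(f"{flow['srcaddr']}:{flow['srcport']} -> {flow['dstaddr']}:{flow['dstport']}"
      f" proto={flow['protocol']} bytes={flow['bytes']} action={flow['action']}")
```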

Speaker 1:

A conversation about networking in 2023 is usually a conversation about cloud connectivity, right? I remember, just before I left the VAR space, I was doing a lot of SD-WAN projects, and it was almost always, how do I make my cloud connectivity easier and better? And I was talking to network engineers. But I will bring up, if you don't mind, Tim, that we are also monitoring container networking as well, whether that be by IP, and remember, these are ephemeral, so we want to do other things like correlate with a process ID, a pod name, a service name, whatever we can. These containers are living in the cloud, most likely, and we're able to identify loss, latency, and jitter, and dependencies among containers as well. Now, as for how we do that.

Speaker 1:

Yes, there's flow and other information; we're looking at meta information from that. But we're also using eBPF, the extended Berkeley Packet Filter. Nobody ever says "extended Berkeley Packet Filter" anymore, but it's eBPF, and we're using it, deployed as an agent in the container environment, to collect metrics at the container, which is really a Linux machine, right at the kernel level, from the network stack in particular. We're not necessarily using it for APM; we're not really an APM company. We are collecting those metrics from the network stack, and therefore we have visibility into your container network as well. Again, I know you didn't ask about that, Tim, but to me, when I'm talking about cloud, I end up going into containers, and then how does that connect back to my on-prem and all that?
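
As a flavor of what kernel-level network telemetry via eBPF looks like, here is a very rough sketch using the BCC Python bindings (requires Linux, bcc, and root). This is emphatically not Kentik's agent; it just counts TCP retransmits per destination, one of the kinds of network-stack signals an eBPF agent can collect.

```python
# Rough sketch: count TCP retransmits per destination IPv4 with BCC/eBPF.
import socket
import struct
import time

from bcc import BPF

prog = r"""
#include <net/sock.h>

BPF_HASH(retrans, u32, u64);  // key: destination IPv4, value: retransmit count

int trace_retransmit(struct pt_regs *ctx, struct sock *sk) {
    u32 daddr = sk->__sk_common.skc_daddr;  // network byte order
    retrans.increment(daddr);
    return 0;
}
"""

b = BPF(text=prog)
b.attach_kprobe(event="tcp_retransmit_skb", fn_name="trace_retransmit")

time.sleep(10)  # collect for a short window
for k, v in b["retrans"].items():
    # Native-endian repack recovers network-order bytes on little-endian hosts.
    ip = socket.inet_ntoa(struct.pack("I", k.value))
    print(f"{ip}: {v.value} retransmits")
```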

Speaker 2:

Yeah, no, I appreciate you bringing that up, because containerization in the cloud is definitely, I think, a big component of what people are trying to do. They're trying to migrate away from those monolithic applications and go to a more service-oriented architecture, and containers are really at the heart of that. So I'm glad you brought it up, because that's got to be a big part of what a lot of your large customers are doing.

Speaker 1:

It is. And not even just our large customers. I guess, I don't know, it depends how you define large and medium, it's true. Yeah, I remember going through the Cisco certification track, and I would read in the textbooks, they'd be like, a small business with 10,000 people, and.

Speaker 1:

I'd be like, I remember working for regional VARs where my customers were like 30 people, and I was setting up a site-to-site VPN on a Fortinet or something, I don't know. But in any case, it's not only our very, very large customers. It's just that they want to be agile, they're application-focused, and so they utilize microservices architectures, because it lends itself to what they're doing: cloud-native buildouts, rather than lifting and shifting their VMs. So you're seeing even, I guess you could say, more technologically progressive companies that are not gigantic really relying on microservices architectures. So that's one of the focuses for us in the coming years, or not coming years, right now, I would say.

Speaker 1:

The product within our platform is called Kentik Kube, Kube with a K. And I don't know, do we pronounce it "Koob" because it's Kubernetes? So is it Kentik "Koob" or Kentik "Cube"? I say this in jest, by the way, Tim, because this is an ongoing debate in the Slacks at work. Some people say "Cube," some people say "Koob." In any case, if it's OK, I see somebody asking about AI in the chat. Can I address that?

Speaker 2:

Absolutely.

Speaker 1:

So I am very reluctant to talk about AI, because it smells so much like marketing to me.

Speaker 1:

One of the things that I hated when I was a network engineer was sitting on the other side of the table listening to a vendor go off with all these marketing terms. I remember somebody, I won't say the name of the vendor, going on and on about SDN, and I'm like, that's the most ambiguous term. At the time everybody was talking about it; we haven't heard it in years. But it was such an ambiguous thing, so I was calling him on it, and he didn't know what to do, and he was backpedaling, and finally he goes, well, we use bare-metal SDN. And I'm like, I'm done here. I was like George Costanza, like, I'm out, I'm done, this is nuts. Now, he did bring breakfast. We had bagels and cream cheese and stuff, so I did stick around for that meeting.

Speaker 1:

But I am very reluctant to talk about technology that has that smell of marketing unless I can prove that this is the problem we're trying to solve. I'm not saying that AI is not a thing, not at all. In fact, I'm recording a podcast soon with a good friend of mine about where we are with artificial intelligence, specifically in the realm of networking. But I can say, in the broader scheme, when you look at machine learning as a broader kind of technology realm, we are using that as one of the tools in our toolbox to achieve the insight that I mentioned: to have the machine, the computer, the system do some of the learning for us, to figure out those dependencies and causal relationships for us. The three of us here might be able to figure something like that out ourselves over time.

Speaker 1:

By the way, I love doing that. Right, that's like in my soul. I love finding solutions to problems; we're engineers. But that's not reality. My CIO is not like, Phil, I really love that you're enjoying yourself fixing this problem, that's cool that you're having a ball, but I kind of need you to fix it now. And so that's what the ML component of our system is for. It's for finding that insight in a programmatic way, replacing a team of PhDs from MIT. AI is getting into a new realm, and I joke with my data scientists all the time, like, so can you show me some of those if-then statements? And they're like, it's more than that, Phil. And I'm like, I know, I know. So I did want to bring up that it's not a term that I use, or that we use at Kentik, very much, because, again, we have some statistical analysis algorithms that we apply, and we apply some ML models where and when we need to, and we don't when it doesn't make sense.

Speaker 2:

No, I'm glad you got into that discussion, because, like you said, AI is such an out-there word, or acronym, that it leaves a lot to be determined. Like you said, SDN was a very similar thing for a long time: everybody talked about it, but everybody also had a different definition of it. So I'm glad that you brought up the elephant in the room and talked about how you're leveraging machine learning. Thank you for that. So, Phil, I'll ask you now: let's bring this whole thing together and give a call to action. For people that are really interested in everything we just talked about, what would you advise they do next?

Speaker 1:

Well, I think it'd be great, if you want to get familiar with the platform, what we can do, and what it looks like, to see it for yourself. I recommend that you spin up your own free portal. Just use your personal account if you like; you don't need to use a work account, and we're not going to blast you with emails and all that kind of thing. Because, again, the idea for me as a technical evangelist is to prove it: show you the platform in action, send some information into it. Obviously, it's not going to be as full-fledged as if you were a paying customer, but it's a great way to start to get a look at the platform for yourself. Another thing you can do, if I can be so bold and impose upon you, is check out our company's podcast, Telemetry Now. We talk about visibility from time to time, but we talk about technology in general.

Speaker 1:

I have SEs and other folks from my company on, but also a lot of folks from outside the company, to talk about some of the things that Kentik is involved in.

Speaker 1:

So, as an example, I think you might be familiar with Doug Madory. Doug is our resident Director of Internet Analysis at Kentik. He has such a cool job. He basically looks at what's going on in the world, sometimes geopolitical stuff, like Crimea and Ukraine and Russia and Cuba, looking at what's going on with rerouting and submarine cables and all this really cool stuff. And so sometimes we'll have Doug on, and bring on guests, to talk about some of that stuff that's going on at a global scale. Really interesting, and Kentik is involved with that, because, again, we are collecting so much information from service providers, backbone providers, transit providers, all that kind of stuff, that we can really offer a lot of depth of insight into the analysis of what's going on in the world. So I recommend that you check out Telemetry Now as well, and, of course, there's always the blog and things like that.

Speaker 2:

Well, Phil, thank you very much for joining us, and thank you to Kentik for being a sponsor of the Art of Network Engineering. Definitely check out kentik.com and that free trial; it sounds really interesting and is an awesome offering they're giving out to let people try the platform. And check out the Telemetry Now podcast. As for Andy and I, thank you to all the listeners. You can find us at artofnetworkengineering.com, on the socials at Art of NetEng, and also check out our Cables2Clouds podcast. Thank you all for listening. This has been another episode of the Art of Network Engineering.

Speaker 4:

Hey there, friends. We hope you enjoyed listening to that episode just as much as we did recording it. If you want to hear more, make sure you subscribe to the show in your favorite podcatcher. You can also give that little bell rascal a little ring-a-dingy so you know when we release new episodes. If you're social like we are, you can follow us on Twitter and Instagram. We are at Art of NetEng, that's Art of N-E-T-E-N-G. You can also find us on that weaving web that is the internet at artofnetworkengineering.com. There you'll find our show notes and some blog articles from the hosts, guests, and other friends who just like getting their thoughts down on that virtual paper. Until next time, friends, thanks for listening.
