Welcome to Episode 364, part of the continuing series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”
Kubernetes does a great job at autoscaling for applications. But when you run Kubernetes in the cloud, autoscaling might not always be so great – especially if it doesn’t scale back down.
Spot by NetApp features Ocean – a way to address runaway cloud costs associated with Kubernetes deployments.
This week, Zak Harabedian (Zak.firstname.lastname@example.org, LinkedIn) stops by to discuss Spot Ocean and how it can positively impact your Azure Kubernetes Service deployments.
Finding the Podcast
You can find this week’s episode here:
I’ve also resurrected the YouTube playlist. You can find this week’s episode here:
You can also find the Tech ONTAP Podcast on:
I also recently got asked how to leverage RSS for the podcast. You can do that here:
The following transcript was generated using Descript’s speech to text service and then further edited. As it is AI generated, YMMV.
Episode 364: NetApp Spot Ocean CD with Azure Kubernetes Service
Justin Parisi: This week on the Tech ONTAP Podcast, we talk about Spot Ocean and continuous delivery for your Kubernetes applications.
Podcast intro/outro: [Intro]
Justin Parisi: Hello and welcome to the Tech ONTAP Podcast. My name is Justin Parisi. I’m in the basement of my house and with me today, I have a special guest to talk to you all about Spot Ocean CD. So to do that, Zak Harabedian is here. So, Zak, did I say your name correctly, and how do we reach you?
Zak Harabedian: Yeah. Hey Justin.
Great to talk with you. My name is Zak Harabedian. I’m a product architect with Spot by NetApp. And you can reach me on LinkedIn or shoot me an email, Zak.email@example.com.
Justin Parisi: All right, so you mentioned Spot by NetApp and that’s one of our newest or newish acquisitions that we’ve had. And we are here to talk about Spot Ocean. So tell me about Spot Ocean. What is that?
Zak Harabedian: Yeah, sure. So Spot Ocean is our flagship Kubernetes offering in that it’s a serverless compute engine for containers in the cloud. So what exactly does that mean?
Basically we plug into your existing containers or Kubernetes infrastructure, and we take over all of the infrastructure scaling from that point. So we automatically figure out what is the best instance type, size, and family to launch, all based on the pods’ requirements. From there, we continue to optimize your infrastructure, so like bin packing, scheduling simulations. How can we make things more efficient within the cluster? And then of course, you know that final layer is going to be container right sizing and observability. So Kubernetes kind of changes that realm of what you’re looking at in terms of cost metrics, right sizing, things like that.
So that’s all things that Ocean will be able to help you with.
Justin Parisi: So it sounds like Ocean is trying to tackle the problem of scale out, automated scale, as well as cost. Is that accurate?
Zak Harabedian: Yeah. And, Kubernetes makes it really easy to scale pods, right? And replica sets and horizontal pod autoscaler.
But the infrastructure is really tricky because it turns into like a game of Tetris, right? Some pods are memory intensive, compute intensive. A lot of customers that we talk to are setting up dozens of different node groups to try to match pod requirements to instance types and things like that.
So yeah, Ocean will help with all of that automation and save time for your DevOps teams that are responsible for maintaining this.
Justin Parisi: And when you’re talking about doing Kubernetes in the cloud, there’s also the cost associated with that, cuz you know Kubernetes is great for scaling out and it’ll do it for you automatically, but in the cloud, that might not be the best thing for your costs.
Zak Harabedian: Yeah, exactly. So, cost is another big challenge, and that’s kind of how Spot got its name, Spot.io, which was acquired by NetApp. We’re the experts at finding the best Spot instance pricing. If you’re not familiar with Spot instances or Spot VMs, they can save you up to 90% on your compute.
So we go out, we find that excess capacity from the cloud providers. We try to launch that in your environments and we try to do that in a way that will give you reliable Spot infrastructure, but also, we have so much data from all of our customers using Spot instances that we can predict interruptions from the cloud provider if they need those VMs back, and we’ll launch replacement instances.
So not only are you going to have highly available infrastructure, but it will also be cost efficient with Ocean.
Justin Parisi: Okay, cool. So this is about Ocean CD, and I guess the CD here means continuous delivery. Is that accurate or is that something else?
Zak Harabedian: Yeah. So, we’re trying to make Ocean your one stop shop for all things Kubernetes management.
So at Spot by NetApp, we’re a customer obsessed company and we’ve spoken to hundreds of our customers that have said, Hey, look, you guys are already optimizing my infrastructure with Ocean. Is there anything that you guys can do to help optimize my software delivery process? So we took that in-house and we’ve been building Spot Ocean CD for the last 18 months or so.
So talking to all of our customers, hearing their pain points, and trying to build a solution that meets their needs. So that’s how Ocean CD came about. And today, what we have with Ocean CD is a continuous delivery solution specifically for Kubernetes applications, one that makes it really easy to go from zero to progressive CD pipelines, things like canary or blue-green deployments, in a matter of minutes.
Justin Parisi: Okay. And let’s talk about continuous delivery, cuz not everyone knows what that is. So can you just gimme kind of a high level overview of what that might be?
Zak Harabedian: Sure. So continuous delivery is like every time that there’s a change in your code, right? Maybe there’s a new update. And if we take a step back and talk about Kubernetes and microservices specifically, the shift from monolithic applications to containers is doubling the number of microservices you need to maintain.
And the reality is we’re now seeing deployments going from this big monthly release where you have everybody on call and you’re doing this big release and releasing new code to something that’s happening dozens if not hundreds, if not thousands of times a day. So anytime there’s a new change to any microservice, you need to get that out into your environment.
And continuous delivery is what allows you to do that and take those code changes and safely get them into your clusters.
Justin Parisi: So what are some of the challenges that you have noticed with doing a continuous delivery mechanism with Kubernetes applications?
Zak Harabedian: Yeah, it’s a good question. So, kind of what I mentioned earlier, that shift from monolithic to microservices, everything is just doubling in size or tripling, quadrupling, whatever number you want to use. But the challenges there are, you need to shift your mentality to start looking at different kinds of metrics.
You’re probably using different application monitoring tools and you need a system that can plug in to any monitoring tool you’re using today and use metrics to continuously verify your deployments as they’re hitting your environment. So what that allows you to do is to shift left and catch issues with your new release earlier in that pipeline so you’re not getting into production, having an incident, and trying to debug it there and not having any idea why that new version has an issue.
Justin Parisi: So why wouldn’t I use something native to Kubernetes that just comes with the stack, right? Or does it even exist?
Zak Harabedian: Yeah. So, the native solution to doing continuous delivery on Kubernetes is doing something called a rolling update, which is you have a new version, there’s a new pod that goes into the cluster and it just slowly trickles out there.
There’s not much granularity to it. And that can be a little risky because you’re not checking metrics, you’re just throwing it into your environment and really hoping for the best. So, our best practices are getting you to that place of being able to use a strategy like canary or blue-green.
And, just as an example with Canary, we can integrate with your traffic management tool like Istio and we can say, okay, this new version of my release, I only want it to hit 1% of my traffic. Or maybe if I wanna take that a step further, I only want it to hit 1% of my traffic that’s using the Firefox browser.
Right? Something that’s that level of granularity and specific. And then once I do release that new version, I wanna check how it’s performing. So I want to go check CloudWatch, or I want to check Prometheus to see how it’s performing and if something’s not working as I expect it, we’re also gonna help you with that automation to automatically roll things back in the event that we detect an issue.
So you’re not getting too far into your release before you need to roll things back.
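To make that concrete, here’s a rough sketch of the kind of Istio VirtualService routing that a canary split like the one Zak describes would drive. The service name, subsets, and weights here are hypothetical, and in practice Ocean CD manages this configuration for you:

```yaml
# Hypothetical Istio routing for a 1% canary, limited to Firefox traffic.
# Names (my-app, stable/canary subsets) are illustrative only.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: my-app
spec:
  hosts:
    - my-app
  http:
    # Only requests from Firefox are considered for the canary split
    - match:
        - headers:
            user-agent:
              regex: ".*Firefox.*"
      route:
        - destination:
            host: my-app
            subset: stable
          weight: 99
        - destination:
            host: my-app
            subset: canary
          weight: 1
    # Everything else stays on the stable version
    - route:
        - destination:
            host: my-app
            subset: stable
```

If verification fails, shifting everything back to the stable subset is just a weight change, which is what makes automated rollback cheap with this approach.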
Justin Parisi: And do you have any sort of mechanisms that can do backup of configurations or you can copy the state of what you’ve done already and use that in the case of something happening, disaster recovery scenario, that sort of thing?
Zak Harabedian: Yep, exactly. So that’s the beauty of doing a SaaS solution with continuous delivery and with Ocean CD specifically. We have this reusable entity model in that you can define your strategy of how you want to roll things out and deploy things and reuse that for multiple different microservices and multiple different deployments.
So that’ll not only save you time but it’ll save your developers and application owners the ability to not need to be a CD expert. So that’s really the first layer. And then in terms of prior versions everything that you’ve released with Ocean CD, you’re going to see the revision history.
You could see the differences from the old and new versions, and then you can not only automatically have it rolled back, but if you are seeing something that you are not expecting, you can go into the console or use our CLI to roll it back to that specific version that was performing as you would expect it.
Justin Parisi: So how easy is it to implement? Is it something that I have to install on site or is it Cloud resident? Does it live next to the Kubernetes instances? How do I install it and how do I use it?
Zak Harabedian: Yeah, sure. So for any of our existing Ocean customers, you’re familiar with our setup process of deploying our controller.
And that takes a matter of minutes. With Ocean CD it’s the same idea. You deploy our controller into your cluster, and then from there, we ask you to change your existing Kubernetes deployments to what we call a Spot deployment. So we use Kubernetes custom resource definitions to know, okay, this is something that you want us to manage for you. And from there, we have a bunch of pre-built templates that will help you get up and running, so you’ll only need to plug in a couple pieces of information to craft your first strategy. I would say in probably under 15 minutes, you can be up and running with a full-blown canary deployment with us. And we also have hands-on workshops that you can utilize. So if you check out Spotk8s.com, you’ll see we have workshops for our Ocean product, Ocean CD, Ocean for Apache Spark, and our Elastigroup products. It’s a great way to get hands-on and play with the tools.
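As a sketch of what that custom-resource swap looks like, a managed workload trades the native Deployment kind for a Spot deployment while keeping an ordinary Deployment-style spec. The apiVersion and field names below are illustrative assumptions, so check the Ocean CD documentation for the exact schema:

```yaml
# Hedged sketch: converting a native Deployment into an Ocean CD-managed
# SpotDeployment. The spec body mirrors a standard Kubernetes Deployment;
# image and names are placeholders.
apiVersion: spot.io/v1beta1
kind: SpotDeployment
metadata:
  name: my-app
  namespace: my-namespace
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.2.3
```

Because the pod template is unchanged, the migration is essentially a one-line edit to the kind and apiVersion of each deployment you want managed.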
Justin Parisi: Cool. So does this support all versions of Kubernetes? I mean, we’re talking about on-prem or doing things through Rancher or AKS or does it only work with specific flavors?
Zak Harabedian: Yep. So we will work with any flavor of Kubernetes, any cloud. We’re a multi-cloud, multi-cluster solution.
So in a single pane of glass, we’ll be able to support our customers that are running on-prem Kubernetes, AWS, Azure, GCP, whatever you’re using. The only requirement is going to be that you’re using Kubernetes.
Justin Parisi: And if I’m running an on-prem instance of Kubernetes, Ocean CD will still be in the cloud.
What sort of caveats or expectations should I have when I’m running something remotely like that, offsite when I’m pointing it towards something that’s on-prem?
Zak Harabedian: Yeah, so the only requirement from our perspective is we just need our controller to be able to communicate back to our SaaS layer, so our SaaS layer is where we have all of that logic of your strategy, which is how you want to handle your deployments.
That’s what tells us how to communicate with your application monitoring tool. Things like that. So we just need that level of connectivity. And then in terms of what the other differences might be is maybe on-prem you’re using a different monitoring tool that you want us to be looking at instead of something cloud native, like AWS CloudWatch or things like that.
So those are probably the only two differences, but not too much you need to worry about with that solution.
Justin Parisi: Okay. And what about the licensing and the payment model? How does that work?
Zak Harabedian: Yeah, so there are two different ways that you can pay with Ocean CD. The first is a per-cluster licensing model.
And then the other is a per-service model. So, ultimately we need to talk with our customers and see what makes sense, because the reality is everybody uses Kubernetes a little bit differently. You have some customers that are building clusters for every single environment, and every team has their own cluster.
You have some customers that just run these mega Kubernetes clusters and they separate things out by namespace. So that’s why we have those two different versions to meet customers with a licensing model that makes sense for them.
Justin Parisi: All right. So earlier you mentioned the word verifications a lot, and I’m kind of curious what that is and why it’s important.
Zak Harabedian: Yeah. So, with Kubernetes, it’s also a change in how you’re looking at metrics. So whatever monitoring you had in place for your monolithic application is probably not gonna be suitable for Kubernetes. So Kubernetes, you’re looking at container level metrics.
You’re looking at CPU usage, not focused on nodes specifically, but at the container level. So it’s a whole other set of metrics and tools you might be using. And we at Spot feel it’s very important to integrate all of those metrics into your deployment pipelines.
So with Ocean CD all you need to do is define what monitoring tools you’re currently utilizing, and then we’ll set up that communication between our SaaS layer and your monitoring tool. And then you’ll give us success and failure conditions based on the metrics you’re already using today.
So you can automatically just tell us, okay, if the CPU usage is greater than 95% in this namespace, or on this container specifically, roll it back, because we know that’s something that is going to turn into an incident. So that’s why we feel it’s so important to integrate those metrics and have that automation, because when we talk with our customers, the reality is they’re telling us that they don’t have an automated solution to verify metrics.
A lot of customers are doing deployments and manually watching Grafana dashboards to make sure things are as they expect. Or maybe it’s a NOC team that’s monitoring things for you. But we build all of that into Ocean CD, so it’s one less thing your teams need to worry about.
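The CPU-usage guardrail Zak describes can be sketched as a metric-based verification: query the monitoring tool on an interval and trigger rollback when the failure condition is met. The field names below are hypothetical, loosely modeled on Ocean CD’s verification-template concept, and the Prometheus query is illustrative:

```yaml
# Illustrative verification sketch (field names are assumptions, not the
# exact Ocean CD schema): sample container CPU every minute, five times,
# and fail the rollout if usage exceeds 95%.
kind: verificationTemplate
name: cpu-guardrail
metrics:
  - name: container-cpu-usage
    interval: 1m
    count: 5
    provider:
      prometheus:
        query: |
          avg(rate(container_cpu_usage_seconds_total{namespace="my-namespace"}[5m])) * 100
    # A failed condition halts the rollout and triggers automatic rollback
    failureCondition: result > 95
```

The point is that the success and failure conditions are expressed against metrics you already collect, so no new instrumentation is needed to gate a release.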
And when something does fail, we integrate natively with the Spot notification center, so you can set up different notification policies to send a Slack webhook or a Teams webhook that will alert those teams that, okay, this deployment failed because your metrics didn’t pass the verification process.
So, not only can you build that in, but all the notifications will also be built into the tool so everybody’s notified with the right level of understanding.
Justin Parisi: I don’t know about you, but I need more Slack notifications.
Zak Harabedian: So, yeah, and the cool thing is, you can set the notification engines up with custom policies.
So I can have the granularity of my cluster or my application only notifying me, so you’re not bothered by coworkers that are committing bad code and constantly failing the pipeline. So, we can help reduce that noise for you with the granularity of the notification center.
Justin Parisi: Yeah, and I mean, I joke partially cuz it’s part of the job, but you can get drowned in notifications and it can negatively affect how you react when something comes up. Cuz you start thinking, oh, it’s just another false alarm. I don’t need to pay attention to this.
Zak Harabedian: Exactly. So we wanna eliminate that alert fatigue and not overwhelm you with notifications.
So yeah, you’ll be able to set things up with whatever makes sense for your workflow and only get to those important notifications.
Justin Parisi: So as far as Ocean CD goes, where would I find more information?
Zak Harabedian: Yeah. So you could check out our website, Spot.io/products/OceanCD. That’ll get you to our landing page about Ocean CD.
From there, you can request a hands-on demo with our team of product experts. The other resource that you can check out is our Spotinst public GitHub, where we have a bunch of different templates for our entities. And again, the hands-on workshops I think are a great way to play with all of our products.
So Spotk8s.com is going to be a great resource to get hands-on with not only Spot Ocean CD, but all of our infrastructure scaling products. So those are the three resources that I would suggest to anyone that wants to learn a little bit more.
Justin Parisi: All right, cool. And we’ll include those links in our blog as well.
So Zak, again, if we wanted to reach you, how do we do that?
Zak Harabedian: Yeah so you could reach out to me on LinkedIn Zak Harabedian, or shoot me an email, Zak.firstname.lastname@example.org.
Justin Parisi: All right. Excellent. Thanks again for talking to us all about Spot Ocean as well as the continuous delivery mechanisms that it offers.
Zak Harabedian: Awesome. Thanks so much for having me, Justin. Appreciated being on today and hopefully talk soon.
Justin Parisi: All right. That music tells me it’s time to go. If you’d like to get in touch with us, send us an email to email@example.com or send us a tweet @NetApp. As always, if you’d like to subscribe, find us on iTunes, Spotify, Google Play, iHeartRadio, SoundCloud, Stitcher, or via techontappodcast.com. If you liked the show today, leave us a review.
On behalf of the entire Tech ONTAP podcast team, I’d like to thank Zak Harabedian for joining us today. As always, thanks for listening.
Podcast intro/outro: [outro]