Behind the Scenes Episode 316 – The Mark III and NetApp Medical AI Partnership

Welcome to Episode 316, part of the continuing series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”


This week, Andy Lin (andy@markiiisys.com), VP of Strategy and Innovation at Mark III, and Esteban Rubens (estebanr@netapp.com) of NetApp join us to discuss the medical AI partnership between Mark III and NetApp and how Mark III enables customer access to medical AI with end-to-end support.

Tech ONTAP Community

We also now have a presence on the NetApp Communities page. You can subscribe there to get emails when we have new episodes.

Tech ONTAP Podcast Community


Finding the Podcast

You can find this week’s episode here:

You can also find the Tech ONTAP Podcast on:

I also recently got asked how to leverage RSS for the podcast. You can do that here:

http://feeds.soundcloud.com/users/soundcloud:users:164421460/sounds.rss

Transcription

The following transcript was generated using Google’s voice to text transcription service. As it is AI generated, YMMV.

I’m here in the basement of my house, and with me today I have Esteban Rubens from NetApp, as well as a special guest from Mark III, Andy Lin. So Andy, what do you do at Mark III, and how do we reach you?
00:10 – 00:30
Hey, thanks for having me on. Yeah, Andy Lin, VP of Strategy and Innovation and CTO at Mark III Systems. You can reach me by just dropping me an email at andy.lin@markiiisys.com. All right, excellent. And of course, Esteban Rubens from NetApp. What do you do here, and how do we reach you?
00:30 – 00:45
Hey Justin, thank you. I work in the healthcare team. I cover cloud and AI in healthcare and life sciences, and people can reach me at estebanr@netapp.com or through LinkedIn.
00:46 – 01:44
All right, excellent. So if you’ve not heard of Mark III, luckily Andy’s here to tell us what that is. So Andy, Mark III, what is that company, and what do you guys do? Absolutely. So we’re a longtime IT solutions provider in the field. We do what I call the regular stuff, but we’re really unique in the ecosystem, and what we’re going to talk about today is really our capabilities specifically around AI and machine learning, and helping clients with their journey around building what I call new-stack architectures, or architectures that are driven by open-source-centric methodologies, or what I call ten thousand different ways to build the same thing, especially around AI and machine learning. So we’re obviously a NetApp partner and an NVIDIA Elite partner in machine learning, and in visualization and simulation as well. But today we’ll talk a little bit specifically about the AI Center of Excellence concept and what we’re doing around the ecosystem within healthcare.
01:45 – 02:45
So has Mark III always been an AI healthcare company, or did it evolve over the course of time because there was a need in the market? So that’s a great question. We definitely grew up with a strong focus on healthcare, and I think that was just a byproduct of how we were founded. We support healthcare institutions all across the country, but we had the good fortune of being founded in the mid-nineties in the shadows of the Texas Medical Center in Houston, which is the densest medical center in the world. For those of you who have been there, it’s, you know, a few square miles, and it’s got over 50 institutions. So we’ve really absorbed competency in healthcare over the last quarter-century, really just, as I call it, by osmosis, just by being there. And that’s really powered us in healthcare across the country. How AI specifically came about is kind of an interesting story. The way we got into it was really, in many ways, sort of by chance, in 2015. Yep.
02:45 – 03:45
We founded an innovation unit internally, which is basically like an internal startup, and we did it because we basically saw the software stack changing, driven by open source. You know, even if we were to continue helping our clients with platforms and infrastructure in the data center like we always have, you’ve got to really be able to go builder-to-builder, as I call it, with organizations, to help their builders, their developers, their data scientists, to really understand how to orchestrate and build some of these stacks, because open source has made it so that there really are ten thousand different ways to build the same thing, as I mentioned earlier. So everything has a higher propensity to run better on something. We just, I think, realized early that the approach had to be fundamentally different. You still have to grow your SEs and your systems engineers, all the fundamentals, but you have to really understand how things are fundamentally built to be able to, I think, help clients really get to where they want to be. So we got into AI, you know, one day, when one of our developers, who we had brought on from a startup, had really no
03:45 – 04:22
background in it whatsoever. He had just come in and built some models using an early preview of TensorFlow. This was 2015. So, you know, that really just started us down the path to where we are today. So essentially what’s happened is that our unit of data scientists and developers has sort of merged over the years with our DevOps, now MLOps, team and the seasoned SEs, and together we work with clients specifically around building a Center of Excellence concept as a joint team: builders and architects all working together.
04:24 – 04:37
Cool. So you mentioned that Mark III is a partner of NetApp, and I’m interested to hear maybe how that came about, but also how NetApp and Mark III partner. Like, what do you feel one offers the other, and how does that work?
04:39 – 05:39
Yeah, so, you know, we’ve been a partner of NetApp for quite some time. We work with NetApp around what I call traditional projects as well as specifically around AI and machine learning. You know, obviously NetApp provides world-class storage and data management solutions as part of the stack, with technologies like ONTAP AI and things like the NetApp DataOps Toolkit, as well as things like AI Control Plane. And what we do as well is work with an organization around guiding them toward what I call the AI Center of Excellence. How I define a Center of Excellence is this: it’s basically a centralized resource for a large healthcare organization or research organization, and within healthcare we’re working with, you know, academic medical centers, large life sciences organizations, large healthcare systems, and biotech companies as well. It’s basically a centralized resource which allows fifty, a hundred, maybe even thousands of data
05:39 – 06:38
scientists, developers, researchers, basically people building the models, to all build the models that they need and they want, that align with their work, with their own specific IDEs, their own frameworks, their own toolkits, while allowing the organization to manage that resource centrally. It sounds easy to explain, but it’s actually very difficult, I think, for organizations to navigate down this path. And so what we’ve done, and how we’ve also partnered with NetApp as a final part of our strategy, is around these three different types of personas, or groups as I call them, within an organization that are required to make this journey happen. These three groups that we’ve seen are, you know, data scientists and researchers, the people building the models. You have MLOps, or computational scientists or computational engineers, they have different titles depending on the organization, the people who are in charge of taking those models and scaling them into production. And then you also have
06:39 – 07:38
IT ops and data center folks, who we all know, who are in charge of basically ensuring maximum performance and security and all that good stuff, basically the stuff that the models run on top of. So you’ll notice that we’ve evolved our organization, as I described earlier, in a similar manner. When we’re actually working with one of these organizations, what we find is that all three groups within the client organization need to be sort of clicking for the momentum around AI and machine learning to continue. If even one of those groups has a challenge or gets blocked in some way, it basically slows all momentum driving toward the Center of Excellence. So we’ve built a lot of different unique programs over the years specifically for these three different types of groups, and each one of these has a way that we’re partnering with NVIDIA and also NetApp in that journey. You know, for data scientists
07:39 – 08:38
and AI-centric developers, what I call builders and innovation teams, we have the concept of an AI education series that we built, which is basically a series of, you know, one-hour virtual modules of content and labs that can be delivered virtually to, you know, one, ten, fifty, a hundred different builders, data scientists, researchers within an organization. That’s probably been our most popular offering, just because education is a huge thing, and it’s not like, you know, taking a one-week class; it’s something that’s really convenient and practical. You know, we have modules that are focused on everything from, you know, predicting pricing to doing really basic image analytics on medical imaging, as an example, right? Things that you can consume very quickly, in under an hour, and then take that lab home and just start building on whatever environment you’d like. We’ve got workshops that we built with the teams as well.
08:39 – 09:38
We also have a hackathon program that we put together that really helps bring builders and operators together within an organization. We found that this is extremely critical to get all the groups on the same page and working with each other, using a lot of the shared learnings that we’ve accumulated over the last, you know, five-plus years or so in the space. So that’s around the data scientist side. And how we partner with NetApp there is, obviously we have built our own modules around things like machine learning and deep learning, computer vision, and the ONTAP AI stack and things like that, but we also have joint modules that we built with NetApp around, you know, the container side: how to leverage things like the DataOps Toolkit and Trident and, yeah, AI Control Plane, and all that good stuff, as far as solving problems like reproducibility of data sets and traceability and things like that. I just want to underscore something Andy said.
09:38 – 10:38
The AI Center of Excellence is so important to us as well, because there’s this fast-start problem, and overcoming inertia, in so many organizations. And this is not just academic medical centers; we’ve seen global big pharma companies who have this problem too. They know they want to do something, everything tells them they should be doing it, but for some reason they just can’t figure it out. They have enough smart people, but they need somebody from the outside to kind of help harness all that energy and end up with something that’s usable for all the stakeholders. So that’s been huge in our relationship with Mark III, because we obviously bring a lot to the table, but Mark III is an equal partner; they absolutely complement what we do. So really, when you think about it, it’s Mark III, NetApp, and NVIDIA together bringing this concept to so many different kinds of customers who are really ready for it. This is not a hard sell when you start talking to all the different stakeholders.
10:38 – 11:38
As I said, you see how they’re interested and how it all comes together very well. So there’s obviously a huge unmet need. This is not a solution in search of a problem; this is a very clear problem that exists, especially today, right at this point in time. So it’s really phenomenal what we’re able to accomplish with Mark III, having this ready to go, and we’re already working with customers doing this. So this is not just a concept; it’s something that has been done many times, and it’s very repeatable and easy to do. I completely agree with that. I think the momentum right now is just enormous. In many ways, I think we probably started maybe a little bit too early in the space, when things were just picking up, but right now we’re entering what I call the early part of the curve in this space. I mean, you put this education out there for groups, and we offer it
11:38 – 11:56
at no charge, and people just come out of the woodwork. You know, everyone’s really hungry in this space to build models and to tackle old problems in new ways, and it’s a really exciting time to be involved with the AI space in general.
11:58 – 12:57
So earlier you mentioned ONTAP AI and other things that you’re using, like the DataOps Toolkit. Are you leveraging cloud at all at this point, or are you looking in that direction? What are your plans for that? Yeah, so we definitely have the skill sets, and we do partner with clients when they’re trying to deploy these types of stacks in the cloud, but I think the area that we really specialize in is being able to roll out an equivalent AI Center of Excellence or platform on premises or in a hybrid model. And we’ve found that this is an extreme need in the space, obviously. So you can, you know, use frameworks out there in the cloud, like SageMaker on AWS or Azure ML, right? But what we’ve found with a lot of clients is that they’ll start building models in the cloud, but then they’ll realize almost immediately that, wow, this is going to get really, really expensive; I need to at least have another option.
12:58 – 13:58
And this is an area where building a Center of Excellence that’s on-prem really helps. We’ve seen, running TCO models and things like that, that clients can save seventy, eighty percent plus; it’s an astronomical number. But up to recently, I would say within the last year or so, the challenge with building a similar platform on premises has really been the software layer that interacts directly with the data scientists. You know, when a data scientist logs on via, you know, a Jupyter notebook or RStudio or whatever, are they going to have the same experience? And up to recently the answer to that has been, you know, sort of, but not really. What we’ve seen, especially with a lot of these pieces that we’re working on in actual production, real life, over the last, you know, twelve to twenty-four months, is that the parity has really arrived.
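That seventy-to-eighty-percent figure is easier to reason about with a quick amortization model. The sketch below is a back-of-the-envelope TCO comparison; every price and utilization figure in it is a made-up placeholder (not a NetApp, NVIDIA, or cloud list price), and the only point it illustrates is that a heavily utilized on-prem system amortizes to a much lower effective GPU-hour rate than pay-as-you-go cloud.

```python
# Hypothetical cloud-vs-on-prem GPU cost sketch. All dollar figures and
# utilization assumptions are illustrative placeholders, not real quotes.

def cloud_cost(gpu_hours: float, price_per_gpu_hour: float) -> float:
    """Pay-as-you-go: cost scales linearly with GPU-hours consumed."""
    return gpu_hours * price_per_gpu_hour

def onprem_cost(gpu_hours: float, system_price: float, gpus: int,
                lifetime_years: float, opex_per_year: float) -> float:
    """Amortize purchase price plus yearly power/cooling/support over the
    system lifetime, then charge for the share of capacity actually used."""
    total_capacity = gpus * 24 * 365 * lifetime_years  # GPU-hours available
    total_cost = system_price + opex_per_year * lifetime_years
    return total_cost * (gpu_hours / total_capacity)

if __name__ == "__main__":
    hours = 8 * 24 * 365 * 3                 # 8 GPUs kept busy for 3 years
    cloud = cloud_cost(hours, 4.00)          # $4/GPU-hr (placeholder)
    onprem = onprem_cost(hours, 200_000, gpus=8,
                         lifetime_years=3, opex_per_year=20_000)
    print(f"cloud ${cloud:,.0f} vs on-prem ${onprem:,.0f}, "
          f"savings {1 - onprem / cloud:.0%}")
```

With these placeholder numbers the savings land around 70%, in the ballpark Andy describes; at low utilization the comparison flips, which is why the "bridge the cloud" hybrid option stays relevant.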
13:58 – 14:58
And what that means is that building an AI Center of Excellence in your data center, or at least bridging to the cloud, will allow you to build the models you need, secure them, scale them into production, but allow you to derive those cost savings. And that’s really the area that we sort of specialize in. We’ve found that it’s really our sweet spot, because we understand the data center just from, you know, growing up in the space, and we started working with NVIDIA and Kubernetes in 2018. So we have a lot of strength as far as working with clients around doing a full rollout, and we’ll co-pilot around that. A lot of times, after we run the education series, clients are excited. They’ve got, you know, even hundreds of data scientists that all want to build models, and maybe they decide, okay, well, we need to have it on premises for data security or sovereignty reasons, and then also the economics scenario I described earlier. Like, how do I do that? And really that’s where the middle part of our team comes in, the data ops
14:58 – 15:58
and MLOps folks in the middle, who will actually work with clients to roll that out. So that’s where, you know, the DataOps Toolkit, and that’s where NetApp’s involvement with Trident, comes into play, with storage provisioning with Kubernetes. Kubernetes is really emerging in the space as sort of the default modern orchestration platform that makes these AI stacks work. We do work with a lot of clients still around, you know, Bright and Slurm and, I guess, more traditional HPC cluster managers and scheduling, but what we’re seeing is that almost all the growth is in that particular space, around Kubernetes. So, yeah, that’s sort of how we work with folks. Like I said, it’s a three-pronged approach based on what’s really needed to guide a client to this end destination. As far as the Kubernetes piece goes, can you kind of walk me through what that workflow looks like? Like, where does
15:58 – 16:58
Kubernetes fit into the whole space, with the provisioning and the orchestration? Like, how is it implemented in that environment? Kubernetes, you know, is a container orchestration framework, as we all know, and it was open-sourced in the mid-2010s, but where it intersects with AI is specifically around when NVIDIA GPU scheduling with Kubernetes was really made available to the community, around the 2018 timeframe. And since then it’s made its way into, you know, all the enterprise distributions, which include OpenShift, Rancher RKE, Tanzu, and others. Those are sort of the big three for clients that are interested in enterprise-class distributions that we’re working with today. We also work with folks that want a vanilla Kubernetes approach, using a tool like NVIDIA’s DeepOps framework to be able to actually bootstrap and deploy that. Why Kubernetes is so important is, it’s actually a
16:58 – 17:58
framework that basically deploys and scales pods, which are basically microservices, that align with an overall software platform that ISVs create, like Domino Data Labs as an example. Or, what we’ve found is that some clients, advanced ones in the space, will actually create their own that runs on top of it; our developers actually even homegrew our own platform specifically for computer vision use cases. But this platform layer is super important, and Kubernetes is extremely important, because it allows you really to handle the end-to-end process of training models, by deploying and scaling pods depending on how many GPUs you need to be able to train and build models, but then also deploying and placing pods for inferencing. So once you have trained the models using GPUs within your Kubernetes cluster, you can then take that model, regardless of whether you use a framework like TensorFlow or Keras or PyTorch or whatever it is, and then actually deploy it
17:58 – 18:58
in place to make predictions using inferencing within that same cluster, and you can hit that, you know, from an inferencing perspective via an API or SDKs. And why we think that it’s extremely exciting is, obviously, with Kubernetes, right, you can run it in a data center and in the cloud, but you can also stretch it to the edge as well. If you’re using, you know, devices like, for instance, Jetson, or Clara AGX from medical instrumentation for inferencing, or Xavier, you know, different accelerators from NVIDIA that actually run on the edge, you can use Kubernetes to bridge that gap. So, you know, obviously prior to Kubernetes there really wasn’t a way to do these things; every part of that pipeline, from training to inferencing, was sort of viewed in isolation and managed separately, essentially, which leads to friction. But you can see why I think it’s so central to really the core of being able to build a Center of Excellence, no matter what vertical you’re in.
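To make the pod-scheduling idea concrete, here is a minimal sketch of the two Kubernetes objects Andy describes: a training pod that requests GPUs through the `nvidia.com/gpu` extended resource (advertised by the NVIDIA device plugin), and a Deployment that scales stateless inference replicas. The image names, object names, and GPU counts are hypothetical placeholders; a real ONTAP AI rollout would also attach Trident-provisioned volumes, a Service in front of the inference pods, and so on.

```python
# Sketch of Kubernetes manifests (built as plain dicts) for GPU training
# and inference scheduling. Names and images are placeholders.
import json

def gpu_training_pod(name: str, image: str, gpus: int) -> dict:
    """Pod spec requesting `gpus` GPUs; the scheduler will only place it on
    a node where the NVIDIA device plugin advertises enough nvidia.com/gpu."""
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [{
                "name": "trainer",
                "image": image,
                "resources": {"limits": {"nvidia.com/gpu": gpus}},
            }],
        },
    }

def inference_deployment(name: str, image: str, replicas: int) -> dict:
    """Deployment scaling identical serving pods, each holding one GPU,
    to answer prediction requests over an API."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{
                    "name": "server",
                    "image": image,
                    "resources": {"limits": {"nvidia.com/gpu": 1}},
                }]},
            },
        },
    }

if __name__ == "__main__":
    # Serialize for inspection; converted to YAML this could feed kubectl.
    print(json.dumps(gpu_training_pod("train-demo", "example/train:latest", 4),
                     indent=2))
```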
18:58 – 19:16
So, with the Kubernetes deployment that you’re using, I would imagine you’re tying it back to centralized storage, and that storage is probably generally going to be like ONTAP AI or something along those lines. How are you handling the data protection piece? Like, are you guys leveraging any sort of replication, or does it even matter for this data set?
19:18 – 20:18
Well, I think what we’ve found so far is that this area of the industry, around data protection when you use Kubernetes in a Center of Excellence for MLOps, is still, to be quite honest, a work in progress. I think what we’ve found is that most of our clients today are really just using manual sorts of backup and data protection strategies in the space, just because, you know, data protection for containers is still somewhat emerging, and if you combine that with AI, folks take sort of a one-off approach to this today. But to be honest, right now most are just trying to get the basic framework to work, and then we’re working with them on sort of one-off approaches to back up what they actually need. I’m sure this part of the industry will evolve over time. As far as, you know, snapshots and things like that, I think that’s something that clients are really taking advantage of.
20:19 – 21:18
This is where tools like the DataOps Toolkit come into play, and, you know, other software partners that specifically interact with CSI drivers and Trident. And, you know, it’s really easy to create, for instance, a clone at the Jupyter Notebook or JupyterLab level of different notebooks that you might have, and whatnot, using these types of approaches. So yeah, it’ll be interesting to see what happens over the next couple of years. And isn’t that part of the whole education aspect as well? Where data scientists are hyper-advanced in the things they do, but in other aspects they’re in the Bronze Age: they don’t really know how to move data efficiently. And so all this stuff that we can show them in terms of the integrations, like the DataOps Toolkit and stuff like that, where they can do things themselves that they’re not used to thinking about, and IT doesn’t really want to do, because many times IT is not even involved in that Center of Excellence
21:18 – 22:08
very much, because it’s a different budget line or what have you. So it’s opening the eyes of people to the possibilities of, hey, you can do this, or talking to them about why snapshots may be useful in terms of reproducibility of results, or, you know, keeping track of things in a different way, and having a concept of, well, you know, you have GitHub for your code, but how about GitHub for your training data too, if you need to reproduce the results in five years? No one can do that. So there’s just so much that we can do through education, because it’s already included once you have our stuff running. It’s just a matter of using it; it doesn’t require buying anything new. And people, even though they have a platform, may not be using it to the full extent of its possibilities.
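The “GitHub for your training data” idea can be sketched in a few lines: give every dataset state a content-derived version ID and record it next to the model that was trained on it. This is an illustrative toy, not the NetApp DataOps Toolkit API; in practice you would snapshot or clone the backing volume rather than rehash files, but the bookkeeping concept is the same. The function names here are hypothetical.

```python
# Toy dataset-versioning sketch: content-address a dataset directory so a
# trained model can be tied back to the exact data it saw. Illustrative only;
# real deployments would lean on storage snapshots instead of rehashing.
import hashlib
import json
from pathlib import Path

def dataset_version(root: str) -> str:
    """Hash file paths and contents: same bytes -> same 12-char version ID."""
    h = hashlib.sha256()
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            h.update(str(path.relative_to(root)).encode())
            h.update(path.read_bytes())
    return h.hexdigest()[:12]

def record_run(model_name: str, data_root: str, out: str) -> dict:
    """Write a tiny manifest pairing a model with the data version it saw."""
    manifest = {"model": model_name, "data_version": dataset_version(data_root)}
    Path(out).write_text(json.dumps(manifest, indent=2))
    return manifest
```

Re-running `dataset_version` on unchanged data reproduces the same ID, and any edit to the training set yields a new one, which is exactly the reproducibility property snapshots give you at the storage layer.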
22:10 – 23:09
I a hundred percent agree with that approach. Yeah, it’s absolutely needed, and a lot of times, like you pointed out, it’s just an afterthought. You know, organizations don’t even really think about that until they actually have the models working, because a lot of times that’s the most difficult part of the process. But, you know, once it’s up and working, how do you protect that? How do you make sure it can scale? And, yeah, for reasons like reproducibility or traceability, how do you go back and figure out which data you used in that particular model? Dataset versioning is really key; that’s extremely critical. So I think organizations like working with us together just because, when we explain the three-pronged approach, around how we tackle the data science layer, we tackle the MLOps layer, and then obviously we tackle infrastructure, we’re thinking about those things that they may not have encountered, that they may not encounter for a year or two, because we’ve lived it and
23:09 – 24:09
we’ve seen it. So I think, you know, the academic medical centers that we work with and the life sciences organizations and pharma, I think they have sort of peace of mind that we’re looking out for them, even if something may not have come up yet, because it’s not yet a problem, but it will be at some point. So, great point. Yeah. And also, at a higher level, what makes this collaboration that we bring to customers so unique is that, yeah, of course we’re trying to sell them something ultimately, but there’s so much more. This is not just, here’s ONTAP AI, a box; install it and then I go away. This is really, as you keep bringing up, Andy, a solution; this is about running the Center of Excellence. This is an ongoing process that doesn’t stop when the Center of Excellence is up and running. So that’s unique; I don’t see that often, and that’s why it’s so important for us to talk about it, so that everybody involved is aware of it, certainly everybody on the NetApp side, because this is a big competitive advantage, something that we should be talking
24:09 – 24:10
to everybody about.
24:12 – 25:12
Yeah, just to add on to that, and I totally agree. You know, Center of Excellence is, I think, a buzzword out there, and obviously we’re all in the technology industry here. But I really regard the Center of Excellence, and how we help organizations in the healthcare space and also other verticals, as half technology, half people, culture, and skills. And, you know, the technology side is obvious, right? We talked a lot about that today, around everything from Kubernetes to handle the orchestration layer, from data center to edge, down to the infrastructure, even down to things like, how do I figure out, you know, the power and cooling requirements? That’s actually a huge challenge in this space. But also, you know, on the education side, it’s, how do I empower data scientists and researchers to take their work to really the next level, where they were really blocked before? That’s really around education. And, like I said, we focus
25:12 – 26:12
on culture, bringing different groups together, builders and operators together, and we leverage really all our shared experiences to do that. And, you know, to a certain extent, like I said, because we’ve had a multi-year runway in the space, you know, five to seven years, a lot of the learnings that we have are hard-won, things that we share based on our own experiences and really what I call building every day, right? Obviously we talked about technology, but I think a lot of the most interesting things that we talk about are, you know, hey, we built a platform that did computer vision in 2017, and then in 2018 we scaled it on NVIDIA and Kubernetes; here are some lessons that we learned. Most of the software platforms in the space that we start working with with clients, whether somebody wants to consume really an ISV-driven platform or something that’s open source, right, almost always things are going to come, what I call, and this isn’t a bad thing, broken out of the box.
26:12 – 27:12
And we’ll work with organizations to address and fix the deployment and things like that, just because that’s the nature of the upside and downside of working with open-source software and, you know, driving innovation, I suppose. You’ve got to do a little bit of work up front. But these are just some of the lessons that we’ve learned to actually make this stuff work in practice. It’s not very glamorous, but once you really get to that point where you do have the Center of Excellence working, I mean, it propels an organization like you wouldn’t believe. So, like I say, we’re excited about where we are in the life cycle. I think we’re just entering the early part of, really, the era of the Center of Excellence, and it’s awesome to be working together with a lot of these amazing organizations across the country. So can you walk me through kind of an end-to-end engagement with a customer, right? Like, start off with how they reach you, what the conversation looks like to get them started, and then where does it go from there? Like, do
27:12 – 27:16
you stay involved throughout the entire process, or is there a stopping point at some point?
27:18 – 28:18
Yeah, so that’s a great question. Some of the things that I’ve mentioned are all different ways we work with organizations, but I’ll hit on, I think, a couple of the most popular ways that healthcare organizations and research organizations engage with us. Typically, most will engage with us because of interest around our AI education series first, or around something like our Mark III hack program, which is our ability to run hackathons for organizations. You know, obviously most are in the position where they have maybe 10 or 20 different groups, or different pockets of people, working on different machine learning models, but they’re all sort of in the shadows, in isolation, on their own laptops and their own workstations. So what we’ve found is that when we work with organizations and institutions, we always start, typically, with the key sponsor within the organization, who probably is in charge of serving all these groups, probably someone within
28:18 – 29:18
IT who’s directly attached to these research groups and these data science groups and these innovation groups and R&D groups. And then together we’ll offer up, you know, either the Mark III hack program or, more popularly now, the education series. And what ends up happening is, what we’ve found is, ten, fifty, a hundred, two hundred plus builders, developers, researchers typically attend these. We’ve just been extremely pleasantly surprised at really the outpouring of interest in this space. I think it’s just, you know, folks are hungry; that’s the only way I can describe it. And once that happens, projects and interests come out of the woodwork within the organization. It’s almost like really unlocking something. And then what happens next is, ultimately, that really empowers IT to know what the organization
29:18 – 30:18
is interested in, and then we typically start to work with IT around a pilot. So whether that be a POC or a pilot, it could be a two- or four-node, you know, ONTAP AI system, most likely powered by NVIDIA DGX A100. We will go in and actually do the rollout for that POC, and that client will typically pick whatever data science software layer they want. I mentioned one earlier; there’s actually an example of a large life sciences organization we’re working with right now around a POC that uses Domino Data Labs, as an example, but there are others out there as well. And then we’ll work with that organization’s data scientists. Typically they’ll assign five to ten, maybe, early adopters, or folks, to try out that tooling. Many times those data scientists are already building on their own laptops or their own workstations or their own departmental IT-type systems, or,
30:18 – 31:18
what we’re seeing more and more: they’re already building a lot of models in the cloud, whether it be on AWS or Azure, and they just want to see, hey, is this good enough for what I’m trying to do? Because if it’s good enough, in theory the AI Center of Excellence will probably save the organization, like I mentioned earlier, you know, 70 to 80 percent. So we’ll run through the POC with them. All of the POCs so far have been extremely successful, and if that works, then we’ll go to really the next phase, which is, okay, do I roll this particular pod into production, or do I look at something even larger? And you can expand an ONTAP AI-driven DGX POD to 4 nodes, 8 nodes, or even much larger than that, just depending on what’s needed. And we can help with that, like I mentioned, with the rollout into production. We’ll co-pilot. You asked earlier about, you know, our relationship
31:18 – 32:18
from a longevity standpoint. I mean, what we found is that we’ll obviously help roll this out, but really we’re integrated for the indefinite future, because what ends up happening is that once it’s in their environment, we’ll bring back our data scientists to actually build quick starts, videos, and internal tutorials on why data scientists and builders within an organization should use that Center of Excellence resource. It’s interesting, because once it’s built, you still need to make sure that folks are successful in onboarding. It’s almost like you’ve helped that client build a SaaS product, and that requires not only assisting with building models per se (we’re not really going to build the models for them, even though we have that capability), but helping them be successful in onboarding and making sure we reduce friction, just like being a customer success manager for a SaaS product. If you’re trying to launch your product, you
32:18 – 32:49
go at it the same way within that organization. And if you do that and make sure they have a great experience, it will obviously lead to that pod growing, which means more and more stakeholders within that organization are using AI and machine learning effectively to address problems that they probably never would have been able to address before, which is amazing for that organization, and also amazing for us and for NVIDIA and NetApp as well, to be part of that journey and part of that growth.
32:51 – 33:00
So Esteban, from the NetApp perspective, where do we fit in here? What are we helping out with, with Mark III? Like, how are we helping make this a success story?
33:02 – 34:02
Well, obviously it’s the technology that we bring to the table, but the stuff we’re mentioning isn’t just hardware, software, and open source. It’s the collaboration, and it’s our customers, right? Our customers need solutions, and even though we have a piece of the puzzle, we don’t have the complete puzzle. Even with ONTAP AI, we don’t have it either. NetApp alone is not enough; NetApp plus NVIDIA is not enough either. But NetApp plus NVIDIA plus Mark III, all of a sudden, that is magic. That is totally different from what existed previously. We each have very unique things we bring to market, and we have differentiators, but when you put all of that together, it’s really something. Everybody talks about this, but it’s maybe harder to implement: to sit down with customers and talk about what problems they’re interested in solving and what their pain points are. I mean, this is 101 stuff that everybody talks about, but in my
34:02 – 35:02
experience, very few people actually do it, because they have some mandate that they need to sell more boxes of this kind or that kind. So the luxury of sitting down and having that conversation, and plotting a course that may take many months or even longer, is rare. So I think it’s the power of that. And certainly, because we’ve been down this road for so long, we have a lot of customers: basically all the large pharmas, most of the large academic medical centers, SLED customers, universities. They all have NetApp. So many times we are already talking to them, and we can find the right people to sit down with and get this stuff started, especially because in many cases we find that they had conversations internally that were not really leading to anything productive, but we provide a catalyst.
35:02 – 36:02
So when we raise the issue, all of a sudden people come out of the woodwork and say, yes, this is something we’ve been trying to do, and it’s not really going well, or it’s not going quickly enough, and we have a lot of people interested. And certainly, when you’re talking about academic medical centers, usually there are universities around them in that ecosystem. So you’re talking about undergrads, grad students, medical students, doctors, researchers, recent PhDs trying to do their own research. Then, putting people together with the data, all of that is kind of the hard part that is not obvious. So I think it’s being part of our customers’ path that helps us be there for them, because we can ask the right questions at the right time. And that’s not a trivial thing, having those relationships, so that
36:02 – 36:55
you can be part of the right conversations and then have the right people at the table giving you mindshare, because otherwise you’re not going to get anywhere. So it’s really everything together, and everybody plays a role. If any of the pieces are missing, we’re not going to have the success that we’ve had so far. All right, sounds like you both have given us a lot to think about, in terms of AI in medical imaging and all sorts of different things, all the way from Kubernetes to storage. So, Andy, if I wanted to find out more information about Mark III, where would I go to do that? Yeah, you can check out our website at https://www.markiiisys.com/, or you can shoot me a note as well at andy.lin@markiiisys.com. All right, and Esteban?
36:56 – 37:38
Yeah, for anyone who wants to talk about this more, please get in touch with me: it’s Esteban Rubens. Certainly, if you’re part of NetApp, you can find me in the directory; otherwise, LinkedIn. As I said, it’s easy enough, and really, it’s all about finding people who want to engage in this conversation. We know that there are so many, and we just don’t have enough resources to go after everyone. So if you’re listening to this and you have customers in this space, we can just sit down and figure out who the right people are to talk to and how to start these conversations. And it’s going to be very, very good for everyone involved. All right, sounds great. Thanks so much for joining us today and talking to us all about Mark III and the NetApp partnership.
