Welcome to Episode 363, part of the continuing series called "Behind the Scenes of the NetApp Tech ONTAP Podcast."
On April Fools’ Day, you stay on your guard to avoid being pranked.
In IT, you always want to test before releasing something into production to avoid any surprises.
That methodology extends into other places, including the US military. Every year, the Navy and Marines conduct an experimental training exercise called Trident Warrior to test field deployment readiness. And in even years, Trident Warrior combines with RIMPAC, an exercise involving allied nations' forces, to conduct these tests.
NetApp was able to participate in these exercises recently, and Eddie Huerta (eddie.huerta@netapp.com), Frank Alcantar (frank.alcantar@netapp.com) and Dan Holmay (holmay@netapp.com) join us to discuss their experience and how NetApp ONTAP helps meet the US public sector's storage needs.
For more information:
Finding the Podcast
You can find this week’s episode here:
I’ve also resurrected the YouTube playlist. You can find this week’s episode here:
You can also find the Tech ONTAP Podcast on:
I also recently got asked how to leverage RSS for the podcast. You can do that here:
http://feeds.soundcloud.com/users/soundcloud:users:164421460/sounds.rss
Transcription
The following transcript was generated using Descript’s speech to text service and then further edited. As it is AI generated, YMMV.
Episode 363: NetApp, the US Public Sector and Trident Warrior 2022
===
Justin Parisi: This week on the Tech ONTAP podcast, we talk about Trident Warrior and how NetApp assists the US public sector with their storage needs.
Podcast Intro/Outro: [Intro]
Justin Parisi: Hello and welcome to the Tech ONTAP podcast. My name is Justin Parisi. I'm here in the basement of my house, and with me today I have some special guests to talk to us all about NetApp in the public sector, as well as the Trident Warrior initiative that's going on. So with us to do that, we have Dan Holmay. So, Dan, what do you do here at NetApp and how do we reach you?
Dan Holmay: Justin, nice to be here. Dan Holmay. I've been here at NetApp for almost three years. I'm the AI solutions leader for US public sector. My job is to pull together all the ecosystem partners that we work with, including NVIDIA and some of our other ISV partners, to build out end user solutions for our government customers.
You can reach me at dholmay, h-o-l-m-a-y at netapp.com and I’ll throw the ball back to you.
Justin Parisi: All right. Also with us here today, we have Eddie Huerta. So Eddie, what do you do here at NetApp and how do we reach you?
Eddie Huerta: Yeah. Hey Justin. Thanks for having me. My name is Eddie Huerta. I'm an account technology specialist on the Navy team.
I’ve been here 10 years and you can reach me at huerta@netapp.com or on LinkedIn.
Justin Parisi: Alright, and last but not least, Frank Alcantar. So Frank, what do you do here at NetApp and how do we reach you?
Frank Alcantar: Thanks Justin. I appreciate that. I joined NetApp back in 2007 as a systems engineer, and most recently, for the last handful of years, I've been working as an account manager. My area of operation for NetApp: I'm part of our Navy Marine Corps team. I own the account management responsibility for Navy West, a major program for the Navy, as well as our Hawaii and Pacific Rim regions. I can be reached via email at frank.alcantar@netapp.com, and I'm also available on LinkedIn.
Justin Parisi: All right, so it’s interesting that they combined the Marines and Navy. I thought that wasn’t allowed.
Frank Alcantar: Well, if we're being completely honest, the Marine Corps is part of the Department of the Navy, and you know, Marines have a saying as to which department they are, but the Marine Corps is a department of the Navy.
So the Marines fall under the responsibility of the Naval operations.
Justin Parisi: Oh, okay. I get it now. So it’s just the Army that’s the problem.
Frank Alcantar: Army and Air Force.
Justin Parisi: All right. Yeah, I'm just gonna create problems here. I'm gonna stop now. So let's talk about public sector in general, because I always hear the words "public sector" and it doesn't quite make sense why it's government and that sort of thing. So can you kind of give me an idea of why we call it the public sector?
Frank Alcantar: When you look across the ecosystem of OEM providers out there, companies like NetApp as well as a Cisco, a Microsoft or an AWS, there's a sector of the business that deals directly with the government, and they all have different names. Some will call it government, some will call it public sector, some will call it DoD. From a NetApp perspective, we call it US public sector. Public sector includes all of DoD, it includes some of our agency-type support within the DoD, but we also support state, local and higher education as well.
So, from that perspective, that’s the reason why we call it public sector. It’s all public based services and industries.
Justin Parisi: Okay, so there’s this thing called Trident Warrior, and I’ve never actually heard of this. So talk to me about what Trident Warrior is and where NetApp fits in there.
Frank Alcantar: Sure. So Trident Warrior is an annual experiment, and it's run by the Navy's Third Fleet out here in San Diego.
The charter for that is to bring in emerging technologies made available by commercial companies as well as large integrators, where they'll develop new capabilities, new methodologies, new concepts, things that could bring value to the fleet.
They do this annual experiment to be able to showcase that. And it's an orchestrated, funded process. Typically, to participate in Trident Warrior, you're working directly with the end user Navy. You identify a need, and when you identify that need, there's a very rigorous process of documentation and validation, because you're connecting this emerging capability into a Navy network and you're testing it in a real world scenario.
So, let me tie that into RIMPAC. Trident Warrior happens every single year. RIMPAC happens every other year, and RIMPAC is essentially an experiment that happens in the PAC AOR, the Pacific region and all the surrounding countries around the Pacific; they call it the Pacific Theater. In even years, RIMPAC and Trident Warrior merge into one experiment, and they did for the year of 2022, which we just completed. We initiated our experiment proposal into Trident Warrior, and after months of validation and vetting of our capability, we decided to test our experiment during the RIMPAC execution period in Hawaii. So really, when you think about RIMPAC, the theme for 2022, the year we executed our experiment, was "Capable Adaptive Partners."
RIMPAC forces will exercise a wide range of capabilities, projecting the inherent flexibility of maritime forces and helping to promote a free and open Indo-Pacific. So that's the theme that they went after. And then I'll close here to give you a sense of the magnitude and scope of RIMPAC.
There were approximately 26 nations participating in RIMPAC from a Navy perspective, as well as coalition forces. There were 38 surface ships, four submarines, nine national land forces, countless unmanned systems, about 170 aircraft and more than 25,000 personnel in order to execute it. And it's not just one experiment, Justin. We, from a NetApp perspective, built out a particular experiment, and we'll talk through that in the rest of this brief, but it is a whole initiative to execute RIMPAC. It actually went from June to August and stretched from Southern California all the way out into the Pacific Theater.
So that’s what RIMPAC and Trident Warrior is.
Justin Parisi: Okay. And what is NetApp’s participation in this? Like what sort of support do we provide?
Frank Alcantar: Our participation, Justin, was working directly with the Navy. We had a customer sponsor within the Navy that had a particular need, and the particular need was around building a Data Fabric, a hybrid cloud architecture. There's a significant need around our community, and when I say our community, I mean out here in the Navy. We see a significant need in building data fabrics, having data mobility, and the ability to burst into cloud and build these hybrid architectures. There are a lot of security implications. There's a lot of just owning the environment end-to-end, from on-prem into cloud.
And so there’s some challenges there, so that’s a gap. We identified that need. So what we decided to do was work with our customer to build a concept of operations on how to actually demonstrate that. So for the people out there that understand kind of the way the military operations work there’s a operation called a non-combatant evacuation operation, a NEO op.
Typically, what that means at a high level is you deploy in theater and you start an operation where you're moving friendly non-combatants out of a hostile area. And as you move those people into, say, a United States Navy or Marine Corps controlled space, you wanna be able to vet those people so that you're not letting a bad actor into a safe space.
So we decided to use that NEO op as the underpinning for our experiment. What we decided to do was build out a deployable architecture that had a NetApp storage array and compute servers. We had some workstations to actually do the work. We utilized 5G-connected Cradlepoint technology, and then we built out a cloud space in Microsoft Azure, deployed CVO, and used cameras to capture data. We used a piece of software to be able to run the AI/ML-based algorithms against that data.
And then we stored the data, replicated the data from on-prem into cloud, and that basically created the concept of a rapidly deployable, highly scalable, secure enterprise Data Fabric deployed at the edge. That was the concept. And from a NetApp perspective, the value we brought to the Navy in this case is countless, and I say countless because there's really no way to quantify it on this call.
But there’s countless deployments of NetApp technology within the Navy and Marine Corps’s infrastructure. Whether it’s in a data center, whether it’s out at the edge, whether it’s deployed in theater we have a lot of enterprise deployments and a lot has to do with the fact that we understand the customer we work with our Naval and Marine Corps customers every day. We understand accreditation, we understand security. We’re tried and proven. We’ve got many, many accredited platforms deployed in production. So we’ve gone through the security process to help our customers achieve their authority to operate, to include NetApp technology in their infrastructure.
We’re a known entity. We’re very secure and capable, and our systems can service all of our customers’ data needs while still meeting their rigorous security requirements.
Justin Parisi: So Dan what sort of involvement did you have with this particular project?
Dan Holmay: Yeah. Well, Frank, I think you did a pretty good job of laying the foundation for what we were there to do, right? That easily deployable hybrid data fabric to demonstrate the capability of gathering data at the edge and providing it back to a CONUS-based command and control, or to be used for training AI models in the cloud, wherever that needs to happen. My role was really to come in, and we actually identified a way that we could demonstrate this that was a little bit more interesting, because we're really the underpinning technology that enables all these things. But we worked with the Marine Corps, and Frank, I think you did a good job of helping guide us towards what's gonna be a valuable output for the US Marine Corps. And unfortunately, there had been events related to evacuation command and control scenarios.
So we looked at that capability of helping increase security in those scenarios, either in a disputed zone or at the very edge. We brought together our partners at NVIDIA as well as a company called Protopia AI, and we worked with them on a machine learning algorithm that helped us do facial recognition, to recognize bad actors in a combat zone or an evacuation command and control scenario without necessarily having to expose identities, and still have a high rate of accuracy for identifying, again from a global database, bad actors in that zone. So in this case, we helped Protopia AI build out facial recognition based on some of the national standards that come from NIST, but used their transformation algorithm to mask everyone's personal information while still being able to alert on bad actors in that zone. And from there, we were actually able to demonstrate the capability of not only doing that in real time at the edge, but, as Frank said, pushing that data back to a cloud environment to be used for any number of different tasks, right? So again, my role was really working with Eddie and Protopia to make sure that we had something that was demonstrable for the Navy and the Marine Corps to say, okay, that's interesting. That helps increase security in these types of scenarios, but more importantly, it drives home some of the areas that we know our DoD customers need to be thinking about: the risks that are involved with gathering millions of terabytes of data from all sorts of streaming sensor sources and utilizing those in different ways. And as AI becomes more heavily utilized by the DoD to make decisions or provide insight in real time, this is something that I think was also relevant.
It was maybe secondary to what we were there to prove out. But I think it’s got a lot of relevance, especially as we’ve been talking to other DoD customers since last July. And Eddie, I think maybe we wanna talk a little bit more about what we actually did.
Eddie Huerta: So for me, if we take a step back a little bit, we got to this point by being able to come up with our own ideas, right? The Navy didn't ask us to bring a specific solution. The experiment is, hey, what can you bring to the table? Once we came up with these use cases, bringing in Protopia, NVIDIA and Microsoft, and bringing in our partners at CDW, it was kind of up to me to put it all together, right? So how is this thing gonna actually work? How is it gonna operate? How are we gonna showcase this when we're out on island setting this up?
So going through the little transitions of running the software on a CPU-based processor versus GPUs, seeing the increase in the frame rates, going through the whole workflow to tweak it, and working with our partners, that was my role.
So, we’re the three here that owned it, but we had a pretty big ecosystem of partners helping us out.
Justin Parisi: Okay, cool. Are there any other initiatives that go on with the public sector that NetApp gets involved with other than Trident Warrior?
Dan Holmay: Yeah, I can draw a direct line to the work that we did with Trident Warrior. We've actually socialized this experiment with our friends on the other side of the pond within NATO, really looking at what we did from an obfuscation standpoint with the Protopia technology, as well as the hybrid cloud data fabric deployment, and really helping them address some very real problems: how do you share insight with your alliance friends without necessarily breaking data governance laws that are different across every single one of those NATO countries?
So how can we help deploy that both on-prem and in a cloud and share data in those types of scenarios? We're also looking at something as simple as taking a direct replication of what we did for Trident Warrior, facial recognition on obfuscated data to protect sensitive information but provide real-time alerts. The US Air Force has expressed interest in being able to track who might be approaching different bases, and utilizing different technologies to increase security in those scenarios.
And then beyond that, we're actually talking to Army Intel as well about being able to deploy this for all kinds of different workflows or complex needs that they have, from automated target acquisition to, again, increased security, or even within the VA, being able to run inference against clinical data without necessarily exposing HIPAA information and getting better outcomes for our veterans' health. There are some really interesting areas where this has already caught traction. I think the biggest thing, to Eddie's point, was that because we weren't confined to a specific ask, we were able to think about something we could do that demonstrated not only value to the US Marine Corps for Trident Warrior, but now has applicability across other areas of the DoD.
Justin Parisi: So with the AI solutions, are you pushing the NetApp ONTAP AI package, or are we doing other things like implementing Active IQ or Cloud Insights, or all of the above?
Dan Holmay: You know, it is all of the above, but it does start with ONTAP AI. If you look at where enterprise customers have traditionally trained and deployed AI models, they would do that on-premises, but for the economies of scale and efficiency, a lot of our DoD customers are looking to the cloud, either public or hybrid/private cloud, to expand their compute capabilities for deploying AI. ONTAP AI is where we really start. Being able to deploy some of these things, even with BlueXP or Cloud Insights, to tell them where their data is, who has access to it, and how it could potentially be leveraged to increase efficiency for some of these AI models is important. That, however, exposes some real risks when you're moving data across different infrastructures. Some data is meant to only live in a specific enclave. And so those specific technology capabilities that are already built into ONTAP are very useful when it comes to helping them share data across on-prem and the cloud, and minimize the risks of migrating data to and from, to train AI workloads. Beyond that, we've looked at how we actually build more things like Active IQ into our DoD customers' daily usage so that they have more control over how they build out their future architectures, right? They need to understand that building more data silos isn't necessarily gonna get them closer to what they want faster. So we can deploy ONTAP, we can deploy BlueXP or Cloud Insights, and help them make the right decisions now so they're doing less retooling down the road.
Justin Parisi: You mentioned the hybrid cloud challenge and data governance and that sort of thing. And as far as I understand it, the way that it’s being handled by the government entities is creating basically their own cloud infrastructure. And I guess there’s that JEDI initiative, if I’m understanding it correctly. Is that something that you see as well? And if so, where does NetApp fit into that?
Frank Alcantar: Let me speak to that a little bit, Justin. We're working very closely with our hyperscaler partners. We're also working very closely with our end user customers. You're right, cloud is definitely an open ocean for our customers. Some are a little further along than others. Our customers are looking for, I'll call it direction, and finding ways for them to easily adopt cloud. So Eddie and I are working daily and regularly with our customer base around cloud. The Navy does have different cloud initiatives. They actually have some pretty large enterprise-wide cloud-based deployments for certain services, things like email and collaboration, using Teams and things like that.
So they do have that out there, but there definitely is some consumption out there that's being challenged. Now, I want to tie this back to Trident Warrior and RIMPAC. One of the things that we're very focused on is that our customers are looking to go to cloud, looking to adopt cloud into their everyday operational work environment. And so one of the things that we work very hard to do is to help our customers see NetApp as an on-ramp into cloud. And to your point, look across the major hyperscalers, right?
You know, Google, AWS and Microsoft Azure. Our technologies are available in their cloud spaces, whether it’s doing something like a CVO deployment or doing something like ANF or FSxN, we’re tied very closely to that.
And there is so much NetApp deployed around the Naval enterprise and the Marine Corps enterprise today that we try to help our customers see that, hey, you can do that very easily, because you can spin up software-defined ONTAP or land on an ONTAP volume in the cloud, and then, as long as they can deploy the networking and the security controls to create these private networks, they can have, one, data mobility into the hybrid cloud and, two, the ability to manage all that from a single pane of glass. And where I want to tie it back to Trident Warrior: that really was our experiment. That really was the focus, right? The focus was: how do you take everything that's available to our customer today?
ONTAP is widely deployed and widely available to our customer. VMware, right? Everybody knows VMware; we use it on-prem. We used an x86 server. We used a space in Microsoft Azure to deploy our own VNet and then deploy CVO. We deployed typical technology and solutions that the Navy uses today. And I should step back and say, we originally started this by building an entire lab environment at one of our partners. CDW here in San Diego has a lab, and Eddie and I were able to put a bunch of NetApp technology into that lab so that Eddie could have the canvas to build out the architecture.
The one thing we wanted to demonstrate, Justin, through this whole process was: when you have these conversations and you talk about the art of the possible and the different capabilities, that's all fine, but when you can demonstrate that in an easy to understand and easy to see experiment package, that's where it makes sense. And that's where, from a NetApp perspective, the Trident Warrior RIMPAC experiment really made it easy, because our customers were able to see what a data fabric looks like. Right? What does data movement look like? We took pictures, we had descriptions of everything that we did, and we also made sure to depict the workflows: how the data was generated, where we captured the data with the cameras, how we stored that data on the NetApp storage system running a hypervisor like VMware where we had the VMs running. Then we were able to replicate that data using NetApp SnapMirror to the cloud via a Cradlepoint device over a 5G network. So we were using a public network; we had a VPN between the Cradlepoint and the cloud. Then we deployed CVO in Azure. Eddie had another workstation deployed in the cloud. So what we were able to do was take the cameras, capture the data, run the models against that data, obfuscate that data so we were sensitive to PII, store the data on the NetApp, SnapMirror the data, replicate it to the cloud, and somebody 20 feet away in a command tent could sit there, connect to the cloud workstation and view the images. And we were SnapMirroring as often as possible, within five minutes.
So they were within minutes of viewing the data that we were capturing 20 feet away, and they were viewing the data as it was stored in an Azure data center back on the mainland. So that was the concept of building that enterprise data fabric, creating that data management layer for the enterprise. And that was all possible with all of our different technology partners.
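For readers who want to picture the five-minute cadence Frank describes, on the ONTAP CLI that can be as simple as attaching the built-in 5min schedule to a SnapMirror relationship. The SVM and volume names below are hypothetical, and exact syntax can vary by ONTAP release:

```
# Hypothetical paths; ONTAP ships a built-in "5min" cron schedule.
snapmirror create -source-path svm_edge:vol_capture \
    -destination-path svm_cvo:vol_capture_dr -type XDP -schedule 5min
snapmirror initialize -destination-path svm_cvo:vol_capture_dr
```

After the initialize completes the baseline transfer, incremental updates run on the schedule, which is why the command tent was never more than about five minutes behind the cameras.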
And then finally, I think the last piece was the NVIDIA component. We initially tried to do the experiment without an NVIDIA GPU, and we weren't getting the performance we needed, so we installed a GPU into our server. And Eddie, keep me honest, but what was the benefit of that? The increase?
Eddie Huerta: Yeah. Well, during the testing, using the CPUs with the Protopia software, we would see anywhere from three to four frames per second, capturing the feed and rendering it through the Protopia software, through the AI engine. But using the GPUs, and it wasn't a big surprise to us, increased it to somewhere like 30 to 35 frames per second.
So it was obviously much better using the GPUs. And we actually had an older Tesla V100 card in there, so this was our experiment that we owned, and we brought all the pieces together. Had we had a newer version, it might have been even better. But yeah, obviously a great increase using the GPUs.
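The CPU-versus-GPU comparison Eddie describes comes down to measuring inference throughput. A minimal sketch of how you might time that, with `time.sleep` standing in for real inference work (the ~10x gap below is synthetic, chosen to mirror the 3-4 fps vs. 30-35 fps numbers from the episode):

```python
import time

def measure_fps(infer, frames, warmup=2):
    """Run a few warm-up passes (JIT, caches, GPU init), then time
    inference over all frames and return throughput in frames/sec."""
    for frame in frames[:warmup]:
        infer(frame)
    start = time.perf_counter()
    for frame in frames:
        infer(frame)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed

# Stand-ins for the real pipeline: pretend the CPU path costs ~10 ms
# per frame and the GPU path ~1 ms.
def cpu_infer(frame):
    time.sleep(0.010)

def gpu_infer(frame):
    time.sleep(0.001)

frames = [None] * 30
cpu_fps = measure_fps(cpu_infer, frames)
gpu_fps = measure_fps(gpu_infer, frames)
print(f"CPU path: {cpu_fps:.0f} fps, GPU path: {gpu_fps:.0f} fps")
```

The warm-up passes matter in real benchmarks: first-frame costs like model loading or GPU context creation would otherwise skew the average.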
Dan Holmay: Yeah. And I just wanna jump in real quick, because actually, Frank, you did a really good job of teeing up some things that we're seeing. Back to the JEDI question. You know, JEDI was a contract that was awarded, and it actually has, I think, morphed and changed to move over to what is the JWCC. The DoD is looking at there being four different options, AWS, Google, Microsoft and Oracle, to be able to provide cloud capabilities. And what Frank laid out, where NetApp has a unique position in the world, is that, with the exception of OCI, we work with all of those large CSPs. We've got first party services available there. In this case, we leveraged Azure, but we would be able to deploy something like this in any of them. And if you look specifically at the DoD and what they're actually trying to obtain by building out something like this data fabric across different cloud and on-prem infrastructure, it's that flexible and reliable cloud and on-prem infrastructure, managed data services, searchable and collaborative data catalogs, modern tools applying the power of GPUs for advanced analytics, and then the enterprise visibility that Frank talked about: being able to have that data available in near real time for command and control that may be 20 feet or thousands of miles away, to then also provide data accessibility across the Navy and increase the viability of different partnerships across that enterprise. So when I think about the driving forces the Navy is looking for across DoD, NetApp really hits a lot of those things. The experiment we were going to demonstrate on island wasn't just about finding bad guys trying to get in through an evacuation command and control scenario. It was about demonstrating, in a very real way, these capabilities that our DoD customers are ultimately looking for.
Frank Alcantar: One thing I did want to add to the last thing Dan mentioned: obviously, you start to test a lot of different things, right? For the 10 days we were there, we did a total of three different experiments. And for the final one, what we did was see how long it would take to do a stand-up. So imagine we show up in the morning. I mean, we happened to have everything in the back of a truck, but imagine everything's in a CONEX box and it's dropped into a location in theater. How long would it take to stand that up? We started the clock the moment Eddie started to turn the first screw or lever on the box to uncrate it. And Eddie, keep me honest, but I think within 20 minutes we had full data flow going and the system was completely up and running, 100%.
Eddie Huerta: Yeah, spot on. 20 minutes, because once everything's set up, it really was just being able to fire everything up, check the replication, and make sure the data's being ingested properly.
And then from there, like you mentioned earlier, we were able to go to a completely different experiment and give them access to our Azure environment, and they could see the data as it was being ingested from the cameras, through the ONTAP system, up into Azure.
And they were watching the video near real time. So it’s pretty cool.
Frank Alcantar: Our customer was there with us, and that was one of the things that they wanted to measure: hey, now that we have it all up and running, how do we flex the muscle a little bit? They wanted to see what it would look like if we had to stand this up in a real world environment. So we treated it as such. We treated it as showing up and doing that process. We hadn't planned on doing that, but it was great to see how we could bring all the services online and have everything up and running within 20 minutes of unboxing the first component.
Justin Parisi: Do they airdrop you in? Do you parachute?
Frank Alcantar: There were helicopters around us, so it sounded like it, but no.
Justin Parisi: That’s good. It makes it realistic. So as far as standing this up quickly, are you doing any sort of automation here? Are you using things to try to help that along and error proof it?
Dan Holmay: Actually we’re taking what we’re calling TW22 2.0 to our lab in RTP, and again, I mentioned a few of those other potential customers that are interested in the specific experiment we did to do more of a rapid deployment. So even under 20 minutes, which I would argue is pretty good for our first time out. But we’re doing a little bit of troubleshooting with both Protopia and Nvidia to update some of the technology. Eddie mentioned we were using an older generation of GPU and then some of the automation where a Marine or any other service man or woman would be able to deploy with very little IT background, or no background at all ,to be able to set up cameras, set up the deployed data centers at the edge. Plug it in and go. We’re gonna be looking at trying to come back, to RIMPAC TW in 2024.
But in the meantime, we've got some traction. So we actually hope that by this July we'll have our 2.0 that we'll be demonstrating in our lab. And then beyond that, we want to take the opportunity with customers to have them bring us their use cases, right? This isn't just for video streaming data.
This can be any kind of data that's captured at the edge or anywhere from a sensor, or data shared across a network that's disparate and disconnected, and how you can leverage the data protection and encryption and data governance tools that we already have in place.
But again, making that very easy to use and deploy in a rapid way.
Justin Parisi: And I imagine this involves failover testing and disaster recovery and that sort of thing, because that’s important in those situations as well.
Dan Holmay: Well, absolutely, and Frank, maybe you can speak to this, but one of the things I thought was really interesting was the feedback we got when we were at FCL. A gentleman came up from the Navy and said, really, you're also worried about making your data useless to your adversaries. So beyond disaster recovery and backup, what if that box or that data falls into the wrong hands? How do you make sure that it doesn't become an asset for your opponent?
Frank Alcantara: I’ll speak to that. So what Dan’s referencing is why we’re so widely deployed throughout the Navy is because of our enterprise data management and the fact that we understand the security implications and we work very, very diligently. We have an entire area focused around security and certifications. . And so one of the things that we did, we also talked through and I hadn’t brought up FCL West 23, but we had the opportunity to brief the Trident Warrior RIMPAC experiment at a showcase presentation panel and we basically had a 20 minute opportunity to talk about what we did. And in there we talked about the architecture, talked about the workflow, and then talked about all the security that was layered within there. From a NetApp perspective, we take security very, very serious, as you know. It really is a defense in-depth approach. So we talked about all the different accreditations and certifications we have for NetApp, how we follow DISA STIG standards. And then we also talked about those certifications, how they applied within NetApp and how we had things like encryption enabled and an encrypted tunnel between cloud and NetApp. So we’re encrypting the data at rest, we’re encrypting the data in flight and so you really kind of talked about that end-to-end data protection and that is also part of what made the whole experiment slash solution attractive, because it takes all of this into consideration, but it didn’t really impede the experiment.
It doesn't impede the actual use cases or the actual work that needs to be performed; that all happens as part of the everyday architecture. But it's so fundamental and so important, and we really try to express that.
Justin Parisi: The other benefit there is the simplicity of the security, because that's often one of the hardest things to get right: securing your environment. So making that easier for your admins is gonna be a huge benefit for them.
Eddie Huerta: Yeah, absolutely, Justin. That was something where we basically just said, "Hey, let's turn on encryption and make it really easy." And once you have it turned on, you're good to go, right? That worry is no longer there. So we focused on that simplicity and security, and the ease of management from a NetApp perspective makes it really easy.
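[Editor's note: For readers curious what "just turn on encryption" looks like in practice, here is a rough illustrative sketch using ONTAP CLI commands. The SVM, aggregate, and volume names (svm1, aggr1, vol_secure) are placeholders, and the exact steps depend on your ONTAP version and key manager configuration, so treat this as a sketch rather than a procedure:]

```shell
# Enable the Onboard Key Manager (ONTAP 9.6+; prompts for a passphrase)
security key-manager onboard enable

# Create a new volume with NetApp Volume Encryption (NVE) enabled
volume create -vserver svm1 -volume vol_secure -aggregate aggr1 -size 100g -encrypt true

# Verify which volumes are encrypted
volume show -is-encrypted true
```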
Frank Alcantar: One of the biggest things, and Eddie and I and Dan talked about this a lot: for me, I like to keep things simple. I don't want to have these complex discussions that everyone talks about without getting to how you implement them. And we wanted Trident Warrior, our RIMPAC opportunity, to experiment with a capability that was easy. I think we demonstrated a very, very advanced concept with the whole Protopia/NVIDIA data movement to the cloud. It really was a complex orchestrated experiment that Dan and Eddie were able to put together, but fundamentally, it was easy to understand. The elegance, in my opinion, was in the simplicity of it. It was easy to understand that the technologies and capabilities were there. We were able to show it and demonstrate it.
I think that's what was so elegant about the solution. 'Cause if you pick it apart, it looks simple. When you put it together, it's a complex set of orchestrated processes, but you don't have to be an AI, data fabric, or hybrid cloud storage expert. You were able to understand those workflows and what we were doing, and that's really what we wanted to demonstrate as well. I'll give Eddie and Dan a pat on the back for the fact that they were able to deliver that while making it easy to use and easy to understand. I thought they did a great job.
Justin Parisi: All right, Eddie, if I wanted to find more information about all this, where would I go?
Eddie Huerta: Yeah, you can reach me directly at h-u-e-r-t-a at netapp.com or find me on LinkedIn.
Justin Parisi: All right. And Frank.
Frank Alcantar: Yeah, thanks, Justin. Again, it's frank.alcantar@netapp.com. We do appreciate the opportunity to speak through this, and we are working internally with multiple teams to make this available as a piece of material, whether it be a data sheet or some type of light-read document. That'll be available sometime in the near future. But in the meantime, for any questions, feel free to reach out to me directly at frank.alcantar@netapp.com, or my direct line is area code 760-216-2954.
Thanks, Justin.
Justin Parisi: All right. And Dan.
Dan Holmay: Thanks again, Justin. Yeah, it's easy to reach me. It's D as in Dan, h-o-l-m-a-y@netapp.com. If you check out my LinkedIn page here in the next couple of weeks, I'll be releasing a couple of docs that are just gonna be available for public consumption, or you can look at netapp.com/artificial-intelligence and we're actually gonna have that on our main page as well.
Justin Parisi: All right. Excellent. Well, thanks everyone for joining us and talking to us all about Trident Warrior, as well as how NetApp assists with the US public sector and their storage needs.
All right. That music tells me it’s time to go. If you’d like to get in touch with us, send us an email to podcast@netapp.com or send us a tweet @NetApp.
As always, if you'd like to subscribe, find us on iTunes, Spotify, Google Play, iHeartRadio, SoundCloud, Stitcher, or via techontappodcast.com. If you liked the show today, leave us a review. On behalf of the entire Tech ONTAP podcast team, I'd like to thank Frank Alcantar, Dan Holmay, and Eddie Huerta for joining us today.
As always, thanks for listening.
Podcast Intro/Outro: [Outro]