Why Is the Internet Broken: Greatest Hits

When I started this site back in October of 2014, it was mainly to drive traffic to my NetApp Insight sessions – and it worked.

(By the way… stay tuned for a blog on this year’s new Insight sessions by yours truly. Now with more lab!)

As I continued writing, my goal was to keep creating content – don’t be the guy who just shows up during conference season.

So far, so good.

But since I create so much content, it can be hard for new visitors to find; the WordPress archives/table of contents are lacking. So, what I’ve done is create my own table of contents of the top 5 most-visited posts.

Top 5 Blogs (by number of visits)

TECH::Using NFS with Docker – Where does it fit in?

SMB1 Vulnerabilities: How do they affect NetApp’s Data ONTAP?

TECH::Become a clustered Data ONTAP CLI Ninja

ONTAP 9.1 is now generally available (GA)!

NetApp FlexGroup: An evolution of NAS

DataCenterDude

I also used to write for datacenterdude.com on occasion.

To read those, go to this link:

My DataCenterDude stuff

How else do I find stuff?

You can also search on the site or click through the archives, if you choose. Or, subscribe to the RSS feed. If you have questions or want to see something changed or added to the site, follow me on Twitter @NFSDudeAbides or comment on one of the posts here!

You can also email me at whyistheinternetbroken@gmail.com.

Behind the Scenes Episode 314: What’s New in NetApp Active IQ Digital Advisor – Jan 2022

Welcome to Episode 314, part of the continuing series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”

This week, Principal TME Brett Albertson (bretta@netapp.com) joins us to discuss the latest updates to NetApp Active IQ Digital Advisor (@netappactiveiq) and how to get the most out of your Active IQ software.

For more information:

Podcast Transcriptions

Transcripts not available currently

Tech ONTAP Community

We also now have a presence on the NetApp Communities page. You can subscribe there to get emails when we have new episodes.

Tech ONTAP Podcast Community

Finding the Podcast

You can find this week’s episode here:

You can also find the Tech ONTAP Podcast on:

I also recently got asked how to leverage RSS for the podcast. You can do that here:

http://feeds.soundcloud.com/users/soundcloud:users:164421460/sounds.rss

This is the Way – My K8s Learning Journey, Part 4: Initial configuration challenges and unexpected surprises

This post is one of a series about my journey in learning Kubernetes from the perspective of a total n00b. Feel free to suggest topics in the comments. For more posts in this series, click this link.

In the previous post of this series, I finally got a K8s cluster installed using kubectl. But, there were some curiosities…

Unexpected (and hidden) surprises

First of all, it looks like the scripts I used interacted with my Google Cloud account and created a bunch of stuff for me – which is fine – but I was kind of hoping to create an on-prem instance of Kubernetes. Instead, the scripts spun up all the stuff I needed in the cloud. I can see that I’m already using compute, which means I’ll be using $$.

When I go to the “Compute Engine” section of my project, I see 4 VMs:

Those seem to have been spun off from the instance templates that also got added:

Along with those VMs are several disks used by the VM instances:

Those aren’t the only places I might start to see costs creep up on this project. That deployment also created networks and firewall rules. I scrolled down the left menu to “VPC networks” and found several external IP addresses and firewall rules.

When I look at the “Billing” overview on the main page, it says there are no estimated charges, which is weird, because I know using stuff in the cloud isn’t free:

When I click on “View detailed charges” I see the “fine print” – I’ve used $12.37 in my credits in the past few days:

So, it wasn’t immediately apparent, but if I leave these VMs running, I will eat through my $300 credit faster than the 90 days they give. And when you scroll down, it gives more of an indication of what is being charged, but the line items all read $0.

That’s because there’s an option to show “Promotions and others” on the right-hand “Filters” menu. I uncheck that, and I get the *real* story of what is being charged.

So, if I hadn’t dug deeper into the costs, this learning exercise would have started to get expensive. But that’s OK – that’s a lesson learned. Maybe not about Kubernetes or containers, but about cost… and about where these types of deployments are going.
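
Incidentally, you don’t have to click through the console to see all of this. Here’s a quick sketch of how to audit what got created from the CLI, using standard gcloud commands against whatever project is currently active – usually the fastest way to spot anything quietly burning credits:

# gcloud compute instances list
# gcloud compute disks list
# gcloud compute addresses list
# gcloud compute firewall-rules list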

Cloud residency

When I started this exercise, I wanted to deploy an on-prem instance to learn more about how Kubernetes works. I still may try to accomplish that, but it’s apparent that the kubectl method found in the Kubernetes docs isn’t the way to do it. The get-kube.sh and other scripts create cloud VM instances and all the necessary pieces of a full K8s cluster, which is great for simplicity and shows that the future for K8s is a fully managed platform as a service, hosted in the cloud. That’s why things like Azure Kubernetes Service, Google Kubernetes Engine, Amazon EKS, Red Hat OpenShift and others exist – to make this journey simple and effective for administrators who don’t have the expertise to build their own Kubernetes clusters/container management systems.

And to that end, managed storage and backup services like NetApp Astra add simplicity and reliability to the mix. As I continue on this learning exercise, I am seeing more and more why that is, and why things like OpenStack didn’t take off like they were supposed to. Kubernetes seems to have learned the lessons OpenStack didn’t – that complexity and lack of a managed service offering reduces accessibility to the platform, regardless of how many problems it may solve.

But I’ll cover that in future posts. This series is still about learning K8S. And I still can’t access the management plane on my K8S cluster due to the error mentioned in the previous post:

So why is that?

User verboten!

So, in this case, when I tried to access the IP address for the K8S dashboard, I was getting denied access. After googling a bit and trying a few different things, the answer became apparent – I had a cert issue.

The first lead I got was this StackOverflow post:

https://stackoverflow.com/questions/62204651/kubernetes-forbidden-user-systemanonymous-cannot-get-path

There, I found some useful things to try, such as accessing /readyz and /version (which both worked for me). But it didn’t solve my issue. Even the original poster had given up and started over.

So I kept googling and came across my answer here:

https://jhooq.com/message-services-https-kubernetes-dashboard-is-forbidden-user/

Turns out, I had to create and import a certificate to the web browser. The post above references Vagrant as the deployment, which wasn’t what I used, but the steps all worked for me once I found the .kube/config location.
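
In case that link ever disappears, the gist of the fix is to pull the client certificate and key out of the kubeconfig and bundle them into a PKCS#12 file the browser can import. Roughly what I ran (the export password is whatever you choose, and this assumes your kubeconfig embeds client-certificate-data/client-key-data rather than file paths):

# grep 'client-certificate-data' /root/.kube/config | awk '{print $2}' | base64 -d > kubecfg.crt
# grep 'client-key-data' /root/.kube/config | awk '{print $2}' | base64 -d > kubecfg.key
# openssl pkcs12 -export -clcerts -inkey kubecfg.key -in kubecfg.crt -out kubecfg.p12 -name "kubernetes-client"

Import kubecfg.p12 into the browser’s certificate store, restart the browser, and the API server stops treating you as system:anonymous.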

Once I did that, I could see the list of paths available, which is useful, but wasn’t exactly what I thought accessing the IP address would give me. I thought there would be an actual dashboard! I did find that things like /logs and /metrics wouldn’t work without the cert, so at least I can access those now. But I suspect the managed services will have more robust ways to manage the clusters than a bunch of REST calls.

Progress?

So, I got a bit farther now and am starting to unpeel this onion, but there’s still a lot more to learn. I’m starting to wonder if I need to back up and start with some sort of Kubernetes training to learn a bit more before I try this again. One thing I do know is that I need to kill this Kubernetes instance in GCP before it kills my wallet! 🙂
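
For the teardown itself, the cluster directory that kube-up/get-kube ran from also ships a kube-down.sh script that is supposed to remove the VMs, disks and firewall rules it created. A sketch of what I plan to run (the path is where my copy landed; yours may differ), followed by a quick check that nothing is left behind:

# cd /k8s/kubernetes/cluster/kubernetes/cluster
# ./kube-down.sh
# gcloud compute instances list
# gcloud compute disks list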

Lessons learned for me in this post:

  • Setting up your own K8S cluster is a bit more of a challenge than I had anticipated
  • Learning the basics ahead of time might be the best approach
  • Managed Kubernetes is likely going to be the answer to the “how should I deploy” except in specific circumstances
  • K8S out of the box is missing some functionality that would be pretty useful, such as a GUI dashboard
  • Pay attention to what actually gets deployed – and where – and dig into your billing!

Feel free to add your comments or thoughts below and stay tuned for the next post in this series (topic TBD).

Behind the Scenes Episode 313: Autonomous SAN with Broadcom and NetApp

Welcome to Episode 313, part of the continuing series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”

This week, AJ Casamento (aj.casamento@broadcom.com) and Ant Tyrell (anthony.tyrell@netapp.com) join us to discuss how Broadcom and NetApp are changing the way enterprise SAN remediates issues and adjusts for performance changes with autonomous SAN.

For more information:

Podcast Transcriptions

Transcripts not available currently

Tech ONTAP Community

We also now have a presence on the NetApp Communities page. You can subscribe there to get emails when we have new episodes.

Tech ONTAP Podcast Community

Finding the Podcast

You can find this week’s episode here:

You can also find the Tech ONTAP Podcast on:

I also recently got asked how to leverage RSS for the podcast. You can do that here:

http://feeds.soundcloud.com/users/soundcloud:users:164421460/sounds.rss

This is the Way – My K8s Learning Journey, Part 3: Installing kubectl is a multi-round fight (Round 2)

This post is one of a series about my journey in learning Kubernetes from the perspective of a total n00b. Feel free to suggest topics in the comments. For more posts in this series, click this link.

In my previous post called “This is the Way – My K8s Learning Journey, Part 2: Installing kubectl is a multi-round fight (Round 1)”, I failed miserably in my attempt to install kubectl.

And that’s OK!

That’s what this exercise is for: realism, learning and discovery.

Previously, my cluster build failed here:

# ./kube-up.sh 
... Starting cluster in us-central1-b using provider gce 
... calling verify-prereqs 
... calling verify-kube-binaries 
... calling verify-release-tars 
... calling kube-up 
Project: dark-garden-252515 
Network Project: dark-garden-252515 
Zone: us-central1-b 
BucketNotFoundException: 404 gs://kubernetes-staging-afea54f323 bucket does not exist. 
Creating gs://kubernetes-staging-afea54f323 
Creating gs://kubernetes-staging-afea54f323/... 
AccessDeniedException: 403 The project to be billed is associated with a closed billing account.

Using my superpower of deduction (and, well, reading comprehension), I figured the issue was probably related to the project I selected when I first tried to set up gcloud during the kube-up.sh script setup.

During that setup, I had selected a pre-existing project.

You are logged in as: [whyistheinternetbroken@gmail.com]. 
Pick cloud project to use: 
[1] dark-garden-252515 
[2] wise-dispatcher-252515 
[3] Create a new project 
Please enter numeric choice or text value (must exactly match list item): 1 

This time, I selected “Create a new project.”

Pick cloud project to use:
[1] cv-solution-architect-lab
[2] dark-garden-252515
[3] versatile-hash-335321
[4] wise-dispatcher-252515
[5] Create a new project
Please enter numeric choice or text value (must exactly match list item): 5

Enter a Project ID. Note that a Project ID CANNOT be changed later.
Project IDs must be 6-30 characters (lowercase ASCII, digits, or
hyphens) in length and start with a lowercase letter. this-is-the-way-witib
Waiting for [operations/cp.8622655766389923424] to finish…done.
Your current project has been set to: [this-is-the-way-witib].

The kube-up.sh script, if you recall, wasn’t the right script. Instead, get-kube.sh is.

# ./kube-up.sh 
… Starting cluster in us-central1-b using provider gce
… calling verify-prereqs
… calling verify-kube-binaries
!!! kubectl appears to be broken or missing
Required release artifacts appear to be missing. Do you wish to download them? [Y/n]
y
Can't determine Kubernetes release.
/k8s/kubernetes/cluster/get-kube-binaries.sh should only be run from a prebuilt Kubernetes release.
Did you mean to use get-kube.sh instead?

Now, when I run through get-kube.sh again… it still fails. But with a different error:

Project: this-is-the-way-witib
Network Project: this-is-the-way-witib
Zone: us-central1-b
BucketNotFoundException: 404 gs://kubernetes-staging-91a201ac9a bucket does not exist.
Creating gs://kubernetes-staging-91a201ac9a
Creating gs://kubernetes-staging-91a201ac9a/…
AccessDeniedException: 403 The project to be billed is associated with an absent billing account.

Maybe that means I need to associate the project with a billing account?

I went to my Google Cloud project and clicked “Billing” – and that looks like where the issue is:

I link the billing account:
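
For the record, the same link can be made from the CLI instead of the console – something along these lines, where the billing account ID is a placeholder:

# gcloud beta billing accounts list
# gcloud beta billing projects link this-is-the-way-witib --billing-account=0X0X0X-0X0X0X-0X0X0X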

Now, I re-run the script:

# ./get-kube.sh
'kubernetes' directory already exist. Should we skip download step and start to create cluster based on it? [Y]/n
y
Skipping download step.
Creating a kubernetes on gce…
… Starting cluster in us-central1-b using provider gce
… calling verify-prereqs
… calling verify-kube-binaries
… calling verify-release-tars
… calling kube-up
Project: this-is-the-way-witib
Network Project: this-is-the-way-witib
Zone: us-central1-b
BucketNotFoundException: 404 gs://kubernetes-staging-91a201ac9a bucket does not exist.
Creating gs://kubernetes-staging-91a201ac9a
Creating gs://kubernetes-staging-91a201ac9a/…
+++ Staging tars to Google Storage: gs://kubernetes-staging-91a201ac9a/kubernetes-devel
+++ kubernetes-server-linux-amd64.tar.gz uploaded (sha512 = be1b895d3b6e7d36d6e8b9415cb9b52bb47d34a3cb289029a8bce8a26933c6cc5c0fe2e9760c3e02b20942aa8b0633b543715ccf629119946c3f2970245f6446)
+++ kubernetes-manifests.tar.gz uploaded (sha512 = 26c653dd65408db5abe61a97d6654d313d4b70bf0136e8bcee2fc1afb19a3e2d26367b57816ec162a721b10dd699fddd4b28211866853b808a434660ddf5e860)

Hey, that was it! But I’m wondering how much $$ this is going to cost me, because stuff is getting created and downloaded…

API [compute.googleapis.com] not enabled on project [493331638462]. Would you like to enable and retry (this will take a few minutes)? (y/N)? y

Enabling service [compute.googleapis.com] on project [493331638462]…
Operation "operations/acf.p2-493331638462-3027f3b3-c93e-4a03-bdcb-f7767fa0406a" finished successfully.
...
Creating firewall...
Creating firewall...
IP aliases are disabled.
Creating firewall...
Found subnet for region us-central1 in network default: default
Starting master and configuring firewalls
Configuring firewall for apiserver konnectivity server
Creating firewall...
Creating firewall...
Created [https://www.googleapis.com/compute/v1/projects/this-is-the-way-witib/zones/us-central1-b/disks/kubernetes-master-pd].
..Created [https://www.googleapis.com/compute/v1/projects/this-is-the-way-witib/global/firewalls/kubernetes-default-internal-master].
NAME                  ZONE           SIZE_GB  TYPE    STATUS
kubernetes-master-pd  us-central1-b  20       pd-ssd  READY

Luckily, I still have my free trial!

Now, I have a public IP:

Looking for address 'kubernetes-master-ip'
Using master: kubernetes-master (external IP: x.x.x.x; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

This will continually check to see if the API for kubernetes is reachable.
This may time out if there was some uncaught error during start up.

……….Kubernetes cluster created.
Cluster "this-is-the-way-witib_kubernetes" set.
User "this-is-the-way-witib_kubernetes" set.
Context "this-is-the-way-witib_kubernetes" created.
Switched to context "this-is-the-way-witib_kubernetes".
User "this-is-the-way-witib_kubernetes-basic-auth" set.
Wrote config for this-is-the-way-witib_kubernetes to /root/.kube/config

Kubernetes cluster is running. The master is running at:
https://x.x.x.x

The user name and password to use is located in /root/.kube/config.

And the nodes are all validated and the installation succeeds!

Validating gce cluster, MULTIZONE=
Project: this-is-the-way-witib
Network Project: this-is-the-way-witib
Zone: us-central1-b
No resources found
Waiting for 4 ready nodes. 0 ready nodes, 0 registered. Retrying.
No resources found
Waiting for 4 ready nodes. 0 ready nodes, 0 registered. Retrying.
Waiting for 4 ready nodes. 0 ready nodes, 2 registered. Retrying.
Found 4 node(s).
NAME                           STATUS                     ROLES    AGE   VERSION
kubernetes-master              Ready,SchedulingDisabled   <none>   24s   v1.23.1
kubernetes-minion-group-fmrn   Ready                      <none>   12s   v1.23.1
kubernetes-minion-group-g7l6   Ready                      <none>   21s   v1.23.1
kubernetes-minion-group-hf4j   Ready                      <none>   16s   v1.23.1
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-1               Healthy   {"health":"true","reason":""}
etcd-0               Healthy   {"health":"true","reason":""}
controller-manager   Healthy   ok
scheduler            Healthy   ok
Cluster validation succeeded
Done, listing cluster services:

Kubernetes control plane is running at https://x.x.x.x
GLBCDefaultBackend is running at https://x.x.x.x/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
CoreDNS is running at https://x.x.x.x/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://x.x.x.x/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Kubernetes binaries at /k8s/kubernetes/cluster/kubernetes/cluster/
You may want to add this directory to your PATH in $HOME/.profile
Installation successful!

One thing I don’t like is that I think I installed Kubernetes twice. I’m guessing I needed to navigate to a different place when I ran the script (such as to /k8s, where the kubernetes directory would have been found, with no need to re-download). This may be why I had to run get-kube.sh instead of kube-up.sh.

But I don’t think it’s a big enough problem to start over – yet – so I’ll keep going.

kubectl cluster-info runs successfully!

# kubectl cluster-info
Kubernetes control plane is running at https://x.x.x.x
GLBCDefaultBackend is running at https://x.x.x.x/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
CoreDNS is running at https://x.x.x.x/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://x.x.x.x/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

When I try to navigate to the control plane, however:

kind	"Status"
apiVersion	"v1"
metadata	{}
status	"Failure"
message	"forbidden: User \"system:anonymous\" cannot get path \"/\""
reason	"Forbidden"
details	{}
code	403

Going back to when the script ran successfully, I saw this:

The user name and password to use is located in /root/.kube/config.

So, I cat that file and see this:

  user:
    password: #########
    username: admin

Now, the question is… how do I log in using that info? Or do I even need to?

Step by step

Here are the commands I ran on an initial installation of a fresh CentOS 8 VM.

# yum update -y
# curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
# curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
# echo "$(<kubectl.sha256) kubectl" | sha256sum --check
# sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
# kubectl version --client
# yum install -y git
# mkdir /k8s
# cd /k8s
# git clone https://github.com/kubernetes/kubernetes.git
# yum install -y python2 python3
# curl -O https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-367.0.0-linux-x86_64.tar.gz
# tar -xvf google-cloud-sdk-367.0.0-linux-x86_64.tar.gz --directory ~
# ~/google-cloud-sdk/install.sh
# ~/google-cloud-sdk/bin/gcloud init
# gcloud components install alpha
# gcloud components install beta
# cd cluster/
# ./get-kube.sh

I started to Google the error, but this looks like a deeper dive and the initial point of this post was to successfully install kubectl – which it looks like I did!

So, next post will be… how to get this working?

It’s a Kerberos Khristmas!

Recently, I was working on a project where I was creating a POC of an Ubuntu container – one that would eventually be used in a Kubernetes environment – that could authenticate to LDAP and mount NFS Kerberos mounts without any interaction. It was an improvement on the container image I created a while back in “Securing NFS mounts in a Docker container,” and I’ll write more about it in another post later.

But, since it’s almost Christmas, I wanted to deliver a few gifts I discovered during this process that can help with other deployments.

KSU

No, I didn’t discover Kansas State University.

What I *did* discover was a way to “su” as a user while also running kinit at the same time. The ksu utility allows you to do that with the following:

# ksu username -n username

When you run that, you “su” as the UID and kinit kicks off:

root@cfeac39a405f:/# ksu student1 -n student1
WARNING: Your password may be exposed if you enter it here and are logged
in remotely using an unsecure (non-encrypted) channel.
Kerberos password for student1@NTAP.LOCAL: :
Changing uid to student1 (1301)

And this is the ticket (NFS service ticket came with the autofs mount of the homedir):

student1@cfeac39a405f:/$ klist
Ticket cache: FILE:/tmp/krb5cc_1301.kufVdH57
Default principal: student1@NTAP.LOCAL
Valid starting Expires Service principal
12/23/21 12:04:15 12/23/21 13:04:15 krbtgt/NTAP.LOCAL@NTAP.LOCAL
renew until 12/30/21 12:04:15
12/23/21 12:04:16 12/23/21 13:04:15 nfs/demo.ntap.local@NTAP.LOCAL
renew until 12/30/21 12:04:15

student1@cfeac39a405f:/$ mount | grep home
auto.home on /home type autofs (rw,relatime,fd=18,pgrp=118,timeout=50,minproto=5,maxproto=5,indirect,pipe_ino=1359512)

Pretty cool, but not as cool as the next thing I discovered…

msktutil

Getting Kerberos keytabs on Linux clients has traditionally been a pain – especially when using Active Directory as the KDC. Commands like “net ads” and “realm join” have made this simpler, but they usually require admin interaction and with containers, you can’t always do that. Nor would you really want to – who wants to join every container you create to the domain?

For Kerberos mounts, a machine account keytab is used for the initial mount to the ONTAP NFS server, and the SPN you create maps into ONTAP, where it has to authenticate as a valid UNIX user that the ONTAP SVM knows about. With AD KDCs, when you create a keytab, you have to map it to a user, which then populates the userPrincipalName field, which is then passed to ONTAP as the incoming SPN.

With CVS/ANF instances of ONTAP, you are at the mercy of whatever the instance has configured, so you need to configure the SPN as either MACHINE$@DOMAIN.COM or root/fqdn.domain.com@DOMAIN.COM for things to work without any other configuration.
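
If you do control the ONTAP side, that incoming SPN just needs a krb-unix name mapping rule on the SVM to land on a valid UNIX user. Something along these lines (a sketch from my lab; the SVM name and pattern are placeholders, and the exact rule depends on the SPN you settled on):

::> vserver name-mapping create -vserver SVM1 -direction krb-unix -position 1 -pattern root/ubuntu-container.ntap.local@NTAP.LOCAL -replacement root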

In AD, there’s a utility called “ktpass,” which works fine, but then you have to worry about transferring the keytab file to the Linux client every time you change it; if you want to rotate keytabs on a regular basis, this gets cumbersome.
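
For comparison, the ktpass route looks something like this, run on a domain controller or admin workstation, with the resulting keytab then copied over to the client (the account, principal and file names here are placeholders):

C:\> ktpass -princ root/ubuntu-container.ntap.local@NTAP.LOCAL -mapuser NTAP\ubuntu-container -crypto AES256-SHA1 -ptype KRB5_NT_PRINCIPAL -pass * -out ubuntu-container.keytab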

What I like to do with containers is have them all share a common keytab for their LDAP bind with SSSD and the NFS mount authentication. Regular users won’t be able to access the mount without using kinit anyway, and most containers don’t require a huge level of security on the container side. Kerberos is usually just there to encrypt the traffic in flight.
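
For context, the SSSD side of that shared-keytab approach boils down to a few lines in sssd.conf – roughly this, with the realm, LDAP server, search base and keytab path all being lab-specific placeholders:

[domain/ntap.local]
id_provider = ldap
auth_provider = krb5
krb5_realm = NTAP.LOCAL
ldap_uri = ldap://dc1.ntap.local
ldap_search_base = dc=ntap,dc=local
ldap_sasl_mech = GSSAPI
ldap_sasl_authid = root/ubuntu-container.ntap.local
ldap_krb5_keytab = /etc/krb5.keytab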

Since we need a keytab for the containers to share, located in a common location, and we don’t really want them to share the same keytab as the host, we can’t necessarily join the domain with “realm.” And since we’re not AD domain admins with regular access to the KDCs, ktpass is hard to manage as well.

The msktutil tool allows interaction with the KDC for keytab creation on the local Linux machine, as well as keytab updates if you want to periodically refresh the keytabs (which you should, for better overall security).

All you need to do on the Linux client is kinit as a user with permissions on the KDC to create objects. Then, when you run the “msktutil create” command, it will create a machine account in AD and a keytab based on your desired parameters. This sample command creates a keytab with AES-256 encryption and a SPN/UPN of root/fqdn:

# msktutil create --verbose --computer-name UBUNTU-CONTAINER -k ubuntu-container.keytab --enable --enctypes 0x10 --service root/ubuntu-container.ntap.local --upn root/ubuntu-container.ntap.local

This is the resulting keytab file:

# klist -kte ubuntu-container.keytab
Keytab name: FILE:ubuntu-container.keytab
KVNO Timestamp           Principal
---- ------------------- ------------------------------------------------------
   1 12/21/2021 22:33:34 UBUNTU-CONTAINER$@NTAP.LOCAL (aes256-cts-hmac-sha1-96)
   1 12/21/2021 22:33:34 root/ubuntu-container.ntap.local@NTAP.LOCAL (aes256-cts-hmac-sha1-96)
   1 12/21/2021 22:33:34 host/centos83-perf2@NTAP.LOCAL (aes256-cts-hmac-sha1-96)

When that attempts to authenticate to the ONTAP SVM, root/ubuntu-container maps to root and mounts work fine.

If I wanted to update that keytab file, I can run this:

# msktutil update --verbose --computer-name UBUNTU-CONTAINER -k ubuntu-container.keytab --enable --enctypes 0x10 --service root/ubuntu-container.ntap.local --upn root/ubuntu-container.ntap.local

When I do that, the keytab updates and the kvno (Key version number) changes:

# klist -kte ubuntu-container.keytab
Keytab name: FILE:ubuntu-container.keytab
KVNO Timestamp           Principal
---- ------------------- ------------------------------------------------------
   1 12/21/2021 22:33:34 UBUNTU-CONTAINER$@NTAP.LOCAL (aes256-cts-hmac-sha1-96)
   1 12/21/2021 22:33:34 root/ubuntu-container.ntap.local@NTAP.LOCAL (aes256-cts-hmac-sha1-96)
   1 12/21/2021 22:33:34 host/centos83-perf2@NTAP.LOCAL (aes256-cts-hmac-sha1-96)
   2 12/23/2021 12:23:59 UBUNTU-CONTAINER$@NTAP.LOCAL (aes256-cts-hmac-sha1-96)
   2 12/23/2021 12:23:59 root/ubuntu-container.ntap.local@NTAP.LOCAL (aes256-cts-hmac-sha1-96)
   2 12/23/2021 12:23:59 host/centos83-perf2@NTAP.LOCAL (aes256-cts-hmac-sha1-96)

If I don’t want extra entries, I can delete the old keytab and re-create it. Once you rotate the keytab, you may have to re-build the container to ensure the keytab file is updated in the container.

Restart services when a container starts

One challenge I faced when setting these up was that the LDAP and NFS services didn’t always work properly when the container first started, so I found myself having to manually restart the services. When you’re running containers, the idea is that you shouldn’t have to babysit them. So, I created a simple shell script to restart the services, like this:

#!/bin/sh
# Start D-Bus and rpcbind, then (re)start the NFS, GSSAPI, SSSD and autofs services
sudo service dbus start
sudo /sbin/rpcbind
sudo service nfs-common restart
sudo /usr/sbin/rpc.gssd
sudo /usr/sbin/rpc.svcgssd
sudo service sssd start
sudo service autofs start
sudo service sssd restart

That script gets copied in at image build time with these lines in the Dockerfile:

# Script to start services
COPY configure-nfs-ubuntu.sh /usr/local/bin/configure-nfs-ubuntu.sh
RUN chmod +x /usr/local/bin/configure-nfs-ubuntu.sh

And then adding this line to the end of .bashrc did the trick:

sh /usr/local/bin/configure-nfs-ubuntu.sh
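
(That append can also be baked in at image build time instead of edited by hand – something like this in the Dockerfile, assuming the container runs as root and uses root’s .bashrc:)

RUN echo "sh /usr/local/bin/configure-nfs-ubuntu.sh" >> /root/.bashrc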

And this is how it looks when the container starts:

# docker exec -it containername bash
 * system message bus already started; not starting.
rpcbind: another rpcbind is already running. Aborting
 * Stopping NFS common utilities                                                                                                                      [ OK ]
 * Starting NFS common utilities                                                                                                                      [ OK ]
 * Starting automount... 

That’s all I have for now. Stay tuned for more on this topic, including updates to the “This is the Way” series detailing my first installation of Kubernetes. Feel free to comment below!

Behind the Scenes Episode 312: NFS over RDMA – NetApp ONTAP 9.10.1

Welcome to Episode 312, part of the continuing series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”

This week, NetApp TME David Arnette (@daveA_netapp) joins us to discuss the new NFS over RDMA feature in NetApp ONTAP 9.10.1 and how it boosts performance for AI, ML and other HPC workloads.

Other resources:

Manage_NFS_over_RDMA.pdf

Podcast Transcriptions

If you want a searchable transcript of the episode, check it out here (just set expectations accordingly):

Transcripts not available currently

Just use the search field to look for words you want to read more about. (For example, search for “storage”)

Be sure to give us feedback (or if you need a full text transcript – Gong does not support sharing those yet) on the transcription in the comments here or via podcast@netapp.com! If you have requests for other previous episode transcriptions, let me know!

Tech ONTAP Community

We also now have a presence on the NetApp Communities page. You can subscribe there to get emails when we have new episodes.

Tech ONTAP Podcast Community

Finding the Podcast

You can find this week’s episode here:

You can also find the Tech ONTAP Podcast on:

I also recently got asked how to leverage RSS for the podcast. You can do that here:

http://feeds.soundcloud.com/users/soundcloud:users:164421460/sounds.rss

This is the Way – My K8s Learning Journey, Part 2: Installing kubectl is a multi-round fight (Round 1)

This post is one of a series about my journey in learning Kubernetes from the perspective of a total n00b. Feel free to suggest topics in the comments. For more posts in this series, click this link.

In my previous post, I had a decision to make – what should I use to install my first Kubernetes cluster?

I got some helpful feedback and decided on using straight up kubectl over something like minikube to install and configure the cluster. My plan is to read the docs, but to read them like I suspect most people read them – skim them, use the parts you need at the time you’re using them and miss important details. And it’s no surprise that I ran into issues by doing it that way.

And that’s OK!

That’s what this exercise is for: realism, learning and discovery. But, fair warning: this post will have twists and turns and you’ll start to feel like Charlie from Always Sunny…

So, I followed the guidance for installing kubectl on my CentOS 8.4 system, which can be found here:

https://kubernetes.io/docs/tasks/tools/install-kubectl-linux/

I ran through the first few steps, and it worked like a charm. TL;DR warning, though – this blog does *not* result in success. But we did get some valuable lessons.

# curl -LO "https://dl.k8s.io/release/$(curl -L -s https:// dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 154 100 154 0 0 29 0 0:00:05 0:00:05 --:--:-- 32
100 44.4M 100 44.4M 0 0 6519k 0 0:00:06 0:00:06 --:--:-- 32.9M

# curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.i o/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 154 100 154 0 0 29 0 0:00:05 0:00:05 --:--:-- 30
100 64 100 64 0 0 9 0 0:00:07 0:00:06 0:00:01 47

# echo "$(<kubectl.sha256) kubectl" | sha256sum --check
kubectl: OK

# sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# kubectl version --client
Client Version: version.Info{Major:"1", Minor:"23", GitVersion:"v1.23.1", GitCommit:"86ec240af8cbd1b60bcc4c03c20da9b98005b92e", GitTreeState:"clean", BuildDate:"2021-12-16T11:41:01Z", GoVersion:"go1.17.5", Compiler:"gc", Platform:"linux/amd64"}

Then, I skipped past the “Install using package management” because I had already installed it and ran the next step of “Verify kubectl configuration.” Rather than read the text right below it, I copied/pasted the next command into my client:

# kubectl cluster-info
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server localhost:8080 was refused - did you specify the right host or port?

Wait. You mean I don’t have a cluster right now???

I read up a bit and saw this very helpful line:

In order for kubectl to find and access a Kubernetes cluster, it needs a kubeconfig file, which is created automatically when you create a cluster using kube-up.sh or successfully deploy a Minikube cluster.

Oops.

# find / -name kube-up.sh
#

That script doesn’t seem to exist when I run the above, so I have to copy/paste it from the GitHub repo it links to, or manually run through kubectl config to create a config file. If I’m an admin, I’m probably copying and pasting a script, so YOLO!
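
(For reference, the “manually run through kubectl config” route would look roughly like this – a sketch with a placeholder server address and credentials, not something I actually ran:)

# kubectl config set-cluster my-cluster --server=https://x.x.x.x --insecure-skip-tls-verify=true
# kubectl config set-credentials admin --username=admin --password=PASSWORD
# kubectl config set-context my-context --cluster=my-cluster --user=admin
# kubectl config use-context my-context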

# vi /tmp/kube-up.sh
# chmod 777 /tmp/kube-up.sh
# ./kube-up.sh
./kube-up.sh: line 33: ./../cluster/kube-util.sh: No such file or directory

Ok… what did I do wrong *THIS* time??

From the looks of the error, there’s a kube-util.sh file that it can’t find. And apparently, neither can I!

# find / -name kube-util.sh
#

Now, I didn’t see *any* mention of where to put the script, nor did it mention this other script. So I checked out the README on GitHub, and… it’s less than useful.

When I click on Getting started, it just takes me back to the setup docs for K8s, with a link back to “Install tools including kubectl”… So it seems I am now in a time loop.

But I think I can work my way out of this. On the GitHub repo, there is a kube-util.sh script. Now, it was last updated 2 years ago, so YMMV.

Since the error referenced /cluster, I went ahead and created that directory and moved the kube-up.sh script. Then I created kube-util.sh and re-ran kube-up.sh and….

# ./kube-up.sh
./../cluster/kube-util.sh: line 23: ./../cluster/../cluster/skeleton/util.sh: No such file or directory

Well shucks. Now it’s referencing *another* script. And it looks like it’s a folder (skeleton) in the same repo. So it seems like I need to copy the entire thing.

The easiest way to do that is to use GitHub to clone it to my local node.

https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository

So I install git and try to clone the repo.

# yum install -y git
# mkdir kubernetes
# cd kubernetes/
# git clone https://github.com/kubernetes/kubernetes.git
Cloning into 'kubernetes'…
remote: Enumerating objects: 1294436, done.
remote: Counting objects: 100% (179/179), done.
remote: Compressing objects: 100% (118/118), done.
remote: Total 1294436 (delta 80), reused 62 (delta 61), pack-reused 1294257
Receiving objects: 100% (1294436/1294436), 803.12 MiB | 29.92 MiB/s, done.
Resolving deltas: 100% (932833/932833), done.
Updating files: 100% (23342/23342), done.

So I created a “kubernetes” folder, but it turns out I didn’t need to. But that’s ok, it should still work. I see all the files and folders I’d expect.

# ls -la
total 4
drwxr-xr-x. 3 root root 24 Dec 16 14:37 .
dr-xr-xr-x. 18 root root 242 Dec 16 14:37 ..
drwxr-xr-x. 19 root root 4096 Dec 16 14:38 kubernetes
# cd kubernetes/
# ls -la
total 204
drwxr-xr-x. 19 root root 4096 Dec 16 14:38 .
drwxr-xr-x. 3 root root 24 Dec 16 14:37 ..
drwxr-xr-x. 4 root root 57 Dec 16 14:38 api
drwxr-xr-x. 7 root root 4096 Dec 16 14:38 build
drwxr-xr-x. 2 root root 4096 Dec 16 14:38 CHANGELOG
lrwxrwxrwx. 1 root root 19 Dec 16 14:38 CHANGELOG.md -> CHANGELOG/README.md
drwxr-xr-x. 9 root root 4096 Dec 16 14:38 cluster
drwxr-xr-x. 25 root root 4096 Dec 16 14:38 cmd
-rw-r--r--. 1 root root 148 Dec 16 14:38 code-of-conduct.md
-rw-r--r--. 1 root root 525 Dec 16 14:38 CONTRIBUTING.md
drwxr-xr-x. 2 root root 38 Dec 16 14:38 docs
-rw-r--r--. 1 root root 766 Dec 16 14:38 .generated_files
drwxr-xr-x. 8 root root 163 Dec 16 14:38 .git
-rw-r--r--. 1 root root 381 Dec 16 14:38 .gitattributes
drwxr-xr-x. 3 root root 93 Dec 16 14:38 .github
-rw-r--r--. 1 root root 2634 Dec 16 14:38 .gitignore
-rw-r--r--. 1 root root 1112 Dec 16 14:38 .golangci.yaml
-rw-r--r--. 1 root root 34923 Dec 16 14:38 go.mod
-rw-r--r--. 1 root root 58959 Dec 16 14:38 go.sum
drwxr-xr-x. 12 root root 8192 Dec 16 14:38 hack
-rw-r--r--. 1 root root 11358 Dec 16 14:38 LICENSE
drwxr-xr-x. 4 root root 68 Dec 16 14:38 LICENSES
drwxr-xr-x. 2 root root 4096 Dec 16 14:38 logo
lrwxrwxrwx. 1 root root 19 Dec 16 14:38 Makefile -> build/root/Makefile
lrwxrwxrwx. 1 root root 35 Dec 16 14:38 Makefile.generated_files -> build/root/Makefile.generated_files
-rw-r--r--. 1 root root 782 Dec 16 14:38 OWNERS
-rw-r--r--. 1 root root 10612 Dec 16 14:38 OWNERS_ALIASES
drwxr-xr-x. 32 root root 4096 Dec 16 14:38 pkg
drwxr-xr-x. 3 root root 31 Dec 16 14:38 plugin
-rw-r--r--. 1 root root 3387 Dec 16 14:38 README.md
-rw-r--r--. 1 root root 563 Dec 16 14:38 SECURITY_CONTACTS
drwxr-xr-x. 4 root root 66 Dec 16 14:38 staging
-rw-r--r--. 1 root root 1110 Dec 16 14:38 SUPPORT.md
drwxr-xr-x. 17 root root 250 Dec 16 14:38 test
drwxr-xr-x. 5 root root 85 Dec 16 14:38 third_party
drwxr-xr-x. 16 root root 4096 Dec 16 14:38 vendor
# pwd
/kubernetes/kubernetes

So, let’s try it!

Here we go…

# cd cluster
# ./kube-up.sh
… Starting cluster in us-central1-b using provider gce
… calling verify-prereqs
/usr/bin/which: no gcloud in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin)
Can't find gcloud in PATH, please fix and retry. The Google Cloud
SDK can be downloaded from https://cloud.google.com/sdk/.

Well… that’s progress, right?

Installing Google Cloud SDK

To install the Google Cloud SDK, go here (I google so you don’t have to!):

https://cloud.google.com/sdk/docs/install

First, you need Python2 and Python3.

# yum install -y python2 python3

Then, download one of the archive files. I grabbed the one for Linux 64-bit:

# curl -O https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-367.0.0-linux-x86_64.tar.gz

Then extract. I extracted to my homedir:

# tar -xvf google-cloud-sdk-367.0.0-linux-x86_64.tar.gz --directory ~

Then I run the included install.sh script:

# ~/google-cloud-sdk/install.sh

I went ahead and let it set the PATH variable and then I run init:

# ~/google-cloud-sdk/bin/gcloud init

There’s an auth link/verification code process, but other than that, pretty straightforward.

When I run the kube-up.sh again, I get this:

# ./kube-up.sh
... Starting cluster in us-central1-b using provider gce
... calling verify-prereqs
/usr/bin/which: no gcloud in (/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/root/bin)
Can't find gcloud in PATH, please fix and retry. The Google Cloud
SDK can be downloaded from https://cloud.google.com/sdk/.

That’s because the PATH change doesn’t take effect until .bash_profile is re-read. Just run a source and try again.

# source ~/.bash_profile
# ./kube-up.sh
… Starting cluster in us-central1-b using provider gce
… calling verify-prereqs
missing required gcloud component "alpha"
Try running $(gcloud components install alpha)
missing required gcloud component "beta"
Try running $(gcloud components install beta)

ARGGGGH

Installing gcloud alpha and beta…

# gcloud components install alpha
# gcloud components install beta

So I re-run it and…

!!! kubectl appears to be broken or missing
Required release artifacts appear to be missing. Do you wish to download them? [Y/n]
y
Can't determine Kubernetes release.
/kubernetes/kubernetes/cluster/get-kube-binaries.sh should only be run from a prebuilt Kubernetes release.
Did you mean to use get-kube.sh instead?

Sigh. Maybe? Let’s try get-kube.sh… But I’m not getting a ton of confidence from the K8s docs.

It looks like get-kube.sh simply repeats the first few steps from the beginning of this blog. I’m beginning to think I’ve just installed Kubernetes twice in different places…

Downloading kubernetes release v1.23.1
from https://dl.k8s.io/v1.23.1/kubernetes.tar.gz
to /kubernetes/kubernetes/cluster/kubernetes.tar.gz
Is this ok? [Y]/n

So it finishes running and I get this error during bucket creation:

Creating gs://kubernetes-staging-afea54f323/...
AccessDeniedException: 403 The project to be billed is associated with a closed billing account.

The project it uses is the same one I defined when I first set up gcloud.

Add '/kubernetes/kubernetes/cluster/kubernetes/client/bin' to your PATH to use newly-installed binaries.
Creating a kubernetes on gce…
… Starting cluster in us-central1-b using provider gce
… calling verify-prereqs
… calling verify-kube-binaries
… calling verify-release-tars
… calling kube-up
Project: dark-garden-252515
Network Project: dark-garden-252515

This is what I did when I set up gcloud:

You are logged in as: [whyistheinternetbroken@gmail.com].
Pick cloud project to use:
[1] dark-garden-252515
[2] wise-dispatcher-252515
[3] Create a new project
Please enter numeric choice or text value (must exactly match list item): 1

Your current project has been set to: [dark-garden-252515].

So I guess because there’s no billing account associated, that’s why this fails?

I set up the billing account and re-run the script. And it fails with the same error:

Creating gs://kubernetes-staging-afea54f323/…
AccessDeniedException: 403 The project to be billed is associated with a closed billing account.

So maybe the account that hosts the project is closed? So, I dig into the script that creates the “Can’t determine Kubernetes release” error (get-kube-binaries.sh) to see why that error gets generated. At this point, it feels like I’ll be reverting the snapshot on this VM and re-doing the installation once I work through all the errors.

Here’s where that error occurs:

function detect_kube_release() {
  if [[ -n "${KUBE_VERSION:-}" ]]; then
    return 0  # Allow caller to explicitly set version
  fi

  if [[ ! -e "${KUBE_ROOT}/version" ]]; then
    echo "Can't determine Kubernetes release." >&2
    echo "${BASH_SOURCE[0]} should only be run from a prebuilt Kubernetes release." >&2
    echo "Did you mean to use get-kube.sh instead?" >&2
    exit 1
  fi

So it checks for KUBE_VERSION, and if that isn’t set and the “version” file can’t be found, the error is generated. KUBE_VERSION itself is set here:

KUBE_VERSION=$(cat "${KUBE_ROOT}/version")

KUBE_ROOT is this:

KUBE_ROOT=$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)

So it seems to be trying to cat a file named “version” and uses a path determined by KUBE_ROOT. With a “find” I was able to get a file with that info:

# find /kubernetes/kubernetes/ -name version
/kubernetes/kubernetes/cluster/kubernetes/version
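
Based on that, the edit I made (sketched here – not an exact diff) was to hard-code KUBE_ROOT in get-kube-binaries.sh to the directory that actually contains that version file:

# Original: KUBE_ROOT is computed relative to the script's own location
#KUBE_ROOT=$(cd "$(dirname "${BASH_SOURCE[0]}")/.." && pwd)
# My hack: point it at the extracted release instead
KUBE_ROOT=/kubernetes/kubernetes/cluster/kubernetes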

So I went in and made that change, pointing KUBE_ROOT at the /kubernetes/kubernetes/cluster/kubernetes/ path. That gets me past the “Can’t determine Kubernetes release” error, but…

# ./kube-up.sh
... Starting cluster in us-central1-b using provider gce
... calling verify-prereqs
... calling verify-kube-binaries
... calling verify-release-tars
... calling kube-up
Project: dark-garden-252515
Network Project: dark-garden-252515
Zone: us-central1-b
BucketNotFoundException: 404 gs://kubernetes-staging-afea54f323 bucket does not exist.
Creating gs://kubernetes-staging-afea54f323
Creating gs://kubernetes-staging-afea54f323/...
AccessDeniedException: 403 The project to be billed is associated with a closed billing account. <<<<<WTF

So it’s looking like this method *may* be a dead end. At this point, I’m going to start over and when I get prompted for a cloud project to use, I’m selecting this one:

[3] Create a new project 

Current working theory is that the person who created that Git repo doesn’t maintain it anymore and whoever created those projects has let them expire.

Lessons learned in this attempt:

  • Read everything first. Then try.
  • Just because you read everything doesn’t mean it will work; sometimes the documentation sucks.
  • Make sure you are prepared to tear it all down and start over.
  • Knowing shell scripting is a useful tool to have and don’t be afraid to modify shell scripts – especially if they’re old.
  • Everything has dependencies. In this case, we turned out to be dependent on a project tied to a closed billing account.
  • Know Linux basics like “find,” “ls,” “chmod,” etc.
  • Know that shell scripts require special permissions to run them.

We’ll learn more lessons in the upcoming post…

Behind the Scenes Episode 311: Introducing the NetApp AFF A900

Welcome to Episode 311, part of the continuing series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”

This week, Cheryl George and Chris Lueth join us to discuss the latest addition to the NetApp ONTAP platform portfolio, the AFF A900!

Joining us:

Other resources:

https://www.linkedin.com/embed/feed/update/urn:li:ugcPost:6876521251993718785

Podcast Transcriptions

If you want a searchable transcript of the episode, check it out here (just set expectations accordingly):

Transcripts not available currently

Just use the search field to look for words you want to read more about. (For example, search for “storage”)

Be sure to give us feedback (or if you need a full text transcript – Gong does not support sharing those yet) on the transcription in the comments here or via podcast@netapp.com! If you have requests for other previous episode transcriptions, let me know!

Tech ONTAP Community

We also now have a presence on the NetApp Communities page. You can subscribe there to get emails when we have new episodes.

Tech ONTAP Podcast Community

Finding the Podcast

You can find this week’s episode here:

You can also find the Tech ONTAP Podcast on:

I also recently got asked how to leverage RSS for the podcast. You can do that here:

http://feeds.soundcloud.com/users/soundcloud:users:164421460/sounds.rss

This is the Way – My K8s Learning Journey, Part 1: Installing my First K8s Cluster

This post is one of a series about my journey in learning Kubernetes from the perspective of a total n00b. Feel free to suggest topics in the comments. For more posts in this series, click this link.

I started a new role at NetApp a month or two ago on the cloud/Astra team and have been spending some time learning more about Kubernetes, watching videos, recording podcasts and the more I learn, the more I realize how little I actually know. This is not unlike when I first started at NetApp in the support center 15 years ago, where I knew nothing about, well, anything.

I have a decent base in Linux (mostly via NFS), containers and storage, which means I have a head start on many admins starting to dive into this world. And as I comb through documentation and try to install things like Minikube, I realize there’s not a ton of information out there that consolidates this stuff. For instance, I started out trying to create a two-node K8s cluster.

First, I found this doc when I googled “install kubernetes,” helpfully entitled “Getting started | Kubernetes.” It states:

If you’re learning Kubernetes, use the tools supported by the Kubernetes community, or tools in the ecosystem to set up a Kubernetes cluster on a local machine. See Install tools.

Great! Sounds like me!

On the tools page, I don’t really get a step by step. Instead, I get CHOICES. As a n00b, I don’t want choices! I don’t know what to pick!

They have kubectl, which I have heard of in my reading/watching/listening journey, so I’ll probably choose that. But then I read some more and I see things like “minikube” which I have also heard of. So which one do I choose???

Like all good n00bs, I asked the experts. So, off to the NetApp Pub Slack Channel I went! (identities removed to protect the innocent)

Note: You, too, can get access to experts here. Just sign up! It’s a free, responsive community!

At this point, it looks like if I want to just create a learning sandbox where I run some commands, I use minikube. But if I want to use multiple nodes (which I do) and want to get more of a real-world k8s experience, kubectl is the way (for me, at least).

So, now the first important step has been taken – choosing how I want to deploy. I have two CentOS 8 clients ready to be configured and used as nodes. These are VMs, with 32GB of RAM and 4 CPUs each. Many people trying to learn K8s don’t have the luxury of large VM lab instances like I do here at NetApp, so minikube might be best for you. Or, if you want to give the cloud a whirl, you can stand up some VMs there and play around. Just remember, it’ll cost you after that free trial runs out.

I *could* have made this all super simple and gone with one of the many managed Kubernetes services out there, such as AKS, GKE, VMware Tanzu, Amazon EKS or whatever your managed service of choice is. (A managed service, of course, is someone else running the show behind the curtain via automation and putting it behind a user-friendly GUI, which, honestly, is the way a lot of these types of deployments are going.)

But that’s not the point of this exercise. I learn better by starting from scratch, half-reading the docs, making mistakes, fixing the mistakes and feeling the pain. That’s because I think that’s how most people take this journey.

Hopefully, we’ll all learn some things on the way and these posts will serve as a cautionary tale. Comment below with corrections, questions and suggestions.

Behind the Scenes Episode 310: How NetApp US Public Sector Innovation Secures Top Secret Data

Welcome to Episode 310, part of the continuing series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”

This week, we discuss NetApp’s latest efforts to achieve US Public Sector certification for Top Secret data and how NetApp ONTAP delivers the best enterprise security for storage in the industry.

Joining us:

Other resources:

Podcast Transcriptions

If you want a searchable transcript of the episode, check it out here (just set expectations accordingly):

Transcripts not available currently

Just use the search field to look for words you want to read more about. (For example, search for “storage”)

Be sure to give us feedback (or if you need a full text transcript – Gong does not support sharing those yet) on the transcription in the comments here or via podcast@netapp.com! If you have requests for other previous episode transcriptions, let me know!

Tech ONTAP Community

We also now have a presence on the NetApp Communities page. You can subscribe there to get emails when we have new episodes.

Tech ONTAP Podcast Community

Finding the Podcast

You can find this week’s episode here:

You can also find the Tech ONTAP Podcast on:

I also recently got asked how to leverage RSS for the podcast. You can do that here:

http://feeds.soundcloud.com/users/soundcloud:users:164421460/sounds.rss