Behind the Scenes Episode 314: What’s New in NetApp Active IQ Digital Advisor – Jan 2022

Welcome to Episode 314, part of the continuing series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”


This week, Principal TME Brett Albertson (bretta@netapp.com) joins us to discuss the latest updates to NetApp Active IQ Digital Advisor (@netappactiveiq) and how to get the most out of your Active IQ software.

For more information:

Podcast Transcriptions

Transcripts are not currently available.

Tech ONTAP Community

We also now have a presence on the NetApp Communities page. You can subscribe there to get emails when we have new episodes.

Tech ONTAP Podcast Community


Finding the Podcast

You can find this week’s episode here:

You can also find the Tech ONTAP Podcast on:

I also recently got asked how to leverage RSS for the podcast. You can do that here:

http://feeds.soundcloud.com/users/soundcloud:users:164421460/sounds.rss

This is the Way – My K8s Learning Journey, Part 4: Initial configuration challenges and unexpected surprises

This post is one of a series about my journey in learning Kubernetes from the perspective of a total n00b. Feel free to suggest topics in the comments. For more posts in this series, click this link.

In the previous post of this series, I finally got a K8s cluster installed using kubectl. But, there were some curiosities…

Unexpected (and hidden) surprises

First of all, it looks like the scripts I used interacted with my Google Cloud account and created a bunch of stuff for me – which is fine – but I was kind of hoping to create an on-prem instance of Kubernetes. Instead, the scripts spun up all the stuff I needed in the cloud. I can see that I’m already using compute, which means I’ll be using $$.

When I go to the “Compute Engine” section of my project, I see 4 VMs:

Those seem to have been spun up from the instance templates that also got added:

Along with those VMs are several disks used by the VM instances:

Those aren’t the only places where I might start to see costs creep up on this project. That deployment also created networks and firewall rules. I scrolled down the left menu to “VPC networks” and found several external IP addresses and firewall rules.
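For the CLI-inclined, the same inventory can be pulled with gcloud. This is just a quick sketch, assuming the Cloud SDK is initialized against the same project the deployment scripts used:

# List the VM instances, the templates they came from, and their disks
gcloud compute instances list
gcloud compute instance-templates list
gcloud compute disks list

# List the firewall rules and any reserved external IP addresses
gcloud compute firewall-rules list
gcloud compute addresses list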

When I look at the “Billing” overview on the main page, it says there are no estimated charges, which is weird, because I know using stuff in the cloud isn’t free:

When I click on “View detailed charges” I see the “fine print” – I’ve used $12.37 of my credits in the past few days:

So, it wasn’t immediately apparent, but if I leave these VMs running, I will eat through my $300 credit well before the 90 days they give you are up. And when you scroll down, it gives more of an indication of what is being charged, but the line items all read $0.

That’s because there’s an option to show “Promotions and others” on the right-hand “Filters” menu. I uncheck that, and I get the *real* story of what is being charged.

So, if I hadn’t dug deeper into the costs, this learning exercise would have started to get expensive. But that’s OK – that’s a lesson learned. Maybe not about Kubernetes or containers, but about cost… and about where these types of deployments are going.

Cloud residency

When I started this exercise, I wanted to deploy an on-prem instance to learn more about how Kubernetes works. And I still may try to accomplish that, but it’s apparent that the kubectl method found in the Kubernetes docs isn’t the way to do that. The get-kube.sh and other scripts create cloud VM instances and all the necessary pieces of a full K8s cluster, which is great for simplicity and shows that the future for K8S is a fully managed platform as a service, hosted in the cloud. That’s why things like Azure Kubernetes Service, Google Kubernetes Engine, Amazon EKS, Red Hat OpenShift and others exist – to make this journey simple and effective for administrators who don’t have the expertise to create their own Kubernetes clusters/container management systems.

And to that end, managed storage and backup services like NetApp Astra add simplicity and reliability to the mix. As I continue on this learning exercise, I am seeing more and more why that is, and why things like OpenStack didn’t take off like they were supposed to. Kubernetes seems to have learned the lessons OpenStack didn’t – that complexity and lack of a managed service offering reduces accessibility to the platform, regardless of how many problems it may solve.

But I’ll cover that in future posts. This series is still about learning K8S. And I still can’t access the management plane on my K8S cluster due to the error mentioned in the previous post:

So why is that?

User verboten!

So, in this case, when I tried to access the IP address for the K8S dashboard, I was getting denied access. After googling a bit and trying a few different things, the answer became apparent – I had a cert issue.

The first lead I got was this StackOverflow post:

https://stackoverflow.com/questions/62204651/kubernetes-forbidden-user-systemanonymous-cannot-get-path

There, I found some useful things to try, such as accessing /readyz and /version (which both worked for me). But it didn’t solve my issue. Even the original poster had given up and started over.
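For reference, those two endpoints are just unauthenticated GETs against the API server, so something like this should show the same thing (the IP is the master’s external address; -k skips TLS verification, which is only sane in a throwaway lab like this):

# Anonymous-friendly health and version endpoints on the API server
curl -k https://x.x.x.x/readyz
curl -k https://x.x.x.x/version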

So I kept googling and came across my answer here:

https://jhooq.com/message-services-https-kubernetes-dashboard-is-forbidden-user/

Turns out, I had to create and import a certificate into the web browser. The post above references Vagrant as the deployment method, which wasn’t what I used, but the steps all worked for me once I found the .kube/config location.
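A rough sketch of the kind of steps that post walks through: extract the client certificate and key from the kubeconfig and bundle them into a PKCS#12 file that a browser can import. The paths here are assumptions (my config landed in /root/.kube/config), and this only works if your config actually contains client-certificate-data and client-key-data fields:

# Pull the base64-encoded client cert and key out of the kubeconfig
grep 'client-certificate-data' ~/.kube/config | awk '{print $2}' | base64 -d > kube-client.crt
grep 'client-key-data' ~/.kube/config | awk '{print $2}' | base64 -d > kube-client.key

# Bundle them into a PKCS#12 file; the browser asks for the export password on import
openssl pkcs12 -export -in kube-client.crt -inkey kube-client.key -out kube-client.p12 -name "kubernetes-client"

Then import kube-client.p12 into the browser’s certificate store and reload the page.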

Once I did that, I could see the list of paths available, which is useful, but wasn’t exactly what I thought accessing the IP address would give me. I thought there would be an actual dashboard! I did find that things like /logs and /metrics wouldn’t work without the cert, so at least I can access those now. But I suspect the managed services will have more robust ways to manage the clusters than a bunch of REST calls.
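That said, the same certificate and key do make those REST paths usable from the command line, too – again, just a sketch using the files generated above:

# Query the protected API server paths with the client cert instead of the browser
curl -k --cert kube-client.crt --key kube-client.key https://x.x.x.x/metrics
curl -k --cert kube-client.crt --key kube-client.key https://x.x.x.x/logs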

Progress?

So, I got a bit farther now and am starting to unpeel this onion, but there’s still a lot more to learn. I’m starting to wonder if I need to back up and start with some sort of Kubernetes training to learn a bit more before I try this again. One thing I do know is that I need to kill this Kubernetes instance in GCP before it kills my wallet! 🙂
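When I do get around to killing it, my plan (an assumption on my part, based on the scripts that ship alongside get-kube.sh, so verify before trusting it with your own wallet) is something like:

# Tear down the GCE cluster from the same cluster/ directory used to build it
cd /k8s/kubernetes/cluster
./kube-down.sh

# Then double-check that nothing billable got left behind
gcloud compute instances list
gcloud compute disks list
gcloud compute addresses list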

Lessons learned for me in this post:

  • Setting up your own K8S cluster is a bit more of a challenge than I had anticipated
  • Learning the basics ahead of time might be the best approach
  • Managed Kubernetes is likely going to be the answer to the “how should I deploy” except in specific circumstances
  • K8S out of the box is missing some functionality that would be pretty useful, such as a GUI dashboard
  • Pay attention to what actually gets deployed – and where – and dig into your billing!

Feel free to add your comments or thoughts below and stay tuned for the next post in this series (topic TBD).

Behind the Scenes Episode 313: Autonomous SAN with Broadcom and NetApp

Welcome to Episode 313, part of the continuing series called “Behind the Scenes of the NetApp Tech ONTAP Podcast.”


This week, AJ Casamento (aj.casamento@broadcom.com) and Ant Tyrell (anthony.tyrell@netapp.com) join us to discuss how Broadcom and NetApp are changing the way enterprise SAN remediates issues and adjusts for performance changes with autonomous SAN.

For more information:

Podcast Transcriptions

Transcripts are not currently available.

Tech ONTAP Community

We also now have a presence on the NetApp Communities page. You can subscribe there to get emails when we have new episodes.

Tech ONTAP Podcast Community


Finding the Podcast

You can find this week’s episode here:

You can also find the Tech ONTAP Podcast on:

I also recently got asked how to leverage RSS for the podcast. You can do that here:

http://feeds.soundcloud.com/users/soundcloud:users:164421460/sounds.rss

This is the Way – My K8s Learning Journey, Part 3: Installing kubectl is a multi-round fight (Round 2)

This post is one of a series about my journey in learning Kubernetes from the perspective of a total n00b. Feel free to suggest topics in the comments. For more posts in this series, click this link.

In my previous post called “This is the Way – My K8s Learning Journey, Part 2: Installing kubectl is a multi-round fight (Round 1)”, I failed miserably in my attempt to install kubectl.

And that’s OK!

That’s what this exercise is for: realism, learning, and discovery.

Previously, my cluster build failed here:

# ./kube-up.sh
... Starting cluster in us-central1-b using provider gce
... calling verify-prereqs
... calling verify-kube-binaries
... calling verify-release-tars
... calling kube-up
Project: dark-garden-252515
Network Project: dark-garden-252515
Zone: us-central1-b
BucketNotFoundException: 404 gs://kubernetes-staging-afea54f323 bucket does not exist.
Creating gs://kubernetes-staging-afea54f323
Creating gs://kubernetes-staging-afea54f323/...
AccessDeniedException: 403 The project to be billed is associated with a closed billing account.

Using my superpower of deduction (and, well, reading comprehension), I figured the issue was probably related to the project I selected when I first set up gcloud for the kube-up.sh script.


In that initial setup, I selected a pre-existing project.

You are logged in as: [whyistheinternetbroken@gmail.com]. 
Pick cloud project to use: 
[1] dark-garden-252515 
[2] wise-dispatcher-252515 
[3] Create a new project 
Please enter numeric choice or text value (must exactly match list item): 1 

This time, I selected “Create a new project.”

Pick cloud project to use:
[1] cv-solution-architect-lab
[2] dark-garden-252515
[3] versatile-hash-335321
[4] wise-dispatcher-252515
[5] Create a new project
Please enter numeric choice or text value (must exactly match list item): 5

Enter a Project ID. Note that a Project ID CANNOT be changed later.
Project IDs must be 6-30 characters (lowercase ASCII, digits, or
hyphens) in length and start with a lowercase letter.
this-is-the-way-witib
Waiting for [operations/cp.8622655766389923424] to finish…done.
Your current project has been set to: [this-is-the-way-witib].
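A quick way to confirm which project gcloud is now pointed at (standard gcloud commands, nothing specific to the Kubernetes scripts):

# Show the active project and its details
gcloud config get-value project
gcloud projects describe this-is-the-way-witib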

The kube-up.sh script, if you recall, wasn’t the right script – running it just tells you to use get-kube.sh instead:

# ./kube-up.sh 
… Starting cluster in us-central1-b using provider gce
… calling verify-prereqs
… calling verify-kube-binaries
!!! kubectl appears to be broken or missing
Required release artifacts appear to be missing. Do you wish to download them? [Y/n]
y
Can't determine Kubernetes release.
/k8s/kubernetes/cluster/get-kube-binaries.sh should only be run from a prebuilt Kubernetes release.
Did you mean to use get-kube.sh instead?

Now, when I run through get-kube.sh again… it still fails. But with a different error:

Project: this-is-the-way-witib
Network Project: this-is-the-way-witib
Zone: us-central1-b
BucketNotFoundException: 404 gs://kubernetes-staging-91a201ac9a bucket does not exist.
Creating gs://kubernetes-staging-91a201ac9a
Creating gs://kubernetes-staging-91a201ac9a/…
AccessDeniedException: 403 The project to be billed is associated with an absent billing account.

Maybe that means I need to associate the project with a billing account?

I went to my Google Cloud project and clicked “Billing,” and that looks like where the issue is:

I link the billing account:
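I did this through the console, but I believe there is a CLI equivalent as well; a sketch (the billing account ID below is a placeholder, and gcloud beta billing is my assumption for the component name):

# Find the billing account ID, then link it to the new project
gcloud beta billing accounts list
gcloud beta billing projects link this-is-the-way-witib --billing-account=XXXXXX-XXXXXX-XXXXXX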

Now, I re-run the script:

# ./get-kube.sh
'kubernetes' directory already exist. Should we skip download step and start to create cluster based on it? [Y]/n
y
Skipping download step.
Creating a kubernetes on gce…
… Starting cluster in us-central1-b using provider gce
… calling verify-prereqs
… calling verify-kube-binaries
… calling verify-release-tars
… calling kube-up
Project: this-is-the-way-witib
Network Project: this-is-the-way-witib
Zone: us-central1-b
BucketNotFoundException: 404 gs://kubernetes-staging-91a201ac9a bucket does not exist.
Creating gs://kubernetes-staging-91a201ac9a
Creating gs://kubernetes-staging-91a201ac9a/…
+++ Staging tars to Google Storage: gs://kubernetes-staging-91a201ac9a/kubernetes-devel
+++ kubernetes-server-linux-amd64.tar.gz uploaded (sha512 = be1b895d3b6e7d36d6e8b9415cb9b52bb47d34a3cb289029a8bce8a26933c6cc5c0fe2e9760c3e02b20942aa8b0633b543715ccf629119946c3f2970245f6446)
+++ kubernetes-manifests.tar.gz uploaded (sha512 = 26c653dd65408db5abe61a97d6654d313d4b70bf0136e8bcee2fc1afb19a3e2d26367b57816ec162a721b10dd699fddd4b28211866853b808a434660ddf5e860)

Hey, that was it! But I’m wondering how much $$ this is going to cost me, because stuff is getting created and downloaded…

API [compute.googleapis.com] not enabled on project [493331638462]. Would you like to enable and retry (this will take a few minutes)? (y/N)? y

Enabling service [compute.googleapis.com] on project [493331638462]…
Operation "operations/acf.p2-493331638462-3027f3b3-c93e-4a03-bdcb-f7767fa0406a" finished successfully.
...
Creating firewall...
Creating firewall...
IP aliases are disabled.
Creating firewall...
Found subnet for region us-central1 in network default: default
Starting master and configuring firewalls
Configuring firewall for apiserver konnectivity server
Creating firewall...
Creating firewall...
Created [https://www.googleapis.com/compute/v1/projects/this-is-the-way-witib/zones/us-central1-b/disks/kubernetes-master-pd].
..Created [https://www.googleapis.com/compute/v1/projects/this-is-the-way-witib/global/firewalls/kubernetes-default-internal-master].
NAME                  ZONE           SIZE_GB  TYPE    STATUS
kubernetes-master-pd  us-central1-b  20       pd-ssd  READY

Luckily, I still have my free trial!

Now, I have a public IP:

Looking for address 'kubernetes-master-ip'
Using master: kubernetes-master (external IP: x.x.x.x; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.

This will continually check to see if the API for kubernetes is reachable.
This may time out if there was some uncaught error during start up.

……….Kubernetes cluster created.
Cluster "this-is-the-way-witib_kubernetes" set.
User "this-is-the-way-witib_kubernetes" set.
Context "this-is-the-way-witib_kubernetes" created.
Switched to context "this-is-the-way-witib_kubernetes".
User "this-is-the-way-witib_kubernetes-basic-auth" set.
Wrote config for this-is-the-way-witib_kubernetes to /root/.kube/config

Kubernetes cluster is running. The master is running at:
https://x.x.x.x

The user name and password to use is located in /root/.kube/config.

And the nodes are all validated and the installation succeeds!

Validating gce cluster, MULTIZONE=
Project: this-is-the-way-witib
Network Project: this-is-the-way-witib
Zone: us-central1-b
No resources found
Waiting for 4 ready nodes. 0 ready nodes, 0 registered. Retrying.
No resources found
Waiting for 4 ready nodes. 0 ready nodes, 0 registered. Retrying.
Waiting for 4 ready nodes. 0 ready nodes, 2 registered. Retrying.
Found 4 node(s).
NAME                           STATUS                     ROLES    AGE   VERSION
kubernetes-master              Ready,SchedulingDisabled   <none>   24s   v1.23.1
kubernetes-minion-group-fmrn   Ready                      <none>   12s   v1.23.1
kubernetes-minion-group-g7l6   Ready                      <none>   21s   v1.23.1
kubernetes-minion-group-hf4j   Ready                      <none>   16s   v1.23.1
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-1               Healthy   {"health":"true","reason":""}
etcd-0               Healthy   {"health":"true","reason":""}
controller-manager   Healthy   ok
scheduler            Healthy   ok
Cluster validation succeeded
Done, listing cluster services:

Kubernetes control plane is running at https://x.x.x.x
GLBCDefaultBackend is running at https://x.x.x.x/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
CoreDNS is running at https://x.x.x.x/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://x.x.x.x/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Kubernetes binaries at /k8s/kubernetes/cluster/kubernetes/cluster/
You may want to add this directory to your PATH in $HOME/.profile
Installation successful!

One thing I don’t like is that I think I installed Kubernetes twice. I’m guessing I should have run the script from a different location (such as /k8s, where the existing kubernetes directory would have been found and there would have been no need to re-download). That may also be why I had to run get-kube.sh instead of kube-up.sh.

But I don’t think it’s a big enough problem to start over – yet – so I’ll keep going.

kubectl cluster-info runs successfully!

# kubectl cluster-info
Kubernetes control plane is running at https://x.x.x.x
GLBCDefaultBackend is running at https://x.x.x.x/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
CoreDNS is running at https://x.x.x.x/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://x.x.x.x/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
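Since the control plane answers, the usual kubectl sanity checks should also work at this point (not part of the install scripts, just standard commands; the nodes should match the validation output above):

# Quick health checks against the new cluster
kubectl get nodes -o wide
kubectl get pods -A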

When I try to navigate to the control plane, however:

{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}

Going back to when the script ran successfully, I saw this:

The user name and password to use is located in /root/.kube/config.

So, I cat that file and see this:

  user:
    password: #########
    username: admin

Now, the question is… how do I log in using that info? Or do I even need to?

Step by step

Here are the commands I ran on an initial installation of a fresh CentOS 8 VM.

# yum update -y
# curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
# curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
# echo "$(<kubectl.sha256) kubectl" | sha256sum --check
# sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
# kubectl version --client
# yum install -y git
# mkdir /k8s
# cd /k8s
# git clone https://github.com/kubernetes/kubernetes.git
# yum install -y python2 python3
# curl -O https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-367.0.0-linux-x86_64.tar.gz
# tar -xvf google-cloud-sdk-367.0.0-linux-x86_64.tar.gz --directory ~
# ~/google-cloud-sdk/install.sh
# ~/google-cloud-sdk/bin/gcloud init
# gcloud components install alpha
# gcloud components install beta
# cd cluster/
# ./get-kube.sh

I started to Google the error, but that looks like a deeper dive, and the initial point of this post was to successfully install kubectl – which it looks like I did!

So, next post will be… how to get this working?