This post is one of a series about my journey in learning Kubernetes from the perspective of a total n00b. Feel free to suggest topics in the comments. For more posts in this series, click this link.
In my previous post called “This is the Way – My K8s Learning Journey, Part 2: Installing kubectl is a multi-round fight (Round 1)”, I failed miserably in my attempt to install kubectl.
And that’s OK!
That’s what this exercise is for: realism, learning, and discovery.
Previously, my cluster build failed here:
# ./kube-up.sh
...
Starting cluster in us-central1-b using provider gce
... calling verify-prereqs
... calling verify-kube-binaries
... calling verify-release-tars
... calling kube-up
Project: dark-garden-252515
Network Project: dark-garden-252515
Zone: us-central1-b
BucketNotFoundException: 404 gs://kubernetes-staging-afea54f323 bucket does not exist.
Creating gs://kubernetes-staging-afea54f323
Creating gs://kubernetes-staging-afea54f323/...
AccessDeniedException: 403 The project to be billed is associated with a closed billing account.
Using my superpower of deduction (and, well, reading comprehension), I figured the issue was probably related to the project I selected when I first set up gcloud for the kube-up.sh script.

Back then, I had selected a pre-existing project:
You are logged in as: [whyistheinternetbroken@gmail.com].

Pick cloud project to use:
 [1] dark-garden-252515
 [2] wise-dispatcher-252515
 [3] Create a new project
Please enter numeric choice or text value (must exactly match list item): 1
This time, I selected “Create a new project.”
Pick cloud project to use:
 [1] cv-solution-architect-lab
 [2] dark-garden-252515
 [3] versatile-hash-335321
 [4] wise-dispatcher-252515
 [5] Create a new project
Please enter numeric choice or text value (must exactly match list item): 5

Enter a Project ID. Note that a Project ID CANNOT be changed later.
Project IDs must be 6-30 characters (lowercase ASCII, digits, or hyphens) in length and start with a lowercase letter.
this-is-the-way-witib
Waiting for [operations/cp.8622655766389923424] to finish…done.
Your current project has been set to: [this-is-the-way-witib].
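Incidentally, the same step can be done without the interactive prompt. Here's a minimal sketch using the project ID from above; the gcloud calls are left as comments because they need an authenticated SDK, but the ID check itself runs anywhere:

```shell
# The project ID chosen in this post; any unique ID that fits Google's
# constraints (6-30 chars, lowercase letters/digits/hyphens, starting
# with a lowercase letter) works.
PROJECT_ID="this-is-the-way-witib"

# Validate the ID against those constraints before trying to create it.
echo "$PROJECT_ID" | grep -Eq '^[a-z][a-z0-9-]{5,29}$' || {
  echo "invalid project ID: $PROJECT_ID" >&2
  exit 1
}

# With a valid ID, the non-interactive equivalent of the prompt is:
#   gcloud projects create "$PROJECT_ID"
#   gcloud config set project "$PROJECT_ID"
```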
The kube-up.sh script, if you recall, wasn’t the right script to run directly; get-kube.sh is:
# ./kube-up.sh
…
Starting cluster in us-central1-b using provider gce
… calling verify-prereqs
… calling verify-kube-binaries
!!! kubectl appears to be broken or missing
Required release artifacts appear to be missing. Do you wish to download them? [Y/n] y
Can't determine Kubernetes release.
/k8s/kubernetes/cluster/get-kube-binaries.sh should only be run from a prebuilt Kubernetes release.
Did you mean to use get-kube.sh instead?
Now, when I run through get-kube.sh again… it still fails, but with a different error:
Project: this-is-the-way-witib
Network Project: this-is-the-way-witib
Zone: us-central1-b
BucketNotFoundException: 404 gs://kubernetes-staging-91a201ac9a bucket does not exist.
Creating gs://kubernetes-staging-91a201ac9a
Creating gs://kubernetes-staging-91a201ac9a/…
AccessDeniedException: 403 The project to be billed is associated with an absent billing account.
Maybe that means I need to associate the project with a billing account?
I went to my Google Cloud project and clicked “Billing” to see if that’s where the issue was:

I link the billing account:

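For the record, the same billing link can be made from the CLI instead of the console. A sketch, assuming gcloud's beta component is installed; the billing account ID below is a placeholder, and the actual commands are comments because they need an authenticated gcloud:

```shell
# Project from this post; billing account ID is a placeholder -- find
# the real one with the list command below.
PROJECT_ID="this-is-the-way-witib"
BILLING_ACCOUNT="XXXXXX-XXXXXX-XXXXXX"

# These require an authenticated gcloud with the beta component:
#   gcloud beta billing accounts list
#   gcloud beta billing projects link "$PROJECT_ID" \
#       --billing-account="$BILLING_ACCOUNT"
```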
Now, I re-run the script:
# ./get-kube.sh
'kubernetes' directory already exist. Should we skip download step and start to create cluster based on it? [Y]/n
y
Skipping download step.
Creating a kubernetes on gce…
…
Starting cluster in us-central1-b using provider gce
… calling verify-prereqs
… calling verify-kube-binaries
… calling verify-release-tars
… calling kube-up
Project: this-is-the-way-witib
Network Project: this-is-the-way-witib
Zone: us-central1-b
BucketNotFoundException: 404 gs://kubernetes-staging-91a201ac9a bucket does not exist.
Creating gs://kubernetes-staging-91a201ac9a
Creating gs://kubernetes-staging-91a201ac9a/…
+++ Staging tars to Google Storage: gs://kubernetes-staging-91a201ac9a/kubernetes-devel
+++ kubernetes-server-linux-amd64.tar.gz uploaded (sha512 = be1b895d3b6e7d36d6e8b9415cb9b52bb47d34a3cb289029a8bce8a26933c6cc5c0fe2e9760c3e02b20942aa8b0633b543715ccf629119946c3f2970245f6446)
+++ kubernetes-manifests.tar.gz uploaded (sha512 = 26c653dd65408db5abe61a97d6654d313d4b70bf0136e8bcee2fc1afb19a3e2d26367b57816ec162a721b10dd699fddd4b28211866853b808a434660ddf5e860)
Hey, that was it! But I’m wondering how much $$ this is going to cost me, because stuff is getting created and downloaded…
API [compute.googleapis.com] not enabled on project [493331638462]. Would you like to enable and retry (this will take a few minutes)? (y/N)? y
Enabling service [compute.googleapis.com] on project [493331638462]…
Operation "operations/acf.p2-493331638462-3027f3b3-c93e-4a03-bdcb-f7767fa0406a" finished successfully.
...
Creating firewall...
Creating firewall...
IP aliases are disabled.
Creating firewall...
Found subnet for region us-central1 in network default: default
Starting master and configuring firewalls
Configuring firewall for apiserver konnectivity server
Creating firewall...
Creating firewall...
Created [https://www.googleapis.com/compute/v1/projects/this-is-the-way-witib/zones/us-central1-b/disks/kubernetes-master-pd].
..Created [https://www.googleapis.com/compute/v1/projects/this-is-the-way-witib/global/firewalls/kubernetes-default-internal-master].
NAME                  ZONE           SIZE_GB  TYPE    STATUS
kubernetes-master-pd  us-central1-b  20       pd-ssd  READY
Luckily, I still have my free trial!
Now, I have a public IP:
Looking for address 'kubernetes-master-ip'
Using master: kubernetes-master (external IP: x.x.x.x; internal IP: (not set))
Waiting up to 300 seconds for cluster initialization.
This will continually check to see if the API for kubernetes is reachable.
This may time out if there was some uncaught error during start up.
……….Kubernetes cluster created.
Cluster "this-is-the-way-witib_kubernetes" set.
User "this-is-the-way-witib_kubernetes" set.
Context "this-is-the-way-witib_kubernetes" created.
Switched to context "this-is-the-way-witib_kubernetes".
User "this-is-the-way-witib_kubernetes-basic-auth" set.
Wrote config for this-is-the-way-witib_kubernetes to /root/.kube/config
Kubernetes cluster is running. The master is running at:
https://x.x.x.x
The user name and password to use is located in /root/.kube/config.
And the nodes are all validated and the installation succeeds!
Validating gce cluster, MULTIZONE=
Project: this-is-the-way-witib
Network Project: this-is-the-way-witib
Zone: us-central1-b
No resources found
Waiting for 4 ready nodes. 0 ready nodes, 0 registered. Retrying.
No resources found
Waiting for 4 ready nodes. 0 ready nodes, 0 registered. Retrying.
Waiting for 4 ready nodes. 0 ready nodes, 2 registered. Retrying.
Found 4 node(s).
NAME                           STATUS                     ROLES    AGE   VERSION
kubernetes-master              Ready,SchedulingDisabled   <none>   24s   v1.23.1
kubernetes-minion-group-fmrn   Ready                      <none>   12s   v1.23.1
kubernetes-minion-group-g7l6   Ready                      <none>   21s   v1.23.1
kubernetes-minion-group-hf4j   Ready                      <none>   16s   v1.23.1
Warning: v1 ComponentStatus is deprecated in v1.19+
Validate output:
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE                         ERROR
etcd-1               Healthy   {"health":"true","reason":""}
etcd-0               Healthy   {"health":"true","reason":""}
controller-manager   Healthy   ok
scheduler            Healthy   ok
Cluster validation succeeded
Done, listing cluster services:

Kubernetes control plane is running at https://x.x.x.x
GLBCDefaultBackend is running at https://x.x.x.x/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
CoreDNS is running at https://x.x.x.x/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://x.x.x.x/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Kubernetes binaries at /k8s/kubernetes/cluster/kubernetes/cluster/
You may want to add this directory to your PATH in $HOME/.profile
Installation successful!
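Side note: the validate output warns that v1 ComponentStatus is deprecated. On newer clusters, the API server's health endpoints are the supported replacement for checking control-plane health. A quick sketch; the kubectl calls are comments since they need a reachable cluster:

```shell
# ComponentStatus was deprecated in v1.19; the API server's health
# endpoints give per-check detail instead:
#   kubectl get --raw='/readyz?verbose'   # readiness of each component check
#   kubectl get --raw='/livez?verbose'    # liveness of each component check
READYZ_PATH="/readyz?verbose"   # endpoint referenced above
</READYZ
```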
One thing I don’t like is that I think I installed Kubernetes twice. I’m guessing I needed to run the script from a different directory (such as /k8s, where the existing kubernetes directory would have been found, with no need to re-download). This may be why I had to run get-kube.sh instead of kube-up.sh.
But I don’t think it’s a big enough problem to start over – yet – so I’ll keep going.
kubectl cluster-info runs successfully!
# kubectl cluster-info
Kubernetes control plane is running at https://x.x.x.x
GLBCDefaultBackend is running at https://x.x.x.x/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
CoreDNS is running at https://x.x.x.x/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://x.x.x.x/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
When I try to navigate to the control plane, however:
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "forbidden: User \"system:anonymous\" cannot get path \"/\"",
  "reason": "Forbidden",
  "details": {},
  "code": 403
}
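That 403 makes sense in hindsight: a plain browser request hits the API server as system:anonymous, with no credentials at all. Two common ways to authenticate instead, sketched here on the assumption that this cluster uses the basic-auth setup the script configured; the commands are comments because they need the live cluster:

```shell
# 1) Basic auth, with the credentials the installer wrote to
#    /root/.kube/config:
#      curl -k -u admin:<password> https://x.x.x.x/
#
# 2) Let kubectl handle authentication and proxy the API locally,
#    then browse it without credentials:
#      kubectl proxy --port=8080 &
#      curl http://127.0.0.1:8080/
PROXY_URL="http://127.0.0.1:8080/"   # local endpoint kubectl proxy would expose
```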
Going back to when the script ran successfully, I saw this:
The user name and password to use is located in /root/.kube/config.
So, I cat that file and see this:
user:
    password: #########
    username: admin
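To pull those values out of the file without eyeballing it, here's a quick sketch. The kubeconfig fragment below is a stand-in mirroring the shape of what the installer wrote; on the real VM you'd point KUBECONFIG_FILE at /root/.kube/config instead:

```shell
# Stand-in kubeconfig fragment (same shape as the real file; the
# password here is obviously fake).
KUBECONFIG_FILE=$(mktemp)
cat > "$KUBECONFIG_FILE" <<'EOF'
users:
- name: this-is-the-way-witib_kubernetes-basic-auth
  user:
    password: not-the-real-password
    username: admin
EOF

# Grab the first username/password pair found in the file.
K8S_USER=$(awk '/username:/ {print $2; exit}' "$KUBECONFIG_FILE")
K8S_PASS=$(awk '/password:/ {print $2; exit}' "$KUBECONFIG_FILE")
echo "user=$K8S_USER"   # prints "user=admin"
rm -f "$KUBECONFIG_FILE"
```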
Now, the question is… how do I log in using that info? Or do I even need to?

Step by step

Here are the commands I ran on an initial installation of a fresh CentOS 8 VM:

# yum update -y
# curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
# curl -LO "https://dl.k8s.io/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
# echo "$(<kubectl.sha256) kubectl" | sha256sum --check
# sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl
# kubectl version --client
# yum install -y git
# mkdir /k8s
# cd /k8s
# git clone https://github.com/kubernetes/kubernetes.git
# yum install -y python2 python3
# curl -O https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-367.0.0-linux-x86_64.tar.gz
# tar -xvf google-cloud-sdk-367.0.0-linux-x86_64.tar.gz --directory ~
# ~/google-cloud-sdk/install.sh
# ~/google-cloud-sdk/bin/gcloud init
# gcloud components install alpha
# gcloud components install beta
# cd cluster/
# ./get-kube.sh
I started to Google the error, but this looks like a deeper dive and the initial point of this post was to successfully install kubectl – which it looks like I did!
So, next post will be… how to get this working?