# Learning Environment
To create a zero-cost learning experience, we use a local, low-resource, vanilla Kubernetes cluster created with minikube.
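Both of the approaches below assume the underlying virtualisation tool (Multipass or Podman) is already installed on the Mac. A minimal sketch, assuming Homebrew is available:

```sh
# Assumption: Homebrew is installed; pick whichever tool matches the approach you follow
brew install --cask multipass   # for the Multipass approach
brew install podman             # for the Podman approach
```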
## Multipass environment on Mac
Follow the steps below:
1. Create the cloud-init file based on the Toolkit init file. Name the file cli-tools-minikube.yaml and set the content to:
Note
This file needs to be kept in sync with the upstream Toolkit init file.
```yaml
## cloud-init to create a VM with the following:
##
## - terraform
## - terragrunt
## - git
## - jq
## - yq
## - oc
## - kubectl
## - helm
## - ibmcloud cli

apt:
  sources:
    helm.list:
      source: "deb https://baltocdn.com/helm/stable/debian/ all main"
      key: |
        -----BEGIN PGP PUBLIC KEY BLOCK-----
        mQINBF6yP7IBEADWk4aijQ7Vhj7wn2oz+8asnfzsD0+257qjWy1m+cN4RP6T2NBG
        S2M5+vzbsKNmGAja8jOpo46pHo/SCdc8Bwv+QHH+JbuBbDNEHwIBGV5p+ZRETiHq
        l8UsyUAPCWinKR6evZrANCBEzXtOEVJ4thuPoBuZkteKNTdPlOg9MBqD5zz+4iQX
        2CJJNW7+1sxAAVozHJxjJbu6c84yPvNFAiCAct+x5WJZFJWuO+l55vl6va8cV7tw
        DgHomk+1Q7w00Z0gh28Pe1yfvvw3N+pCSYn88mSgZtdP3wz3pABkMe4wMobNWuyX
        bIjGMuFDs7vGBY6UCL6alI/VC7rrSZqJZjntuoNI0Xlfc3BjUHWzinlbA7UFk5Lv
        qZO61V439Wm4x2n1V+4Kj/nPwtgBrNghaeDjxWLlgqaqynltSpXYnv2qGWYLRUb9
        WFymbYCJ0piqRdNVNNI8Ht9nFaya6qjDcIxFwFMF9QcrECG1HCK1M5JjdJpzr6Jq
        Z27/2ZG7DhabArSR5aoyBqhCylJfXugneDhitmilJiQd5EzefjiDO29EuBSMwkAs
        +IKg9jxGyI47m3+ITWhMDWTFTYBF/O69iKXfFvf4zrbfGMcf2w8vIOEBU3eTSYoY
        RhHXROedwcYpaVGJmsaT38QTSMqWTn12zlvmW5f6mEI5LQq398gN9eiWDwARAQAB
        tERIZWxtIGhvc3RlZCBieSBCYWx0byAoUmVwb3NpdG9yeSBzaWduaW5nKSA8Z3Bn
        c2VjdXJpdHlAZ2V0YmFsdG8uY29tPokCVAQTAQoAPhYhBIG/gy4vGc0qoEcZWSlK
        xIJ8GhaKBQJesj+yAhsvBQkSzAMABQsJCAcCBhUKCQgLAgQWAgMBAh4BAheAAAoJ
        EClKxIJ8GhaKPHEP/RRzvYCetoLeIj5FtedbeumGcWaJj97L4R1j7iK0dc0uvg0T
        5JeMDttAt69dFPHyB0kR1BLSwgJBhYCtvwalvD/g7DmL5l5HIM7o/VrkXDay1Pee
        wkCclA18y2wNM5EXKAuoFX5FMkRpTtSQhMMllbKsNNSvwvEZWvqMQlwJ/2HgNoVl
        2NtfY65UXHvIV2nTTmCVDq4OYBlHoUX5rRE7fOgFZ+u6Su7yopTYy13yY8ZVDNf/
        qNUWqA41gRYnwYtSq1DogHq1dcyr/SW/pFsn4n4LjG+38CIkSjFKOeusg2KPybZx
        l/z0/l0Yv4pTaa91rh1hGWqhvYDbLr2XqvI1wpcsIRPpU8lasycyQ8EeI4B5FVel
        ea2Z6rvGtMG92wVNCZ6YMYzpvRA9iRgve4J4ztlCwr0Tm78vY/vZfU5jkPW1VOXJ
        6nW/RJuc2mecuj8YpJtioNVPbfxE/CjCCnGEnqn511ZYqKGd+BctqoFlWeSihHst
        tuSqJoqjOmt75MuN6zUJ0s3Ao+tzCmYkQzn2LUwnYisioyTW4gMtlh/wsU6Rmims
        s5doyG2Mcc0QfstXLMthVkrBpbW4XT+Q6aTGUMlMv1BhKycDUmewI2AMNth5Hood
        iEt18+X26+Q2exojaMHOCdkUJ+C44XPDy6EvG4RyO4bILHz5obD/9QZO/lzK
        =BFdd
        -----END PGP PUBLIC KEY BLOCK-----

packages:
  - git
  - jq
  - helm
  - unzip
  - openvpn
  - ca-certificates
  - graphviz
  - ubuntu-desktop
  - xrdp
  - firefox

groups:
  - docker

snap:
  commands:
    - [install, docker]
    - [install, kubectl, --classic]

runcmd:
  - adduser ubuntu docker
  - mkdir -p /run/tmp
  - export TERRAFORM_VERSION=1.2.4 && curl -Lso /run/tmp/terraform.zip https://releases.hashicorp.com/terraform/${TERRAFORM_VERSION}/terraform_${TERRAFORM_VERSION}_linux_$(if [ `uname -m` = arm64 -o `uname -m` = aarch64 ]; then echo "arm64"; else echo "amd64"; fi).zip && mkdir -p /run/tmp/terraform && cd /run/tmp/terraform && unzip /run/tmp/terraform.zip && sudo mv ./terraform /usr/local/bin && cd - && rm -rf /run/tmp/terraform && rm /run/tmp/terraform.zip
  - export TERRAGRUNT_VERSION=0.36.10 && curl -sLo /run/tmp/terragrunt https://github.com/gruntwork-io/terragrunt/releases/download/v${TERRAGRUNT_VERSION}/terragrunt_linux_$(if [ `uname -m` = arm64 -o `uname -m` = aarch64 ]; then echo "arm64"; else echo "amd64"; fi) && chmod +x /run/tmp/terragrunt && sudo mv /run/tmp/terragrunt /usr/local/bin/terragrunt
  - export YQ_VERSION=4.25.2 && curl -Lso /run/tmp/yq "https://github.com/mikefarah/yq/releases/download/v${YQ_VERSION}/yq_linux_$(if [ `uname -m` = arm64 -o `uname -m` = aarch64 ]; then echo "arm64"; else echo "amd64"; fi)" && chmod +x /run/tmp/yq && sudo mv /run/tmp/yq /usr/local/bin/yq
  - curl -Lo /run/tmp/kubectl "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/$(if [ `uname -m` = arm64 -o `uname -m` = aarch64 ]; then echo "arm64"; else echo "amd64"; fi)/kubectl" && chmod +x /run/tmp/kubectl && sudo mv /run/tmp/kubectl /usr/local/bin
  - export OPENSHIFT_CLI_VERSION=4.10 && sudo curl -Lo /usr/local/oc-client.tar.gz https://mirror.openshift.com/pub/openshift-v4/$(if [ `uname -m` = arm64 -o `uname -m` = aarch64 ]; then echo "arm64"; else echo "amd64"; fi)/clients/ocp/stable-${OPENSHIFT_CLI_VERSION}/openshift-client-linux.tar.gz && sudo mkdir /usr/local/oc-client && cd /usr/local/oc-client && tar xzf /usr/local/oc-client.tar.gz && sudo mv ./oc /usr/local/bin && cd - && sudo rm -rf /usr/local/oc-client && sudo rm /usr/local/oc-client.tar.gz
  - curl -fsSL https://clis.cloud.ibm.com/install/linux | sh && ibmcloud plugin install container-service -f && ibmcloud plugin install container-registry -f && ibmcloud plugin install observe-service -f && ibmcloud plugin install vpc-infrastructure -f && ibmcloud config --check-version=false
  - curl -sL https://iascable.cloudnativetoolkit.dev/install.sh | sh
  - curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-$(if [ `uname -m` = arm64 -o `uname -m` = aarch64 ]; then echo "arm64"; else echo "amd64"; fi)
  - install minikube-linux-$(if [ `uname -m` = arm64 -o `uname -m` = aarch64 ]; then echo "arm64"; else echo "amd64"; fi) /usr/local/bin/minikube
```
2. Launch the Multipass environment:
Info
This step can take several minutes to complete
```sh
multipass launch --name cli-tools --cloud-init ./cli-tools-minikube.yaml --disk 60G --mem 6G --cpus 6 --timeout 600
```
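The launch command returns when the VM is up, but the cloud-init provisioning may still be running. An optional check (not part of the original steps), using the standard cloud-init status command inside the VM:

```sh
# Waits until cloud-init has finished, then reports success or failure
multipass exec cli-tools -- cloud-init status --wait
```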
3. Mount the parent directory into the Multipass environment:
```sh
multipass mount $PWD/.. cli-tools:/automation
```
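If you want to confirm the mount worked, list it from inside the VM (optional):

```sh
# The contents of the host parent directory should appear under /automation
multipass exec cli-tools -- ls /automation
```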
4. Enter the Multipass environment:

```sh
multipass shell cli-tools
```
5. Set a password for the ubuntu user for GUI access:

```sh
sudo passwd ubuntu
```

Enter the password you want twice, as prompted. This is the password you will need when using remote access tools to reach the GUI and a web browser within the Multipass environment.
6. Create the Kubernetes environment:

```sh
minikube start --driver=docker --addons=dashboard,dns,ingress
```
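Before exposing the dashboard it is worth confirming the cluster and add-ons came up; a quick optional sanity check:

```sh
# Overall cluster health
minikube status

# The dashboard add-on runs in its own namespace
kubectl get pods -n kubernetes-dashboard
```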
7. Expose the dashboard using the ingress:

```sh
cat <<EOF | kubectl create -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
spec:
  rules:
  - host: dashboard.info
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 80
EOF
```
8. Add an entry in /etc/hosts to resolve the dashboard URL. First find the IP address of the dashboard using the command:

```sh
oc get ingress -A
```

This will produce output similar to:

```
NAMESPACE              NAME                CLASS   HOSTS            ADDRESS        PORTS   AGE
kubernetes-dashboard   dashboard-ingress   nginx   dashboard.info   192.168.49.2   80      36s
```
Then edit the /etc/hosts file to map dashboard.info to the address shown for the dashboard:

```sh
sudo nano /etc/hosts
```

Add the line for the dashboard, so you end up with a file similar to:

```
# Your system has configured 'manage_etc_hosts' as True.
# As a result, if you wish for changes to this file to persist
# then you will need to either
# a.) make changes to the master file in /etc/cloud/templates/hosts.debian.tmpl
# b.) change or remove the value of 'manage_etc_hosts' in
#     /etc/cloud/cloud.cfg or cloud-config from user-data
#
127.0.1.1 cli-tools cli-tools
127.0.0.1 localhost
192.168.49.2 dashboard.info

# The following lines are desirable for IPv6 capable hosts
::1 localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
```
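With the hosts entry in place you can confirm the ingress is serving the dashboard before setting up the GUI; a quick optional check (assuming the ingress address has been assigned):

```sh
# A 200 OK response shows dashboard.info resolves and the ingress routes to the dashboard
curl -sI http://dashboard.info
```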
9. Open an RDP client, such as Microsoft Remote Desktop, and connect to the Multipass machine. Details on the options are provided in the Multipass documentation. (The ubuntu-desktop, xrdp and firefox packages were installed as part of the Multipass launch initialisation.)

When you are in the Ubuntu graphical user interface, launch Firefox and go to the URL http://dashboard.info. This should launch the Kubernetes dashboard.
10. Create your first bill of materials:

```sh
cat <<EOF > tutorial-bom.yaml
apiVersion: cloud.ibm.com/v1alpha1
kind: BillOfMaterial
metadata:
  name: tutorial
spec:
  modules:
    - name: cluster-login
EOF
```
11. Build the bill of materials using the command below.

Temporary workaround
The cluster login module is not yet published, so it needs to be added manually so that iascable can find it. Instead of the iascable command below this box, use this command:

```sh
iascable build -i tutorial-bom.yaml -c file:/automation/terraform-cluster-login/output/cluster-login/index.yaml
```

DON'T RUN THE COMMAND BELOW - that is for when this workaround can be removed. Move on to step 12.

```sh
iascable build -i tutorial-bom.yaml
```
12. Create the configuration needed to log into the cluster. Change the server_url to the address you used for the dashboard.info resolution in step 8:

```sh
cat <<EOF > output/tutorial/variables.yaml
variables:
  - name: server_url
    value: https://192.168.49.2:8443
  - name: cluster_login_user
    value: minikube
  - name: cluster_ca_cert
    value: $(base64 -w 0 -i ~/.minikube/ca.crt)
  - name: cluster_user_cert
    value: $(base64 -w 0 -i ~/.minikube/profiles/minikube/client.crt)
  - name: cluster_user_key
    value: $(base64 -w 0 -i ~/.minikube/profiles/minikube/client.key)
EOF
```
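If you want to confirm the server_url is reachable before applying, the Kubernetes API serves an anonymously accessible version endpoint by default; adjust the address if yours differs:

```sh
# -k skips TLS verification since the cluster uses a self-signed CA
curl -k https://192.168.49.2:8443/version
```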
13. Change to the output directory where the BOM was generated:

```sh
cd output/tutorial
```
14. Apply the bill of materials:

```sh
./apply.sh
```
## Podman on Mac
1. Start the podman machine:

```sh
podman machine init --cpus=4 --memory=6096 -v $HOME:$HOME --rootful --now
```
You need to know the details of the podman network to get minikube working:

```sh
podman network ls
```

The output looks similar to this:

```
NETWORK ID    NAME    DRIVER
2f259bab93aa  podman  bridge
```

Note the name of the network (podman here), then get the details of the network with the podman network inspect <network name> command:

```sh
podman network inspect podman
```

which generates output like the following:

```json
[
     {
          "name": "podman",
          "id": "2f259bab93aaaaa2542ba43ef33eb990d0999ee1b9924b557b7be53c0b7a1bb9",
          "driver": "bridge",
          "network_interface": "podman0",
          "created": "2022-10-19T16:57:22.965684025+01:00",
          "subnets": [
               {
                    "subnet": "10.88.0.0/16",
                    "gateway": "10.88.0.1"
               }
          ],
          "ipv6_enabled": false,
          "internal": false,
          "dns_enabled": false,
          "ipam_options": {
               "driver": "host-local"
          }
     }
]
```

The gateway address is needed when starting minikube: 10.88.0.1 in the example shown above.
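If jq is installed on the Mac, the gateway can be extracted directly instead of read by eye; a small sketch, assuming the default network name podman:

```sh
# Pull the gateway of the first subnet out of the network definition
podman network inspect podman | jq -r '.[0].subnets[0].gateway'
```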
2. Install minikube.

Todo
add instructions
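Until those instructions are written, a minimal sketch, assuming Homebrew is available on the Mac:

```sh
# Install the minikube CLI on the Mac host
brew install minikube
```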
3. Create the Kubernetes environment:

```sh
minikube start --driver=podman --apiserver-ips=127.0.0.1,10.88.0.1 --addons=dashboard,dns,ingress
```
Note
Change 10.88.0.1 in the above command to the gateway address of your podman network, found in step 1.
Once minikube is started you need to find which port the API is being served on. To do this, enter the command:

```sh
podman ps
```

This will generate output similar to:

```
CONTAINER ID  IMAGE                                COMMAND  CREATED         STATUS             PORTS                                                                                                                        NAMES
d3a7f23e854b  gcr.io/k8s-minikube/kicbase:v0.0.34           10 minutes ago  Up 10 minutes ago  0.0.0.0:32865->22/tcp, 0.0.0.0:40515->2376/tcp, 0.0.0.0:35427->5000/tcp, 0.0.0.0:39641->8443/tcp, 0.0.0.0:33853->32443/tcp  minikube
```

Look for the local port being mapped to container port 8443: 0.0.0.0:39641->8443/tcp in the example above. Within podman containers the minikube cluster API can be accessed at https://10.88.0.1:39641. Note down the port number on your system as you will need it in step 11.
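Rather than scanning the podman ps output, you can also ask podman for the published port directly (optional):

```sh
# Prints the host address and port mapped to container port 8443, e.g. 0.0.0.0:39641
podman port minikube 8443
```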
4. Expose the dashboard via the ingress:

```sh
cat <<EOF | kubectl create -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard-ingress
  namespace: kubernetes-dashboard
spec:
  rules:
  - host: dashboard.info
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 80
EOF
```
5. Add name resolution for the dashboard by adding 127.0.0.1 dashboard.info to the /etc/hosts file. Choose your preferred editor, but you will need to edit the file using sudo:

```sh
sudo nano /etc/hosts
```

Add the line

```
127.0.0.1 dashboard.info
```

to the bottom of the file, then Ctrl-o to write the file and Ctrl-x to quit the editor.
6. Expose the ingress outside the minikube container:

```sh
minikube tunnel
```

Enter your password when prompted, then press and hold the Ctrl key and press z. When the prompt returns, enter

```sh
bg
```

and press return. Your dashboard should now be available.
7. Access the dashboard at http://dashboard.info.

8. Create or navigate to a directory where you want to work, using the cd command. This must be within your home directory for podman containers to be able to access the directory. This will be referred to as the project directory (e.g. ~/project/minikube).
9. Create your first bill of materials:

```sh
cat <<EOF > tutorial-bom.yaml
apiVersion: cloud.ibm.com/v1alpha1
kind: BillOfMaterial
metadata:
  name: tutorial
spec:
  modules:
    - name: cluster-login
EOF
```
10. Build the bill of materials using the command below.

Temporary workaround
The cluster login module is not yet published, so it needs to be added manually so that iascable can find it. Place the terraform-cluster-login.tgz file in the same folder as the tutorial-bom.yaml file, then run the following command:

```sh
tar zxvf terraform-cluster-login.tgz
```

Instead of the iascable command below this box, use this command:

```sh
iascable build -i tutorial-bom.yaml -c file:${PWD}/terraform-cluster-login/output/cluster-login/index.yaml
```

Then make the terraform-cluster-login.tgz file available within the tools container by moving it into the output directory:

```sh
mv terraform-cluster-login.tgz output
```

DON'T RUN THE COMMAND BELOW - that is for when this workaround can be removed. Move on to step 11.

```sh
iascable build -i tutorial-bom.yaml
```
11. Create the configuration needed to log into the cluster. Change the server_url to the minikube API server address you discovered in step 3:

```sh
cat <<EOF > tutorial/variables.yaml
variables:
  - name: server_url
    value: https://10.88.0.1:39641
  - name: cluster_login_user
    value: minikube
  - name: cluster_ca_cert
    value: $(base64 -i ~/.minikube/ca.crt)
  - name: cluster_user_cert
    value: $(base64 -i ~/.minikube/profiles/minikube/client.crt)
  - name: cluster_user_key
    value: $(base64 -i ~/.minikube/profiles/minikube/client.key)
EOF
```
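To sanity-check the address and port before applying, the API version endpoint can be queried from the Mac host, where the same port is published on 127.0.0.1 (substitute the port you noted in step 3):

```sh
# -k skips TLS verification for the self-signed cluster certificate
curl -k https://127.0.0.1:39641/version
```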
12. Change to the output directory and launch into the cli-tools container:

```sh
cd output
./launch.sh podman
```

Temporary workaround
The cluster login module is not yet published, so it needs to be added manually into the correct place for it to be found when the Bill of Materials is applied:

```sh
sudo mkdir /automation
cd /automation
sudo tar zxvf /terraform/terraform-cluster-login.tgz
cd /terraform
```
13. Apply the bill of materials:

```sh
cd tutorial
./apply.sh
```
14. Set the kube config file to be able to access minikube within the cli-tools container:

```sh
export KUBECONFIG=/terraform/tutorial/terraform/.tmp/.kube/config
```
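A quick optional way to confirm the kube config works from inside the container:

```sh
# Should list the single minikube node
kubectl get nodes
```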