Kubernetes

Playing with Kubernetes (1): Installing Kubernetes Offline 2


D瓜哥
In Playing with Kubernetes (1): Installing Kubernetes Offline 1, D瓜哥 stood up a container image registry with scripts hacked out of Kubespray. Hacking the scripts every time is tedious, however, so exploring Kubespray's native support is the sounder long-term plan. After several rounds of exploration, Kubernetes can finally be installed offline through Kubespray's native support with almost no modification. The steps below were performed on a Mac; they are similar on Linux and other systems.

Installing the Python dependencies

When installing Kubernetes offline into virtual machines on a Mac, using the Mac itself as the container image registry and the binary download server is an excellent choice. A few basics have to be prepared first. Running Kubespray requires a Python environment and its dependencies, so install those first:

```shell
# Configure a PyPI mirror
pip config set global.index-url https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple

# Go to the directory above Kubespray
cd /PATH/TO/kubespray/..

# Install the Python dependencies
VENVDIR=kubespray-venv
KUBESPRAYDIR=kubespray
python3 -m venv $VENVDIR
source $VENVDIR/bin/activate
cd $KUBESPRAYDIR
pip install -U -r requirements.txt
```

Generating the image list and the binary file list

With the dependencies installed, generate the file lists:

```shell
# Generate the image list and the list of binaries to download
cd /PATH/TO/kubespray/contrib/offline
./generate_list.sh
```

Note: in most cases the installation target is Linux, so it is best to run this step on Linux, which yields a download list in Linux format. If it is run on a Mac, some of the entries will be in Mac format and cannot be used for a Linux installation.
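Kubespray's native offline support is driven by a set of variables, typically placed in `inventory/<cluster>/group_vars/all/offline.yml`. The exact variable names and URL layouts vary between Kubespray versions, so treat the following as a sketch to verify against your checkout, not a drop-in file; the registry and file-server addresses below are hypothetical placeholders for the Mac's services.

```yaml
# Sketch of offline.yml — variable names per Kubespray's offline docs;
# verify against your Kubespray version. Hosts below are hypothetical.
registry_host: "registry.local:5000"
files_repo: "http://files.local:8080"

# Redirect all image repositories to the private registry
kube_image_repo: "{{ registry_host }}"
gcr_image_repo: "{{ registry_host }}"
docker_image_repo: "{{ registry_host }}"
quay_image_repo: "{{ registry_host }}"

# Redirect binary downloads to the local file server
kubeadm_download_url: "{{ files_repo }}/kubernetes/{{ kube_version }}/kubeadm"
kubectl_download_url: "{{ files_repo }}/kubernetes/{{ kube_version }}/kubectl"
kubelet_download_url: "{{ files_repo }}/kubernetes/{{ kube_version }}/kubelet"
```

The lists produced by `generate_list.sh` tell you exactly which images to push to `registry_host` and which files to place under `files_repo` before running the playbook.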
Playing with Kubernetes (1): Installing Kubernetes Offline 1


D瓜哥
In Building a Development Environment with Docker (3): Distributed Tracing and a few other posts, D瓜哥 showed how to use Docker Compose to build an application-observability environment locally. That still does not feel like enough fun: in industry today, Kubernetes is the overwhelming mainstream. If you are going to play, play with the most challenging thing, with the skills and tools enterprises actually need. So the plan is to move that simple toy setup onto Kubernetes, rebuilt to enterprise standards.

To play with Kubernetes, the first problem is standing up a Kubernetes cluster. This should be a very simple matter, but for well-known reasons it has become quite challenging. After much exploration and many experiments, an "offline" installation approach turned out to work rather well.

This method is based on Kubespray, a complete installation toolchain maintained by a Kubernetes SIG. It supports installation both on bare metal and in cloud environments, and only a few copy-pasteable commands are needed to complete an installation — very suitable for beginners to play with.

All the software this installation method needs has been uploaded to GitHub; to download it, see: the Kubespray-2.26.0 installation package collection.

Building the server cluster

Vagrant is recommended for building the cluster. Paired with VirtualBox, a single configuration file is enough to spin up a Linux server cluster. The cluster's Vagrantfile is as follows:

```ruby
# -*- mode: ruby -*-
# vi: set ft=ruby :

# @author D瓜哥 · https://www.diguage.com/

# All Vagrant configuration is done below. The "2" in Vagrant.configure
# configures the configuration version (we support older styles for
# backwards compatibility). Please don't change it unless you know what
# you're doing.
Vagrant.configure("2") do |config|
  # The most common configuration options are documented and commented below.
  # For a complete reference, please see the online documentation at
  # https://docs.vagrantup.com

  # A three-node cluster
  (1..3).each do |i|
    config.vm.define "node#{i}" do |node|
      # Every Vagrant development environment requires a box. You can search for
      # boxes at https://vagrantcloud.com/search
      node.vm.box = "alvistack/ubuntu-24.04"
      node.vm.box_version = "20250210.0.0"

      # Set the VM's hostname
      node.vm.hostname = "node#{i}"

      config.vm.boot_timeout = 600

      # Disable automatic box update checking. If you disable this, then
      # boxes will only be checked for updates when the user runs
      # `vagrant box outdated`. This is not recommended.
      # config.vm.box_check_update = false

      # Create a forwarded port mapping which allows access to a specific port
      # within the machine from a port on the host machine. In the example below,
      # accessing "localhost:8080" will access port 80 on the guest machine.
      # NOTE: This will enable public access to the opened port
      # config.vm.network "forwarded_port", guest: 80, host: 8080

      # Create a forwarded port mapping which allows access to a specific port
      # within the machine from a port on the host machine and only allow access
      # via 127.0.0.1 to disable public access
      # config.vm.network "forwarded_port", guest: 80, host: 8080, host_ip: "127.0.0.1"

      # Create a private network, which allows host-only access to the machine
      # using a specific IP.
      # Set the VM's IP
      node.vm.network "private_network", ip: "10.0.2.#{20+i}", auto_config: true

      # Create a public network, which generally matched to bridged network.
      # Bridged networks make the machine appear as another physical device on
      # your network.
      # config.vm.network "public_network"

      # Share an additional folder to the guest VM. The first argument is
      # the path on the host to the actual folder. The second argument is
      # the path on the guest to mount the folder. And the optional third
      # argument is a set of non-required options.
      # Shared folder between host and VM; enable as needed
      node.vm.synced_folder "/path/to/#{i}", "/data"

      # Disable the default share of the current code directory. Doing this
      # provides improved isolation between the vagrant box and your host
      # by making sure your Vagrantfile isn't accessible to the vagrant box.
      # If you use this you may want to enable additional shared subfolders as
      # shown above.
      # config.vm.synced_folder ".", "/vagrant", disabled: true

      # Provider-specific configuration so you can fine-tune various
      # backing providers for Vagrant. These expose provider-specific options.
      # Example for VirtualBox:
      node.vm.provider "virtualbox" do |vb|
        # Set the VM's name
        # vb.name = "node#{i}"
        # if node.vm.hostname == "node1"
        #   # Display the VirtualBox GUI when booting the machine
        #   vb.gui = true
        # end

        # Customize the amount of memory on the VM:
        vb.memory = "6144"
        # Set the number of CPUs
        vb.cpus = 2
      end
      # View the documentation for the provider you are using for more
      # information on available options.

      # Enable provisioning with a shell script. Additional provisioners such as
      # Ansible, Chef, Docker, Puppet and Salt are also available. Please see the
      # documentation for more information about their specific syntax and use.
      # config.vm.provision "shell", inline: <<-SHELL
      #   sudo yum makecache --refresh
      #   sudo yum install -y tcpdump
      #   sudo yum install -y nc
      #   sudo yum install -y net-tools
      # SHELL
    end
  end
end
```
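Once the three VMs are up (10.0.2.21–23, per the Vagrantfile above), Kubespray needs an Ansible inventory describing them. A minimal sketch, assuming the standard Kubespray group names — check `inventory/sample` in your Kubespray checkout, as group names have changed between releases:

```ini
# Hypothetical inventory.ini for the three Vagrant nodes above
[kube_control_plane]
node1 ansible_host=10.0.2.21 ip=10.0.2.21

[etcd]
node1 ansible_host=10.0.2.21 ip=10.0.2.21

[kube_node]
node2 ansible_host=10.0.2.22 ip=10.0.2.22
node3 ansible_host=10.0.2.23 ip=10.0.2.23

[k8s_cluster:children]
kube_control_plane
kube_node
```

With this inventory in place, the cluster is installed by running Kubespray's `cluster.yml` playbook against it.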
killercoda CKA: Troubleshooting - 3


D瓜哥
1. Troubleshooting - Service Account, Role, RoleBinding Issue

You have a service account named dev-sa, a Role named dev-role-cka, and a RoleBinding named dev-role-binding-cka. We are trying to create, list, and get Pods and Services, but these operations fail when using the dev-sa service account. Fix this issue.

```shell
# @author D瓜哥 · https://www.diguage.com
$ kubectl get serviceaccounts dev-sa -o yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  creationTimestamp: "2025-01-22T09:48:06Z"
  name: dev-sa
  namespace: default
  resourceVersion: "2270"
  uid: 48b68f34-8c19-4477-9631-4f368f6ecc66

$ kubectl get role dev-role-cka
NAME           CREATED AT
dev-role-cka   2025-01-22T09:48:06Z

$ kubectl get role dev-role-cka -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: "2025-01-22T09:48:06Z"
  name: dev-role-cka
  namespace: default
  resourceVersion: "2271"
  uid: 7a011481-8edd-4417-a1b8-8d15290d3e9f
rules:
- apiGroups:
  - ""
  resources:
  - secrets
  verbs:
  - get

$ kubectl get rolebindings dev-role-binding-cka -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  creationTimestamp: "2025-01-22T09:48:07Z"
  name: dev-role-binding-cka
  namespace: default
  resourceVersion: "2272"
  uid: 888af489-86b6-4d38-a723-a8ff13656d2b
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: dev-role-cka
subjects:
- kind: ServiceAccount
  name: dev-sa
  namespace: default

# The Role only grants "get" on secrets; delete it and recreate it
$ kubectl delete role dev-role-cka --force --grace-period 0
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
role.rbac.authorization.k8s.io "dev-role-cka" force deleted

$ kubectl create role dev-role-cka --resource=pods,services --verb=create,list,get
role.rbac.authorization.k8s.io/dev-role-cka created

$ kubectl get role dev-role-cka -o yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  creationTimestamp: "2025-01-22T09:49:46Z"
  name: dev-role-cka
  namespace: default
  resourceVersion: "2414"
  uid: b3d7fc62-f029-4f4b-88a5-99ee9840af05
rules:
- apiGroups:
  - ""
  resources:
  - pods
  - services
  verbs:
  - create
  - list
  - get
```
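Deleting and recreating the Role imperatively works; the same fix can also be written declaratively. A sketch of the corrected Role manifest (the file name is hypothetical):

```yaml
# dev-role-cka.yaml — grants the verbs the exercise requires
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: dev-role-cka
  namespace: default
rules:
- apiGroups: [""]   # core API group: pods and services live here
  resources: ["pods", "services"]
  verbs: ["create", "list", "get"]
```

After applying it with `kubectl apply -f`, the result can be verified with `kubectl auth can-i list pods --as=system:serviceaccount:default:dev-sa`, which should print `yes`.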
killercoda CKA: Troubleshooting - 2


D瓜哥
1. Troubleshooting - Persistent Volume, Persistent Volume Claim Issue

The my-pvc PersistentVolumeClaim is stuck in the Pending state. Fix this issue and make sure it reaches the Bound state.

```shell
# @author D瓜哥 · https://www.diguage.com
$ kubectl get pvc my-pvc -o wide
NAME     STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE   VOLUMEMODE
my-pvc   Pending                                      standard       <unset>                 38s   Filesystem

$ kubectl get pv my-pv -o wide
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   VOLUMEATTRIBUTESCLASS   REASON   AGE   VOLUMEMODE
my-pv   100Mi      RWO            Retain           Available           standard       <unset>                          51s   Filesystem

$ kubectl get pvc my-pvc -o yaml | tee pv.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"my-pvc","namespace":"default"},"spec":{"accessModes":["ReadWriteMany"],"resources":{"requests":{"storage":"150Mi"}},"storageClassName":"standard"}}
  creationTimestamp: "2025-01-20T13:08:41Z"
  finalizers:
  - kubernetes.io/pvc-protection
  name: my-pvc
  namespace: default
  resourceVersion: "2002"
  uid: a4c6c044-4118-47a4-97b9-ceb69fac3bc2
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 150Mi
  storageClassName: standard
  volumeMode: Filesystem
status:
  phase: Pending

$ kubectl get pv my-pv -o yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"PersistentVolume","metadata":{"annotations":{},"name":"my-pv"},"spec":{"accessModes":["ReadWriteOnce"],"capacity":{"storage":"100Mi"},"hostPath":{"path":"/mnt/data"},"persistentVolumeReclaimPolicy":"Retain","storageClassName":"standard"}}
  creationTimestamp: "2025-01-20T13:08:41Z"
  finalizers:
  - kubernetes.io/pv-protection
  name: my-pv
  resourceVersion: "2003"
  uid: 85a371c4-0931-4b57-87ea-fc1fceb674c1
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 100Mi
  hostPath:
    path: /mnt/data
    type: ""
  persistentVolumeReclaimPolicy: Retain
  storageClassName: standard
  volumeMode: Filesystem

$ vim pv.yaml
# Two problems:
# 1. The PVC's accessModes does not match the PV's; change it to ReadWriteOnce.
# 2. The PVC requests 150Mi of storage, but the PV only has 100Mi; change it to 100Mi.

$ kubectl delete -f pv.yaml --force --grace-period 0
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
persistentvolumeclaim "my-pvc" force deleted

$ kubectl apply -f pv.yaml
persistentvolumeclaim/my-pvc created

$ kubectl get pvc my-pvc -o wide
NAME     STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   VOLUMEATTRIBUTESCLASS   AGE   VOLUMEMODE
my-pvc   Bound    my-pv    100Mi      RWO            standard       <unset>                 10s   Filesystem
```
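For reference, a sketch of the corrected PVC after the two edits described above, consistent with the my-pv shown earlier — a PVC can only bind to a PV whose access modes include the requested mode and whose capacity is at least the requested size:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce    # was ReadWriteMany; must be satisfiable by the PV
  resources:
    requests:
      storage: 100Mi # was 150Mi; may not exceed the PV's 100Mi capacity
  storageClassName: standard
```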
killercoda CKA: Troubleshooting - 1


D瓜哥
1. Troubleshooting - Pod Issue

The hello-kubernetes pod is not running; fix that issue.

```shell
# @author D瓜哥 · https://www.diguage.com
$ kubectl get pod
NAME               READY   STATUS              RESTARTS     AGE
hello-kubernetes   0/1     RunContainerError   2 (6s ago)   29s

$ kubectl describe pod hello-kubernetes
Name:             hello-kubernetes
Namespace:        default
Priority:         0
Service Account:  default
Node:             node01/172.30.2.2
Start Time:       Mon, 20 Jan 2025 07:21:57 +0000
Labels:           <none>
Annotations:      cni.projectcalico.org/containerID: 2e010161283b56bfd70d604c31ece3dc3189882f1e24c2ea57647dbaec3b2bdb
                  cni.projectcalico.org/podIP: 192.168.1.4/32
                  cni.projectcalico.org/podIPs: 192.168.1.4/32
Status:           Running
IP:               192.168.1.4
IPs:
  IP:  192.168.1.4
Containers:
  echo-container:
    Container ID:  containerd://4f01851fcb908cd7bd1031a1726b8b75873d69fb246a5eebdd5c3dc003be7c19
    Image:         redis
    Image ID:      docker.io/library/redis@sha256:ca65ea36ae16e709b0f1c7534bc7e5b5ac2e5bb3c97236e4fec00e3625eb678d
    Port:          <none>
    Host Port:     <none>
    Command:
      shell
      -c
      while true; do echo 'Hello Kubernetes'; sleep 5; done
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       StartError
      Message:      failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "shell": executable file not found in $PATH: unknown
      Exit Code:    128
      Started:      Thu, 01 Jan 1970 00:00:00 +0000
      Finished:     Mon, 20 Jan 2025 07:22:20 +0000
    Ready:          False
    Restart Count:  2
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xk5qj (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True
  Initialized                 True
  Ready                       False
  ContainersReady             False
  PodScheduled                True
Volumes:
  kube-api-access-xk5qj:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                From               Message
  ----     ------     ----               ----               -------
  Normal   Scheduled  41s                default-scheduler  Successfully assigned default/hello-kubernetes to node01
  Normal   Pulled     35s                kubelet            Successfully pulled image "redis" in 5.57s (5.57s including waiting). Image size: 45006722 bytes.
  Normal   Pulled     33s                kubelet            Successfully pulled image "redis" in 422ms (422ms including waiting). Image size: 45006722 bytes.
  Normal   Pulling    19s (x3 over 40s)  kubelet            Pulling image "redis"
  Normal   Created    18s (x3 over 35s)  kubelet            Created container echo-container
  Warning  Failed     18s (x3 over 34s)  kubelet            Error: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "shell": executable file not found in $PATH: unknown
  Normal   Pulled     18s                kubelet            Successfully pulled image "redis" in 467ms (467ms including waiting). Image size: 45006722 bytes.
  Warning  BackOff    6s (x4 over 32s)   kubelet            Back-off restarting failed container echo-container in pod hello-kubernetes_default(5a459cd4-866a-4e57-8d44-ae83156e1e0b)

$ kubectl get pod hello-kubernetes -o yaml | tee pod.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    cni.projectcalico.org/containerID: 2e010161283b56bfd70d604c31ece3dc3189882f1e24c2ea57647dbaec3b2bdb
    cni.projectcalico.org/podIP: 192.168.1.4/32
    cni.projectcalico.org/podIPs: 192.168.1.4/32
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"hello-kubernetes","namespace":"default"},"spec":{"containers":[{"command":["shell","-c","while true; do echo 'Hello Kubernetes'; sleep 5; done"],"image":"redis","name":"echo-container"}]}}
  creationTimestamp: "2025-01-20T07:21:57Z"
  name: hello-kubernetes
  namespace: default
  resourceVersion: "2157"
  uid: 5a459cd4-866a-4e57-8d44-ae83156e1e0b
spec:
  containers:
  - command:
    - shell
    - -c
    - while true; do echo 'Hello Kubernetes'; sleep 5; done
    image: redis
    imagePullPolicy: Always
    name: echo-container
    resources: {}
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-xk5qj
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: node01
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: kube-api-access-xk5qj
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
# The status field is omitted here

$ vim pod.yaml
# Per the error message, there is no "shell" executable; change it to "sh".

$ kubectl replace -f pod.yaml
Error from server (Conflict): error when replacing "pod.yaml": Operation cannot be fulfilled on pods "hello-kubernetes": the object has been modified; please apply your changes to the latest version and try again

# replace fails, so delete the pod and recreate it instead
$ kubectl delete -f pod.yaml --force --grace-period 0
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "hello-kubernetes" force deleted

$ kubectl apply -f pod.yaml
pod/hello-kubernetes created

$ kubectl get pod
NAME               READY   STATUS    RESTARTS   AGE
hello-kubernetes   1/1     Running   0          5s
```
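The essential change in pod.yaml is the first element of `command`; a sketch of the corrected container spec (only this fragment changes, the rest of the pod stays as dumped above):

```yaml
# Corrected fragment of pod.yaml
spec:
  containers:
  - name: echo-container
    image: redis
    command:
    - sh      # was "shell", which does not exist in the image's $PATH
    - -c
    - while true; do echo 'Hello Kubernetes'; sleep 5; done
```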
killercoda CKA: Workloads & Scheduling


D瓜哥
1. Workloads & Scheduling - Pod

A newcomer deployed a pod named my-pod, but while specifying the resource limits they mistakenly set a 100Mi memory limit instead of 50Mi. The node does not have sufficient resources, so change the limit to 50Mi.

```shell
# @author D瓜哥 · https://www.diguage.com
$ kubectl get pod my-pod -o yaml | tee pod.yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    cni.projectcalico.org/containerID: 8414bfefda21fa6ca74ef8d499c92a22ae6cc0dbb6d0bc4d82eb0129a795d75d
    cni.projectcalico.org/podIP: 192.168.1.4/32
    cni.projectcalico.org/podIPs: 192.168.1.4/32
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"my-pod","namespace":"default"},"spec":{"containers":[{"image":"nginx:latest","name":"my-container","resources":{"limits":{"memory":"100Mi"},"requests":{"memory":"50Mi"}}}]}}
  creationTimestamp: "2025-01-14T07:53:50Z"
  name: my-pod
  namespace: default
  resourceVersion: "2026"
  uid: fcf1e97e-cec0-45b0-b82d-766ad0c51823
spec:
  containers:
  - image: nginx:latest
    imagePullPolicy: Always
    name: my-container
    resources:
      limits:
        memory: 100Mi
      requests:
        memory: 50Mi
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: kube-api-access-thchj
      readOnly: true
  dnsPolicy: ClusterFirst
  enableServiceLinks: true
  nodeName: node01
  preemptionPolicy: PreemptLowerPriority
  priority: 0
  restartPolicy: Always
  schedulerName: default-scheduler
  securityContext: {}
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - name: kube-api-access-thchj
    projected:
      defaultMode: 420
      sources:
      - serviceAccountToken:
          expirationSeconds: 3607
          path: token
      - configMap:
          items:
          - key: ca.crt
            path: ca.crt
          name: kube-root-ca.crt
      - downwardAPI:
          items:
          - fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
            path: namespace
# The unneeded status field is omitted here

$ vim pod.yaml
# In the limits section, change 100Mi to 50Mi

$ kubectl delete -f pod.yaml --force --grace-period 0
Warning: Immediate deletion does not wait for confirmation that the running resource has been terminated. The resource may continue to run on the cluster indefinitely.
pod "my-pod" force deleted

$ kubectl apply -f pod.yaml
pod/my-pod created
```
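The only edit needed in pod.yaml is the memory limit; a running pod's resources normally cannot be edited in place, which is why the transcript deletes and re-applies the pod. The corrected fragment:

```yaml
# Corrected fragment of pod.yaml — only the resources block changes
resources:
  limits:
    memory: 50Mi   # was 100Mi
  requests:
    memory: 50Mi   # unchanged; requests must not exceed limits
```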
killercoda CKA: Storage


D瓜哥
1. Storage - Persistent Volume

Create a PersistentVolume (PV) named black-pv-cka with the following specifications:

- Volume Type: hostPath
- Path: /opt/black-pv-cka
- Capacity: 50Mi

```shell
# @author D瓜哥 · https://www.diguage.com
$ vim pv.yaml   # write the YAML file

$ cat pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: black-pv-cka
spec:
  capacity:
    storage: 50Mi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /opt/black-pv-cka

$ kubectl apply -f pv.yaml
persistentvolume/black-pv-cka created
```

2. Storage - Persistent Volume Claim
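The details of the claim task are cut off above. Purely as an illustration — the names and sizes below are hypothetical, not the exercise's — a PVC that could bind to the black-pv-cka PV just created might look like:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: black-pvc-cka    # hypothetical name
spec:
  accessModes:
  - ReadWriteOnce        # matches the PV's access mode
  resources:
    requests:
      storage: 50Mi      # must not exceed the PV's 50Mi capacity
  storageClassName: ""   # empty string: match a statically provisioned PV with no class
```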
killercoda CKA: Services & Networking


D瓜哥
1. Services & Networking - Services

You have an existing Nginx pod named nginx-pod. Perform the following steps:

- Expose nginx-pod internally within the cluster using a Service named nginx-service.
- Use port forwarding to the Service to access the welcome page of nginx-pod with the curl command.

```shell
# @author D瓜哥 · https://www.diguage.com
$ kubectl get pod --show-labels
NAME        READY   STATUS    RESTARTS   AGE     LABELS
nginx-pod   1/1     Running   0          8m48s   app=nginx

$ cat svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80

$ kubectl apply -f svc.yaml
service/nginx-service created

$ kubectl port-forward service/nginx-service 8081:80
Forwarding from 127.0.0.1:8081 -> 80
Forwarding from [::1]:8081 -> 80
Handling connection for 8081

# In another terminal
$ curl localhost:8081
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```
killercoda CKA: Architecture, Installation & Maintenance


D瓜哥
1. Architecture, Installation & Maintenance - Create Pod

Create a pod called sleep-pod using the nginx image, and have it sleep (using command) for any number of seconds.

```shell
# @author D瓜哥 · https://www.diguage.com
$ cat nginx.yaml
apiVersion: v1
kind: Pod
metadata:
  name: sleep-pod
spec:
  containers:
  - name: nginx
    image: nginx
    command:
    - sleep
    - "3600"

$ kubectl apply -f nginx.yaml
pod/sleep-pod created

$ kubectl get pod
NAME        READY   STATUS    RESTARTS   AGE
sleep-pod   1/1     Running   0          5s
```

Farewell, 2019; Welcome, 2020

D瓜哥
My 2019

At the end of 2018 I wrote a "2019 annual plan", a simple roadmap for 2019. The year passed like a white colt flashing past a crack — gone in a blink. Now, at year's end, it is time for a review. Although I kept the plan in mind all year, I never seriously reflected on it. Opening it again just now for this review, I found the year had passed so palely. So let me set the plan aside and simply go through the year in order, as things come to mind.

Reading

I read a number of books this year:

- 《小岛经济学》 (How an Economy Grows and Why It Crashes)
- The Little Prince (English edition)
- 《未来世界的幸存者》 (Survivors of the Future World)
- 《白夜行》 (Journey Under the Midnight Sun)
- The Godfather
- 《远见》 (The Long View)
- How to Read a Book
- 《文明之光》, Volume 1 (The Light of Civilization)
- The Count of Monte Cristo
- Kubernetes in Action
- Rich Dad Poor Dad
- 《巴菲特与索罗斯的投资习惯》 (The Winning Investment Habits of Warren Buffett & George Soros)
- 《幸福的方法》 (Happier)
- 《魔鬼聊天术》
- 《拆掉思维里的墙》

Let me pick a few to comment on briefly.

《小岛经济学》

How an Economy Grows and Why It Crashes was the first book I finished this year — a leftover from last year. The Austrian school it comes from advocates government non-intervention. The book gives a simple account of economic development: starting from a primitive age of manual labor, tools are gradually introduced and productivity rises; with higher productivity and a growing population, economic activity increases and barter begins; barter gives way to money, and banking emerges; productivity rises again, money multiplies, investment becomes necessary, and capital markets are born; as regional advantages emerge, resource integration becomes imperative, cross-border trade develops, currency exchange becomes necessary, and exchange rates are introduced; as capital markets boom, the buried mines explode — inflation, currency devaluation, market collapse, a great depression — and then a reset, and development continues... The whole arc is complete and clear, as if one had personally witnessed the grand sweep of past development. As an introductory economics text it is excellent. Having recently started Mankiw's Principles of Economics and then looked back at this book, it really is only an introduction. Perhaps my wits were not up to it, or my knowledge of economics too shallow, but once the story introduced exchange rates I no longer understood the subsequent development very thoroughly. After reading a book on finance, I hope to combine the two and understand it better.

《未来世界的幸存者》

I finished this book early in the year, and it shook me deeply. Let me paste my reflections from back then here as a summary.

The author thinks, from a technologist's standpoint, about how technology will change the world and what the future will look like. The author's conclusions match my own:

- With the rapid advance of technology, machines will inevitably take over most repetitive labor, displacing most workers.
- Most people will only be able to find low-end jobs, with wages barely enough to get by.
- Wealth polarization will grow ever more severe, and will be inherited; the poor will have no chance of turning their lives around.

To me, the outcome of technological progress is a mixed blessing. As an IT practitioner, genuinely using technology to change the world and bring everyone more convenience — this is a once-in-a-century opportunity. To be born in this great era, to take part in this great wave, to push the world's change forward with one's own hands: what good fortune! And the convenience technology brings people is hard to overstate — ordering food, shopping, hailing rides, entertainment, all without leaving home... But many people are unlucky: those without a marketable skill, or those who reach middle age and suddenly lose their jobs; and some — perhaps most of those at the bottom — lacking skills that technology cannot easily replace, may never rise again, idle and anxious all their days.

Technology's advance causes human unemployment, and what follows may look like Black Mirror, season 1, episode 2: everyone gets a bed surrounded by screens, watches whatever they want, does no work at all, earns points by exercising, and spends those points... A step further, and it may be Westworld, especially in entertainment. Humanity itself may well evolve one more step, into a carbon-plus-silicon hybrid: the carbon part is today's human (perhaps only the brain needs keeping), the silicon part is chips plus the equipment needed to sustain the brain and a mechanical body... And genetic modification will bring complete imbalance: humanity split into the modified and the originals.

《远见》

I came across this book through a recommendation while browsing Zhihu, so I found it and read it. How I wish I had met it sooner! If I had read it at college graduation, perhaps... A short excerpt as a summary.

The book divides a career into three stages:

- The first stage is the time to start strong. Your professional effort must focus on digging in and equipping yourself for the long road ahead. Your learning curve matters more than your position or title. In this stage, lay your career's foundation and build good early habits.
- The second stage is the time to focus on your long suit. The primary goal is to find your sweet spot: the intersection of what you are good at, what you love, and what the world needs. This is when you show yourself, stand out from the crowd, and find a way to walk steadily along the most rewarding career path. Focus on your strengths, and feel free to ignore your weaknesses.
- The third stage is devoted to achieving lasting influence, and to finding a new, sustainable career path that can carry steadily into your 60s and even 70s. Three key tasks must be completed in this stage: complete a succession plan, stay relevant, and ignite a new career flame for yourself.

In one sentence: stage one, add fuel and start strong; stage two, focus on your long suit and reach your peak; stage three, optimize the long tail and keep exerting influence. The three sources of career fuel are: transferable skills, meaningful experience, and enduring relationships.

Five numbers for building the right career mindset:

- The length of your career: subtract your current age from 62.
- The time needed to master a skill: how many hours does it take to reach "mastery" in some area?
- The share of personal wealth earned after 40: what percentage of your lifetime wealth will you earn after age 40? Most people estimate 60%.
- Social currency: how many friends do you have on social networks?
- The number of career supporters: how many people do you expect to meet in your "career heaven" — that is, how many people could bring real change to your career and your life?

My notes on this book have been published; interested readers, see: my reading notes on 《远见》.