
Setting up k3s on CentOS 7: from installation to deploying nginx

Introduction

On February 26, 2019, Rancher announced k3s, a lightweight distribution of Kubernetes.


For details, please see this article.

Some people have apparently already built it on a Raspberry Pi, but as a first try I set up a server and an agent on CentOS 7, deployed nginx, and confirmed that a curl request gets a response.

Environment

This time I built one server node and one agent node (as the logs below show, the server is 10.0.0.160 and the agent is 10.0.0.180, both running CentOS 7).

Server setup

The procedure simply follows the official page. Running the following command takes care of everything from installation to starting the service.

curl -sfL https://get.k3s.io | sh -

Logs like the following are displayed.

[root@ip-10-0-0-160 ~]# curl -sfL https://get.k3s.io | sh -
[INFO]  Creating uninstall script /usr/local/bin/k3s-uninstall.sh
[INFO]  Downloading https://github.com/rancher/k3s/releases/download/v0.1.0/sha256sum-amd64.txt
[INFO]  Downloading https://github.com/rancher/k3s/releases/download/v0.1.0/k3s
[INFO]  Verifying download
[INFO]  Installing k3s to /usr/local/bin/k3s
[INFO]  Creating /usr/local/bin/kubectl symlink to k3s
[INFO]  Creating /usr/local/bin/crictl symlink to k3s
[INFO]  systemd: Creating /etc/systemd/system/k3s.service
[INFO]  systemd: Enabling k3s unit
Created symlink from /etc/systemd/system/multi-user.target.wants/k3s.service to /etc/systemd/system/k3s.service.
[INFO]  systemd: Starting k3s
[root@ip-10-0-0-160 ~]# 

Let's verify that it is running.

# Check the service status
[root@ip-10-0-0-160 ~]# systemctl status -l k3s
● k3s.service - Lightweight Kubernetes
   Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2019-02-27 13:11:01 UTC; 39s ago
     Docs: https://k3s.io
  Process: 26706 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
  Process: 26692 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
 Main PID: 26712 (k3s-server)
    Tasks: 84
   Memory: 519.3M
   CGroup: /system.slice/k3s.service
           ├─26728 containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd
           ├─27047 containerd-shim -namespace k8s.io -workdir /var/lib/rancher/k3s/agent/containerd/io.containerd.runtime.v1.linux/k8s.io/87a6e2d1d8139324cc2f21c56dbb70b1dc6f1a39914f32277b59a8584ba688ab -address /run/k3s/containerd/containerd.sock -containerd-binary /var/lib/rancher/k3s/data/4df430e1473d0225734948e562863c82f20d658ed9c420c77e168aec42eccdb5/bin/containerd
           ├─27292 containerd-shim -namespace k8s.io -workdir /var/lib/rancher/k3s/agent/containerd/io.containerd.runtime.v1.linux/k8s.io/021bd92dd893fcac228603b4464d5bbce7ae48ec3b48b0354ae0def9eb24f29c -address /run/k3s/containerd/containerd.sock -containerd-binary /var/lib/rancher/k3s/data/4df430e1473d0225734948e562863c82f20d658ed9c420c77e168aec42eccdb5/bin/containerd
           ├─27862 containerd-shim -namespace k8s.io -workdir /var/lib/rancher/k3s/agent/containerd/io.containerd.runtime.v1.linux/k8s.io/a7bb126da2cfa0b8a0ef0b7f62a128465529d56793df2443d1592452df1765ef -address /run/k3s/containerd/containerd.sock -containerd-binary /var/lib/rancher/k3s/data/4df430e1473d0225734948e562863c82f20d658ed9c420c77e168aec42eccdb5/bin/containerd
           ├─28051 containerd-shim -namespace k8s.io -workdir /var/lib/rancher/k3s/agent/containerd/io.containerd.runtime.v1.linux/k8s.io/a80e0bd67827ef302f68a887a1254f217e134cf91470cb1b5495c39cab5e7e76 -address /run/k3s/containerd/containerd.sock -containerd-binary /var/lib/rancher/k3s/data/4df430e1473d0225734948e562863c82f20d658ed9c420c77e168aec42eccdb5/bin/containerd
           ├─28180 containerd-shim -namespace k8s.io -workdir /var/lib/rancher/k3s/agent/containerd/io.containerd.runtime.v1.linux/k8s.io/41e0cda9075dd80b9f66beb6920e1f9ed2c1f6d94a87e59ce2b5c4050a54d350 -address /run/k3s/containerd/containerd.sock -containerd-binary /var/lib/rancher/k3s/data/4df430e1473d0225734948e562863c82f20d658ed9c420c77e168aec42eccdb5/bin/containerd
           ├─28307 containerd-shim -namespace k8s.io -workdir /var/lib/rancher/k3s/agent/containerd/io.containerd.runtime.v1.linux/k8s.io/f1235ab531ee41c4048e0f8caebf205163e57c826de04a5376c9051e252844e7 -address /run/k3s/containerd/containerd.sock -containerd-binary /var/lib/rancher/k3s/data/4df430e1473d0225734948e562863c82f20d658ed9c420c77e168aec42eccdb5/bin/containerd
           └─28488 containerd-shim -namespace k8s.io -workdir /var/lib/rancher/k3s/agent/containerd/io.containerd.runtime.v1.linux/k8s.io/d543d67fe59ebbf048cb595cd89d1dadb0d0e919e2ecc4b9b5b627675c0a52c8 -address /run/k3s/containerd/containerd.sock -containerd-binary /var/lib/rancher/k3s/data/4df430e1473d0225734948e562863c82f20d658ed9c420c77e168aec42eccdb5/bin/containerd
           ‣ 26712 /usr/local/bin/k3s server

Feb 27 13:11:09 ip-10-0-0-160.us-west-2.compute.internal k3s[26712]: time="2019-02-27T13:11:09.646094506Z" level=info msg="Connecting to proxy" url="wss://localhost:6443/v1-k3s/connect"
Feb 27 13:11:09 ip-10-0-0-160.us-west-2.compute.internal k3s[26712]: time="2019-02-27T13:11:09.650274810Z" level=info msg="Handling backend connection request [ip-10-0-0-160.us-west-2.compute.internal]"
Feb 27 13:11:09 ip-10-0-0-160.us-west-2.compute.internal k3s[26712]: time="2019-02-27T13:11:09.651270063Z" level=warning msg="Disabling CPU quotas due to missing cpu.cfs_period_us"
Feb 27 13:11:09 ip-10-0-0-160.us-west-2.compute.internal k3s[26712]: time="2019-02-27T13:11:09.651314991Z" level=info msg="Running kubelet --healthz-bind-address 127.0.0.1 --read-only-port 0 --allow-privileged=true --cluster-domain cluster.local --kubeconfig /var/lib/rancher/k3s/agent/kubeconfig.yaml --eviction-hard imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --cgroup-driver cgroupfs --root-dir /var/lib/rancher/k3s/agent/kubelet --cert-dir /var/lib/rancher/k3s/agent/kubelet/pki --seccomp-profile-root /var/lib/rancher/k3s/agent/kubelet/seccomp --cni-conf-dir /var/lib/rancher/k3s/agent/etc/cni/net.d --cni-bin-dir /var/lib/rancher/k3s/data/4df430e1473d0225734948e562863c82f20d658ed9c420c77e168aec42eccdb5/bin --cluster-dns 10.43.0.10 --container-runtime remote --container-runtime-endpoint unix:///run/k3s/containerd/containerd.sock --address 127.0.0.1 --anonymous-auth=false --client-ca-file /var/lib/rancher/k3s/agent/client-ca.pem --hostname-override ip-10-0-0-160.us-west-2.compute.internal --cpu-cfs-quota=false"
Feb 27 13:11:09 ip-10-0-0-160.us-west-2.compute.internal k3s[26712]: Flag --allow-privileged has been deprecated, will be removed in a future version
Feb 27 13:11:09 ip-10-0-0-160.us-west-2.compute.internal k3s[26712]: time="2019-02-27T13:11:09.670323459Z" level=info msg="waiting for node ip-10-0-0-160.us-west-2.compute.internal: nodes \"ip-10-0-0-160.us-west-2.compute.internal\" not found"
Feb 27 13:11:11 ip-10-0-0-160.us-west-2.compute.internal k3s[26712]: time="2019-02-27T13:11:11.672082577Z" level=info msg="waiting for node ip-10-0-0-160.us-west-2.compute.internal CIDR not assigned yet"
Feb 27 13:11:13 ip-10-0-0-160.us-west-2.compute.internal k3s[26712]: time="2019-02-27T13:11:13.673690247Z" level=info msg="waiting for node ip-10-0-0-160.us-west-2.compute.internal CIDR not assigned yet"
Feb 27 13:11:15 ip-10-0-0-160.us-west-2.compute.internal k3s[26712]: time="2019-02-27T13:11:15.675320853Z" level=info msg="waiting for node ip-10-0-0-160.us-west-2.compute.internal CIDR not assigned yet"
Feb 27 13:11:17 ip-10-0-0-160.us-west-2.compute.internal k3s[26712]: time="2019-02-27T13:11:17.676902518Z" level=info msg="waiting for node ip-10-0-0-160.us-west-2.compute.internal CIDR not assigned yet"
[root@ip-10-0-0-160 ~]# 


# Check the kubectl command
[root@ip-10-0-0-160 ~]#  k3s kubectl get node
NAME                                       STATUS   ROLES    AGE   VERSION
ip-10-0-0-160.us-west-2.compute.internal   Ready    <none>   27s   v1.13.3-k3s.6
[root@ip-10-0-0-160 ~]# 
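
By the way, the install log above shows that the script creates a /usr/local/bin/kubectl symlink to k3s, so the plain kubectl command should behave the same way on the server (a minimal example, not captured output):

# kubectl is symlinked to k3s by the install script, so this should be equivalent to "k3s kubectl get node"
kubectl get node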

Agent setup

Next, set up the agent. This time I downloaded the k3s binary directly and placed it in /usr/bin.

[root@ip-10-0-0-180 ~]# wget https://github.com/rancher/k3s/releases/download/v0.1.0/k3s
[root@ip-10-0-0-180 ~]# chmod +x k3s 
[root@ip-10-0-0-180 ~]# mv ./k3s /usr/bin/k3s

# Confirm the k3s command works
[root@ip-10-0-0-180 ~]# k3s -h
NAME:
   k3s - Kubernetes, but small and simple

USAGE:
   k3s [global options] command [command options] [arguments...]

VERSION:
   dev (HEAD)

COMMANDS:
     server   Run management server
     agent    Run node agent
     kubectl  Run kubectl
     crictl   Run crictl
     help, h  Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --debug        Turn on debug logs
   --help, -h     show help
   --version, -v  print the version
[root@ip-10-0-0-180 ~]#

Next, use the k3s agent command to set up the agent node. For the token passed to the command, use the value written in /var/lib/rancher/k3s/server/node-token on the server.
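
For example, the token can be checked on the server just by reading that file:

# On the server: print the node join token to pass to "k3s agent --token ..."
cat /var/lib/rancher/k3s/server/node-token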

[root@ip-10-0-0-180 ~]# k3s agent --server https://10.0.0.160:6443 --token K10cb0d10675f99ff349678412de856285fdd3c6e183422510954ea9c1a0d8f0b96::node:cf80a0b3438f4d85fa31044e5b284652

INFO[2019-02-27T13:21:42.137288318Z] Starting k3s agent v0.1.0 (91251aa)          
INFO[2019-02-27T13:21:42.440462109Z] Logging containerd to /var/lib/rancher/k3s/agent/containerd/containerd.log 
INFO[2019-02-27T13:21:42.440595159Z] Running containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/containerd --root /var/lib/rancher/k3s/agent/containerd 
INFO[2019-02-27T13:21:42.441121820Z] Waiting for containerd startup: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial unix /run/k3s/containerd/containerd.sock: connect: no such file or directory" 
WARN[2019-02-27T13:21:43.445500783Z] failed to write value 1 at /proc/sys/net/bridge/bridge-nf-call-iptables: open /proc/sys/net/bridge/bridge-nf-call-iptables: no such file or directory 
INFO[2019-02-27T13:21:43.447338345Z] Connecting to wss://10.0.0.160:6443/v1-k3s/connect 
INFO[2019-02-27T13:21:43.447375378Z] Connecting to proxy                           url="wss://10.0.0.160:6443/v1-k3s/connect"
WARN[2019-02-27T13:21:43.453824555Z] Disabling CPU quotas due to missing cpu.cfs_period_us 
INFO[2019-02-27T13:21:43.453962443Z] Running kubelet --healthz-bind-address 127.0.0.1 --read-only-port 0 --allow-privileged=true --cluster-domain cluster.local --kubeconfig /var/lib/rancher/k3s/agent/kubeconfig.yaml --eviction-hard imagefs.available<5%,nodefs.available<5% --eviction-minimum-reclaim imagefs.available=10%,nodefs.available=10% --fail-swap-on=false --cgroup-driver cgroupfs --root-dir /var/lib/rancher/k3s/agent/kubelet --cert-dir /var/lib/rancher/k3s/agent/kubelet/pki --seccomp-profile-root /var/lib/rancher/k3s/agent/kubelet/seccomp --cni-conf-dir /var/lib/rancher/k3s/agent/etc/cni/net.d --cni-bin-dir /var/lib/rancher/k3s/data/4df430e1473d0225734948e562863c82f20d658ed9c420c77e168aec42eccdb5/bin --cluster-dns 10.43.0.10 --container-runtime remote --container-runtime-endpoint unix:///run/k3s/containerd/containerd.sock --address 127.0.0.1 --anonymous-auth=false --client-ca-file /var/lib/rancher/k3s/agent/client-ca.pem --hostname-override ip-10-0-0-180.us-west-2.compute.internal --cpu-cfs-quota=false 
Flag --allow-privileged has been deprecated, will be removed in a future version
W0227 13:21:43.454968   26662 server.go:194] WARNING: all flags other than --config, --write-config-to, and --cleanup are deprecated. Please begin using a config file ASAP.
W0227 13:21:43.463453   26662 proxier.go:493] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0227 13:21:43.465574   26662 proxier.go:493] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0227 13:21:43.467630   26662 proxier.go:493] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0227 13:21:43.469688   26662 proxier.go:493] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W0227 13:21:43.471776   26662 proxier.go:493] Failed to load kernel module nf_conntrack_ipv4 with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
INFO[2019-02-27T13:21:43.488293741Z] waiting for node ip-10-0-0-180.us-west-2.compute.internal: nodes "ip-10-0-0-180.us-west-2.compute.internal" not found 
W0227 13:21:43.488676   26662 node.go:103] Failed to retrieve node info: nodes "ip-10-0-0-180.us-west-2.compute.internal" not found
I0227 13:21:43.488693   26662 server_others.go:148] Using iptables Proxier.
W0227 13:21:43.488787   26662 proxier.go:314] invalid nodeIP, initializing kube-proxy with 127.0.0.1 as nodeIP
I0227 13:21:43.488849   26662 server_others.go:178] Tearing down inactive rules.
E0227 13:21:43.560292   26662 proxier.go:232] Error removing userspace rule: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-PORTALS-HOST'

Try `iptables -h' or 'iptables --help' for more information.
E0227 13:21:43.562894   26662 proxier.go:238] Error removing userspace rule: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-PORTALS-CONTAINER'

Try `iptables -h' or 'iptables --help' for more information.
E0227 13:21:43.570220   26662 proxier.go:246] Error removing userspace rule: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-NODEPORT-HOST'

Try `iptables -h' or 'iptables --help' for more information.
E0227 13:21:43.572657   26662 proxier.go:252] Error removing userspace rule: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-NODEPORT-CONTAINER'

Try `iptables -h' or 'iptables --help' for more information.
E0227 13:21:43.577197   26662 proxier.go:259] Error removing userspace rule: error checking rule: exit status 2: iptables v1.6.2: Couldn't find target `KUBE-NODEPORT-NON-LOCAL'

Try `iptables -h' or 'iptables --help' for more information.
I0227 13:21:43.587024   26662 server.go:464] Version: v1.13.3-k3s.6
I0227 13:21:43.597345   26662 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0227 13:21:43.597376   26662 conntrack.go:52] Setting nf_conntrack_max to 131072
I0227 13:21:43.613730   26662 conntrack.go:83] Setting conntrack hashsize to 32768
I0227 13:21:43.618286   26662 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0227 13:21:43.618341   26662 conntrack.go:103] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0227 13:21:43.618787   26662 config.go:202] Starting service config controller
I0227 13:21:43.618802   26662 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0227 13:21:43.618822   26662 config.go:102] Starting endpoints config controller
I0227 13:21:43.618828   26662 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0227 13:21:43.718998   26662 controller_utils.go:1034] Caches are synced for endpoints config controller
I0227 13:21:43.719093   26662 controller_utils.go:1034] Caches are synced for service config controller
E0227 13:21:43.765317   26662 proxier.go:1335] Failed to execute iptables-restore: exit status 2 (iptables-restore v1.6.2: Couldn't find target `KUBE-MARK-DROP'

Error occurred at line: 50
Try `iptables-restore -h' or 'iptables-restore --help' for more information.
)
I0227 13:21:44.111855   26662 server.go:393] Version: v1.13.3-k3s.6
I0227 13:21:44.116347   26662 server.go:630] --cgroups-per-qos enabled, but --cgroup-root was not specified.  defaulting to /
I0227 13:21:44.116556   26662 container_manager_linux.go:247] container manager verified user specified cgroup-root exists: []
I0227 13:21:44.116570   26662 container_manager_linux.go:252] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:remote CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/rancher/k3s/agent/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:false CPUCFSQuotaPeriod:100ms}
I0227 13:21:44.116647   26662 container_manager_linux.go:271] Creating device plugin manager: true
I0227 13:21:44.116747   26662 state_mem.go:36] [cpumanager] initializing new in-memory state store
I0227 13:21:44.118399   26662 kubelet.go:297] Watching apiserver
I0227 13:21:44.126379   26662 kuberuntime_manager.go:192] Container runtime containerd initialized, version: 1.2.3+unknown, apiVersion: v1alpha2
W0227 13:21:44.126807   26662 probe.go:271] Flexvolume plugin directory at /usr/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
I0227 13:21:44.127470   26662 server.go:946] Started kubelet
E0227 13:21:44.133436   26662 cri_stats_provider.go:320] Failed to get the info of the filesystem with mountpoint "/var/lib/rancher/k3s/agent/containerd/io.containerd.snapshotter.v1.overlayfs": unable to find data in memory cache.
E0227 13:21:44.133457   26662 kubelet.go:1229] Image garbage collection failed once. Stats initialization may not have completed yet: invalid capacity 0 on image filesystem
I0227 13:21:44.133717   26662 server.go:133] Starting to listen on 127.0.0.1:10250
I0227 13:21:44.134352   26662 server.go:318] Adding debug handlers to kubelet server.
I0227 13:21:44.135949   26662 fs_resource_analyzer.go:66] Starting FS ResourceAnalyzer
I0227 13:21:44.135977   26662 status_manager.go:152] Starting to sync pod status with apiserver
I0227 13:21:44.135992   26662 kubelet.go:1735] Starting kubelet main sync loop.
I0227 13:21:44.136002   26662 kubelet.go:1752] skipping pod synchronization - [container runtime status check may not have completed yet PLEG is not healthy: pleg has yet to be successful]
I0227 13:21:44.136218   26662 volume_manager.go:248] Starting Kubelet Volume Manager
I0227 13:21:44.137605   26662 desired_state_of_world_populator.go:130] Desired state populator starts to run
W0227 13:21:44.143794   26662 container.go:409] Failed to create summary reader for "/system.slice": none of the resources are being tracked.
W0227 13:21:44.143956   26662 container.go:409] Failed to create summary reader for "/system.slice/rsyslog.service": none of the resources are being tracked.
W0227 13:21:44.144089   26662 container.go:409] Failed to create summary reader for "/system.slice/postfix.service": none of the resources are being tracked.
W0227 13:21:44.144351   26662 container.go:409] Failed to create summary reader for "/system.slice/irqbalance.service": none of the resources are being tracked.
W0227 13:21:44.144486   26662 container.go:409] Failed to create summary reader for "/system.slice/polkit.service": none of the resources are being tracked.
W0227 13:21:44.144706   26662 container.go:409] Failed to create summary reader for "/system.slice/tuned.service": none of the resources are being tracked.
W0227 13:21:44.144837   26662 container.go:409] Failed to create summary reader for "/system.slice/network.service": none of the resources are being tracked.
W0227 13:21:44.144964   26662 container.go:409] Failed to create summary reader for "/system.slice/systemd-logind.service": none of the resources are being tracked.
W0227 13:21:44.145097   26662 container.go:409] Failed to create summary reader for "/system.slice/system-getty.slice": none of the resources are being tracked.
W0227 13:21:44.145353   26662 container.go:409] Failed to create summary reader for "/system.slice/systemd-journald.service": none of the resources are being tracked.
W0227 13:21:44.145484   26662 container.go:409] Failed to create summary reader for "/system.slice/dbus.service": none of the resources are being tracked.
W0227 13:21:44.145622   26662 container.go:409] Failed to create summary reader for "/system.slice/rpcbind.service": none of the resources are being tracked.
W0227 13:21:44.146228   26662 container.go:409] Failed to create summary reader for "/system.slice/sshd.service": none of the resources are being tracked.
W0227 13:21:44.146424   26662 container.go:409] Failed to create summary reader for "/system.slice/chronyd.service": none of the resources are being tracked.
W0227 13:21:44.146552   26662 container.go:409] Failed to create summary reader for "/system.slice/system-serial\\x2dgetty.slice": none of the resources are being tracked.
W0227 13:21:44.146681   26662 container.go:409] Failed to create summary reader for "/system.slice/auditd.service": none of the resources are being tracked.
W0227 13:21:44.146898   26662 container.go:409] Failed to create summary reader for "/system.slice/gssproxy.service": none of the resources are being tracked.
I0227 13:21:44.156419   26662 kubelet_node_status.go:267] Setting node annotation to enable volume controller attach/detach
I0227 13:21:44.160912   26662 cpu_manager.go:155] [cpumanager] starting with none policy
I0227 13:21:44.160924   26662 cpu_manager.go:156] [cpumanager] reconciling every 10s
I0227 13:21:44.160934   26662 policy_none.go:42] [cpumanager] none policy: Start
W0227 13:21:44.183073   26662 manager.go:527] Failed to retrieve checkpoint for "kubelet_internal_checkpoint": checkpoint is not found
E0227 13:21:44.186324   26662 eviction_manager.go:243] eviction manager: failed to get summary stats: failed to get node info: node "ip-10-0-0-180.us-west-2.compute.internal" not found
I0227 13:21:44.236439   26662 kubelet_node_status.go:267] Setting node annotation to enable volume controller attach/detach
E0227 13:21:44.236441   26662 kubelet.go:2167] node "ip-10-0-0-180.us-west-2.compute.internal" not found
I0227 13:21:44.238401   26662 kubelet_node_status.go:70] Attempting to register node ip-10-0-0-180.us-west-2.compute.internal
I0227 13:21:44.240852   26662 kubelet_node_status.go:73] Successfully registered node ip-10-0-0-180.us-west-2.compute.internal
I0227 13:21:44.247515   26662 kuberuntime_manager.go:930] updating runtime config through cri with podcidr 10.42.1.0/24
I0227 13:21:44.247888   26662 kubelet_network.go:69] Setting Pod CIDR:  -> 10.42.1.0/24
I0227 13:21:44.337799   26662 reconciler.go:154] Reconciler: start to sync state
I0227 13:21:45.492100   26662 flannel.go:89] Determining IP address of default interface
I0227 13:21:45.492367   26662 flannel.go:99] Using interface with name eth0 and address 10.0.0.180
I0227 13:21:45.493439   26662 kube.go:127] Waiting 10m0s for node controller to sync
I0227 13:21:45.493468   26662 kube.go:306] Starting kube subnet manager
I0227 13:21:46.493677   26662 kube.go:134] Node controller sync successful
I0227 13:21:46.493769   26662 vxlan.go:120] VXLAN config: VNI=1 Port=0 GBP=false DirectRouting=false
I0227 13:21:46.512663   26662 flannel.go:75] Wrote subnet file to /run/flannel/subnet.env
I0227 13:21:46.512681   26662 flannel.go:79] Running backend.
I0227 13:21:46.512688   26662 vxlan_network.go:60] watching for new subnet leases
I0227 13:21:46.514167   26662 iptables.go:145] Some iptables rules are missing; deleting and recreating rules
I0227 13:21:46.514179   26662 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN
I0227 13:21:46.514649   26662 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully
I0227 13:21:46.515118   26662 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.1.0/24 -j RETURN
I0227 13:21:46.515591   26662 iptables.go:167] Deleting iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully
I0227 13:21:46.516027   26662 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -d 10.42.0.0/16 -j RETURN
I0227 13:21:46.517217   26662 iptables.go:145] Some iptables rules are missing; deleting and recreating rules
I0227 13:21:46.517229   26662 iptables.go:167] Deleting iptables rule: -s 10.42.0.0/16 -j ACCEPT
I0227 13:21:46.517742   26662 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE --random-fully
I0227 13:21:46.518320   26662 iptables.go:167] Deleting iptables rule: -d 10.42.0.0/16 -j ACCEPT
I0227 13:21:46.518807   26662 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.1.0/24 -j RETURN
I0227 13:21:46.519592   26662 iptables.go:155] Adding iptables rule: -s 10.42.0.0/16 -j ACCEPT
I0227 13:21:46.520047   26662 iptables.go:155] Adding iptables rule: ! -s 10.42.0.0/16 -d 10.42.0.0/16 -j MASQUERADE --random-fully
I0227 13:21:46.521296   26662 iptables.go:155] Adding iptables rule: -d 10.42.0.0/16 -j ACCEPT

The log shows quite a few errors, but running the command on the server side confirms that the agent node has been registered properly.

[root@ip-10-0-0-160 ~]# k3s kubectl get nodes
NAME                                       STATUS   ROLES    AGE     VERSION
ip-10-0-0-160.us-west-2.compute.internal   Ready    <none>   13m     v1.13.3-k3s.6
ip-10-0-0-180.us-west-2.compute.internal   Ready    <none>   2m56s   v1.13.3-k3s.6
[root@ip-10-0-0-160 ~]# 

Deploying an nginx container

Finally, deploy an nginx container and verify it with curl.

# Deploy nginx
[root@ip-10-0-0-160 ~]# k3s kubectl run nginx --image=nginx:latest
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx created
[root@ip-10-0-0-160 ~]# 
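
As the warning above notes, this form of kubectl run is deprecated. To avoid it, a rough alternative along the lines the message suggests is kubectl create (a sketch; the labels it generates may differ from the run=nginx labels shown below):

# Alternative sketch: create the Deployment without the deprecated generator
k3s kubectl create deployment nginx --image=nginx:latest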

# Check the Deployment
[root@ip-10-0-0-160 ~]# k3s kubectl get deployments
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   1/1     1            1           22s

[root@ip-10-0-0-160 ~]# k3s kubectl describe deployment nginx
Name:                   nginx
Namespace:              default
CreationTimestamp:      Wed, 27 Feb 2019 13:34:11 +0000
Labels:                 run=nginx
Annotations:            deployment.kubernetes.io/revision: 1
Selector:               run=nginx
Replicas:               1 desired | 1 updated | 1 total | 1 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  run=nginx
  Containers:
   nginx:
    Image:        nginx:latest
    Port:         <none>
    Host Port:    <none>
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   nginx-585fddf4b (1/1 replicas created)
Events:
  Type    Reason             Age   From                   Message
  ----    ------             ----  ----                   -------
  Normal  ScalingReplicaSet  74s   deployment-controller  Scaled up replica set nginx-585fddf4b to 1
[root@ip-10-0-0-160 ~]#

# Expose it outside the cluster
[root@ip-10-0-0-160 ~]# k3s kubectl expose deployment/nginx --type=NodePort --port=80
service/nginx exposed
[root@ip-10-0-0-160 ~]# 
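
For reference, the expose command should produce a Service roughly equivalent to the hand-written sketch below (not the actual object that was generated; the run=nginx selector comes from the Deployment labels shown above):

# Rough declarative equivalent of the expose command (sketch)
cat <<'EOF' | k3s kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    run: nginx
  ports:
  - port: 80
    targetPort: 80
EOF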

# Check the Service
[root@ip-10-0-0-160 ~]# k3s kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.43.0.1       <none>        443/TCP        23m
nginx        NodePort    10.43.153.128   <none>        80:30936/TCP   9s
[root@ip-10-0-0-160 ~]# 
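
The NodePort (30936 here) is assigned automatically from the default 30000-32767 range, so it will differ per environment. A quick way to look it up without reading the table output (a small sketch using standard kubectl jsonpath):

# Print only the NodePort assigned to the nginx Service
k3s kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}'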

# curl against the agent node's NodePort
[root@ip-10-0-0-160 ~]# curl http://10.0.0.180:30936
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
[root@ip-10-0-0-160 ~]# 

References

Rancher Labs launches "k3s", an OSS project for a lightweight Kubernetes distribution for the edge

k3s webinar video

Issue about how to pronounce "k3s"