"Instant access policy updates" is a mouthful of a name when written out in katakana, but in short it is a feature that applies changes to an Amazon ES "access policy" immediately, without rebuilding the cluster. As the release notes state, it works with every Elasticsearch version, which is also nice. Personally, I think it is the best update Amazon ES has ever shipped ❗️
You will incur charges as you go through these workshop guides, because they exceed the limits of the AWS Free Tier. An estimate of the charges (<$20/day) can be obtained from the AWS Simple Monthly Calculator.
$ which kubectl
/usr/local/bin/kubectl
$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
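The error above only means kubectl has no context configured yet, so it falls back to localhost:8080. With kops, the usual fix is to (re)export the kubeconfig. A minimal sketch, guarded so it degrades to an echo when no kops state store is reachable (the cluster name is taken from the kops output below):

```shell
# kubectl falls back to http://localhost:8080 when no kubeconfig context
# is configured yet -- hence the "connection refused" above.
CLUSTER_NAME="example.cluster.k8s.local"   # name from the kops output below
if command -v kops >/dev/null 2>&1 && kops export kubecfg "$CLUSTER_NAME" >/dev/null 2>&1; then
  kubectl config current-context           # should now print the cluster name
else
  echo "kops unavailable here; would run: kops export kubecfg $CLUSTER_NAME"
fi
```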
$ kops validate cluster
Using cluster from kubectl context: example.cluster.k8s.local
Validating cluster example.cluster.k8s.local
INSTANCE GROUPS
NAME ROLE MACHINETYPE MIN MAX SUBNETS
master-us-east-1a Master m3.medium 1 1 us-east-1a
master-us-east-1b Master m3.medium 1 1 us-east-1b
master-us-east-1c Master c4.large 1 1 us-east-1c
nodes Node t2.medium 5 5 us-east-1a,us-east-1b,us-east-1c,us-east-1d,us-east-1e,us-east-1f
NODE STATUS
NAME ROLE READY
ip-172-20-126-120.ec2.internal node True
ip-172-20-127-95.ec2.internal master True
ip-172-20-154-201.ec2.internal node True
ip-172-20-187-147.ec2.internal node True
ip-172-20-206-67.ec2.internal node True
ip-172-20-48-37.ec2.internal node True
ip-172-20-52-159.ec2.internal master True
ip-172-20-76-97.ec2.internal master True
Pod Failures in kube-system
NAME
kube-dns-7f56f9f8c7-qxxbd
Validation Failed
Ready Master(s) 3 out of 3.
Ready Node(s) 5 out of 5.
your kube-system pods are NOT healthy example.cluster.k8s.local
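The validation failure above comes from a single unhealthy kube-dns pod. A sketch of how one might dig into it, guarded so it is a no-op without a reachable cluster (the pod name is from the validate output above; the `kubedns` container name is an assumption about the kube-dns Deployment):

```shell
NS="kube-system"
POD="kube-dns-7f56f9f8c7-qxxbd"   # name from the validate output above
if kubectl get nodes >/dev/null 2>&1; then
  kubectl describe pod "$POD" -n "$NS"     # events: scheduling, probes, image pulls
  kubectl logs "$POD" -n "$NS" -c kubedns  # container name is an assumption
else
  echo "no cluster reachable here; would describe $NS/$POD"
fi
```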
After building the cluster, I checked the kubectl context.
$ kubectl config get-contexts
CURRENT NAME CLUSTER AUTHINFO NAMESPACE
* example.cluster.k8s.local example.cluster.k8s.local example.cluster.k8s.local
$ kubectl config current-context
example.cluster.k8s.local
$ kubectl run nginx --image=nginx
deployment.apps "nginx" created
$ kubectl get deployments
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
nginx 1 1 1 1 10s
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-7c87f569d-vwxjw 1/1 Running 0 30s
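`kubectl run` created a Deployment with a single replica; scaling it up is one more command. A guarded sketch (the `run=nginx` label is what `kubectl run` sets by default):

```shell
DEPLOY="nginx"
REPLICAS=3
if kubectl get nodes >/dev/null 2>&1; then
  kubectl scale deployment "$DEPLOY" --replicas="$REPLICAS"
  kubectl get pods -l run="$DEPLOY"   # run=<name> label is added by kubectl run
else
  echo "no cluster reachable here; would scale $DEPLOY to $REPLICAS replicas"
fi
```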
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
secret "kubernetes-dashboard-certs" created
serviceaccount "kubernetes-dashboard" created
role.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" created
rolebinding.rbac.authorization.k8s.io "kubernetes-dashboard-minimal" created
deployment.apps "kubernetes-dashboard" created
service "kubernetes-dashboard" created
$ kubectl apply -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/k8s-daemonset/k8s/linkerd.yml
configmap "l5d-config" created
daemonset.extensions "l5d" created
service "l5d" created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
l5d-bd6w4 2/2 Running 0 28s
l5d-l47gd 2/2 Running 0 28s
l5d-ldhxd 2/2 Running 0 28s
l5d-t6c2k 2/2 Running 0 28s
l5d-vrhm7 2/2 Running 0 28s
$ kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 100.64.0.1 <none> 443/TCP 32m
l5d LoadBalancer 100.65.92.151 a2a9d866b34c4... 4140:31859/TCP,4141:32072/TCP,9990:30369/TCP 48s
$ LINKERD_ELB=$(kubectl get svc l5d -o jsonpath="{.status.loadBalancer.ingress[0].*}")
$ echo http://$LINKERD_ELB:9990 # access this URL in a browser
Next, as a Linkerd demo, I launched two microservices, hello and world, and had them send requests to each other.
$ kubectl apply -f https://raw.githubusercontent.com/linkerd/linkerd-examples/master/k8s-daemonset/k8s/hello-world.yml
replicationcontroller "hello" created
service "hello" created
replicationcontroller "world-v1" created
service "world-v1" created
$ http_proxy=$LINKERD_ELB:4140 curl -s http://hello
Hello (100.96.3.5) world (100.96.3.6)!!
Sure enough, the requests between the microservices could now be monitored in Linkerd.
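Linkerd 1.x can also override routing per request via the `l5d-dtab` header, which is handy for testing a new version of a service. A sketch, assuming a `world-v2` service like the one shipped in the same linkerd-examples repo, guarded so it degrades to an echo when `LINKERD_ELB` is unset:

```shell
DTAB="/host/world => /srv/world-v2"   # world-v2 is an assumption (linkerd-examples demo)
if [ -n "${LINKERD_ELB:-}" ]; then
  # route only this request's calls to `world` over to world-v2
  http_proxy="$LINKERD_ELB:4140" curl -s -H "l5d-dtab: $DTAB" http://hello
else
  echo "LINKERD_ELB not set; would send l5d-dtab: $DTAB"
fi
```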
Istio
Next up is Istio, which I likewise installed with kubectl apply.
$ curl -L https://git.io/getLatestIstio | sh -
$ cd istio-0.6.0
$ istioctl version
Version: 0.6.0
GitRevision: 2cb09cdf040a8573330a127947b11e5082619895
User: root@a28f609ab931
Hub: docker.io/istio
GolangVersion: go1.9
BuildStatus: Clean
$ kubectl apply -f install/kubernetes/istio.yaml
$ kubectl get all --namespace istio-system
NAME READY STATUS RESTARTS AGE
istio-ca-cd9dfbdbb-vc8mb 1/1 Running 0 11s
istio-ingress-84c7ddcb7f-g9kmb 0/1 ContainerCreating 0 12s
istio-mixer-67d9bd59cb-8swld 0/3 ContainerCreating 0 16s
istio-pilot-5dd75b8f7f-qfp22 0/2 ContainerCreating 0 13s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingress LoadBalancer 100.66.156.34 a858a858a34c5... 80:32168/TCP,443:31401/TCP 13s
istio-mixer ClusterIP 100.69.49.254 <none> 9091/TCP,15004/TCP,9093/TCP,9094/TCP,9102/TCP,9125/UDP,42422/TCP 16s
istio-pilot ClusterIP 100.68.195.148 <none> 15003/TCP,8080/TCP,9093/TCP,443/TCP 13s
NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE
istio-ca 1 1 1 1 12s
istio-ingress 1 1 1 0 12s
istio-mixer 1 1 1 0 16s
istio-pilot 1 1 1 0 13s
NAME DESIRED CURRENT READY AGE
istio-ca-cd9dfbdbb 1 1 1 12s
istio-ingress-84c7ddcb7f 1 1 0 12s
istio-mixer-67d9bd59cb 1 1 0 16s
istio-pilot-5dd75b8f7f 1 1 0 13s
$ kubectl apply -f <(istioctl kube-inject -f samples/bookinfo/kube/bookinfo.yaml)
service "details" created
deployment.extensions "details-v1" created
service "ratings" created
deployment.extensions "ratings-v1" created
service "reviews" created
deployment.extensions "reviews-v1" created
deployment.extensions "reviews-v2" created
deployment.extensions "reviews-v3" created
service "productpage" created
deployment.extensions "productpage-v1" created
ingress.extensions "gateway" created
$ ISTIO_INGRESS=$(kubectl get ingress gateway -o jsonpath="{.status.loadBalancer.ingress[0].*}")
$ echo http://$ISTIO_INGRESS/productpage # access this URL in a browser
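A quick way to confirm that Bookinfo is actually served end-to-end through the Istio ingress is to check the HTTP status code. A guarded sketch that degrades to an echo when `ISTIO_INGRESS` is unset:

```shell
PRODUCT_PAGE="/productpage"
if [ -n "${ISTIO_INGRESS:-}" ]; then
  # a 200 means the page is served through the ingress and the Envoy sidecars
  curl -s -o /dev/null -w "%{http_code}\n" "http://${ISTIO_INGRESS}${PRODUCT_PAGE}"
else
  echo "ISTIO_INGRESS not set; would curl http://<ingress>$PRODUCT_PAGE"
fi
```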