diff --git a/linkerd.io/content/2-edge/reference/cli/check.md b/linkerd.io/content/2-edge/reference/cli/check.md index 7cd61cd237..67a2486908 100644 --- a/linkerd.io/content/2-edge/reference/cli/check.md +++ b/linkerd.io/content/2-edge/reference/cli/check.md @@ -12,7 +12,7 @@ for a full list of all the possible checks, what they do and how to fix them. ## Example output ```bash -$ linkerd check +linkerd check kubernetes-api -------------- √ can initialize the client diff --git a/linkerd.io/content/2-edge/reference/iptables.md b/linkerd.io/content/2-edge/reference/iptables.md index 67a7ea89de..9b4d229a59 100644 --- a/linkerd.io/content/2-edge/reference/iptables.md +++ b/linkerd.io/content/2-edge/reference/iptables.md @@ -164,7 +164,7 @@ Alternatively, if you want to inspect the iptables rules created for a pod, you can retrieve them through the following command: ```bash -$ kubectl -n logs linkerd-init +kubectl -n logs linkerd-init # where is the name of the pod # you want to see the iptables rules for ``` diff --git a/linkerd.io/content/2-edge/tasks/configuring-dynamic-request-routing.md b/linkerd.io/content/2-edge/tasks/configuring-dynamic-request-routing.md index 004b50ded6..a44d12a1a5 100644 --- a/linkerd.io/content/2-edge/tasks/configuring-dynamic-request-routing.md +++ b/linkerd.io/content/2-edge/tasks/configuring-dynamic-request-routing.md @@ -67,7 +67,7 @@ Requests to `/echo` on port 9898 to the frontend pod will get forwarded the pod pointed by the Service `backend-a-podinfo`: ```bash -$ curl -sX POST localhost:9898/echo \ +curl -sX POST localhost:9898/echo \ | grep -o 'PODINFO_UI_MESSAGE=. backend' PODINFO_UI_MESSAGE=A backend @@ -132,7 +132,7 @@ the `backend-a-podinfo` Service. The previous requests should still reach `backend-a-podinfo` only: ```bash -$ curl -sX POST localhost:9898/echo \ +curl -sX POST localhost:9898/echo \ | grep -o 'PODINFO_UI_MESSAGE=. backend' PODINFO_UI_MESSAGE=A backend @@ -142,7 +142,7 @@ But if we add the `x-request-id: alternative` header, they get routed to `backend-b-podinfo`: ```bash -$ curl -sX POST \ +curl -sX POST \ -H 'x-request-id: alternative' \ localhost:9898/echo \ | grep -o 'PODINFO_UI_MESSAGE=. backend' diff --git a/linkerd.io/content/2-edge/tasks/configuring-per-route-policy.md b/linkerd.io/content/2-edge/tasks/configuring-per-route-policy.md index 2c495e1f20..aaa3a9b43d 100644 --- a/linkerd.io/content/2-edge/tasks/configuring-per-route-policy.md +++ b/linkerd.io/content/2-edge/tasks/configuring-per-route-policy.md @@ -30,7 +30,7 @@ haven't already done this. Inject and install the Books demo application: ```bash -$ kubectl create ns booksapp && \ +kubectl create ns booksapp && \ curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/booksapp.yml \ | linkerd inject - \ | kubectl -n booksapp apply -f - @@ -44,21 +44,21 @@ run in the `booksapp` namespace. 
Confirm that the Linkerd data plane was injected successfully: ```bash -$ linkerd check -n booksapp --proxy -o short +linkerd check -n booksapp --proxy -o short ``` You can take a quick look at all the components that were added to your cluster by running: ```bash -$ kubectl -n booksapp get all +kubectl -n booksapp get all ``` Once the rollout has completed successfully, you can access the app itself by port-forwarding `webapp` locally: ```bash -$ kubectl -n booksapp port-forward svc/webapp 7000 & +kubectl -n booksapp port-forward svc/webapp 7000 & ``` Open [http://localhost:7000/](http://localhost:7000/) in your browser to see the @@ -87,7 +87,7 @@ First, let's run the `linkerd viz authz` command to list the authorization resources that currently exist for the `authors` deployment: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 default default:all-unauthenticated default/all-unauthenticated 0.0rps 70.31% 8.1rps 1ms 43ms 49ms probe default:all-unauthenticated default/probe 0.0rps 100.00% 0.3rps 1ms 1ms 1ms @@ -124,7 +124,7 @@ Now that we've defined a [`Server`] for the authors `Deployment`, we can run the currently unauthorized: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 default authors-server 9.5rps 0.00% 0.0rps 0ms 0ms 0ms probe authors-server default/probe 0.0rps 100.00% 0.1rps 1ms 1ms 1ms @@ -312,7 +312,7 @@ network (0.0.0.0). Running `linkerd viz authz` again, we can now see that our new policies exist: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 authors-get-route authors-server authorizationpolicy/authors-get-policy 0.0rps 100.00% 0.1rps 2ms 2ms 2ms authors-probe-route authors-server authorizationpolicy/authors-probe-policy 0.0rps 100.00% 0.1rps 1ms 1ms 1ms @@ -383,7 +383,7 @@ requests, but we haven't _authorized_ requests to that route. 
Running the requests to `authors-modify-route`: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 authors-get-route authors-server authorizationpolicy/authors-get-policy - - - - - - authors-modify-route authors-server 9.7rps 0.00% 0.0rps 0ms 0ms 0ms @@ -442,7 +442,7 @@ Running the `linkerd viz authz` command one last time, we now see that all traffic is authorized: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 authors-get-route authors-server authorizationpolicy/authors-get-policy 0.0rps 100.00% 0.1rps 0ms 0ms 0ms authors-modify-route authors-server authorizationpolicy/authors-modify-policy 0.0rps 100.00% 0.0rps 0ms 0ms 0ms diff --git a/linkerd.io/content/2-edge/tasks/managing-egress-traffic.md b/linkerd.io/content/2-edge/tasks/managing-egress-traffic.md index a43eadb61a..a4a7155edd 100644 --- a/linkerd.io/content/2-edge/tasks/managing-egress-traffic.md +++ b/linkerd.io/content/2-edge/tasks/managing-egress-traffic.md @@ -70,7 +70,7 @@ Now SSH into the client container and start generating some external traffic: ```bash kubectl -n egress-test exec -it client -c client -- sh -$ while sleep 1; do curl -s http://httpbin.org/get ; done +while sleep 1; do curl -s http://httpbin.org/get ; done ``` In a separate shell, you can use the Linkerd diagnostics command to visualize @@ -235,7 +235,7 @@ Interestingly enough though, if we go back to our client shell and we try to initiate HTTPS traffic to the same service, it will not be allowed: ```bash -~ $ curl -v https://httpbin.org/get +curl -v https://httpbin.org/get curl: (35) TLS connect error: error:00000000:lib(0)::reason(0) ``` @@ -458,7 +458,7 @@ Now let's verify all works as expected: ```bash # plaintext traffic goes as expected to the /get path -$ curl http://httpbin.org/get +curl http://httpbin.org/get { "args": {}, "headers": { @@ -472,14 +472,14 @@ $ curl http://httpbin.org/get } # encrypted traffic can target all paths and hosts -$ curl https://httpbin.org/ip +curl https://httpbin.org/ip { "origin": "51.116.126.217" } # arbitrary unencrypted traffic goes to the internal service -$ curl http://google.com +curl http://google.com { "requestUID": "in:http-sid:terminus-grpc:-1-h1:80-190120723", "payload": "You cannot go there right now"} diff --git a/linkerd.io/content/2-edge/tasks/multicluster-using-statefulsets.md b/linkerd.io/content/2-edge/tasks/multicluster-using-statefulsets.md index 81969979a0..df55b3ee41 100644 --- a/linkerd.io/content/2-edge/tasks/multicluster-using-statefulsets.md +++ b/linkerd.io/content/2-edge/tasks/multicluster-using-statefulsets.md @@ -48,8 +48,8 @@ The first step is to clone the demo repository on your local machine. ```sh # clone example repository -$ git clone git@github.com:linkerd/l2d-k3d-statefulset.git -$ cd l2d-k3d-statefulset +git clone git@github.com:linkerd/l2d-k3d-statefulset.git +cd l2d-k3d-statefulset ``` The second step consists of creating two `k3d` clusters named `east` and `west`, @@ -60,10 +60,10 @@ everything. ```sh # create k3d clusters -$ ./create.sh +./create.sh # list the clusters -$ k3d cluster list +k3d cluster list NAME SERVERS AGENTS LOADBALANCER east 1/1 0/0 true west 1/1 0/0 true @@ -77,10 +77,10 @@ controllers and links are generated for both clusters. 
```sh # Install Linkerd and multicluster, output to check should be a success -$ ./install.sh +./install.sh # Next, link the two clusters together -$ ./link.sh +./link.sh ``` Perfect! If you've made it this far with no errors, then it's a good sign. In @@ -100,17 +100,17 @@ communication. First, we will deploy our pods and services: ```sh # deploy services and mesh namespaces -$ ./deploy.sh +./deploy.sh # verify both clusters # # verify east -$ kubectl --context=k3d-east get pods +kubectl --context=k3d-east get pods NAME READY STATUS RESTARTS AGE curl-56dc7d945d-96r6p 2/2 Running 0 7s # verify west has headless service -$ kubectl --context=k3d-west get services +kubectl --context=k3d-west get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.43.0.1 443/TCP 10m nginx-svc ClusterIP None 80/TCP 8s @@ -118,7 +118,7 @@ nginx-svc ClusterIP None 80/TCP 8s # verify west has statefulset # # this may take a while to come up -$ kubectl --context=k3d-west get pods +kubectl --context=k3d-west get pods NAME READY STATUS RESTARTS AGE nginx-set-0 2/2 Running 0 53s nginx-set-1 2/2 Running 0 43s @@ -129,7 +129,7 @@ Before we go further, let's have a look at the endpoints object for the `nginx-svc`: ```sh -$ kubectl --context=k3d-west get endpoints nginx-svc -o yaml +kubectl --context=k3d-west get endpoints nginx-svc -o yaml ... subsets: - addresses: @@ -169,15 +169,15 @@ would get an answer back. We can test this out by applying the curl pod to the `west` cluster: ```sh -$ kubectl --context=k3d-west apply -f east/curl.yml -$ kubectl --context=k3d-west get pods +kubectl --context=k3d-west apply -f east/curl.yml +kubectl --context=k3d-west get pods NAME READY STATUS RESTARTS AGE nginx-set-0 2/2 Running 0 5m8s nginx-set-1 2/2 Running 0 4m58s nginx-set-2 2/2 Running 0 4m51s curl-56dc7d945d-s4n8j 0/2 PodInitializing 0 4s -$ kubectl --context=k3d-west exec -it curl-56dc7d945d-s4n8j -c curl -- sh +kubectl --context=k3d-west exec -it curl-56dc7d945d-s4n8j -c curl -- sh /$ # prompt for curl pod ``` @@ -185,7 +185,7 @@ If we now curl one of these instances, we will get back a response. ```sh # exec'd on the pod -/ $ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local +/ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local " @@ -217,10 +217,10 @@ Now, let's do the same, but this time from the `east` cluster. We will first export the service. ```sh -$ kubectl --context=k3d-west label service nginx-svc mirror.linkerd.io/exported="true" +kubectl --context=k3d-west label service nginx-svc mirror.linkerd.io/exported="true" service/nginx-svc labeled -$ kubectl --context=k3d-east get services +kubectl --context=k3d-east get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.43.0.1 443/TCP 20h nginx-svc-west ClusterIP None 80/TCP 29s @@ -234,7 +234,7 @@ endpoints for `nginx-svc-west` will have the same hostnames, but each hostname will point to one of the services we see above: ```sh -$ kubectl --context=k3d-east get endpoints nginx-svc-k3d-west -o yaml +kubectl --context=k3d-east get endpoints nginx-svc-k3d-west -o yaml subsets: - addresses: - hostname: nginx-set-0 @@ -250,17 +250,17 @@ cluster (`west`), will be mirrored as a clusterIP service. We will see in a second why this matters. 
```sh -$ kubectl --context=k3d-east get pods +kubectl --context=k3d-east get pods NAME READY STATUS RESTARTS AGE curl-56dc7d945d-96r6p 2/2 Running 0 23m # exec and curl -$ kubectl --context=k3d-east exec curl-56dc7d945d-96r6p -it -c curl -- sh +kubectl --context=k3d-east exec curl-56dc7d945d-96r6p -it -c curl -- sh # we want to curl the same hostname we see in the endpoints object above. # however, the service and cluster domain will now be different, since we # are in a different cluster. # -/ $ curl nginx-set-0.nginx-svc-k3d-west.default.svc.east.cluster.local +/ curl nginx-set-0.nginx-svc-k3d-west.default.svc.east.cluster.local @@ -328,8 +328,8 @@ validation. To clean-up, you can remove both clusters entirely using the k3d CLI: ```sh -$ k3d cluster delete east +k3d cluster delete east cluster east deleted -$ k3d cluster delete west +k3d cluster delete west cluster west deleted ``` diff --git a/linkerd.io/content/2-edge/tasks/multicluster.md b/linkerd.io/content/2-edge/tasks/multicluster.md index 3a80b3f3ed..2779b7616a 100644 --- a/linkerd.io/content/2-edge/tasks/multicluster.md +++ b/linkerd.io/content/2-edge/tasks/multicluster.md @@ -506,9 +506,9 @@ To cleanup the multicluster control plane, you can run: ```bash # Delete the link CR -$ kubectl --context=west -n linkerd-multicluster delete links east +kubectl --context=west -n linkerd-multicluster delete links east # Delete the test namespace and uninstall multicluster -$ for ctx in west east; do \ +for ctx in west east; do \ kubectl --context=${ctx} delete ns test; \ linkerd --context=${ctx} multicluster uninstall | kubectl --context=${ctx} delete -f - ; \ done diff --git a/linkerd.io/content/2-edge/tasks/restricting-access.md b/linkerd.io/content/2-edge/tasks/restricting-access.md index 5654518600..c9850725f7 100644 --- a/linkerd.io/content/2-edge/tasks/restricting-access.md +++ b/linkerd.io/content/2-edge/tasks/restricting-access.md @@ -21,9 +21,9 @@ haven't already done this. Inject and install the Emojivoto application: ```bash -$ linkerd inject https://run.linkerd.io/emojivoto.yml | kubectl apply -f - +linkerd inject https://run.linkerd.io/emojivoto.yml | kubectl apply -f - ... -$ linkerd check -n emojivoto --proxy -o short +linkerd check -n emojivoto --proxy -o short ... ``` diff --git a/linkerd.io/content/2-edge/tasks/securing-linkerd-tap.md b/linkerd.io/content/2-edge/tasks/securing-linkerd-tap.md index 8a802c890c..639f81692f 100644 --- a/linkerd.io/content/2-edge/tasks/securing-linkerd-tap.md +++ b/linkerd.io/content/2-edge/tasks/securing-linkerd-tap.md @@ -60,7 +60,7 @@ kubectl auth can-i watch deployments.tap.linkerd.io -n emojivoto --as $(whoami) You can also use the Linkerd CLI's `--as` flag to confirm: ```bash -$ linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) +linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) Cannot connect to Linkerd Viz: namespaces is forbidden: User "XXXX" cannot list resource "namespaces" in API group "" at the cluster scope Validate the install with: linkerd viz check ... 
@@ -77,7 +77,7 @@ To enable tap access to all resources in all namespaces, you may bind your user to the `linkerd-linkerd-tap-admin` ClusterRole, installed by default: ```bash -$ kubectl describe clusterroles/linkerd-linkerd-viz-tap-admin +kubectl describe clusterroles/linkerd-linkerd-viz-tap-admin Name: linkerd-linkerd-viz-tap-admin Labels: component=tap linkerd.io/extension=viz @@ -109,7 +109,7 @@ kubectl create clusterrolebinding \ You can verify you now have tap access with: ```bash -$ linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) +linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) req id=3:0 proxy=in src=10.244.0.1:37392 dst=10.244.0.13:9996 tls=not_provided_by_remote :method=GET :authority=10.244.0.13:9996 :path=/ping ... ``` @@ -143,14 +143,14 @@ Because GCloud provides this additional level of access, there are cases where not. To validate this, check whether your GCloud user has Tap access: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces yes ``` And then validate whether your RBAC user has Tap access: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as $(gcloud config get-value account) +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as $(gcloud config get-value account) no - no RBAC policy matched ``` @@ -187,14 +187,14 @@ privileges necessary to tap resources. To confirm: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web yes ``` This access is enabled via a `linkerd-linkerd-viz-web-admin` ClusterRoleBinding: ```bash -$ kubectl describe clusterrolebindings/linkerd-linkerd-viz-web-admin +kubectl describe clusterrolebindings/linkerd-linkerd-viz-web-admin Name: linkerd-linkerd-viz-web-admin Labels: component=web linkerd.io/extensions=viz @@ -227,6 +227,6 @@ kubectl delete clusterrolebindings/linkerd-linkerd-viz-web-admin To confirm: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web no ``` diff --git a/linkerd.io/content/2-edge/tasks/troubleshooting.md b/linkerd.io/content/2-edge/tasks/troubleshooting.md index baaa71e206..c65e8fb63c 100644 --- a/linkerd.io/content/2-edge/tasks/troubleshooting.md +++ b/linkerd.io/content/2-edge/tasks/troubleshooting.md @@ -230,7 +230,7 @@ Example failure: Ensure the Linkerd ClusterRoles exist: ```bash -$ kubectl get clusterroles | grep linkerd +kubectl get clusterroles | grep linkerd linkerd-linkerd-destination 9d linkerd-linkerd-identity 9d linkerd-linkerd-proxy-injector 9d @@ -240,7 +240,7 @@ linkerd-policy 9d Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create clusterroles +kubectl auth can-i create clusterroles yes ``` @@ -257,7 +257,7 @@ Example failure: Ensure the Linkerd ClusterRoleBindings exist: ```bash -$ kubectl get clusterrolebindings | grep linkerd +kubectl get clusterrolebindings | grep linkerd linkerd-linkerd-destination 9d linkerd-linkerd-identity 9d linkerd-linkerd-proxy-injector 9d @@ -267,7 +267,7 @@ linkerd-destination-policy 9d Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create clusterrolebindings +kubectl auth can-i create 
clusterrolebindings yes ``` @@ -284,7 +284,7 @@ Example failure: Ensure the Linkerd ServiceAccounts exist: ```bash -$ kubectl -n linkerd get serviceaccounts +kubectl -n linkerd get serviceaccounts NAME SECRETS AGE default 1 14m linkerd-destination 1 14m @@ -297,7 +297,7 @@ Also ensure you have permission to create ServiceAccounts in the Linkerd namespace: ```bash -$ kubectl -n linkerd auth can-i create serviceaccounts +kubectl -n linkerd auth can-i create serviceaccounts yes ``` @@ -314,7 +314,7 @@ Example failure: Ensure the Linkerd CRD exists: ```bash -$ kubectl get customresourcedefinitions +kubectl get customresourcedefinitions NAME CREATED AT serviceprofiles.linkerd.io 2019-04-25T21:47:31Z ``` @@ -322,7 +322,7 @@ serviceprofiles.linkerd.io 2019-04-25T21:47:31Z Also ensure you have permission to create CRDs: ```bash -$ kubectl auth can-i create customresourcedefinitions +kubectl auth can-i create customresourcedefinitions yes ``` @@ -339,14 +339,14 @@ Example failure: Ensure the Linkerd MutatingWebhookConfigurations exists: ```bash -$ kubectl get mutatingwebhookconfigurations | grep linkerd +kubectl get mutatingwebhookconfigurations | grep linkerd linkerd-proxy-injector-webhook-config 2019-07-01T13:13:26Z ``` Also ensure you have permission to create MutatingWebhookConfigurations: ```bash -$ kubectl auth can-i create mutatingwebhookconfigurations +kubectl auth can-i create mutatingwebhookconfigurations yes ``` @@ -363,14 +363,14 @@ Example failure: Ensure the Linkerd ValidatingWebhookConfiguration exists: ```bash -$ kubectl get validatingwebhookconfigurations | grep linkerd +kubectl get validatingwebhookconfigurations | grep linkerd linkerd-sp-validator-webhook-config 2019-07-01T13:13:26Z ``` Also ensure you have permission to create ValidatingWebhookConfigurations: ```bash -$ kubectl auth can-i create validatingwebhookconfigurations +kubectl auth can-i create validatingwebhookconfigurations yes ``` @@ -418,7 +418,7 @@ Example failure: Ensure the Linkerd ConfigMap exists: ```bash -$ kubectl -n linkerd get configmap/linkerd-config +kubectl -n linkerd get configmap/linkerd-config NAME DATA AGE linkerd-config 3 61m ``` @@ -426,7 +426,7 @@ linkerd-config 3 61m Also ensure you have permission to create ConfigMaps: ```bash -$ kubectl -n linkerd auth can-i create configmap +kubectl -n linkerd auth can-i create configmap yes ``` @@ -780,7 +780,7 @@ Example failure: Verify the state of the control plane pods with: ```bash -$ kubectl -n linkerd get po +kubectl -n linkerd get po NAME READY STATUS RESTARTS AGE linkerd-destination-5fd7b5d466-szgqm 2/2 Running 1 12m linkerd-identity-54df78c479-hbh5m 2/2 Running 0 12m @@ -862,7 +862,7 @@ Ensure you can connect to the Linkerd version check endpoint from the environment the `linkerd` cli is running: ```bash -$ curl "https://versioncheck.linkerd.io/version.json?version=edge-19.1.2&uuid=test-uuid&source=cli" +curl "https://versioncheck.linkerd.io/version.json?version=edge-19.1.2&uuid=test-uuid&source=cli" {"stable":"stable-2.1.0","edge":"edge-19.1.2"} ``` @@ -961,7 +961,7 @@ normally. Example failure: ```bash -$ linkerd check --proxy --namespace foo +linkerd check --proxy --namespace foo ... 
× data plane namespace exists The "foo" namespace does not exist @@ -1147,7 +1147,7 @@ Example error: Ensure that the linkerd-cni-config ConfigMap exists in the CNI namespace: ```bash -$ kubectl get cm linkerd-cni-config -n linkerd-cni +kubectl get cm linkerd-cni-config -n linkerd-cni NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES linkerd-linkerd-cni-cni false RunAsAny RunAsAny RunAsAny RunAsAny false hostPath,secret ``` @@ -1155,7 +1155,7 @@ linkerd-linkerd-cni-cni false RunAsAny RunAsAny RunAsAny RunAs Also ensure you have permission to create ConfigMaps: ```bash -$ kubectl auth can-i create ConfigMaps +kubectl auth can-i create ConfigMaps yes ``` @@ -1172,7 +1172,7 @@ Example error: Ensure that the cluster role exists: ```bash -$ kubectl get clusterrole linkerd-cni +kubectl get clusterrole linkerd-cni NAME AGE linkerd-cni 54m ``` @@ -1180,7 +1180,7 @@ linkerd-cni 54m Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create ClusterRoles +kubectl auth can-i create ClusterRoles yes ``` @@ -1197,7 +1197,7 @@ Example error: Ensure that the cluster role binding exists: ```bash -$ kubectl get clusterrolebinding linkerd-cni +kubectl get clusterrolebinding linkerd-cni NAME AGE linkerd-cni 54m ``` @@ -1205,7 +1205,7 @@ linkerd-cni 54m Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create ClusterRoleBindings +kubectl auth can-i create ClusterRoleBindings yes ``` @@ -1222,7 +1222,7 @@ Example error: Ensure that the CNI service account exists in the CNI namespace: ```bash -$ kubectl get ServiceAccount linkerd-cni -n linkerd-cni +kubectl get ServiceAccount linkerd-cni -n linkerd-cni NAME SECRETS AGE linkerd-cni 1 45m ``` @@ -1230,7 +1230,7 @@ linkerd-cni 1 45m Also ensure you have permission to create ServiceAccount: ```bash -$ kubectl auth can-i create ServiceAccounts -n linkerd-cni +kubectl auth can-i create ServiceAccounts -n linkerd-cni yes ``` @@ -1247,7 +1247,7 @@ Example error: Ensure that the CNI daemonset exists in the CNI namespace: ```bash -$ kubectl get ds -n linkerd-cni +kubectl get ds -n linkerd-cni NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE linkerd-cni 1 1 1 1 1 beta.kubernetes.io/os=linux 14m ``` @@ -1255,7 +1255,7 @@ linkerd-cni 1 1 1 1 1 beta.kubernet Also ensure you have permission to create DaemonSets: ```bash -$ kubectl auth can-i create DaemonSets -n linkerd-cni +kubectl auth can-i create DaemonSets -n linkerd-cni yes ``` @@ -1272,7 +1272,7 @@ Example failure: Ensure that all the CNI pods are running: ```bash -$ kubectl get po -n linkerd-cn +kubectl get po -n linkerd-cni NAME READY STATUS RESTARTS AGE linkerd-cni-rzp2q 1/1 Running 0 9m20s linkerd-cni-mf564 1/1 Running 0 9m22s @@ -1282,7 +1282,7 @@ linkerd-cni-p5670 1/1 Running 0 9m25s Ensure that all pods have finished the deployment of the CNI config and binary: ```bash -$ kubectl logs linkerd-cni-rzp2q -n linkerd-cni +kubectl logs linkerd-cni-rzp2q -n linkerd-cni Wrote linkerd CNI binaries to /host/opt/cni/bin Created CNI config /host/etc/cni/net.d/10-kindnet.conflist Done configuring CNI. Sleep=true @@ -1310,7 +1310,7 @@ Make sure multicluster extension is correctly installed and that the `links.multicluster.linkerd.io` CRD is present. ```bash -$ kubectl get crds | grep multicluster +kubectl get crds | grep multicluster NAME CREATED AT links.multicluster.linkerd.io 2021-03-10T09:58:10Z ``` @@ -1400,7 +1400,7 @@ the rules section. 
Expected rules for `linkerd-service-mirror-access-local-resources` cluster role: ```bash -$ kubectl --context=local get clusterrole linkerd-service-mirror-access-local-resources -o yaml +kubectl --context=local get clusterrole linkerd-service-mirror-access-local-resources -o yaml kind: ClusterRole metadata: labels: @@ -1433,7 +1433,7 @@ rules: Expected rules for `linkerd-service-mirror-read-remote-creds` role: ```bash -$ kubectl --context=local get role linkerd-service-mirror-read-remote-creds -n linkerd-multicluster -o yaml +kubectl --context=local get role linkerd-service-mirror-read-remote-creds -n linkerd-multicluster -o yaml kind: Role metadata: labels: @@ -1466,7 +1466,7 @@ everything to start up. If this is a permanent error, you'll want to validate the state of the controller pod with: ```bash -$ kubectl --all-namespaces get po --selector linkerd.io/control-plane-component=linkerd-service-mirror +kubectl --all-namespaces get po --selector linkerd.io/control-plane-component=linkerd-service-mirror NAME READY STATUS RESTARTS AGE linkerd-service-mirror-7bb8ff5967-zg265 2/2 Running 0 50m ``` @@ -1612,7 +1612,7 @@ Example failure: Ensure the linkerd-viz extension ClusterRoles exist: ```bash -$ kubectl get clusterroles | grep linkerd-viz +kubectl get clusterroles | grep linkerd-viz linkerd-linkerd-viz-metrics-api 2021-01-26T18:02:17Z linkerd-linkerd-viz-prometheus 2021-01-26T18:02:17Z linkerd-linkerd-viz-tap 2021-01-26T18:02:17Z @@ -1623,7 +1623,7 @@ linkerd-linkerd-viz-web-check 2021-01-2 Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create clusterroles +kubectl auth can-i create clusterroles yes ``` @@ -1640,7 +1640,7 @@ Example failure: Ensure the linkerd-viz extension ClusterRoleBindings exist: ```bash -$ kubectl get clusterrolebindings | grep linkerd-viz +kubectl get clusterrolebindings | grep linkerd-viz linkerd-linkerd-viz-metrics-api ClusterRole/linkerd-linkerd-viz-metrics-api 18h linkerd-linkerd-viz-prometheus ClusterRole/linkerd-linkerd-viz-prometheus 18h linkerd-linkerd-viz-tap ClusterRole/linkerd-linkerd-viz-tap 18h @@ -1652,7 +1652,7 @@ linkerd-linkerd-viz-web-check ClusterRole/linkerd-linke Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create clusterrolebindings +kubectl auth can-i create clusterrolebindings yes ``` @@ -1741,7 +1741,7 @@ requirements in the cluster: Ensure all the linkerd-viz pods are injected ```bash -$ kubectl -n linkerd-viz get pods +kubectl -n linkerd-viz get pods NAME READY STATUS RESTARTS AGE grafana-68cddd7cc8-nrv4h 2/2 Running 3 18h metrics-api-77f684f7c7-hnw8r 2/2 Running 2 18h @@ -1765,7 +1765,7 @@ Make sure that the `proxy-injector` is working correctly by running Ensure all the linkerd-viz pods are running with 2/2 ```bash -$ kubectl -n linkerd-viz get pods +kubectl -n linkerd-viz get pods NAME READY STATUS RESTARTS AGE grafana-68cddd7cc8-nrv4h 2/2 Running 3 18h metrics-api-77f684f7c7-hnw8r 2/2 Running 2 18h @@ -1936,7 +1936,7 @@ Ensure you can connect to the Linkerd Buoyant version check endpoint from the environment the `linkerd` cli is running: ```bash -$ curl https://buoyant.cloud/version.json +curl https://buoyant.cloud/version.json {"linkerd-buoyant":"v0.4.4"} ``` @@ -2001,7 +2001,7 @@ linkerd-buoyant install | kubectl apply -f - Ensure that the cluster role exists: ```bash -$ kubectl get clusterrole buoyant-cloud-agent +kubectl get clusterrole buoyant-cloud-agent NAME CREATED AT buoyant-cloud-agent 2020-11-13T00:59:50Z ``` @@ -2009,7 +2009,7 @@ 
buoyant-cloud-agent 2020-11-13T00:59:50Z Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create ClusterRoles +kubectl auth can-i create ClusterRoles yes ``` @@ -2024,7 +2024,7 @@ yes Ensure that the cluster role binding exists: ```bash -$ kubectl get clusterrolebinding buoyant-cloud-agent +kubectl get clusterrolebinding buoyant-cloud-agent NAME ROLE AGE buoyant-cloud-agent ClusterRole/buoyant-cloud-agent 301d ``` @@ -2032,7 +2032,7 @@ buoyant-cloud-agent ClusterRole/buoyant-cloud-agent 301d Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create ClusterRoleBindings +kubectl auth can-i create ClusterRoleBindings yes ``` @@ -2047,7 +2047,7 @@ yes Ensure that the service account exists: ```bash -$ kubectl -n buoyant-cloud get serviceaccount buoyant-cloud-agent +kubectl -n buoyant-cloud get serviceaccount buoyant-cloud-agent NAME SECRETS AGE buoyant-cloud-agent 1 301d ``` @@ -2055,7 +2055,7 @@ buoyant-cloud-agent 1 301d Also ensure you have permission to create ServiceAccounts: ```bash -$ kubectl -n buoyant-cloud auth can-i create ServiceAccount +kubectl -n buoyant-cloud auth can-i create ServiceAccount yes ``` @@ -2070,7 +2070,7 @@ yes Ensure that the secret exists: ```bash -$ kubectl -n buoyant-cloud get secret buoyant-cloud-id +kubectl -n buoyant-cloud get secret buoyant-cloud-id NAME TYPE DATA AGE buoyant-cloud-id Opaque 4 301d ``` @@ -2078,7 +2078,7 @@ buoyant-cloud-id Opaque 4 301d Also ensure you have permission to create ServiceAccounts: ```bash -$ kubectl -n buoyant-cloud auth can-i create ServiceAccount +kubectl -n buoyant-cloud auth can-i create ServiceAccount yes ``` @@ -2116,7 +2116,7 @@ everything to start up. If this is a permanent error, you'll want to validate the state of the `buoyant-cloud-agent` Deployment with: ```bash -$ kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-agent +kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-agent NAME READY STATUS RESTARTS AGE buoyant-cloud-agent-6b8c6888d7-htr7d 2/2 Running 0 156m ``` @@ -2139,7 +2139,7 @@ Ensure the `buoyant-cloud-agent` pod is injected, the `READY` column should show `2/2`: ```bash -$ kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-agent +kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-agent NAME READY STATUS RESTARTS AGE buoyant-cloud-agent-6b8c6888d7-htr7d 2/2 Running 0 161m ``` @@ -2158,7 +2158,7 @@ Make sure that the `proxy-injector` is working correctly by running Check the version with: ```bash -$ linkerd-buoyant version +linkerd-buoyant version CLI version: v0.4.4 Agent version: v0.4.4 ``` @@ -2217,7 +2217,7 @@ everything to start up. 
If this is a permanent error, you'll want to validate the state of the `buoyant-cloud-metrics` DaemonSet with: ```bash -$ kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-metrics +kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-metrics NAME READY STATUS RESTARTS AGE buoyant-cloud-metrics-kt9mv 2/2 Running 0 163m buoyant-cloud-metrics-q8jhj 2/2 Running 0 163m @@ -2243,7 +2243,7 @@ Ensure the `buoyant-cloud-metrics` pods are injected, the `READY` column should show `2/2`: ```bash -$ kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-metrics +kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-metrics NAME READY STATUS RESTARTS AGE buoyant-cloud-metrics-kt9mv 2/2 Running 0 166m buoyant-cloud-metrics-q8jhj 2/2 Running 0 166m @@ -2265,7 +2265,7 @@ Make sure that the `proxy-injector` is working correctly by running Check the version with: ```bash -$ kubectl -n buoyant-cloud get daemonset/buoyant-cloud-metrics -o jsonpath='{.metadata.labels}' +kubectl -n buoyant-cloud get daemonset/buoyant-cloud-metrics -o jsonpath='{.metadata.labels}' {"app.kubernetes.io/name":"metrics","app.kubernetes.io/part-of":"buoyant-cloud","app.kubernetes.io/version":"v0.4.4"} ``` diff --git a/linkerd.io/content/2.10/reference/cli/check.md b/linkerd.io/content/2.10/reference/cli/check.md index 312891f8ef..578e3722d4 100644 --- a/linkerd.io/content/2.10/reference/cli/check.md +++ b/linkerd.io/content/2.10/reference/cli/check.md @@ -12,7 +12,7 @@ for a full list of all the possible checks, what they do and how to fix them. ## Example output ```bash -$ linkerd check +linkerd check kubernetes-api -------------- √ can initialize the client diff --git a/linkerd.io/content/2.10/tasks/getting-per-route-metrics.md b/linkerd.io/content/2.10/tasks/getting-per-route-metrics.md index 424ede9217..7d6120773c 100644 --- a/linkerd.io/content/2.10/tasks/getting-per-route-metrics.md +++ b/linkerd.io/content/2.10/tasks/getting-per-route-metrics.md @@ -14,7 +14,7 @@ For a tutorial that shows this functionality off, check out the You can view per-route metrics in the CLI by running `linkerd viz routes`: ```bash -$ linkerd viz routes svc/webapp +linkerd viz routes svc/webapp ROUTE SERVICE SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 GET / webapp 100.00% 0.6rps 25ms 30ms 30ms GET /authors/{id} webapp 100.00% 0.6rps 22ms 29ms 30ms @@ -34,7 +34,7 @@ specified in your service profile will end up there. 
It is also possible to look the metrics up by other resource types, such as: ```bash -$ linkerd viz routes deploy/webapp +linkerd viz routes deploy/webapp ROUTE SERVICE SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 [DEFAULT] kubernetes 0.00% 0.0rps 0ms 0ms 0ms GET / webapp 100.00% 0.5rps 27ms 38ms 40ms @@ -53,7 +53,7 @@ Then, it is possible to filter all the way down to requests going from a specific resource to other services: ```bash -$ linkerd viz routes deploy/webapp --to svc/books +linkerd viz routes deploy/webapp --to svc/books ROUTE SERVICE SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 DELETE /books/{id}.json books 100.00% 0.5rps 18ms 29ms 30ms GET /books.json books 100.00% 1.1rps 7ms 12ms 18ms diff --git a/linkerd.io/content/2.10/tasks/securing-your-cluster.md b/linkerd.io/content/2.10/tasks/securing-your-cluster.md index 94d8f7dcc2..6c0efb9462 100644 --- a/linkerd.io/content/2.10/tasks/securing-your-cluster.md +++ b/linkerd.io/content/2.10/tasks/securing-your-cluster.md @@ -54,7 +54,7 @@ kubectl auth can-i watch deployments.tap.linkerd.io -n emojivoto --as $(whoami) You can also use the Linkerd CLI's `--as` flag to confirm: ```bash -$ linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) +linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) Cannot connect to Linkerd Viz: namespaces is forbidden: User "XXXX" cannot list resource "namespaces" in API group "" at the cluster scope Validate the install with: linkerd viz check ... @@ -71,7 +71,7 @@ To enable tap access to all resources in all namespaces, you may bind your user to the `linkerd-linkerd-tap-admin` ClusterRole, installed by default: ```bash -$ kubectl describe clusterroles/linkerd-linkerd-viz-tap-admin +kubectl describe clusterroles/linkerd-linkerd-viz-tap-admin Name: linkerd-linkerd-viz-tap-admin Labels: component=tap linkerd.io/extension=viz @@ -103,7 +103,7 @@ kubectl create clusterrolebinding \ You can verify you now have tap access with: ```bash -$ linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) +linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) req id=3:0 proxy=in src=10.244.0.1:37392 dst=10.244.0.13:9996 tls=not_provided_by_remote :method=GET :authority=10.244.0.13:9996 :path=/ping ... ``` @@ -137,14 +137,14 @@ Because GCloud provides this additional level of access, there are cases where not. To validate this, check whether your GCloud user has Tap access: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces yes ``` And then validate whether your RBAC user has Tap access: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as $(gcloud config get-value account) +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as $(gcloud config get-value account) no - no RBAC policy matched ``` @@ -181,14 +181,14 @@ privileges necessary to tap resources. 
To confirm: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web yes ``` This access is enabled via a `linkerd-linkerd-viz-web-admin` ClusterRoleBinding: ```bash -$ kubectl describe clusterrolebindings/linkerd-linkerd-viz-web-admin +kubectl describe clusterrolebindings/linkerd-linkerd-viz-web-admin Name: linkerd-linkerd-viz-web-admin Labels: component=web linkerd.io/extensions=viz @@ -221,6 +221,6 @@ kubectl delete clusterrolebindings/linkerd-linkerd-viz-web-admin To confirm: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web no ``` diff --git a/linkerd.io/content/2.10/tasks/troubleshooting.md b/linkerd.io/content/2.10/tasks/troubleshooting.md index ee27242c04..d30f477a4d 100644 --- a/linkerd.io/content/2.10/tasks/troubleshooting.md +++ b/linkerd.io/content/2.10/tasks/troubleshooting.md @@ -314,7 +314,7 @@ Example failure: Ensure the Linkerd ClusterRoles exist: ```bash -$ kubectl get clusterroles | grep linkerd +kubectl get clusterroles | grep linkerd linkerd-linkerd-destination 9d linkerd-linkerd-identity 9d linkerd-linkerd-proxy-injector 9d @@ -323,7 +323,7 @@ linkerd-linkerd-proxy-injector 9d Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create clusterroles +kubectl auth can-i create clusterroles yes ``` @@ -340,7 +340,7 @@ Example failure: Ensure the Linkerd ClusterRoleBindings exist: ```bash -$ kubectl get clusterrolebindings | grep linkerd +kubectl get clusterrolebindings | grep linkerd linkerd-linkerd-destination 9d linkerd-linkerd-identity 9d linkerd-linkerd-proxy-injector 9d @@ -349,7 +349,7 @@ linkerd-linkerd-proxy-injector 9d Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create clusterrolebindings +kubectl auth can-i create clusterrolebindings yes ``` @@ -366,7 +366,7 @@ Example failure: Ensure the Linkerd ServiceAccounts exist: ```bash -$ kubectl -n linkerd get serviceaccounts +kubectl -n linkerd get serviceaccounts NAME SECRETS AGE default 1 14m linkerd-destination 1 14m @@ -379,7 +379,7 @@ Also ensure you have permission to create ServiceAccounts in the Linkerd namespace: ```bash -$ kubectl -n linkerd auth can-i create serviceaccounts +kubectl -n linkerd auth can-i create serviceaccounts yes ``` @@ -396,7 +396,7 @@ Example failure: Ensure the Linkerd CRD exists: ```bash -$ kubectl get customresourcedefinitions +kubectl get customresourcedefinitions NAME CREATED AT serviceprofiles.linkerd.io 2019-04-25T21:47:31Z ``` @@ -404,7 +404,7 @@ serviceprofiles.linkerd.io 2019-04-25T21:47:31Z Also ensure you have permission to create CRDs: ```bash -$ kubectl auth can-i create customresourcedefinitions +kubectl auth can-i create customresourcedefinitions yes ``` @@ -421,14 +421,14 @@ Example failure: Ensure the Linkerd MutatingWebhookConfigurations exists: ```bash -$ kubectl get mutatingwebhookconfigurations | grep linkerd +kubectl get mutatingwebhookconfigurations | grep linkerd linkerd-proxy-injector-webhook-config 2019-07-01T13:13:26Z ``` Also ensure you have permission to create MutatingWebhookConfigurations: ```bash -$ kubectl auth can-i create mutatingwebhookconfigurations +kubectl auth can-i create mutatingwebhookconfigurations yes ``` @@ -445,14 +445,14 @@ 
Example failure: Ensure the Linkerd ValidatingWebhookConfiguration exists: ```bash -$ kubectl get validatingwebhookconfigurations | grep linkerd +kubectl get validatingwebhookconfigurations | grep linkerd linkerd-sp-validator-webhook-config 2019-07-01T13:13:26Z ``` Also ensure you have permission to create ValidatingWebhookConfigurations: ```bash -$ kubectl auth can-i create validatingwebhookconfigurations +kubectl auth can-i create validatingwebhookconfigurations yes ``` @@ -469,14 +469,14 @@ Example failure: Ensure the Linkerd PodSecurityPolicy exists: ```bash -$ kubectl get podsecuritypolicies | grep linkerd +kubectl get podsecuritypolicies | grep linkerd linkerd-linkerd-control-plane false NET_ADMIN,NET_RAW RunAsAny RunAsAny MustRunAs MustRunAs true configMap,emptyDir,secret,projected,downwardAPI,persistentVolumeClaim ``` Also ensure you have permission to create PodSecurityPolicies: ```bash -$ kubectl auth can-i create podsecuritypolicies +kubectl auth can-i create podsecuritypolicies yes ``` @@ -495,7 +495,7 @@ Example failure: Ensure the Linkerd ConfigMap exists: ```bash -$ kubectl -n linkerd get configmap/linkerd-config +kubectl -n linkerd get configmap/linkerd-config NAME DATA AGE linkerd-config 3 61m ``` @@ -503,7 +503,7 @@ linkerd-config 3 61m Also ensure you have permission to create ConfigMaps: ```bash -$ kubectl -n linkerd auth can-i create configmap +kubectl -n linkerd auth can-i create configmap yes ``` @@ -820,7 +820,7 @@ Example failure: Verify the state of the control plane pods with: ```bash -$ kubectl -n linkerd get po +kubectl -n linkerd get po NAME READY STATUS RESTARTS AGE linkerd-destination-5fd7b5d466-szgqm 2/2 Running 1 12m linkerd-identity-54df78c479-hbh5m 2/2 Running 0 12m @@ -883,7 +883,7 @@ Ensure you can connect to the Linkerd version check endpoint from the environment the `linkerd` cli is running: ```bash -$ curl "https://versioncheck.linkerd.io/version.json?version=edge-19.1.2&uuid=test-uuid&source=cli" +curl "https://versioncheck.linkerd.io/version.json?version=edge-19.1.2&uuid=test-uuid&source=cli" {"stable":"stable-2.1.0","edge":"edge-19.1.2"} ``` @@ -922,7 +922,7 @@ normally. Example failure: ```bash -$ linkerd check --proxy --namespace foo +linkerd check --proxy --namespace foo ... 
× data plane namespace exists The "foo" namespace does not exist @@ -1051,7 +1051,7 @@ Ensure the kube-system namespace has the `config.linkerd.io/admission-webhooks:disabled` label: ```bash -$ kubectl get namespace kube-system -oyaml +kubectl get namespace kube-system -oyaml kind: Namespace apiVersion: v1 metadata: @@ -1124,7 +1124,7 @@ Example error: Ensure that the linkerd-cni-config ConfigMap exists in the CNI namespace: ```bash -$ kubectl get cm linkerd-cni-config -n linkerd-cni +kubectl get cm linkerd-cni-config -n linkerd-cni NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES linkerd-linkerd-cni-cni false RunAsAny RunAsAny RunAsAny RunAsAny false hostPath,secret ``` @@ -1132,7 +1132,7 @@ linkerd-linkerd-cni-cni false RunAsAny RunAsAny RunAsAny RunAs Also ensure you have permission to create ConfigMaps: ```bash -$ kubectl auth can-i create ConfigMaps +kubectl auth can-i create ConfigMaps yes ``` @@ -1149,7 +1149,7 @@ Example error: Ensure that the pod security policy exists: ```bash -$ kubectl get psp linkerd-linkerd-cni-cni +kubectl get psp linkerd-linkerd-cni-cni NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES linkerd-linkerd-cni-cni false RunAsAny RunAsAny RunAsAny RunAsAny false hostPath,secret ``` @@ -1157,7 +1157,7 @@ linkerd-linkerd-cni-cni false RunAsAny RunAsAny RunAsAny RunAs Also ensure you have permission to create PodSecurityPolicies: ```bash -$ kubectl auth can-i create PodSecurityPolicies +kubectl auth can-i create PodSecurityPolicies yes ``` @@ -1174,7 +1174,7 @@ Example error: Ensure that the cluster role exists: ```bash -$ kubectl get clusterrole linkerd-cni +kubectl get clusterrole linkerd-cni NAME AGE linkerd-cni 54m ``` @@ -1182,7 +1182,7 @@ linkerd-cni 54m Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create ClusterRoles +kubectl auth can-i create ClusterRoles yes ``` @@ -1199,7 +1199,7 @@ Example error: Ensure that the cluster role binding exists: ```bash -$ kubectl get clusterrolebinding linkerd-cni +kubectl get clusterrolebinding linkerd-cni NAME AGE linkerd-cni 54m ``` @@ -1207,7 +1207,7 @@ linkerd-cni 54m Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create ClusterRoleBindings +kubectl auth can-i create ClusterRoleBindings yes ``` @@ -1224,7 +1224,7 @@ Example error: Ensure that the role exists in the CNI namespace: ```bash -$ kubectl get role linkerd-cni -n linkerd-cni +kubectl get role linkerd-cni -n linkerd-cni NAME AGE linkerd-cni 52m ``` @@ -1232,7 +1232,7 @@ linkerd-cni 52m Also ensure you have permission to create Roles: ```bash -$ kubectl auth can-i create Roles -n linkerd-cni +kubectl auth can-i create Roles -n linkerd-cni yes ``` @@ -1249,7 +1249,7 @@ Example error: Ensure that the role binding exists in the CNI namespace: ```bash -$ kubectl get rolebinding linkerd-cni -n linkerd-cni +kubectl get rolebinding linkerd-cni -n linkerd-cni NAME AGE linkerd-cni 49m ``` @@ -1257,7 +1257,7 @@ linkerd-cni 49m Also ensure you have permission to create RoleBindings: ```bash -$ kubectl auth can-i create RoleBindings -n linkerd-cni +kubectl auth can-i create RoleBindings -n linkerd-cni yes ``` @@ -1274,7 +1274,7 @@ Example error: Ensure that the CNI service account exists in the CNI namespace: ```bash -$ kubectl get ServiceAccount linkerd-cni -n linkerd-cni +kubectl get ServiceAccount linkerd-cni -n linkerd-cni NAME SECRETS AGE linkerd-cni 1 45m ``` @@ -1282,7 +1282,7 @@ linkerd-cni 1 45m Also ensure you have permission to 
create ServiceAccount: ```bash -$ kubectl auth can-i create ServiceAccounts -n linkerd-cni +kubectl auth can-i create ServiceAccounts -n linkerd-cni yes ``` @@ -1299,7 +1299,7 @@ Example error: Ensure that the CNI daemonset exists in the CNI namespace: ```bash -$ kubectl get ds -n linkerd-cni +kubectl get ds -n linkerd-cni NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE linkerd-cni 1 1 1 1 1 beta.kubernetes.io/os=linux 14m ``` @@ -1307,7 +1307,7 @@ linkerd-cni 1 1 1 1 1 beta.kubernet Also ensure you have permission to create DaemonSets: ```bash -$ kubectl auth can-i create DaemonSets -n linkerd-cni +kubectl auth can-i create DaemonSets -n linkerd-cni yes ``` @@ -1324,7 +1324,7 @@ Example failure: Ensure that all the CNI pods are running: ```bash -$ kubectl get po -n linkerd-cn +kubectl get po -n linkerd-cni NAME READY STATUS RESTARTS AGE linkerd-cni-rzp2q 1/1 Running 0 9m20s linkerd-cni-mf564 1/1 Running 0 9m22s @@ -1334,7 +1334,7 @@ linkerd-cni-p5670 1/1 Running 0 9m25s Ensure that all pods have finished the deployment of the CNI config and binary: ```bash -$ kubectl logs linkerd-cni-rzp2q -n linkerd-cni +kubectl logs linkerd-cni-rzp2q -n linkerd-cni Wrote linkerd CNI binaries to /host/opt/cni/bin Created CNI config /host/etc/cni/net.d/10-kindnet.conflist Done configuring CNI. Sleep=true @@ -1362,7 +1362,7 @@ Make sure multicluster extension is correctly installed and that the `links.multicluster.linkerd.io` CRD is present. ```bash -$ kubectl get crds | grep multicluster +kubectl get crds | grep multicluster NAME CREATED AT links.multicluster.linkerd.io 2021-03-10T09:58:10Z ``` @@ -1441,7 +1441,7 @@ the rules section. Expected rules for `linkerd-service-mirror-access-local-resources` cluster role: ```bash -$ kubectl --context=local get clusterrole linkerd-service-mirror-access-local-resources -o yaml +kubectl --context=local get clusterrole linkerd-service-mirror-access-local-resources -o yaml kind: ClusterRole metadata: labels: @@ -1474,7 +1474,7 @@ rules: Expected rules for `linkerd-service-mirror-read-remote-creds` role: ```bash -$ kubectl --context=local get role linkerd-service-mirror-read-remote-creds -n linkerd-multicluster -o yaml +kubectl --context=local get role linkerd-service-mirror-read-remote-creds -n linkerd-multicluster -o yaml kind: Role metadata: labels: @@ -1507,7 +1507,7 @@ everything to start up. 
If this is a permanent error, you'll want to validate the state of the controller pod with: ```bash -$ kubectl --all-namespaces get po --selector linkerd.io/control-plane-component=linkerd-service-mirror +kubectl --all-namespaces get po --selector linkerd.io/control-plane-component=linkerd-service-mirror NAME READY STATUS RESTARTS AGE linkerd-service-mirror-7bb8ff5967-zg265 2/2 Running 0 50m ``` @@ -1606,7 +1606,7 @@ Example failure: Ensure the linkerd-viz extension ClusterRoles exist: ```bash -$ kubectl get clusterroles | grep linkerd-viz +kubectl get clusterroles | grep linkerd-viz linkerd-linkerd-viz-metrics-api 2021-01-26T18:02:17Z linkerd-linkerd-viz-prometheus 2021-01-26T18:02:17Z linkerd-linkerd-viz-tap 2021-01-26T18:02:17Z @@ -1617,7 +1617,7 @@ linkerd-linkerd-viz-web-check 2021-01-2 Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create clusterroles +kubectl auth can-i create clusterroles yes ``` @@ -1634,7 +1634,7 @@ Example failure: Ensure the linkerd-viz extension ClusterRoleBindings exist: ```bash -$ kubectl get clusterrolebindings | grep linkerd-viz +kubectl get clusterrolebindings | grep linkerd-viz linkerd-linkerd-viz-metrics-api ClusterRole/linkerd-linkerd-viz-metrics-api 18h linkerd-linkerd-viz-prometheus ClusterRole/linkerd-linkerd-viz-prometheus 18h linkerd-linkerd-viz-tap ClusterRole/linkerd-linkerd-viz-tap 18h @@ -1646,7 +1646,7 @@ linkerd-linkerd-viz-web-check ClusterRole/linkerd-linke Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create clusterrolebindings +kubectl auth can-i create clusterrolebindings yes ``` @@ -1718,7 +1718,7 @@ requirements in the cluster: Ensure all the linkerd-viz pods are injected ```bash -$ kubectl -n linkerd-viz get pods +kubectl -n linkerd-viz get pods NAME READY STATUS RESTARTS AGE grafana-68cddd7cc8-nrv4h 2/2 Running 3 18h metrics-api-77f684f7c7-hnw8r 2/2 Running 2 18h @@ -1742,7 +1742,7 @@ Make sure that the `proxy-injector` is working correctly by running Ensure all the linkerd-viz pods are running with 2/2 ```bash -$ kubectl -n linkerd-viz get pods +kubectl -n linkerd-viz get pods NAME READY STATUS RESTARTS AGE grafana-68cddd7cc8-nrv4h 2/2 Running 3 18h metrics-api-77f684f7c7-hnw8r 2/2 Running 2 18h @@ -1847,7 +1847,7 @@ Example failure: Ensure the linkerd-jaeger ServiceAccounts exist: ```bash -$ kubectl -n linkerd-jaeger get serviceaccounts +kubectl -n linkerd-jaeger get serviceaccounts NAME SECRETS AGE collector 1 23m jaeger 1 23m @@ -1857,7 +1857,7 @@ Also ensure you have permission to create ServiceAccounts in the linkerd-jaeger namespace: ```bash -$ kubectl -n linkerd-jaeger auth can-i create serviceaccounts +kubectl -n linkerd-jaeger auth can-i create serviceaccounts yes ``` @@ -1874,7 +1874,7 @@ Example failure: Ensure the Linkerd ConfigMap exists: ```bash -$ kubectl -n linkerd-jaeger get configmap/collector-config +kubectl -n linkerd-jaeger get configmap/collector-config NAME DATA AGE collector-config 1 61m ``` @@ -1882,7 +1882,7 @@ collector-config 1 61m Also ensure you have permission to create ConfigMaps: ```bash -$ kubectl -n linkerd-jaeger auth can-i create configmap +kubectl -n linkerd-jaeger auth can-i create configmap yes ``` @@ -1897,7 +1897,7 @@ yes Ensure all the jaeger pods are injected ```bash -$ kubectl -n linkerd-jaeger get pods +kubectl -n linkerd-jaeger get pods NAME READY STATUS RESTARTS AGE collector-69cc44dfbc-rhpfg 2/2 Running 0 11s jaeger-6f98d5c979-scqlq 2/2 Running 0 11s @@ -1918,7 +1918,7 @@ Make sure that the 
`proxy-injector` is working correctly by running Ensure all the linkerd-jaeger pods are running with 2/2 ```bash -$ kubectl -n linkerd-jaeger get pods +kubectl -n linkerd-jaeger get pods NAME READY STATUS RESTARTS AGE jaeger-injector-548684d74b-bcq5h 2/2 Running 0 5s collector-69cc44dfbc-wqf6s 2/2 Running 0 5s @@ -1967,7 +1967,7 @@ Ensure you can connect to the Linkerd Buoyant version check endpoint from the environment the `linkerd` cli is running: ```bash -$ curl --proto '=https' --tlsv1.2 -sSfL https://buoyant.cloud/version.json +curl --proto '=https' --tlsv1.2 -sSfL https://buoyant.cloud/version.json {"linkerd-buoyant":"v0.4.4"} ``` @@ -2032,7 +2032,7 @@ linkerd-buoyant install | kubectl apply -f - Ensure that the cluster role exists: ```bash -$ kubectl get clusterrole buoyant-cloud-agent +kubectl get clusterrole buoyant-cloud-agent NAME CREATED AT buoyant-cloud-agent 2020-11-13T00:59:50Z ``` @@ -2040,7 +2040,7 @@ buoyant-cloud-agent 2020-11-13T00:59:50Z Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create ClusterRoles +kubectl auth can-i create ClusterRoles yes ``` @@ -2055,7 +2055,7 @@ yes Ensure that the cluster role binding exists: ```bash -$ kubectl get clusterrolebinding buoyant-cloud-agent +kubectl get clusterrolebinding buoyant-cloud-agent NAME ROLE AGE buoyant-cloud-agent ClusterRole/buoyant-cloud-agent 301d ``` @@ -2063,7 +2063,7 @@ buoyant-cloud-agent ClusterRole/buoyant-cloud-agent 301d Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create ClusterRoleBindings +kubectl auth can-i create ClusterRoleBindings yes ``` @@ -2078,7 +2078,7 @@ yes Ensure that the service account exists: ```bash -$ kubectl -n buoyant-cloud get serviceaccount buoyant-cloud-agent +kubectl -n buoyant-cloud get serviceaccount buoyant-cloud-agent NAME SECRETS AGE buoyant-cloud-agent 1 301d ``` @@ -2086,7 +2086,7 @@ buoyant-cloud-agent 1 301d Also ensure you have permission to create ServiceAccounts: ```bash -$ kubectl -n buoyant-cloud auth can-i create ServiceAccount +kubectl -n buoyant-cloud auth can-i create ServiceAccount yes ``` @@ -2101,7 +2101,7 @@ yes Ensure that the secret exists: ```bash -$ kubectl -n buoyant-cloud get secret buoyant-cloud-id +kubectl -n buoyant-cloud get secret buoyant-cloud-id NAME TYPE DATA AGE buoyant-cloud-id Opaque 4 301d ``` @@ -2109,7 +2109,7 @@ buoyant-cloud-id Opaque 4 301d Also ensure you have permission to create ServiceAccounts: ```bash -$ kubectl -n buoyant-cloud auth can-i create ServiceAccount +kubectl -n buoyant-cloud auth can-i create ServiceAccount yes ``` @@ -2147,7 +2147,7 @@ everything to start up. 
If this is a permanent error, you'll want to validate the state of the `buoyant-cloud-agent` Deployment with: ```bash -$ kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-agent +kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-agent NAME READY STATUS RESTARTS AGE buoyant-cloud-agent-6b8c6888d7-htr7d 2/2 Running 0 156m ``` @@ -2170,7 +2170,7 @@ Ensure the `buoyant-cloud-agent` pod is injected, the `READY` column should show `2/2`: ```bash -$ kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-agent +kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-agent NAME READY STATUS RESTARTS AGE buoyant-cloud-agent-6b8c6888d7-htr7d 2/2 Running 0 161m ``` @@ -2189,7 +2189,7 @@ Make sure that the `proxy-injector` is working correctly by running Check the version with: ```bash -$ linkerd-buoyant version +linkerd-buoyant version CLI version: v0.4.4 Agent version: v0.4.4 ``` @@ -2248,7 +2248,7 @@ everything to start up. If this is a permanent error, you'll want to validate the state of the `buoyant-cloud-metrics` DaemonSet with: ```bash -$ kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-metrics +kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-metrics NAME READY STATUS RESTARTS AGE buoyant-cloud-metrics-kt9mv 2/2 Running 0 163m buoyant-cloud-metrics-q8jhj 2/2 Running 0 163m @@ -2274,7 +2274,7 @@ Ensure the `buoyant-cloud-metrics` pods are injected, the `READY` column should show `2/2`: ```bash -$ kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-metrics +kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-metrics NAME READY STATUS RESTARTS AGE buoyant-cloud-metrics-kt9mv 2/2 Running 0 166m buoyant-cloud-metrics-q8jhj 2/2 Running 0 166m @@ -2296,7 +2296,7 @@ Make sure that the `proxy-injector` is working correctly by running Check the version with: ```bash -$ kubectl -n buoyant-cloud get daemonset/buoyant-cloud-metrics -o jsonpath='{.metadata.labels}' +kubectl -n buoyant-cloud get daemonset/buoyant-cloud-metrics -o jsonpath='{.metadata.labels}' {"app.kubernetes.io/name":"metrics","app.kubernetes.io/part-of":"buoyant-cloud","app.kubernetes.io/version":"v0.4.4"} ``` diff --git a/linkerd.io/content/2.11/reference/cli/check.md b/linkerd.io/content/2.11/reference/cli/check.md index 312891f8ef..578e3722d4 100644 --- a/linkerd.io/content/2.11/reference/cli/check.md +++ b/linkerd.io/content/2.11/reference/cli/check.md @@ -12,7 +12,7 @@ for a full list of all the possible checks, what they do and how to fix them. 
## Example output ```bash -$ linkerd check +linkerd check kubernetes-api -------------- √ can initialize the client diff --git a/linkerd.io/content/2.11/reference/iptables.md b/linkerd.io/content/2.11/reference/iptables.md index 67a7ea89de..9b4d229a59 100644 --- a/linkerd.io/content/2.11/reference/iptables.md +++ b/linkerd.io/content/2.11/reference/iptables.md @@ -164,7 +164,7 @@ Alternatively, if you want to inspect the iptables rules created for a pod, you can retrieve them through the following command: ```bash -$ kubectl -n logs linkerd-init +kubectl -n logs linkerd-init # where is the name of the pod # you want to see the iptables rules for ``` diff --git a/linkerd.io/content/2.11/tasks/getting-per-route-metrics.md b/linkerd.io/content/2.11/tasks/getting-per-route-metrics.md index 424ede9217..7d6120773c 100644 --- a/linkerd.io/content/2.11/tasks/getting-per-route-metrics.md +++ b/linkerd.io/content/2.11/tasks/getting-per-route-metrics.md @@ -14,7 +14,7 @@ For a tutorial that shows this functionality off, check out the You can view per-route metrics in the CLI by running `linkerd viz routes`: ```bash -$ linkerd viz routes svc/webapp +linkerd viz routes svc/webapp ROUTE SERVICE SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 GET / webapp 100.00% 0.6rps 25ms 30ms 30ms GET /authors/{id} webapp 100.00% 0.6rps 22ms 29ms 30ms @@ -34,7 +34,7 @@ specified in your service profile will end up there. It is also possible to look the metrics up by other resource types, such as: ```bash -$ linkerd viz routes deploy/webapp +linkerd viz routes deploy/webapp ROUTE SERVICE SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 [DEFAULT] kubernetes 0.00% 0.0rps 0ms 0ms 0ms GET / webapp 100.00% 0.5rps 27ms 38ms 40ms @@ -53,7 +53,7 @@ Then, it is possible to filter all the way down to requests going from a specific resource to other services: ```bash -$ linkerd viz routes deploy/webapp --to svc/books +linkerd viz routes deploy/webapp --to svc/books ROUTE SERVICE SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 DELETE /books/{id}.json books 100.00% 0.5rps 18ms 29ms 30ms GET /books.json books 100.00% 1.1rps 7ms 12ms 18ms diff --git a/linkerd.io/content/2.11/tasks/multicluster-using-statefulsets.md b/linkerd.io/content/2.11/tasks/multicluster-using-statefulsets.md index 9d8730b5b0..c720c09563 100644 --- a/linkerd.io/content/2.11/tasks/multicluster-using-statefulsets.md +++ b/linkerd.io/content/2.11/tasks/multicluster-using-statefulsets.md @@ -48,8 +48,8 @@ The first step is to clone the demo repository on your local machine. ```sh # clone example repository -$ git clone git@github.com:mateiidavid/l2d-k3d-statefulset.git -$ cd l2d-k3d-statefulset +git clone git@github.com:mateiidavid/l2d-k3d-statefulset.git +cd l2d-k3d-statefulset ``` The second step consists of creating two `k3d` clusters named `east` and `west`, @@ -60,10 +60,10 @@ everything. ```sh # create k3d clusters -$ ./create.sh +./create.sh # list the clusters -$ k3d cluster list +k3d cluster list NAME SERVERS AGENTS LOADBALANCER east 1/1 0/0 true west 1/1 0/0 true @@ -78,10 +78,10 @@ provided scripts, but feel free to have a look! ```sh # Install Linkerd and multicluster, output to check should be a success -$ ./install.sh +./install.sh # Next, link the two clusters together -$ ./link.sh +./link.sh ``` Perfect! If you've made it this far with no errors, then it's a good sign. In @@ -101,17 +101,17 @@ communication. 
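Before deploying anything, it can be worth confirming that the link created by the scripts is healthy. A minimal sketch, assuming the `k3d-east` and `k3d-west` contexts created above:

```sh
# verify the multicluster extension and the link on the east cluster
linkerd --context=k3d-east multicluster check
```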
First, we will deploy our pods and services: ```sh # deploy services and mesh namespaces -$ ./deploy.sh +./deploy.sh # verify both clusters # # verify east -$ kubectl --context=k3d-east get pods +kubectl --context=k3d-east get pods NAME READY STATUS RESTARTS AGE curl-56dc7d945d-96r6p 2/2 Running 0 7s # verify west has headless service -$ kubectl --context=k3d-west get services +kubectl --context=k3d-west get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.43.0.1 443/TCP 10m nginx-svc ClusterIP None 80/TCP 8s @@ -119,7 +119,7 @@ nginx-svc ClusterIP None 80/TCP 8s # verify west has statefulset # # this may take a while to come up -$ kubectl --context=k3d-west get pods +kubectl --context=k3d-west get pods NAME READY STATUS RESTARTS AGE nginx-set-0 2/2 Running 0 53s nginx-set-1 2/2 Running 0 43s @@ -130,7 +130,7 @@ Before we go further, let's have a look at the endpoints object for the `nginx-svc`: ```sh -$ kubectl --context=k3d-west get endpoints nginx-svc -o yaml +kubectl --context=k3d-west get endpoints nginx-svc -o yaml ... subsets: - addresses: @@ -170,23 +170,23 @@ would get an answer back. We can test this out by applying the curl pod to the `west` cluster: ```sh -$ kubectl --context=k3d-west apply -f east/curl.yml -$ kubectl --context=k3d-west get pods +kubectl --context=k3d-west apply -f east/curl.yml +kubectl --context=k3d-west get pods NAME READY STATUS RESTARTS AGE nginx-set-0 2/2 Running 0 5m8s nginx-set-1 2/2 Running 0 4m58s nginx-set-2 2/2 Running 0 4m51s curl-56dc7d945d-s4n8j 0/2 PodInitializing 0 4s -$ kubectl --context=k3d-west exec -it curl-56dc7d945d-s4n8j -c curl -- bin/sh -/$ # prompt for curl pod +kubectl --context=k3d-west exec -it curl-56dc7d945d-s4n8j -c curl -- bin/sh +/# prompt for curl pod ``` If we now curl one of these instances, we will get back a response. ```sh # exec'd on the pod -/ $ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local +/ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local " @@ -218,10 +218,10 @@ Now, let's do the same, but this time from the `east` cluster. We will first export the service. ```sh -$ kubectl --context=k3d-west label service nginx-svc mirror.linkerd.io/exported="true" +kubectl --context=k3d-west label service nginx-svc mirror.linkerd.io/exported="true" service/nginx-svc labeled -$ kubectl --context=k3d-east get services +kubectl --context=k3d-east get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.43.0.1 443/TCP 20h nginx-svc-west ClusterIP None 80/TCP 29s @@ -235,7 +235,7 @@ endpoints for `nginx-svc-west` will have the same hostnames, but each hostname will point to one of the services we see above: ```sh -$ kubectl --context=k3d-east get endpoints nginx-svc-west -o yaml +kubectl --context=k3d-east get endpoints nginx-svc-west -o yaml subsets: - addresses: - hostname: nginx-set-0 @@ -251,17 +251,17 @@ cluster (`west`), will be mirrored as a clusterIP service. We will see in a second why this matters. ```sh -$ kubectl --context=k3d-east get pods +kubectl --context=k3d-east get pods NAME READY STATUS RESTARTS AGE curl-56dc7d945d-96r6p 2/2 Running 0 23m # exec and curl -$ kubectl --context=k3d-east exec pod curl-56dc7d945d-96r6p -it -c curl -- bin/sh +kubectl --context=k3d-east exec pod curl-56dc7d945d-96r6p -it -c curl -- bin/sh # we want to curl the same hostname we see in the endpoints object above. # however, the service and cluster domain will now be different, since we # are in a different cluster. 
# -/ $ curl nginx-set-0.nginx-svc-west.default.svc.east.cluster.local +/ curl nginx-set-0.nginx-svc-west.default.svc.east.cluster.local @@ -329,8 +329,8 @@ validation. To clean-up, you can remove both clusters entirely using the k3d CLI: ```sh -$ k3d cluster delete east +k3d cluster delete east cluster east deleted -$ k3d cluster delete west +k3d cluster delete west cluster west deleted ``` diff --git a/linkerd.io/content/2.11/tasks/restricting-access.md b/linkerd.io/content/2.11/tasks/restricting-access.md index cb4db5c857..7f79a0478b 100644 --- a/linkerd.io/content/2.11/tasks/restricting-access.md +++ b/linkerd.io/content/2.11/tasks/restricting-access.md @@ -16,27 +16,27 @@ Ensure that you have Linkerd version stable-2.11.0 or later installed, and that it is healthy: ```bash -$ linkerd install | kubectl apply -f - +linkerd install | kubectl apply -f - ... -$ linkerd check -o short +linkerd check -o short ... ``` Inject and install the Emojivoto application: ```bash -$ linkerd inject https://run.linkerd.io/emojivoto.yml | kubectl apply -f - +linkerd inject https://run.linkerd.io/emojivoto.yml | kubectl apply -f - ... -$ linkerd check -n emojivoto --proxy -o short +linkerd check -n emojivoto --proxy -o short ... ``` In order to observe what's going on, we'll also install the Viz extension: ```bash -$ linkerd viz install | kubectl apply -f - +linkerd viz install | kubectl apply -f - ... -$ linkerd viz check +linkerd viz check ... ``` diff --git a/linkerd.io/content/2.11/tasks/securing-your-cluster.md b/linkerd.io/content/2.11/tasks/securing-your-cluster.md index 94d8f7dcc2..6c0efb9462 100644 --- a/linkerd.io/content/2.11/tasks/securing-your-cluster.md +++ b/linkerd.io/content/2.11/tasks/securing-your-cluster.md @@ -54,7 +54,7 @@ kubectl auth can-i watch deployments.tap.linkerd.io -n emojivoto --as $(whoami) You can also use the Linkerd CLI's `--as` flag to confirm: ```bash -$ linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) +linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) Cannot connect to Linkerd Viz: namespaces is forbidden: User "XXXX" cannot list resource "namespaces" in API group "" at the cluster scope Validate the install with: linkerd viz check ... @@ -71,7 +71,7 @@ To enable tap access to all resources in all namespaces, you may bind your user to the `linkerd-linkerd-tap-admin` ClusterRole, installed by default: ```bash -$ kubectl describe clusterroles/linkerd-linkerd-viz-tap-admin +kubectl describe clusterroles/linkerd-linkerd-viz-tap-admin Name: linkerd-linkerd-viz-tap-admin Labels: component=tap linkerd.io/extension=viz @@ -103,7 +103,7 @@ kubectl create clusterrolebinding \ You can verify you now have tap access with: ```bash -$ linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) +linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) req id=3:0 proxy=in src=10.244.0.1:37392 dst=10.244.0.13:9996 tls=not_provided_by_remote :method=GET :authority=10.244.0.13:9996 :path=/ping ... ``` @@ -137,14 +137,14 @@ Because GCloud provides this additional level of access, there are cases where not. 
To validate this, check whether your GCloud user has Tap access: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces yes ``` And then validate whether your RBAC user has Tap access: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as $(gcloud config get-value account) +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as $(gcloud config get-value account) no - no RBAC policy matched ``` @@ -181,14 +181,14 @@ privileges necessary to tap resources. To confirm: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web yes ``` This access is enabled via a `linkerd-linkerd-viz-web-admin` ClusterRoleBinding: ```bash -$ kubectl describe clusterrolebindings/linkerd-linkerd-viz-web-admin +kubectl describe clusterrolebindings/linkerd-linkerd-viz-web-admin Name: linkerd-linkerd-viz-web-admin Labels: component=web linkerd.io/extensions=viz @@ -221,6 +221,6 @@ kubectl delete clusterrolebindings/linkerd-linkerd-viz-web-admin To confirm: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web no ``` diff --git a/linkerd.io/content/2.11/tasks/troubleshooting.md b/linkerd.io/content/2.11/tasks/troubleshooting.md index 3a988974a6..10edc026ad 100644 --- a/linkerd.io/content/2.11/tasks/troubleshooting.md +++ b/linkerd.io/content/2.11/tasks/troubleshooting.md @@ -314,7 +314,7 @@ Example failure: Ensure the Linkerd ClusterRoles exist: ```bash -$ kubectl get clusterroles | grep linkerd +kubectl get clusterroles | grep linkerd linkerd-linkerd-destination 9d linkerd-linkerd-identity 9d linkerd-linkerd-proxy-injector 9d @@ -324,7 +324,7 @@ linkerd-policy 9d Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create clusterroles +kubectl auth can-i create clusterroles yes ``` @@ -341,7 +341,7 @@ Example failure: Ensure the Linkerd ClusterRoleBindings exist: ```bash -$ kubectl get clusterrolebindings | grep linkerd +kubectl get clusterrolebindings | grep linkerd linkerd-linkerd-destination 9d linkerd-linkerd-identity 9d linkerd-linkerd-proxy-injector 9d @@ -351,7 +351,7 @@ linkerd-destination-policy 9d Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create clusterrolebindings +kubectl auth can-i create clusterrolebindings yes ``` @@ -368,7 +368,7 @@ Example failure: Ensure the Linkerd ServiceAccounts exist: ```bash -$ kubectl -n linkerd get serviceaccounts +kubectl -n linkerd get serviceaccounts NAME SECRETS AGE default 1 14m linkerd-destination 1 14m @@ -381,7 +381,7 @@ Also ensure you have permission to create ServiceAccounts in the Linkerd namespace: ```bash -$ kubectl -n linkerd auth can-i create serviceaccounts +kubectl -n linkerd auth can-i create serviceaccounts yes ``` @@ -398,7 +398,7 @@ Example failure: Ensure the Linkerd CRD exists: ```bash -$ kubectl get customresourcedefinitions +kubectl get customresourcedefinitions NAME CREATED AT serviceprofiles.linkerd.io 2019-04-25T21:47:31Z ``` @@ -406,7 +406,7 @@ serviceprofiles.linkerd.io 2019-04-25T21:47:31Z Also ensure you have permission to create CRDs: ```bash -$ kubectl auth can-i create customresourcedefinitions +kubectl auth can-i 
create customresourcedefinitions yes ``` @@ -423,14 +423,14 @@ Example failure: Ensure the Linkerd MutatingWebhookConfigurations exists: ```bash -$ kubectl get mutatingwebhookconfigurations | grep linkerd +kubectl get mutatingwebhookconfigurations | grep linkerd linkerd-proxy-injector-webhook-config 2019-07-01T13:13:26Z ``` Also ensure you have permission to create MutatingWebhookConfigurations: ```bash -$ kubectl auth can-i create mutatingwebhookconfigurations +kubectl auth can-i create mutatingwebhookconfigurations yes ``` @@ -447,14 +447,14 @@ Example failure: Ensure the Linkerd ValidatingWebhookConfiguration exists: ```bash -$ kubectl get validatingwebhookconfigurations | grep linkerd +kubectl get validatingwebhookconfigurations | grep linkerd linkerd-sp-validator-webhook-config 2019-07-01T13:13:26Z ``` Also ensure you have permission to create ValidatingWebhookConfigurations: ```bash -$ kubectl auth can-i create validatingwebhookconfigurations +kubectl auth can-i create validatingwebhookconfigurations yes ``` @@ -471,14 +471,14 @@ Example failure: Ensure the Linkerd PodSecurityPolicy exists: ```bash -$ kubectl get podsecuritypolicies | grep linkerd +kubectl get podsecuritypolicies | grep linkerd linkerd-linkerd-control-plane false NET_ADMIN,NET_RAW RunAsAny RunAsAny MustRunAs MustRunAs true configMap,emptyDir,secret,projected,downwardAPI,persistentVolumeClaim ``` Also ensure you have permission to create PodSecurityPolicies: ```bash -$ kubectl auth can-i create podsecuritypolicies +kubectl auth can-i create podsecuritypolicies yes ``` @@ -526,7 +526,7 @@ Example failure: Ensure the Linkerd ConfigMap exists: ```bash -$ kubectl -n linkerd get configmap/linkerd-config +kubectl -n linkerd get configmap/linkerd-config NAME DATA AGE linkerd-config 3 61m ``` @@ -534,7 +534,7 @@ linkerd-config 3 61m Also ensure you have permission to create ConfigMaps: ```bash -$ kubectl -n linkerd auth can-i create configmap +kubectl -n linkerd auth can-i create configmap yes ``` @@ -888,7 +888,7 @@ Example failure: Verify the state of the control plane pods with: ```bash -$ kubectl -n linkerd get po +kubectl -n linkerd get po NAME READY STATUS RESTARTS AGE linkerd-destination-5fd7b5d466-szgqm 2/2 Running 1 12m linkerd-identity-54df78c479-hbh5m 2/2 Running 0 12m @@ -990,7 +990,7 @@ Ensure you can connect to the Linkerd version check endpoint from the environment the `linkerd` cli is running: ```bash -$ curl "https://versioncheck.linkerd.io/version.json?version=edge-19.1.2&uuid=test-uuid&source=cli" +curl "https://versioncheck.linkerd.io/version.json?version=edge-19.1.2&uuid=test-uuid&source=cli" {"stable":"stable-2.1.0","edge":"edge-19.1.2"} ``` @@ -1049,7 +1049,7 @@ normally. Example failure: ```bash -$ linkerd check --proxy --namespace foo +linkerd check --proxy --namespace foo ... 
× data plane namespace exists The "foo" namespace does not exist @@ -1198,7 +1198,7 @@ Ensure the kube-system namespace has the `config.linkerd.io/admission-webhooks:disabled` label: ```bash -$ kubectl get namespace kube-system -oyaml +kubectl get namespace kube-system -oyaml kind: Namespace apiVersion: v1 metadata: @@ -1271,7 +1271,7 @@ Example error: Ensure that the linkerd-cni-config ConfigMap exists in the CNI namespace: ```bash -$ kubectl get cm linkerd-cni-config -n linkerd-cni +kubectl get cm linkerd-cni-config -n linkerd-cni NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES linkerd-linkerd-cni-cni false RunAsAny RunAsAny RunAsAny RunAsAny false hostPath,secret ``` @@ -1279,7 +1279,7 @@ linkerd-linkerd-cni-cni false RunAsAny RunAsAny RunAsAny RunAs Also ensure you have permission to create ConfigMaps: ```bash -$ kubectl auth can-i create ConfigMaps +kubectl auth can-i create ConfigMaps yes ``` @@ -1296,7 +1296,7 @@ Example error: Ensure that the pod security policy exists: ```bash -$ kubectl get psp linkerd-linkerd-cni-cni +kubectl get psp linkerd-linkerd-cni-cni NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES linkerd-linkerd-cni-cni false RunAsAny RunAsAny RunAsAny RunAsAny false hostPath,secret ``` @@ -1304,7 +1304,7 @@ linkerd-linkerd-cni-cni false RunAsAny RunAsAny RunAsAny RunAs Also ensure you have permission to create PodSecurityPolicies: ```bash -$ kubectl auth can-i create PodSecurityPolicies +kubectl auth can-i create PodSecurityPolicies yes ``` @@ -1321,7 +1321,7 @@ Example error: Ensure that the cluster role exists: ```bash -$ kubectl get clusterrole linkerd-cni +kubectl get clusterrole linkerd-cni NAME AGE linkerd-cni 54m ``` @@ -1329,7 +1329,7 @@ linkerd-cni 54m Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create ClusterRoles +kubectl auth can-i create ClusterRoles yes ``` @@ -1346,7 +1346,7 @@ Example error: Ensure that the cluster role binding exists: ```bash -$ kubectl get clusterrolebinding linkerd-cni +kubectl get clusterrolebinding linkerd-cni NAME AGE linkerd-cni 54m ``` @@ -1354,7 +1354,7 @@ linkerd-cni 54m Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create ClusterRoleBindings +kubectl auth can-i create ClusterRoleBindings yes ``` @@ -1371,7 +1371,7 @@ Example error: Ensure that the role exists in the CNI namespace: ```bash -$ kubectl get role linkerd-cni -n linkerd-cni +kubectl get role linkerd-cni -n linkerd-cni NAME AGE linkerd-cni 52m ``` @@ -1379,7 +1379,7 @@ linkerd-cni 52m Also ensure you have permission to create Roles: ```bash -$ kubectl auth can-i create Roles -n linkerd-cni +kubectl auth can-i create Roles -n linkerd-cni yes ``` @@ -1396,7 +1396,7 @@ Example error: Ensure that the role binding exists in the CNI namespace: ```bash -$ kubectl get rolebinding linkerd-cni -n linkerd-cni +kubectl get rolebinding linkerd-cni -n linkerd-cni NAME AGE linkerd-cni 49m ``` @@ -1404,7 +1404,7 @@ linkerd-cni 49m Also ensure you have permission to create RoleBindings: ```bash -$ kubectl auth can-i create RoleBindings -n linkerd-cni +kubectl auth can-i create RoleBindings -n linkerd-cni yes ``` @@ -1421,7 +1421,7 @@ Example error: Ensure that the CNI service account exists in the CNI namespace: ```bash -$ kubectl get ServiceAccount linkerd-cni -n linkerd-cni +kubectl get ServiceAccount linkerd-cni -n linkerd-cni NAME SECRETS AGE linkerd-cni 1 45m ``` @@ -1429,7 +1429,7 @@ linkerd-cni 1 45m Also ensure you have permission to 
create ServiceAccount: ```bash -$ kubectl auth can-i create ServiceAccounts -n linkerd-cni +kubectl auth can-i create ServiceAccounts -n linkerd-cni yes ``` @@ -1446,7 +1446,7 @@ Example error: Ensure that the CNI daemonset exists in the CNI namespace: ```bash -$ kubectl get ds -n linkerd-cni +kubectl get ds -n linkerd-cni NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE linkerd-cni 1 1 1 1 1 beta.kubernetes.io/os=linux 14m ``` @@ -1454,7 +1454,7 @@ linkerd-cni 1 1 1 1 1 beta.kubernet Also ensure you have permission to create DaemonSets: ```bash -$ kubectl auth can-i create DaemonSets -n linkerd-cni +kubectl auth can-i create DaemonSets -n linkerd-cni yes ``` @@ -1471,7 +1471,7 @@ Example failure: Ensure that all the CNI pods are running: ```bash -$ kubectl get po -n linkerd-cn +kubectl get po -n linkerd-cni NAME READY STATUS RESTARTS AGE linkerd-cni-rzp2q 1/1 Running 0 9m20s linkerd-cni-mf564 1/1 Running 0 9m22s @@ -1481,7 +1481,7 @@ linkerd-cni-p5670 1/1 Running 0 9m25s Ensure that all pods have finished the deployment of the CNI config and binary: ```bash -$ kubectl logs linkerd-cni-rzp2q -n linkerd-cni +kubectl logs linkerd-cni-rzp2q -n linkerd-cni Wrote linkerd CNI binaries to /host/opt/cni/bin Created CNI config /host/etc/cni/net.d/10-kindnet.conflist Done configuring CNI. Sleep=true @@ -1509,7 +1509,7 @@ Make sure multicluster extension is correctly installed and that the `links.multicluster.linkerd.io` CRD is present. ```bash -$ kubectl get crds | grep multicluster +kubectl get crds | grep multicluster NAME CREATED AT links.multicluster.linkerd.io 2021-03-10T09:58:10Z ``` @@ -1588,7 +1588,7 @@ the rules section. Expected rules for `linkerd-service-mirror-access-local-resources` cluster role: ```bash -$ kubectl --context=local get clusterrole linkerd-service-mirror-access-local-resources -o yaml +kubectl --context=local get clusterrole linkerd-service-mirror-access-local-resources -o yaml kind: ClusterRole metadata: labels: @@ -1621,7 +1621,7 @@ rules: Expected rules for `linkerd-service-mirror-read-remote-creds` role: ```bash -$ kubectl --context=local get role linkerd-service-mirror-read-remote-creds -n linkerd-multicluster -o yaml +kubectl --context=local get role linkerd-service-mirror-read-remote-creds -n linkerd-multicluster -o yaml kind: Role metadata: labels: @@ -1654,7 +1654,7 @@ everything to start up. 
If this is a permanent error, you'll want to validate the state of the controller pod with: ```bash -$ kubectl --all-namespaces get po --selector linkerd.io/control-plane-component=linkerd-service-mirror +kubectl --all-namespaces get po --selector linkerd.io/control-plane-component=linkerd-service-mirror NAME READY STATUS RESTARTS AGE linkerd-service-mirror-7bb8ff5967-zg265 2/2 Running 0 50m ``` @@ -1753,7 +1753,7 @@ Example failure: Ensure the linkerd-viz extension ClusterRoles exist: ```bash -$ kubectl get clusterroles | grep linkerd-viz +kubectl get clusterroles | grep linkerd-viz linkerd-linkerd-viz-metrics-api 2021-01-26T18:02:17Z linkerd-linkerd-viz-prometheus 2021-01-26T18:02:17Z linkerd-linkerd-viz-tap 2021-01-26T18:02:17Z @@ -1764,7 +1764,7 @@ linkerd-linkerd-viz-web-check 2021-01-2 Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create clusterroles +kubectl auth can-i create clusterroles yes ``` @@ -1781,7 +1781,7 @@ Example failure: Ensure the linkerd-viz extension ClusterRoleBindings exist: ```bash -$ kubectl get clusterrolebindings | grep linkerd-viz +kubectl get clusterrolebindings | grep linkerd-viz linkerd-linkerd-viz-metrics-api ClusterRole/linkerd-linkerd-viz-metrics-api 18h linkerd-linkerd-viz-prometheus ClusterRole/linkerd-linkerd-viz-prometheus 18h linkerd-linkerd-viz-tap ClusterRole/linkerd-linkerd-viz-tap 18h @@ -1793,7 +1793,7 @@ linkerd-linkerd-viz-web-check ClusterRole/linkerd-linke Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create clusterrolebindings +kubectl auth can-i create clusterrolebindings yes ``` @@ -1865,7 +1865,7 @@ requirements in the cluster: Ensure all the linkerd-viz pods are injected ```bash -$ kubectl -n linkerd-viz get pods +kubectl -n linkerd-viz get pods NAME READY STATUS RESTARTS AGE grafana-68cddd7cc8-nrv4h 2/2 Running 3 18h metrics-api-77f684f7c7-hnw8r 2/2 Running 2 18h @@ -1889,7 +1889,7 @@ Make sure that the `proxy-injector` is working correctly by running Ensure all the linkerd-viz pods are running with 2/2 ```bash -$ kubectl -n linkerd-viz get pods +kubectl -n linkerd-viz get pods NAME READY STATUS RESTARTS AGE grafana-68cddd7cc8-nrv4h 2/2 Running 3 18h metrics-api-77f684f7c7-hnw8r 2/2 Running 2 18h @@ -1994,7 +1994,7 @@ Example failure: Ensure the linkerd-jaeger ServiceAccounts exist: ```bash -$ kubectl -n linkerd-jaeger get serviceaccounts +kubectl -n linkerd-jaeger get serviceaccounts NAME SECRETS AGE collector 1 23m jaeger 1 23m @@ -2004,7 +2004,7 @@ Also ensure you have permission to create ServiceAccounts in the linkerd-jaeger namespace: ```bash -$ kubectl -n linkerd-jaeger auth can-i create serviceaccounts +kubectl -n linkerd-jaeger auth can-i create serviceaccounts yes ``` @@ -2021,7 +2021,7 @@ Example failure: Ensure the Linkerd ConfigMap exists: ```bash -$ kubectl -n linkerd-jaeger get configmap/collector-config +kubectl -n linkerd-jaeger get configmap/collector-config NAME DATA AGE collector-config 1 61m ``` @@ -2029,7 +2029,7 @@ collector-config 1 61m Also ensure you have permission to create ConfigMaps: ```bash -$ kubectl -n linkerd-jaeger auth can-i create configmap +kubectl -n linkerd-jaeger auth can-i create configmap yes ``` @@ -2044,7 +2044,7 @@ yes Ensure all the jaeger pods are injected ```bash -$ kubectl -n linkerd-jaeger get pods +kubectl -n linkerd-jaeger get pods NAME READY STATUS RESTARTS AGE collector-69cc44dfbc-rhpfg 2/2 Running 0 11s jaeger-6f98d5c979-scqlq 2/2 Running 0 11s @@ -2065,7 +2065,7 @@ Make sure that the 
`proxy-injector` is working correctly by running Ensure all the linkerd-jaeger pods are running with 2/2 ```bash -$ kubectl -n linkerd-jaeger get pods +kubectl -n linkerd-jaeger get pods NAME READY STATUS RESTARTS AGE jaeger-injector-548684d74b-bcq5h 2/2 Running 0 5s collector-69cc44dfbc-wqf6s 2/2 Running 0 5s @@ -2114,7 +2114,7 @@ Ensure you can connect to the Linkerd Buoyant version check endpoint from the environment the `linkerd` cli is running: ```bash -$ curl https://buoyant.cloud/version.json +curl https://buoyant.cloud/version.json {"linkerd-buoyant":"v0.4.4"} ``` @@ -2179,7 +2179,7 @@ linkerd-buoyant install | kubectl apply -f - Ensure that the cluster role exists: ```bash -$ kubectl get clusterrole buoyant-cloud-agent +kubectl get clusterrole buoyant-cloud-agent NAME CREATED AT buoyant-cloud-agent 2020-11-13T00:59:50Z ``` @@ -2187,7 +2187,7 @@ buoyant-cloud-agent 2020-11-13T00:59:50Z Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create ClusterRoles +kubectl auth can-i create ClusterRoles yes ``` @@ -2202,7 +2202,7 @@ yes Ensure that the cluster role binding exists: ```bash -$ kubectl get clusterrolebinding buoyant-cloud-agent +kubectl get clusterrolebinding buoyant-cloud-agent NAME ROLE AGE buoyant-cloud-agent ClusterRole/buoyant-cloud-agent 301d ``` @@ -2210,7 +2210,7 @@ buoyant-cloud-agent ClusterRole/buoyant-cloud-agent 301d Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create ClusterRoleBindings +kubectl auth can-i create ClusterRoleBindings yes ``` @@ -2225,7 +2225,7 @@ yes Ensure that the service account exists: ```bash -$ kubectl -n buoyant-cloud get serviceaccount buoyant-cloud-agent +kubectl -n buoyant-cloud get serviceaccount buoyant-cloud-agent NAME SECRETS AGE buoyant-cloud-agent 1 301d ``` @@ -2233,7 +2233,7 @@ buoyant-cloud-agent 1 301d Also ensure you have permission to create ServiceAccounts: ```bash -$ kubectl -n buoyant-cloud auth can-i create ServiceAccount +kubectl -n buoyant-cloud auth can-i create ServiceAccount yes ``` @@ -2248,7 +2248,7 @@ yes Ensure that the secret exists: ```bash -$ kubectl -n buoyant-cloud get secret buoyant-cloud-id +kubectl -n buoyant-cloud get secret buoyant-cloud-id NAME TYPE DATA AGE buoyant-cloud-id Opaque 4 301d ``` @@ -2256,7 +2256,7 @@ buoyant-cloud-id Opaque 4 301d Also ensure you have permission to create ServiceAccounts: ```bash -$ kubectl -n buoyant-cloud auth can-i create ServiceAccount +kubectl -n buoyant-cloud auth can-i create ServiceAccount yes ``` @@ -2294,7 +2294,7 @@ everything to start up. 
If this is a permanent error, you'll want to validate the state of the `buoyant-cloud-agent` Deployment with: ```bash -$ kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-agent +kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-agent NAME READY STATUS RESTARTS AGE buoyant-cloud-agent-6b8c6888d7-htr7d 2/2 Running 0 156m ``` @@ -2317,7 +2317,7 @@ Ensure the `buoyant-cloud-agent` pod is injected, the `READY` column should show `2/2`: ```bash -$ kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-agent +kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-agent NAME READY STATUS RESTARTS AGE buoyant-cloud-agent-6b8c6888d7-htr7d 2/2 Running 0 161m ``` @@ -2336,7 +2336,7 @@ Make sure that the `proxy-injector` is working correctly by running Check the version with: ```bash -$ linkerd-buoyant version +linkerd-buoyant version CLI version: v0.4.4 Agent version: v0.4.4 ``` @@ -2395,7 +2395,7 @@ everything to start up. If this is a permanent error, you'll want to validate the state of the `buoyant-cloud-metrics` DaemonSet with: ```bash -$ kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-metrics +kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-metrics NAME READY STATUS RESTARTS AGE buoyant-cloud-metrics-kt9mv 2/2 Running 0 163m buoyant-cloud-metrics-q8jhj 2/2 Running 0 163m @@ -2421,7 +2421,7 @@ Ensure the `buoyant-cloud-metrics` pods are injected, the `READY` column should show `2/2`: ```bash -$ kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-metrics +kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-metrics NAME READY STATUS RESTARTS AGE buoyant-cloud-metrics-kt9mv 2/2 Running 0 166m buoyant-cloud-metrics-q8jhj 2/2 Running 0 166m @@ -2443,7 +2443,7 @@ Make sure that the `proxy-injector` is working correctly by running Check the version with: ```bash -$ kubectl -n buoyant-cloud get daemonset/buoyant-cloud-metrics -o jsonpath='{.metadata.labels}' +kubectl -n buoyant-cloud get daemonset/buoyant-cloud-metrics -o jsonpath='{.metadata.labels}' {"app.kubernetes.io/name":"metrics","app.kubernetes.io/part-of":"buoyant-cloud","app.kubernetes.io/version":"v0.4.4"} ``` diff --git a/linkerd.io/content/2.12/reference/cli/check.md b/linkerd.io/content/2.12/reference/cli/check.md index 7cd61cd237..67a2486908 100644 --- a/linkerd.io/content/2.12/reference/cli/check.md +++ b/linkerd.io/content/2.12/reference/cli/check.md @@ -12,7 +12,7 @@ for a full list of all the possible checks, what they do and how to fix them. ## Example output ```bash -$ linkerd check +linkerd check kubernetes-api -------------- √ can initialize the client diff --git a/linkerd.io/content/2.12/reference/iptables.md b/linkerd.io/content/2.12/reference/iptables.md index 67a7ea89de..9b4d229a59 100644 --- a/linkerd.io/content/2.12/reference/iptables.md +++ b/linkerd.io/content/2.12/reference/iptables.md @@ -164,7 +164,7 @@ Alternatively, if you want to inspect the iptables rules created for a pod, you can retrieve them through the following command: ```bash -$ kubectl -n logs linkerd-init +kubectl -n logs linkerd-init # where is the name of the pod # you want to see the iptables rules for ``` diff --git a/linkerd.io/content/2.12/tasks/configuring-per-route-policy.md b/linkerd.io/content/2.12/tasks/configuring-per-route-policy.md index 24606035be..fc1f8477be 100644 --- a/linkerd.io/content/2.12/tasks/configuring-per-route-policy.md +++ b/linkerd.io/content/2.12/tasks/configuring-per-route-policy.md @@ -30,7 +30,7 @@ haven't already done this. 
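If you want to double-check the prerequisites first, a quick sketch (assuming the CLI and the Viz extension are already installed, as described in the guides above):

```bash
# confirm the control plane and the viz extension are healthy
linkerd check
linkerd viz check
```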
Inject and install the Books demo application: ```bash -$ kubectl create ns booksapp && \ +kubectl create ns booksapp && \ curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/booksapp.yml \ | linkerd inject - \ | kubectl -n booksapp apply -f - @@ -44,21 +44,21 @@ run in the `booksapp` namespace. Confirm that the Linkerd data plane was injected successfully: ```bash -$ linkerd check -n booksapp --proxy -o short +linkerd check -n booksapp --proxy -o short ``` You can take a quick look at all the components that were added to your cluster by running: ```bash -$ kubectl -n booksapp get all +kubectl -n booksapp get all ``` Once the rollout has completed successfully, you can access the app itself by port-forwarding `webapp` locally: ```bash -$ kubectl -n booksapp port-forward svc/webapp 7000 & +kubectl -n booksapp port-forward svc/webapp 7000 & ``` Open [http://localhost:7000/](http://localhost:7000/) in your browser to see the @@ -87,7 +87,7 @@ First, let's run the `linkerd viz authz` command to list the authorization resources that currently exist for the `authors` deployment: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 default default:all-unauthenticated default/all-unauthenticated 0.0rps 70.31% 8.1rps 1ms 43ms 49ms probe default:all-unauthenticated default/probe 0.0rps 100.00% 0.3rps 1ms 1ms 1ms @@ -124,7 +124,7 @@ Now that we've defined a [`Server`] for the authors `Deployment`, we can run the currently unauthorized: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 default authors-server 9.5rps 0.00% 0.0rps 0ms 0ms 0ms probe authors-server default/probe 0.0rps 100.00% 0.1rps 1ms 1ms 1ms @@ -291,7 +291,7 @@ network (0.0.0.0). Running `linkerd viz authz` again, we can now see that our new policies exist: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 authors-get-route authors-server authorizationpolicy/authors-get-policy 0.0rps 100.00% 0.1rps 2ms 2ms 2ms authors-probe-route authors-server authorizationpolicy/authors-probe-policy 0.0rps 100.00% 0.1rps 1ms 1ms 1ms @@ -362,7 +362,7 @@ requests, but we haven't _authorized_ requests to that route. 
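It can also help to list the policy resources created so far in one place; for example (an illustrative sketch, assuming everything was created in the `booksapp` namespace as above, and using fully-qualified names to avoid clashing with any Gateway API CRDs):

```bash
# list the Server, HTTPRoute, and AuthorizationPolicy resources in the namespace
kubectl -n booksapp get servers.policy.linkerd.io,httproutes.policy.linkerd.io,authorizationpolicies.policy.linkerd.io
```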
Running the requests to `authors-modify-route`: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 authors-get-route authors-server authorizationpolicy/authors-get-policy - - - - - - authors-modify-route authors-server 9.7rps 0.00% 0.0rps 0ms 0ms 0ms @@ -421,7 +421,7 @@ Running the `linkerd viz authz` command one last time, we now see that all traffic is authorized: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 authors-get-route authors-server authorizationpolicy/authors-get-policy 0.0rps 100.00% 0.1rps 0ms 0ms 0ms authors-modify-route authors-server authorizationpolicy/authors-modify-policy 0.0rps 100.00% 0.0rps 0ms 0ms 0ms diff --git a/linkerd.io/content/2.12/tasks/getting-per-route-metrics.md b/linkerd.io/content/2.12/tasks/getting-per-route-metrics.md index ddd2a4dc3c..9f66470e28 100644 --- a/linkerd.io/content/2.12/tasks/getting-per-route-metrics.md +++ b/linkerd.io/content/2.12/tasks/getting-per-route-metrics.md @@ -24,7 +24,7 @@ per-route authorization. You can view per-route metrics in the CLI by running `linkerd viz routes`: ```bash -$ linkerd viz routes svc/webapp +linkerd viz routes svc/webapp ROUTE SERVICE SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 GET / webapp 100.00% 0.6rps 25ms 30ms 30ms GET /authors/{id} webapp 100.00% 0.6rps 22ms 29ms 30ms @@ -44,7 +44,7 @@ specified in your service profile will end up there. It is also possible to look the metrics up by other resource types, such as: ```bash -$ linkerd viz routes deploy/webapp +linkerd viz routes deploy/webapp ROUTE SERVICE SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 [DEFAULT] kubernetes 0.00% 0.0rps 0ms 0ms 0ms GET / webapp 100.00% 0.5rps 27ms 38ms 40ms @@ -63,7 +63,7 @@ Then, it is possible to filter all the way down to requests going from a specific resource to other services: ```bash -$ linkerd viz routes deploy/webapp --to svc/books +linkerd viz routes deploy/webapp --to svc/books ROUTE SERVICE SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 DELETE /books/{id}.json books 100.00% 0.5rps 18ms 29ms 30ms GET /books.json books 100.00% 1.1rps 7ms 12ms 18ms diff --git a/linkerd.io/content/2.12/tasks/multicluster-using-statefulsets.md b/linkerd.io/content/2.12/tasks/multicluster-using-statefulsets.md index 9d8730b5b0..c720c09563 100644 --- a/linkerd.io/content/2.12/tasks/multicluster-using-statefulsets.md +++ b/linkerd.io/content/2.12/tasks/multicluster-using-statefulsets.md @@ -48,8 +48,8 @@ The first step is to clone the demo repository on your local machine. ```sh # clone example repository -$ git clone git@github.com:mateiidavid/l2d-k3d-statefulset.git -$ cd l2d-k3d-statefulset +git clone git@github.com:mateiidavid/l2d-k3d-statefulset.git +cd l2d-k3d-statefulset ``` The second step consists of creating two `k3d` clusters named `east` and `west`, @@ -60,10 +60,10 @@ everything. ```sh # create k3d clusters -$ ./create.sh +./create.sh # list the clusters -$ k3d cluster list +k3d cluster list NAME SERVERS AGENTS LOADBALANCER east 1/1 0/0 true west 1/1 0/0 true @@ -78,10 +78,10 @@ provided scripts, but feel free to have a look! ```sh # Install Linkerd and multicluster, output to check should be a success -$ ./install.sh +./install.sh # Next, link the two clusters together -$ ./link.sh +./link.sh ``` Perfect! 
If you've made it this far with no errors, then it's a good sign. In @@ -101,17 +101,17 @@ communication. First, we will deploy our pods and services: ```sh # deploy services and mesh namespaces -$ ./deploy.sh +./deploy.sh # verify both clusters # # verify east -$ kubectl --context=k3d-east get pods +kubectl --context=k3d-east get pods NAME READY STATUS RESTARTS AGE curl-56dc7d945d-96r6p 2/2 Running 0 7s # verify west has headless service -$ kubectl --context=k3d-west get services +kubectl --context=k3d-west get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.43.0.1 443/TCP 10m nginx-svc ClusterIP None 80/TCP 8s @@ -119,7 +119,7 @@ nginx-svc ClusterIP None 80/TCP 8s # verify west has statefulset # # this may take a while to come up -$ kubectl --context=k3d-west get pods +kubectl --context=k3d-west get pods NAME READY STATUS RESTARTS AGE nginx-set-0 2/2 Running 0 53s nginx-set-1 2/2 Running 0 43s @@ -130,7 +130,7 @@ Before we go further, let's have a look at the endpoints object for the `nginx-svc`: ```sh -$ kubectl --context=k3d-west get endpoints nginx-svc -o yaml +kubectl --context=k3d-west get endpoints nginx-svc -o yaml ... subsets: - addresses: @@ -170,23 +170,23 @@ would get an answer back. We can test this out by applying the curl pod to the `west` cluster: ```sh -$ kubectl --context=k3d-west apply -f east/curl.yml -$ kubectl --context=k3d-west get pods +kubectl --context=k3d-west apply -f east/curl.yml +kubectl --context=k3d-west get pods NAME READY STATUS RESTARTS AGE nginx-set-0 2/2 Running 0 5m8s nginx-set-1 2/2 Running 0 4m58s nginx-set-2 2/2 Running 0 4m51s curl-56dc7d945d-s4n8j 0/2 PodInitializing 0 4s -$ kubectl --context=k3d-west exec -it curl-56dc7d945d-s4n8j -c curl -- bin/sh -/$ # prompt for curl pod +kubectl --context=k3d-west exec -it curl-56dc7d945d-s4n8j -c curl -- bin/sh +/# prompt for curl pod ``` If we now curl one of these instances, we will get back a response. ```sh # exec'd on the pod -/ $ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local +/ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local " @@ -218,10 +218,10 @@ Now, let's do the same, but this time from the `east` cluster. We will first export the service. ```sh -$ kubectl --context=k3d-west label service nginx-svc mirror.linkerd.io/exported="true" +kubectl --context=k3d-west label service nginx-svc mirror.linkerd.io/exported="true" service/nginx-svc labeled -$ kubectl --context=k3d-east get services +kubectl --context=k3d-east get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.43.0.1 443/TCP 20h nginx-svc-west ClusterIP None 80/TCP 29s @@ -235,7 +235,7 @@ endpoints for `nginx-svc-west` will have the same hostnames, but each hostname will point to one of the services we see above: ```sh -$ kubectl --context=k3d-east get endpoints nginx-svc-west -o yaml +kubectl --context=k3d-east get endpoints nginx-svc-west -o yaml subsets: - addresses: - hostname: nginx-set-0 @@ -251,17 +251,17 @@ cluster (`west`), will be mirrored as a clusterIP service. We will see in a second why this matters. ```sh -$ kubectl --context=k3d-east get pods +kubectl --context=k3d-east get pods NAME READY STATUS RESTARTS AGE curl-56dc7d945d-96r6p 2/2 Running 0 23m # exec and curl -$ kubectl --context=k3d-east exec pod curl-56dc7d945d-96r6p -it -c curl -- bin/sh +kubectl --context=k3d-east exec pod curl-56dc7d945d-96r6p -it -c curl -- bin/sh # we want to curl the same hostname we see in the endpoints object above. 
# however, the service and cluster domain will now be different, since we # are in a different cluster. # -/ $ curl nginx-set-0.nginx-svc-west.default.svc.east.cluster.local +/ curl nginx-set-0.nginx-svc-west.default.svc.east.cluster.local @@ -329,8 +329,8 @@ validation. To clean-up, you can remove both clusters entirely using the k3d CLI: ```sh -$ k3d cluster delete east +k3d cluster delete east cluster east deleted -$ k3d cluster delete west +k3d cluster delete west cluster west deleted ``` diff --git a/linkerd.io/content/2.12/tasks/restricting-access.md b/linkerd.io/content/2.12/tasks/restricting-access.md index 0b0b0c94b7..38ebdaeb3d 100644 --- a/linkerd.io/content/2.12/tasks/restricting-access.md +++ b/linkerd.io/content/2.12/tasks/restricting-access.md @@ -21,9 +21,9 @@ haven't already done this. Inject and install the Emojivoto application: ```bash -$ linkerd inject https://run.linkerd.io/emojivoto.yml | kubectl apply -f - +linkerd inject https://run.linkerd.io/emojivoto.yml | kubectl apply -f - ... -$ linkerd check -n emojivoto --proxy -o short +linkerd check -n emojivoto --proxy -o short ... ``` diff --git a/linkerd.io/content/2.12/tasks/securing-linkerd-tap.md b/linkerd.io/content/2.12/tasks/securing-linkerd-tap.md index d3023ec39f..f66601f5dd 100644 --- a/linkerd.io/content/2.12/tasks/securing-linkerd-tap.md +++ b/linkerd.io/content/2.12/tasks/securing-linkerd-tap.md @@ -57,7 +57,7 @@ kubectl auth can-i watch deployments.tap.linkerd.io -n emojivoto --as $(whoami) You can also use the Linkerd CLI's `--as` flag to confirm: ```bash -$ linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) +linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) Cannot connect to Linkerd Viz: namespaces is forbidden: User "XXXX" cannot list resource "namespaces" in API group "" at the cluster scope Validate the install with: linkerd viz check ... @@ -74,7 +74,7 @@ To enable tap access to all resources in all namespaces, you may bind your user to the `linkerd-linkerd-tap-admin` ClusterRole, installed by default: ```bash -$ kubectl describe clusterroles/linkerd-linkerd-viz-tap-admin +kubectl describe clusterroles/linkerd-linkerd-viz-tap-admin Name: linkerd-linkerd-viz-tap-admin Labels: component=tap linkerd.io/extension=viz @@ -106,7 +106,7 @@ kubectl create clusterrolebinding \ You can verify you now have tap access with: ```bash -$ linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) +linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) req id=3:0 proxy=in src=10.244.0.1:37392 dst=10.244.0.13:9996 tls=not_provided_by_remote :method=GET :authority=10.244.0.13:9996 :path=/ping ... ``` @@ -140,14 +140,14 @@ Because GCloud provides this additional level of access, there are cases where not. To validate this, check whether your GCloud user has Tap access: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces yes ``` And then validate whether your RBAC user has Tap access: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as $(gcloud config get-value account) +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as $(gcloud config get-value account) no - no RBAC policy matched ``` @@ -184,14 +184,14 @@ privileges necessary to tap resources. 
To confirm: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web yes ``` This access is enabled via a `linkerd-linkerd-viz-web-admin` ClusterRoleBinding: ```bash -$ kubectl describe clusterrolebindings/linkerd-linkerd-viz-web-admin +kubectl describe clusterrolebindings/linkerd-linkerd-viz-web-admin Name: linkerd-linkerd-viz-web-admin Labels: component=web linkerd.io/extensions=viz @@ -224,6 +224,6 @@ kubectl delete clusterrolebindings/linkerd-linkerd-viz-web-admin To confirm: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web no ``` diff --git a/linkerd.io/content/2.12/tasks/troubleshooting.md b/linkerd.io/content/2.12/tasks/troubleshooting.md index 7ec6896a2d..b142ee66a9 100644 --- a/linkerd.io/content/2.12/tasks/troubleshooting.md +++ b/linkerd.io/content/2.12/tasks/troubleshooting.md @@ -230,7 +230,7 @@ Example failure: Ensure the Linkerd ClusterRoles exist: ```bash -$ kubectl get clusterroles | grep linkerd +kubectl get clusterroles | grep linkerd linkerd-linkerd-destination 9d linkerd-linkerd-identity 9d linkerd-linkerd-proxy-injector 9d @@ -240,7 +240,7 @@ linkerd-policy 9d Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create clusterroles +kubectl auth can-i create clusterroles yes ``` @@ -257,7 +257,7 @@ Example failure: Ensure the Linkerd ClusterRoleBindings exist: ```bash -$ kubectl get clusterrolebindings | grep linkerd +kubectl get clusterrolebindings | grep linkerd linkerd-linkerd-destination 9d linkerd-linkerd-identity 9d linkerd-linkerd-proxy-injector 9d @@ -267,7 +267,7 @@ linkerd-destination-policy 9d Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create clusterrolebindings +kubectl auth can-i create clusterrolebindings yes ``` @@ -284,7 +284,7 @@ Example failure: Ensure the Linkerd ServiceAccounts exist: ```bash -$ kubectl -n linkerd get serviceaccounts +kubectl -n linkerd get serviceaccounts NAME SECRETS AGE default 1 14m linkerd-destination 1 14m @@ -297,7 +297,7 @@ Also ensure you have permission to create ServiceAccounts in the Linkerd namespace: ```bash -$ kubectl -n linkerd auth can-i create serviceaccounts +kubectl -n linkerd auth can-i create serviceaccounts yes ``` @@ -314,7 +314,7 @@ Example failure: Ensure the Linkerd CRD exists: ```bash -$ kubectl get customresourcedefinitions +kubectl get customresourcedefinitions NAME CREATED AT serviceprofiles.linkerd.io 2019-04-25T21:47:31Z ``` @@ -322,7 +322,7 @@ serviceprofiles.linkerd.io 2019-04-25T21:47:31Z Also ensure you have permission to create CRDs: ```bash -$ kubectl auth can-i create customresourcedefinitions +kubectl auth can-i create customresourcedefinitions yes ``` @@ -339,14 +339,14 @@ Example failure: Ensure the Linkerd MutatingWebhookConfigurations exists: ```bash -$ kubectl get mutatingwebhookconfigurations | grep linkerd +kubectl get mutatingwebhookconfigurations | grep linkerd linkerd-proxy-injector-webhook-config 2019-07-01T13:13:26Z ``` Also ensure you have permission to create MutatingWebhookConfigurations: ```bash -$ kubectl auth can-i create mutatingwebhookconfigurations +kubectl auth can-i create mutatingwebhookconfigurations yes ``` @@ -363,14 +363,14 @@ Example failure: Ensure 
the Linkerd ValidatingWebhookConfiguration exists: ```bash -$ kubectl get validatingwebhookconfigurations | grep linkerd +kubectl get validatingwebhookconfigurations | grep linkerd linkerd-sp-validator-webhook-config 2019-07-01T13:13:26Z ``` Also ensure you have permission to create ValidatingWebhookConfigurations: ```bash -$ kubectl auth can-i create validatingwebhookconfigurations +kubectl auth can-i create validatingwebhookconfigurations yes ``` @@ -418,7 +418,7 @@ Example failure: Ensure the Linkerd ConfigMap exists: ```bash -$ kubectl -n linkerd get configmap/linkerd-config +kubectl -n linkerd get configmap/linkerd-config NAME DATA AGE linkerd-config 3 61m ``` @@ -426,7 +426,7 @@ linkerd-config 3 61m Also ensure you have permission to create ConfigMaps: ```bash -$ kubectl -n linkerd auth can-i create configmap +kubectl -n linkerd auth can-i create configmap yes ``` @@ -780,7 +780,7 @@ Example failure: Verify the state of the control plane pods with: ```bash -$ kubectl -n linkerd get po +kubectl -n linkerd get po NAME READY STATUS RESTARTS AGE linkerd-destination-5fd7b5d466-szgqm 2/2 Running 1 12m linkerd-identity-54df78c479-hbh5m 2/2 Running 0 12m @@ -862,7 +862,7 @@ Ensure you can connect to the Linkerd version check endpoint from the environment the `linkerd` cli is running: ```bash -$ curl "https://versioncheck.linkerd.io/version.json?version=edge-19.1.2&uuid=test-uuid&source=cli" +curl "https://versioncheck.linkerd.io/version.json?version=edge-19.1.2&uuid=test-uuid&source=cli" {"stable":"stable-2.1.0","edge":"edge-19.1.2"} ``` @@ -921,7 +921,7 @@ normally. Example failure: ```bash -$ linkerd check --proxy --namespace foo +linkerd check --proxy --namespace foo ... × data plane namespace exists The "foo" namespace does not exist @@ -1045,7 +1045,7 @@ Ensure the kube-system namespace has the `config.linkerd.io/admission-webhooks:disabled` label: ```bash -$ kubectl get namespace kube-system -oyaml +kubectl get namespace kube-system -oyaml kind: Namespace apiVersion: v1 metadata: @@ -1118,7 +1118,7 @@ Example error: Ensure that the linkerd-cni-config ConfigMap exists in the CNI namespace: ```bash -$ kubectl get cm linkerd-cni-config -n linkerd-cni +kubectl get cm linkerd-cni-config -n linkerd-cni NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES linkerd-linkerd-cni-cni false RunAsAny RunAsAny RunAsAny RunAsAny false hostPath,secret ``` @@ -1126,7 +1126,7 @@ linkerd-linkerd-cni-cni false RunAsAny RunAsAny RunAsAny RunAs Also ensure you have permission to create ConfigMaps: ```bash -$ kubectl auth can-i create ConfigMaps +kubectl auth can-i create ConfigMaps yes ``` @@ -1143,7 +1143,7 @@ Example error: Ensure that the cluster role exists: ```bash -$ kubectl get clusterrole linkerd-cni +kubectl get clusterrole linkerd-cni NAME AGE linkerd-cni 54m ``` @@ -1151,7 +1151,7 @@ linkerd-cni 54m Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create ClusterRoles +kubectl auth can-i create ClusterRoles yes ``` @@ -1168,7 +1168,7 @@ Example error: Ensure that the cluster role binding exists: ```bash -$ kubectl get clusterrolebinding linkerd-cni +kubectl get clusterrolebinding linkerd-cni NAME AGE linkerd-cni 54m ``` @@ -1176,7 +1176,7 @@ linkerd-cni 54m Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create ClusterRoleBindings +kubectl auth can-i create ClusterRoleBindings yes ``` @@ -1193,7 +1193,7 @@ Example error: Ensure that the CNI service account exists in the CNI namespace: ```bash -$ 
kubectl get ServiceAccount linkerd-cni -n linkerd-cni +kubectl get ServiceAccount linkerd-cni -n linkerd-cni NAME SECRETS AGE linkerd-cni 1 45m ``` @@ -1201,7 +1201,7 @@ linkerd-cni 1 45m Also ensure you have permission to create ServiceAccount: ```bash -$ kubectl auth can-i create ServiceAccounts -n linkerd-cni +kubectl auth can-i create ServiceAccounts -n linkerd-cni yes ``` @@ -1218,7 +1218,7 @@ Example error: Ensure that the CNI daemonset exists in the CNI namespace: ```bash -$ kubectl get ds -n linkerd-cni +kubectl get ds -n linkerd-cni NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE linkerd-cni 1 1 1 1 1 beta.kubernetes.io/os=linux 14m ``` @@ -1226,7 +1226,7 @@ linkerd-cni 1 1 1 1 1 beta.kubernet Also ensure you have permission to create DaemonSets: ```bash -$ kubectl auth can-i create DaemonSets -n linkerd-cni +kubectl auth can-i create DaemonSets -n linkerd-cni yes ``` @@ -1243,7 +1243,7 @@ Example failure: Ensure that all the CNI pods are running: ```bash -$ kubectl get po -n linkerd-cn +kubectl get po -n linkerd-cni NAME READY STATUS RESTARTS AGE linkerd-cni-rzp2q 1/1 Running 0 9m20s linkerd-cni-mf564 1/1 Running 0 9m22s @@ -1253,7 +1253,7 @@ linkerd-cni-p5670 1/1 Running 0 9m25s Ensure that all pods have finished the deployment of the CNI config and binary: ```bash -$ kubectl logs linkerd-cni-rzp2q -n linkerd-cni +kubectl logs linkerd-cni-rzp2q -n linkerd-cni Wrote linkerd CNI binaries to /host/opt/cni/bin Created CNI config /host/etc/cni/net.d/10-kindnet.conflist Done configuring CNI. Sleep=true @@ -1281,7 +1281,7 @@ Make sure multicluster extension is correctly installed and that the `links.multicluster.linkerd.io` CRD is present. ```bash -$ kubectl get crds | grep multicluster +kubectl get crds | grep multicluster NAME CREATED AT links.multicluster.linkerd.io 2021-03-10T09:58:10Z ``` @@ -1360,7 +1360,7 @@ the rules section. Expected rules for `linkerd-service-mirror-access-local-resources` cluster role: ```bash -$ kubectl --context=local get clusterrole linkerd-service-mirror-access-local-resources -o yaml +kubectl --context=local get clusterrole linkerd-service-mirror-access-local-resources -o yaml kind: ClusterRole metadata: labels: @@ -1393,7 +1393,7 @@ rules: Expected rules for `linkerd-service-mirror-read-remote-creds` role: ```bash -$ kubectl --context=local get role linkerd-service-mirror-read-remote-creds -n linkerd-multicluster -o yaml +kubectl --context=local get role linkerd-service-mirror-read-remote-creds -n linkerd-multicluster -o yaml kind: Role metadata: labels: @@ -1426,7 +1426,7 @@ everything to start up. 
If this is a permanent error, you'll want to validate the state of the controller pod with: ```bash -$ kubectl --all-namespaces get po --selector linkerd.io/control-plane-component=linkerd-service-mirror +kubectl --all-namespaces get po --selector linkerd.io/control-plane-component=linkerd-service-mirror NAME READY STATUS RESTARTS AGE linkerd-service-mirror-7bb8ff5967-zg265 2/2 Running 0 50m ``` @@ -1544,7 +1544,7 @@ Example failure: Ensure the linkerd-viz extension ClusterRoles exist: ```bash -$ kubectl get clusterroles | grep linkerd-viz +kubectl get clusterroles | grep linkerd-viz linkerd-linkerd-viz-metrics-api 2021-01-26T18:02:17Z linkerd-linkerd-viz-prometheus 2021-01-26T18:02:17Z linkerd-linkerd-viz-tap 2021-01-26T18:02:17Z @@ -1555,7 +1555,7 @@ linkerd-linkerd-viz-web-check 2021-01-2 Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create clusterroles +kubectl auth can-i create clusterroles yes ``` @@ -1572,7 +1572,7 @@ Example failure: Ensure the linkerd-viz extension ClusterRoleBindings exist: ```bash -$ kubectl get clusterrolebindings | grep linkerd-viz +kubectl get clusterrolebindings | grep linkerd-viz linkerd-linkerd-viz-metrics-api ClusterRole/linkerd-linkerd-viz-metrics-api 18h linkerd-linkerd-viz-prometheus ClusterRole/linkerd-linkerd-viz-prometheus 18h linkerd-linkerd-viz-tap ClusterRole/linkerd-linkerd-viz-tap 18h @@ -1584,7 +1584,7 @@ linkerd-linkerd-viz-web-check ClusterRole/linkerd-linke Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create clusterrolebindings +kubectl auth can-i create clusterrolebindings yes ``` @@ -1673,7 +1673,7 @@ requirements in the cluster: Ensure all the linkerd-viz pods are injected ```bash -$ kubectl -n linkerd-viz get pods +kubectl -n linkerd-viz get pods NAME READY STATUS RESTARTS AGE grafana-68cddd7cc8-nrv4h 2/2 Running 3 18h metrics-api-77f684f7c7-hnw8r 2/2 Running 2 18h @@ -1697,7 +1697,7 @@ Make sure that the `proxy-injector` is working correctly by running Ensure all the linkerd-viz pods are running with 2/2 ```bash -$ kubectl -n linkerd-viz get pods +kubectl -n linkerd-viz get pods NAME READY STATUS RESTARTS AGE grafana-68cddd7cc8-nrv4h 2/2 Running 3 18h metrics-api-77f684f7c7-hnw8r 2/2 Running 2 18h @@ -1880,7 +1880,7 @@ versions in sync by updating either the CLI or linkerd-jaeger as necessary. 
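Re-running the extension's checks after upgrading is a quick way to confirm the versions now agree; a minimal sketch:

```bash
# compare the CLI version with what the jaeger extension reports
linkerd version --client
linkerd jaeger check
```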
Ensure all the jaeger pods are injected ```bash -$ kubectl -n linkerd-jaeger get pods +kubectl -n linkerd-jaeger get pods NAME READY STATUS RESTARTS AGE collector-69cc44dfbc-rhpfg 2/2 Running 0 11s jaeger-6f98d5c979-scqlq 2/2 Running 0 11s @@ -1901,7 +1901,7 @@ Make sure that the `proxy-injector` is working correctly by running Ensure all the linkerd-jaeger pods are running with 2/2 ```bash -$ kubectl -n linkerd-jaeger get pods +kubectl -n linkerd-jaeger get pods NAME READY STATUS RESTARTS AGE jaeger-injector-548684d74b-bcq5h 2/2 Running 0 5s collector-69cc44dfbc-wqf6s 2/2 Running 0 5s @@ -1950,7 +1950,7 @@ Ensure you can connect to the Linkerd Buoyant version check endpoint from the environment the `linkerd` cli is running: ```bash -$ curl https://buoyant.cloud/version.json +curl https://buoyant.cloud/version.json {"linkerd-buoyant":"v0.4.4"} ``` @@ -2015,7 +2015,7 @@ linkerd-buoyant install | kubectl apply -f - Ensure that the cluster role exists: ```bash -$ kubectl get clusterrole buoyant-cloud-agent +kubectl get clusterrole buoyant-cloud-agent NAME CREATED AT buoyant-cloud-agent 2020-11-13T00:59:50Z ``` @@ -2023,7 +2023,7 @@ buoyant-cloud-agent 2020-11-13T00:59:50Z Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create ClusterRoles +kubectl auth can-i create ClusterRoles yes ``` @@ -2038,7 +2038,7 @@ yes Ensure that the cluster role binding exists: ```bash -$ kubectl get clusterrolebinding buoyant-cloud-agent +kubectl get clusterrolebinding buoyant-cloud-agent NAME ROLE AGE buoyant-cloud-agent ClusterRole/buoyant-cloud-agent 301d ``` @@ -2046,7 +2046,7 @@ buoyant-cloud-agent ClusterRole/buoyant-cloud-agent 301d Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create ClusterRoleBindings +kubectl auth can-i create ClusterRoleBindings yes ``` @@ -2061,7 +2061,7 @@ yes Ensure that the service account exists: ```bash -$ kubectl -n buoyant-cloud get serviceaccount buoyant-cloud-agent +kubectl -n buoyant-cloud get serviceaccount buoyant-cloud-agent NAME SECRETS AGE buoyant-cloud-agent 1 301d ``` @@ -2069,7 +2069,7 @@ buoyant-cloud-agent 1 301d Also ensure you have permission to create ServiceAccounts: ```bash -$ kubectl -n buoyant-cloud auth can-i create ServiceAccount +kubectl -n buoyant-cloud auth can-i create ServiceAccount yes ``` @@ -2084,7 +2084,7 @@ yes Ensure that the secret exists: ```bash -$ kubectl -n buoyant-cloud get secret buoyant-cloud-id +kubectl -n buoyant-cloud get secret buoyant-cloud-id NAME TYPE DATA AGE buoyant-cloud-id Opaque 4 301d ``` @@ -2092,7 +2092,7 @@ buoyant-cloud-id Opaque 4 301d Also ensure you have permission to create ServiceAccounts: ```bash -$ kubectl -n buoyant-cloud auth can-i create ServiceAccount +kubectl -n buoyant-cloud auth can-i create ServiceAccount yes ``` @@ -2130,7 +2130,7 @@ everything to start up. 
If this is a permanent error, you'll want to validate the state of the `buoyant-cloud-agent` Deployment with: ```bash -$ kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-agent +kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-agent NAME READY STATUS RESTARTS AGE buoyant-cloud-agent-6b8c6888d7-htr7d 2/2 Running 0 156m ``` @@ -2153,7 +2153,7 @@ Ensure the `buoyant-cloud-agent` pod is injected, the `READY` column should show `2/2`: ```bash -$ kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-agent +kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-agent NAME READY STATUS RESTARTS AGE buoyant-cloud-agent-6b8c6888d7-htr7d 2/2 Running 0 161m ``` @@ -2172,7 +2172,7 @@ Make sure that the `proxy-injector` is working correctly by running Check the version with: ```bash -$ linkerd-buoyant version +linkerd-buoyant version CLI version: v0.4.4 Agent version: v0.4.4 ``` @@ -2231,7 +2231,7 @@ everything to start up. If this is a permanent error, you'll want to validate the state of the `buoyant-cloud-metrics` DaemonSet with: ```bash -$ kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-metrics +kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-metrics NAME READY STATUS RESTARTS AGE buoyant-cloud-metrics-kt9mv 2/2 Running 0 163m buoyant-cloud-metrics-q8jhj 2/2 Running 0 163m @@ -2257,7 +2257,7 @@ Ensure the `buoyant-cloud-metrics` pods are injected, the `READY` column should show `2/2`: ```bash -$ kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-metrics +kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-metrics NAME READY STATUS RESTARTS AGE buoyant-cloud-metrics-kt9mv 2/2 Running 0 166m buoyant-cloud-metrics-q8jhj 2/2 Running 0 166m @@ -2279,7 +2279,7 @@ Make sure that the `proxy-injector` is working correctly by running Check the version with: ```bash -$ kubectl -n buoyant-cloud get daemonset/buoyant-cloud-metrics -o jsonpath='{.metadata.labels}' +kubectl -n buoyant-cloud get daemonset/buoyant-cloud-metrics -o jsonpath='{.metadata.labels}' {"app.kubernetes.io/name":"metrics","app.kubernetes.io/part-of":"buoyant-cloud","app.kubernetes.io/version":"v0.4.4"} ``` diff --git a/linkerd.io/content/2.12/tasks/upgrade.md b/linkerd.io/content/2.12/tasks/upgrade.md index 7e608fd341..8f78d275c3 100644 --- a/linkerd.io/content/2.12/tasks/upgrade.md +++ b/linkerd.io/content/2.12/tasks/upgrade.md @@ -290,7 +290,7 @@ Find the release name you used for the `linkerd2` chart, and the namespace where this release stored its config: ```bash -$ helm ls -A +helm ls -A NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION linkerd default 1 2021-11-22 17:14:50.751436374 -0500 -05 deployed linkerd2-2.11.1 stable-2.11.1 ``` @@ -323,18 +323,18 @@ the `linkerd-crds`, `linkerd-control-plane` and `linkerd-smi` charts: ```bash # First migrate the CRDs -$ helm -n default get manifest linkerd | \ +helm -n default get manifest linkerd | \ yq 'select(.kind == "CustomResourceDefinition") | .metadata.name' | \ grep -v '\-\-\-' | \ xargs -n1 sh -c \ 'kubectl annotate --overwrite crd/$0 meta.helm.sh/release-name=linkerd-crds meta.helm.sh/release-namespace=linkerd' # Special case for TrafficSplit (only use if you have TrafficSplit CRs) -$ kubectl annotate --overwrite crd/trafficsplits.split.smi-spec.io \ +kubectl annotate --overwrite crd/trafficsplits.split.smi-spec.io \ meta.helm.sh/release-name=linkerd-smi meta.helm.sh/release-namespace=linkerd-smi # Now migrate all the other resources -$ helm -n default get manifest linkerd | \ +helm -n 
default get manifest linkerd | \ yq 'select(.kind != "CustomResourceDefinition")' | \ yq '.kind, .metadata.name, .metadata.namespace' | \ grep -v '\-\-\-' | @@ -348,14 +348,14 @@ above. ```bash # First make sure you update the helm repo -$ helm repo up +helm repo up # Install the linkerd-crds chart -$ helm install linkerd-crds -n linkerd --create-namespace linkerd/linkerd-crds +helm install linkerd-crds -n linkerd --create-namespace linkerd/linkerd-crds # Install the linkerd-control-plane chart # (remember to add any customizations you retrieved above) -$ helm install linkerd-control-plane \ +helm install linkerd-control-plane \ -n linkerd \ --set-file identityTrustAnchorsPEM=ca.crt \ --set-file identity.issuer.tls.crtPEM=issuer.crt \ @@ -363,8 +363,8 @@ $ helm install linkerd-control-plane \ linkerd/linkerd-control-plane # Optional: if using TrafficSplit CRs -$ helm repo add l5d-smi https://linkerd.github.io/linkerd-smi -$ helm install linkerd-smi -n linkerd-smi --create-namespace l5d-smi/linkerd-smi +helm repo add l5d-smi https://linkerd.github.io/linkerd-smi +helm install linkerd-smi -n linkerd-smi --create-namespace l5d-smi/linkerd-smi ``` ##### Cleaning up the old linkerd2 Helm release @@ -375,7 +375,7 @@ remove the Helm release config for the old `linkerd2` chart (assuming you used the "Secret" storage backend, which is the default): ```bash -$ kubectl -n default delete secret \ +kubectl -n default delete secret \ --field-selector type=helm.sh/release.v1 \ -l name=linkerd,owner=helm ``` diff --git a/linkerd.io/content/2.13/reference/cli/check.md b/linkerd.io/content/2.13/reference/cli/check.md index 7cd61cd237..67a2486908 100644 --- a/linkerd.io/content/2.13/reference/cli/check.md +++ b/linkerd.io/content/2.13/reference/cli/check.md @@ -12,7 +12,7 @@ for a full list of all the possible checks, what they do and how to fix them. ## Example output ```bash -$ linkerd check +linkerd check kubernetes-api -------------- √ can initialize the client diff --git a/linkerd.io/content/2.13/reference/iptables.md b/linkerd.io/content/2.13/reference/iptables.md index 67a7ea89de..9b4d229a59 100644 --- a/linkerd.io/content/2.13/reference/iptables.md +++ b/linkerd.io/content/2.13/reference/iptables.md @@ -164,7 +164,7 @@ Alternatively, if you want to inspect the iptables rules created for a pod, you can retrieve them through the following command: ```bash -$ kubectl -n logs linkerd-init +kubectl -n logs linkerd-init # where is the name of the pod # you want to see the iptables rules for ``` diff --git a/linkerd.io/content/2.13/tasks/configuring-dynamic-request-routing.md b/linkerd.io/content/2.13/tasks/configuring-dynamic-request-routing.md index 8137e79797..af753bbcf7 100644 --- a/linkerd.io/content/2.13/tasks/configuring-dynamic-request-routing.md +++ b/linkerd.io/content/2.13/tasks/configuring-dynamic-request-routing.md @@ -67,7 +67,7 @@ Requests to `/echo` on port 9898 to the frontend pod will get forwarded the pod pointed by the Service `backend-a-podinfo`: ```bash -$ curl -sX POST localhost:9898/echo \ +curl -sX POST localhost:9898/echo \ | grep -o 'PODINFO_UI_MESSAGE=. backend' PODINFO_UI_MESSAGE=A backend @@ -142,7 +142,7 @@ to the `backend-a-podinfo` Service. The previous requests should still reach `backend-a-podinfo` only: ```bash -$ curl -sX POST localhost:9898/echo \ +curl -sX POST localhost:9898/echo \ | grep -o 'PODINFO_UI_MESSAGE=. 
backend' PODINFO_UI_MESSAGE=A backend @@ -152,7 +152,7 @@ But if we add the "`x-request-id: alternative`" header they get routed to `backend-b-podinfo`: ```bash -$ curl -sX POST \ +curl -sX POST \ -H 'x-request-id: alternative' \ localhost:9898/echo \ | grep -o 'PODINFO_UI_MESSAGE=. backend' diff --git a/linkerd.io/content/2.13/tasks/configuring-per-route-policy.md b/linkerd.io/content/2.13/tasks/configuring-per-route-policy.md index 018f3a706a..30e18a67c1 100644 --- a/linkerd.io/content/2.13/tasks/configuring-per-route-policy.md +++ b/linkerd.io/content/2.13/tasks/configuring-per-route-policy.md @@ -30,7 +30,7 @@ haven't already done this. Inject and install the Books demo application: ```bash -$ kubectl create ns booksapp && \ +kubectl create ns booksapp && \ curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/booksapp.yml \ | linkerd inject - \ | kubectl -n booksapp apply -f - @@ -44,21 +44,21 @@ run in the `booksapp` namespace. Confirm that the Linkerd data plane was injected successfully: ```bash -$ linkerd check -n booksapp --proxy -o short +linkerd check -n booksapp --proxy -o short ``` You can take a quick look at all the components that were added to your cluster by running: ```bash -$ kubectl -n booksapp get all +kubectl -n booksapp get all ``` Once the rollout has completed successfully, you can access the app itself by port-forwarding `webapp` locally: ```bash -$ kubectl -n booksapp port-forward svc/webapp 7000 & +kubectl -n booksapp port-forward svc/webapp 7000 & ``` Open [http://localhost:7000/](http://localhost:7000/) in your browser to see the @@ -87,7 +87,7 @@ First, let's run the `linkerd viz authz` command to list the authorization resources that currently exist for the `authors` deployment: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 default default:all-unauthenticated default/all-unauthenticated 0.0rps 70.31% 8.1rps 1ms 43ms 49ms probe default:all-unauthenticated default/probe 0.0rps 100.00% 0.3rps 1ms 1ms 1ms @@ -124,7 +124,7 @@ Now that we've defined a [`Server`] for the authors `Deployment`, we can run the currently unauthorized: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 default authors-server 9.5rps 0.00% 0.0rps 0ms 0ms 0ms probe authors-server default/probe 0.0rps 100.00% 0.1rps 1ms 1ms 1ms @@ -291,7 +291,7 @@ network (0.0.0.0). Running `linkerd viz authz` again, we can now see that our new policies exist: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 authors-get-route authors-server authorizationpolicy/authors-get-policy 0.0rps 100.00% 0.1rps 2ms 2ms 2ms authors-probe-route authors-server authorizationpolicy/authors-probe-policy 0.0rps 100.00% 0.1rps 1ms 1ms 1ms @@ -362,7 +362,7 @@ requests, but we haven't _authorized_ requests to that route. 
Running the requests to `authors-modify-route`: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 authors-get-route authors-server authorizationpolicy/authors-get-policy - - - - - - authors-modify-route authors-server 9.7rps 0.00% 0.0rps 0ms 0ms 0ms @@ -421,7 +421,7 @@ Running the `linkerd viz authz` command one last time, we now see that all traffic is authorized: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 authors-get-route authors-server authorizationpolicy/authors-get-policy 0.0rps 100.00% 0.1rps 0ms 0ms 0ms authors-modify-route authors-server authorizationpolicy/authors-modify-policy 0.0rps 100.00% 0.0rps 0ms 0ms 0ms diff --git a/linkerd.io/content/2.13/tasks/getting-per-route-metrics.md b/linkerd.io/content/2.13/tasks/getting-per-route-metrics.md index ddd2a4dc3c..9f66470e28 100644 --- a/linkerd.io/content/2.13/tasks/getting-per-route-metrics.md +++ b/linkerd.io/content/2.13/tasks/getting-per-route-metrics.md @@ -24,7 +24,7 @@ per-route authorization. You can view per-route metrics in the CLI by running `linkerd viz routes`: ```bash -$ linkerd viz routes svc/webapp +linkerd viz routes svc/webapp ROUTE SERVICE SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 GET / webapp 100.00% 0.6rps 25ms 30ms 30ms GET /authors/{id} webapp 100.00% 0.6rps 22ms 29ms 30ms @@ -44,7 +44,7 @@ specified in your service profile will end up there. It is also possible to look the metrics up by other resource types, such as: ```bash -$ linkerd viz routes deploy/webapp +linkerd viz routes deploy/webapp ROUTE SERVICE SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 [DEFAULT] kubernetes 0.00% 0.0rps 0ms 0ms 0ms GET / webapp 100.00% 0.5rps 27ms 38ms 40ms @@ -63,7 +63,7 @@ Then, it is possible to filter all the way down to requests going from a specific resource to other services: ```bash -$ linkerd viz routes deploy/webapp --to svc/books +linkerd viz routes deploy/webapp --to svc/books ROUTE SERVICE SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 DELETE /books/{id}.json books 100.00% 0.5rps 18ms 29ms 30ms GET /books.json books 100.00% 1.1rps 7ms 12ms 18ms diff --git a/linkerd.io/content/2.13/tasks/multicluster-using-statefulsets.md b/linkerd.io/content/2.13/tasks/multicluster-using-statefulsets.md index 9d8730b5b0..c720c09563 100644 --- a/linkerd.io/content/2.13/tasks/multicluster-using-statefulsets.md +++ b/linkerd.io/content/2.13/tasks/multicluster-using-statefulsets.md @@ -48,8 +48,8 @@ The first step is to clone the demo repository on your local machine. ```sh # clone example repository -$ git clone git@github.com:mateiidavid/l2d-k3d-statefulset.git -$ cd l2d-k3d-statefulset +git clone git@github.com:mateiidavid/l2d-k3d-statefulset.git +cd l2d-k3d-statefulset ``` The second step consists of creating two `k3d` clusters named `east` and `west`, @@ -60,10 +60,10 @@ everything. ```sh # create k3d clusters -$ ./create.sh +./create.sh # list the clusters -$ k3d cluster list +k3d cluster list NAME SERVERS AGENTS LOADBALANCER east 1/1 0/0 true west 1/1 0/0 true @@ -78,10 +78,10 @@ provided scripts, but feel free to have a look! ```sh # Install Linkerd and multicluster, output to check should be a success -$ ./install.sh +./install.sh # Next, link the two clusters together -$ ./link.sh +./link.sh ``` Perfect! 
If you've made it this far with no errors, then it's a good sign. In @@ -101,17 +101,17 @@ communication. First, we will deploy our pods and services: ```sh # deploy services and mesh namespaces -$ ./deploy.sh +./deploy.sh # verify both clusters # # verify east -$ kubectl --context=k3d-east get pods +kubectl --context=k3d-east get pods NAME READY STATUS RESTARTS AGE curl-56dc7d945d-96r6p 2/2 Running 0 7s # verify west has headless service -$ kubectl --context=k3d-west get services +kubectl --context=k3d-west get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.43.0.1 443/TCP 10m nginx-svc ClusterIP None 80/TCP 8s @@ -119,7 +119,7 @@ nginx-svc ClusterIP None 80/TCP 8s # verify west has statefulset # # this may take a while to come up -$ kubectl --context=k3d-west get pods +kubectl --context=k3d-west get pods NAME READY STATUS RESTARTS AGE nginx-set-0 2/2 Running 0 53s nginx-set-1 2/2 Running 0 43s @@ -130,7 +130,7 @@ Before we go further, let's have a look at the endpoints object for the `nginx-svc`: ```sh -$ kubectl --context=k3d-west get endpoints nginx-svc -o yaml +kubectl --context=k3d-west get endpoints nginx-svc -o yaml ... subsets: - addresses: @@ -170,23 +170,23 @@ would get an answer back. We can test this out by applying the curl pod to the `west` cluster: ```sh -$ kubectl --context=k3d-west apply -f east/curl.yml -$ kubectl --context=k3d-west get pods +kubectl --context=k3d-west apply -f east/curl.yml +kubectl --context=k3d-west get pods NAME READY STATUS RESTARTS AGE nginx-set-0 2/2 Running 0 5m8s nginx-set-1 2/2 Running 0 4m58s nginx-set-2 2/2 Running 0 4m51s curl-56dc7d945d-s4n8j 0/2 PodInitializing 0 4s -$ kubectl --context=k3d-west exec -it curl-56dc7d945d-s4n8j -c curl -- bin/sh -/$ # prompt for curl pod +kubectl --context=k3d-west exec -it curl-56dc7d945d-s4n8j -c curl -- bin/sh +/# prompt for curl pod ``` If we now curl one of these instances, we will get back a response. ```sh # exec'd on the pod -/ $ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local +/ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local " @@ -218,10 +218,10 @@ Now, let's do the same, but this time from the `east` cluster. We will first export the service. ```sh -$ kubectl --context=k3d-west label service nginx-svc mirror.linkerd.io/exported="true" +kubectl --context=k3d-west label service nginx-svc mirror.linkerd.io/exported="true" service/nginx-svc labeled -$ kubectl --context=k3d-east get services +kubectl --context=k3d-east get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.43.0.1 443/TCP 20h nginx-svc-west ClusterIP None 80/TCP 29s @@ -235,7 +235,7 @@ endpoints for `nginx-svc-west` will have the same hostnames, but each hostname will point to one of the services we see above: ```sh -$ kubectl --context=k3d-east get endpoints nginx-svc-west -o yaml +kubectl --context=k3d-east get endpoints nginx-svc-west -o yaml subsets: - addresses: - hostname: nginx-set-0 @@ -251,17 +251,17 @@ cluster (`west`), will be mirrored as a clusterIP service. We will see in a second why this matters. ```sh -$ kubectl --context=k3d-east get pods +kubectl --context=k3d-east get pods NAME READY STATUS RESTARTS AGE curl-56dc7d945d-96r6p 2/2 Running 0 23m # exec and curl -$ kubectl --context=k3d-east exec pod curl-56dc7d945d-96r6p -it -c curl -- bin/sh +kubectl --context=k3d-east exec pod curl-56dc7d945d-96r6p -it -c curl -- bin/sh # we want to curl the same hostname we see in the endpoints object above. 
# however, the service and cluster domain will now be different, since we # are in a different cluster. # -/ $ curl nginx-set-0.nginx-svc-west.default.svc.east.cluster.local +/ curl nginx-set-0.nginx-svc-west.default.svc.east.cluster.local @@ -329,8 +329,8 @@ validation. To clean-up, you can remove both clusters entirely using the k3d CLI: ```sh -$ k3d cluster delete east +k3d cluster delete east cluster east deleted -$ k3d cluster delete west +k3d cluster delete west cluster west deleted ``` diff --git a/linkerd.io/content/2.13/tasks/restricting-access.md b/linkerd.io/content/2.13/tasks/restricting-access.md index 0b0b0c94b7..38ebdaeb3d 100644 --- a/linkerd.io/content/2.13/tasks/restricting-access.md +++ b/linkerd.io/content/2.13/tasks/restricting-access.md @@ -21,9 +21,9 @@ haven't already done this. Inject and install the Emojivoto application: ```bash -$ linkerd inject https://run.linkerd.io/emojivoto.yml | kubectl apply -f - +linkerd inject https://run.linkerd.io/emojivoto.yml | kubectl apply -f - ... -$ linkerd check -n emojivoto --proxy -o short +linkerd check -n emojivoto --proxy -o short ... ``` diff --git a/linkerd.io/content/2.13/tasks/securing-linkerd-tap.md b/linkerd.io/content/2.13/tasks/securing-linkerd-tap.md index 8a802c890c..639f81692f 100644 --- a/linkerd.io/content/2.13/tasks/securing-linkerd-tap.md +++ b/linkerd.io/content/2.13/tasks/securing-linkerd-tap.md @@ -60,7 +60,7 @@ kubectl auth can-i watch deployments.tap.linkerd.io -n emojivoto --as $(whoami) You can also use the Linkerd CLI's `--as` flag to confirm: ```bash -$ linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) +linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) Cannot connect to Linkerd Viz: namespaces is forbidden: User "XXXX" cannot list resource "namespaces" in API group "" at the cluster scope Validate the install with: linkerd viz check ... @@ -77,7 +77,7 @@ To enable tap access to all resources in all namespaces, you may bind your user to the `linkerd-linkerd-tap-admin` ClusterRole, installed by default: ```bash -$ kubectl describe clusterroles/linkerd-linkerd-viz-tap-admin +kubectl describe clusterroles/linkerd-linkerd-viz-tap-admin Name: linkerd-linkerd-viz-tap-admin Labels: component=tap linkerd.io/extension=viz @@ -109,7 +109,7 @@ kubectl create clusterrolebinding \ You can verify you now have tap access with: ```bash -$ linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) +linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) req id=3:0 proxy=in src=10.244.0.1:37392 dst=10.244.0.13:9996 tls=not_provided_by_remote :method=GET :authority=10.244.0.13:9996 :path=/ping ... ``` @@ -143,14 +143,14 @@ Because GCloud provides this additional level of access, there are cases where not. To validate this, check whether your GCloud user has Tap access: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces yes ``` And then validate whether your RBAC user has Tap access: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as $(gcloud config get-value account) +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as $(gcloud config get-value account) no - no RBAC policy matched ``` @@ -187,14 +187,14 @@ privileges necessary to tap resources. 
To confirm: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web yes ``` This access is enabled via a `linkerd-linkerd-viz-web-admin` ClusterRoleBinding: ```bash -$ kubectl describe clusterrolebindings/linkerd-linkerd-viz-web-admin +kubectl describe clusterrolebindings/linkerd-linkerd-viz-web-admin Name: linkerd-linkerd-viz-web-admin Labels: component=web linkerd.io/extensions=viz @@ -227,6 +227,6 @@ kubectl delete clusterrolebindings/linkerd-linkerd-viz-web-admin To confirm: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web no ``` diff --git a/linkerd.io/content/2.13/tasks/troubleshooting.md b/linkerd.io/content/2.13/tasks/troubleshooting.md index 7ec6896a2d..b142ee66a9 100644 --- a/linkerd.io/content/2.13/tasks/troubleshooting.md +++ b/linkerd.io/content/2.13/tasks/troubleshooting.md @@ -230,7 +230,7 @@ Example failure: Ensure the Linkerd ClusterRoles exist: ```bash -$ kubectl get clusterroles | grep linkerd +kubectl get clusterroles | grep linkerd linkerd-linkerd-destination 9d linkerd-linkerd-identity 9d linkerd-linkerd-proxy-injector 9d @@ -240,7 +240,7 @@ linkerd-policy 9d Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create clusterroles +kubectl auth can-i create clusterroles yes ``` @@ -257,7 +257,7 @@ Example failure: Ensure the Linkerd ClusterRoleBindings exist: ```bash -$ kubectl get clusterrolebindings | grep linkerd +kubectl get clusterrolebindings | grep linkerd linkerd-linkerd-destination 9d linkerd-linkerd-identity 9d linkerd-linkerd-proxy-injector 9d @@ -267,7 +267,7 @@ linkerd-destination-policy 9d Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create clusterrolebindings +kubectl auth can-i create clusterrolebindings yes ``` @@ -284,7 +284,7 @@ Example failure: Ensure the Linkerd ServiceAccounts exist: ```bash -$ kubectl -n linkerd get serviceaccounts +kubectl -n linkerd get serviceaccounts NAME SECRETS AGE default 1 14m linkerd-destination 1 14m @@ -297,7 +297,7 @@ Also ensure you have permission to create ServiceAccounts in the Linkerd namespace: ```bash -$ kubectl -n linkerd auth can-i create serviceaccounts +kubectl -n linkerd auth can-i create serviceaccounts yes ``` @@ -314,7 +314,7 @@ Example failure: Ensure the Linkerd CRD exists: ```bash -$ kubectl get customresourcedefinitions +kubectl get customresourcedefinitions NAME CREATED AT serviceprofiles.linkerd.io 2019-04-25T21:47:31Z ``` @@ -322,7 +322,7 @@ serviceprofiles.linkerd.io 2019-04-25T21:47:31Z Also ensure you have permission to create CRDs: ```bash -$ kubectl auth can-i create customresourcedefinitions +kubectl auth can-i create customresourcedefinitions yes ``` @@ -339,14 +339,14 @@ Example failure: Ensure the Linkerd MutatingWebhookConfigurations exists: ```bash -$ kubectl get mutatingwebhookconfigurations | grep linkerd +kubectl get mutatingwebhookconfigurations | grep linkerd linkerd-proxy-injector-webhook-config 2019-07-01T13:13:26Z ``` Also ensure you have permission to create MutatingWebhookConfigurations: ```bash -$ kubectl auth can-i create mutatingwebhookconfigurations +kubectl auth can-i create mutatingwebhookconfigurations yes ``` @@ -363,14 +363,14 @@ Example failure: Ensure 
the Linkerd ValidatingWebhookConfiguration exists: ```bash -$ kubectl get validatingwebhookconfigurations | grep linkerd +kubectl get validatingwebhookconfigurations | grep linkerd linkerd-sp-validator-webhook-config 2019-07-01T13:13:26Z ``` Also ensure you have permission to create ValidatingWebhookConfigurations: ```bash -$ kubectl auth can-i create validatingwebhookconfigurations +kubectl auth can-i create validatingwebhookconfigurations yes ``` @@ -418,7 +418,7 @@ Example failure: Ensure the Linkerd ConfigMap exists: ```bash -$ kubectl -n linkerd get configmap/linkerd-config +kubectl -n linkerd get configmap/linkerd-config NAME DATA AGE linkerd-config 3 61m ``` @@ -426,7 +426,7 @@ linkerd-config 3 61m Also ensure you have permission to create ConfigMaps: ```bash -$ kubectl -n linkerd auth can-i create configmap +kubectl -n linkerd auth can-i create configmap yes ``` @@ -780,7 +780,7 @@ Example failure: Verify the state of the control plane pods with: ```bash -$ kubectl -n linkerd get po +kubectl -n linkerd get po NAME READY STATUS RESTARTS AGE linkerd-destination-5fd7b5d466-szgqm 2/2 Running 1 12m linkerd-identity-54df78c479-hbh5m 2/2 Running 0 12m @@ -862,7 +862,7 @@ Ensure you can connect to the Linkerd version check endpoint from the environment the `linkerd` cli is running: ```bash -$ curl "https://versioncheck.linkerd.io/version.json?version=edge-19.1.2&uuid=test-uuid&source=cli" +curl "https://versioncheck.linkerd.io/version.json?version=edge-19.1.2&uuid=test-uuid&source=cli" {"stable":"stable-2.1.0","edge":"edge-19.1.2"} ``` @@ -921,7 +921,7 @@ normally. Example failure: ```bash -$ linkerd check --proxy --namespace foo +linkerd check --proxy --namespace foo ... × data plane namespace exists The "foo" namespace does not exist @@ -1045,7 +1045,7 @@ Ensure the kube-system namespace has the `config.linkerd.io/admission-webhooks:disabled` label: ```bash -$ kubectl get namespace kube-system -oyaml +kubectl get namespace kube-system -oyaml kind: Namespace apiVersion: v1 metadata: @@ -1118,7 +1118,7 @@ Example error: Ensure that the linkerd-cni-config ConfigMap exists in the CNI namespace: ```bash -$ kubectl get cm linkerd-cni-config -n linkerd-cni +kubectl get cm linkerd-cni-config -n linkerd-cni NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES linkerd-linkerd-cni-cni false RunAsAny RunAsAny RunAsAny RunAsAny false hostPath,secret ``` @@ -1126,7 +1126,7 @@ linkerd-linkerd-cni-cni false RunAsAny RunAsAny RunAsAny RunAs Also ensure you have permission to create ConfigMaps: ```bash -$ kubectl auth can-i create ConfigMaps +kubectl auth can-i create ConfigMaps yes ``` @@ -1143,7 +1143,7 @@ Example error: Ensure that the cluster role exists: ```bash -$ kubectl get clusterrole linkerd-cni +kubectl get clusterrole linkerd-cni NAME AGE linkerd-cni 54m ``` @@ -1151,7 +1151,7 @@ linkerd-cni 54m Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create ClusterRoles +kubectl auth can-i create ClusterRoles yes ``` @@ -1168,7 +1168,7 @@ Example error: Ensure that the cluster role binding exists: ```bash -$ kubectl get clusterrolebinding linkerd-cni +kubectl get clusterrolebinding linkerd-cni NAME AGE linkerd-cni 54m ``` @@ -1176,7 +1176,7 @@ linkerd-cni 54m Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create ClusterRoleBindings +kubectl auth can-i create ClusterRoleBindings yes ``` @@ -1193,7 +1193,7 @@ Example error: Ensure that the CNI service account exists in the CNI namespace: ```bash -$ 
kubectl get ServiceAccount linkerd-cni -n linkerd-cni +kubectl get ServiceAccount linkerd-cni -n linkerd-cni NAME SECRETS AGE linkerd-cni 1 45m ``` @@ -1201,7 +1201,7 @@ linkerd-cni 1 45m Also ensure you have permission to create ServiceAccount: ```bash -$ kubectl auth can-i create ServiceAccounts -n linkerd-cni +kubectl auth can-i create ServiceAccounts -n linkerd-cni yes ``` @@ -1218,7 +1218,7 @@ Example error: Ensure that the CNI daemonset exists in the CNI namespace: ```bash -$ kubectl get ds -n linkerd-cni +kubectl get ds -n linkerd-cni NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE linkerd-cni 1 1 1 1 1 beta.kubernetes.io/os=linux 14m ``` @@ -1226,7 +1226,7 @@ linkerd-cni 1 1 1 1 1 beta.kubernet Also ensure you have permission to create DaemonSets: ```bash -$ kubectl auth can-i create DaemonSets -n linkerd-cni +kubectl auth can-i create DaemonSets -n linkerd-cni yes ``` @@ -1243,7 +1243,7 @@ Example failure: Ensure that all the CNI pods are running: ```bash -$ kubectl get po -n linkerd-cn +kubectl get po -n linkerd-cni NAME READY STATUS RESTARTS AGE linkerd-cni-rzp2q 1/1 Running 0 9m20s linkerd-cni-mf564 1/1 Running 0 9m22s @@ -1253,7 +1253,7 @@ linkerd-cni-p5670 1/1 Running 0 9m25s Ensure that all pods have finished the deployment of the CNI config and binary: ```bash -$ kubectl logs linkerd-cni-rzp2q -n linkerd-cni +kubectl logs linkerd-cni-rzp2q -n linkerd-cni Wrote linkerd CNI binaries to /host/opt/cni/bin Created CNI config /host/etc/cni/net.d/10-kindnet.conflist Done configuring CNI. Sleep=true @@ -1281,7 +1281,7 @@ Make sure multicluster extension is correctly installed and that the `links.multicluster.linkerd.io` CRD is present. ```bash -$ kubectl get crds | grep multicluster +kubectl get crds | grep multicluster NAME CREATED AT links.multicluster.linkerd.io 2021-03-10T09:58:10Z ``` @@ -1360,7 +1360,7 @@ the rules section. Expected rules for `linkerd-service-mirror-access-local-resources` cluster role: ```bash -$ kubectl --context=local get clusterrole linkerd-service-mirror-access-local-resources -o yaml +kubectl --context=local get clusterrole linkerd-service-mirror-access-local-resources -o yaml kind: ClusterRole metadata: labels: @@ -1393,7 +1393,7 @@ rules: Expected rules for `linkerd-service-mirror-read-remote-creds` role: ```bash -$ kubectl --context=local get role linkerd-service-mirror-read-remote-creds -n linkerd-multicluster -o yaml +kubectl --context=local get role linkerd-service-mirror-read-remote-creds -n linkerd-multicluster -o yaml kind: Role metadata: labels: @@ -1426,7 +1426,7 @@ everything to start up. 
If this is a permanent error, you'll want to validate the state of the controller pod with: ```bash -$ kubectl --all-namespaces get po --selector linkerd.io/control-plane-component=linkerd-service-mirror +kubectl --all-namespaces get po --selector linkerd.io/control-plane-component=linkerd-service-mirror NAME READY STATUS RESTARTS AGE linkerd-service-mirror-7bb8ff5967-zg265 2/2 Running 0 50m ``` @@ -1544,7 +1544,7 @@ Example failure: Ensure the linkerd-viz extension ClusterRoles exist: ```bash -$ kubectl get clusterroles | grep linkerd-viz +kubectl get clusterroles | grep linkerd-viz linkerd-linkerd-viz-metrics-api 2021-01-26T18:02:17Z linkerd-linkerd-viz-prometheus 2021-01-26T18:02:17Z linkerd-linkerd-viz-tap 2021-01-26T18:02:17Z @@ -1555,7 +1555,7 @@ linkerd-linkerd-viz-web-check 2021-01-2 Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create clusterroles +kubectl auth can-i create clusterroles yes ``` @@ -1572,7 +1572,7 @@ Example failure: Ensure the linkerd-viz extension ClusterRoleBindings exist: ```bash -$ kubectl get clusterrolebindings | grep linkerd-viz +kubectl get clusterrolebindings | grep linkerd-viz linkerd-linkerd-viz-metrics-api ClusterRole/linkerd-linkerd-viz-metrics-api 18h linkerd-linkerd-viz-prometheus ClusterRole/linkerd-linkerd-viz-prometheus 18h linkerd-linkerd-viz-tap ClusterRole/linkerd-linkerd-viz-tap 18h @@ -1584,7 +1584,7 @@ linkerd-linkerd-viz-web-check ClusterRole/linkerd-linke Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create clusterrolebindings +kubectl auth can-i create clusterrolebindings yes ``` @@ -1673,7 +1673,7 @@ requirements in the cluster: Ensure all the linkerd-viz pods are injected ```bash -$ kubectl -n linkerd-viz get pods +kubectl -n linkerd-viz get pods NAME READY STATUS RESTARTS AGE grafana-68cddd7cc8-nrv4h 2/2 Running 3 18h metrics-api-77f684f7c7-hnw8r 2/2 Running 2 18h @@ -1697,7 +1697,7 @@ Make sure that the `proxy-injector` is working correctly by running Ensure all the linkerd-viz pods are running with 2/2 ```bash -$ kubectl -n linkerd-viz get pods +kubectl -n linkerd-viz get pods NAME READY STATUS RESTARTS AGE grafana-68cddd7cc8-nrv4h 2/2 Running 3 18h metrics-api-77f684f7c7-hnw8r 2/2 Running 2 18h @@ -1880,7 +1880,7 @@ versions in sync by updating either the CLI or linkerd-jaeger as necessary. 
Ensure all the jaeger pods are injected ```bash -$ kubectl -n linkerd-jaeger get pods +kubectl -n linkerd-jaeger get pods NAME READY STATUS RESTARTS AGE collector-69cc44dfbc-rhpfg 2/2 Running 0 11s jaeger-6f98d5c979-scqlq 2/2 Running 0 11s @@ -1901,7 +1901,7 @@ Make sure that the `proxy-injector` is working correctly by running Ensure all the linkerd-jaeger pods are running with 2/2 ```bash -$ kubectl -n linkerd-jaeger get pods +kubectl -n linkerd-jaeger get pods NAME READY STATUS RESTARTS AGE jaeger-injector-548684d74b-bcq5h 2/2 Running 0 5s collector-69cc44dfbc-wqf6s 2/2 Running 0 5s @@ -1950,7 +1950,7 @@ Ensure you can connect to the Linkerd Buoyant version check endpoint from the environment the `linkerd` cli is running: ```bash -$ curl https://buoyant.cloud/version.json +curl https://buoyant.cloud/version.json {"linkerd-buoyant":"v0.4.4"} ``` @@ -2015,7 +2015,7 @@ linkerd-buoyant install | kubectl apply -f - Ensure that the cluster role exists: ```bash -$ kubectl get clusterrole buoyant-cloud-agent +kubectl get clusterrole buoyant-cloud-agent NAME CREATED AT buoyant-cloud-agent 2020-11-13T00:59:50Z ``` @@ -2023,7 +2023,7 @@ buoyant-cloud-agent 2020-11-13T00:59:50Z Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create ClusterRoles +kubectl auth can-i create ClusterRoles yes ``` @@ -2038,7 +2038,7 @@ yes Ensure that the cluster role binding exists: ```bash -$ kubectl get clusterrolebinding buoyant-cloud-agent +kubectl get clusterrolebinding buoyant-cloud-agent NAME ROLE AGE buoyant-cloud-agent ClusterRole/buoyant-cloud-agent 301d ``` @@ -2046,7 +2046,7 @@ buoyant-cloud-agent ClusterRole/buoyant-cloud-agent 301d Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create ClusterRoleBindings +kubectl auth can-i create ClusterRoleBindings yes ``` @@ -2061,7 +2061,7 @@ yes Ensure that the service account exists: ```bash -$ kubectl -n buoyant-cloud get serviceaccount buoyant-cloud-agent +kubectl -n buoyant-cloud get serviceaccount buoyant-cloud-agent NAME SECRETS AGE buoyant-cloud-agent 1 301d ``` @@ -2069,7 +2069,7 @@ buoyant-cloud-agent 1 301d Also ensure you have permission to create ServiceAccounts: ```bash -$ kubectl -n buoyant-cloud auth can-i create ServiceAccount +kubectl -n buoyant-cloud auth can-i create ServiceAccount yes ``` @@ -2084,7 +2084,7 @@ yes Ensure that the secret exists: ```bash -$ kubectl -n buoyant-cloud get secret buoyant-cloud-id +kubectl -n buoyant-cloud get secret buoyant-cloud-id NAME TYPE DATA AGE buoyant-cloud-id Opaque 4 301d ``` @@ -2092,7 +2092,7 @@ buoyant-cloud-id Opaque 4 301d Also ensure you have permission to create ServiceAccounts: ```bash -$ kubectl -n buoyant-cloud auth can-i create ServiceAccount +kubectl -n buoyant-cloud auth can-i create ServiceAccount yes ``` @@ -2130,7 +2130,7 @@ everything to start up. 
If this is a permanent error, you'll want to validate the state of the `buoyant-cloud-agent` Deployment with: ```bash -$ kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-agent +kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-agent NAME READY STATUS RESTARTS AGE buoyant-cloud-agent-6b8c6888d7-htr7d 2/2 Running 0 156m ``` @@ -2153,7 +2153,7 @@ Ensure the `buoyant-cloud-agent` pod is injected, the `READY` column should show `2/2`: ```bash -$ kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-agent +kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-agent NAME READY STATUS RESTARTS AGE buoyant-cloud-agent-6b8c6888d7-htr7d 2/2 Running 0 161m ``` @@ -2172,7 +2172,7 @@ Make sure that the `proxy-injector` is working correctly by running Check the version with: ```bash -$ linkerd-buoyant version +linkerd-buoyant version CLI version: v0.4.4 Agent version: v0.4.4 ``` @@ -2231,7 +2231,7 @@ everything to start up. If this is a permanent error, you'll want to validate the state of the `buoyant-cloud-metrics` DaemonSet with: ```bash -$ kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-metrics +kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-metrics NAME READY STATUS RESTARTS AGE buoyant-cloud-metrics-kt9mv 2/2 Running 0 163m buoyant-cloud-metrics-q8jhj 2/2 Running 0 163m @@ -2257,7 +2257,7 @@ Ensure the `buoyant-cloud-metrics` pods are injected, the `READY` column should show `2/2`: ```bash -$ kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-metrics +kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-metrics NAME READY STATUS RESTARTS AGE buoyant-cloud-metrics-kt9mv 2/2 Running 0 166m buoyant-cloud-metrics-q8jhj 2/2 Running 0 166m @@ -2279,7 +2279,7 @@ Make sure that the `proxy-injector` is working correctly by running Check the version with: ```bash -$ kubectl -n buoyant-cloud get daemonset/buoyant-cloud-metrics -o jsonpath='{.metadata.labels}' +kubectl -n buoyant-cloud get daemonset/buoyant-cloud-metrics -o jsonpath='{.metadata.labels}' {"app.kubernetes.io/name":"metrics","app.kubernetes.io/part-of":"buoyant-cloud","app.kubernetes.io/version":"v0.4.4"} ``` diff --git a/linkerd.io/content/2.13/tasks/upgrade.md b/linkerd.io/content/2.13/tasks/upgrade.md index 08c3e70a35..3c00e0871b 100644 --- a/linkerd.io/content/2.13/tasks/upgrade.md +++ b/linkerd.io/content/2.13/tasks/upgrade.md @@ -303,7 +303,7 @@ Find the release name you used for the `linkerd2` chart, and the namespace where this release stored its config: ```bash -$ helm ls -A +helm ls -A NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION linkerd default 1 2021-11-22 17:14:50.751436374 -0500 -05 deployed linkerd2-2.11.1 stable-2.11.1 ``` @@ -336,18 +336,18 @@ the `linkerd-crds`, `linkerd-control-plane` and `linkerd-smi` charts: ```bash # First migrate the CRDs -$ helm -n default get manifest linkerd | \ +helm -n default get manifest linkerd | \ yq 'select(.kind == "CustomResourceDefinition") | .metadata.name' | \ grep -v '\-\-\-' | \ xargs -n1 sh -c \ 'kubectl annotate --overwrite crd/$0 meta.helm.sh/release-name=linkerd-crds meta.helm.sh/release-namespace=linkerd' # Special case for TrafficSplit (only use if you have TrafficSplit CRs) -$ kubectl annotate --overwrite crd/trafficsplits.split.smi-spec.io \ +kubectl annotate --overwrite crd/trafficsplits.split.smi-spec.io \ meta.helm.sh/release-name=linkerd-smi meta.helm.sh/release-namespace=linkerd-smi # Now migrate all the other resources -$ helm -n default get manifest linkerd | \ +helm -n 
default get manifest linkerd | \ yq 'select(.kind != "CustomResourceDefinition")' | \ yq '.kind, .metadata.name, .metadata.namespace' | \ grep -v '\-\-\-' | @@ -361,14 +361,14 @@ above. ```bash # First make sure you update the helm repo -$ helm repo up +helm repo up # Install the linkerd-crds chart -$ helm install linkerd-crds -n linkerd --create-namespace linkerd/linkerd-crds +helm install linkerd-crds -n linkerd --create-namespace linkerd/linkerd-crds # Install the linkerd-control-plane chart # (remember to add any customizations you retrieved above) -$ helm install linkerd-control-plane \ +helm install linkerd-control-plane \ -n linkerd \ --set-file identityTrustAnchorsPEM=ca.crt \ --set-file identity.issuer.tls.crtPEM=issuer.crt \ @@ -376,8 +376,8 @@ $ helm install linkerd-control-plane \ linkerd/linkerd-control-plane # Optional: if using TrafficSplit CRs -$ helm repo add l5d-smi https://linkerd.github.io/linkerd-smi -$ helm install linkerd-smi -n linkerd-smi --create-namespace l5d-smi/linkerd-smi +helm repo add l5d-smi https://linkerd.github.io/linkerd-smi +helm install linkerd-smi -n linkerd-smi --create-namespace l5d-smi/linkerd-smi ``` ##### Cleaning up the old linkerd2 Helm release @@ -388,7 +388,7 @@ remove the Helm release config for the old `linkerd2` chart (assuming you used the "Secret" storage backend, which is the default): ```bash -$ kubectl -n default delete secret \ +kubectl -n default delete secret \ --field-selector type=helm.sh/release.v1 \ -l name=linkerd,owner=helm ``` diff --git a/linkerd.io/content/2.14/reference/cli/check.md b/linkerd.io/content/2.14/reference/cli/check.md index 7cd61cd237..67a2486908 100644 --- a/linkerd.io/content/2.14/reference/cli/check.md +++ b/linkerd.io/content/2.14/reference/cli/check.md @@ -12,7 +12,7 @@ for a full list of all the possible checks, what they do and how to fix them. ## Example output ```bash -$ linkerd check +linkerd check kubernetes-api -------------- √ can initialize the client diff --git a/linkerd.io/content/2.14/reference/iptables.md b/linkerd.io/content/2.14/reference/iptables.md index 67a7ea89de..9b4d229a59 100644 --- a/linkerd.io/content/2.14/reference/iptables.md +++ b/linkerd.io/content/2.14/reference/iptables.md @@ -164,7 +164,7 @@ Alternatively, if you want to inspect the iptables rules created for a pod, you can retrieve them through the following command: ```bash -$ kubectl -n logs linkerd-init +kubectl -n logs linkerd-init # where is the name of the pod # you want to see the iptables rules for ``` diff --git a/linkerd.io/content/2.14/tasks/configuring-dynamic-request-routing.md b/linkerd.io/content/2.14/tasks/configuring-dynamic-request-routing.md index c38f2a438c..e554bc6ac4 100644 --- a/linkerd.io/content/2.14/tasks/configuring-dynamic-request-routing.md +++ b/linkerd.io/content/2.14/tasks/configuring-dynamic-request-routing.md @@ -67,7 +67,7 @@ Requests to `/echo` on port 9898 to the frontend pod will get forwarded the pod pointed by the Service `backend-a-podinfo`: ```bash -$ curl -sX POST localhost:9898/echo \ +curl -sX POST localhost:9898/echo \ | grep -o 'PODINFO_UI_MESSAGE=. backend' PODINFO_UI_MESSAGE=A backend @@ -161,7 +161,7 @@ to the `backend-a-podinfo` Service. The previous requests should still reach `backend-a-podinfo` only: ```bash -$ curl -sX POST localhost:9898/echo \ +curl -sX POST localhost:9898/echo \ | grep -o 'PODINFO_UI_MESSAGE=. 
backend' PODINFO_UI_MESSAGE=A backend @@ -171,7 +171,7 @@ But if we add the "`x-request-id: alternative`" header they get routed to `backend-b-podinfo`: ```bash -$ curl -sX POST \ +curl -sX POST \ -H 'x-request-id: alternative' \ localhost:9898/echo \ | grep -o 'PODINFO_UI_MESSAGE=. backend' diff --git a/linkerd.io/content/2.14/tasks/configuring-per-route-policy.md b/linkerd.io/content/2.14/tasks/configuring-per-route-policy.md index a5c8b5c2ef..63b79fc6d4 100644 --- a/linkerd.io/content/2.14/tasks/configuring-per-route-policy.md +++ b/linkerd.io/content/2.14/tasks/configuring-per-route-policy.md @@ -30,7 +30,7 @@ haven't already done this. Inject and install the Books demo application: ```bash -$ kubectl create ns booksapp && \ +kubectl create ns booksapp && \ curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/booksapp.yml \ | linkerd inject - \ | kubectl -n booksapp apply -f - @@ -44,21 +44,21 @@ run in the `booksapp` namespace. Confirm that the Linkerd data plane was injected successfully: ```bash -$ linkerd check -n booksapp --proxy -o short +linkerd check -n booksapp --proxy -o short ``` You can take a quick look at all the components that were added to your cluster by running: ```bash -$ kubectl -n booksapp get all +kubectl -n booksapp get all ``` Once the rollout has completed successfully, you can access the app itself by port-forwarding `webapp` locally: ```bash -$ kubectl -n booksapp port-forward svc/webapp 7000 & +kubectl -n booksapp port-forward svc/webapp 7000 & ``` Open [http://localhost:7000/](http://localhost:7000/) in your browser to see the @@ -87,7 +87,7 @@ First, let's run the `linkerd viz authz` command to list the authorization resources that currently exist for the `authors` deployment: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 default default:all-unauthenticated default/all-unauthenticated 0.0rps 70.31% 8.1rps 1ms 43ms 49ms probe default:all-unauthenticated default/probe 0.0rps 100.00% 0.3rps 1ms 1ms 1ms @@ -124,7 +124,7 @@ Now that we've defined a [`Server`] for the authors `Deployment`, we can run the currently unauthorized: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 default authors-server 9.5rps 0.00% 0.0rps 0ms 0ms 0ms probe authors-server default/probe 0.0rps 100.00% 0.1rps 1ms 1ms 1ms @@ -312,7 +312,7 @@ network (0.0.0.0). Running `linkerd viz authz` again, we can now see that our new policies exist: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 authors-get-route authors-server authorizationpolicy/authors-get-policy 0.0rps 100.00% 0.1rps 2ms 2ms 2ms authors-probe-route authors-server authorizationpolicy/authors-probe-policy 0.0rps 100.00% 0.1rps 1ms 1ms 1ms @@ -383,7 +383,7 @@ requests, but we haven't _authorized_ requests to that route. 
Running the requests to `authors-modify-route`: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 authors-get-route authors-server authorizationpolicy/authors-get-policy - - - - - - authors-modify-route authors-server 9.7rps 0.00% 0.0rps 0ms 0ms 0ms @@ -442,7 +442,7 @@ Running the `linkerd viz authz` command one last time, we now see that all traffic is authorized: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 authors-get-route authors-server authorizationpolicy/authors-get-policy 0.0rps 100.00% 0.1rps 0ms 0ms 0ms authors-modify-route authors-server authorizationpolicy/authors-modify-policy 0.0rps 100.00% 0.0rps 0ms 0ms 0ms diff --git a/linkerd.io/content/2.14/tasks/getting-per-route-metrics.md b/linkerd.io/content/2.14/tasks/getting-per-route-metrics.md index 34ee2bff6a..c2db8c0965 100644 --- a/linkerd.io/content/2.14/tasks/getting-per-route-metrics.md +++ b/linkerd.io/content/2.14/tasks/getting-per-route-metrics.md @@ -24,7 +24,7 @@ per-route authorization. You can view per-route metrics in the CLI by running `linkerd viz routes`: ```bash -$ linkerd viz routes svc/webapp +linkerd viz routes svc/webapp ROUTE SERVICE SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 GET / webapp 100.00% 0.6rps 25ms 30ms 30ms GET /authors/{id} webapp 100.00% 0.6rps 22ms 29ms 30ms @@ -44,7 +44,7 @@ specified in your service profile will end up there. It is also possible to look the metrics up by other resource types, such as: ```bash -$ linkerd viz routes deploy/webapp +linkerd viz routes deploy/webapp ROUTE SERVICE SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 [DEFAULT] kubernetes 0.00% 0.0rps 0ms 0ms 0ms GET / webapp 100.00% 0.5rps 27ms 38ms 40ms @@ -63,7 +63,7 @@ Then, it is possible to filter all the way down to requests going from a specific resource to other services: ```bash -$ linkerd viz routes deploy/webapp --to svc/books +linkerd viz routes deploy/webapp --to svc/books ROUTE SERVICE SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 DELETE /books/{id}.json books 100.00% 0.5rps 18ms 29ms 30ms GET /books.json books 100.00% 1.1rps 7ms 12ms 18ms diff --git a/linkerd.io/content/2.14/tasks/multicluster-using-statefulsets.md b/linkerd.io/content/2.14/tasks/multicluster-using-statefulsets.md index 9d8730b5b0..c720c09563 100644 --- a/linkerd.io/content/2.14/tasks/multicluster-using-statefulsets.md +++ b/linkerd.io/content/2.14/tasks/multicluster-using-statefulsets.md @@ -48,8 +48,8 @@ The first step is to clone the demo repository on your local machine. ```sh # clone example repository -$ git clone git@github.com:mateiidavid/l2d-k3d-statefulset.git -$ cd l2d-k3d-statefulset +git clone git@github.com:mateiidavid/l2d-k3d-statefulset.git +cd l2d-k3d-statefulset ``` The second step consists of creating two `k3d` clusters named `east` and `west`, @@ -60,10 +60,10 @@ everything. ```sh # create k3d clusters -$ ./create.sh +./create.sh # list the clusters -$ k3d cluster list +k3d cluster list NAME SERVERS AGENTS LOADBALANCER east 1/1 0/0 true west 1/1 0/0 true @@ -78,10 +78,10 @@ provided scripts, but feel free to have a look! ```sh # Install Linkerd and multicluster, output to check should be a success -$ ./install.sh +./install.sh # Next, link the two clusters together -$ ./link.sh +./link.sh ``` Perfect! 
If you've made it this far with no errors, then it's a good sign. In @@ -101,17 +101,17 @@ communication. First, we will deploy our pods and services: ```sh # deploy services and mesh namespaces -$ ./deploy.sh +./deploy.sh # verify both clusters # # verify east -$ kubectl --context=k3d-east get pods +kubectl --context=k3d-east get pods NAME READY STATUS RESTARTS AGE curl-56dc7d945d-96r6p 2/2 Running 0 7s # verify west has headless service -$ kubectl --context=k3d-west get services +kubectl --context=k3d-west get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.43.0.1 443/TCP 10m nginx-svc ClusterIP None 80/TCP 8s @@ -119,7 +119,7 @@ nginx-svc ClusterIP None 80/TCP 8s # verify west has statefulset # # this may take a while to come up -$ kubectl --context=k3d-west get pods +kubectl --context=k3d-west get pods NAME READY STATUS RESTARTS AGE nginx-set-0 2/2 Running 0 53s nginx-set-1 2/2 Running 0 43s @@ -130,7 +130,7 @@ Before we go further, let's have a look at the endpoints object for the `nginx-svc`: ```sh -$ kubectl --context=k3d-west get endpoints nginx-svc -o yaml +kubectl --context=k3d-west get endpoints nginx-svc -o yaml ... subsets: - addresses: @@ -170,23 +170,23 @@ would get an answer back. We can test this out by applying the curl pod to the `west` cluster: ```sh -$ kubectl --context=k3d-west apply -f east/curl.yml -$ kubectl --context=k3d-west get pods +kubectl --context=k3d-west apply -f east/curl.yml +kubectl --context=k3d-west get pods NAME READY STATUS RESTARTS AGE nginx-set-0 2/2 Running 0 5m8s nginx-set-1 2/2 Running 0 4m58s nginx-set-2 2/2 Running 0 4m51s curl-56dc7d945d-s4n8j 0/2 PodInitializing 0 4s -$ kubectl --context=k3d-west exec -it curl-56dc7d945d-s4n8j -c curl -- bin/sh -/$ # prompt for curl pod +kubectl --context=k3d-west exec -it curl-56dc7d945d-s4n8j -c curl -- bin/sh +/# prompt for curl pod ``` If we now curl one of these instances, we will get back a response. ```sh # exec'd on the pod -/ $ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local +/ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local " @@ -218,10 +218,10 @@ Now, let's do the same, but this time from the `east` cluster. We will first export the service. ```sh -$ kubectl --context=k3d-west label service nginx-svc mirror.linkerd.io/exported="true" +kubectl --context=k3d-west label service nginx-svc mirror.linkerd.io/exported="true" service/nginx-svc labeled -$ kubectl --context=k3d-east get services +kubectl --context=k3d-east get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.43.0.1 443/TCP 20h nginx-svc-west ClusterIP None 80/TCP 29s @@ -235,7 +235,7 @@ endpoints for `nginx-svc-west` will have the same hostnames, but each hostname will point to one of the services we see above: ```sh -$ kubectl --context=k3d-east get endpoints nginx-svc-west -o yaml +kubectl --context=k3d-east get endpoints nginx-svc-west -o yaml subsets: - addresses: - hostname: nginx-set-0 @@ -251,17 +251,17 @@ cluster (`west`), will be mirrored as a clusterIP service. We will see in a second why this matters. ```sh -$ kubectl --context=k3d-east get pods +kubectl --context=k3d-east get pods NAME READY STATUS RESTARTS AGE curl-56dc7d945d-96r6p 2/2 Running 0 23m # exec and curl -$ kubectl --context=k3d-east exec pod curl-56dc7d945d-96r6p -it -c curl -- bin/sh +kubectl --context=k3d-east exec pod curl-56dc7d945d-96r6p -it -c curl -- bin/sh # we want to curl the same hostname we see in the endpoints object above. 
# however, the service and cluster domain will now be different, since we # are in a different cluster. # -/ $ curl nginx-set-0.nginx-svc-west.default.svc.east.cluster.local +/ curl nginx-set-0.nginx-svc-west.default.svc.east.cluster.local @@ -329,8 +329,8 @@ validation. To clean-up, you can remove both clusters entirely using the k3d CLI: ```sh -$ k3d cluster delete east +k3d cluster delete east cluster east deleted -$ k3d cluster delete west +k3d cluster delete west cluster west deleted ``` diff --git a/linkerd.io/content/2.14/tasks/restricting-access.md b/linkerd.io/content/2.14/tasks/restricting-access.md index 0b0b0c94b7..38ebdaeb3d 100644 --- a/linkerd.io/content/2.14/tasks/restricting-access.md +++ b/linkerd.io/content/2.14/tasks/restricting-access.md @@ -21,9 +21,9 @@ haven't already done this. Inject and install the Emojivoto application: ```bash -$ linkerd inject https://run.linkerd.io/emojivoto.yml | kubectl apply -f - +linkerd inject https://run.linkerd.io/emojivoto.yml | kubectl apply -f - ... -$ linkerd check -n emojivoto --proxy -o short +linkerd check -n emojivoto --proxy -o short ... ``` diff --git a/linkerd.io/content/2.14/tasks/securing-linkerd-tap.md b/linkerd.io/content/2.14/tasks/securing-linkerd-tap.md index 8a802c890c..639f81692f 100644 --- a/linkerd.io/content/2.14/tasks/securing-linkerd-tap.md +++ b/linkerd.io/content/2.14/tasks/securing-linkerd-tap.md @@ -60,7 +60,7 @@ kubectl auth can-i watch deployments.tap.linkerd.io -n emojivoto --as $(whoami) You can also use the Linkerd CLI's `--as` flag to confirm: ```bash -$ linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) +linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) Cannot connect to Linkerd Viz: namespaces is forbidden: User "XXXX" cannot list resource "namespaces" in API group "" at the cluster scope Validate the install with: linkerd viz check ... @@ -77,7 +77,7 @@ To enable tap access to all resources in all namespaces, you may bind your user to the `linkerd-linkerd-tap-admin` ClusterRole, installed by default: ```bash -$ kubectl describe clusterroles/linkerd-linkerd-viz-tap-admin +kubectl describe clusterroles/linkerd-linkerd-viz-tap-admin Name: linkerd-linkerd-viz-tap-admin Labels: component=tap linkerd.io/extension=viz @@ -109,7 +109,7 @@ kubectl create clusterrolebinding \ You can verify you now have tap access with: ```bash -$ linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) +linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) req id=3:0 proxy=in src=10.244.0.1:37392 dst=10.244.0.13:9996 tls=not_provided_by_remote :method=GET :authority=10.244.0.13:9996 :path=/ping ... ``` @@ -143,14 +143,14 @@ Because GCloud provides this additional level of access, there are cases where not. To validate this, check whether your GCloud user has Tap access: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces yes ``` And then validate whether your RBAC user has Tap access: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as $(gcloud config get-value account) +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as $(gcloud config get-value account) no - no RBAC policy matched ``` @@ -187,14 +187,14 @@ privileges necessary to tap resources. 
To confirm: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web yes ``` This access is enabled via a `linkerd-linkerd-viz-web-admin` ClusterRoleBinding: ```bash -$ kubectl describe clusterrolebindings/linkerd-linkerd-viz-web-admin +kubectl describe clusterrolebindings/linkerd-linkerd-viz-web-admin Name: linkerd-linkerd-viz-web-admin Labels: component=web linkerd.io/extensions=viz @@ -227,6 +227,6 @@ kubectl delete clusterrolebindings/linkerd-linkerd-viz-web-admin To confirm: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web no ``` diff --git a/linkerd.io/content/2.14/tasks/troubleshooting.md b/linkerd.io/content/2.14/tasks/troubleshooting.md index 7ec6896a2d..b142ee66a9 100644 --- a/linkerd.io/content/2.14/tasks/troubleshooting.md +++ b/linkerd.io/content/2.14/tasks/troubleshooting.md @@ -230,7 +230,7 @@ Example failure: Ensure the Linkerd ClusterRoles exist: ```bash -$ kubectl get clusterroles | grep linkerd +kubectl get clusterroles | grep linkerd linkerd-linkerd-destination 9d linkerd-linkerd-identity 9d linkerd-linkerd-proxy-injector 9d @@ -240,7 +240,7 @@ linkerd-policy 9d Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create clusterroles +kubectl auth can-i create clusterroles yes ``` @@ -257,7 +257,7 @@ Example failure: Ensure the Linkerd ClusterRoleBindings exist: ```bash -$ kubectl get clusterrolebindings | grep linkerd +kubectl get clusterrolebindings | grep linkerd linkerd-linkerd-destination 9d linkerd-linkerd-identity 9d linkerd-linkerd-proxy-injector 9d @@ -267,7 +267,7 @@ linkerd-destination-policy 9d Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create clusterrolebindings +kubectl auth can-i create clusterrolebindings yes ``` @@ -284,7 +284,7 @@ Example failure: Ensure the Linkerd ServiceAccounts exist: ```bash -$ kubectl -n linkerd get serviceaccounts +kubectl -n linkerd get serviceaccounts NAME SECRETS AGE default 1 14m linkerd-destination 1 14m @@ -297,7 +297,7 @@ Also ensure you have permission to create ServiceAccounts in the Linkerd namespace: ```bash -$ kubectl -n linkerd auth can-i create serviceaccounts +kubectl -n linkerd auth can-i create serviceaccounts yes ``` @@ -314,7 +314,7 @@ Example failure: Ensure the Linkerd CRD exists: ```bash -$ kubectl get customresourcedefinitions +kubectl get customresourcedefinitions NAME CREATED AT serviceprofiles.linkerd.io 2019-04-25T21:47:31Z ``` @@ -322,7 +322,7 @@ serviceprofiles.linkerd.io 2019-04-25T21:47:31Z Also ensure you have permission to create CRDs: ```bash -$ kubectl auth can-i create customresourcedefinitions +kubectl auth can-i create customresourcedefinitions yes ``` @@ -339,14 +339,14 @@ Example failure: Ensure the Linkerd MutatingWebhookConfigurations exists: ```bash -$ kubectl get mutatingwebhookconfigurations | grep linkerd +kubectl get mutatingwebhookconfigurations | grep linkerd linkerd-proxy-injector-webhook-config 2019-07-01T13:13:26Z ``` Also ensure you have permission to create MutatingWebhookConfigurations: ```bash -$ kubectl auth can-i create mutatingwebhookconfigurations +kubectl auth can-i create mutatingwebhookconfigurations yes ``` @@ -363,14 +363,14 @@ Example failure: Ensure 
the Linkerd ValidatingWebhookConfiguration exists: ```bash -$ kubectl get validatingwebhookconfigurations | grep linkerd +kubectl get validatingwebhookconfigurations | grep linkerd linkerd-sp-validator-webhook-config 2019-07-01T13:13:26Z ``` Also ensure you have permission to create ValidatingWebhookConfigurations: ```bash -$ kubectl auth can-i create validatingwebhookconfigurations +kubectl auth can-i create validatingwebhookconfigurations yes ``` @@ -418,7 +418,7 @@ Example failure: Ensure the Linkerd ConfigMap exists: ```bash -$ kubectl -n linkerd get configmap/linkerd-config +kubectl -n linkerd get configmap/linkerd-config NAME DATA AGE linkerd-config 3 61m ``` @@ -426,7 +426,7 @@ linkerd-config 3 61m Also ensure you have permission to create ConfigMaps: ```bash -$ kubectl -n linkerd auth can-i create configmap +kubectl -n linkerd auth can-i create configmap yes ``` @@ -780,7 +780,7 @@ Example failure: Verify the state of the control plane pods with: ```bash -$ kubectl -n linkerd get po +kubectl -n linkerd get po NAME READY STATUS RESTARTS AGE linkerd-destination-5fd7b5d466-szgqm 2/2 Running 1 12m linkerd-identity-54df78c479-hbh5m 2/2 Running 0 12m @@ -862,7 +862,7 @@ Ensure you can connect to the Linkerd version check endpoint from the environment the `linkerd` cli is running: ```bash -$ curl "https://versioncheck.linkerd.io/version.json?version=edge-19.1.2&uuid=test-uuid&source=cli" +curl "https://versioncheck.linkerd.io/version.json?version=edge-19.1.2&uuid=test-uuid&source=cli" {"stable":"stable-2.1.0","edge":"edge-19.1.2"} ``` @@ -921,7 +921,7 @@ normally. Example failure: ```bash -$ linkerd check --proxy --namespace foo +linkerd check --proxy --namespace foo ... × data plane namespace exists The "foo" namespace does not exist @@ -1045,7 +1045,7 @@ Ensure the kube-system namespace has the `config.linkerd.io/admission-webhooks:disabled` label: ```bash -$ kubectl get namespace kube-system -oyaml +kubectl get namespace kube-system -oyaml kind: Namespace apiVersion: v1 metadata: @@ -1118,7 +1118,7 @@ Example error: Ensure that the linkerd-cni-config ConfigMap exists in the CNI namespace: ```bash -$ kubectl get cm linkerd-cni-config -n linkerd-cni +kubectl get cm linkerd-cni-config -n linkerd-cni NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES linkerd-linkerd-cni-cni false RunAsAny RunAsAny RunAsAny RunAsAny false hostPath,secret ``` @@ -1126,7 +1126,7 @@ linkerd-linkerd-cni-cni false RunAsAny RunAsAny RunAsAny RunAs Also ensure you have permission to create ConfigMaps: ```bash -$ kubectl auth can-i create ConfigMaps +kubectl auth can-i create ConfigMaps yes ``` @@ -1143,7 +1143,7 @@ Example error: Ensure that the cluster role exists: ```bash -$ kubectl get clusterrole linkerd-cni +kubectl get clusterrole linkerd-cni NAME AGE linkerd-cni 54m ``` @@ -1151,7 +1151,7 @@ linkerd-cni 54m Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create ClusterRoles +kubectl auth can-i create ClusterRoles yes ``` @@ -1168,7 +1168,7 @@ Example error: Ensure that the cluster role binding exists: ```bash -$ kubectl get clusterrolebinding linkerd-cni +kubectl get clusterrolebinding linkerd-cni NAME AGE linkerd-cni 54m ``` @@ -1176,7 +1176,7 @@ linkerd-cni 54m Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create ClusterRoleBindings +kubectl auth can-i create ClusterRoleBindings yes ``` @@ -1193,7 +1193,7 @@ Example error: Ensure that the CNI service account exists in the CNI namespace: ```bash -$ 
kubectl get ServiceAccount linkerd-cni -n linkerd-cni +kubectl get ServiceAccount linkerd-cni -n linkerd-cni NAME SECRETS AGE linkerd-cni 1 45m ``` @@ -1201,7 +1201,7 @@ linkerd-cni 1 45m Also ensure you have permission to create ServiceAccount: ```bash -$ kubectl auth can-i create ServiceAccounts -n linkerd-cni +kubectl auth can-i create ServiceAccounts -n linkerd-cni yes ``` @@ -1218,7 +1218,7 @@ Example error: Ensure that the CNI daemonset exists in the CNI namespace: ```bash -$ kubectl get ds -n linkerd-cni +kubectl get ds -n linkerd-cni NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE linkerd-cni 1 1 1 1 1 beta.kubernetes.io/os=linux 14m ``` @@ -1226,7 +1226,7 @@ linkerd-cni 1 1 1 1 1 beta.kubernet Also ensure you have permission to create DaemonSets: ```bash -$ kubectl auth can-i create DaemonSets -n linkerd-cni +kubectl auth can-i create DaemonSets -n linkerd-cni yes ``` @@ -1243,7 +1243,7 @@ Example failure: Ensure that all the CNI pods are running: ```bash -$ kubectl get po -n linkerd-cn +kubectl get po -n linkerd-cni NAME READY STATUS RESTARTS AGE linkerd-cni-rzp2q 1/1 Running 0 9m20s linkerd-cni-mf564 1/1 Running 0 9m22s @@ -1253,7 +1253,7 @@ linkerd-cni-p5670 1/1 Running 0 9m25s Ensure that all pods have finished the deployment of the CNI config and binary: ```bash -$ kubectl logs linkerd-cni-rzp2q -n linkerd-cni +kubectl logs linkerd-cni-rzp2q -n linkerd-cni Wrote linkerd CNI binaries to /host/opt/cni/bin Created CNI config /host/etc/cni/net.d/10-kindnet.conflist Done configuring CNI. Sleep=true @@ -1281,7 +1281,7 @@ Make sure multicluster extension is correctly installed and that the `links.multicluster.linkerd.io` CRD is present. ```bash -$ kubectl get crds | grep multicluster +kubectl get crds | grep multicluster NAME CREATED AT links.multicluster.linkerd.io 2021-03-10T09:58:10Z ``` @@ -1360,7 +1360,7 @@ the rules section. Expected rules for `linkerd-service-mirror-access-local-resources` cluster role: ```bash -$ kubectl --context=local get clusterrole linkerd-service-mirror-access-local-resources -o yaml +kubectl --context=local get clusterrole linkerd-service-mirror-access-local-resources -o yaml kind: ClusterRole metadata: labels: @@ -1393,7 +1393,7 @@ rules: Expected rules for `linkerd-service-mirror-read-remote-creds` role: ```bash -$ kubectl --context=local get role linkerd-service-mirror-read-remote-creds -n linkerd-multicluster -o yaml +kubectl --context=local get role linkerd-service-mirror-read-remote-creds -n linkerd-multicluster -o yaml kind: Role metadata: labels: @@ -1426,7 +1426,7 @@ everything to start up. 
If this is a permanent error, you'll want to validate the state of the controller pod with: ```bash -$ kubectl --all-namespaces get po --selector linkerd.io/control-plane-component=linkerd-service-mirror +kubectl --all-namespaces get po --selector linkerd.io/control-plane-component=linkerd-service-mirror NAME READY STATUS RESTARTS AGE linkerd-service-mirror-7bb8ff5967-zg265 2/2 Running 0 50m ``` @@ -1544,7 +1544,7 @@ Example failure: Ensure the linkerd-viz extension ClusterRoles exist: ```bash -$ kubectl get clusterroles | grep linkerd-viz +kubectl get clusterroles | grep linkerd-viz linkerd-linkerd-viz-metrics-api 2021-01-26T18:02:17Z linkerd-linkerd-viz-prometheus 2021-01-26T18:02:17Z linkerd-linkerd-viz-tap 2021-01-26T18:02:17Z @@ -1555,7 +1555,7 @@ linkerd-linkerd-viz-web-check 2021-01-2 Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create clusterroles +kubectl auth can-i create clusterroles yes ``` @@ -1572,7 +1572,7 @@ Example failure: Ensure the linkerd-viz extension ClusterRoleBindings exist: ```bash -$ kubectl get clusterrolebindings | grep linkerd-viz +kubectl get clusterrolebindings | grep linkerd-viz linkerd-linkerd-viz-metrics-api ClusterRole/linkerd-linkerd-viz-metrics-api 18h linkerd-linkerd-viz-prometheus ClusterRole/linkerd-linkerd-viz-prometheus 18h linkerd-linkerd-viz-tap ClusterRole/linkerd-linkerd-viz-tap 18h @@ -1584,7 +1584,7 @@ linkerd-linkerd-viz-web-check ClusterRole/linkerd-linke Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create clusterrolebindings +kubectl auth can-i create clusterrolebindings yes ``` @@ -1673,7 +1673,7 @@ requirements in the cluster: Ensure all the linkerd-viz pods are injected ```bash -$ kubectl -n linkerd-viz get pods +kubectl -n linkerd-viz get pods NAME READY STATUS RESTARTS AGE grafana-68cddd7cc8-nrv4h 2/2 Running 3 18h metrics-api-77f684f7c7-hnw8r 2/2 Running 2 18h @@ -1697,7 +1697,7 @@ Make sure that the `proxy-injector` is working correctly by running Ensure all the linkerd-viz pods are running with 2/2 ```bash -$ kubectl -n linkerd-viz get pods +kubectl -n linkerd-viz get pods NAME READY STATUS RESTARTS AGE grafana-68cddd7cc8-nrv4h 2/2 Running 3 18h metrics-api-77f684f7c7-hnw8r 2/2 Running 2 18h @@ -1880,7 +1880,7 @@ versions in sync by updating either the CLI or linkerd-jaeger as necessary. 
Ensure all the jaeger pods are injected ```bash -$ kubectl -n linkerd-jaeger get pods +kubectl -n linkerd-jaeger get pods NAME READY STATUS RESTARTS AGE collector-69cc44dfbc-rhpfg 2/2 Running 0 11s jaeger-6f98d5c979-scqlq 2/2 Running 0 11s @@ -1901,7 +1901,7 @@ Make sure that the `proxy-injector` is working correctly by running Ensure all the linkerd-jaeger pods are running with 2/2 ```bash -$ kubectl -n linkerd-jaeger get pods +kubectl -n linkerd-jaeger get pods NAME READY STATUS RESTARTS AGE jaeger-injector-548684d74b-bcq5h 2/2 Running 0 5s collector-69cc44dfbc-wqf6s 2/2 Running 0 5s @@ -1950,7 +1950,7 @@ Ensure you can connect to the Linkerd Buoyant version check endpoint from the environment the `linkerd` cli is running: ```bash -$ curl https://buoyant.cloud/version.json +curl https://buoyant.cloud/version.json {"linkerd-buoyant":"v0.4.4"} ``` @@ -2015,7 +2015,7 @@ linkerd-buoyant install | kubectl apply -f - Ensure that the cluster role exists: ```bash -$ kubectl get clusterrole buoyant-cloud-agent +kubectl get clusterrole buoyant-cloud-agent NAME CREATED AT buoyant-cloud-agent 2020-11-13T00:59:50Z ``` @@ -2023,7 +2023,7 @@ buoyant-cloud-agent 2020-11-13T00:59:50Z Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create ClusterRoles +kubectl auth can-i create ClusterRoles yes ``` @@ -2038,7 +2038,7 @@ yes Ensure that the cluster role binding exists: ```bash -$ kubectl get clusterrolebinding buoyant-cloud-agent +kubectl get clusterrolebinding buoyant-cloud-agent NAME ROLE AGE buoyant-cloud-agent ClusterRole/buoyant-cloud-agent 301d ``` @@ -2046,7 +2046,7 @@ buoyant-cloud-agent ClusterRole/buoyant-cloud-agent 301d Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create ClusterRoleBindings +kubectl auth can-i create ClusterRoleBindings yes ``` @@ -2061,7 +2061,7 @@ yes Ensure that the service account exists: ```bash -$ kubectl -n buoyant-cloud get serviceaccount buoyant-cloud-agent +kubectl -n buoyant-cloud get serviceaccount buoyant-cloud-agent NAME SECRETS AGE buoyant-cloud-agent 1 301d ``` @@ -2069,7 +2069,7 @@ buoyant-cloud-agent 1 301d Also ensure you have permission to create ServiceAccounts: ```bash -$ kubectl -n buoyant-cloud auth can-i create ServiceAccount +kubectl -n buoyant-cloud auth can-i create ServiceAccount yes ``` @@ -2084,7 +2084,7 @@ yes Ensure that the secret exists: ```bash -$ kubectl -n buoyant-cloud get secret buoyant-cloud-id +kubectl -n buoyant-cloud get secret buoyant-cloud-id NAME TYPE DATA AGE buoyant-cloud-id Opaque 4 301d ``` @@ -2092,7 +2092,7 @@ buoyant-cloud-id Opaque 4 301d Also ensure you have permission to create ServiceAccounts: ```bash -$ kubectl -n buoyant-cloud auth can-i create ServiceAccount +kubectl -n buoyant-cloud auth can-i create ServiceAccount yes ``` @@ -2130,7 +2130,7 @@ everything to start up. 
If this is a permanent error, you'll want to validate the state of the `buoyant-cloud-agent` Deployment with: ```bash -$ kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-agent +kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-agent NAME READY STATUS RESTARTS AGE buoyant-cloud-agent-6b8c6888d7-htr7d 2/2 Running 0 156m ``` @@ -2153,7 +2153,7 @@ Ensure the `buoyant-cloud-agent` pod is injected, the `READY` column should show `2/2`: ```bash -$ kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-agent +kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-agent NAME READY STATUS RESTARTS AGE buoyant-cloud-agent-6b8c6888d7-htr7d 2/2 Running 0 161m ``` @@ -2172,7 +2172,7 @@ Make sure that the `proxy-injector` is working correctly by running Check the version with: ```bash -$ linkerd-buoyant version +linkerd-buoyant version CLI version: v0.4.4 Agent version: v0.4.4 ``` @@ -2231,7 +2231,7 @@ everything to start up. If this is a permanent error, you'll want to validate the state of the `buoyant-cloud-metrics` DaemonSet with: ```bash -$ kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-metrics +kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-metrics NAME READY STATUS RESTARTS AGE buoyant-cloud-metrics-kt9mv 2/2 Running 0 163m buoyant-cloud-metrics-q8jhj 2/2 Running 0 163m @@ -2257,7 +2257,7 @@ Ensure the `buoyant-cloud-metrics` pods are injected, the `READY` column should show `2/2`: ```bash -$ kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-metrics +kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-metrics NAME READY STATUS RESTARTS AGE buoyant-cloud-metrics-kt9mv 2/2 Running 0 166m buoyant-cloud-metrics-q8jhj 2/2 Running 0 166m @@ -2279,7 +2279,7 @@ Make sure that the `proxy-injector` is working correctly by running Check the version with: ```bash -$ kubectl -n buoyant-cloud get daemonset/buoyant-cloud-metrics -o jsonpath='{.metadata.labels}' +kubectl -n buoyant-cloud get daemonset/buoyant-cloud-metrics -o jsonpath='{.metadata.labels}' {"app.kubernetes.io/name":"metrics","app.kubernetes.io/part-of":"buoyant-cloud","app.kubernetes.io/version":"v0.4.4"} ``` diff --git a/linkerd.io/content/2.14/tasks/upgrade.md b/linkerd.io/content/2.14/tasks/upgrade.md index 32f921e829..14321d64fe 100644 --- a/linkerd.io/content/2.14/tasks/upgrade.md +++ b/linkerd.io/content/2.14/tasks/upgrade.md @@ -317,7 +317,7 @@ Find the release name you used for the `linkerd2` chart, and the namespace where this release stored its config: ```bash -$ helm ls -A +helm ls -A NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION linkerd default 1 2021-11-22 17:14:50.751436374 -0500 -05 deployed linkerd2-2.11.1 stable-2.11.1 ``` @@ -350,18 +350,18 @@ the `linkerd-crds`, `linkerd-control-plane` and `linkerd-smi` charts: ```bash # First migrate the CRDs -$ helm -n default get manifest linkerd | \ +helm -n default get manifest linkerd | \ yq 'select(.kind == "CustomResourceDefinition") | .metadata.name' | \ grep -v '\-\-\-' | \ xargs -n1 sh -c \ 'kubectl annotate --overwrite crd/$0 meta.helm.sh/release-name=linkerd-crds meta.helm.sh/release-namespace=linkerd' # Special case for TrafficSplit (only use if you have TrafficSplit CRs) -$ kubectl annotate --overwrite crd/trafficsplits.split.smi-spec.io \ +kubectl annotate --overwrite crd/trafficsplits.split.smi-spec.io \ meta.helm.sh/release-name=linkerd-smi meta.helm.sh/release-namespace=linkerd-smi # Now migrate all the other resources -$ helm -n default get manifest linkerd | \ +helm -n 
default get manifest linkerd | \ yq 'select(.kind != "CustomResourceDefinition")' | \ yq '.kind, .metadata.name, .metadata.namespace' | \ grep -v '\-\-\-' | @@ -375,14 +375,14 @@ above. ```bash # First make sure you update the helm repo -$ helm repo up +helm repo up # Install the linkerd-crds chart -$ helm install linkerd-crds -n linkerd --create-namespace linkerd/linkerd-crds +helm install linkerd-crds -n linkerd --create-namespace linkerd/linkerd-crds # Install the linkerd-control-plane chart # (remember to add any customizations you retrieved above) -$ helm install linkerd-control-plane \ +helm install linkerd-control-plane \ -n linkerd \ --set-file identityTrustAnchorsPEM=ca.crt \ --set-file identity.issuer.tls.crtPEM=issuer.crt \ @@ -390,8 +390,8 @@ $ helm install linkerd-control-plane \ linkerd/linkerd-control-plane # Optional: if using TrafficSplit CRs -$ helm repo add l5d-smi https://linkerd.github.io/linkerd-smi -$ helm install linkerd-smi -n linkerd-smi --create-namespace l5d-smi/linkerd-smi +helm repo add l5d-smi https://linkerd.github.io/linkerd-smi +helm install linkerd-smi -n linkerd-smi --create-namespace l5d-smi/linkerd-smi ``` ##### Cleaning up the old linkerd2 Helm release @@ -402,7 +402,7 @@ remove the Helm release config for the old `linkerd2` chart (assuming you used the "Secret" storage backend, which is the default): ```bash -$ kubectl -n default delete secret \ +kubectl -n default delete secret \ --field-selector type=helm.sh/release.v1 \ -l name=linkerd,owner=helm ``` diff --git a/linkerd.io/content/2.15/reference/cli/check.md b/linkerd.io/content/2.15/reference/cli/check.md index 7cd61cd237..67a2486908 100644 --- a/linkerd.io/content/2.15/reference/cli/check.md +++ b/linkerd.io/content/2.15/reference/cli/check.md @@ -12,7 +12,7 @@ for a full list of all the possible checks, what they do and how to fix them. ## Example output ```bash -$ linkerd check +linkerd check kubernetes-api -------------- √ can initialize the client diff --git a/linkerd.io/content/2.15/reference/iptables.md b/linkerd.io/content/2.15/reference/iptables.md index 67a7ea89de..9b4d229a59 100644 --- a/linkerd.io/content/2.15/reference/iptables.md +++ b/linkerd.io/content/2.15/reference/iptables.md @@ -164,7 +164,7 @@ Alternatively, if you want to inspect the iptables rules created for a pod, you can retrieve them through the following command: ```bash -$ kubectl -n logs linkerd-init +kubectl -n logs linkerd-init # where is the name of the pod # you want to see the iptables rules for ``` diff --git a/linkerd.io/content/2.15/tasks/configuring-dynamic-request-routing.md b/linkerd.io/content/2.15/tasks/configuring-dynamic-request-routing.md index c38f2a438c..e554bc6ac4 100644 --- a/linkerd.io/content/2.15/tasks/configuring-dynamic-request-routing.md +++ b/linkerd.io/content/2.15/tasks/configuring-dynamic-request-routing.md @@ -67,7 +67,7 @@ Requests to `/echo` on port 9898 to the frontend pod will get forwarded the pod pointed by the Service `backend-a-podinfo`: ```bash -$ curl -sX POST localhost:9898/echo \ +curl -sX POST localhost:9898/echo \ | grep -o 'PODINFO_UI_MESSAGE=. backend' PODINFO_UI_MESSAGE=A backend @@ -161,7 +161,7 @@ to the `backend-a-podinfo` Service. The previous requests should still reach `backend-a-podinfo` only: ```bash -$ curl -sX POST localhost:9898/echo \ +curl -sX POST localhost:9898/echo \ | grep -o 'PODINFO_UI_MESSAGE=. 
backend' PODINFO_UI_MESSAGE=A backend @@ -171,7 +171,7 @@ But if we add the "`x-request-id: alternative`" header they get routed to `backend-b-podinfo`: ```bash -$ curl -sX POST \ +curl -sX POST \ -H 'x-request-id: alternative' \ localhost:9898/echo \ | grep -o 'PODINFO_UI_MESSAGE=. backend' diff --git a/linkerd.io/content/2.15/tasks/configuring-per-route-policy.md b/linkerd.io/content/2.15/tasks/configuring-per-route-policy.md index a5c8b5c2ef..63b79fc6d4 100644 --- a/linkerd.io/content/2.15/tasks/configuring-per-route-policy.md +++ b/linkerd.io/content/2.15/tasks/configuring-per-route-policy.md @@ -30,7 +30,7 @@ haven't already done this. Inject and install the Books demo application: ```bash -$ kubectl create ns booksapp && \ +kubectl create ns booksapp && \ curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/booksapp.yml \ | linkerd inject - \ | kubectl -n booksapp apply -f - @@ -44,21 +44,21 @@ run in the `booksapp` namespace. Confirm that the Linkerd data plane was injected successfully: ```bash -$ linkerd check -n booksapp --proxy -o short +linkerd check -n booksapp --proxy -o short ``` You can take a quick look at all the components that were added to your cluster by running: ```bash -$ kubectl -n booksapp get all +kubectl -n booksapp get all ``` Once the rollout has completed successfully, you can access the app itself by port-forwarding `webapp` locally: ```bash -$ kubectl -n booksapp port-forward svc/webapp 7000 & +kubectl -n booksapp port-forward svc/webapp 7000 & ``` Open [http://localhost:7000/](http://localhost:7000/) in your browser to see the @@ -87,7 +87,7 @@ First, let's run the `linkerd viz authz` command to list the authorization resources that currently exist for the `authors` deployment: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 default default:all-unauthenticated default/all-unauthenticated 0.0rps 70.31% 8.1rps 1ms 43ms 49ms probe default:all-unauthenticated default/probe 0.0rps 100.00% 0.3rps 1ms 1ms 1ms @@ -124,7 +124,7 @@ Now that we've defined a [`Server`] for the authors `Deployment`, we can run the currently unauthorized: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 default authors-server 9.5rps 0.00% 0.0rps 0ms 0ms 0ms probe authors-server default/probe 0.0rps 100.00% 0.1rps 1ms 1ms 1ms @@ -312,7 +312,7 @@ network (0.0.0.0). Running `linkerd viz authz` again, we can now see that our new policies exist: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 authors-get-route authors-server authorizationpolicy/authors-get-policy 0.0rps 100.00% 0.1rps 2ms 2ms 2ms authors-probe-route authors-server authorizationpolicy/authors-probe-policy 0.0rps 100.00% 0.1rps 1ms 1ms 1ms @@ -383,7 +383,7 @@ requests, but we haven't _authorized_ requests to that route. 
Running the requests to `authors-modify-route`: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 authors-get-route authors-server authorizationpolicy/authors-get-policy - - - - - - authors-modify-route authors-server 9.7rps 0.00% 0.0rps 0ms 0ms 0ms @@ -442,7 +442,7 @@ Running the `linkerd viz authz` command one last time, we now see that all traffic is authorized: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 authors-get-route authors-server authorizationpolicy/authors-get-policy 0.0rps 100.00% 0.1rps 0ms 0ms 0ms authors-modify-route authors-server authorizationpolicy/authors-modify-policy 0.0rps 100.00% 0.0rps 0ms 0ms 0ms diff --git a/linkerd.io/content/2.15/tasks/getting-per-route-metrics.md b/linkerd.io/content/2.15/tasks/getting-per-route-metrics.md index 34ee2bff6a..c2db8c0965 100644 --- a/linkerd.io/content/2.15/tasks/getting-per-route-metrics.md +++ b/linkerd.io/content/2.15/tasks/getting-per-route-metrics.md @@ -24,7 +24,7 @@ per-route authorization. You can view per-route metrics in the CLI by running `linkerd viz routes`: ```bash -$ linkerd viz routes svc/webapp +linkerd viz routes svc/webapp ROUTE SERVICE SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 GET / webapp 100.00% 0.6rps 25ms 30ms 30ms GET /authors/{id} webapp 100.00% 0.6rps 22ms 29ms 30ms @@ -44,7 +44,7 @@ specified in your service profile will end up there. It is also possible to look the metrics up by other resource types, such as: ```bash -$ linkerd viz routes deploy/webapp +linkerd viz routes deploy/webapp ROUTE SERVICE SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 [DEFAULT] kubernetes 0.00% 0.0rps 0ms 0ms 0ms GET / webapp 100.00% 0.5rps 27ms 38ms 40ms @@ -63,7 +63,7 @@ Then, it is possible to filter all the way down to requests going from a specific resource to other services: ```bash -$ linkerd viz routes deploy/webapp --to svc/books +linkerd viz routes deploy/webapp --to svc/books ROUTE SERVICE SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 DELETE /books/{id}.json books 100.00% 0.5rps 18ms 29ms 30ms GET /books.json books 100.00% 1.1rps 7ms 12ms 18ms diff --git a/linkerd.io/content/2.15/tasks/multicluster-using-statefulsets.md b/linkerd.io/content/2.15/tasks/multicluster-using-statefulsets.md index 9d8730b5b0..c720c09563 100644 --- a/linkerd.io/content/2.15/tasks/multicluster-using-statefulsets.md +++ b/linkerd.io/content/2.15/tasks/multicluster-using-statefulsets.md @@ -48,8 +48,8 @@ The first step is to clone the demo repository on your local machine. ```sh # clone example repository -$ git clone git@github.com:mateiidavid/l2d-k3d-statefulset.git -$ cd l2d-k3d-statefulset +git clone git@github.com:mateiidavid/l2d-k3d-statefulset.git +cd l2d-k3d-statefulset ``` The second step consists of creating two `k3d` clusters named `east` and `west`, @@ -60,10 +60,10 @@ everything. ```sh # create k3d clusters -$ ./create.sh +./create.sh # list the clusters -$ k3d cluster list +k3d cluster list NAME SERVERS AGENTS LOADBALANCER east 1/1 0/0 true west 1/1 0/0 true @@ -78,10 +78,10 @@ provided scripts, but feel free to have a look! ```sh # Install Linkerd and multicluster, output to check should be a success -$ ./install.sh +./install.sh # Next, link the two clusters together -$ ./link.sh +./link.sh ``` Perfect! 
If you've made it this far with no errors, then it's a good sign. In @@ -101,17 +101,17 @@ communication. First, we will deploy our pods and services: ```sh # deploy services and mesh namespaces -$ ./deploy.sh +./deploy.sh # verify both clusters # # verify east -$ kubectl --context=k3d-east get pods +kubectl --context=k3d-east get pods NAME READY STATUS RESTARTS AGE curl-56dc7d945d-96r6p 2/2 Running 0 7s # verify west has headless service -$ kubectl --context=k3d-west get services +kubectl --context=k3d-west get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.43.0.1 443/TCP 10m nginx-svc ClusterIP None 80/TCP 8s @@ -119,7 +119,7 @@ nginx-svc ClusterIP None 80/TCP 8s # verify west has statefulset # # this may take a while to come up -$ kubectl --context=k3d-west get pods +kubectl --context=k3d-west get pods NAME READY STATUS RESTARTS AGE nginx-set-0 2/2 Running 0 53s nginx-set-1 2/2 Running 0 43s @@ -130,7 +130,7 @@ Before we go further, let's have a look at the endpoints object for the `nginx-svc`: ```sh -$ kubectl --context=k3d-west get endpoints nginx-svc -o yaml +kubectl --context=k3d-west get endpoints nginx-svc -o yaml ... subsets: - addresses: @@ -170,23 +170,23 @@ would get an answer back. We can test this out by applying the curl pod to the `west` cluster: ```sh -$ kubectl --context=k3d-west apply -f east/curl.yml -$ kubectl --context=k3d-west get pods +kubectl --context=k3d-west apply -f east/curl.yml +kubectl --context=k3d-west get pods NAME READY STATUS RESTARTS AGE nginx-set-0 2/2 Running 0 5m8s nginx-set-1 2/2 Running 0 4m58s nginx-set-2 2/2 Running 0 4m51s curl-56dc7d945d-s4n8j 0/2 PodInitializing 0 4s -$ kubectl --context=k3d-west exec -it curl-56dc7d945d-s4n8j -c curl -- bin/sh -/$ # prompt for curl pod +kubectl --context=k3d-west exec -it curl-56dc7d945d-s4n8j -c curl -- bin/sh +/# prompt for curl pod ``` If we now curl one of these instances, we will get back a response. ```sh # exec'd on the pod -/ $ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local +/ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local " @@ -218,10 +218,10 @@ Now, let's do the same, but this time from the `east` cluster. We will first export the service. ```sh -$ kubectl --context=k3d-west label service nginx-svc mirror.linkerd.io/exported="true" +kubectl --context=k3d-west label service nginx-svc mirror.linkerd.io/exported="true" service/nginx-svc labeled -$ kubectl --context=k3d-east get services +kubectl --context=k3d-east get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.43.0.1 443/TCP 20h nginx-svc-west ClusterIP None 80/TCP 29s @@ -235,7 +235,7 @@ endpoints for `nginx-svc-west` will have the same hostnames, but each hostname will point to one of the services we see above: ```sh -$ kubectl --context=k3d-east get endpoints nginx-svc-west -o yaml +kubectl --context=k3d-east get endpoints nginx-svc-west -o yaml subsets: - addresses: - hostname: nginx-set-0 @@ -251,17 +251,17 @@ cluster (`west`), will be mirrored as a clusterIP service. We will see in a second why this matters. ```sh -$ kubectl --context=k3d-east get pods +kubectl --context=k3d-east get pods NAME READY STATUS RESTARTS AGE curl-56dc7d945d-96r6p 2/2 Running 0 23m # exec and curl -$ kubectl --context=k3d-east exec pod curl-56dc7d945d-96r6p -it -c curl -- bin/sh +kubectl --context=k3d-east exec pod curl-56dc7d945d-96r6p -it -c curl -- bin/sh # we want to curl the same hostname we see in the endpoints object above. 
# however, the service and cluster domain will now be different, since we # are in a different cluster. # -/ $ curl nginx-set-0.nginx-svc-west.default.svc.east.cluster.local +/ curl nginx-set-0.nginx-svc-west.default.svc.east.cluster.local @@ -329,8 +329,8 @@ validation. To clean-up, you can remove both clusters entirely using the k3d CLI: ```sh -$ k3d cluster delete east +k3d cluster delete east cluster east deleted -$ k3d cluster delete west +k3d cluster delete west cluster west deleted ``` diff --git a/linkerd.io/content/2.15/tasks/restricting-access.md b/linkerd.io/content/2.15/tasks/restricting-access.md index 0b0b0c94b7..38ebdaeb3d 100644 --- a/linkerd.io/content/2.15/tasks/restricting-access.md +++ b/linkerd.io/content/2.15/tasks/restricting-access.md @@ -21,9 +21,9 @@ haven't already done this. Inject and install the Emojivoto application: ```bash -$ linkerd inject https://run.linkerd.io/emojivoto.yml | kubectl apply -f - +linkerd inject https://run.linkerd.io/emojivoto.yml | kubectl apply -f - ... -$ linkerd check -n emojivoto --proxy -o short +linkerd check -n emojivoto --proxy -o short ... ``` diff --git a/linkerd.io/content/2.15/tasks/securing-linkerd-tap.md b/linkerd.io/content/2.15/tasks/securing-linkerd-tap.md index 8a802c890c..639f81692f 100644 --- a/linkerd.io/content/2.15/tasks/securing-linkerd-tap.md +++ b/linkerd.io/content/2.15/tasks/securing-linkerd-tap.md @@ -60,7 +60,7 @@ kubectl auth can-i watch deployments.tap.linkerd.io -n emojivoto --as $(whoami) You can also use the Linkerd CLI's `--as` flag to confirm: ```bash -$ linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) +linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) Cannot connect to Linkerd Viz: namespaces is forbidden: User "XXXX" cannot list resource "namespaces" in API group "" at the cluster scope Validate the install with: linkerd viz check ... @@ -77,7 +77,7 @@ To enable tap access to all resources in all namespaces, you may bind your user to the `linkerd-linkerd-tap-admin` ClusterRole, installed by default: ```bash -$ kubectl describe clusterroles/linkerd-linkerd-viz-tap-admin +kubectl describe clusterroles/linkerd-linkerd-viz-tap-admin Name: linkerd-linkerd-viz-tap-admin Labels: component=tap linkerd.io/extension=viz @@ -109,7 +109,7 @@ kubectl create clusterrolebinding \ You can verify you now have tap access with: ```bash -$ linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) +linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) req id=3:0 proxy=in src=10.244.0.1:37392 dst=10.244.0.13:9996 tls=not_provided_by_remote :method=GET :authority=10.244.0.13:9996 :path=/ping ... ``` @@ -143,14 +143,14 @@ Because GCloud provides this additional level of access, there are cases where not. To validate this, check whether your GCloud user has Tap access: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces yes ``` And then validate whether your RBAC user has Tap access: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as $(gcloud config get-value account) +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as $(gcloud config get-value account) no - no RBAC policy matched ``` @@ -187,14 +187,14 @@ privileges necessary to tap resources. 
To confirm: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web yes ``` This access is enabled via a `linkerd-linkerd-viz-web-admin` ClusterRoleBinding: ```bash -$ kubectl describe clusterrolebindings/linkerd-linkerd-viz-web-admin +kubectl describe clusterrolebindings/linkerd-linkerd-viz-web-admin Name: linkerd-linkerd-viz-web-admin Labels: component=web linkerd.io/extensions=viz @@ -227,6 +227,6 @@ kubectl delete clusterrolebindings/linkerd-linkerd-viz-web-admin To confirm: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web no ``` diff --git a/linkerd.io/content/2.15/tasks/troubleshooting.md b/linkerd.io/content/2.15/tasks/troubleshooting.md index bc58809cf8..2c57453aa6 100644 --- a/linkerd.io/content/2.15/tasks/troubleshooting.md +++ b/linkerd.io/content/2.15/tasks/troubleshooting.md @@ -230,7 +230,7 @@ Example failure: Ensure the Linkerd ClusterRoles exist: ```bash -$ kubectl get clusterroles | grep linkerd +kubectl get clusterroles | grep linkerd linkerd-linkerd-destination 9d linkerd-linkerd-identity 9d linkerd-linkerd-proxy-injector 9d @@ -240,7 +240,7 @@ linkerd-policy 9d Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create clusterroles +kubectl auth can-i create clusterroles yes ``` @@ -257,7 +257,7 @@ Example failure: Ensure the Linkerd ClusterRoleBindings exist: ```bash -$ kubectl get clusterrolebindings | grep linkerd +kubectl get clusterrolebindings | grep linkerd linkerd-linkerd-destination 9d linkerd-linkerd-identity 9d linkerd-linkerd-proxy-injector 9d @@ -267,7 +267,7 @@ linkerd-destination-policy 9d Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create clusterrolebindings +kubectl auth can-i create clusterrolebindings yes ``` @@ -284,7 +284,7 @@ Example failure: Ensure the Linkerd ServiceAccounts exist: ```bash -$ kubectl -n linkerd get serviceaccounts +kubectl -n linkerd get serviceaccounts NAME SECRETS AGE default 1 14m linkerd-destination 1 14m @@ -297,7 +297,7 @@ Also ensure you have permission to create ServiceAccounts in the Linkerd namespace: ```bash -$ kubectl -n linkerd auth can-i create serviceaccounts +kubectl -n linkerd auth can-i create serviceaccounts yes ``` @@ -314,7 +314,7 @@ Example failure: Ensure the Linkerd CRD exists: ```bash -$ kubectl get customresourcedefinitions +kubectl get customresourcedefinitions NAME CREATED AT serviceprofiles.linkerd.io 2019-04-25T21:47:31Z ``` @@ -322,7 +322,7 @@ serviceprofiles.linkerd.io 2019-04-25T21:47:31Z Also ensure you have permission to create CRDs: ```bash -$ kubectl auth can-i create customresourcedefinitions +kubectl auth can-i create customresourcedefinitions yes ``` @@ -339,14 +339,14 @@ Example failure: Ensure the Linkerd MutatingWebhookConfigurations exists: ```bash -$ kubectl get mutatingwebhookconfigurations | grep linkerd +kubectl get mutatingwebhookconfigurations | grep linkerd linkerd-proxy-injector-webhook-config 2019-07-01T13:13:26Z ``` Also ensure you have permission to create MutatingWebhookConfigurations: ```bash -$ kubectl auth can-i create mutatingwebhookconfigurations +kubectl auth can-i create mutatingwebhookconfigurations yes ``` @@ -363,14 +363,14 @@ Example failure: Ensure 
the Linkerd ValidatingWebhookConfiguration exists: ```bash -$ kubectl get validatingwebhookconfigurations | grep linkerd +kubectl get validatingwebhookconfigurations | grep linkerd linkerd-sp-validator-webhook-config 2019-07-01T13:13:26Z ``` Also ensure you have permission to create ValidatingWebhookConfigurations: ```bash -$ kubectl auth can-i create validatingwebhookconfigurations +kubectl auth can-i create validatingwebhookconfigurations yes ``` @@ -418,7 +418,7 @@ Example failure: Ensure the Linkerd ConfigMap exists: ```bash -$ kubectl -n linkerd get configmap/linkerd-config +kubectl -n linkerd get configmap/linkerd-config NAME DATA AGE linkerd-config 3 61m ``` @@ -426,7 +426,7 @@ linkerd-config 3 61m Also ensure you have permission to create ConfigMaps: ```bash -$ kubectl -n linkerd auth can-i create configmap +kubectl -n linkerd auth can-i create configmap yes ``` @@ -780,7 +780,7 @@ Example failure: Verify the state of the control plane pods with: ```bash -$ kubectl -n linkerd get po +kubectl -n linkerd get po NAME READY STATUS RESTARTS AGE linkerd-destination-5fd7b5d466-szgqm 2/2 Running 1 12m linkerd-identity-54df78c479-hbh5m 2/2 Running 0 12m @@ -862,7 +862,7 @@ Ensure you can connect to the Linkerd version check endpoint from the environment the `linkerd` cli is running: ```bash -$ curl "https://versioncheck.linkerd.io/version.json?version=edge-19.1.2&uuid=test-uuid&source=cli" +curl "https://versioncheck.linkerd.io/version.json?version=edge-19.1.2&uuid=test-uuid&source=cli" {"stable":"stable-2.1.0","edge":"edge-19.1.2"} ``` @@ -961,7 +961,7 @@ normally. Example failure: ```bash -$ linkerd check --proxy --namespace foo +linkerd check --proxy --namespace foo ... × data plane namespace exists The "foo" namespace does not exist @@ -1133,7 +1133,7 @@ Example error: Ensure that the linkerd-cni-config ConfigMap exists in the CNI namespace: ```bash -$ kubectl get cm linkerd-cni-config -n linkerd-cni +kubectl get cm linkerd-cni-config -n linkerd-cni NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES linkerd-linkerd-cni-cni false RunAsAny RunAsAny RunAsAny RunAsAny false hostPath,secret ``` @@ -1141,7 +1141,7 @@ linkerd-linkerd-cni-cni false RunAsAny RunAsAny RunAsAny RunAs Also ensure you have permission to create ConfigMaps: ```bash -$ kubectl auth can-i create ConfigMaps +kubectl auth can-i create ConfigMaps yes ``` @@ -1158,7 +1158,7 @@ Example error: Ensure that the cluster role exists: ```bash -$ kubectl get clusterrole linkerd-cni +kubectl get clusterrole linkerd-cni NAME AGE linkerd-cni 54m ``` @@ -1166,7 +1166,7 @@ linkerd-cni 54m Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create ClusterRoles +kubectl auth can-i create ClusterRoles yes ``` @@ -1183,7 +1183,7 @@ Example error: Ensure that the cluster role binding exists: ```bash -$ kubectl get clusterrolebinding linkerd-cni +kubectl get clusterrolebinding linkerd-cni NAME AGE linkerd-cni 54m ``` @@ -1191,7 +1191,7 @@ linkerd-cni 54m Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create ClusterRoleBindings +kubectl auth can-i create ClusterRoleBindings yes ``` @@ -1208,7 +1208,7 @@ Example error: Ensure that the CNI service account exists in the CNI namespace: ```bash -$ kubectl get ServiceAccount linkerd-cni -n linkerd-cni +kubectl get ServiceAccount linkerd-cni -n linkerd-cni NAME SECRETS AGE linkerd-cni 1 45m ``` @@ -1216,7 +1216,7 @@ linkerd-cni 1 45m Also ensure you have permission to create ServiceAccount: ```bash 
-$ kubectl auth can-i create ServiceAccounts -n linkerd-cni +kubectl auth can-i create ServiceAccounts -n linkerd-cni yes ``` @@ -1233,7 +1233,7 @@ Example error: Ensure that the CNI daemonset exists in the CNI namespace: ```bash -$ kubectl get ds -n linkerd-cni +kubectl get ds -n linkerd-cni NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE linkerd-cni 1 1 1 1 1 beta.kubernetes.io/os=linux 14m ``` @@ -1241,7 +1241,7 @@ linkerd-cni 1 1 1 1 1 beta.kubernet Also ensure you have permission to create DaemonSets: ```bash -$ kubectl auth can-i create DaemonSets -n linkerd-cni +kubectl auth can-i create DaemonSets -n linkerd-cni yes ``` @@ -1258,7 +1258,7 @@ Example failure: Ensure that all the CNI pods are running: ```bash -$ kubectl get po -n linkerd-cn +kubectl get po -n linkerd-cni NAME READY STATUS RESTARTS AGE linkerd-cni-rzp2q 1/1 Running 0 9m20s linkerd-cni-mf564 1/1 Running 0 9m22s @@ -1268,7 +1268,7 @@ linkerd-cni-p5670 1/1 Running 0 9m25s Ensure that all pods have finished the deployment of the CNI config and binary: ```bash -$ kubectl logs linkerd-cni-rzp2q -n linkerd-cni +kubectl logs linkerd-cni-rzp2q -n linkerd-cni Wrote linkerd CNI binaries to /host/opt/cni/bin Created CNI config /host/etc/cni/net.d/10-kindnet.conflist Done configuring CNI. Sleep=true @@ -1296,7 +1296,7 @@ Make sure multicluster extension is correctly installed and that the `links.multicluster.linkerd.io` CRD is present. ```bash -$ kubectl get crds | grep multicluster +kubectl get crds | grep multicluster NAME CREATED AT links.multicluster.linkerd.io 2021-03-10T09:58:10Z ``` @@ -1375,7 +1375,7 @@ the rules section. Expected rules for `linkerd-service-mirror-access-local-resources` cluster role: ```bash -$ kubectl --context=local get clusterrole linkerd-service-mirror-access-local-resources -o yaml +kubectl --context=local get clusterrole linkerd-service-mirror-access-local-resources -o yaml kind: ClusterRole metadata: labels: @@ -1408,7 +1408,7 @@ rules: Expected rules for `linkerd-service-mirror-read-remote-creds` role: ```bash -$ kubectl --context=local get role linkerd-service-mirror-read-remote-creds -n linkerd-multicluster -o yaml +kubectl --context=local get role linkerd-service-mirror-read-remote-creds -n linkerd-multicluster -o yaml kind: Role metadata: labels: @@ -1441,7 +1441,7 @@ everything to start up. 
If this is a permanent error, you'll want to validate the state of the controller pod with: ```bash -$ kubectl --all-namespaces get po --selector linkerd.io/control-plane-component=linkerd-service-mirror +kubectl --all-namespaces get po --selector linkerd.io/control-plane-component=linkerd-service-mirror NAME READY STATUS RESTARTS AGE linkerd-service-mirror-7bb8ff5967-zg265 2/2 Running 0 50m ``` @@ -1559,7 +1559,7 @@ Example failure: Ensure the linkerd-viz extension ClusterRoles exist: ```bash -$ kubectl get clusterroles | grep linkerd-viz +kubectl get clusterroles | grep linkerd-viz linkerd-linkerd-viz-metrics-api 2021-01-26T18:02:17Z linkerd-linkerd-viz-prometheus 2021-01-26T18:02:17Z linkerd-linkerd-viz-tap 2021-01-26T18:02:17Z @@ -1570,7 +1570,7 @@ linkerd-linkerd-viz-web-check 2021-01-2 Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create clusterroles +kubectl auth can-i create clusterroles yes ``` @@ -1587,7 +1587,7 @@ Example failure: Ensure the linkerd-viz extension ClusterRoleBindings exist: ```bash -$ kubectl get clusterrolebindings | grep linkerd-viz +kubectl get clusterrolebindings | grep linkerd-viz linkerd-linkerd-viz-metrics-api ClusterRole/linkerd-linkerd-viz-metrics-api 18h linkerd-linkerd-viz-prometheus ClusterRole/linkerd-linkerd-viz-prometheus 18h linkerd-linkerd-viz-tap ClusterRole/linkerd-linkerd-viz-tap 18h @@ -1599,7 +1599,7 @@ linkerd-linkerd-viz-web-check ClusterRole/linkerd-linke Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create clusterrolebindings +kubectl auth can-i create clusterrolebindings yes ``` @@ -1688,7 +1688,7 @@ requirements in the cluster: Ensure all the linkerd-viz pods are injected ```bash -$ kubectl -n linkerd-viz get pods +kubectl -n linkerd-viz get pods NAME READY STATUS RESTARTS AGE grafana-68cddd7cc8-nrv4h 2/2 Running 3 18h metrics-api-77f684f7c7-hnw8r 2/2 Running 2 18h @@ -1712,7 +1712,7 @@ Make sure that the `proxy-injector` is working correctly by running Ensure all the linkerd-viz pods are running with 2/2 ```bash -$ kubectl -n linkerd-viz get pods +kubectl -n linkerd-viz get pods NAME READY STATUS RESTARTS AGE grafana-68cddd7cc8-nrv4h 2/2 Running 3 18h metrics-api-77f684f7c7-hnw8r 2/2 Running 2 18h @@ -1895,7 +1895,7 @@ versions in sync by updating either the CLI or linkerd-jaeger as necessary. 
Ensure all the jaeger pods are injected ```bash -$ kubectl -n linkerd-jaeger get pods +kubectl -n linkerd-jaeger get pods NAME READY STATUS RESTARTS AGE collector-69cc44dfbc-rhpfg 2/2 Running 0 11s jaeger-6f98d5c979-scqlq 2/2 Running 0 11s @@ -1916,7 +1916,7 @@ Make sure that the `proxy-injector` is working correctly by running Ensure all the linkerd-jaeger pods are running with 2/2 ```bash -$ kubectl -n linkerd-jaeger get pods +kubectl -n linkerd-jaeger get pods NAME READY STATUS RESTARTS AGE jaeger-injector-548684d74b-bcq5h 2/2 Running 0 5s collector-69cc44dfbc-wqf6s 2/2 Running 0 5s @@ -1965,7 +1965,7 @@ Ensure you can connect to the Linkerd Buoyant version check endpoint from the environment the `linkerd` cli is running: ```bash -$ curl https://buoyant.cloud/version.json +curl https://buoyant.cloud/version.json {"linkerd-buoyant":"v0.4.4"} ``` @@ -2030,7 +2030,7 @@ linkerd-buoyant install | kubectl apply -f - Ensure that the cluster role exists: ```bash -$ kubectl get clusterrole buoyant-cloud-agent +kubectl get clusterrole buoyant-cloud-agent NAME CREATED AT buoyant-cloud-agent 2020-11-13T00:59:50Z ``` @@ -2038,7 +2038,7 @@ buoyant-cloud-agent 2020-11-13T00:59:50Z Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create ClusterRoles +kubectl auth can-i create ClusterRoles yes ``` @@ -2053,7 +2053,7 @@ yes Ensure that the cluster role binding exists: ```bash -$ kubectl get clusterrolebinding buoyant-cloud-agent +kubectl get clusterrolebinding buoyant-cloud-agent NAME ROLE AGE buoyant-cloud-agent ClusterRole/buoyant-cloud-agent 301d ``` @@ -2061,7 +2061,7 @@ buoyant-cloud-agent ClusterRole/buoyant-cloud-agent 301d Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create ClusterRoleBindings +kubectl auth can-i create ClusterRoleBindings yes ``` @@ -2076,7 +2076,7 @@ yes Ensure that the service account exists: ```bash -$ kubectl -n buoyant-cloud get serviceaccount buoyant-cloud-agent +kubectl -n buoyant-cloud get serviceaccount buoyant-cloud-agent NAME SECRETS AGE buoyant-cloud-agent 1 301d ``` @@ -2084,7 +2084,7 @@ buoyant-cloud-agent 1 301d Also ensure you have permission to create ServiceAccounts: ```bash -$ kubectl -n buoyant-cloud auth can-i create ServiceAccount +kubectl -n buoyant-cloud auth can-i create ServiceAccount yes ``` @@ -2099,7 +2099,7 @@ yes Ensure that the secret exists: ```bash -$ kubectl -n buoyant-cloud get secret buoyant-cloud-id +kubectl -n buoyant-cloud get secret buoyant-cloud-id NAME TYPE DATA AGE buoyant-cloud-id Opaque 4 301d ``` @@ -2107,7 +2107,7 @@ buoyant-cloud-id Opaque 4 301d Also ensure you have permission to create ServiceAccounts: ```bash -$ kubectl -n buoyant-cloud auth can-i create ServiceAccount +kubectl -n buoyant-cloud auth can-i create ServiceAccount yes ``` @@ -2145,7 +2145,7 @@ everything to start up. 
If this is a permanent error, you'll want to validate the state of the `buoyant-cloud-agent` Deployment with: ```bash -$ kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-agent +kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-agent NAME READY STATUS RESTARTS AGE buoyant-cloud-agent-6b8c6888d7-htr7d 2/2 Running 0 156m ``` @@ -2168,7 +2168,7 @@ Ensure the `buoyant-cloud-agent` pod is injected, the `READY` column should show `2/2`: ```bash -$ kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-agent +kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-agent NAME READY STATUS RESTARTS AGE buoyant-cloud-agent-6b8c6888d7-htr7d 2/2 Running 0 161m ``` @@ -2187,7 +2187,7 @@ Make sure that the `proxy-injector` is working correctly by running Check the version with: ```bash -$ linkerd-buoyant version +linkerd-buoyant version CLI version: v0.4.4 Agent version: v0.4.4 ``` @@ -2246,7 +2246,7 @@ everything to start up. If this is a permanent error, you'll want to validate the state of the `buoyant-cloud-metrics` DaemonSet with: ```bash -$ kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-metrics +kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-metrics NAME READY STATUS RESTARTS AGE buoyant-cloud-metrics-kt9mv 2/2 Running 0 163m buoyant-cloud-metrics-q8jhj 2/2 Running 0 163m @@ -2272,7 +2272,7 @@ Ensure the `buoyant-cloud-metrics` pods are injected, the `READY` column should show `2/2`: ```bash -$ kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-metrics +kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-metrics NAME READY STATUS RESTARTS AGE buoyant-cloud-metrics-kt9mv 2/2 Running 0 166m buoyant-cloud-metrics-q8jhj 2/2 Running 0 166m @@ -2294,7 +2294,7 @@ Make sure that the `proxy-injector` is working correctly by running Check the version with: ```bash -$ kubectl -n buoyant-cloud get daemonset/buoyant-cloud-metrics -o jsonpath='{.metadata.labels}' +kubectl -n buoyant-cloud get daemonset/buoyant-cloud-metrics -o jsonpath='{.metadata.labels}' {"app.kubernetes.io/name":"metrics","app.kubernetes.io/part-of":"buoyant-cloud","app.kubernetes.io/version":"v0.4.4"} ``` diff --git a/linkerd.io/content/2.15/tasks/upgrade.md b/linkerd.io/content/2.15/tasks/upgrade.md index 23547217a4..a73f4d54fc 100644 --- a/linkerd.io/content/2.15/tasks/upgrade.md +++ b/linkerd.io/content/2.15/tasks/upgrade.md @@ -379,7 +379,7 @@ Find the release name you used for the `linkerd2` chart, and the namespace where this release stored its config: ```bash -$ helm ls -A +helm ls -A NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION linkerd default 1 2021-11-22 17:14:50.751436374 -0500 -05 deployed linkerd2-2.11.1 stable-2.11.1 ``` @@ -412,18 +412,18 @@ the `linkerd-crds`, `linkerd-control-plane` and `linkerd-smi` charts: ```bash # First migrate the CRDs -$ helm -n default get manifest linkerd | \ +helm -n default get manifest linkerd | \ yq 'select(.kind == "CustomResourceDefinition") | .metadata.name' | \ grep -v '\-\-\-' | \ xargs -n1 sh -c \ 'kubectl annotate --overwrite crd/$0 meta.helm.sh/release-name=linkerd-crds meta.helm.sh/release-namespace=linkerd' # Special case for TrafficSplit (only use if you have TrafficSplit CRs) -$ kubectl annotate --overwrite crd/trafficsplits.split.smi-spec.io \ +kubectl annotate --overwrite crd/trafficsplits.split.smi-spec.io \ meta.helm.sh/release-name=linkerd-smi meta.helm.sh/release-namespace=linkerd-smi # Now migrate all the other resources -$ helm -n default get manifest linkerd | \ +helm -n 
default get manifest linkerd | \ yq 'select(.kind != "CustomResourceDefinition")' | \ yq '.kind, .metadata.name, .metadata.namespace' | \ grep -v '\-\-\-' | @@ -437,14 +437,14 @@ above. ```bash # First make sure you update the helm repo -$ helm repo up +helm repo up # Install the linkerd-crds chart -$ helm install linkerd-crds -n linkerd --create-namespace linkerd/linkerd-crds +helm install linkerd-crds -n linkerd --create-namespace linkerd/linkerd-crds # Install the linkerd-control-plane chart # (remember to add any customizations you retrieved above) -$ helm install linkerd-control-plane \ +helm install linkerd-control-plane \ -n linkerd \ --set-file identityTrustAnchorsPEM=ca.crt \ --set-file identity.issuer.tls.crtPEM=issuer.crt \ @@ -452,8 +452,8 @@ $ helm install linkerd-control-plane \ linkerd/linkerd-control-plane # Optional: if using TrafficSplit CRs -$ helm repo add l5d-smi https://linkerd.github.io/linkerd-smi -$ helm install linkerd-smi -n linkerd-smi --create-namespace l5d-smi/linkerd-smi +helm repo add l5d-smi https://linkerd.github.io/linkerd-smi +helm install linkerd-smi -n linkerd-smi --create-namespace l5d-smi/linkerd-smi ``` ##### Cleaning up the old linkerd2 Helm release @@ -464,7 +464,7 @@ remove the Helm release config for the old `linkerd2` chart (assuming you used the "Secret" storage backend, which is the default): ```bash -$ kubectl -n default delete secret \ +kubectl -n default delete secret \ --field-selector type=helm.sh/release.v1 \ -l name=linkerd,owner=helm ``` diff --git a/linkerd.io/content/2.16/reference/cli/check.md b/linkerd.io/content/2.16/reference/cli/check.md index 7cd61cd237..67a2486908 100644 --- a/linkerd.io/content/2.16/reference/cli/check.md +++ b/linkerd.io/content/2.16/reference/cli/check.md @@ -12,7 +12,7 @@ for a full list of all the possible checks, what they do and how to fix them. ## Example output ```bash -$ linkerd check +linkerd check kubernetes-api -------------- √ can initialize the client diff --git a/linkerd.io/content/2.16/reference/iptables.md b/linkerd.io/content/2.16/reference/iptables.md index 67a7ea89de..9b4d229a59 100644 --- a/linkerd.io/content/2.16/reference/iptables.md +++ b/linkerd.io/content/2.16/reference/iptables.md @@ -164,7 +164,7 @@ Alternatively, if you want to inspect the iptables rules created for a pod, you can retrieve them through the following command: ```bash -$ kubectl -n logs linkerd-init +kubectl -n logs linkerd-init # where is the name of the pod # you want to see the iptables rules for ``` diff --git a/linkerd.io/content/2.16/tasks/configuring-dynamic-request-routing.md b/linkerd.io/content/2.16/tasks/configuring-dynamic-request-routing.md index 004b50ded6..a44d12a1a5 100644 --- a/linkerd.io/content/2.16/tasks/configuring-dynamic-request-routing.md +++ b/linkerd.io/content/2.16/tasks/configuring-dynamic-request-routing.md @@ -67,7 +67,7 @@ Requests to `/echo` on port 9898 to the frontend pod will get forwarded the pod pointed by the Service `backend-a-podinfo`: ```bash -$ curl -sX POST localhost:9898/echo \ +curl -sX POST localhost:9898/echo \ | grep -o 'PODINFO_UI_MESSAGE=. backend' PODINFO_UI_MESSAGE=A backend @@ -132,7 +132,7 @@ the `backend-a-podinfo` Service. The previous requests should still reach `backend-a-podinfo` only: ```bash -$ curl -sX POST localhost:9898/echo \ +curl -sX POST localhost:9898/echo \ | grep -o 'PODINFO_UI_MESSAGE=. 
backend' PODINFO_UI_MESSAGE=A backend @@ -142,7 +142,7 @@ But if we add the `x-request-id: alternative` header, they get routed to `backend-b-podinfo`: ```bash -$ curl -sX POST \ +curl -sX POST \ -H 'x-request-id: alternative' \ localhost:9898/echo \ | grep -o 'PODINFO_UI_MESSAGE=. backend' diff --git a/linkerd.io/content/2.16/tasks/configuring-per-route-policy.md b/linkerd.io/content/2.16/tasks/configuring-per-route-policy.md index a5c8b5c2ef..63b79fc6d4 100644 --- a/linkerd.io/content/2.16/tasks/configuring-per-route-policy.md +++ b/linkerd.io/content/2.16/tasks/configuring-per-route-policy.md @@ -30,7 +30,7 @@ haven't already done this. Inject and install the Books demo application: ```bash -$ kubectl create ns booksapp && \ +kubectl create ns booksapp && \ curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/booksapp.yml \ | linkerd inject - \ | kubectl -n booksapp apply -f - @@ -44,21 +44,21 @@ run in the `booksapp` namespace. Confirm that the Linkerd data plane was injected successfully: ```bash -$ linkerd check -n booksapp --proxy -o short +linkerd check -n booksapp --proxy -o short ``` You can take a quick look at all the components that were added to your cluster by running: ```bash -$ kubectl -n booksapp get all +kubectl -n booksapp get all ``` Once the rollout has completed successfully, you can access the app itself by port-forwarding `webapp` locally: ```bash -$ kubectl -n booksapp port-forward svc/webapp 7000 & +kubectl -n booksapp port-forward svc/webapp 7000 & ``` Open [http://localhost:7000/](http://localhost:7000/) in your browser to see the @@ -87,7 +87,7 @@ First, let's run the `linkerd viz authz` command to list the authorization resources that currently exist for the `authors` deployment: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 default default:all-unauthenticated default/all-unauthenticated 0.0rps 70.31% 8.1rps 1ms 43ms 49ms probe default:all-unauthenticated default/probe 0.0rps 100.00% 0.3rps 1ms 1ms 1ms @@ -124,7 +124,7 @@ Now that we've defined a [`Server`] for the authors `Deployment`, we can run the currently unauthorized: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 default authors-server 9.5rps 0.00% 0.0rps 0ms 0ms 0ms probe authors-server default/probe 0.0rps 100.00% 0.1rps 1ms 1ms 1ms @@ -312,7 +312,7 @@ network (0.0.0.0). Running `linkerd viz authz` again, we can now see that our new policies exist: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 authors-get-route authors-server authorizationpolicy/authors-get-policy 0.0rps 100.00% 0.1rps 2ms 2ms 2ms authors-probe-route authors-server authorizationpolicy/authors-probe-policy 0.0rps 100.00% 0.1rps 1ms 1ms 1ms @@ -383,7 +383,7 @@ requests, but we haven't _authorized_ requests to that route. 
Running the requests to `authors-modify-route`: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 authors-get-route authors-server authorizationpolicy/authors-get-policy - - - - - - authors-modify-route authors-server 9.7rps 0.00% 0.0rps 0ms 0ms 0ms @@ -442,7 +442,7 @@ Running the `linkerd viz authz` command one last time, we now see that all traffic is authorized: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 authors-get-route authors-server authorizationpolicy/authors-get-policy 0.0rps 100.00% 0.1rps 0ms 0ms 0ms authors-modify-route authors-server authorizationpolicy/authors-modify-policy 0.0rps 100.00% 0.0rps 0ms 0ms 0ms diff --git a/linkerd.io/content/2.16/tasks/multicluster-using-statefulsets.md b/linkerd.io/content/2.16/tasks/multicluster-using-statefulsets.md index 912241d181..c8d4400521 100644 --- a/linkerd.io/content/2.16/tasks/multicluster-using-statefulsets.md +++ b/linkerd.io/content/2.16/tasks/multicluster-using-statefulsets.md @@ -48,8 +48,8 @@ The first step is to clone the demo repository on your local machine. ```sh # clone example repository -$ git clone git@github.com:mateiidavid/l2d-k3d-statefulset.git -$ cd l2d-k3d-statefulset +git clone git@github.com:mateiidavid/l2d-k3d-statefulset.git +cd l2d-k3d-statefulset ``` The second step consists of creating two `k3d` clusters named `east` and `west`, @@ -60,10 +60,10 @@ everything. ```sh # create k3d clusters -$ ./create.sh +./create.sh # list the clusters -$ k3d cluster list +k3d cluster list NAME SERVERS AGENTS LOADBALANCER east 1/1 0/0 true west 1/1 0/0 true @@ -78,10 +78,10 @@ provided scripts, but feel free to have a look! ```sh # Install Linkerd and multicluster, output to check should be a success -$ ./install.sh +./install.sh # Next, link the two clusters together -$ ./link.sh +./link.sh ``` Perfect! If you've made it this far with no errors, then it's a good sign. In @@ -101,17 +101,17 @@ communication. First, we will deploy our pods and services: ```sh # deploy services and mesh namespaces -$ ./deploy.sh +./deploy.sh # verify both clusters # # verify east -$ kubectl --context=k3d-east get pods +kubectl --context=k3d-east get pods NAME READY STATUS RESTARTS AGE curl-56dc7d945d-96r6p 2/2 Running 0 7s # verify west has headless service -$ kubectl --context=k3d-west get services +kubectl --context=k3d-west get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.43.0.1 443/TCP 10m nginx-svc ClusterIP None 80/TCP 8s @@ -119,7 +119,7 @@ nginx-svc ClusterIP None 80/TCP 8s # verify west has statefulset # # this may take a while to come up -$ kubectl --context=k3d-west get pods +kubectl --context=k3d-west get pods NAME READY STATUS RESTARTS AGE nginx-set-0 2/2 Running 0 53s nginx-set-1 2/2 Running 0 43s @@ -130,7 +130,7 @@ Before we go further, let's have a look at the endpoints object for the `nginx-svc`: ```sh -$ kubectl --context=k3d-west get endpoints nginx-svc -o yaml +kubectl --context=k3d-west get endpoints nginx-svc -o yaml ... subsets: - addresses: @@ -170,23 +170,23 @@ would get an answer back. 
We can test this out by applying the curl pod to the `west` cluster: ```sh -$ kubectl --context=k3d-west apply -f east/curl.yml -$ kubectl --context=k3d-west get pods +kubectl --context=k3d-west apply -f east/curl.yml +kubectl --context=k3d-west get pods NAME READY STATUS RESTARTS AGE nginx-set-0 2/2 Running 0 5m8s nginx-set-1 2/2 Running 0 4m58s nginx-set-2 2/2 Running 0 4m51s curl-56dc7d945d-s4n8j 0/2 PodInitializing 0 4s -$ kubectl --context=k3d-west exec -it curl-56dc7d945d-s4n8j -c curl -- bin/sh -/$ # prompt for curl pod +kubectl --context=k3d-west exec -it curl-56dc7d945d-s4n8j -c curl -- bin/sh +# prompt for curl pod ``` If we now curl one of these instances, we will get back a response. ```sh # exec'd on the pod -/ $ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local +curl nginx-set-0.nginx-svc.default.svc.west.cluster.local " @@ -218,10 +218,10 @@ Now, let's do the same, but this time from the `east` cluster. We will first export the service. ```sh -$ kubectl --context=k3d-west label service nginx-svc mirror.linkerd.io/exported="true" +kubectl --context=k3d-west label service nginx-svc mirror.linkerd.io/exported="true" service/nginx-svc labeled -$ kubectl --context=k3d-east get services +kubectl --context=k3d-east get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.43.0.1 443/TCP 20h nginx-svc-west ClusterIP None 80/TCP 29s @@ -235,7 +235,7 @@ endpoints for `nginx-svc-west` will have the same hostnames, but each hostname will point to one of the services we see above: ```sh -$ kubectl --context=k3d-east get endpoints nginx-svc-west -o yaml +kubectl --context=k3d-east get endpoints nginx-svc-west -o yaml subsets: - addresses: - hostname: nginx-set-0 @@ -251,17 +251,17 @@ cluster (`west`), will be mirrored as a clusterIP service. We will see in a second why this matters. ```sh -$ kubectl --context=k3d-east get pods +kubectl --context=k3d-east get pods NAME READY STATUS RESTARTS AGE curl-56dc7d945d-96r6p 2/2 Running 0 23m # exec and curl -$ kubectl --context=k3d-east exec pod curl-56dc7d945d-96r6p -it -c curl -- bin/sh +kubectl --context=k3d-east exec pod curl-56dc7d945d-96r6p -it -c curl -- bin/sh # we want to curl the same hostname we see in the endpoints object above. # however, the service and cluster domain will now be different, since we # are in a different cluster. # -/ $ curl nginx-set-0.nginx-svc-west.default.svc.east.cluster.local +curl nginx-set-0.nginx-svc-west.default.svc.east.cluster.local @@ -329,8 +329,8 @@ validation. To clean-up, you can remove both clusters entirely using the k3d CLI: ```sh -$ k3d cluster delete east +k3d cluster delete east cluster east deleted -$ k3d cluster delete west +k3d cluster delete west cluster west deleted ``` diff --git a/linkerd.io/content/2.16/tasks/restricting-access.md b/linkerd.io/content/2.16/tasks/restricting-access.md index 5654518600..c9850725f7 100644 --- a/linkerd.io/content/2.16/tasks/restricting-access.md +++ b/linkerd.io/content/2.16/tasks/restricting-access.md @@ -21,9 +21,9 @@ haven't already done this. Inject and install the Emojivoto application: ```bash -$ linkerd inject https://run.linkerd.io/emojivoto.yml | kubectl apply -f - +linkerd inject https://run.linkerd.io/emojivoto.yml | kubectl apply -f - ... -$ linkerd check -n emojivoto --proxy -o short +linkerd check -n emojivoto --proxy -o short ...
``` diff --git a/linkerd.io/content/2.16/tasks/securing-linkerd-tap.md b/linkerd.io/content/2.16/tasks/securing-linkerd-tap.md index 8a802c890c..639f81692f 100644 --- a/linkerd.io/content/2.16/tasks/securing-linkerd-tap.md +++ b/linkerd.io/content/2.16/tasks/securing-linkerd-tap.md @@ -60,7 +60,7 @@ kubectl auth can-i watch deployments.tap.linkerd.io -n emojivoto --as $(whoami) You can also use the Linkerd CLI's `--as` flag to confirm: ```bash -$ linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) +linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) Cannot connect to Linkerd Viz: namespaces is forbidden: User "XXXX" cannot list resource "namespaces" in API group "" at the cluster scope Validate the install with: linkerd viz check ... @@ -77,7 +77,7 @@ To enable tap access to all resources in all namespaces, you may bind your user to the `linkerd-linkerd-tap-admin` ClusterRole, installed by default: ```bash -$ kubectl describe clusterroles/linkerd-linkerd-viz-tap-admin +kubectl describe clusterroles/linkerd-linkerd-viz-tap-admin Name: linkerd-linkerd-viz-tap-admin Labels: component=tap linkerd.io/extension=viz @@ -109,7 +109,7 @@ kubectl create clusterrolebinding \ You can verify you now have tap access with: ```bash -$ linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) +linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) req id=3:0 proxy=in src=10.244.0.1:37392 dst=10.244.0.13:9996 tls=not_provided_by_remote :method=GET :authority=10.244.0.13:9996 :path=/ping ... ``` @@ -143,14 +143,14 @@ Because GCloud provides this additional level of access, there are cases where not. To validate this, check whether your GCloud user has Tap access: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces yes ``` And then validate whether your RBAC user has Tap access: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as $(gcloud config get-value account) +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as $(gcloud config get-value account) no - no RBAC policy matched ``` @@ -187,14 +187,14 @@ privileges necessary to tap resources. 
To confirm: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web yes ``` This access is enabled via a `linkerd-linkerd-viz-web-admin` ClusterRoleBinding: ```bash -$ kubectl describe clusterrolebindings/linkerd-linkerd-viz-web-admin +kubectl describe clusterrolebindings/linkerd-linkerd-viz-web-admin Name: linkerd-linkerd-viz-web-admin Labels: component=web linkerd.io/extensions=viz @@ -227,6 +227,6 @@ kubectl delete clusterrolebindings/linkerd-linkerd-viz-web-admin To confirm: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web no ``` diff --git a/linkerd.io/content/2.16/tasks/troubleshooting.md b/linkerd.io/content/2.16/tasks/troubleshooting.md index bc58809cf8..2c57453aa6 100644 --- a/linkerd.io/content/2.16/tasks/troubleshooting.md +++ b/linkerd.io/content/2.16/tasks/troubleshooting.md @@ -230,7 +230,7 @@ Example failure: Ensure the Linkerd ClusterRoles exist: ```bash -$ kubectl get clusterroles | grep linkerd +kubectl get clusterroles | grep linkerd linkerd-linkerd-destination 9d linkerd-linkerd-identity 9d linkerd-linkerd-proxy-injector 9d @@ -240,7 +240,7 @@ linkerd-policy 9d Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create clusterroles +kubectl auth can-i create clusterroles yes ``` @@ -257,7 +257,7 @@ Example failure: Ensure the Linkerd ClusterRoleBindings exist: ```bash -$ kubectl get clusterrolebindings | grep linkerd +kubectl get clusterrolebindings | grep linkerd linkerd-linkerd-destination 9d linkerd-linkerd-identity 9d linkerd-linkerd-proxy-injector 9d @@ -267,7 +267,7 @@ linkerd-destination-policy 9d Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create clusterrolebindings +kubectl auth can-i create clusterrolebindings yes ``` @@ -284,7 +284,7 @@ Example failure: Ensure the Linkerd ServiceAccounts exist: ```bash -$ kubectl -n linkerd get serviceaccounts +kubectl -n linkerd get serviceaccounts NAME SECRETS AGE default 1 14m linkerd-destination 1 14m @@ -297,7 +297,7 @@ Also ensure you have permission to create ServiceAccounts in the Linkerd namespace: ```bash -$ kubectl -n linkerd auth can-i create serviceaccounts +kubectl -n linkerd auth can-i create serviceaccounts yes ``` @@ -314,7 +314,7 @@ Example failure: Ensure the Linkerd CRD exists: ```bash -$ kubectl get customresourcedefinitions +kubectl get customresourcedefinitions NAME CREATED AT serviceprofiles.linkerd.io 2019-04-25T21:47:31Z ``` @@ -322,7 +322,7 @@ serviceprofiles.linkerd.io 2019-04-25T21:47:31Z Also ensure you have permission to create CRDs: ```bash -$ kubectl auth can-i create customresourcedefinitions +kubectl auth can-i create customresourcedefinitions yes ``` @@ -339,14 +339,14 @@ Example failure: Ensure the Linkerd MutatingWebhookConfigurations exists: ```bash -$ kubectl get mutatingwebhookconfigurations | grep linkerd +kubectl get mutatingwebhookconfigurations | grep linkerd linkerd-proxy-injector-webhook-config 2019-07-01T13:13:26Z ``` Also ensure you have permission to create MutatingWebhookConfigurations: ```bash -$ kubectl auth can-i create mutatingwebhookconfigurations +kubectl auth can-i create mutatingwebhookconfigurations yes ``` @@ -363,14 +363,14 @@ Example failure: Ensure 
the Linkerd ValidatingWebhookConfiguration exists: ```bash -$ kubectl get validatingwebhookconfigurations | grep linkerd +kubectl get validatingwebhookconfigurations | grep linkerd linkerd-sp-validator-webhook-config 2019-07-01T13:13:26Z ``` Also ensure you have permission to create ValidatingWebhookConfigurations: ```bash -$ kubectl auth can-i create validatingwebhookconfigurations +kubectl auth can-i create validatingwebhookconfigurations yes ``` @@ -418,7 +418,7 @@ Example failure: Ensure the Linkerd ConfigMap exists: ```bash -$ kubectl -n linkerd get configmap/linkerd-config +kubectl -n linkerd get configmap/linkerd-config NAME DATA AGE linkerd-config 3 61m ``` @@ -426,7 +426,7 @@ linkerd-config 3 61m Also ensure you have permission to create ConfigMaps: ```bash -$ kubectl -n linkerd auth can-i create configmap +kubectl -n linkerd auth can-i create configmap yes ``` @@ -780,7 +780,7 @@ Example failure: Verify the state of the control plane pods with: ```bash -$ kubectl -n linkerd get po +kubectl -n linkerd get po NAME READY STATUS RESTARTS AGE linkerd-destination-5fd7b5d466-szgqm 2/2 Running 1 12m linkerd-identity-54df78c479-hbh5m 2/2 Running 0 12m @@ -862,7 +862,7 @@ Ensure you can connect to the Linkerd version check endpoint from the environment the `linkerd` cli is running: ```bash -$ curl "https://versioncheck.linkerd.io/version.json?version=edge-19.1.2&uuid=test-uuid&source=cli" +curl "https://versioncheck.linkerd.io/version.json?version=edge-19.1.2&uuid=test-uuid&source=cli" {"stable":"stable-2.1.0","edge":"edge-19.1.2"} ``` @@ -961,7 +961,7 @@ normally. Example failure: ```bash -$ linkerd check --proxy --namespace foo +linkerd check --proxy --namespace foo ... × data plane namespace exists The "foo" namespace does not exist @@ -1133,7 +1133,7 @@ Example error: Ensure that the linkerd-cni-config ConfigMap exists in the CNI namespace: ```bash -$ kubectl get cm linkerd-cni-config -n linkerd-cni +kubectl get cm linkerd-cni-config -n linkerd-cni NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES linkerd-linkerd-cni-cni false RunAsAny RunAsAny RunAsAny RunAsAny false hostPath,secret ``` @@ -1141,7 +1141,7 @@ linkerd-linkerd-cni-cni false RunAsAny RunAsAny RunAsAny RunAs Also ensure you have permission to create ConfigMaps: ```bash -$ kubectl auth can-i create ConfigMaps +kubectl auth can-i create ConfigMaps yes ``` @@ -1158,7 +1158,7 @@ Example error: Ensure that the cluster role exists: ```bash -$ kubectl get clusterrole linkerd-cni +kubectl get clusterrole linkerd-cni NAME AGE linkerd-cni 54m ``` @@ -1166,7 +1166,7 @@ linkerd-cni 54m Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create ClusterRoles +kubectl auth can-i create ClusterRoles yes ``` @@ -1183,7 +1183,7 @@ Example error: Ensure that the cluster role binding exists: ```bash -$ kubectl get clusterrolebinding linkerd-cni +kubectl get clusterrolebinding linkerd-cni NAME AGE linkerd-cni 54m ``` @@ -1191,7 +1191,7 @@ linkerd-cni 54m Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create ClusterRoleBindings +kubectl auth can-i create ClusterRoleBindings yes ``` @@ -1208,7 +1208,7 @@ Example error: Ensure that the CNI service account exists in the CNI namespace: ```bash -$ kubectl get ServiceAccount linkerd-cni -n linkerd-cni +kubectl get ServiceAccount linkerd-cni -n linkerd-cni NAME SECRETS AGE linkerd-cni 1 45m ``` @@ -1216,7 +1216,7 @@ linkerd-cni 1 45m Also ensure you have permission to create ServiceAccount: ```bash 
-$ kubectl auth can-i create ServiceAccounts -n linkerd-cni +kubectl auth can-i create ServiceAccounts -n linkerd-cni yes ``` @@ -1233,7 +1233,7 @@ Example error: Ensure that the CNI daemonset exists in the CNI namespace: ```bash -$ kubectl get ds -n linkerd-cni +kubectl get ds -n linkerd-cni NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE linkerd-cni 1 1 1 1 1 beta.kubernetes.io/os=linux 14m ``` @@ -1241,7 +1241,7 @@ linkerd-cni 1 1 1 1 1 beta.kubernet Also ensure you have permission to create DaemonSets: ```bash -$ kubectl auth can-i create DaemonSets -n linkerd-cni +kubectl auth can-i create DaemonSets -n linkerd-cni yes ``` @@ -1258,7 +1258,7 @@ Example failure: Ensure that all the CNI pods are running: ```bash -$ kubectl get po -n linkerd-cn +kubectl get po -n linkerd-cni NAME READY STATUS RESTARTS AGE linkerd-cni-rzp2q 1/1 Running 0 9m20s linkerd-cni-mf564 1/1 Running 0 9m22s @@ -1268,7 +1268,7 @@ linkerd-cni-p5670 1/1 Running 0 9m25s Ensure that all pods have finished the deployment of the CNI config and binary: ```bash -$ kubectl logs linkerd-cni-rzp2q -n linkerd-cni +kubectl logs linkerd-cni-rzp2q -n linkerd-cni Wrote linkerd CNI binaries to /host/opt/cni/bin Created CNI config /host/etc/cni/net.d/10-kindnet.conflist Done configuring CNI. Sleep=true @@ -1296,7 +1296,7 @@ Make sure multicluster extension is correctly installed and that the `links.multicluster.linkerd.io` CRD is present. ```bash -$ kubectl get crds | grep multicluster +kubectl get crds | grep multicluster NAME CREATED AT links.multicluster.linkerd.io 2021-03-10T09:58:10Z ``` @@ -1375,7 +1375,7 @@ the rules section. Expected rules for `linkerd-service-mirror-access-local-resources` cluster role: ```bash -$ kubectl --context=local get clusterrole linkerd-service-mirror-access-local-resources -o yaml +kubectl --context=local get clusterrole linkerd-service-mirror-access-local-resources -o yaml kind: ClusterRole metadata: labels: @@ -1408,7 +1408,7 @@ rules: Expected rules for `linkerd-service-mirror-read-remote-creds` role: ```bash -$ kubectl --context=local get role linkerd-service-mirror-read-remote-creds -n linkerd-multicluster -o yaml +kubectl --context=local get role linkerd-service-mirror-read-remote-creds -n linkerd-multicluster -o yaml kind: Role metadata: labels: @@ -1441,7 +1441,7 @@ everything to start up. 
If this is a permanent error, you'll want to validate the state of the controller pod with: ```bash -$ kubectl --all-namespaces get po --selector linkerd.io/control-plane-component=linkerd-service-mirror +kubectl --all-namespaces get po --selector linkerd.io/control-plane-component=linkerd-service-mirror NAME READY STATUS RESTARTS AGE linkerd-service-mirror-7bb8ff5967-zg265 2/2 Running 0 50m ``` @@ -1559,7 +1559,7 @@ Example failure: Ensure the linkerd-viz extension ClusterRoles exist: ```bash -$ kubectl get clusterroles | grep linkerd-viz +kubectl get clusterroles | grep linkerd-viz linkerd-linkerd-viz-metrics-api 2021-01-26T18:02:17Z linkerd-linkerd-viz-prometheus 2021-01-26T18:02:17Z linkerd-linkerd-viz-tap 2021-01-26T18:02:17Z @@ -1570,7 +1570,7 @@ linkerd-linkerd-viz-web-check 2021-01-2 Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create clusterroles +kubectl auth can-i create clusterroles yes ``` @@ -1587,7 +1587,7 @@ Example failure: Ensure the linkerd-viz extension ClusterRoleBindings exist: ```bash -$ kubectl get clusterrolebindings | grep linkerd-viz +kubectl get clusterrolebindings | grep linkerd-viz linkerd-linkerd-viz-metrics-api ClusterRole/linkerd-linkerd-viz-metrics-api 18h linkerd-linkerd-viz-prometheus ClusterRole/linkerd-linkerd-viz-prometheus 18h linkerd-linkerd-viz-tap ClusterRole/linkerd-linkerd-viz-tap 18h @@ -1599,7 +1599,7 @@ linkerd-linkerd-viz-web-check ClusterRole/linkerd-linke Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create clusterrolebindings +kubectl auth can-i create clusterrolebindings yes ``` @@ -1688,7 +1688,7 @@ requirements in the cluster: Ensure all the linkerd-viz pods are injected ```bash -$ kubectl -n linkerd-viz get pods +kubectl -n linkerd-viz get pods NAME READY STATUS RESTARTS AGE grafana-68cddd7cc8-nrv4h 2/2 Running 3 18h metrics-api-77f684f7c7-hnw8r 2/2 Running 2 18h @@ -1712,7 +1712,7 @@ Make sure that the `proxy-injector` is working correctly by running Ensure all the linkerd-viz pods are running with 2/2 ```bash -$ kubectl -n linkerd-viz get pods +kubectl -n linkerd-viz get pods NAME READY STATUS RESTARTS AGE grafana-68cddd7cc8-nrv4h 2/2 Running 3 18h metrics-api-77f684f7c7-hnw8r 2/2 Running 2 18h @@ -1895,7 +1895,7 @@ versions in sync by updating either the CLI or linkerd-jaeger as necessary. 
Ensure all the jaeger pods are injected ```bash -$ kubectl -n linkerd-jaeger get pods +kubectl -n linkerd-jaeger get pods NAME READY STATUS RESTARTS AGE collector-69cc44dfbc-rhpfg 2/2 Running 0 11s jaeger-6f98d5c979-scqlq 2/2 Running 0 11s @@ -1916,7 +1916,7 @@ Make sure that the `proxy-injector` is working correctly by running Ensure all the linkerd-jaeger pods are running with 2/2 ```bash -$ kubectl -n linkerd-jaeger get pods +kubectl -n linkerd-jaeger get pods NAME READY STATUS RESTARTS AGE jaeger-injector-548684d74b-bcq5h 2/2 Running 0 5s collector-69cc44dfbc-wqf6s 2/2 Running 0 5s @@ -1965,7 +1965,7 @@ Ensure you can connect to the Linkerd Buoyant version check endpoint from the environment the `linkerd` cli is running: ```bash -$ curl https://buoyant.cloud/version.json +curl https://buoyant.cloud/version.json {"linkerd-buoyant":"v0.4.4"} ``` @@ -2030,7 +2030,7 @@ linkerd-buoyant install | kubectl apply -f - Ensure that the cluster role exists: ```bash -$ kubectl get clusterrole buoyant-cloud-agent +kubectl get clusterrole buoyant-cloud-agent NAME CREATED AT buoyant-cloud-agent 2020-11-13T00:59:50Z ``` @@ -2038,7 +2038,7 @@ buoyant-cloud-agent 2020-11-13T00:59:50Z Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create ClusterRoles +kubectl auth can-i create ClusterRoles yes ``` @@ -2053,7 +2053,7 @@ yes Ensure that the cluster role binding exists: ```bash -$ kubectl get clusterrolebinding buoyant-cloud-agent +kubectl get clusterrolebinding buoyant-cloud-agent NAME ROLE AGE buoyant-cloud-agent ClusterRole/buoyant-cloud-agent 301d ``` @@ -2061,7 +2061,7 @@ buoyant-cloud-agent ClusterRole/buoyant-cloud-agent 301d Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create ClusterRoleBindings +kubectl auth can-i create ClusterRoleBindings yes ``` @@ -2076,7 +2076,7 @@ yes Ensure that the service account exists: ```bash -$ kubectl -n buoyant-cloud get serviceaccount buoyant-cloud-agent +kubectl -n buoyant-cloud get serviceaccount buoyant-cloud-agent NAME SECRETS AGE buoyant-cloud-agent 1 301d ``` @@ -2084,7 +2084,7 @@ buoyant-cloud-agent 1 301d Also ensure you have permission to create ServiceAccounts: ```bash -$ kubectl -n buoyant-cloud auth can-i create ServiceAccount +kubectl -n buoyant-cloud auth can-i create ServiceAccount yes ``` @@ -2099,7 +2099,7 @@ yes Ensure that the secret exists: ```bash -$ kubectl -n buoyant-cloud get secret buoyant-cloud-id +kubectl -n buoyant-cloud get secret buoyant-cloud-id NAME TYPE DATA AGE buoyant-cloud-id Opaque 4 301d ``` @@ -2107,7 +2107,7 @@ buoyant-cloud-id Opaque 4 301d Also ensure you have permission to create ServiceAccounts: ```bash -$ kubectl -n buoyant-cloud auth can-i create ServiceAccount +kubectl -n buoyant-cloud auth can-i create ServiceAccount yes ``` @@ -2145,7 +2145,7 @@ everything to start up. 
If this is a permanent error, you'll want to validate the state of the `buoyant-cloud-agent` Deployment with: ```bash -$ kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-agent +kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-agent NAME READY STATUS RESTARTS AGE buoyant-cloud-agent-6b8c6888d7-htr7d 2/2 Running 0 156m ``` @@ -2168,7 +2168,7 @@ Ensure the `buoyant-cloud-agent` pod is injected, the `READY` column should show `2/2`: ```bash -$ kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-agent +kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-agent NAME READY STATUS RESTARTS AGE buoyant-cloud-agent-6b8c6888d7-htr7d 2/2 Running 0 161m ``` @@ -2187,7 +2187,7 @@ Make sure that the `proxy-injector` is working correctly by running Check the version with: ```bash -$ linkerd-buoyant version +linkerd-buoyant version CLI version: v0.4.4 Agent version: v0.4.4 ``` @@ -2246,7 +2246,7 @@ everything to start up. If this is a permanent error, you'll want to validate the state of the `buoyant-cloud-metrics` DaemonSet with: ```bash -$ kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-metrics +kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-metrics NAME READY STATUS RESTARTS AGE buoyant-cloud-metrics-kt9mv 2/2 Running 0 163m buoyant-cloud-metrics-q8jhj 2/2 Running 0 163m @@ -2272,7 +2272,7 @@ Ensure the `buoyant-cloud-metrics` pods are injected, the `READY` column should show `2/2`: ```bash -$ kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-metrics +kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-metrics NAME READY STATUS RESTARTS AGE buoyant-cloud-metrics-kt9mv 2/2 Running 0 166m buoyant-cloud-metrics-q8jhj 2/2 Running 0 166m @@ -2294,7 +2294,7 @@ Make sure that the `proxy-injector` is working correctly by running Check the version with: ```bash -$ kubectl -n buoyant-cloud get daemonset/buoyant-cloud-metrics -o jsonpath='{.metadata.labels}' +kubectl -n buoyant-cloud get daemonset/buoyant-cloud-metrics -o jsonpath='{.metadata.labels}' {"app.kubernetes.io/name":"metrics","app.kubernetes.io/part-of":"buoyant-cloud","app.kubernetes.io/version":"v0.4.4"} ``` diff --git a/linkerd.io/content/2.16/tasks/upgrade.md b/linkerd.io/content/2.16/tasks/upgrade.md index 23547217a4..a73f4d54fc 100644 --- a/linkerd.io/content/2.16/tasks/upgrade.md +++ b/linkerd.io/content/2.16/tasks/upgrade.md @@ -379,7 +379,7 @@ Find the release name you used for the `linkerd2` chart, and the namespace where this release stored its config: ```bash -$ helm ls -A +helm ls -A NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION linkerd default 1 2021-11-22 17:14:50.751436374 -0500 -05 deployed linkerd2-2.11.1 stable-2.11.1 ``` @@ -412,18 +412,18 @@ the `linkerd-crds`, `linkerd-control-plane` and `linkerd-smi` charts: ```bash # First migrate the CRDs -$ helm -n default get manifest linkerd | \ +helm -n default get manifest linkerd | \ yq 'select(.kind == "CustomResourceDefinition") | .metadata.name' | \ grep -v '\-\-\-' | \ xargs -n1 sh -c \ 'kubectl annotate --overwrite crd/$0 meta.helm.sh/release-name=linkerd-crds meta.helm.sh/release-namespace=linkerd' # Special case for TrafficSplit (only use if you have TrafficSplit CRs) -$ kubectl annotate --overwrite crd/trafficsplits.split.smi-spec.io \ +kubectl annotate --overwrite crd/trafficsplits.split.smi-spec.io \ meta.helm.sh/release-name=linkerd-smi meta.helm.sh/release-namespace=linkerd-smi # Now migrate all the other resources -$ helm -n default get manifest linkerd | \ +helm -n 
default get manifest linkerd | \ yq 'select(.kind != "CustomResourceDefinition")' | \ yq '.kind, .metadata.name, .metadata.namespace' | \ grep -v '\-\-\-' | @@ -437,14 +437,14 @@ above. ```bash # First make sure you update the helm repo -$ helm repo up +helm repo up # Install the linkerd-crds chart -$ helm install linkerd-crds -n linkerd --create-namespace linkerd/linkerd-crds +helm install linkerd-crds -n linkerd --create-namespace linkerd/linkerd-crds # Install the linkerd-control-plane chart # (remember to add any customizations you retrieved above) -$ helm install linkerd-control-plane \ +helm install linkerd-control-plane \ -n linkerd \ --set-file identityTrustAnchorsPEM=ca.crt \ --set-file identity.issuer.tls.crtPEM=issuer.crt \ @@ -452,8 +452,8 @@ $ helm install linkerd-control-plane \ linkerd/linkerd-control-plane # Optional: if using TrafficSplit CRs -$ helm repo add l5d-smi https://linkerd.github.io/linkerd-smi -$ helm install linkerd-smi -n linkerd-smi --create-namespace l5d-smi/linkerd-smi +helm repo add l5d-smi https://linkerd.github.io/linkerd-smi +helm install linkerd-smi -n linkerd-smi --create-namespace l5d-smi/linkerd-smi ``` ##### Cleaning up the old linkerd2 Helm release @@ -464,7 +464,7 @@ remove the Helm release config for the old `linkerd2` chart (assuming you used the "Secret" storage backend, which is the default): ```bash -$ kubectl -n default delete secret \ +kubectl -n default delete secret \ --field-selector type=helm.sh/release.v1 \ -l name=linkerd,owner=helm ``` diff --git a/linkerd.io/content/2.17/reference/cli/check.md b/linkerd.io/content/2.17/reference/cli/check.md index 7cd61cd237..67a2486908 100644 --- a/linkerd.io/content/2.17/reference/cli/check.md +++ b/linkerd.io/content/2.17/reference/cli/check.md @@ -12,7 +12,7 @@ for a full list of all the possible checks, what they do and how to fix them. ## Example output ```bash -$ linkerd check +linkerd check kubernetes-api -------------- √ can initialize the client diff --git a/linkerd.io/content/2.17/reference/iptables.md b/linkerd.io/content/2.17/reference/iptables.md index 67a7ea89de..9b4d229a59 100644 --- a/linkerd.io/content/2.17/reference/iptables.md +++ b/linkerd.io/content/2.17/reference/iptables.md @@ -164,7 +164,7 @@ Alternatively, if you want to inspect the iptables rules created for a pod, you can retrieve them through the following command: ```bash -$ kubectl -n logs linkerd-init +kubectl -n logs linkerd-init # where is the name of the pod # you want to see the iptables rules for ``` diff --git a/linkerd.io/content/2.17/tasks/configuring-dynamic-request-routing.md b/linkerd.io/content/2.17/tasks/configuring-dynamic-request-routing.md index 004b50ded6..a44d12a1a5 100644 --- a/linkerd.io/content/2.17/tasks/configuring-dynamic-request-routing.md +++ b/linkerd.io/content/2.17/tasks/configuring-dynamic-request-routing.md @@ -67,7 +67,7 @@ Requests to `/echo` on port 9898 to the frontend pod will get forwarded the pod pointed by the Service `backend-a-podinfo`: ```bash -$ curl -sX POST localhost:9898/echo \ +curl -sX POST localhost:9898/echo \ | grep -o 'PODINFO_UI_MESSAGE=. backend' PODINFO_UI_MESSAGE=A backend @@ -132,7 +132,7 @@ the `backend-a-podinfo` Service. The previous requests should still reach `backend-a-podinfo` only: ```bash -$ curl -sX POST localhost:9898/echo \ +curl -sX POST localhost:9898/echo \ | grep -o 'PODINFO_UI_MESSAGE=. 
backend' PODINFO_UI_MESSAGE=A backend @@ -142,7 +142,7 @@ But if we add the `x-request-id: alternative` header, they get routed to `backend-b-podinfo`: ```bash -$ curl -sX POST \ +curl -sX POST \ -H 'x-request-id: alternative' \ localhost:9898/echo \ | grep -o 'PODINFO_UI_MESSAGE=. backend' diff --git a/linkerd.io/content/2.17/tasks/configuring-per-route-policy.md b/linkerd.io/content/2.17/tasks/configuring-per-route-policy.md index a5c8b5c2ef..63b79fc6d4 100644 --- a/linkerd.io/content/2.17/tasks/configuring-per-route-policy.md +++ b/linkerd.io/content/2.17/tasks/configuring-per-route-policy.md @@ -30,7 +30,7 @@ haven't already done this. Inject and install the Books demo application: ```bash -$ kubectl create ns booksapp && \ +kubectl create ns booksapp && \ curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/booksapp.yml \ | linkerd inject - \ | kubectl -n booksapp apply -f - @@ -44,21 +44,21 @@ run in the `booksapp` namespace. Confirm that the Linkerd data plane was injected successfully: ```bash -$ linkerd check -n booksapp --proxy -o short +linkerd check -n booksapp --proxy -o short ``` You can take a quick look at all the components that were added to your cluster by running: ```bash -$ kubectl -n booksapp get all +kubectl -n booksapp get all ``` Once the rollout has completed successfully, you can access the app itself by port-forwarding `webapp` locally: ```bash -$ kubectl -n booksapp port-forward svc/webapp 7000 & +kubectl -n booksapp port-forward svc/webapp 7000 & ``` Open [http://localhost:7000/](http://localhost:7000/) in your browser to see the @@ -87,7 +87,7 @@ First, let's run the `linkerd viz authz` command to list the authorization resources that currently exist for the `authors` deployment: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 default default:all-unauthenticated default/all-unauthenticated 0.0rps 70.31% 8.1rps 1ms 43ms 49ms probe default:all-unauthenticated default/probe 0.0rps 100.00% 0.3rps 1ms 1ms 1ms @@ -124,7 +124,7 @@ Now that we've defined a [`Server`] for the authors `Deployment`, we can run the currently unauthorized: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 default authors-server 9.5rps 0.00% 0.0rps 0ms 0ms 0ms probe authors-server default/probe 0.0rps 100.00% 0.1rps 1ms 1ms 1ms @@ -312,7 +312,7 @@ network (0.0.0.0). Running `linkerd viz authz` again, we can now see that our new policies exist: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 authors-get-route authors-server authorizationpolicy/authors-get-policy 0.0rps 100.00% 0.1rps 2ms 2ms 2ms authors-probe-route authors-server authorizationpolicy/authors-probe-policy 0.0rps 100.00% 0.1rps 1ms 1ms 1ms @@ -383,7 +383,7 @@ requests, but we haven't _authorized_ requests to that route. 
Running the requests to `authors-modify-route`: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 authors-get-route authors-server authorizationpolicy/authors-get-policy - - - - - - authors-modify-route authors-server 9.7rps 0.00% 0.0rps 0ms 0ms 0ms @@ -442,7 +442,7 @@ Running the `linkerd viz authz` command one last time, we now see that all traffic is authorized: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 authors-get-route authors-server authorizationpolicy/authors-get-policy 0.0rps 100.00% 0.1rps 0ms 0ms 0ms authors-modify-route authors-server authorizationpolicy/authors-modify-policy 0.0rps 100.00% 0.0rps 0ms 0ms 0ms diff --git a/linkerd.io/content/2.17/tasks/managing-egress-traffic.md b/linkerd.io/content/2.17/tasks/managing-egress-traffic.md index d77f290917..d579ba4c56 100644 --- a/linkerd.io/content/2.17/tasks/managing-egress-traffic.md +++ b/linkerd.io/content/2.17/tasks/managing-egress-traffic.md @@ -69,7 +69,7 @@ Now SSH into the client container and start generating some external traffic: ```bash kubectl -n egress-test exec -it client-xxx -c client -- sh -$ while sleep 1; do curl -s http://httpbin.org/get ; done +while sleep 1; do curl -s http://httpbin.org/get ; done ``` In a separate shell, you can use the Linkerd diagnostics command to visualize @@ -190,7 +190,7 @@ Interestingly enough though, if we go back to our client shell and we try to initiate HTTPS traffic to the same service, it will not be allowed: ```bash -~ $ curl -v https://httpbin.org/get +curl -v https://httpbin.org/get curl: (35) TLS connect error: error:00000000:lib(0)::reason(0) ``` @@ -413,7 +413,7 @@ Now let's verify all works as expected: ```bash # plaintext traffic goes as expected to the /get path -$ curl http://httpbin.org/get +curl http://httpbin.org/get { "args": {}, "headers": { @@ -427,14 +427,14 @@ $ curl http://httpbin.org/get } # encrypted traffic can target all paths and hosts -$ curl https://httpbin.org/ip +curl https://httpbin.org/ip { "origin": "51.116.126.217" } # arbitrary unencrypted traffic goes to the internal service -$ curl http://google.com +curl http://google.com { "requestUID": "in:http-sid:terminus-grpc:-1-h1:80-190120723", "payload": "You cannot go there right now"} diff --git a/linkerd.io/content/2.17/tasks/multicluster-using-statefulsets.md b/linkerd.io/content/2.17/tasks/multicluster-using-statefulsets.md index 9d8730b5b0..c720c09563 100644 --- a/linkerd.io/content/2.17/tasks/multicluster-using-statefulsets.md +++ b/linkerd.io/content/2.17/tasks/multicluster-using-statefulsets.md @@ -48,8 +48,8 @@ The first step is to clone the demo repository on your local machine. ```sh # clone example repository -$ git clone git@github.com:mateiidavid/l2d-k3d-statefulset.git -$ cd l2d-k3d-statefulset +git clone git@github.com:mateiidavid/l2d-k3d-statefulset.git +cd l2d-k3d-statefulset ``` The second step consists of creating two `k3d` clusters named `east` and `west`, @@ -60,10 +60,10 @@ everything. ```sh # create k3d clusters -$ ./create.sh +./create.sh # list the clusters -$ k3d cluster list +k3d cluster list NAME SERVERS AGENTS LOADBALANCER east 1/1 0/0 true west 1/1 0/0 true @@ -78,10 +78,10 @@ provided scripts, but feel free to have a look! 
```sh # Install Linkerd and multicluster, output to check should be a success -$ ./install.sh +./install.sh # Next, link the two clusters together -$ ./link.sh +./link.sh ``` Perfect! If you've made it this far with no errors, then it's a good sign. In @@ -101,17 +101,17 @@ communication. First, we will deploy our pods and services: ```sh # deploy services and mesh namespaces -$ ./deploy.sh +./deploy.sh # verify both clusters # # verify east -$ kubectl --context=k3d-east get pods +kubectl --context=k3d-east get pods NAME READY STATUS RESTARTS AGE curl-56dc7d945d-96r6p 2/2 Running 0 7s # verify west has headless service -$ kubectl --context=k3d-west get services +kubectl --context=k3d-west get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.43.0.1 443/TCP 10m nginx-svc ClusterIP None 80/TCP 8s @@ -119,7 +119,7 @@ nginx-svc ClusterIP None 80/TCP 8s # verify west has statefulset # # this may take a while to come up -$ kubectl --context=k3d-west get pods +kubectl --context=k3d-west get pods NAME READY STATUS RESTARTS AGE nginx-set-0 2/2 Running 0 53s nginx-set-1 2/2 Running 0 43s @@ -130,7 +130,7 @@ Before we go further, let's have a look at the endpoints object for the `nginx-svc`: ```sh -$ kubectl --context=k3d-west get endpoints nginx-svc -o yaml +kubectl --context=k3d-west get endpoints nginx-svc -o yaml ... subsets: - addresses: @@ -170,23 +170,23 @@ would get an answer back. We can test this out by applying the curl pod to the `west` cluster: ```sh -$ kubectl --context=k3d-west apply -f east/curl.yml -$ kubectl --context=k3d-west get pods +kubectl --context=k3d-west apply -f east/curl.yml +kubectl --context=k3d-west get pods NAME READY STATUS RESTARTS AGE nginx-set-0 2/2 Running 0 5m8s nginx-set-1 2/2 Running 0 4m58s nginx-set-2 2/2 Running 0 4m51s curl-56dc7d945d-s4n8j 0/2 PodInitializing 0 4s -$ kubectl --context=k3d-west exec -it curl-56dc7d945d-s4n8j -c curl -- bin/sh -/$ # prompt for curl pod +kubectl --context=k3d-west exec -it curl-56dc7d945d-s4n8j -c curl -- bin/sh +# prompt for curl pod ``` If we now curl one of these instances, we will get back a response. ```sh # exec'd on the pod -/ $ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local +curl nginx-set-0.nginx-svc.default.svc.west.cluster.local " @@ -218,10 +218,10 @@ Now, let's do the same, but this time from the `east` cluster. We will first export the service. ```sh -$ kubectl --context=k3d-west label service nginx-svc mirror.linkerd.io/exported="true" +kubectl --context=k3d-west label service nginx-svc mirror.linkerd.io/exported="true" service/nginx-svc labeled -$ kubectl --context=k3d-east get services +kubectl --context=k3d-east get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.43.0.1 443/TCP 20h nginx-svc-west ClusterIP None 80/TCP 29s @@ -235,7 +235,7 @@ endpoints for `nginx-svc-west` will have the same hostnames, but each hostname will point to one of the services we see above: ```sh -$ kubectl --context=k3d-east get endpoints nginx-svc-west -o yaml +kubectl --context=k3d-east get endpoints nginx-svc-west -o yaml subsets: - addresses: - hostname: nginx-set-0 @@ -251,17 +251,17 @@ cluster (`west`), will be mirrored as a clusterIP service. We will see in a second why this matters.
```sh -$ kubectl --context=k3d-east get pods +kubectl --context=k3d-east get pods NAME READY STATUS RESTARTS AGE curl-56dc7d945d-96r6p 2/2 Running 0 23m # exec and curl -$ kubectl --context=k3d-east exec pod curl-56dc7d945d-96r6p -it -c curl -- bin/sh +kubectl --context=k3d-east exec pod curl-56dc7d945d-96r6p -it -c curl -- bin/sh # we want to curl the same hostname we see in the endpoints object above. # however, the service and cluster domain will now be different, since we # are in a different cluster. # -/ $ curl nginx-set-0.nginx-svc-west.default.svc.east.cluster.local +curl nginx-set-0.nginx-svc-west.default.svc.east.cluster.local @@ -329,8 +329,8 @@ validation. To clean-up, you can remove both clusters entirely using the k3d CLI: ```sh -$ k3d cluster delete east +k3d cluster delete east cluster east deleted -$ k3d cluster delete west +k3d cluster delete west cluster west deleted ``` diff --git a/linkerd.io/content/2.17/tasks/restricting-access.md b/linkerd.io/content/2.17/tasks/restricting-access.md index 5654518600..c9850725f7 100644 --- a/linkerd.io/content/2.17/tasks/restricting-access.md +++ b/linkerd.io/content/2.17/tasks/restricting-access.md @@ -21,9 +21,9 @@ haven't already done this. Inject and install the Emojivoto application: ```bash -$ linkerd inject https://run.linkerd.io/emojivoto.yml | kubectl apply -f - +linkerd inject https://run.linkerd.io/emojivoto.yml | kubectl apply -f - ... -$ linkerd check -n emojivoto --proxy -o short +linkerd check -n emojivoto --proxy -o short ... ``` diff --git a/linkerd.io/content/2.17/tasks/securing-linkerd-tap.md b/linkerd.io/content/2.17/tasks/securing-linkerd-tap.md index 8a802c890c..639f81692f 100644 --- a/linkerd.io/content/2.17/tasks/securing-linkerd-tap.md +++ b/linkerd.io/content/2.17/tasks/securing-linkerd-tap.md @@ -60,7 +60,7 @@ kubectl auth can-i watch deployments.tap.linkerd.io -n emojivoto --as $(whoami) You can also use the Linkerd CLI's `--as` flag to confirm: ```bash -$ linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) +linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) Cannot connect to Linkerd Viz: namespaces is forbidden: User "XXXX" cannot list resource "namespaces" in API group "" at the cluster scope Validate the install with: linkerd viz check ... @@ -77,7 +77,7 @@ To enable tap access to all resources in all namespaces, you may bind your user to the `linkerd-linkerd-tap-admin` ClusterRole, installed by default: ```bash -$ kubectl describe clusterroles/linkerd-linkerd-viz-tap-admin +kubectl describe clusterroles/linkerd-linkerd-viz-tap-admin Name: linkerd-linkerd-viz-tap-admin Labels: component=tap linkerd.io/extension=viz @@ -109,7 +109,7 @@ kubectl create clusterrolebinding \ You can verify you now have tap access with: ```bash -$ linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) +linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) req id=3:0 proxy=in src=10.244.0.1:37392 dst=10.244.0.13:9996 tls=not_provided_by_remote :method=GET :authority=10.244.0.13:9996 :path=/ping ... ``` @@ -143,14 +143,14 @@ Because GCloud provides this additional level of access, there are cases where not.
To validate this, check whether your GCloud user has Tap access: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces yes ``` And then validate whether your RBAC user has Tap access: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as $(gcloud config get-value account) +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as $(gcloud config get-value account) no - no RBAC policy matched ``` @@ -187,14 +187,14 @@ privileges necessary to tap resources. To confirm: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web yes ``` This access is enabled via a `linkerd-linkerd-viz-web-admin` ClusterRoleBinding: ```bash -$ kubectl describe clusterrolebindings/linkerd-linkerd-viz-web-admin +kubectl describe clusterrolebindings/linkerd-linkerd-viz-web-admin Name: linkerd-linkerd-viz-web-admin Labels: component=web linkerd.io/extensions=viz @@ -227,6 +227,6 @@ kubectl delete clusterrolebindings/linkerd-linkerd-viz-web-admin To confirm: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web no ``` diff --git a/linkerd.io/content/2.17/tasks/troubleshooting.md b/linkerd.io/content/2.17/tasks/troubleshooting.md index a9efbc7ec1..79bacd3f7b 100644 --- a/linkerd.io/content/2.17/tasks/troubleshooting.md +++ b/linkerd.io/content/2.17/tasks/troubleshooting.md @@ -230,7 +230,7 @@ Example failure: Ensure the Linkerd ClusterRoles exist: ```bash -$ kubectl get clusterroles | grep linkerd +kubectl get clusterroles | grep linkerd linkerd-linkerd-destination 9d linkerd-linkerd-identity 9d linkerd-linkerd-proxy-injector 9d @@ -240,7 +240,7 @@ linkerd-policy 9d Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create clusterroles +kubectl auth can-i create clusterroles yes ``` @@ -257,7 +257,7 @@ Example failure: Ensure the Linkerd ClusterRoleBindings exist: ```bash -$ kubectl get clusterrolebindings | grep linkerd +kubectl get clusterrolebindings | grep linkerd linkerd-linkerd-destination 9d linkerd-linkerd-identity 9d linkerd-linkerd-proxy-injector 9d @@ -267,7 +267,7 @@ linkerd-destination-policy 9d Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create clusterrolebindings +kubectl auth can-i create clusterrolebindings yes ``` @@ -284,7 +284,7 @@ Example failure: Ensure the Linkerd ServiceAccounts exist: ```bash -$ kubectl -n linkerd get serviceaccounts +kubectl -n linkerd get serviceaccounts NAME SECRETS AGE default 1 14m linkerd-destination 1 14m @@ -297,7 +297,7 @@ Also ensure you have permission to create ServiceAccounts in the Linkerd namespace: ```bash -$ kubectl -n linkerd auth can-i create serviceaccounts +kubectl -n linkerd auth can-i create serviceaccounts yes ``` @@ -314,7 +314,7 @@ Example failure: Ensure the Linkerd CRD exists: ```bash -$ kubectl get customresourcedefinitions +kubectl get customresourcedefinitions NAME CREATED AT serviceprofiles.linkerd.io 2019-04-25T21:47:31Z ``` @@ -322,7 +322,7 @@ serviceprofiles.linkerd.io 2019-04-25T21:47:31Z Also ensure you have permission to create CRDs: ```bash -$ kubectl auth can-i create customresourcedefinitions +kubectl auth can-i 
create customresourcedefinitions yes ``` @@ -339,14 +339,14 @@ Example failure: Ensure the Linkerd MutatingWebhookConfigurations exists: ```bash -$ kubectl get mutatingwebhookconfigurations | grep linkerd +kubectl get mutatingwebhookconfigurations | grep linkerd linkerd-proxy-injector-webhook-config 2019-07-01T13:13:26Z ``` Also ensure you have permission to create MutatingWebhookConfigurations: ```bash -$ kubectl auth can-i create mutatingwebhookconfigurations +kubectl auth can-i create mutatingwebhookconfigurations yes ``` @@ -363,14 +363,14 @@ Example failure: Ensure the Linkerd ValidatingWebhookConfiguration exists: ```bash -$ kubectl get validatingwebhookconfigurations | grep linkerd +kubectl get validatingwebhookconfigurations | grep linkerd linkerd-sp-validator-webhook-config 2019-07-01T13:13:26Z ``` Also ensure you have permission to create ValidatingWebhookConfigurations: ```bash -$ kubectl auth can-i create validatingwebhookconfigurations +kubectl auth can-i create validatingwebhookconfigurations yes ``` @@ -418,7 +418,7 @@ Example failure: Ensure the Linkerd ConfigMap exists: ```bash -$ kubectl -n linkerd get configmap/linkerd-config +kubectl -n linkerd get configmap/linkerd-config NAME DATA AGE linkerd-config 3 61m ``` @@ -426,7 +426,7 @@ linkerd-config 3 61m Also ensure you have permission to create ConfigMaps: ```bash -$ kubectl -n linkerd auth can-i create configmap +kubectl -n linkerd auth can-i create configmap yes ``` @@ -780,7 +780,7 @@ Example failure: Verify the state of the control plane pods with: ```bash -$ kubectl -n linkerd get po +kubectl -n linkerd get po NAME READY STATUS RESTARTS AGE linkerd-destination-5fd7b5d466-szgqm 2/2 Running 1 12m linkerd-identity-54df78c479-hbh5m 2/2 Running 0 12m @@ -862,7 +862,7 @@ Ensure you can connect to the Linkerd version check endpoint from the environment the `linkerd` cli is running: ```bash -$ curl "https://versioncheck.linkerd.io/version.json?version=edge-19.1.2&uuid=test-uuid&source=cli" +curl "https://versioncheck.linkerd.io/version.json?version=edge-19.1.2&uuid=test-uuid&source=cli" {"stable":"stable-2.1.0","edge":"edge-19.1.2"} ``` @@ -961,7 +961,7 @@ normally. Example failure: ```bash -$ linkerd check --proxy --namespace foo +linkerd check --proxy --namespace foo ... 
× data plane namespace exists The "foo" namespace does not exist @@ -1147,7 +1147,7 @@ Example error: Ensure that the linkerd-cni-config ConfigMap exists in the CNI namespace: ```bash -$ kubectl get cm linkerd-cni-config -n linkerd-cni +kubectl get cm linkerd-cni-config -n linkerd-cni NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES linkerd-linkerd-cni-cni false RunAsAny RunAsAny RunAsAny RunAsAny false hostPath,secret ``` @@ -1155,7 +1155,7 @@ linkerd-linkerd-cni-cni false RunAsAny RunAsAny RunAsAny RunAs Also ensure you have permission to create ConfigMaps: ```bash -$ kubectl auth can-i create ConfigMaps +kubectl auth can-i create ConfigMaps yes ``` @@ -1172,7 +1172,7 @@ Example error: Ensure that the cluster role exists: ```bash -$ kubectl get clusterrole linkerd-cni +kubectl get clusterrole linkerd-cni NAME AGE linkerd-cni 54m ``` @@ -1180,7 +1180,7 @@ linkerd-cni 54m Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create ClusterRoles +kubectl auth can-i create ClusterRoles yes ``` @@ -1197,7 +1197,7 @@ Example error: Ensure that the cluster role binding exists: ```bash -$ kubectl get clusterrolebinding linkerd-cni +kubectl get clusterrolebinding linkerd-cni NAME AGE linkerd-cni 54m ``` @@ -1205,7 +1205,7 @@ linkerd-cni 54m Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create ClusterRoleBindings +kubectl auth can-i create ClusterRoleBindings yes ``` @@ -1222,7 +1222,7 @@ Example error: Ensure that the CNI service account exists in the CNI namespace: ```bash -$ kubectl get ServiceAccount linkerd-cni -n linkerd-cni +kubectl get ServiceAccount linkerd-cni -n linkerd-cni NAME SECRETS AGE linkerd-cni 1 45m ``` @@ -1230,7 +1230,7 @@ linkerd-cni 1 45m Also ensure you have permission to create ServiceAccount: ```bash -$ kubectl auth can-i create ServiceAccounts -n linkerd-cni +kubectl auth can-i create ServiceAccounts -n linkerd-cni yes ``` @@ -1247,7 +1247,7 @@ Example error: Ensure that the CNI daemonset exists in the CNI namespace: ```bash -$ kubectl get ds -n linkerd-cni +kubectl get ds -n linkerd-cni NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE linkerd-cni 1 1 1 1 1 beta.kubernetes.io/os=linux 14m ``` @@ -1255,7 +1255,7 @@ linkerd-cni 1 1 1 1 1 beta.kubernet Also ensure you have permission to create DaemonSets: ```bash -$ kubectl auth can-i create DaemonSets -n linkerd-cni +kubectl auth can-i create DaemonSets -n linkerd-cni yes ``` @@ -1272,7 +1272,7 @@ Example failure: Ensure that all the CNI pods are running: ```bash -$ kubectl get po -n linkerd-cn +kubectl get po -n linkerd-cni NAME READY STATUS RESTARTS AGE linkerd-cni-rzp2q 1/1 Running 0 9m20s linkerd-cni-mf564 1/1 Running 0 9m22s @@ -1282,7 +1282,7 @@ linkerd-cni-p5670 1/1 Running 0 9m25s Ensure that all pods have finished the deployment of the CNI config and binary: ```bash -$ kubectl logs linkerd-cni-rzp2q -n linkerd-cni +kubectl logs linkerd-cni-rzp2q -n linkerd-cni Wrote linkerd CNI binaries to /host/opt/cni/bin Created CNI config /host/etc/cni/net.d/10-kindnet.conflist Done configuring CNI. Sleep=true @@ -1310,7 +1310,7 @@ Make sure multicluster extension is correctly installed and that the `links.multicluster.linkerd.io` CRD is present. ```bash -$ kubectl get crds | grep multicluster +kubectl get crds | grep multicluster NAME CREATED AT links.multicluster.linkerd.io 2021-03-10T09:58:10Z ``` @@ -1400,7 +1400,7 @@ the rules section. 
Expected rules for `linkerd-service-mirror-access-local-resources` cluster role: ```bash -$ kubectl --context=local get clusterrole linkerd-service-mirror-access-local-resources -o yaml +kubectl --context=local get clusterrole linkerd-service-mirror-access-local-resources -o yaml kind: ClusterRole metadata: labels: @@ -1433,7 +1433,7 @@ rules: Expected rules for `linkerd-service-mirror-read-remote-creds` role: ```bash -$ kubectl --context=local get role linkerd-service-mirror-read-remote-creds -n linkerd-multicluster -o yaml +kubectl --context=local get role linkerd-service-mirror-read-remote-creds -n linkerd-multicluster -o yaml kind: Role metadata: labels: @@ -1466,7 +1466,7 @@ everything to start up. If this is a permanent error, you'll want to validate the state of the controller pod with: ```bash -$ kubectl --all-namespaces get po --selector linkerd.io/control-plane-component=linkerd-service-mirror +kubectl --all-namespaces get po --selector linkerd.io/control-plane-component=linkerd-service-mirror NAME READY STATUS RESTARTS AGE linkerd-service-mirror-7bb8ff5967-zg265 2/2 Running 0 50m ``` @@ -1584,7 +1584,7 @@ Example failure: Ensure the linkerd-viz extension ClusterRoles exist: ```bash -$ kubectl get clusterroles | grep linkerd-viz +kubectl get clusterroles | grep linkerd-viz linkerd-linkerd-viz-metrics-api 2021-01-26T18:02:17Z linkerd-linkerd-viz-prometheus 2021-01-26T18:02:17Z linkerd-linkerd-viz-tap 2021-01-26T18:02:17Z @@ -1595,7 +1595,7 @@ linkerd-linkerd-viz-web-check 2021-01-2 Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create clusterroles +kubectl auth can-i create clusterroles yes ``` @@ -1612,7 +1612,7 @@ Example failure: Ensure the linkerd-viz extension ClusterRoleBindings exist: ```bash -$ kubectl get clusterrolebindings | grep linkerd-viz +kubectl get clusterrolebindings | grep linkerd-viz linkerd-linkerd-viz-metrics-api ClusterRole/linkerd-linkerd-viz-metrics-api 18h linkerd-linkerd-viz-prometheus ClusterRole/linkerd-linkerd-viz-prometheus 18h linkerd-linkerd-viz-tap ClusterRole/linkerd-linkerd-viz-tap 18h @@ -1624,7 +1624,7 @@ linkerd-linkerd-viz-web-check ClusterRole/linkerd-linke Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create clusterrolebindings +kubectl auth can-i create clusterrolebindings yes ``` @@ -1713,7 +1713,7 @@ requirements in the cluster: Ensure all the linkerd-viz pods are injected ```bash -$ kubectl -n linkerd-viz get pods +kubectl -n linkerd-viz get pods NAME READY STATUS RESTARTS AGE grafana-68cddd7cc8-nrv4h 2/2 Running 3 18h metrics-api-77f684f7c7-hnw8r 2/2 Running 2 18h @@ -1737,7 +1737,7 @@ Make sure that the `proxy-injector` is working correctly by running Ensure all the linkerd-viz pods are running with 2/2 ```bash -$ kubectl -n linkerd-viz get pods +kubectl -n linkerd-viz get pods NAME READY STATUS RESTARTS AGE grafana-68cddd7cc8-nrv4h 2/2 Running 3 18h metrics-api-77f684f7c7-hnw8r 2/2 Running 2 18h @@ -1920,7 +1920,7 @@ versions in sync by updating either the CLI or linkerd-jaeger as necessary. 
Ensure all the jaeger pods are injected ```bash -$ kubectl -n linkerd-jaeger get pods +kubectl -n linkerd-jaeger get pods NAME READY STATUS RESTARTS AGE collector-69cc44dfbc-rhpfg 2/2 Running 0 11s jaeger-6f98d5c979-scqlq 2/2 Running 0 11s @@ -1941,7 +1941,7 @@ Make sure that the `proxy-injector` is working correctly by running Ensure all the linkerd-jaeger pods are running with 2/2 ```bash -$ kubectl -n linkerd-jaeger get pods +kubectl -n linkerd-jaeger get pods NAME READY STATUS RESTARTS AGE jaeger-injector-548684d74b-bcq5h 2/2 Running 0 5s collector-69cc44dfbc-wqf6s 2/2 Running 0 5s @@ -1990,7 +1990,7 @@ Ensure you can connect to the Linkerd Buoyant version check endpoint from the environment the `linkerd` cli is running: ```bash -$ curl https://buoyant.cloud/version.json +curl https://buoyant.cloud/version.json {"linkerd-buoyant":"v0.4.4"} ``` @@ -2055,7 +2055,7 @@ linkerd-buoyant install | kubectl apply -f - Ensure that the cluster role exists: ```bash -$ kubectl get clusterrole buoyant-cloud-agent +kubectl get clusterrole buoyant-cloud-agent NAME CREATED AT buoyant-cloud-agent 2020-11-13T00:59:50Z ``` @@ -2063,7 +2063,7 @@ buoyant-cloud-agent 2020-11-13T00:59:50Z Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create ClusterRoles +kubectl auth can-i create ClusterRoles yes ``` @@ -2078,7 +2078,7 @@ yes Ensure that the cluster role binding exists: ```bash -$ kubectl get clusterrolebinding buoyant-cloud-agent +kubectl get clusterrolebinding buoyant-cloud-agent NAME ROLE AGE buoyant-cloud-agent ClusterRole/buoyant-cloud-agent 301d ``` @@ -2086,7 +2086,7 @@ buoyant-cloud-agent ClusterRole/buoyant-cloud-agent 301d Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create ClusterRoleBindings +kubectl auth can-i create ClusterRoleBindings yes ``` @@ -2101,7 +2101,7 @@ yes Ensure that the service account exists: ```bash -$ kubectl -n buoyant-cloud get serviceaccount buoyant-cloud-agent +kubectl -n buoyant-cloud get serviceaccount buoyant-cloud-agent NAME SECRETS AGE buoyant-cloud-agent 1 301d ``` @@ -2109,7 +2109,7 @@ buoyant-cloud-agent 1 301d Also ensure you have permission to create ServiceAccounts: ```bash -$ kubectl -n buoyant-cloud auth can-i create ServiceAccount +kubectl -n buoyant-cloud auth can-i create ServiceAccount yes ``` @@ -2124,7 +2124,7 @@ yes Ensure that the secret exists: ```bash -$ kubectl -n buoyant-cloud get secret buoyant-cloud-id +kubectl -n buoyant-cloud get secret buoyant-cloud-id NAME TYPE DATA AGE buoyant-cloud-id Opaque 4 301d ``` @@ -2132,7 +2132,7 @@ buoyant-cloud-id Opaque 4 301d Also ensure you have permission to create ServiceAccounts: ```bash -$ kubectl -n buoyant-cloud auth can-i create ServiceAccount +kubectl -n buoyant-cloud auth can-i create ServiceAccount yes ``` @@ -2170,7 +2170,7 @@ everything to start up. 
If this is a permanent error, you'll want to validate the state of the `buoyant-cloud-agent` Deployment with: ```bash -$ kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-agent +kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-agent NAME READY STATUS RESTARTS AGE buoyant-cloud-agent-6b8c6888d7-htr7d 2/2 Running 0 156m ``` @@ -2193,7 +2193,7 @@ Ensure the `buoyant-cloud-agent` pod is injected, the `READY` column should show `2/2`: ```bash -$ kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-agent +kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-agent NAME READY STATUS RESTARTS AGE buoyant-cloud-agent-6b8c6888d7-htr7d 2/2 Running 0 161m ``` @@ -2212,7 +2212,7 @@ Make sure that the `proxy-injector` is working correctly by running Check the version with: ```bash -$ linkerd-buoyant version +linkerd-buoyant version CLI version: v0.4.4 Agent version: v0.4.4 ``` @@ -2271,7 +2271,7 @@ everything to start up. If this is a permanent error, you'll want to validate the state of the `buoyant-cloud-metrics` DaemonSet with: ```bash -$ kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-metrics +kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-metrics NAME READY STATUS RESTARTS AGE buoyant-cloud-metrics-kt9mv 2/2 Running 0 163m buoyant-cloud-metrics-q8jhj 2/2 Running 0 163m @@ -2297,7 +2297,7 @@ Ensure the `buoyant-cloud-metrics` pods are injected, the `READY` column should show `2/2`: ```bash -$ kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-metrics +kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-metrics NAME READY STATUS RESTARTS AGE buoyant-cloud-metrics-kt9mv 2/2 Running 0 166m buoyant-cloud-metrics-q8jhj 2/2 Running 0 166m @@ -2319,7 +2319,7 @@ Make sure that the `proxy-injector` is working correctly by running Check the version with: ```bash -$ kubectl -n buoyant-cloud get daemonset/buoyant-cloud-metrics -o jsonpath='{.metadata.labels}' +kubectl -n buoyant-cloud get daemonset/buoyant-cloud-metrics -o jsonpath='{.metadata.labels}' {"app.kubernetes.io/name":"metrics","app.kubernetes.io/part-of":"buoyant-cloud","app.kubernetes.io/version":"v0.4.4"} ``` diff --git a/linkerd.io/content/2.17/tasks/upgrade.md b/linkerd.io/content/2.17/tasks/upgrade.md index 23547217a4..a73f4d54fc 100644 --- a/linkerd.io/content/2.17/tasks/upgrade.md +++ b/linkerd.io/content/2.17/tasks/upgrade.md @@ -379,7 +379,7 @@ Find the release name you used for the `linkerd2` chart, and the namespace where this release stored its config: ```bash -$ helm ls -A +helm ls -A NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION linkerd default 1 2021-11-22 17:14:50.751436374 -0500 -05 deployed linkerd2-2.11.1 stable-2.11.1 ``` @@ -412,18 +412,18 @@ the `linkerd-crds`, `linkerd-control-plane` and `linkerd-smi` charts: ```bash # First migrate the CRDs -$ helm -n default get manifest linkerd | \ +helm -n default get manifest linkerd | \ yq 'select(.kind == "CustomResourceDefinition") | .metadata.name' | \ grep -v '\-\-\-' | \ xargs -n1 sh -c \ 'kubectl annotate --overwrite crd/$0 meta.helm.sh/release-name=linkerd-crds meta.helm.sh/release-namespace=linkerd' # Special case for TrafficSplit (only use if you have TrafficSplit CRs) -$ kubectl annotate --overwrite crd/trafficsplits.split.smi-spec.io \ +kubectl annotate --overwrite crd/trafficsplits.split.smi-spec.io \ meta.helm.sh/release-name=linkerd-smi meta.helm.sh/release-namespace=linkerd-smi # Now migrate all the other resources -$ helm -n default get manifest linkerd | \ +helm -n 
default get manifest linkerd | \ yq 'select(.kind != "CustomResourceDefinition")' | \ yq '.kind, .metadata.name, .metadata.namespace' | \ grep -v '\-\-\-' | @@ -437,14 +437,14 @@ above. ```bash # First make sure you update the helm repo -$ helm repo up +helm repo up # Install the linkerd-crds chart -$ helm install linkerd-crds -n linkerd --create-namespace linkerd/linkerd-crds +helm install linkerd-crds -n linkerd --create-namespace linkerd/linkerd-crds # Install the linkerd-control-plane chart # (remember to add any customizations you retrieved above) -$ helm install linkerd-control-plane \ +helm install linkerd-control-plane \ -n linkerd \ --set-file identityTrustAnchorsPEM=ca.crt \ --set-file identity.issuer.tls.crtPEM=issuer.crt \ @@ -452,8 +452,8 @@ $ helm install linkerd-control-plane \ linkerd/linkerd-control-plane # Optional: if using TrafficSplit CRs -$ helm repo add l5d-smi https://linkerd.github.io/linkerd-smi -$ helm install linkerd-smi -n linkerd-smi --create-namespace l5d-smi/linkerd-smi +helm repo add l5d-smi https://linkerd.github.io/linkerd-smi +helm install linkerd-smi -n linkerd-smi --create-namespace l5d-smi/linkerd-smi ``` ##### Cleaning up the old linkerd2 Helm release @@ -464,7 +464,7 @@ remove the Helm release config for the old `linkerd2` chart (assuming you used the "Secret" storage backend, which is the default): ```bash -$ kubectl -n default delete secret \ +kubectl -n default delete secret \ --field-selector type=helm.sh/release.v1 \ -l name=linkerd,owner=helm ``` diff --git a/linkerd.io/content/2.18/reference/cli/check.md b/linkerd.io/content/2.18/reference/cli/check.md index 7cd61cd237..67a2486908 100644 --- a/linkerd.io/content/2.18/reference/cli/check.md +++ b/linkerd.io/content/2.18/reference/cli/check.md @@ -12,7 +12,7 @@ for a full list of all the possible checks, what they do and how to fix them. ## Example output ```bash -$ linkerd check +linkerd check kubernetes-api -------------- √ can initialize the client diff --git a/linkerd.io/content/2.18/reference/iptables.md b/linkerd.io/content/2.18/reference/iptables.md index 67a7ea89de..9b4d229a59 100644 --- a/linkerd.io/content/2.18/reference/iptables.md +++ b/linkerd.io/content/2.18/reference/iptables.md @@ -164,7 +164,7 @@ Alternatively, if you want to inspect the iptables rules created for a pod, you can retrieve them through the following command: ```bash -$ kubectl -n logs linkerd-init +kubectl -n logs linkerd-init # where is the name of the pod # you want to see the iptables rules for ``` diff --git a/linkerd.io/content/2.18/tasks/configuring-dynamic-request-routing.md b/linkerd.io/content/2.18/tasks/configuring-dynamic-request-routing.md index 004b50ded6..a44d12a1a5 100644 --- a/linkerd.io/content/2.18/tasks/configuring-dynamic-request-routing.md +++ b/linkerd.io/content/2.18/tasks/configuring-dynamic-request-routing.md @@ -67,7 +67,7 @@ Requests to `/echo` on port 9898 to the frontend pod will get forwarded the pod pointed by the Service `backend-a-podinfo`: ```bash -$ curl -sX POST localhost:9898/echo \ +curl -sX POST localhost:9898/echo \ | grep -o 'PODINFO_UI_MESSAGE=. backend' PODINFO_UI_MESSAGE=A backend @@ -132,7 +132,7 @@ the `backend-a-podinfo` Service. The previous requests should still reach `backend-a-podinfo` only: ```bash -$ curl -sX POST localhost:9898/echo \ +curl -sX POST localhost:9898/echo \ | grep -o 'PODINFO_UI_MESSAGE=. 
backend' PODINFO_UI_MESSAGE=A backend @@ -142,7 +142,7 @@ But if we add the `x-request-id: alternative` header, they get routed to `backend-b-podinfo`: ```bash -$ curl -sX POST \ +curl -sX POST \ -H 'x-request-id: alternative' \ localhost:9898/echo \ | grep -o 'PODINFO_UI_MESSAGE=. backend' diff --git a/linkerd.io/content/2.18/tasks/configuring-per-route-policy.md b/linkerd.io/content/2.18/tasks/configuring-per-route-policy.md index 011c10ff9e..98fb81a708 100644 --- a/linkerd.io/content/2.18/tasks/configuring-per-route-policy.md +++ b/linkerd.io/content/2.18/tasks/configuring-per-route-policy.md @@ -30,7 +30,7 @@ haven't already done this. Inject and install the Books demo application: ```bash -$ kubectl create ns booksapp && \ +kubectl create ns booksapp && \ curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/booksapp.yml \ | linkerd inject - \ | kubectl -n booksapp apply -f - @@ -44,21 +44,21 @@ run in the `booksapp` namespace. Confirm that the Linkerd data plane was injected successfully: ```bash -$ linkerd check -n booksapp --proxy -o short +linkerd check -n booksapp --proxy -o short ``` You can take a quick look at all the components that were added to your cluster by running: ```bash -$ kubectl -n booksapp get all +kubectl -n booksapp get all ``` Once the rollout has completed successfully, you can access the app itself by port-forwarding `webapp` locally: ```bash -$ kubectl -n booksapp port-forward svc/webapp 7000 & +kubectl -n booksapp port-forward svc/webapp 7000 & ``` Open [http://localhost:7000/](http://localhost:7000/) in your browser to see the @@ -87,7 +87,7 @@ First, let's run the `linkerd viz authz` command to list the authorization resources that currently exist for the `authors` deployment: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 default default:all-unauthenticated default/all-unauthenticated 0.0rps 70.31% 8.1rps 1ms 43ms 49ms probe default:all-unauthenticated default/probe 0.0rps 100.00% 0.3rps 1ms 1ms 1ms @@ -124,7 +124,7 @@ Now that we've defined a [`Server`] for the authors `Deployment`, we can run the currently unauthorized: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 default authors-server 9.5rps 0.00% 0.0rps 0ms 0ms 0ms probe authors-server default/probe 0.0rps 100.00% 0.1rps 1ms 1ms 1ms @@ -312,7 +312,7 @@ network (0.0.0.0). Running `linkerd viz authz` again, we can now see that our new policies exist: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 authors-get-route authors-server authorizationpolicy/authors-get-policy 0.0rps 100.00% 0.1rps 2ms 2ms 2ms authors-probe-route authors-server authorizationpolicy/authors-probe-policy 0.0rps 100.00% 0.1rps 1ms 1ms 1ms @@ -383,7 +383,7 @@ requests, but we haven't _authorized_ requests to that route. 
Running the requests to `authors-modify-route`: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 authors-get-route authors-server authorizationpolicy/authors-get-policy - - - - - - authors-modify-route authors-server 9.7rps 0.00% 0.0rps 0ms 0ms 0ms @@ -442,7 +442,7 @@ Running the `linkerd viz authz` command one last time, we now see that all traffic is authorized: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 authors-get-route authors-server authorizationpolicy/authors-get-policy 0.0rps 100.00% 0.1rps 0ms 0ms 0ms authors-modify-route authors-server authorizationpolicy/authors-modify-policy 0.0rps 100.00% 0.0rps 0ms 0ms 0ms diff --git a/linkerd.io/content/2.18/tasks/managing-egress-traffic.md b/linkerd.io/content/2.18/tasks/managing-egress-traffic.md index a43eadb61a..a4a7155edd 100644 --- a/linkerd.io/content/2.18/tasks/managing-egress-traffic.md +++ b/linkerd.io/content/2.18/tasks/managing-egress-traffic.md @@ -70,7 +70,7 @@ Now SSH into the client container and start generating some external traffic: ```bash kubectl -n egress-test exec -it client -c client -- sh -$ while sleep 1; do curl -s http://httpbin.org/get ; done +while sleep 1; do curl -s http://httpbin.org/get ; done ``` In a separate shell, you can use the Linkerd diagnostics command to visualize @@ -235,7 +235,7 @@ Interestingly enough though, if we go back to our client shell and we try to initiate HTTPS traffic to the same service, it will not be allowed: ```bash -~ $ curl -v https://httpbin.org/get +curl -v https://httpbin.org/get curl: (35) TLS connect error: error:00000000:lib(0)::reason(0) ``` @@ -458,7 +458,7 @@ Now let's verify all works as expected: ```bash # plaintext traffic goes as expected to the /get path -$ curl http://httpbin.org/get +curl http://httpbin.org/get { "args": {}, "headers": { @@ -472,14 +472,14 @@ $ curl http://httpbin.org/get } # encrypted traffic can target all paths and hosts -$ curl https://httpbin.org/ip +curl https://httpbin.org/ip { "origin": "51.116.126.217" } # arbitrary unencrypted traffic goes to the internal service -$ curl http://google.com +curl http://google.com { "requestUID": "in:http-sid:terminus-grpc:-1-h1:80-190120723", "payload": "You cannot go there right now"} diff --git a/linkerd.io/content/2.18/tasks/multicluster-using-statefulsets.md b/linkerd.io/content/2.18/tasks/multicluster-using-statefulsets.md index 81969979a0..83c638a4ae 100644 --- a/linkerd.io/content/2.18/tasks/multicluster-using-statefulsets.md +++ b/linkerd.io/content/2.18/tasks/multicluster-using-statefulsets.md @@ -48,8 +48,8 @@ The first step is to clone the demo repository on your local machine. ```sh # clone example repository -$ git clone git@github.com:linkerd/l2d-k3d-statefulset.git -$ cd l2d-k3d-statefulset +git clone git@github.com:linkerd/l2d-k3d-statefulset.git +cd l2d-k3d-statefulset ``` The second step consists of creating two `k3d` clusters named `east` and `west`, @@ -60,10 +60,10 @@ everything. ```sh # create k3d clusters -$ ./create.sh +./create.sh # list the clusters -$ k3d cluster list +k3d cluster list NAME SERVERS AGENTS LOADBALANCER east 1/1 0/0 true west 1/1 0/0 true @@ -77,10 +77,10 @@ controllers and links are generated for both clusters. 
```sh # Install Linkerd and multicluster, output to check should be a success -$ ./install.sh +./install.sh # Next, link the two clusters together -$ ./link.sh +./link.sh ``` Perfect! If you've made it this far with no errors, then it's a good sign. In @@ -100,17 +100,17 @@ communication. First, we will deploy our pods and services: ```sh # deploy services and mesh namespaces -$ ./deploy.sh +./deploy.sh # verify both clusters # # verify east -$ kubectl --context=k3d-east get pods +kubectl --context=k3d-east get pods NAME READY STATUS RESTARTS AGE curl-56dc7d945d-96r6p 2/2 Running 0 7s # verify west has headless service -$ kubectl --context=k3d-west get services +kubectl --context=k3d-west get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.43.0.1 443/TCP 10m nginx-svc ClusterIP None 80/TCP 8s @@ -118,7 +118,7 @@ nginx-svc ClusterIP None 80/TCP 8s # verify west has statefulset # # this may take a while to come up -$ kubectl --context=k3d-west get pods +kubectl --context=k3d-west get pods NAME READY STATUS RESTARTS AGE nginx-set-0 2/2 Running 0 53s nginx-set-1 2/2 Running 0 43s @@ -129,7 +129,7 @@ Before we go further, let's have a look at the endpoints object for the `nginx-svc`: ```sh -$ kubectl --context=k3d-west get endpoints nginx-svc -o yaml +kubectl --context=k3d-west get endpoints nginx-svc -o yaml ... subsets: - addresses: @@ -169,23 +169,23 @@ would get an answer back. We can test this out by applying the curl pod to the `west` cluster: ```sh -$ kubectl --context=k3d-west apply -f east/curl.yml -$ kubectl --context=k3d-west get pods +kubectl --context=k3d-west apply -f east/curl.yml +kubectl --context=k3d-west get pods NAME READY STATUS RESTARTS AGE nginx-set-0 2/2 Running 0 5m8s nginx-set-1 2/2 Running 0 4m58s nginx-set-2 2/2 Running 0 4m51s curl-56dc7d945d-s4n8j 0/2 PodInitializing 0 4s -$ kubectl --context=k3d-west exec -it curl-56dc7d945d-s4n8j -c curl -- sh -/$ # prompt for curl pod +kubectl --context=k3d-west exec -it curl-56dc7d945d-s4n8j -c curl -- sh +/# prompt for curl pod ``` If we now curl one of these instances, we will get back a response. ```sh # exec'd on the pod -/ $ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local +/ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local " @@ -217,10 +217,10 @@ Now, let's do the same, but this time from the `east` cluster. We will first export the service. ```sh -$ kubectl --context=k3d-west label service nginx-svc mirror.linkerd.io/exported="true" +kubectl --context=k3d-west label service nginx-svc mirror.linkerd.io/exported="true" service/nginx-svc labeled -$ kubectl --context=k3d-east get services +kubectl --context=k3d-east get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.43.0.1 443/TCP 20h nginx-svc-west ClusterIP None 80/TCP 29s @@ -234,7 +234,7 @@ endpoints for `nginx-svc-west` will have the same hostnames, but each hostname will point to one of the services we see above: ```sh -$ kubectl --context=k3d-east get endpoints nginx-svc-k3d-west -o yaml +kubectl --context=k3d-east get endpoints nginx-svc-k3d-west -o yaml subsets: - addresses: - hostname: nginx-set-0 @@ -250,17 +250,17 @@ cluster (`west`), will be mirrored as a clusterIP service. We will see in a second why this matters. 
```sh -$ kubectl --context=k3d-east get pods +kubectl --context=k3d-east get pods NAME READY STATUS RESTARTS AGE curl-56dc7d945d-96r6p 2/2 Running 0 23m # exec and curl -$ kubectl --context=k3d-east exec curl-56dc7d945d-96r6p -it -c curl -- sh +kubectl --context=k3d-east exec curl-56dc7d945d-96r6p -it -c curl -- sh # we want to curl the same hostname we see in the endpoints object above. # however, the service and cluster domain will now be different, since we # are in a different cluster. # -/ $ curl nginx-set-0.nginx-svc-k3d-west.default.svc.east.cluster.local +/ curl nginx-set-0.nginx-svc-k3d-west.default.svc.east.cluster.local @@ -328,8 +328,8 @@ validation. To clean-up, you can remove both clusters entirely using the k3d CLI: ```sh -$ k3d cluster delete east +k3d cluster delete east cluster east deleted -$ k3d cluster delete west +k3d cluster delete west cluster west deleted ``` diff --git a/linkerd.io/content/2.18/tasks/multicluster.md b/linkerd.io/content/2.18/tasks/multicluster.md index 3a80b3f3ed..2779b7616a 100644 --- a/linkerd.io/content/2.18/tasks/multicluster.md +++ b/linkerd.io/content/2.18/tasks/multicluster.md @@ -506,9 +506,9 @@ To cleanup the multicluster control plane, you can run: ```bash # Delete the link CR -$ kubectl --context=west -n linkerd-multicluster delete links east +kubectl --context=west -n linkerd-multicluster delete links east # Delete the test namespace and uninstall multicluster -$ for ctx in west east; do \ +for ctx in west east; do \ kubectl --context=${ctx} delete ns test; \ linkerd --context=${ctx} multicluster uninstall | kubectl --context=${ctx} delete -f - ; \ done diff --git a/linkerd.io/content/2.18/tasks/restricting-access.md b/linkerd.io/content/2.18/tasks/restricting-access.md index 5654518600..c9850725f7 100644 --- a/linkerd.io/content/2.18/tasks/restricting-access.md +++ b/linkerd.io/content/2.18/tasks/restricting-access.md @@ -21,9 +21,9 @@ haven't already done this. Inject and install the Emojivoto application: ```bash -$ linkerd inject https://run.linkerd.io/emojivoto.yml | kubectl apply -f - +linkerd inject https://run.linkerd.io/emojivoto.yml | kubectl apply -f - ... -$ linkerd check -n emojivoto --proxy -o short +linkerd check -n emojivoto --proxy -o short ... ``` diff --git a/linkerd.io/content/2.18/tasks/securing-linkerd-tap.md b/linkerd.io/content/2.18/tasks/securing-linkerd-tap.md index 8a802c890c..639f81692f 100644 --- a/linkerd.io/content/2.18/tasks/securing-linkerd-tap.md +++ b/linkerd.io/content/2.18/tasks/securing-linkerd-tap.md @@ -60,7 +60,7 @@ kubectl auth can-i watch deployments.tap.linkerd.io -n emojivoto --as $(whoami) You can also use the Linkerd CLI's `--as` flag to confirm: ```bash -$ linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) +linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) Cannot connect to Linkerd Viz: namespaces is forbidden: User "XXXX" cannot list resource "namespaces" in API group "" at the cluster scope Validate the install with: linkerd viz check ... 
@@ -77,7 +77,7 @@ To enable tap access to all resources in all namespaces, you may bind your user to the `linkerd-linkerd-tap-admin` ClusterRole, installed by default: ```bash -$ kubectl describe clusterroles/linkerd-linkerd-viz-tap-admin +kubectl describe clusterroles/linkerd-linkerd-viz-tap-admin Name: linkerd-linkerd-viz-tap-admin Labels: component=tap linkerd.io/extension=viz @@ -109,7 +109,7 @@ kubectl create clusterrolebinding \ You can verify you now have tap access with: ```bash -$ linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) +linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) req id=3:0 proxy=in src=10.244.0.1:37392 dst=10.244.0.13:9996 tls=not_provided_by_remote :method=GET :authority=10.244.0.13:9996 :path=/ping ... ``` @@ -143,14 +143,14 @@ Because GCloud provides this additional level of access, there are cases where not. To validate this, check whether your GCloud user has Tap access: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces yes ``` And then validate whether your RBAC user has Tap access: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as $(gcloud config get-value account) +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as $(gcloud config get-value account) no - no RBAC policy matched ``` @@ -187,14 +187,14 @@ privileges necessary to tap resources. To confirm: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web yes ``` This access is enabled via a `linkerd-linkerd-viz-web-admin` ClusterRoleBinding: ```bash -$ kubectl describe clusterrolebindings/linkerd-linkerd-viz-web-admin +kubectl describe clusterrolebindings/linkerd-linkerd-viz-web-admin Name: linkerd-linkerd-viz-web-admin Labels: component=web linkerd.io/extensions=viz @@ -227,6 +227,6 @@ kubectl delete clusterrolebindings/linkerd-linkerd-viz-web-admin To confirm: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web no ``` diff --git a/linkerd.io/content/2.18/tasks/troubleshooting.md b/linkerd.io/content/2.18/tasks/troubleshooting.md index ca2b5b104d..1fdeb9710b 100644 --- a/linkerd.io/content/2.18/tasks/troubleshooting.md +++ b/linkerd.io/content/2.18/tasks/troubleshooting.md @@ -230,7 +230,7 @@ Example failure: Ensure the Linkerd ClusterRoles exist: ```bash -$ kubectl get clusterroles | grep linkerd +kubectl get clusterroles | grep linkerd linkerd-linkerd-destination 9d linkerd-linkerd-identity 9d linkerd-linkerd-proxy-injector 9d @@ -240,7 +240,7 @@ linkerd-policy 9d Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create clusterroles +kubectl auth can-i create clusterroles yes ``` @@ -257,7 +257,7 @@ Example failure: Ensure the Linkerd ClusterRoleBindings exist: ```bash -$ kubectl get clusterrolebindings | grep linkerd +kubectl get clusterrolebindings | grep linkerd linkerd-linkerd-destination 9d linkerd-linkerd-identity 9d linkerd-linkerd-proxy-injector 9d @@ -267,7 +267,7 @@ linkerd-destination-policy 9d Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create clusterrolebindings +kubectl auth can-i create clusterrolebindings 
yes ``` @@ -284,7 +284,7 @@ Example failure: Ensure the Linkerd ServiceAccounts exist: ```bash -$ kubectl -n linkerd get serviceaccounts +kubectl -n linkerd get serviceaccounts NAME SECRETS AGE default 1 14m linkerd-destination 1 14m @@ -297,7 +297,7 @@ Also ensure you have permission to create ServiceAccounts in the Linkerd namespace: ```bash -$ kubectl -n linkerd auth can-i create serviceaccounts +kubectl -n linkerd auth can-i create serviceaccounts yes ``` @@ -314,7 +314,7 @@ Example failure: Ensure the Linkerd CRD exists: ```bash -$ kubectl get customresourcedefinitions +kubectl get customresourcedefinitions NAME CREATED AT serviceprofiles.linkerd.io 2019-04-25T21:47:31Z ``` @@ -322,7 +322,7 @@ serviceprofiles.linkerd.io 2019-04-25T21:47:31Z Also ensure you have permission to create CRDs: ```bash -$ kubectl auth can-i create customresourcedefinitions +kubectl auth can-i create customresourcedefinitions yes ``` @@ -339,14 +339,14 @@ Example failure: Ensure the Linkerd MutatingWebhookConfigurations exists: ```bash -$ kubectl get mutatingwebhookconfigurations | grep linkerd +kubectl get mutatingwebhookconfigurations | grep linkerd linkerd-proxy-injector-webhook-config 2019-07-01T13:13:26Z ``` Also ensure you have permission to create MutatingWebhookConfigurations: ```bash -$ kubectl auth can-i create mutatingwebhookconfigurations +kubectl auth can-i create mutatingwebhookconfigurations yes ``` @@ -363,14 +363,14 @@ Example failure: Ensure the Linkerd ValidatingWebhookConfiguration exists: ```bash -$ kubectl get validatingwebhookconfigurations | grep linkerd +kubectl get validatingwebhookconfigurations | grep linkerd linkerd-sp-validator-webhook-config 2019-07-01T13:13:26Z ``` Also ensure you have permission to create ValidatingWebhookConfigurations: ```bash -$ kubectl auth can-i create validatingwebhookconfigurations +kubectl auth can-i create validatingwebhookconfigurations yes ``` @@ -418,7 +418,7 @@ Example failure: Ensure the Linkerd ConfigMap exists: ```bash -$ kubectl -n linkerd get configmap/linkerd-config +kubectl -n linkerd get configmap/linkerd-config NAME DATA AGE linkerd-config 3 61m ``` @@ -426,7 +426,7 @@ linkerd-config 3 61m Also ensure you have permission to create ConfigMaps: ```bash -$ kubectl -n linkerd auth can-i create configmap +kubectl -n linkerd auth can-i create configmap yes ``` @@ -780,7 +780,7 @@ Example failure: Verify the state of the control plane pods with: ```bash -$ kubectl -n linkerd get po +kubectl -n linkerd get po NAME READY STATUS RESTARTS AGE linkerd-destination-5fd7b5d466-szgqm 2/2 Running 1 12m linkerd-identity-54df78c479-hbh5m 2/2 Running 0 12m @@ -862,7 +862,7 @@ Ensure you can connect to the Linkerd version check endpoint from the environment the `linkerd` cli is running: ```bash -$ curl "https://versioncheck.linkerd.io/version.json?version=edge-19.1.2&uuid=test-uuid&source=cli" +curl "https://versioncheck.linkerd.io/version.json?version=edge-19.1.2&uuid=test-uuid&source=cli" {"stable":"stable-2.1.0","edge":"edge-19.1.2"} ``` @@ -961,7 +961,7 @@ normally. Example failure: ```bash -$ linkerd check --proxy --namespace foo +linkerd check --proxy --namespace foo ... 
× data plane namespace exists The "foo" namespace does not exist @@ -1147,7 +1147,7 @@ Example error: Ensure that the linkerd-cni-config ConfigMap exists in the CNI namespace: ```bash -$ kubectl get cm linkerd-cni-config -n linkerd-cni +kubectl get cm linkerd-cni-config -n linkerd-cni NAME PRIV CAPS SELINUX RUNASUSER FSGROUP SUPGROUP READONLYROOTFS VOLUMES linkerd-linkerd-cni-cni false RunAsAny RunAsAny RunAsAny RunAsAny false hostPath,secret ``` @@ -1155,7 +1155,7 @@ linkerd-linkerd-cni-cni false RunAsAny RunAsAny RunAsAny RunAs Also ensure you have permission to create ConfigMaps: ```bash -$ kubectl auth can-i create ConfigMaps +kubectl auth can-i create ConfigMaps yes ``` @@ -1172,7 +1172,7 @@ Example error: Ensure that the cluster role exists: ```bash -$ kubectl get clusterrole linkerd-cni +kubectl get clusterrole linkerd-cni NAME AGE linkerd-cni 54m ``` @@ -1180,7 +1180,7 @@ linkerd-cni 54m Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create ClusterRoles +kubectl auth can-i create ClusterRoles yes ``` @@ -1197,7 +1197,7 @@ Example error: Ensure that the cluster role binding exists: ```bash -$ kubectl get clusterrolebinding linkerd-cni +kubectl get clusterrolebinding linkerd-cni NAME AGE linkerd-cni 54m ``` @@ -1205,7 +1205,7 @@ linkerd-cni 54m Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create ClusterRoleBindings +kubectl auth can-i create ClusterRoleBindings yes ``` @@ -1222,7 +1222,7 @@ Example error: Ensure that the CNI service account exists in the CNI namespace: ```bash -$ kubectl get ServiceAccount linkerd-cni -n linkerd-cni +kubectl get ServiceAccount linkerd-cni -n linkerd-cni NAME SECRETS AGE linkerd-cni 1 45m ``` @@ -1230,7 +1230,7 @@ linkerd-cni 1 45m Also ensure you have permission to create ServiceAccount: ```bash -$ kubectl auth can-i create ServiceAccounts -n linkerd-cni +kubectl auth can-i create ServiceAccounts -n linkerd-cni yes ``` @@ -1247,7 +1247,7 @@ Example error: Ensure that the CNI daemonset exists in the CNI namespace: ```bash -$ kubectl get ds -n linkerd-cni +kubectl get ds -n linkerd-cni NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE linkerd-cni 1 1 1 1 1 beta.kubernetes.io/os=linux 14m ``` @@ -1255,7 +1255,7 @@ linkerd-cni 1 1 1 1 1 beta.kubernet Also ensure you have permission to create DaemonSets: ```bash -$ kubectl auth can-i create DaemonSets -n linkerd-cni +kubectl auth can-i create DaemonSets -n linkerd-cni yes ``` @@ -1272,7 +1272,7 @@ Example failure: Ensure that all the CNI pods are running: ```bash -$ kubectl get po -n linkerd-cn +kubectl get po -n linkerd-cni NAME READY STATUS RESTARTS AGE linkerd-cni-rzp2q 1/1 Running 0 9m20s linkerd-cni-mf564 1/1 Running 0 9m22s @@ -1282,7 +1282,7 @@ linkerd-cni-p5670 1/1 Running 0 9m25s Ensure that all pods have finished the deployment of the CNI config and binary: ```bash -$ kubectl logs linkerd-cni-rzp2q -n linkerd-cni +kubectl logs linkerd-cni-rzp2q -n linkerd-cni Wrote linkerd CNI binaries to /host/opt/cni/bin Created CNI config /host/etc/cni/net.d/10-kindnet.conflist Done configuring CNI. Sleep=true @@ -1310,7 +1310,7 @@ Make sure multicluster extension is correctly installed and that the `links.multicluster.linkerd.io` CRD is present. ```bash -$ kubectl get crds | grep multicluster +kubectl get crds | grep multicluster NAME CREATED AT links.multicluster.linkerd.io 2021-03-10T09:58:10Z ``` @@ -1400,7 +1400,7 @@ the rules section. 
Expected rules for `linkerd-service-mirror-access-local-resources` cluster role: ```bash -$ kubectl --context=local get clusterrole linkerd-service-mirror-access-local-resources -o yaml +kubectl --context=local get clusterrole linkerd-service-mirror-access-local-resources -o yaml kind: ClusterRole metadata: labels: @@ -1433,7 +1433,7 @@ rules: Expected rules for `linkerd-service-mirror-read-remote-creds` role: ```bash -$ kubectl --context=local get role linkerd-service-mirror-read-remote-creds -n linkerd-multicluster -o yaml +kubectl --context=local get role linkerd-service-mirror-read-remote-creds -n linkerd-multicluster -o yaml kind: Role metadata: labels: @@ -1466,7 +1466,7 @@ everything to start up. If this is a permanent error, you'll want to validate the state of the controller pod with: ```bash -$ kubectl --all-namespaces get po --selector linkerd.io/control-plane-component=linkerd-service-mirror +kubectl --all-namespaces get po --selector linkerd.io/control-plane-component=linkerd-service-mirror NAME READY STATUS RESTARTS AGE linkerd-service-mirror-7bb8ff5967-zg265 2/2 Running 0 50m ``` @@ -1612,7 +1612,7 @@ Example failure: Ensure the linkerd-viz extension ClusterRoles exist: ```bash -$ kubectl get clusterroles | grep linkerd-viz +kubectl get clusterroles | grep linkerd-viz linkerd-linkerd-viz-metrics-api 2021-01-26T18:02:17Z linkerd-linkerd-viz-prometheus 2021-01-26T18:02:17Z linkerd-linkerd-viz-tap 2021-01-26T18:02:17Z @@ -1623,7 +1623,7 @@ linkerd-linkerd-viz-web-check 2021-01-2 Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create clusterroles +kubectl auth can-i create clusterroles yes ``` @@ -1640,7 +1640,7 @@ Example failure: Ensure the linkerd-viz extension ClusterRoleBindings exist: ```bash -$ kubectl get clusterrolebindings | grep linkerd-viz +kubectl get clusterrolebindings | grep linkerd-viz linkerd-linkerd-viz-metrics-api ClusterRole/linkerd-linkerd-viz-metrics-api 18h linkerd-linkerd-viz-prometheus ClusterRole/linkerd-linkerd-viz-prometheus 18h linkerd-linkerd-viz-tap ClusterRole/linkerd-linkerd-viz-tap 18h @@ -1652,7 +1652,7 @@ linkerd-linkerd-viz-web-check ClusterRole/linkerd-linke Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create clusterrolebindings +kubectl auth can-i create clusterrolebindings yes ``` @@ -1741,7 +1741,7 @@ requirements in the cluster: Ensure all the linkerd-viz pods are injected ```bash -$ kubectl -n linkerd-viz get pods +kubectl -n linkerd-viz get pods NAME READY STATUS RESTARTS AGE grafana-68cddd7cc8-nrv4h 2/2 Running 3 18h metrics-api-77f684f7c7-hnw8r 2/2 Running 2 18h @@ -1765,7 +1765,7 @@ Make sure that the `proxy-injector` is working correctly by running Ensure all the linkerd-viz pods are running with 2/2 ```bash -$ kubectl -n linkerd-viz get pods +kubectl -n linkerd-viz get pods NAME READY STATUS RESTARTS AGE grafana-68cddd7cc8-nrv4h 2/2 Running 3 18h metrics-api-77f684f7c7-hnw8r 2/2 Running 2 18h @@ -1948,7 +1948,7 @@ versions in sync by updating either the CLI or linkerd-jaeger as necessary. 
Ensure all the jaeger pods are injected ```bash -$ kubectl -n linkerd-jaeger get pods +kubectl -n linkerd-jaeger get pods NAME READY STATUS RESTARTS AGE collector-69cc44dfbc-rhpfg 2/2 Running 0 11s jaeger-6f98d5c979-scqlq 2/2 Running 0 11s @@ -1969,7 +1969,7 @@ Make sure that the `proxy-injector` is working correctly by running Ensure all the linkerd-jaeger pods are running with 2/2 ```bash -$ kubectl -n linkerd-jaeger get pods +kubectl -n linkerd-jaeger get pods NAME READY STATUS RESTARTS AGE jaeger-injector-548684d74b-bcq5h 2/2 Running 0 5s collector-69cc44dfbc-wqf6s 2/2 Running 0 5s @@ -2018,7 +2018,7 @@ Ensure you can connect to the Linkerd Buoyant version check endpoint from the environment the `linkerd` cli is running: ```bash -$ curl https://buoyant.cloud/version.json +curl https://buoyant.cloud/version.json {"linkerd-buoyant":"v0.4.4"} ``` @@ -2083,7 +2083,7 @@ linkerd-buoyant install | kubectl apply -f - Ensure that the cluster role exists: ```bash -$ kubectl get clusterrole buoyant-cloud-agent +kubectl get clusterrole buoyant-cloud-agent NAME CREATED AT buoyant-cloud-agent 2020-11-13T00:59:50Z ``` @@ -2091,7 +2091,7 @@ buoyant-cloud-agent 2020-11-13T00:59:50Z Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create ClusterRoles +kubectl auth can-i create ClusterRoles yes ``` @@ -2106,7 +2106,7 @@ yes Ensure that the cluster role binding exists: ```bash -$ kubectl get clusterrolebinding buoyant-cloud-agent +kubectl get clusterrolebinding buoyant-cloud-agent NAME ROLE AGE buoyant-cloud-agent ClusterRole/buoyant-cloud-agent 301d ``` @@ -2114,7 +2114,7 @@ buoyant-cloud-agent ClusterRole/buoyant-cloud-agent 301d Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create ClusterRoleBindings +kubectl auth can-i create ClusterRoleBindings yes ``` @@ -2129,7 +2129,7 @@ yes Ensure that the service account exists: ```bash -$ kubectl -n buoyant-cloud get serviceaccount buoyant-cloud-agent +kubectl -n buoyant-cloud get serviceaccount buoyant-cloud-agent NAME SECRETS AGE buoyant-cloud-agent 1 301d ``` @@ -2137,7 +2137,7 @@ buoyant-cloud-agent 1 301d Also ensure you have permission to create ServiceAccounts: ```bash -$ kubectl -n buoyant-cloud auth can-i create ServiceAccount +kubectl -n buoyant-cloud auth can-i create ServiceAccount yes ``` @@ -2152,7 +2152,7 @@ yes Ensure that the secret exists: ```bash -$ kubectl -n buoyant-cloud get secret buoyant-cloud-id +kubectl -n buoyant-cloud get secret buoyant-cloud-id NAME TYPE DATA AGE buoyant-cloud-id Opaque 4 301d ``` @@ -2160,7 +2160,7 @@ buoyant-cloud-id Opaque 4 301d Also ensure you have permission to create ServiceAccounts: ```bash -$ kubectl -n buoyant-cloud auth can-i create ServiceAccount +kubectl -n buoyant-cloud auth can-i create ServiceAccount yes ``` @@ -2198,7 +2198,7 @@ everything to start up. 
If this is a permanent error, you'll want to validate the state of the `buoyant-cloud-agent` Deployment with: ```bash -$ kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-agent +kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-agent NAME READY STATUS RESTARTS AGE buoyant-cloud-agent-6b8c6888d7-htr7d 2/2 Running 0 156m ``` @@ -2221,7 +2221,7 @@ Ensure the `buoyant-cloud-agent` pod is injected, the `READY` column should show `2/2`: ```bash -$ kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-agent +kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-agent NAME READY STATUS RESTARTS AGE buoyant-cloud-agent-6b8c6888d7-htr7d 2/2 Running 0 161m ``` @@ -2240,7 +2240,7 @@ Make sure that the `proxy-injector` is working correctly by running Check the version with: ```bash -$ linkerd-buoyant version +linkerd-buoyant version CLI version: v0.4.4 Agent version: v0.4.4 ``` @@ -2299,7 +2299,7 @@ everything to start up. If this is a permanent error, you'll want to validate the state of the `buoyant-cloud-metrics` DaemonSet with: ```bash -$ kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-metrics +kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-metrics NAME READY STATUS RESTARTS AGE buoyant-cloud-metrics-kt9mv 2/2 Running 0 163m buoyant-cloud-metrics-q8jhj 2/2 Running 0 163m @@ -2325,7 +2325,7 @@ Ensure the `buoyant-cloud-metrics` pods are injected, the `READY` column should show `2/2`: ```bash -$ kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-metrics +kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-metrics NAME READY STATUS RESTARTS AGE buoyant-cloud-metrics-kt9mv 2/2 Running 0 166m buoyant-cloud-metrics-q8jhj 2/2 Running 0 166m @@ -2347,7 +2347,7 @@ Make sure that the `proxy-injector` is working correctly by running Check the version with: ```bash -$ kubectl -n buoyant-cloud get daemonset/buoyant-cloud-metrics -o jsonpath='{.metadata.labels}' +kubectl -n buoyant-cloud get daemonset/buoyant-cloud-metrics -o jsonpath='{.metadata.labels}' {"app.kubernetes.io/name":"metrics","app.kubernetes.io/part-of":"buoyant-cloud","app.kubernetes.io/version":"v0.4.4"} ``` diff --git a/linkerd.io/content/2.19/reference/cli/check.md b/linkerd.io/content/2.19/reference/cli/check.md index 7cd61cd237..67a2486908 100644 --- a/linkerd.io/content/2.19/reference/cli/check.md +++ b/linkerd.io/content/2.19/reference/cli/check.md @@ -12,7 +12,7 @@ for a full list of all the possible checks, what they do and how to fix them. 
## Example output ```bash -$ linkerd check +linkerd check kubernetes-api -------------- √ can initialize the client diff --git a/linkerd.io/content/2.19/reference/iptables.md b/linkerd.io/content/2.19/reference/iptables.md index 67a7ea89de..9b4d229a59 100644 --- a/linkerd.io/content/2.19/reference/iptables.md +++ b/linkerd.io/content/2.19/reference/iptables.md @@ -164,7 +164,7 @@ Alternatively, if you want to inspect the iptables rules created for a pod, you can retrieve them through the following command: ```bash -$ kubectl -n logs linkerd-init +kubectl -n logs linkerd-init # where is the name of the pod # you want to see the iptables rules for ``` diff --git a/linkerd.io/content/2.19/tasks/configuring-dynamic-request-routing.md b/linkerd.io/content/2.19/tasks/configuring-dynamic-request-routing.md index 004b50ded6..a44d12a1a5 100644 --- a/linkerd.io/content/2.19/tasks/configuring-dynamic-request-routing.md +++ b/linkerd.io/content/2.19/tasks/configuring-dynamic-request-routing.md @@ -67,7 +67,7 @@ Requests to `/echo` on port 9898 to the frontend pod will get forwarded the pod pointed by the Service `backend-a-podinfo`: ```bash -$ curl -sX POST localhost:9898/echo \ +curl -sX POST localhost:9898/echo \ | grep -o 'PODINFO_UI_MESSAGE=. backend' PODINFO_UI_MESSAGE=A backend @@ -132,7 +132,7 @@ the `backend-a-podinfo` Service. The previous requests should still reach `backend-a-podinfo` only: ```bash -$ curl -sX POST localhost:9898/echo \ +curl -sX POST localhost:9898/echo \ | grep -o 'PODINFO_UI_MESSAGE=. backend' PODINFO_UI_MESSAGE=A backend @@ -142,7 +142,7 @@ But if we add the `x-request-id: alternative` header, they get routed to `backend-b-podinfo`: ```bash -$ curl -sX POST \ +curl -sX POST \ -H 'x-request-id: alternative' \ localhost:9898/echo \ | grep -o 'PODINFO_UI_MESSAGE=. backend' diff --git a/linkerd.io/content/2.19/tasks/configuring-per-route-policy.md b/linkerd.io/content/2.19/tasks/configuring-per-route-policy.md index 011c10ff9e..98fb81a708 100644 --- a/linkerd.io/content/2.19/tasks/configuring-per-route-policy.md +++ b/linkerd.io/content/2.19/tasks/configuring-per-route-policy.md @@ -30,7 +30,7 @@ haven't already done this. Inject and install the Books demo application: ```bash -$ kubectl create ns booksapp && \ +kubectl create ns booksapp && \ curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/booksapp.yml \ | linkerd inject - \ | kubectl -n booksapp apply -f - @@ -44,21 +44,21 @@ run in the `booksapp` namespace. 
Confirm that the Linkerd data plane was injected successfully: ```bash -$ linkerd check -n booksapp --proxy -o short +linkerd check -n booksapp --proxy -o short ``` You can take a quick look at all the components that were added to your cluster by running: ```bash -$ kubectl -n booksapp get all +kubectl -n booksapp get all ``` Once the rollout has completed successfully, you can access the app itself by port-forwarding `webapp` locally: ```bash -$ kubectl -n booksapp port-forward svc/webapp 7000 & +kubectl -n booksapp port-forward svc/webapp 7000 & ``` Open [http://localhost:7000/](http://localhost:7000/) in your browser to see the @@ -87,7 +87,7 @@ First, let's run the `linkerd viz authz` command to list the authorization resources that currently exist for the `authors` deployment: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 default default:all-unauthenticated default/all-unauthenticated 0.0rps 70.31% 8.1rps 1ms 43ms 49ms probe default:all-unauthenticated default/probe 0.0rps 100.00% 0.3rps 1ms 1ms 1ms @@ -124,7 +124,7 @@ Now that we've defined a [`Server`] for the authors `Deployment`, we can run the currently unauthorized: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 default authors-server 9.5rps 0.00% 0.0rps 0ms 0ms 0ms probe authors-server default/probe 0.0rps 100.00% 0.1rps 1ms 1ms 1ms @@ -312,7 +312,7 @@ network (0.0.0.0). Running `linkerd viz authz` again, we can now see that our new policies exist: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 authors-get-route authors-server authorizationpolicy/authors-get-policy 0.0rps 100.00% 0.1rps 2ms 2ms 2ms authors-probe-route authors-server authorizationpolicy/authors-probe-policy 0.0rps 100.00% 0.1rps 1ms 1ms 1ms @@ -383,7 +383,7 @@ requests, but we haven't _authorized_ requests to that route. 
Running the requests to `authors-modify-route`: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 authors-get-route authors-server authorizationpolicy/authors-get-policy - - - - - - authors-modify-route authors-server 9.7rps 0.00% 0.0rps 0ms 0ms 0ms @@ -442,7 +442,7 @@ Running the `linkerd viz authz` command one last time, we now see that all traffic is authorized: ```bash -$ linkerd viz authz -n booksapp deploy/authors +linkerd viz authz -n booksapp deploy/authors ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99 authors-get-route authors-server authorizationpolicy/authors-get-policy 0.0rps 100.00% 0.1rps 0ms 0ms 0ms authors-modify-route authors-server authorizationpolicy/authors-modify-policy 0.0rps 100.00% 0.0rps 0ms 0ms 0ms diff --git a/linkerd.io/content/2.19/tasks/managing-egress-traffic.md b/linkerd.io/content/2.19/tasks/managing-egress-traffic.md index a43eadb61a..a4a7155edd 100644 --- a/linkerd.io/content/2.19/tasks/managing-egress-traffic.md +++ b/linkerd.io/content/2.19/tasks/managing-egress-traffic.md @@ -70,7 +70,7 @@ Now SSH into the client container and start generating some external traffic: ```bash kubectl -n egress-test exec -it client -c client -- sh -$ while sleep 1; do curl -s http://httpbin.org/get ; done +while sleep 1; do curl -s http://httpbin.org/get ; done ``` In a separate shell, you can use the Linkerd diagnostics command to visualize @@ -235,7 +235,7 @@ Interestingly enough though, if we go back to our client shell and we try to initiate HTTPS traffic to the same service, it will not be allowed: ```bash -~ $ curl -v https://httpbin.org/get +curl -v https://httpbin.org/get curl: (35) TLS connect error: error:00000000:lib(0)::reason(0) ``` @@ -458,7 +458,7 @@ Now let's verify all works as expected: ```bash # plaintext traffic goes as expected to the /get path -$ curl http://httpbin.org/get +curl http://httpbin.org/get { "args": {}, "headers": { @@ -472,14 +472,14 @@ $ curl http://httpbin.org/get } # encrypted traffic can target all paths and hosts -$ curl https://httpbin.org/ip +curl https://httpbin.org/ip { "origin": "51.116.126.217" } # arbitrary unencrypted traffic goes to the internal service -$ curl http://google.com +curl http://google.com { "requestUID": "in:http-sid:terminus-grpc:-1-h1:80-190120723", "payload": "You cannot go there right now"} diff --git a/linkerd.io/content/2.19/tasks/multicluster-using-statefulsets.md b/linkerd.io/content/2.19/tasks/multicluster-using-statefulsets.md index 81969979a0..83c638a4ae 100644 --- a/linkerd.io/content/2.19/tasks/multicluster-using-statefulsets.md +++ b/linkerd.io/content/2.19/tasks/multicluster-using-statefulsets.md @@ -48,8 +48,8 @@ The first step is to clone the demo repository on your local machine. ```sh # clone example repository -$ git clone git@github.com:linkerd/l2d-k3d-statefulset.git -$ cd l2d-k3d-statefulset +git clone git@github.com:linkerd/l2d-k3d-statefulset.git +cd l2d-k3d-statefulset ``` The second step consists of creating two `k3d` clusters named `east` and `west`, @@ -60,10 +60,10 @@ everything. ```sh # create k3d clusters -$ ./create.sh +./create.sh # list the clusters -$ k3d cluster list +k3d cluster list NAME SERVERS AGENTS LOADBALANCER east 1/1 0/0 true west 1/1 0/0 true @@ -77,10 +77,10 @@ controllers and links are generated for both clusters. 
```sh # Install Linkerd and multicluster, output to check should be a success -$ ./install.sh +./install.sh # Next, link the two clusters together -$ ./link.sh +./link.sh ``` Perfect! If you've made it this far with no errors, then it's a good sign. In @@ -100,17 +100,17 @@ communication. First, we will deploy our pods and services: ```sh # deploy services and mesh namespaces -$ ./deploy.sh +./deploy.sh # verify both clusters # # verify east -$ kubectl --context=k3d-east get pods +kubectl --context=k3d-east get pods NAME READY STATUS RESTARTS AGE curl-56dc7d945d-96r6p 2/2 Running 0 7s # verify west has headless service -$ kubectl --context=k3d-west get services +kubectl --context=k3d-west get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.43.0.1 443/TCP 10m nginx-svc ClusterIP None 80/TCP 8s @@ -118,7 +118,7 @@ nginx-svc ClusterIP None 80/TCP 8s # verify west has statefulset # # this may take a while to come up -$ kubectl --context=k3d-west get pods +kubectl --context=k3d-west get pods NAME READY STATUS RESTARTS AGE nginx-set-0 2/2 Running 0 53s nginx-set-1 2/2 Running 0 43s @@ -129,7 +129,7 @@ Before we go further, let's have a look at the endpoints object for the `nginx-svc`: ```sh -$ kubectl --context=k3d-west get endpoints nginx-svc -o yaml +kubectl --context=k3d-west get endpoints nginx-svc -o yaml ... subsets: - addresses: @@ -169,23 +169,23 @@ would get an answer back. We can test this out by applying the curl pod to the `west` cluster: ```sh -$ kubectl --context=k3d-west apply -f east/curl.yml -$ kubectl --context=k3d-west get pods +kubectl --context=k3d-west apply -f east/curl.yml +kubectl --context=k3d-west get pods NAME READY STATUS RESTARTS AGE nginx-set-0 2/2 Running 0 5m8s nginx-set-1 2/2 Running 0 4m58s nginx-set-2 2/2 Running 0 4m51s curl-56dc7d945d-s4n8j 0/2 PodInitializing 0 4s -$ kubectl --context=k3d-west exec -it curl-56dc7d945d-s4n8j -c curl -- sh -/$ # prompt for curl pod +kubectl --context=k3d-west exec -it curl-56dc7d945d-s4n8j -c curl -- sh +/# prompt for curl pod ``` If we now curl one of these instances, we will get back a response. ```sh # exec'd on the pod -/ $ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local +/ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local " @@ -217,10 +217,10 @@ Now, let's do the same, but this time from the `east` cluster. We will first export the service. ```sh -$ kubectl --context=k3d-west label service nginx-svc mirror.linkerd.io/exported="true" +kubectl --context=k3d-west label service nginx-svc mirror.linkerd.io/exported="true" service/nginx-svc labeled -$ kubectl --context=k3d-east get services +kubectl --context=k3d-east get services NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE kubernetes ClusterIP 10.43.0.1 443/TCP 20h nginx-svc-west ClusterIP None 80/TCP 29s @@ -234,7 +234,7 @@ endpoints for `nginx-svc-west` will have the same hostnames, but each hostname will point to one of the services we see above: ```sh -$ kubectl --context=k3d-east get endpoints nginx-svc-k3d-west -o yaml +kubectl --context=k3d-east get endpoints nginx-svc-k3d-west -o yaml subsets: - addresses: - hostname: nginx-set-0 @@ -250,17 +250,17 @@ cluster (`west`), will be mirrored as a clusterIP service. We will see in a second why this matters. 
```sh -$ kubectl --context=k3d-east get pods +kubectl --context=k3d-east get pods NAME READY STATUS RESTARTS AGE curl-56dc7d945d-96r6p 2/2 Running 0 23m # exec and curl -$ kubectl --context=k3d-east exec curl-56dc7d945d-96r6p -it -c curl -- sh +kubectl --context=k3d-east exec curl-56dc7d945d-96r6p -it -c curl -- sh # we want to curl the same hostname we see in the endpoints object above. # however, the service and cluster domain will now be different, since we # are in a different cluster. # -/ $ curl nginx-set-0.nginx-svc-k3d-west.default.svc.east.cluster.local +/ curl nginx-set-0.nginx-svc-k3d-west.default.svc.east.cluster.local @@ -328,8 +328,8 @@ validation. To clean-up, you can remove both clusters entirely using the k3d CLI: ```sh -$ k3d cluster delete east +k3d cluster delete east cluster east deleted -$ k3d cluster delete west +k3d cluster delete west cluster west deleted ``` diff --git a/linkerd.io/content/2.19/tasks/multicluster.md b/linkerd.io/content/2.19/tasks/multicluster.md index 3a80b3f3ed..2779b7616a 100644 --- a/linkerd.io/content/2.19/tasks/multicluster.md +++ b/linkerd.io/content/2.19/tasks/multicluster.md @@ -506,9 +506,9 @@ To cleanup the multicluster control plane, you can run: ```bash # Delete the link CR -$ kubectl --context=west -n linkerd-multicluster delete links east +kubectl --context=west -n linkerd-multicluster delete links east # Delete the test namespace and uninstall multicluster -$ for ctx in west east; do \ +for ctx in west east; do \ kubectl --context=${ctx} delete ns test; \ linkerd --context=${ctx} multicluster uninstall | kubectl --context=${ctx} delete -f - ; \ done diff --git a/linkerd.io/content/2.19/tasks/restricting-access.md b/linkerd.io/content/2.19/tasks/restricting-access.md index 5654518600..c9850725f7 100644 --- a/linkerd.io/content/2.19/tasks/restricting-access.md +++ b/linkerd.io/content/2.19/tasks/restricting-access.md @@ -21,9 +21,9 @@ haven't already done this. Inject and install the Emojivoto application: ```bash -$ linkerd inject https://run.linkerd.io/emojivoto.yml | kubectl apply -f - +linkerd inject https://run.linkerd.io/emojivoto.yml | kubectl apply -f - ... -$ linkerd check -n emojivoto --proxy -o short +linkerd check -n emojivoto --proxy -o short ... ``` diff --git a/linkerd.io/content/2.19/tasks/securing-linkerd-tap.md b/linkerd.io/content/2.19/tasks/securing-linkerd-tap.md index 8a802c890c..639f81692f 100644 --- a/linkerd.io/content/2.19/tasks/securing-linkerd-tap.md +++ b/linkerd.io/content/2.19/tasks/securing-linkerd-tap.md @@ -60,7 +60,7 @@ kubectl auth can-i watch deployments.tap.linkerd.io -n emojivoto --as $(whoami) You can also use the Linkerd CLI's `--as` flag to confirm: ```bash -$ linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) +linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) Cannot connect to Linkerd Viz: namespaces is forbidden: User "XXXX" cannot list resource "namespaces" in API group "" at the cluster scope Validate the install with: linkerd viz check ... 
@@ -77,7 +77,7 @@ To enable tap access to all resources in all namespaces, you may bind your user to the `linkerd-linkerd-tap-admin` ClusterRole, installed by default: ```bash -$ kubectl describe clusterroles/linkerd-linkerd-viz-tap-admin +kubectl describe clusterroles/linkerd-linkerd-viz-tap-admin Name: linkerd-linkerd-viz-tap-admin Labels: component=tap linkerd.io/extension=viz @@ -109,7 +109,7 @@ kubectl create clusterrolebinding \ You can verify you now have tap access with: ```bash -$ linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) +linkerd viz tap -n linkerd deploy/linkerd-controller --as $(whoami) req id=3:0 proxy=in src=10.244.0.1:37392 dst=10.244.0.13:9996 tls=not_provided_by_remote :method=GET :authority=10.244.0.13:9996 :path=/ping ... ``` @@ -143,14 +143,14 @@ Because GCloud provides this additional level of access, there are cases where not. To validate this, check whether your GCloud user has Tap access: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces yes ``` And then validate whether your RBAC user has Tap access: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as $(gcloud config get-value account) +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as $(gcloud config get-value account) no - no RBAC policy matched ``` @@ -187,14 +187,14 @@ privileges necessary to tap resources. To confirm: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web yes ``` This access is enabled via a `linkerd-linkerd-viz-web-admin` ClusterRoleBinding: ```bash -$ kubectl describe clusterrolebindings/linkerd-linkerd-viz-web-admin +kubectl describe clusterrolebindings/linkerd-linkerd-viz-web-admin Name: linkerd-linkerd-viz-web-admin Labels: component=web linkerd.io/extensions=viz @@ -227,6 +227,6 @@ kubectl delete clusterrolebindings/linkerd-linkerd-viz-web-admin To confirm: ```bash -$ kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web +kubectl auth can-i watch pods.tap.linkerd.io --all-namespaces --as system:serviceaccount:linkerd-viz:web no ``` diff --git a/linkerd.io/content/2.19/tasks/troubleshooting.md b/linkerd.io/content/2.19/tasks/troubleshooting.md index baaa71e206..c65e8fb63c 100644 --- a/linkerd.io/content/2.19/tasks/troubleshooting.md +++ b/linkerd.io/content/2.19/tasks/troubleshooting.md @@ -230,7 +230,7 @@ Example failure: Ensure the Linkerd ClusterRoles exist: ```bash -$ kubectl get clusterroles | grep linkerd +kubectl get clusterroles | grep linkerd linkerd-linkerd-destination 9d linkerd-linkerd-identity 9d linkerd-linkerd-proxy-injector 9d @@ -240,7 +240,7 @@ linkerd-policy 9d Also ensure you have permission to create ClusterRoles: ```bash -$ kubectl auth can-i create clusterroles +kubectl auth can-i create clusterroles yes ``` @@ -257,7 +257,7 @@ Example failure: Ensure the Linkerd ClusterRoleBindings exist: ```bash -$ kubectl get clusterrolebindings | grep linkerd +kubectl get clusterrolebindings | grep linkerd linkerd-linkerd-destination 9d linkerd-linkerd-identity 9d linkerd-linkerd-proxy-injector 9d @@ -267,7 +267,7 @@ linkerd-destination-policy 9d Also ensure you have permission to create ClusterRoleBindings: ```bash -$ kubectl auth can-i create clusterrolebindings +kubectl auth can-i create clusterrolebindings 
yes
```
@@ -284,7 +284,7 @@ Example failure:
Ensure the Linkerd ServiceAccounts exist:

```bash
-$ kubectl -n linkerd get serviceaccounts
+kubectl -n linkerd get serviceaccounts
NAME                  SECRETS   AGE
default               1         14m
linkerd-destination   1         14m
@@ -297,7 +297,7 @@ Also ensure you have permission to create ServiceAccounts in the Linkerd
namespace:

```bash
-$ kubectl -n linkerd auth can-i create serviceaccounts
+kubectl -n linkerd auth can-i create serviceaccounts
yes
```
@@ -314,7 +314,7 @@ Example failure:
Ensure the Linkerd CRD exists:

```bash
-$ kubectl get customresourcedefinitions
+kubectl get customresourcedefinitions
NAME                         CREATED AT
serviceprofiles.linkerd.io   2019-04-25T21:47:31Z
```
@@ -322,7 +322,7 @@ serviceprofiles.linkerd.io 2019-04-25T21:47:31Z
Also ensure you have permission to create CRDs:

```bash
-$ kubectl auth can-i create customresourcedefinitions
+kubectl auth can-i create customresourcedefinitions
yes
```
@@ -339,14 +339,14 @@ Example failure:
Ensure the Linkerd MutatingWebhookConfigurations exists:

```bash
-$ kubectl get mutatingwebhookconfigurations | grep linkerd
+kubectl get mutatingwebhookconfigurations | grep linkerd
linkerd-proxy-injector-webhook-config   2019-07-01T13:13:26Z
```

Also ensure you have permission to create MutatingWebhookConfigurations:

```bash
-$ kubectl auth can-i create mutatingwebhookconfigurations
+kubectl auth can-i create mutatingwebhookconfigurations
yes
```
@@ -363,14 +363,14 @@ Example failure:
Ensure the Linkerd ValidatingWebhookConfiguration exists:

```bash
-$ kubectl get validatingwebhookconfigurations | grep linkerd
+kubectl get validatingwebhookconfigurations | grep linkerd
linkerd-sp-validator-webhook-config   2019-07-01T13:13:26Z
```

Also ensure you have permission to create ValidatingWebhookConfigurations:

```bash
-$ kubectl auth can-i create validatingwebhookconfigurations
+kubectl auth can-i create validatingwebhookconfigurations
yes
```
@@ -418,7 +418,7 @@ Example failure:
Ensure the Linkerd ConfigMap exists:

```bash
-$ kubectl -n linkerd get configmap/linkerd-config
+kubectl -n linkerd get configmap/linkerd-config
NAME             DATA   AGE
linkerd-config   3      61m
```
@@ -426,7 +426,7 @@ linkerd-config 3 61m
Also ensure you have permission to create ConfigMaps:

```bash
-$ kubectl -n linkerd auth can-i create configmap
+kubectl -n linkerd auth can-i create configmap
yes
```
@@ -780,7 +780,7 @@ Example failure:
Verify the state of the control plane pods with:

```bash
-$ kubectl -n linkerd get po
+kubectl -n linkerd get po
NAME                                   READY   STATUS    RESTARTS   AGE
linkerd-destination-5fd7b5d466-szgqm   2/2     Running   1          12m
linkerd-identity-54df78c479-hbh5m      2/2     Running   0          12m
@@ -862,7 +862,7 @@ Ensure you can connect to the Linkerd version check endpoint from
the environment the `linkerd` cli is running:

```bash
-$ curl "https://versioncheck.linkerd.io/version.json?version=edge-19.1.2&uuid=test-uuid&source=cli"
+curl "https://versioncheck.linkerd.io/version.json?version=edge-19.1.2&uuid=test-uuid&source=cli"
{"stable":"stable-2.1.0","edge":"edge-19.1.2"}
```
@@ -961,7 +961,7 @@ normally.
Example failure:

```bash
-$ linkerd check --proxy --namespace foo
+linkerd check --proxy --namespace foo
...
× data plane namespace exists
    The "foo" namespace does not exist
@@ -1147,7 +1147,7 @@ Example error:
Ensure that the linkerd-cni-config ConfigMap exists in the CNI namespace:

```bash
-$ kubectl get cm linkerd-cni-config -n linkerd-cni
+kubectl get cm linkerd-cni-config -n linkerd-cni
NAME                      PRIV    CAPS   SELINUX    RUNASUSER   FSGROUP    SUPGROUP   READONLYROOTFS   VOLUMES
linkerd-linkerd-cni-cni   false   RunAsAny   RunAsAny   RunAsAny   RunAsAny   false   hostPath,secret
```
@@ -1155,7 +1155,7 @@ linkerd-linkerd-cni-cni false RunAsAny RunAsAny RunAsAny RunAs
Also ensure you have permission to create ConfigMaps:

```bash
-$ kubectl auth can-i create ConfigMaps
+kubectl auth can-i create ConfigMaps
yes
```
@@ -1172,7 +1172,7 @@ Example error:
Ensure that the cluster role exists:

```bash
-$ kubectl get clusterrole linkerd-cni
+kubectl get clusterrole linkerd-cni
NAME          AGE
linkerd-cni   54m
```
@@ -1180,7 +1180,7 @@ linkerd-cni 54m
Also ensure you have permission to create ClusterRoles:

```bash
-$ kubectl auth can-i create ClusterRoles
+kubectl auth can-i create ClusterRoles
yes
```
@@ -1197,7 +1197,7 @@ Example error:
Ensure that the cluster role binding exists:

```bash
-$ kubectl get clusterrolebinding linkerd-cni
+kubectl get clusterrolebinding linkerd-cni
NAME          AGE
linkerd-cni   54m
```
@@ -1205,7 +1205,7 @@ linkerd-cni 54m
Also ensure you have permission to create ClusterRoleBindings:

```bash
-$ kubectl auth can-i create ClusterRoleBindings
+kubectl auth can-i create ClusterRoleBindings
yes
```
@@ -1222,7 +1222,7 @@ Example error:
Ensure that the CNI service account exists in the CNI namespace:

```bash
-$ kubectl get ServiceAccount linkerd-cni -n linkerd-cni
+kubectl get ServiceAccount linkerd-cni -n linkerd-cni
NAME          SECRETS   AGE
linkerd-cni   1         45m
```
@@ -1230,7 +1230,7 @@ linkerd-cni 1 45m
Also ensure you have permission to create ServiceAccount:

```bash
-$ kubectl auth can-i create ServiceAccounts -n linkerd-cni
+kubectl auth can-i create ServiceAccounts -n linkerd-cni
yes
```
@@ -1247,7 +1247,7 @@ Example error:
Ensure that the CNI daemonset exists in the CNI namespace:

```bash
-$ kubectl get ds -n linkerd-cni
+kubectl get ds -n linkerd-cni
NAME          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR                 AGE
linkerd-cni   1         1         1       1            1           beta.kubernetes.io/os=linux   14m
```
@@ -1255,7 +1255,7 @@ linkerd-cni 1 1 1 1 1 beta.kubernet
Also ensure you have permission to create DaemonSets:

```bash
-$ kubectl auth can-i create DaemonSets -n linkerd-cni
+kubectl auth can-i create DaemonSets -n linkerd-cni
yes
```
@@ -1272,7 +1272,7 @@ Example failure:
Ensure that all the CNI pods are running:

```bash
-$ kubectl get po -n linkerd-cn
+kubectl get po -n linkerd-cni
NAME                READY   STATUS    RESTARTS   AGE
linkerd-cni-rzp2q   1/1     Running   0          9m20s
linkerd-cni-mf564   1/1     Running   0          9m22s
@@ -1282,7 +1282,7 @@ linkerd-cni-p5670 1/1 Running 0 9m25s
Ensure that all pods have finished the deployment of the CNI config and binary:

```bash
-$ kubectl logs linkerd-cni-rzp2q -n linkerd-cni
+kubectl logs linkerd-cni-rzp2q -n linkerd-cni
Wrote linkerd CNI binaries to /host/opt/cni/bin
Created CNI config /host/etc/cni/net.d/10-kindnet.conflist
Done configuring CNI. Sleep=true
@@ -1310,7 +1310,7 @@ Make sure multicluster extension is correctly installed and that the
`links.multicluster.linkerd.io` CRD is present.

```bash
-$ kubectl get crds | grep multicluster
+kubectl get crds | grep multicluster
NAME                            CREATED AT
links.multicluster.linkerd.io   2021-03-10T09:58:10Z
```
@@ -1400,7 +1400,7 @@ the rules section.
Expected rules for `linkerd-service-mirror-access-local-resources` cluster role:

```bash
-$ kubectl --context=local get clusterrole linkerd-service-mirror-access-local-resources -o yaml
+kubectl --context=local get clusterrole linkerd-service-mirror-access-local-resources -o yaml
kind: ClusterRole
metadata:
  labels:
@@ -1433,7 +1433,7 @@ rules:
Expected rules for `linkerd-service-mirror-read-remote-creds` role:

```bash
-$ kubectl --context=local get role linkerd-service-mirror-read-remote-creds -n linkerd-multicluster -o yaml
+kubectl --context=local get role linkerd-service-mirror-read-remote-creds -n linkerd-multicluster -o yaml
kind: Role
metadata:
  labels:
@@ -1466,7 +1466,7 @@ everything to start up.
If this is a permanent error, you'll want to validate the state of the
controller pod with:

```bash
-$ kubectl --all-namespaces get po --selector linkerd.io/control-plane-component=linkerd-service-mirror
+kubectl --all-namespaces get po --selector linkerd.io/control-plane-component=linkerd-service-mirror
NAME                                      READY   STATUS    RESTARTS   AGE
linkerd-service-mirror-7bb8ff5967-zg265   2/2     Running   0          50m
```
@@ -1612,7 +1612,7 @@ Example failure:
Ensure the linkerd-viz extension ClusterRoles exist:

```bash
-$ kubectl get clusterroles | grep linkerd-viz
+kubectl get clusterroles | grep linkerd-viz
linkerd-linkerd-viz-metrics-api   2021-01-26T18:02:17Z
linkerd-linkerd-viz-prometheus    2021-01-26T18:02:17Z
linkerd-linkerd-viz-tap           2021-01-26T18:02:17Z
@@ -1623,7 +1623,7 @@ linkerd-linkerd-viz-web-check 2021-01-2
Also ensure you have permission to create ClusterRoles:

```bash
-$ kubectl auth can-i create clusterroles
+kubectl auth can-i create clusterroles
yes
```
@@ -1640,7 +1640,7 @@ Example failure:
Ensure the linkerd-viz extension ClusterRoleBindings exist:

```bash
-$ kubectl get clusterrolebindings | grep linkerd-viz
+kubectl get clusterrolebindings | grep linkerd-viz
linkerd-linkerd-viz-metrics-api   ClusterRole/linkerd-linkerd-viz-metrics-api   18h
linkerd-linkerd-viz-prometheus    ClusterRole/linkerd-linkerd-viz-prometheus    18h
linkerd-linkerd-viz-tap           ClusterRole/linkerd-linkerd-viz-tap           18h
@@ -1652,7 +1652,7 @@ linkerd-linkerd-viz-web-check ClusterRole/linkerd-linke
Also ensure you have permission to create ClusterRoleBindings:

```bash
-$ kubectl auth can-i create clusterrolebindings
+kubectl auth can-i create clusterrolebindings
yes
```
@@ -1741,7 +1741,7 @@ requirements in the cluster:
Ensure all the linkerd-viz pods are injected

```bash
-$ kubectl -n linkerd-viz get pods
+kubectl -n linkerd-viz get pods
NAME                           READY   STATUS    RESTARTS   AGE
grafana-68cddd7cc8-nrv4h       2/2     Running   3          18h
metrics-api-77f684f7c7-hnw8r   2/2     Running   2          18h
@@ -1765,7 +1765,7 @@ Make sure that the `proxy-injector` is working correctly by running
Ensure all the linkerd-viz pods are running with 2/2

```bash
-$ kubectl -n linkerd-viz get pods
+kubectl -n linkerd-viz get pods
NAME                           READY   STATUS    RESTARTS   AGE
grafana-68cddd7cc8-nrv4h       2/2     Running   3          18h
metrics-api-77f684f7c7-hnw8r   2/2     Running   2          18h
@@ -1936,7 +1936,7 @@ Ensure you can connect to the Linkerd Buoyant version check endpoint from
the environment the `linkerd` cli is running:

```bash
-$ curl https://buoyant.cloud/version.json
+curl https://buoyant.cloud/version.json
{"linkerd-buoyant":"v0.4.4"}
```
@@ -2001,7 +2001,7 @@ linkerd-buoyant install | kubectl apply -f -
Ensure that the cluster role exists:

```bash
-$ kubectl get clusterrole buoyant-cloud-agent
+kubectl get clusterrole buoyant-cloud-agent
NAME                  CREATED AT
buoyant-cloud-agent   2020-11-13T00:59:50Z
```
@@ -2009,7 +2009,7 @@ buoyant-cloud-agent 2020-11-13T00:59:50Z
Also ensure you have permission to create ClusterRoles:

```bash
-$ kubectl auth can-i create ClusterRoles
+kubectl auth can-i create ClusterRoles
yes
```
@@ -2024,7 +2024,7 @@ yes
Ensure that the cluster role binding exists:

```bash
-$ kubectl get clusterrolebinding buoyant-cloud-agent
+kubectl get clusterrolebinding buoyant-cloud-agent
NAME                  ROLE                              AGE
buoyant-cloud-agent   ClusterRole/buoyant-cloud-agent   301d
```
@@ -2032,7 +2032,7 @@ buoyant-cloud-agent ClusterRole/buoyant-cloud-agent 301d
Also ensure you have permission to create ClusterRoleBindings:

```bash
-$ kubectl auth can-i create ClusterRoleBindings
+kubectl auth can-i create ClusterRoleBindings
yes
```
@@ -2047,7 +2047,7 @@ yes
Ensure that the service account exists:

```bash
-$ kubectl -n buoyant-cloud get serviceaccount buoyant-cloud-agent
+kubectl -n buoyant-cloud get serviceaccount buoyant-cloud-agent
NAME                  SECRETS   AGE
buoyant-cloud-agent   1         301d
```
@@ -2055,7 +2055,7 @@ buoyant-cloud-agent 1 301d
Also ensure you have permission to create ServiceAccounts:

```bash
-$ kubectl -n buoyant-cloud auth can-i create ServiceAccount
+kubectl -n buoyant-cloud auth can-i create ServiceAccount
yes
```
@@ -2070,7 +2070,7 @@ yes
Ensure that the secret exists:

```bash
-$ kubectl -n buoyant-cloud get secret buoyant-cloud-id
+kubectl -n buoyant-cloud get secret buoyant-cloud-id
NAME               TYPE     DATA   AGE
buoyant-cloud-id   Opaque   4      301d
```
@@ -2078,7 +2078,7 @@ buoyant-cloud-id Opaque 4 301d
Also ensure you have permission to create ServiceAccounts:

```bash
-$ kubectl -n buoyant-cloud auth can-i create ServiceAccount
+kubectl -n buoyant-cloud auth can-i create ServiceAccount
yes
```
@@ -2116,7 +2116,7 @@ everything to start up.
If this is a permanent error, you'll want to validate the state of the
`buoyant-cloud-agent` Deployment with:

```bash
-$ kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-agent
+kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-agent
NAME                                   READY   STATUS    RESTARTS   AGE
buoyant-cloud-agent-6b8c6888d7-htr7d   2/2     Running   0          156m
```
@@ -2139,7 +2139,7 @@ Ensure the `buoyant-cloud-agent` pod is injected, the `READY` column should
show `2/2`:

```bash
-$ kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-agent
+kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-agent
NAME                                   READY   STATUS    RESTARTS   AGE
buoyant-cloud-agent-6b8c6888d7-htr7d   2/2     Running   0          161m
```
@@ -2158,7 +2158,7 @@ Make sure that the `proxy-injector` is working correctly by running
Check the version with:

```bash
-$ linkerd-buoyant version
+linkerd-buoyant version
CLI version: v0.4.4
Agent version: v0.4.4
```
@@ -2217,7 +2217,7 @@ everything to start up.
If this is a permanent error, you'll want to validate the state of the
`buoyant-cloud-metrics` DaemonSet with:

```bash
-$ kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-metrics
+kubectl -n buoyant-cloud get po --selector app=buoyant-cloud-metrics
NAME                          READY   STATUS    RESTARTS   AGE
buoyant-cloud-metrics-kt9mv   2/2     Running   0          163m
buoyant-cloud-metrics-q8jhj   2/2     Running   0          163m
@@ -2243,7 +2243,7 @@ Ensure the `buoyant-cloud-metrics` pods are injected, the `READY` column should
show `2/2`:

```bash
-$ kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-metrics
+kubectl -n buoyant-cloud get pods --selector app=buoyant-cloud-metrics
NAME                          READY   STATUS    RESTARTS   AGE
buoyant-cloud-metrics-kt9mv   2/2     Running   0          166m
buoyant-cloud-metrics-q8jhj   2/2     Running   0          166m
@@ -2265,7 +2265,7 @@ Make sure that the `proxy-injector` is working correctly by running
Check the version with:

```bash
-$ kubectl -n buoyant-cloud get daemonset/buoyant-cloud-metrics -o jsonpath='{.metadata.labels}'
+kubectl -n buoyant-cloud get daemonset/buoyant-cloud-metrics -o jsonpath='{.metadata.labels}'
{"app.kubernetes.io/name":"metrics","app.kubernetes.io/part-of":"buoyant-cloud","app.kubernetes.io/version":"v0.4.4"}
```
diff --git a/linkerd.io/content/blog/2016/1210-slow-cooker-load-testing-for-tough-software/index.md b/linkerd.io/content/blog/2016/1210-slow-cooker-load-testing-for-tough-software/index.md
index e75e31a998..59909fb59f 100644
--- a/linkerd.io/content/blog/2016/1210-slow-cooker-load-testing-for-tough-software/index.md
+++ b/linkerd.io/content/blog/2016/1210-slow-cooker-load-testing-for-tough-software/index.md
@@ -91,7 +91,7 @@ static content. The latencies given are in milliseconds, and we report the min,
p50, p95, p99, p999, and max latencies seen during this 10 second interval.

```txt
-$ ./slow_cooker_linux_amd64 -url http://target:4140 -qps 50 -concurrency 10 http://perf-target-2:8080
+./slow_cooker_linux_amd64 -url http://target:4140 -qps 50 -concurrency 10 http://perf-target-2:8080
# sending 500 req/s with concurrency=10 to http://perf-target-2:8080 ...
# good/b/f t good% min [p50 p95 p99 p999] max change
2016-10-12T20:34:20Z 4990/0/0 5000 99% 10s 0 [ 1 3 4 9 ] 9
@@ -120,7 +120,7 @@ latency. In the example below, we have a backend server suffering from a
catastrophic slow down:

```txt
-$ ./slow_cooker_linux_amd64 -totalRequests 100000 -qps 5 -concurrency 100 http://perf-target-1:8080
+./slow_cooker_linux_amd64 -totalRequests 100000 -qps 5 -concurrency 100 http://perf-target-1:8080
# sending 500 req/s with concurrency=10 to http://perf-target-2:8080 ...
# good/b/f t good% min [p50 p95 p99 p999] max change
2016-11-14T20:58:13Z 4900/0/0 5000 98% 10s 0 [ 1 2 6 8 ] 8 +
@@ -165,7 +165,7 @@ For comparison, let’s start with a
[ApacheBench](http://httpd.apache.org/docs/2.4/programs/ab.html)’s report:

```txt
-$ ab -n 100000 -c 10 http://perf-target-1:8080/
+ab -n 100000 -c 10 http://perf-target-1:8080/
This is ApacheBench, Version 2.3
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
@@ -232,7 +232,7 @@ becomes much more clear that the 99.9th percentile is consistently high; this
is not just a few outliers, but a persistent and ongoing problem:

```txt
-$ ./slow_cooker_linux_amd64 -totalRequests 20000 -qps 50 -concurrency 10 http://perf-target-2:8080
+./slow_cooker_linux_amd64 -totalRequests 20000 -qps 50 -concurrency 10 http://perf-target-2:8080
# sending 500 req/s with concurrency=10 to http://perf-target-2:8080 ...
# good/b/f t good% min [p50 p95 p99 p999] max change
2016-12-07T19:05:37Z 2510/0/0 5000 50% 10s 0 [ 0 0 2 4995 ] 4994 +
diff --git a/linkerd.io/content/blog/2018/1208-service-profiles-for-per-route-metrics/index.md b/linkerd.io/content/blog/2018/1208-service-profiles-for-per-route-metrics/index.md
index b54e698c64..8936d6c39f 100644
--- a/linkerd.io/content/blog/2018/1208-service-profiles-for-per-route-metrics/index.md
+++ b/linkerd.io/content/blog/2018/1208-service-profiles-for-per-route-metrics/index.md
@@ -150,7 +150,7 @@ service—but we can't, because we haven't defined any routes for that service
yet!

```bash
-$ linkerd routes svc/webapp
+linkerd routes svc/webapp
ROUTE       SERVICE   SUCCESS   RPS      LATENCY_P50   LATENCY_P95   LATENCY_P99
[UNKNOWN]   webapp    70.00%    5.7rps   34ms          100ms         269ms
```
@@ -188,13 +188,13 @@ spec:
This service describes two routes that the webapp service responds to, `/books`
and `/books/`. We add the service profile with `kubectl apply`:

-`$ kubectl apply -f webapp-profile.yaml`
+`kubectl apply -f webapp-profile.yaml`

Within about a minute (Prometheus scrapes metrics from the proxies at regular
intervals) per-route metrics will be available for the `webapp` service.

```bash
-$ linkerd routes svc/webapp
+linkerd routes svc/webapp
ROUTE         SERVICE   SUCCESS   RPS      LATENCY_P50   LATENCY_P95   LATENCY_P99
/books/{id}   webapp    100.00%   0.3rps   26ms          75ms          95ms
/books        webapp    56.25%    0.5rps   25ms          320ms         384ms
diff --git a/linkerd.io/content/blog/2019/0222-how-we-designed-retries-in-linkerd-2-2/index.md b/linkerd.io/content/blog/2019/0222-how-we-designed-retries-in-linkerd-2-2/index.md
index 3e672ef738..01373a9ecc 100644
--- a/linkerd.io/content/blog/2019/0222-how-we-designed-retries-in-linkerd-2-2/index.md
+++ b/linkerd.io/content/blog/2019/0222-how-we-designed-retries-in-linkerd-2-2/index.md
@@ -164,7 +164,7 @@ One thing that we can notice about this application is that the success rate of
requests from the books service to the authors service is very poor:

```bash
-$ linkerd routes deploy/books --to svc/authors
+linkerd routes deploy/books --to svc/authors
ROUTE       SERVICE   SUCCESS   RPS      LATENCY_P50   LATENCY_P95   LATENCY_P99
[DEFAULT]   authors   54.24%    3.9rps   5ms           14ms          19ms
```
@@ -173,8 +173,8 @@ To get a better picture of what’s going on here, let’s add a service profile
the authors service, generated from a Swagger definition:

```bash
-$ curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/booksapp/authors.swagger | linkerd profile --open-api - authors | kubectl apply -f -
-$ linkerd routes deploy/books --to svc/authors
+curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/booksapp/authors.swagger | linkerd profile --open-api - authors | kubectl apply -f -
+linkerd routes deploy/books --to svc/authors
ROUTE                       SERVICE   SUCCESS   RPS      LATENCY_P50   LATENCY_P95   LATENCY_P99
DELETE /authors/{id}.json   authors   0.00%     0.0rps   0ms           0ms           0ms
GET /authors.json           authors   0.00%     0.0rps   0ms           0ms           0ms
@@ -190,7 +190,7 @@ time. To correct this, let’s edit the authors service profile and make those
requests retryable:

```bash
-$ kubectl edit sp/authors.default.svc.cluster.local
+kubectl edit sp/authors.default.svc.cluster.local
[...]
  - condition:
      method: HEAD
@@ -203,7 +203,7 @@ After editing the service profile, we see a nearly immediate improvement in
success rate:

```bash
-$ linkerd routes deploy/books --to svc/authors -o wide
+linkerd routes deploy/books --to svc/authors -o wide
ROUTE                       SERVICE   EFFECTIVE_SUCCESS   EFFECTIVE_RPS   ACTUAL_SUCCESS   ACTUAL_RPS   LATENCY_P50   LATENCY_P95   LATENCY_P99
DELETE /authors/{id}.json   authors   0.00%               0.0rps          0.00%            0.0rps       0ms           0ms           0ms
GET /authors.json           authors   0.00%               0.0rps          0.00%            0.0rps       0ms           0ms           0ms
@@ -221,7 +221,7 @@ the purposes of this demo, I’ll set a timeout of 25ms. Your results will vary
depending on the characteristics of your system.

```bash
-$ kubectl edit sp/authors.default.svc.cluster.local
+kubectl edit sp/authors.default.svc.cluster.local
[...]
  - condition:
      method: HEAD
@@ -235,7 +235,7 @@ We now see that success rate has come down slightly because some requests are
timing out, but that the tail latency has been greatly reduced:

```bash
-$ linkerd routes deploy/books --to svc/authors -o wide
+linkerd routes deploy/books --to svc/authors -o wide
ROUTE                       SERVICE   EFFECTIVE_SUCCESS   EFFECTIVE_RPS   ACTUAL_SUCCESS   ACTUAL_RPS   LATENCY_P50   LATENCY_P95   LATENCY_P99
DELETE /authors/{id}.json   authors   0.00%               0.0rps          0.00%            0.0rps       0ms           0ms           0ms
GET /authors.json           authors   0.00%               0.0rps          0.00%            0.0rps       0ms           0ms           0ms
diff --git a/linkerd.io/content/blog/2019/1007-linkerd-distributed-tracing/index.md b/linkerd.io/content/blog/2019/1007-linkerd-distributed-tracing/index.md
index 790268900d..261bb1dbd5 100644
--- a/linkerd.io/content/blog/2019/1007-linkerd-distributed-tracing/index.md
+++ b/linkerd.io/content/blog/2019/1007-linkerd-distributed-tracing/index.md
@@ -82,7 +82,7 @@ on your cluster. If you don't, you can follow the instructions.

```bash
-$ linkerd version
+linkerd version
Client version: stable-2.6
Server version: stable-2.6
```
diff --git a/linkerd.io/content/blog/2024/1015-edge-release-roundup/index.md b/linkerd.io/content/blog/2024/1015-edge-release-roundup/index.md
index 8f61008f3b..66c8b2094c 100644
--- a/linkerd.io/content/blog/2024/1015-edge-release-roundup/index.md
+++ b/linkerd.io/content/blog/2024/1015-edge-release-roundup/index.md
@@ -120,7 +120,7 @@ command line to the new metrics available based on Gateway API routes, for
example:

```bash {class=disable-copy}
-$ linkerd viz stat-outbound -n faces deploy/face
+linkerd viz stat-outbound -n faces deploy/face
NAME   SERVICE     ROUTE          TYPE        BACKEND     SUCCESS   RPS    LATENCY_P50   LATENCY_P95   LATENCY_P99   TIMEOUTS   RETRIES
face   smiley:80   smiley-route   HTTPRoute               78.36%    6.32   41ms          5886ms        9177ms        0.00%      0.00%
├─────────────────────►                       smiley:80   79.34%    5.57   20ms          5725ms        9145ms        0.00%
diff --git a/linkerd.io/content/blog/2025/0725-tilt-linkerd-nginx-part-2/index.md b/linkerd.io/content/blog/2025/0725-tilt-linkerd-nginx-part-2/index.md
index dc79544aa8..44f4e39053 100644
--- a/linkerd.io/content/blog/2025/0725-tilt-linkerd-nginx-part-2/index.md
+++ b/linkerd.io/content/blog/2025/0725-tilt-linkerd-nginx-part-2/index.md
@@ -199,7 +199,7 @@ While the dashboard provides intuitive visualizations, the Linkerd CLI offers
the same data in a terminal-friendly format for quick diagnostics:

```bash
-$ linkerd viz top deployment/baz
+linkerd viz top deployment/baz
Source                 Destination            Method   Path                Count   Best   Worst   Last    Success Rate
foo-64798767b7-x8xvf   baz-659dbf6895-v7gdm   POST     /demo.Baz/GetInfo   1187    81µs   9ms     124µs   100.00%
bar-577c4bf849-cpdxl   baz-659dbf6895-9twg9   POST     /demo.Baz/GetInfo   1103    86µs   6ms     140µs   100.00%