2 changes: 1 addition & 1 deletion linkerd.io/content/2-edge/reference/cli/check.md
@@ -12,7 +12,7 @@ for a full list of all the possible checks, what they do and how to fix them.
## Example output

```bash
-$ linkerd check
+linkerd check
kubernetes-api
--------------
√ can initialize the client
2 changes: 1 addition & 1 deletion linkerd.io/content/2-edge/reference/iptables.md
@@ -164,7 +164,7 @@ Alternatively, if you want to inspect the iptables rules created for a pod, you
can retrieve them through the following command:

```bash
-$ kubectl -n <namespace> logs <pod-name> linkerd-init
+kubectl -n <namespace> logs <pod-name> linkerd-init
# where <pod-name> is the name of the pod
# you want to see the iptables rules for
```
@@ -67,7 +67,7 @@ Requests to `/echo` on port 9898 to the frontend pod will get forwarded to the pod
pointed to by the Service `backend-a-podinfo`:

```bash
-$ curl -sX POST localhost:9898/echo \
+curl -sX POST localhost:9898/echo \
| grep -o 'PODINFO_UI_MESSAGE=. backend'

PODINFO_UI_MESSAGE=A backend
@@ -132,7 +132,7 @@ the `backend-a-podinfo` Service.
The previous requests should still reach `backend-a-podinfo` only:

```bash
-$ curl -sX POST localhost:9898/echo \
+curl -sX POST localhost:9898/echo \
| grep -o 'PODINFO_UI_MESSAGE=. backend'

PODINFO_UI_MESSAGE=A backend
@@ -142,7 +142,7 @@ But if we add the `x-request-id: alternative` header, they get routed to
`backend-b-podinfo`:

```bash
-$ curl -sX POST \
+curl -sX POST \
-H 'x-request-id: alternative' \
localhost:9898/echo \
| grep -o 'PODINFO_UI_MESSAGE=. backend'
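
Header-based routing like this is expressed with an HTTPRoute attached to the `backend-a-podinfo` Service, which the task defines in a portion not shown in this diff. A minimal sketch of such a route follows; the route name, API version, and the exact rule layout are assumptions for illustration, not taken from this page:

```bash
# Hypothetical sketch: send requests carrying `x-request-id: alternative` to
# backend-b-podinfo and everything else to backend-a-podinfo. Apply it in the
# namespace where the podinfo Services live; the route name is illustrative.
cat <<EOF | kubectl apply -f -
apiVersion: policy.linkerd.io/v1beta3
kind: HTTPRoute
metadata:
  name: backend-router
spec:
  parentRefs:
    - name: backend-a-podinfo
      kind: Service
      group: core
      port: 9898
  rules:
    - matches:
        - headers:
            - name: x-request-id
              value: alternative
      backendRefs:
        - name: backend-b-podinfo
          port: 9898
    - backendRefs:
        - name: backend-a-podinfo
          port: 9898
EOF
```
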
18 changes: 9 additions & 9 deletions linkerd.io/content/2-edge/tasks/configuring-per-route-policy.md
@@ -30,7 +30,7 @@ haven't already done this.
Inject and install the Books demo application:

```bash
-$ kubectl create ns booksapp && \
+kubectl create ns booksapp && \
curl --proto '=https' --tlsv1.2 -sSfL https://run.linkerd.io/booksapp.yml \
| linkerd inject - \
| kubectl -n booksapp apply -f -
@@ -44,21 +44,21 @@ run in the `booksapp` namespace.
Confirm that the Linkerd data plane was injected successfully:

```bash
-$ linkerd check -n booksapp --proxy -o short
+linkerd check -n booksapp --proxy -o short
```

You can take a quick look at all the components that were added to your cluster
by running:

```bash
-$ kubectl -n booksapp get all
+kubectl -n booksapp get all
```

Once the rollout has completed successfully, you can access the app itself by
port-forwarding `webapp` locally:

```bash
-$ kubectl -n booksapp port-forward svc/webapp 7000 &
+kubectl -n booksapp port-forward svc/webapp 7000 &
```

Open [http://localhost:7000/](http://localhost:7000/) in your browser to see the
@@ -87,7 +87,7 @@ First, let's run the `linkerd viz authz` command to list the authorization
resources that currently exist for the `authors` deployment:

```bash
-$ linkerd viz authz -n booksapp deploy/authors
+linkerd viz authz -n booksapp deploy/authors
ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99
default default:all-unauthenticated default/all-unauthenticated 0.0rps 70.31% 8.1rps 1ms 43ms 49ms
probe default:all-unauthenticated default/probe 0.0rps 100.00% 0.3rps 1ms 1ms 1ms
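
In the portion of the task elided between these hunks, a `Server` resource named `authors-server` is created for the authors deployment. A hypothetical sketch of such a Server is shown below; the pod labels and port name are assumptions for illustration, not taken from this page:

```bash
# Hypothetical sketch: a Server selecting the authors pods on their service
# port. The label selector and port name are assumptions.
cat <<EOF | kubectl -n booksapp apply -f -
apiVersion: policy.linkerd.io/v1beta1
kind: Server
metadata:
  name: authors-server
spec:
  podSelector:
    matchLabels:
      app: authors
  port: service
  proxyProtocol: HTTP/1
EOF
```
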
@@ -124,7 +124,7 @@ Now that we've defined a [`Server`] for the authors `Deployment`, we can run the
currently unauthorized:

```bash
-$ linkerd viz authz -n booksapp deploy/authors
+linkerd viz authz -n booksapp deploy/authors
ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99
default authors-server 9.5rps 0.00% 0.0rps 0ms 0ms 0ms
probe authors-server default/probe 0.0rps 100.00% 0.1rps 1ms 1ms 1ms
@@ -312,7 +312,7 @@ network (0.0.0.0).
Running `linkerd viz authz` again, we can now see that our new policies exist:

```bash
-$ linkerd viz authz -n booksapp deploy/authors
+linkerd viz authz -n booksapp deploy/authors
ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99
authors-get-route authors-server authorizationpolicy/authors-get-policy 0.0rps 100.00% 0.1rps 2ms 2ms 2ms
authors-probe-route authors-server authorizationpolicy/authors-probe-policy 0.0rps 100.00% 0.1rps 1ms 1ms 1ms
@@ -383,7 +383,7 @@ requests, but we haven't _authorized_ requests to that route. Running the
requests to `authors-modify-route`:

```bash
-$ linkerd viz authz -n booksapp deploy/authors
+linkerd viz authz -n booksapp deploy/authors
ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99
authors-get-route authors-server authorizationpolicy/authors-get-policy - - - - - -
authors-modify-route authors-server 9.7rps 0.00% 0.0rps 0ms 0ms 0ms
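
Before the final check, the task (in a portion not shown here) authorizes the modify route as well. A hypothetical sketch of that kind of AuthorizationPolicy follows; the referenced MeshTLSAuthentication name is an assumption for illustration:

```bash
# Hypothetical sketch: grant access to the authors-modify-route HTTPRoute only
# to meshed clients matching a MeshTLSAuthentication (name assumed here).
cat <<EOF | kubectl -n booksapp apply -f -
apiVersion: policy.linkerd.io/v1alpha1
kind: AuthorizationPolicy
metadata:
  name: authors-modify-policy
spec:
  targetRef:
    group: policy.linkerd.io
    kind: HTTPRoute
    name: authors-modify-route
  requiredAuthenticationRefs:
    - group: policy.linkerd.io
      kind: MeshTLSAuthentication
      name: authors-modify-authn
EOF
```
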
@@ -442,7 +442,7 @@ Running the `linkerd viz authz` command one last time, we now see that all
traffic is authorized:

```bash
-$ linkerd viz authz -n booksapp deploy/authors
+linkerd viz authz -n booksapp deploy/authors
ROUTE SERVER AUTHORIZATION UNAUTHORIZED SUCCESS RPS LATENCY_P50 LATENCY_P95 LATENCY_P99
authors-get-route authors-server authorizationpolicy/authors-get-policy 0.0rps 100.00% 0.1rps 0ms 0ms 0ms
authors-modify-route authors-server authorizationpolicy/authors-modify-policy 0.0rps 100.00% 0.0rps 0ms 0ms 0ms
10 changes: 5 additions & 5 deletions linkerd.io/content/2-edge/tasks/managing-egress-traffic.md
@@ -70,7 +70,7 @@ Now SSH into the client container and start generating some external traffic:

```bash
kubectl -n egress-test exec -it client -c client -- sh
-$ while sleep 1; do curl -s http://httpbin.org/get ; done
+while sleep 1; do curl -s http://httpbin.org/get ; done
```

In a separate shell, you can use the Linkerd diagnostics command to visualize
@@ -235,7 +235,7 @@ Interestingly enough, though, if we go back to our client shell and try to
initiate HTTPS traffic to the same service, it will not be allowed:

```bash
-~ $ curl -v https://httpbin.org/get
+curl -v https://httpbin.org/get
curl: (35) TLS connect error: error:00000000:lib(0)::reason(0)
```

@@ -458,7 +458,7 @@ Now let's verify that everything works as expected:

```bash
# plaintext traffic goes as expected to the /get path
-$ curl http://httpbin.org/get
+curl http://httpbin.org/get
{
"args": {},
"headers": {
@@ -472,14 +472,14 @@
}

# encrypted traffic can target all paths and hosts
-$ curl https://httpbin.org/ip
+curl https://httpbin.org/ip
{
"origin": "51.116.126.217"
}


# arbitrary unencrypted traffic goes to the internal service
-$ curl http://google.com
+curl http://google.com
{
"requestUID": "in:http-sid:terminus-grpc:-1-h1:80-190120723",
"payload": "You cannot go there right now"}
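
The behaviour verified above comes from an EgressNetwork object, plus routes attached to it, that the elided parts of this task create. As a rough, hypothetical sketch only — the resource name and namespace are assumptions, and the exact fields should be checked against the EgressNetwork reference — a cluster-wide classifier can look like this:

```bash
# Hypothetical sketch: one EgressNetwork classifying all external traffic.
# Name and namespace are assumptions; HTTPRoutes/TLSRoutes attached to this
# resource then control which external hosts and paths are reachable.
cat <<EOF | kubectl apply -f -
apiVersion: policy.linkerd.io/v1alpha1
kind: EgressNetwork
metadata:
  name: all-egress-traffic
  namespace: linkerd-egress
spec:
  trafficPolicy: Allow
EOF
```
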
46 changes: 23 additions & 23 deletions linkerd.io/content/2-edge/tasks/multicluster-using-statefulsets.md
@@ -48,8 +48,8 @@ The first step is to clone the demo repository on your local machine.

```sh
# clone example repository
-$ git clone git@github.com:linkerd/l2d-k3d-statefulset.git
-$ cd l2d-k3d-statefulset
+git clone git@github.com:linkerd/l2d-k3d-statefulset.git
+cd l2d-k3d-statefulset
```

The second step consists of creating two `k3d` clusters named `east` and `west`,
@@ -60,10 +60,10 @@ everything.

```sh
# create k3d clusters
-$ ./create.sh
+./create.sh

# list the clusters
-$ k3d cluster list
+k3d cluster list
NAME SERVERS AGENTS LOADBALANCER
east 1/1 0/0 true
west 1/1 0/0 true
@@ -77,10 +77,10 @@ controllers and links are generated for both clusters.

```sh
# Install Linkerd and multicluster, output to check should be a success
-$ ./install.sh
+./install.sh

# Next, link the two clusters together
-$ ./link.sh
+./link.sh
```

Perfect! If you've made it this far with no errors, then it's a good sign. In
@@ -100,25 +100,25 @@ communication. First, we will deploy our pods and services:

```sh
# deploy services and mesh namespaces
-$ ./deploy.sh
+./deploy.sh

# verify both clusters
#
# verify east
-$ kubectl --context=k3d-east get pods
+kubectl --context=k3d-east get pods
NAME READY STATUS RESTARTS AGE
curl-56dc7d945d-96r6p 2/2 Running 0 7s

# verify west has headless service
-$ kubectl --context=k3d-west get services
+kubectl --context=k3d-west get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 10m
nginx-svc ClusterIP None <none> 80/TCP 8s

# verify west has statefulset
#
# this may take a while to come up
-$ kubectl --context=k3d-west get pods
+kubectl --context=k3d-west get pods
NAME READY STATUS RESTARTS AGE
nginx-set-0 2/2 Running 0 53s
nginx-set-1 2/2 Running 0 43s
@@ -129,7 +129,7 @@ Before we go further, let's have a look at the endpoints object for the
`nginx-svc`:

```sh
-$ kubectl --context=k3d-west get endpoints nginx-svc -o yaml
+kubectl --context=k3d-west get endpoints nginx-svc -o yaml
...
subsets:
- addresses:
@@ -169,23 +169,23 @@ would get an answer back. We can test this out by applying the curl pod to the
`west` cluster:

```sh
-$ kubectl --context=k3d-west apply -f east/curl.yml
-$ kubectl --context=k3d-west get pods
+kubectl --context=k3d-west apply -f east/curl.yml
+kubectl --context=k3d-west get pods
NAME READY STATUS RESTARTS AGE
nginx-set-0 2/2 Running 0 5m8s
nginx-set-1 2/2 Running 0 4m58s
nginx-set-2 2/2 Running 0 4m51s
curl-56dc7d945d-s4n8j 0/2 PodInitializing 0 4s

-$ kubectl --context=k3d-west exec -it curl-56dc7d945d-s4n8j -c curl -- sh
+kubectl --context=k3d-west exec -it curl-56dc7d945d-s4n8j -c curl -- sh
/$ # prompt for curl pod
```

If we now curl one of these instances, we will get back a response.

```sh
# exec'd on the pod
-/ $ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local
+/ curl nginx-set-0.nginx-svc.default.svc.west.cluster.local
"<!DOCTYPE html>
<html>
<head>
@@ -217,10 +217,10 @@ Now, let's do the same, but this time from the `east` cluster. We will first
export the service.

```sh
-$ kubectl --context=k3d-west label service nginx-svc mirror.linkerd.io/exported="true"
+kubectl --context=k3d-west label service nginx-svc mirror.linkerd.io/exported="true"
service/nginx-svc labeled

-$ kubectl --context=k3d-east get services
+kubectl --context=k3d-east get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.43.0.1 <none> 443/TCP 20h
nginx-svc-west ClusterIP None <none> 80/TCP 29s
@@ -234,7 +234,7 @@ endpoints for `nginx-svc-west` will have the same hostnames, but each hostname
will point to one of the services we see above:

```sh
-$ kubectl --context=k3d-east get endpoints nginx-svc-k3d-west -o yaml
+kubectl --context=k3d-east get endpoints nginx-svc-k3d-west -o yaml
subsets:
- addresses:
- hostname: nginx-set-0
@@ -250,17 +250,17 @@ cluster (`west`), will be mirrored as a clusterIP service. We will see in a
second why this matters.

```sh
-$ kubectl --context=k3d-east get pods
+kubectl --context=k3d-east get pods
NAME READY STATUS RESTARTS AGE
curl-56dc7d945d-96r6p 2/2 Running 0 23m

# exec and curl
-$ kubectl --context=k3d-east exec curl-56dc7d945d-96r6p -it -c curl -- sh
+kubectl --context=k3d-east exec curl-56dc7d945d-96r6p -it -c curl -- sh
# we want to curl the same hostname we see in the endpoints object above.
# however, the service and cluster domain will now be different, since we
# are in a different cluster.
#
-/ $ curl nginx-set-0.nginx-svc-k3d-west.default.svc.east.cluster.local
+/ curl nginx-set-0.nginx-svc-k3d-west.default.svc.east.cluster.local
<!DOCTYPE html>
<html>
<head>
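
The same addressing pattern works for every pod in the StatefulSet: `<pod-hostname>.<mirrored-service>.<namespace>.svc.<cluster-domain>`. For example, still from inside the curl pod on the `east` cluster and assuming the defaults used throughout this demo:

```sh
# Each StatefulSet pod is reachable by its own hostname through the mirrored
# headless service (namespace and cluster domain as in this demo).
curl nginx-set-1.nginx-svc-k3d-west.default.svc.east.cluster.local
curl nginx-set-2.nginx-svc-k3d-west.default.svc.east.cluster.local
```
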
@@ -328,8 +328,8 @@ validation.
To clean up, you can remove both clusters entirely using the k3d CLI:

```sh
-$ k3d cluster delete east
+k3d cluster delete east
cluster east deleted
-$ k3d cluster delete west
+k3d cluster delete west
cluster west deleted
```
4 changes: 2 additions & 2 deletions linkerd.io/content/2-edge/tasks/multicluster.md
@@ -506,9 +506,9 @@ To clean up the multicluster control plane, you can run:

```bash
# Delete the link CR
-$ kubectl --context=west -n linkerd-multicluster delete links east
+kubectl --context=west -n linkerd-multicluster delete links east
# Delete the test namespace and uninstall multicluster
-$ for ctx in west east; do \
+for ctx in west east; do \
kubectl --context=${ctx} delete ns test; \
linkerd --context=${ctx} multicluster uninstall | kubectl --context=${ctx} delete -f - ; \
done
4 changes: 2 additions & 2 deletions linkerd.io/content/2-edge/tasks/restricting-access.md
@@ -21,9 +21,9 @@ haven't already done this.
Inject and install the Emojivoto application:

```bash
-$ linkerd inject https://run.linkerd.io/emojivoto.yml | kubectl apply -f -
+linkerd inject https://run.linkerd.io/emojivoto.yml | kubectl apply -f -
...
-$ linkerd check -n emojivoto --proxy -o short
+linkerd check -n emojivoto --proxy -o short
...
```

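Beyond the hunk shown here, this task goes on to restrict which clients may reach the Emojivoto workloads. In current Linkerd that is typically expressed with a `Server` plus an `AuthorizationPolicy` that references a `MeshTLSAuthentication`; a hypothetical sketch of the authentication half, with the resource name assumed for illustration, looks like this:

```bash
# Hypothetical sketch: match only meshed clients running as the emojivoto
# `web` ServiceAccount. An AuthorizationPolicy would reference this resource
# to grant those clients access; the resource name is an assumption.
cat <<EOF | kubectl -n emojivoto apply -f -
apiVersion: policy.linkerd.io/v1alpha1
kind: MeshTLSAuthentication
metadata:
  name: web-authn
spec:
  identityRefs:
    - kind: ServiceAccount
      name: web
EOF
```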