Convert absolute links to relative (kubernetes#991)
Follow-up to kubernetes#990, also based on the `alpeb/unversion-common-docs` branch.

This is preliminary work to convert `/2` into `/2.9` in a follow-up PR.

For reference, here's the procedure I followed. It was also performed for `2` (s/2.10/2/ in the commands below).

```console
# Under /features
$ find . -type f -exec sed -i -e 's/\/2.10\/features/\.\./g' {} \;
$ find . -type f -exec sed -i -e 's/\/2.10\/tasks/\.\.\/\.\.\/tasks/g' {} \;
$ find . -type f -exec sed -i -e 's/\/2.10\/reference/\.\.\/\.\.\/reference/g' {} \;
# Then edit _index.html which refers directly to ../tasks and ../reference

# Under /tasks
$ find . -type f -exec sed -i -e 's/\/2.10\/tasks/\.\./g' {} \;
$ find . -type f -exec sed -i -e 's/\/2.10\/features/\.\.\/\.\.\/features/g' {} \;
$ find . -type f -exec sed -i -e 's/\/2.10\/reference/\.\.\/\.\.\/reference/g' {} \;
$ find . -type f -exec sed -i -e 's/\/2.10\/getting-started/\.\.\/\.\.\/getting-started/g' {} \;
$ find . -type f -exec sed -i -e 's/\/2.10\/upgrade/\.\.\/\.\.\/upgrade/g' {} \;
$ find . -type f -exec sed -i -e 's/\/2.10\/proxy-injection/\.\.\/\.\.\/proxy-injection/g' {} \;
$ find . -type f -exec sed -i -e 's/\/2.10\/observability/\.\.\/\.\.\/observability/g' {} \;
$ find . -type f -exec sed -i -e 's/\/2.10\/adding-your-service/\.\.\/\.\.\/adding-your-service/g' {} \;
# Then edit _index.html which refers directly to ../tasks and ../reference

# Under /reference
$ find . -maxdepth 1 -type f -exec sed -i -e 's/\/2.10\/reference/\.\./g' {} \;
$ find . -maxdepth 1 -type f -exec sed -i -e 's/\/2.10\/features/\.\.\/\.\.\/features/g' {} \;
$ find . -maxdepth 1 -type f -exec sed -i -e 's/\/2.10\/tasks/\.\.\/\.\.\/tasks/g' {} \;

# Under /reference/cli
$ find . -type f -exec sed -i -e 's/\/2.10\/reference/\.\.\/\.\./g' {} \;
$ find . -type f -exec sed -i -e 's/\/2.10\/features/\.\.\/\.\.\/\.\.\/features/g' {} \;
$ find . -type f -exec sed -i -e 's/\/2.10\/tasks/\.\.\/\.\.\/\.\.\/tasks/g' {} \;

# Under getting-started and overview: just manually edit _index.md
```
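
As a suggested sanity check (not part of the original procedure), a recursive grep can confirm that no absolute `/2.10/` references survived the substitutions; the patterns below are illustrative and assume the links appear either as plain Markdown links or inside Hugo `ref` shortcodes. The last command is a sketch of an equivalent `sed` invocation using `|` as the delimiter, which avoids escaping every slash:

```console
# Hypothetical verification step: list any Markdown links or Hugo refs
# that still use the absolute /2.10/ prefix after the substitutions above.
$ grep -RnE '\]\(/2\.10/' .
$ grep -RnE 'ref "/2\.10/' .

# Equivalent sed form with an alternate delimiter (no slash escaping needed).
$ find . -type f -exec sed -i -e 's|/2.10/features|..|g' {} \;
```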
alpeb authored Mar 12, 2021
1 parent 810eb23 commit f45e1a8
Showing 132 changed files with 532 additions and 533 deletions.
4 changes: 2 additions & 2 deletions linkerd.io/content/2.10/features/_index.md
@@ -6,8 +6,8 @@ weight = 3
+++

Linkerd offers many features, outlined below. For our walkthroughs and guides,
please see the [Linkerd task docs]({{% ref "/2.10/tasks" %}}). For a reference,
see the [Linkerd reference docs]({{% ref "/2.10/reference" %}}).
please see the [Linkerd task docs]({{% ref "../tasks" %}}). For a reference,
see the [Linkerd reference docs]({{% ref "../reference" %}}).

## Linkerd's features

12 changes: 6 additions & 6 deletions linkerd.io/content/2.10/features/automatic-mtls.md
@@ -3,7 +3,7 @@ title = "Automatic mTLS"
description = "Linkerd automatically enables mutual Transport Layer Security (TLS) for all communication between meshed applications."
weight = 4
aliases = [
"/2.10/features/automatic-tls"
"../automatic-tls"
]
+++

@@ -16,7 +16,7 @@ plane also runs on the data plane, this means that communication between
Linkerd's control plane components are also automatically secured via mTLS.

Not all traffic can be automatically mTLS'd, but it's easy to [verify which
traffic is](/2.10/tasks/securing-your-service/). See [Caveats and future
traffic is](../../tasks/securing-your-service/). See [Caveats and future
work](#caveats-and-future-work) below for details on which traffic cannot
currently be automatically encrypted.

@@ -38,7 +38,7 @@ certificate and private key are placed into a [Kubernetes
Secret](https://kubernetes.io/docs/concepts/configuration/secret/). By default,
the Secret is placed in the `linkerd` namespace and can only be read by the
service account used by the [Linkerd control
plane](/2.10/reference/architecture/)'s `identity` component.
plane](../../reference/architecture/)'s `identity` component.

On the data plane side, each proxy is passed the trust anchor in an environment
variable. At startup, the proxy generates a private key, stored in a [tmpfs
@@ -69,13 +69,13 @@ name.

The trust anchor generated by `linkerd install` expires after 365 days, and
must be [manually
rotated](/2.10/tasks/manually-rotating-control-plane-tls-credentials/).
rotated](../../tasks/manually-rotating-control-plane-tls-credentials/).
Alternatively you can [provide the trust anchor
yourself](/2.10/tasks/generate-certificates/) and control the expiration date.
yourself](../../tasks/generate-certificates/) and control the expiration date.

By default, the issuer certificate and key are not automatically rotated. You
can [set up automatic rotation with
`cert-manager`](/2.10/tasks/automatically-rotating-control-plane-tls-credentials/).
`cert-manager`](../../tasks/automatically-rotating-control-plane-tls-credentials/).

## Caveats and future work

2 changes: 1 addition & 1 deletion linkerd.io/content/2.10/features/cni.md
@@ -69,7 +69,7 @@ In Helm v3, It has been deprecated, and is the first argument as
{{< /note >}}

At that point you are ready to install Linkerd with CNI enabled.
You can follow [Installing Linkerd with Helm](/2.10/tasks/install-helm/) to do so.
You can follow [Installing Linkerd with Helm](../../tasks/install-helm/) to do so.

## Additional configuration

4 changes: 2 additions & 2 deletions linkerd.io/content/2.10/features/dashboard.md
@@ -3,8 +3,8 @@ title = "Dashboard and Grafana"
description = "Linkerd provides a web dashboard, as well as pre-configured Grafana dashboards."
+++

In addition to its [command-line interface](/2.10/cli/), Linkerd provides a web
dashboard and pre-configured Grafana dashboards.
In addition to its [command-line interface](../../reference/cli/), Linkerd
provides a web dashboard and pre-configured Grafana dashboards.

To access this functionality, you need to have installed the Viz extension:

2 changes: 1 addition & 1 deletion linkerd.io/content/2.10/features/distributed-tracing.md
@@ -56,4 +56,4 @@ them into traces, and a trace backend to store the trace data and allow the
user to view/query it.

For details, please see our [guide to adding distributed tracing to your
application with Linkerd](/2.10/tasks/distributed-tracing/).
application with Linkerd](../../tasks/distributed-tracing/).
2 changes: 1 addition & 1 deletion linkerd.io/content/2.10/features/fault-injection.md
@@ -9,4 +9,4 @@ Traditionally, this would require modifying the service's code to add a fault
injection library that would be doing the actual work. Linkerd can do this
without any service code changes, only requiring a little configuration.

To inject faults into your own services, follow the [tutorial](/2.10/tasks/fault-injection/).
To inject faults into your own services, follow the [tutorial](../../tasks/fault-injection/).
8 changes: 4 additions & 4 deletions linkerd.io/content/2.10/features/ha.md
@@ -13,7 +13,7 @@ For production workloads, Linkerd's control plane can run in high availability
* Sets production-ready CPU and memory resource requests on control plane
components.
* Sets production-ready CPU and memory resource requests on data plane proxies
* *Requires* that the [proxy auto-injector](/2.10/features/proxy-injection/) be
* *Requires* that the [proxy auto-injector](../proxy-injection/) be
functional for any pods to be scheduled.
* Sets [anti-affinity
policies](https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#affinity-and-anti-affinity)
@@ -44,7 +44,7 @@ flag:
linkerd install --ha --controller-replicas=2 | kubectl apply -f -
```

See the full [`install` CLI documentation](/2.10/reference/cli/install/) for
See the full [`install` CLI documentation](../../reference/cli/install/) for
reference.

The `linkerd upgrade` command can be used to enable HA mode on an existing
@@ -57,7 +57,7 @@ linkerd upgrade --ha | kubectl apply -f -
## Proxy injector failure policy

The HA proxy injector is deployed with a stricter failure policy to enforce
[automatic proxy injection](/2.10/features/proxy-injection/). This setup ensures
[automatic proxy injection](../proxy-injection/). This setup ensures
that no annotated workloads are accidentally scheduled to run on your cluster,
without the Linkerd proxy. (This can happen when the proxy injector is down.)

@@ -123,7 +123,7 @@ Prometheus and Grafana.
The Linkerd Viz extension provides a pre-configured Prometheus pod, but for
production workloads we recommend setting up your own Prometheus instance. To
scrape the data plane metrics, follow the instructions
[here](https://linkerd.io/2.10/tasks/external-prometheus/). This will provide you
[here](https://linkerd.io../../tasks/external-prometheus/). This will provide you
with more control over resource requirement, backup strategy and data retention.

When planning for memory capacity to store Linkerd timeseries data, the usual
2 changes: 1 addition & 1 deletion linkerd.io/content/2.10/features/ingress.md
@@ -10,5 +10,5 @@ aliases = [
For reasons of simplicity, Linkerd does not provide its own ingress controller.
Instead, Linkerd is designed to work alongside your ingress controller of choice.

See the [Using Ingress with Linkerd Guide](/2.10/tasks/using-ingress/) for examples
See the [Using Ingress with Linkerd Guide](../../tasks/using-ingress/) for examples
of how to get it all working together.
4 changes: 2 additions & 2 deletions linkerd.io/content/2.10/features/load-balancing.md
@@ -20,9 +20,9 @@ endpoints provided by DNS.
For destinations that are in Kubernetes, Linkerd will look up the IP address in
the Kubernetes API. If the IP address corresponds to a Service, Linkerd will
load balance across the endpoints of that Service and apply any policy from that
Service's [Service Profile](/2.10/features/service-profiles/). On the other hand,
Service's [Service Profile](../service-profiles/). On the other hand,
if the IP address corresponds to a Pod, Linkerd will not perform any load
balancing or apply any [Service Profiles](/2.10/features/service-profiles/).
balancing or apply any [Service Profiles](../service-profiles/).

{{< note >}}
If working with headless services, endpoints of the service cannot be retrieved.
4 changes: 2 additions & 2 deletions linkerd.io/content/2.10/features/multicluster.md
@@ -52,11 +52,11 @@ Once these components are installed, Kubernetes `Service` resources that match
a label selector can be exported to other clusters.

Ready to get started? See the [getting started with multi-cluster
guide](/2.10/tasks/multicluster/) for a walkthrough.
guide](../../tasks/multicluster/) for a walkthrough.

## Further reading

* [Multi-cluster installation instructions](/2.10/tasks/installing-multicluster/).
* [Multi-cluster installation instructions](../../tasks/installing-multicluster/).
* [Architecting for multi-cluster
Kubernetes](/2020/02/17/architecting-for-multicluster-kubernetes/), a blog
post explaining some of the design rationale behind Linkerd's multi-cluster
10 changes: 5 additions & 5 deletions linkerd.io/content/2.10/features/proxy-injection.md
@@ -10,14 +10,14 @@ Linkerd automatically adds the data plane proxy to pods when the
`linkerd.io/inject: enabled` annotation is present on a namespace or any
workloads, such as deployments or pods. This is known as "proxy injection".

See [Adding Your Service](/2.10/tasks/adding-your-service/) for a walkthrough of
See [Adding Your Service](../../tasks/adding-your-service/) for a walkthrough of
how to use this feature in practice.

{{< note >}}
Proxy injection is also where proxy *configuration* happens. While it's rarely
necessary, you can configure proxy settings by setting additional Kubernetes
annotations at the resource level prior to injection. See the [full list of
proxy configuration options](/2.10/reference/proxy-configuration/).
proxy configuration options](../../reference/proxy-configuration/).
{{< /note >}}

## Details
@@ -34,7 +34,7 @@ For each pod, two containers are injected:
Container](https://kubernetes.io/docs/concepts/workloads/pods/init-containers/)
that configures `iptables` to automatically forward all incoming and
outgoing TCP traffic through the proxy. (Note that this container is not
present if the [Linkerd CNI Plugin](/2.10/features/cni/) has been enabled.)
present if the [Linkerd CNI Plugin](../cni/) has been enabled.)
1. `linkerd-proxy`, the Linkerd data plane proxy itself.

Note that simply adding the annotation to a resource with pre-existing pods
Expand All @@ -50,7 +50,7 @@ otherwise be enabled, by adding the `linkerd.io/inject: disabled` annotation.

## Manual injection

The [`linkerd inject`](/2.10/reference/cli/inject/) CLI command is a text
The [`linkerd inject`](../../reference/cli/inject/) CLI command is a text
transform that, by default, simply adds the inject annotation to a given
Kubernetes manifest.

@@ -60,5 +60,5 @@ Linkerd 2.4; however, having injection to the cluster side makes it easier to
ensure that the data plane is always present and configured correctly,
regardless of how pods are deployed.

See the [`linkerd inject` reference](/2.10/reference/cli/inject/) for more
See the [`linkerd inject` reference](../../reference/cli/inject/) for more
information.
6 changes: 3 additions & 3 deletions linkerd.io/content/2.10/features/retries-and-timeouts.md
@@ -15,7 +15,7 @@ number of times, it becomes important to limit the total amount of time a client
waits before giving up entirely. Imagine a number of retries forcing a client
to wait for 10 seconds.

A [service profile](/2.10/features/service-profiles/) may define certain routes as
A [service profile](../service-profiles/) may define certain routes as
retryable or specify timeouts for routes. This will cause the Linkerd proxy to
perform the appropriate retries or timeouts when calling that service. Retries
and timeouts are always performed on the *outbound* (client) side.
@@ -29,8 +29,8 @@ to.

These can be setup by following the guides:

- [Configuring Retries](/2.10/tasks/configuring-retries/)
- [Configuring Timeouts](/2.10/tasks/configuring-timeouts/)
- [Configuring Retries](../../tasks/configuring-retries/)
- [Configuring Timeouts](../../tasks/configuring-timeouts/)

## How Retries Can Go Wrong

10 changes: 5 additions & 5 deletions linkerd.io/content/2.10/features/service-profiles.md
@@ -22,12 +22,12 @@ to.

To get started with service profiles you can:

- Look into [setting up service profiles](/2.10/tasks/setting-up-service-profiles/)
- Look into [setting up service profiles](../../tasks/setting-up-service-profiles/)
for your own services.
- Understand what is required to see
[per-route metrics](/2.10/tasks/getting-per-route-metrics/).
- [Configure retries](/2.10/tasks/configuring-retries/) on your own services.
- [Configure timeouts](/2.10/tasks/configuring-timeouts/) on your own services.
- Glance at the [reference](/2.10/reference/service-profiles/) documentation.
[per-route metrics](../../tasks/getting-per-route-metrics/).
- [Configure retries](../../tasks/configuring-retries/) on your own services.
- [Configure timeouts](../../tasks/configuring-timeouts/) on your own services.
- Glance at the [reference](../../reference/service-profiles/) documentation.

[crd]: https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/
10 changes: 5 additions & 5 deletions linkerd.io/content/2.10/features/telemetry.md
@@ -27,17 +27,17 @@ requiring any work on the part of the developer. These features include:
latency distributions) for HTTP, HTTP/2, and gRPC traffic.
* Recording of TCP-level metrics (bytes in/out, etc) for other TCP traffic.
* Reporting metrics per service, per caller/callee pair, or per route/path
(with [Service Profiles](/2.10/features/service-profiles/)).
(with [Service Profiles](../service-profiles/)).
* Generating topology graphs that display the runtime relationship between
services.
* Live, on-demand request sampling.

This data can be consumed in several ways:

* Through the [Linkerd CLI](/2.10/cli/), e.g. with `linkerd viz stat` and
* Through the [Linkerd CLI](../../reference/cli/), e.g. with `linkerd viz stat` and
`linkerd viz routes`.
* Through the [Linkerd dashboard](/2.10/features/dashboard/), and
[pre-built Grafana dashboards](/2.10/features/dashboard/#grafana).
* Through the [Linkerd dashboard](../dashboard/), and
[pre-built Grafana dashboards](../dashboard/#grafana).
* Directly from Linkerd's built-in Prometheus instance

## Golden metrics
@@ -76,4 +76,4 @@ Rather, Linkerd is designed to *supplement* your existing metrics store. If
Linkerd's metrics are valuable, you should export them into your existing
historical metrics store.

See [Exporting Metrics](/2.10/tasks/exporting-metrics/) for more.
See [Exporting Metrics](../../tasks/exporting-metrics/) for more.
4 changes: 2 additions & 2 deletions linkerd.io/content/2.10/features/traffic-split.md
@@ -32,5 +32,5 @@ account the success rate and latency of old and new versions. See the

Check out some examples of what you can do with traffic splitting:

- [Canary Releases](/2.10/tasks/canary-release/)
- [Fault Injection](/2.10/tasks/fault-injection/)
- [Canary Releases](../../tasks/canary-release/)
- [Fault Injection](../../tasks/fault-injection/)
22 changes: 11 additions & 11 deletions linkerd.io/content/2.10/getting-started/_index.md
@@ -3,7 +3,7 @@ title = "Getting Started"
aliases = [
"/getting-started/istio/",
"/choose-your-platform/",
"/2.10/katacoda/",
"/../katacoda/",
"/doc/getting-started",
"/getting-started"
]
@@ -46,7 +46,7 @@ Now that we have our cluster, we'll install the Linkerd CLI and use it validate
that your cluster is capable of hosting the Linkerd control plane.

(Note: if you're using a GKE "private cluster", there are some [extra steps
required](/2.10/reference/cluster-configuration/#private-clusters) before you can
required](../reference/cluster-configuration/#private-clusters) before you can
proceed to the next step.)

## Step 1: Install the CLI
@@ -110,7 +110,7 @@ add those resources to your cluster.
{{< note >}}
Some control plane resources require cluster-wide permissions. If you are
installing on a cluster where these permissions are restricted, you may prefer
the alternative [multi-stage install](/2.10/tasks/install/#multi-stage-install)
the alternative [multi-stage install](../tasks/install/#multi-stage-install)
process, which will split these "sensitive" components into a separate,
self-contained step which can be handed off to another party.
{{< /note >}}
@@ -173,8 +173,8 @@ linkerd viz dashboard &
title="The Linkerd dashboard in action" >}}

This command sets up a port forward from your local system to the
[linkerd-web](/2.10/reference/architecture/#web) pod. (It's also possible to
[expose the dashboard](/2.10/tasks/exposing-dashboard/) for everyone to access.)
[linkerd-web](../reference/architecture/#web) pod. (It's also possible to
[expose the dashboard](../tasks/exposing-dashboard/) for everyone to access.)

Because the control plane components all have the proxy installed in their pods,
each component is also part of the data plane itself. This provides the ability
@@ -232,7 +232,7 @@ This command retrieves all of the deployments running in the `emojivoto`
namespace, runs the manifest through `linkerd inject`, and then reapplies it to
the cluster. The `linkerd inject` command adds annotations to the pod spec
instructing Linkerd to add ("inject") the proxy as a container to the pod spec.
(See [Automatic Proxy Injection](/2.10/features/proxy-injection/) for more.)
(See [Automatic Proxy Injection](../features/proxy-injection/) for more.)

As with `install`, `inject` is a pure text operation, meaning that you can
inspect the input and output before you use it. Once piped into `kubectl
@@ -297,8 +297,8 @@ to use your browser instead:
{{< /gallery >}}

What about things that happened in the past? Linkerd includes
[Grafana](/2.10/reference/architecture/#grafana) to visualize the metrics
collected by [Prometheus](/2.10/reference/architecture/#prometheus), and ships
[Grafana](../reference/architecture/#grafana) to visualize the metrics
collected by [Prometheus](../reference/architecture/#prometheus), and ships
with some pre-configured dashboards. You can get to these by clicking the
Grafana icon in the overview page.

@@ -309,9 +309,9 @@ Grafana icon in the overview page.

Congratulations, you're now a Linkerd user! Here are some suggested next steps:

- Use Linkerd to [debug the errors in *emojivoto*](/2.10/debugging-an-app/)
- [Add your own service](/2.10/adding-your-service/) to Linkerd without downtime
- Learn more about [Linkerd's architecture](/2.10/reference/architecture/)
- Use Linkerd to [debug the errors in *emojivoto*](../debugging-an-app/)
- [Add your own service](../adding-your-service/) to Linkerd without downtime
- Learn more about [Linkerd's architecture](../reference/architecture/)
- Hop into the #linkerd2 channel on [the Linkerd
Slack](https://slack.linkerd.io)

14 changes: 7 additions & 7 deletions linkerd.io/content/2.10/overview/_index.md
@@ -29,15 +29,15 @@ developed in the open in the [Linkerd GitHub organization](https://github.com/li
Linkerd has three basic components: a UI, a *data plane*, and a *control
plane*. You run Linkerd by:

1. [Installing the CLI on your local system](/2.10/getting-started/#step-1-install-the-cli);
1. [Installing the control plane into your cluster](/2.10/getting-started/#step-3-install-linkerd-onto-the-cluster);
1. [Adding your services to Linkerd's data plane](/2.10/tasks/adding-your-service/).
1. [Installing the CLI on your local system](../getting-started/#step-1-install-the-cli);
1. [Installing the control plane into your cluster](../getting-started/#step-3-install-linkerd-onto-the-cluster);
1. [Adding your services to Linkerd's data plane](../tasks/adding-your-service/).

Once a service is running with Linkerd, you can use [Linkerd's
UI](/2.10/getting-started/#step-4-explore-linkerd) to inspect and
UI](../getting-started/#step-4-explore-linkerd) to inspect and
manipulate it.

You can [get started](/2.10/getting-started/) in minutes!
You can [get started](../getting-started/) in minutes!

## How it works

@@ -64,6 +64,6 @@ Linkerd is currently published in several tracks:

## Next steps

[Get started with Linkerd](/2.10/getting-started/) in minutes, or check out the
[architecture](/2.10/reference/architecture/) for more details on Linkerd's
[Get started with Linkerd](../getting-started/) in minutes, or check out the
[architecture](../reference/architecture/) for more details on Linkerd's
components and how they all fit together.