
Commit

Merge branch 'master' into old-dirs-11
steveperry-53 committed Feb 18, 2018
2 parents 2929e0f + 372eec8 commit 1247c99
Showing 15 changed files with 191 additions and 171 deletions.
1 change: 1 addition & 0 deletions _data/docs-home.yml
@@ -10,6 +10,7 @@ toc:
landing_page: /editdocs/
section:
- editdocs.md
- docs/home/contribute/participating.md
- docs/home/contribute/create-pull-request.md
- docs/home/contribute/write-new-topic.md
- docs/home/contribute/stage-documentation-changes.md
14 changes: 14 additions & 0 deletions _data/glossary/customresourcedefinition.yaml
@@ -0,0 +1,14 @@
id: CustomResourceDefinition
name: CustomResourceDefinition
aka:
- CRD
- Formerly known as ThirdPartyResources (TPR)
tags:
- fundamental
- operation
- extension
full-link: docs/tasks/access-kubernetes-api/extend-api-custom-resource-definitions/
short-description: >
Custom code that defines a resource to add to your Kubernetes API server without building a complete custom server.
long-description: >
Custom Resource Definitions let you extend the Kubernetes API for your environment if the publicly supported API resources can't meet your needs.
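As an illustrative sketch of what this glossary entry describes (the `stable.example.com` group and `CronTab` kind are hypothetical, not from this commit), a minimal CustomResourceDefinition under the `apiextensions.k8s.io/v1beta1` API of this era could be registered like so:

```shell
# Sketch only: registers a hypothetical CronTab custom resource with the API server.
cat <<EOF | kubectl create -f -
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com   # must be <plural>.<group>
spec:
  group: stable.example.com
  version: v1
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
EOF
```

Once the definition is accepted, `CronTab` objects can be created and listed (for example, with `kubectl get crontabs`) just like built-in resources.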
1 change: 1 addition & 0 deletions _redirects
@@ -77,6 +77,7 @@
/docs/api-reference/v1.9/ /docs/reference/generated/kubernetes-api/v1.9/ 301
/docs/api-reference/v1/definitions/ /docs/api-reference/v1.9/ 301
/docs/api-reference/v1/operations/ /docs/api-reference/v1.9/ 301
/docs/api-reference/v1.9/ /docs/reference/generated/kubernetes-api/v1.9/ 301

/docs/concepts/abstractions/controllers/garbage-collection/ /docs/concepts/workloads/controllers/garbage-collection/ 301
/docs/concepts/abstractions/controllers/statefulsets/ /docs/concepts/workloads/controllers/statefulset/ 301
7 changes: 7 additions & 0 deletions docs/concepts/cluster-administration/networking.md
@@ -106,7 +106,14 @@ imply any preferential status.
### ACI

[Cisco Application Centric Infrastructure](https://www.cisco.com/c/en/us/solutions/data-center-virtualization/application-centric-infrastructure/index.html) offers an integrated overlay and underlay SDN solution that supports containers, virtual machines, and bare-metal servers. The [ACI containers project](https://www.github.com/noironetworks/aci-containers) provides container networking integration for ACI. An overview of the integration is provided [here](https://www.cisco.com/c/dam/en/us/solutions/collateral/data-center-virtualization/application-centric-infrastructure/solution-overview-c22-739493.pdf).

### Big Cloud Fabric from Big Switch Networks

[Big Cloud Fabric](https://www.bigswitch.com/container-network-automation) is a cloud-native networking architecture designed to run Kubernetes in private cloud/on-premises environments. Using unified physical and virtual SDN, Big Cloud Fabric tackles inherent container networking problems such as load balancing, visibility, troubleshooting, security policies, and container traffic monitoring.

With the help of Big Cloud Fabric's virtual pod multi-tenant architecture, container orchestration systems such as Kubernetes, Red Hat OpenShift, Mesosphere DC/OS, and Docker Swarm are natively integrated alongside VM orchestration systems such as VMware, OpenStack, and Nutanix. Customers can securely interconnect any number of these clusters and enable inter-tenant communication between them if needed.

BCF was recognized by Gartner as a visionary in the latest [Magic Quadrant](http://go.bigswitch.com/17GatedDocuments-MagicQuadrantforDataCenterNetworking_Reg.html). One of the BCF Kubernetes on-premises deployments (which includes Kubernetes, DC/OS, and VMware running in multiple data centers across different geographic regions) is also referenced [here](https://portworx.com/architects-corner-kubernetes-satya-komala-nio/).

### Cilium

109 changes: 109 additions & 0 deletions docs/home/contribute/participating.md
@@ -0,0 +1,109 @@
---
title: Participating in SIG Docs
---

{% capture overview %}

SIG Docs is one of the [special interest groups](https://github.com/kubernetes/community/blob/master/sig-list.md) within the Kubernetes project, focused on writing, updating, and maintaining the documentation for Kubernetes as a whole.

{% endcapture %}

{% capture body %}

SIG Docs welcomes content and reviews from all contributors. Anyone can open a pull request (PR), and anyone is welcome to comment on content or pull requests in progress.

Within the Kubernetes project, you may also become a member, reviewer, or approver.
These roles confer additional privileges and responsibilities when it comes to approving and committing changes.
See [community-membership](https://github.com/kubernetes/community/blob/master/community-membership.md) for more information on how membership works within the Kubernetes community.

## Roles and Responsibilities

The automation reads `/hold`, `/lgtm`, and `/approve` comments and sets labels on the pull request.
When a pull request has the `lgtm` and `approve` labels without any `hold` labels, the pull request merges automatically.
Kubernetes org members, as well as SIG Docs reviewers and approvers, can add comments to control the merge automation.
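As an illustrative sketch (the handle is hypothetical, and in practice each command goes in its own PR comment), a typical automation exchange might be: a contributor or reviewer runs `/assign`, the reviewer signals technical accuracy with `/lgtm`, and an approver signs off with `/approve`, at which point the PR merges.

```
/assign @doc-reviewer
/lgtm
/approve
```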

- Members

Any member of the [Kubernetes organization](https://github.com/kubernetes) can review a pull request, and SIG Docs team members frequently request reviews from members of other SIGs for technical accuracy.
SIG Docs also welcomes reviews and feedback regardless of Kubernetes org membership.
You can indicate your approval by adding a `/lgtm` comment to a pull request.

- Reviewers

Reviewers are individuals who review documentation pull requests.

Automation assigns reviewers to pull requests, and contributors can request a review with a comment on the pull request: `/assign [@_github_handle]`.
To indicate that a pull request requires no further changes, a reviewer adds a `/lgtm` comment to the pull request; the `/lgtm` comment indicates technical accuracy.

Reviewers can add a `/hold` comment to prevent the pull request from being merged.
Another reviewer or approver can remove a hold with the comment: `/hold cancel`.

When a reviewer is assigned a pull request, reviewing it is not their sole responsibility; any other reviewer may also offer opinions on it.
However, if a reviewer is requested, the PR is generally left to that reviewer for an editorial pass on the content.
If a PR author or SIG Docs maintainer requests a review, refrain from merging or closing the PR until the requested reviewer completes their review.

- Approvers

Approvers have the ability to merge a PR.

Approvers can indicate their approval with a comment to the pull request: `/approve`.
An approver indicates editorial approval with an `/approve` comment.

Approvers can add a `/hold` comment to prevent the pull request from being merged.
Another reviewer or approver can remove a hold with the comment: `/hold cancel`.

Approvers may skip further reviews for small pull requests if the proposed changes appear trivial or well understood.
An approver can comment `/lgtm` or `/approve` to have a pull request merged; every pull request requires a vote from at least one approver before it can merge.

**Note:** There is a special case when an approver uses the comment `/lgtm`. In this case, the automation adds both the `lgtm` and `approve` labels, skipping any further review.
{: .note }

For PRs that require no review (typos or otherwise trivial changes), approvers can enter a `/lgtm` comment, indicating no need for further review and flagging the PR with approval to merge.

### Teams and groups within SIG Docs

You can get an overview of [SIG Docs from the community GitHub repo](https://github.com/kubernetes/community/tree/master/sig-docs).
The SIG Docs group defines two teams on GitHub:
- [@kubernetes/sig-docs-maintainers](https://github.com/orgs/kubernetes/teams/sig-docs-maintainers)
- [@kubernetes/sig-docs-pr-reviews](https://github.com/orgs/kubernetes/teams/sig-docs-pr-reviews)

These groups maintain the [Kubernetes website repository](https://github.com/kubernetes/website), which houses the content hosted at this site.
Both can be referenced by their `@name` in GitHub comments to communicate with everyone in that group.

These teams overlap with, but do not exactly match, the groups used by the automation tooling.
For assigning issues and pull requests, and to support PR approvals, the automation uses information from the OWNERS file.

To volunteer as a reviewer or approver, make a pull request and add your GitHub handle to the relevant section in the [OWNERS file](https://github.com/kubernetes/community/blob/master/contributors/devel/owners.md).

**Note:** Reviewers and approvers must meet requirements for participation.
For more information, see the [Kubernetes community](https://github.com/kubernetes/community/blob/master/community-membership.md#membership) repository.
{: .note }

The [OWNERS documentation](https://github.com/kubernetes/community/blob/master/contributors/devel/owners.md) explains how to maintain an OWNERS file for each repository that enables it.

The [Kubernetes website repository](https://github.com/kubernetes/website) has two automation (prow) [plugins enabled](https://github.com/kubernetes/test-infra/blob/master/prow/plugins.yaml#L210):
- blunderbuss
- approve

These two plugins use the [OWNERS](https://github.com/kubernetes/website/blob/master/OWNERS) and [OWNERS_ALIAS](https://github.com/kubernetes/website/blob/master/OWNERS_ALIAS) files in our repo for configuration.
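As a hedged sketch (the handles below are hypothetical; the real entries live in the repository's own OWNERS file), an OWNERS file consumed by these plugins typically looks like:

```yaml
# blunderbuss assigns reviewers from the `reviewers` list;
# the approve plugin requires an /approve from someone in `approvers`.
reviewers:
- hypothetical-reviewer-1
- hypothetical-reviewer-2
approvers:
- hypothetical-approver-1
```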

{% endcapture %}

{% capture whatsnext %}
For more information about contributing to the Kubernetes documentation, see:

* Review the SIG Docs [Style Guide](/docs/home/contribute/style-guide/).
* Learn how to [stage your documentation changes](/docs/home/contribute/stage-documentation-changes/).
* Learn about [writing a new topic](/docs/home/contribute/write-new-topic/).
* Learn about [using page templates](/docs/home/contribute/page-templates/).
* Learn about [creating a pull request](/docs/home/contribute/create-pull-request/).
* How to generate documentation:
* Learn how to [generate Reference Documentation for Kubernetes Federation API](/docs/home/contribute/generated-reference/federation-api/)
* Learn how to [generate Reference Documentation for kubectl Commands](/docs/home/contribute/generated-reference/kubectl/)
* Learn how to [generate Reference Documentation for the Kubernetes API](/docs/home/contribute/generated-reference/kubernetes-api/)
* Learn how to [generate Reference Pages for Kubernetes Components and Tools](/docs/home/contribute/generated-reference/kubernetes-components/)
{% endcapture %}

{% include templates/concept.md %}
2 changes: 0 additions & 2 deletions docs/setup/pick-right-solution.md
@@ -158,8 +158,6 @@ Bare-metal | custom | Fedora | _none_ | [docs](/docs/gettin
Bare-metal | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) | Community ([@aveshagarwal](https://github.com/aveshagarwal))
libvirt | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) | Community ([@aveshagarwal](https://github.com/aveshagarwal))
KVM | custom | Fedora | flannel | [docs](/docs/getting-started-guides/fedora/flannel_multi_node_cluster/) | Community ([@aveshagarwal](https://github.com/aveshagarwal))
Mesos/Docker | custom | Ubuntu | Docker | [docs](/docs/getting-started-guides/mesos-docker/) | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md))
Mesos/GCE | | | | [docs](/docs/getting-started-guides/mesos/) | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md))
DCOS | Marathon | CoreOS/Alpine | custom | [docs](/docs/getting-started-guides/dcos/) | Community ([Kubernetes-Mesos Authors](https://github.com/mesosphere/kubernetes-mesos/blob/master/AUTHORS.md))
AWS | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/aws/) | Community
GCE | CoreOS | CoreOS | flannel | [docs](/docs/getting-started-guides/coreos/) | Community ([@pires](https://github.com/pires))
44 changes: 27 additions & 17 deletions docs/tasks/administer-cluster/kubeadm-upgrade-1-8.md
@@ -36,10 +36,10 @@ You have to carry out the following steps by executing these commands on your ma
1. Install the most recent version of `kubeadm` using `curl` like so:

```shell
$ export VERSION=$(curl -sSL https://dl.k8s.io/release/stable.txt) # or manually specify a released Kubernetes version
$ export ARCH=amd64 # or: arm, arm64, ppc64le, s390x
$ curl -sSL https://dl.k8s.io/release/${VERSION}/bin/linux/${ARCH}/kubeadm > /usr/bin/kubeadm
$ chmod a+rx /usr/bin/kubeadm
export VERSION=$(curl -sSL https://dl.k8s.io/release/stable.txt) # or manually specify a released Kubernetes version
export ARCH=amd64 # or: arm, arm64, ppc64le, s390x
curl -sSL https://dl.k8s.io/release/${VERSION}/bin/linux/${ARCH}/kubeadm > /usr/bin/kubeadm
chmod a+rx /usr/bin/kubeadm
```
**Caution:** Upgrading the `kubeadm` package on your system prior to
upgrading the control plane causes a failed upgrade. Even though
@@ -51,7 +51,7 @@ this limitation.
Verify that this download of kubeadm works and has the expected version:

```shell
$ kubeadm version
kubeadm version
```

2. If this is the first time you use `kubeadm upgrade`, do the following to preserve the configuration for future upgrades:
@@ -61,23 +61,28 @@ Note that for the commands below, you will need to recall what CLI args you passed to `kubeadm init`.
If you used flags, do:

```shell
$ kubeadm config upload from-flags [flags]
kubeadm config upload from-flags [flags]
```

Where `flags` can be empty.

If you used a config file, do:

```shell
$ kubeadm config upload from-file --config [config]
kubeadm config upload from-file --config [config]
```

Where `config` is mandatory.
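As a hedged sketch (the flag value and file path below are hypothetical, not taken from this page), the two variants might look like:

```shell
# Cluster originally created with CLI flags (hypothetical flag value):
kubeadm config upload from-flags --apiserver-advertise-address=192.168.0.101

# Cluster originally created from a config file (hypothetical path):
kubeadm config upload from-file --config /etc/kubernetes/kubeadm-config.yaml
```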

3. On the master node, run the following:

```shell
$ kubeadm upgrade plan
kubeadm upgrade plan
```

You should see output similar to this:

```shell
[preflight] Running pre-flight checks
[upgrade] Making sure the cluster is healthy:
[upgrade/health] Checking API Server health: Healthy
@@ -138,7 +143,12 @@ The `kubeadm upgrade plan` checks that your cluster is in an upgradeable state a
4. Pick a version to upgrade to and run, for example, `kubeadm upgrade apply` as follows:

```shell
$ kubeadm upgrade apply v1.8.0
kubeadm upgrade apply v1.8.0
```

You should see output similar to this:

```shell
[preflight] Running pre-flight checks
[upgrade] Making sure the cluster is healthy:
[upgrade/health] Checking API Server health: Healthy
@@ -211,7 +221,7 @@ $ kubeadm upgrade apply v1.8.0
6. Add RBAC permissions for automated certificate rotation. In the future, kubeadm will perform this step automatically:

```shell
$ kubectl create clusterrolebinding kubeadm:node-autoapprove-certificate-rotation --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient --group=system:nodes
kubectl create clusterrolebinding kubeadm:node-autoapprove-certificate-rotation --clusterrole=system:certificates.k8s.io:certificatesigningrequests:selfnodeclient --group=system:nodes
```
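To confirm that the binding was created, one hedged verification step (not part of the original instructions) is:

```shell
kubectl get clusterrolebinding kubeadm:node-autoapprove-certificate-rotation
```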

## Upgrading your master and node packages
@@ -221,7 +231,7 @@ For each host (referred to as `$HOST` below) in your cluster, upgrade `kubelet`
1. Prepare the host for maintenance, marking it unschedulable and evicting the workload:

```shell
$ kubectl drain $HOST --ignore-daemonsets
kubectl drain $HOST --ignore-daemonsets
```

When running this command against the master host, this error is expected and can be safely ignored (since there are static pods running on the master):
@@ -236,32 +246,32 @@ error: pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or
If the host is running a Debian-based distro such as Ubuntu, run:

```shell
$ apt-get update
$ apt-get upgrade
apt-get update
apt-get upgrade
```

If the host is running CentOS or the like, run:

```shell
$ yum update
yum update
```

Now the new version of the `kubelet` should be running on the host. Verify this using the following command on `$HOST`:

```shell
$ systemctl status kubelet
systemctl status kubelet
```

3. Bring the host back online by marking it schedulable:

```shell
$ kubectl uncordon $HOST
kubectl uncordon $HOST
```

4. After upgrading `kubelet` on each host in your cluster, verify that all nodes are available again by executing the following (from anywhere, for example, from outside the cluster):

```shell
$ kubectl get nodes
kubectl get nodes
```

If the `STATUS` column of the above command shows `Ready` for all of your hosts, you are done.
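As an illustrative sketch (node names, ages, and the exact column set are hypothetical), healthy output looks roughly like:

```shell
kubectl get nodes
NAME       STATUS    ROLES     AGE       VERSION
master     Ready     master    1h        v1.8.0
worker-1   Ready     <none>    1h        v1.8.0
```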
38 changes: 24 additions & 14 deletions docs/tasks/administer-cluster/kubeadm-upgrade-1-9.md
@@ -39,10 +39,10 @@ Execute these commands on your master node:
1. Install the most recent version of `kubeadm` using `curl` like so:

```shell
$ export VERSION=$(curl -sSL https://dl.k8s.io/release/stable.txt) # or manually specify a released Kubernetes version
$ export ARCH=amd64 # or: arm, arm64, ppc64le, s390x
$ curl -sSL https://dl.k8s.io/release/${VERSION}/bin/linux/${ARCH}/kubeadm > /usr/bin/kubeadm
$ chmod a+rx /usr/bin/kubeadm
export VERSION=$(curl -sSL https://dl.k8s.io/release/stable.txt) # or manually specify a released Kubernetes version
export ARCH=amd64 # or: arm, arm64, ppc64le, s390x
curl -sSL https://dl.k8s.io/release/${VERSION}/bin/linux/${ARCH}/kubeadm > /usr/bin/kubeadm
chmod a+rx /usr/bin/kubeadm
```

**Caution:** Upgrading the `kubeadm` package on your system prior to upgrading the control plane causes a failed upgrade.
@@ -53,13 +53,18 @@ team is working on fixing this limitation.
Verify that this download of kubeadm works and has the expected version:

```shell
$ kubeadm version
kubeadm version
```

2. On the master node, run the following:

```shell
$ kubeadm upgrade plan
kubeadm upgrade plan
```

You should see output similar to this:

```shell
[preflight] Running pre-flight checks
[upgrade] Making sure the cluster is healthy:
[upgrade/health] Checking API Server health: Healthy
@@ -122,7 +127,12 @@ To check CoreDNS version, include the `--feature-gates=CoreDNS=true` flag to ver
3. Pick a version to upgrade to, and run the corresponding command. For example:

```shell
$ kubeadm upgrade apply v1.9.0
kubeadm upgrade apply v1.9.0
```

You should see output similar to this:

```shell
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
@@ -194,7 +204,7 @@ For each host (referred to as `$HOST` below) in your cluster, upgrade `kubelet`
1. Prepare the host for maintenance, marking it unschedulable and evicting the workload:
```shell
$ kubectl drain $HOST --ignore-daemonsets
kubectl drain $HOST --ignore-daemonsets
```
When running this command against the master host, this error is expected and can be safely ignored (since there are static pods running on the master):
@@ -209,32 +219,32 @@ error: pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or
If the host is running a Debian-based distro such as Ubuntu, run:
```shell
$ apt-get update
$ apt-get upgrade
apt-get update
apt-get upgrade
```
If the host is running CentOS or the like, run:
```shell
$ yum update
yum update
```
Now the new version of the `kubelet` should be running on the host. Verify this using the following command on `$HOST`:
```shell
$ systemctl status kubelet
systemctl status kubelet
```
3. Bring the host back online by marking it schedulable:
```shell
$ kubectl uncordon $HOST
kubectl uncordon $HOST
```
4. After upgrading `kubelet` on each host in your cluster, verify that all nodes are available again by executing the following (from anywhere, for example, from outside the cluster):
```shell
$ kubectl get nodes
kubectl get nodes
```
If the `STATUS` column of the above command shows `Ready` for all of your hosts, you are done.