Add 'Documentation/' from commit '5bb59a12b80dc99552bebf0394a067807f72f947'

git-subtree-dir: Documentation
git-subtree-mainline: db8f94a5096c356107873455525f5fa3a02561d5
git-subtree-split: 5bb59a12b80dc99552bebf0394a067807f72f947
Casey Callendrello 2017-05-15 17:13:31 +02:00
commit 3ac78df0b6
10 changed files with 662 additions and 0 deletions

Documentation/bridge.md
# bridge plugin
## Overview
With the bridge plugin, all containers (on the same host) are plugged into a bridge (virtual switch) that resides in the host network namespace.
The containers receive one end of the veth pair with the other end connected to the bridge.
An IP address is assigned to only one end of the veth pair -- the one residing in the container.
The bridge itself can also be assigned an IP address, turning it into a gateway for the containers.
Alternatively, the bridge can function purely in L2 mode and would need to be bridged to the host network interface (if other than container-to-container communication on the same host is desired).
The network configuration specifies the name of the bridge to be used.
If the bridge is missing, the plugin will create one on first use and, if gateway mode is used, assign it an IP that was returned by the IPAM plugin via the gateway field.
## Example configuration
```json
{
    "name": "mynet",
    "type": "bridge",
    "bridge": "mynet0",
    "isDefaultGateway": true,
    "forceAddress": false,
    "ipMasq": true,
    "hairpinMode": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.10.0.0/16"
    }
}
```
## Network configuration reference
* `name` (string, required): the name of the network.
* `type` (string, required): "bridge".
* `bridge` (string, optional): name of the bridge to use/create. Defaults to "cni0".
* `isGateway` (boolean, optional): assign an IP address to the bridge. Defaults to false.
* `isDefaultGateway` (boolean, optional): sets `isGateway` to true and makes the assigned IP the default route. Defaults to false.
* `forceAddress` (boolean, optional): indicates whether a new IP address should be set if the previous value has changed. Defaults to false.
* `ipMasq` (boolean, optional): set up IP Masquerade on the host for traffic originating from this network and destined outside of it. Defaults to false.
* `mtu` (integer, optional): explicitly set MTU to the specified value. Defaults to the value chosen by the kernel.
* `hairpinMode` (boolean, optional): set hairpin mode for interfaces on the bridge. Defaults to false.
* `ipam` (dictionary, required): IPAM configuration to be used for this network.
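For comparison, here is a minimal sketch of a pure-L2 configuration as described in the overview (the network name and subnet are illustrative; external connectivity would then require bridging `mynet0` to a host uplink out-of-band):
```json
{
    "name": "mynet-l2",
    "type": "bridge",
    "bridge": "mynet0",
    "isGateway": false,
    "ipam": {
        "type": "host-local",
        "subnet": "10.10.0.0/16"
    }
}
```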

Documentation/cnitool.md
# Overview
`cnitool` is a utility that can be used to test a CNI plugin
without the need for a container runtime. It takes a network name, a
network namespace, and a command (`ADD` or `DEL`), i.e., it attaches or
detaches a container from a network. `cnitool` relies on the following
environment variables to operate properly:
* `NETCONFPATH`: the directory in which `cnitool` searches for CNI
  configuration files with the extension `.conf` or `.json`. It
  defaults to `/etc/cni/net.d`. `cnitool` loads all the CNI
  configuration files in this directory and uses the one whose network
  name matches the name given on the command line; if none matches, it
  returns `nil`.
* `CNI_PATH`: for a given CNI configuration, `cnitool` searches for
  the corresponding CNI plugin binary in this path.
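For example, assuming a configuration for a network named `mynet` exists under `/etc/cni/net.d` and plugin binaries live in `/opt/cni/bin` (both paths illustrative), a test session might look like this:
```bash
$ ip netns add testing
$ CNI_PATH=/opt/cni/bin cnitool add mynet /var/run/netns/testing
$ CNI_PATH=/opt/cni/bin cnitool del mynet /var/run/netns/testing
```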

Documentation/dhcp.md
# dhcp plugin
## Overview
With the dhcp plugin, containers can get an IP allocated by a DHCP server already running on your network.
This can be especially useful with plugin types such as [macvlan](https://github.com/containernetworking/cni/blob/master/Documentation/macvlan.md).
Because a DHCP lease must be periodically renewed for the duration of the container's lifetime, a separate daemon must be running.
The same plugin binary can also be run in daemon mode.
## Operation
To use the dhcp IPAM plugin, first launch the dhcp daemon:
```bash
# Make sure the unix socket has been removed
$ rm -f /run/cni/dhcp.sock
$ ./dhcp daemon
```
Alternatively, you can use the systemd socket activation protocol.
Be sure that the `.socket` file uses `/run/cni/dhcp.sock` as the socket path.
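For example, a minimal `.socket` unit might look like this (a sketch; only the `ListenStream` path is mandated by the plugin):
```
[Socket]
ListenStream=/run/cni/dhcp.sock

[Install]
WantedBy=sockets.target
```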
With the daemon running, containers using the dhcp plugin can be launched.
## Example configuration
```json
{
    "ipam": {
        "type": "dhcp"
    }
}
```
## Network configuration reference
* `type` (string, required): "dhcp"

Documentation/flannel.md
# flannel plugin
## Overview
This plugin is designed to work in conjunction with [flannel](https://github.com/coreos/flannel), a network fabric for containers.
When the flannel daemon is started, it outputs a `/run/flannel/subnet.env` file that looks like this:
```
FLANNEL_NETWORK=10.1.0.0/16
FLANNEL_SUBNET=10.1.17.1/24
FLANNEL_MTU=1472
FLANNEL_IPMASQ=true
```
This information reflects the attributes of the flannel network on the host.
The flannel CNI plugin uses this information to configure another CNI plugin, such as the bridge plugin.
## Operation
Given the following network configuration file and the contents of `/run/flannel/subnet.env` above,
```json
{
    "name": "mynet",
    "type": "flannel"
}
```
the flannel plugin will generate another network configuration file:
```json
{
    "name": "mynet",
    "type": "bridge",
    "mtu": 1472,
    "ipMasq": false,
    "isGateway": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.1.17.0/24"
    }
}
```
It will then invoke the bridge plugin, passing it the generated configuration.
As can be seen from above, the flannel plugin, by default, will delegate to the bridge plugin.
If additional configuration values need to be passed to the bridge plugin, this can be done via the `delegate` field:
```json
{
    "name": "mynet",
    "type": "flannel",
    "delegate": {
        "bridge": "mynet0",
        "mtu": 1400
    }
}
```
This supplies a configuration parameter to the bridge plugin -- the created bridge will now be named `mynet0`.
Notice that `mtu` has also been specified, and this value will not be overwritten by the flannel plugin.
Additionally, the `delegate` field can be used to select a different kind of plugin altogether.
To use `ipvlan` instead of `bridge`, the following configuration can be specified:
```json
{
    "name": "mynet",
    "type": "flannel",
    "delegate": {
        "type": "ipvlan",
        "master": "eth0"
    }
}
```
## Network configuration reference
* `name` (string, required): the name of the network
* `type` (string, required): "flannel"
* `subnetFile` (string, optional): full path to the subnet file written out by flanneld. Defaults to `/run/flannel/subnet.env`
* `dataDir` (string, optional): path to directory where plugin will store generated network configuration files. Defaults to `/var/lib/cni/flannel`
* `delegate` (dictionary, optional): specifies configuration options for the delegated plugin.
The flannel plugin will always set the following fields in the delegated plugin configuration:
* `name`: the value of the flannel configuration's `name` field.
* `ipam`: "host-local" type will be used with "subnet" set to `$FLANNEL_SUBNET`.
The flannel plugin will set the following fields in the delegated plugin configuration if they are not already present:
* `ipMasq`: the inverse of `$FLANNEL_IPMASQ`
* `mtu`: `$FLANNEL_MTU`
Additionally, for the bridge plugin, `isGateway` will be set to `true` if not present, as the expanded example below shows.
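For instance, given the `subnet.env` shown earlier, the `delegate` example above that sets `bridge` and `mtu` would expand to roughly this generated configuration (note that the explicit `mtu` of 1400 is kept rather than overwritten):
```json
{
    "name": "mynet",
    "type": "bridge",
    "bridge": "mynet0",
    "mtu": 1400,
    "ipMasq": false,
    "isGateway": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.1.17.0/24"
    }
}
```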

Documentation/host-local.md
# host-local IP address management plugin
host-local IPAM allocates IPv4 and IPv6 addresses out of a specified address range. Optionally,
it can include a DNS configuration from a `resolv.conf` file on the host.
## Overview
The host-local IPAM plugin allocates IPv4 and IPv6 addresses out of a specified address range.
It stores the allocation state locally on the host filesystem, thereby ensuring uniqueness of IP addresses on a single host.
## Example configurations
IPv4:
```json
{
    "ipam": {
        "type": "host-local",
        "subnet": "10.10.0.0/16",
        "rangeStart": "10.10.1.20",
        "rangeEnd": "10.10.3.50",
        "gateway": "10.10.0.254",
        "routes": [
            { "dst": "0.0.0.0/0" },
            { "dst": "192.168.0.0/16", "gw": "10.10.5.1" }
        ],
        "dataDir": "/var/my-orchestrator/container-ipam-state"
    }
}
```
IPv6:
```json
{
    "ipam": {
        "type": "host-local",
        "subnet": "3ffe:ffff:0:01ff::/64",
        "rangeStart": "3ffe:ffff:0:01ff::0010",
        "rangeEnd": "3ffe:ffff:0:01ff::0020",
        "routes": [
            { "dst": "3ffe:ffff:0:01ff::1/64" }
        ],
        "resolvConf": "/etc/resolv.conf"
    }
}
```
We can test it out on the command-line:
```bash
$ export CNI_COMMAND=ADD
$ export CNI_CONTAINERID=f81d4fae-7dec-11d0-a765-00a0c91e6bf6
$ echo '{ "name": "default", "ipam": { "type": "host-local", "subnet": "203.0.113.0/24" } }' | ./host-local
```
```json
{
    "ip4": {
        "ip": "203.0.113.1/24"
    }
}
```
## Network configuration reference
* `type` (string, required): "host-local".
* `subnet` (string, required): CIDR block to allocate out of.
* `rangeStart` (string, optional): IP inside of "subnet" from which to start allocating addresses. Defaults to ".2" IP inside of the "subnet" block.
* `rangeEnd` (string, optional): IP inside of "subnet" with which to end allocating addresses. Defaults to ".254" IP inside of the "subnet" block.
* `gateway` (string, optional): IP inside of "subnet" to designate as the gateway. Defaults to ".1" IP inside of the "subnet" block.
* `routes` (array, optional): list of routes to add to the container namespace. Each route is a dictionary with "dst" and optional "gw" fields. If "gw" is omitted, the value of "gateway" will be used.
* `resolvConf` (string, optional): Path to a `resolv.conf` on the host to parse and return as the DNS configuration
* `dataDir` (string, optional): Path to a directory to use for maintaining state, e.g. which IPs have been allocated to which containers
## Supported arguments
The following [CNI_ARGS](https://github.com/containernetworking/cni/blob/master/SPEC.md#parameters) are supported:
* `ip`: request a specific IP address from the subnet. If it's not available, the plugin will exit with an error
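For example, continuing the command-line session above (the requested address is illustrative):
```bash
$ export CNI_ARGS="ip=203.0.113.100"
$ echo '{ "name": "default", "ipam": { "type": "host-local", "subnet": "203.0.113.0/24" } }' | ./host-local
```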
## Files
Allocated IP addresses are stored as files in `/var/lib/cni/networks/$NETWORK_NAME`. The prefix can be customized with the `dataDir` option listed above.
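Continuing the example above, the allocation might be recorded like this (a sketch; the exact file layout is an implementation detail, but each file is named after the allocated IP and contains the reserving container ID):
```bash
$ ls /var/lib/cni/networks/default
203.0.113.1
$ cat /var/lib/cni/networks/default/203.0.113.1
f81d4fae-7dec-11d0-a765-00a0c91e6bf6
```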

Documentation/ipvlan.md
# ipvlan plugin
## Overview
ipvlan is a new [addition](https://lwn.net/Articles/620087/) to the Linux kernel.
Like its cousin macvlan, it virtualizes the host interface.
However, unlike macvlan, which generates a new MAC address for each interface, ipvlan devices all share the same MAC.
The kernel driver inspects the IP address of each packet when making a decision about which virtual interface should process the packet.
Because all ipvlan interfaces share the MAC address with the host interface, DHCP can only be used in conjunction with a ClientID (currently not supported by the DHCP plugin).
## Example configuration
```json
{
    "name": "mynet",
    "type": "ipvlan",
    "master": "eth0",
    "ipam": {
        "type": "host-local",
        "subnet": "10.1.2.0/24"
    }
}
```
## Network configuration reference
* `name` (string, required): the name of the network.
* `type` (string, required): "ipvlan".
* `master` (string, required): name of the host interface to enslave.
* `mode` (string, optional): one of "l2", "l3". Defaults to "l2".
* `mtu` (integer, optional): explicitly set MTU to the specified value. Defaults to the value chosen by the kernel.
* `ipam` (dictionary, required): IPAM configuration to be used for this network.
## Notes
* `ipvlan` does not allow virtual interfaces to communicate with the master interface.
Therefore the container will not be able to reach the host via the `ipvlan` interface.
Be sure to also have the container join a network that provides connectivity to the host, e.g. `ptp`; see the sketch below.
* A single master interface cannot be enslaved by both `macvlan` and `ipvlan`.
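As noted above, a second network can provide host connectivity alongside `ipvlan`; a minimal sketch of such a companion `ptp` configuration (the name and subnet are illustrative):
```json
{
    "name": "host-access",
    "type": "ptp",
    "ipam": {
        "type": "host-local",
        "subnet": "10.2.0.0/24"
    }
}
```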

Documentation/macvlan.md
# macvlan plugin
## Overview
[macvlan](http://backreference.org/2014/03/20/some-notes-on-macvlanmacvtap/) functions like a switch that is already connected to the host interface.
A host interface gets "enslaved" with the virtual interfaces sharing the physical device but having distinct MAC addresses.
Since each macvlan interface has its own MAC address, it makes it easy to use with existing DHCP servers already present on the network.
## Example configuration
```json
{
    "name": "mynet",
    "type": "macvlan",
    "master": "eth0",
    "ipam": {
        "type": "dhcp"
    }
}
```
## Network configuration reference
* `name` (string, required): the name of the network
* `type` (string, required): "macvlan"
* `master` (string, required): name of the host interface to enslave
* `mode` (string, optional): one of "bridge", "private", "vepa", "passthrough". Defaults to "bridge".
* `mtu` (integer, optional): explicitly set MTU to the specified value. Defaults to the value chosen by the kernel.
* `ipam` (dictionary, required): IPAM configuration to be used for this network.
## Notes
* If you are testing on a laptop, please remember that most wireless cards do not support being enslaved by macvlan.
* A single master interface cannot be enslaved by both `macvlan` and `ipvlan`.

Documentation/ptp.md
# ptp plugin
## Overview
The ptp plugin creates a point-to-point link between a container and the host by using a veth device.
One end of the veth pair is placed inside a container and the other end resides on the host.
The host-local IPAM plugin can be used to allocate an IP address to the container.
Traffic from the container interface will be routed through the host's interface.
## Example network configuration
```json
{
    "name": "mynet",
    "type": "ptp",
    "ipam": {
        "type": "host-local",
        "subnet": "10.1.1.0/24"
    },
    "dns": {
        "nameservers": [ "10.1.1.1", "8.8.8.8" ]
    }
}
```
## Network configuration reference
* `name` (string, required): the name of the network
* `type` (string, required): "ptp"
* `ipMasq` (boolean, optional): set up IP Masquerade on the host for traffic originating from this network and destined outside of it. Defaults to false.
* `mtu` (integer, optional): explicitly set MTU to the specified value. Defaults to value chosen by the kernel.
* `ipam` (dictionary, required): IPAM configuration to be used for this network.
* `dns` (dictionary, optional): DNS information to return as described in the [Result](/SPEC.md#result).

Documentation/spec-upgrades.md
# How to upgrade to CNI Specification v0.3.1
The 0.3.0 specification contained a small error. The Result structure's `ip` field should have been renamed to `ips` to be consistent with the IPAM result structure definition; this rename was missed when the Result was updated to accommodate multiple IP addresses and interfaces. All first-party CNI plugins (bridge, host-local, etc.) were updated to use `ips` (and are thus inconsistent with the 0.3.0 specification), and most other plugins have not yet been updated to the 0.3.0 specification, so few (if any) users should be impacted by this change.
The 0.3.1 specification corrects the Result structure to use the `ips` field name as originally intended. This is the only change between 0.3.0 and 0.3.1.
# How to upgrade to CNI Specification v0.3.0
Version 0.3.0 of the [CNI Specification](../SPEC.md) provides rich information
about container network configuration, including details of network interfaces
and support for multiple IP addresses.
To support this new data, the specification changed in a couple of significant
ways that will impact CNI users, plugin authors, and runtime authors.
This document provides guidance for how to upgrade:
- [For CNI Users](#for-cni-users)
- [For Plugin Authors](#for-plugin-authors)
- [For Runtime Authors](#for-runtime-authors)
**Note**: the CNI Spec is versioned independently from the GitHub releases
for this repo. For example, Release v0.4.0 supports Spec version v0.2.0,
and Release v0.5.0 supports Spec v0.3.0.
----
## For CNI Users
If you maintain CNI configuration files for a container runtime that uses CNI,
ensure that the configuration files specify a `cniVersion` field and that the
version there is supported by your container runtime and CNI plugins.
Configuration files without a version field should be given version 0.2.0.
The CNI spec includes example configuration files for
[single plugins](https://github.com/containernetworking/cni/blob/master/SPEC.md#example-configurations)
and for [lists of chained plugins](https://github.com/containernetworking/cni/blob/master/SPEC.md#example-configurations).
Consult the documentation for your runtime and plugins to determine what
CNI spec versions they support. Test any plugin upgrades before deploying to
production. You may find [cnitool](https://github.com/containernetworking/cni/tree/master/cnitool)
useful. Specifically, your configuration version should be the lowest common
version supported by your plugins.
## For Plugin Authors
This section provides guidance for upgrading plugins to CNI Spec Version 0.3.0.
### General guidance for all plugins (language agnostic)
To provide the smoothest upgrade path, **existing plugins should support
multiple versions of the CNI spec**. In particular, plugins with existing
installed bases should add support for CNI spec version 0.3.0 while maintaining
compatibility with older versions.
To do this, two changes are required. First, a plugin should advertise which
CNI spec versions it supports. It does this by responding to the `VERSION`
command with the following JSON data:
```json
{
    "cniVersion": "0.3.0",
    "supportedVersions": [ "0.1.0", "0.2.0", "0.3.0" ]
}
```
Second, for the `ADD` command, a plugin must respect the `cniVersion` field
provided in the [network configuration JSON](https://github.com/containernetworking/cni/blob/master/SPEC.md#network-configuration).
That field is a request for the plugin to return results of a particular format:
- If the `cniVersion` field is not present, then spec v0.2.0 should be assumed
and v0.2.0 format result JSON returned.
- If the plugin doesn't support the version, the plugin must error.
- Otherwise, the plugin must return a [CNI Result](https://github.com/containernetworking/cni/blob/master/SPEC.md#result)
in the format requested.
Result formats for older CNI spec versions are available in the
[git history for SPEC.md](https://github.com/containernetworking/cni/commits/master/SPEC.md).
For example, suppose a plugin, via its `VERSION` response, advertises CNI specification
support for v0.2.0 and v0.3.0. When it receives a `cniVersion` key of `"0.2.0"`,
the plugin must return result JSON conforming to CNI spec version 0.2.0.
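As a sketch (addresses, interface name, and sandbox path are invented here), the same allocation rendered as a v0.2.0 result and as a v0.3.1 result (using the corrected `ips` field described at the top of this document) might look like:
```json
{
    "cniVersion": "0.2.0",
    "ip4": {
        "ip": "10.1.2.3/24",
        "gateway": "10.1.2.1"
    },
    "dns": {}
}
```
versus:
```json
{
    "cniVersion": "0.3.1",
    "interfaces": [
        { "name": "eth0", "sandbox": "/var/run/netns/samplens" }
    ],
    "ips": [
        { "version": "4", "address": "10.1.2.3/24", "gateway": "10.1.2.1", "interface": 0 }
    ],
    "dns": {}
}
```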
### Specific guidance for plugins written in Go
Plugins written in Go may leverage the Go language packages in this repository
to ease the process of upgrading and supporting multiple versions. CNI
[Library and Plugins Release v0.5.0](https://github.com/containernetworking/cni/releases)
includes important changes to the Golang APIs. Plugins using these APIs will
require some changes now, but should more-easily handle spec changes and
new features going forward.
For plugin authors, the biggest change is that `types.Result` is now an
interface implemented by concrete struct types in the `types/current` and
`types/020` subpackages.
Internally, plugins should use the `types/current` structs, and convert
to or from specific versions when required. A typical plugin will only need
to do a single conversion. That is when it is about to complete and needs to
print the result JSON in the correct format to stdout. The library
function `types.PrintResult()` simplifies this by converting and printing in
a single call.
Additionally, the plugin should advertise which CNI Spec versions it supports
via the 3rd argument to `skel.PluginMain()`.
Here is some example code:
```go
import (
    "encoding/json"

    "github.com/containernetworking/cni/pkg/skel"
    "github.com/containernetworking/cni/pkg/types"
    "github.com/containernetworking/cni/pkg/types/current"
    "github.com/containernetworking/cni/pkg/version"
)

func cmdAdd(args *skel.CmdArgs) error {
    // determine the spec version to use by parsing the network configuration
    var netConf struct {
        types.NetConf
        // other plugin-specific configuration goes here
    }
    if err := json.Unmarshal(args.StdinData, &netConf); err != nil {
        return err
    }
    cniVersion := netConf.CNIVersion

    // plugin does its work...
    //   set up interfaces
    //   assign addresses, etc.

    // construct the result
    result := &current.Result{
        Interfaces: []*current.Interface{ ... },
        IPs:        []*current.IPConfig{ ... },
        ...
    }

    // print the result to stdout, in the format defined by the requested cniVersion
    return types.PrintResult(result, cniVersion)
}

func main() {
    skel.PluginMain(cmdAdd, cmdDel, version.PluginSupports("0.1.0", "0.2.0", "0.3.0"))
}
```
Alternatively, to use the result from a delegated IPAM plugin, the `result`
value might be formed like this:
```go
// invoke the IPAM delegate, then up-convert its result to the current version
ipamResult, err := ipam.ExecAdd(netConf.IPAM.Type, args.StdinData)
result, err := current.NewResultFromResult(ipamResult)
```
Other examples of spec v0.3.0-compatible plugins are the
[main plugins in this repo](https://github.com/containernetworking/cni/tree/master/plugins/main).
## For Runtime Authors
This section provides guidance for upgrading container runtimes to support
CNI Spec Version 0.3.0.
### General guidance for all runtimes (language agnostic)
#### Support multiple CNI spec versions
To provide the smoothest upgrade path and support the broadest range of CNI
plugins, **container runtimes should support multiple versions of the CNI spec**.
In particular, runtimes with existing installed bases should add support for CNI
spec version 0.3.0 while maintaining compatibility with older versions.
To support multiple versions of the CNI spec, runtimes should be able to
call both new and legacy plugins, and handle the results from either.
When calling a plugin, the runtime must request that the plugin respond in a
particular format by specifying the `cniVersion` field in the
[Network Configuration](https://github.com/containernetworking/cni/blob/master/SPEC.md#network-configuration)
JSON block. The plugin will then respond with
a [Result](https://github.com/containernetworking/cni/blob/master/SPEC.md#result)
in the format defined by that CNI spec version, and the runtime must parse
and handle this result.
#### Handle errors due to version incompatibility
Plugins may respond with an error indicating that they don't support the requested
CNI version (see [Well-known Error Codes](https://github.com/containernetworking/cni/blob/master/SPEC.md#well-known-error-codes)),
e.g.
```json
{
    "cniVersion": "0.2.0",
    "code": 1,
    "msg": "CNI version not supported"
}
```
In that case, the runtime may retry with a lower CNI spec version, or take
some other action.
#### (optional) Discover plugin version support
Runtimes may discover which CNI spec versions are supported by a plugin, by
calling the plugin with the `VERSION` command. The `VERSION` command was
added in CNI spec v0.2.0, so older plugins may not respect it. In the absence
of a successful response to `VERSION`, assume that the plugin only supports
CNI spec v0.1.0.
#### Handle missing data in v0.3.0 results
The Result for the `ADD` command in CNI spec version 0.3.0 includes a new field,
`interfaces`. An IP address in the `ips` field may describe which interface
it is assigned to, by placing a numeric index in the `interface` subfield.
However, some plugins which are v0.3.0 compatible may nonetheless omit the
`interfaces` field and/or set the `interface` index value to `-1`. Runtimes
should gracefully handle this situation, unless they have good reason to rely
on the existence of the interface data. In that case, provide the user an
error message that helps diagnose the issue.
### Specific guidance for container runtimes written in Go
Container runtimes written in Go may leverage the Go language packages in this
repository to ease the process of upgrading and supporting multiple versions.
CNI [Library and Plugins Release v0.5.0](https://github.com/containernetworking/cni/releases)
includes important changes to the Golang APIs. Runtimes using these APIs will
require some changes now, but should more-easily handle spec changes and
new features going forward.
For runtimes, the biggest changes to the Go libraries are in the `types` package.
It has been refactored to make working with versioned results simpler. The top-level
`types.Result` is now an opaque interface instead of a struct, and APIs exposed by
other packages, such as the high-level `libcni` package, have been updated to use
this interface. Concrete types are now per-version subpackages. The `types/current`
subpackage contains the latest (spec v0.3.0) types.
When up-converting older result types to spec v0.3.0, fields new in
spec v0.3.0 (like `interfaces`) may be empty. Conversely, when
down-converting v0.3.0 results to an older version, any data in those fields
will be lost.
| From | 0.1 | 0.2 | 0.3 |
|--------|-----|-----|-----|
| To 0.1 | ✔ | ✔ | x |
| To 0.2 | ✔ | ✔ | x |
| To 0.3 | ✴ | ✴ | ✔ |
Key:
> ✔ : lossless conversion <br>
> ✴ : higher-version output may have empty fields <br>
> x : lower-version output is missing some data <br>
A container runtime should use `current.NewResultFromResult()` to convert the
opaque `types.Result` to a concrete `current.Result` struct. It may then
work with the fields exposed by that struct:
```go
// runtime invokes the plugin to get the opaque types.Result
// this may conform to any CNI spec version
resultInterface, err := libcni.AddNetwork(netConf, runtimeConf)
// upconvert result to the current 0.3.0 spec
result, err := current.NewResultFromResult(resultInterface)
// use the result fields ....
for _, ip := range result.IPs { ... }
```

Documentation/tuning.md
# tuning plugin
## Overview
This plugin can change some system controls (sysctls) in the network namespace.
It does not create any network interfaces and therefore does not provide connectivity by itself.
It is only useful when used in addition to other plugins.
## Operation
The following network configuration file
```json
{
    "name": "mytuning",
    "type": "tuning",
    "sysctl": {
        "net.core.somaxconn": "500"
    }
}
```
will set `/proc/sys/net/core/somaxconn` to 500.
Other sysctls can be modified as long as they belong to the network namespace (`/proc/sys/net/*`).
A successful result would simply be:
```json
{ }
```
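Several sysctls can be set from a single configuration; a sketch (keys and values are illustrative, and all must live under `/proc/sys/net/`):
```json
{
    "name": "mytuning",
    "type": "tuning",
    "sysctl": {
        "net.core.somaxconn": "500",
        "net.ipv4.conf.all.arp_ignore": "1"
    }
}
```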
## Network sysctls documentation
Some network sysctls are documented in the Linux sources:
- [Documentation/sysctl/net.txt](https://www.kernel.org/doc/Documentation/sysctl/net.txt)
- [Documentation/networking/ip-sysctl.txt](https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt)
- [Documentation/networking/](https://www.kernel.org/doc/Documentation/networking/)