Updating plugin README.md files (#549)

Removing content and pointing at the new website as a part of the CNI Documentation migration.

Signed-off-by: Nate W <4453979+nate-double-u@users.noreply.github.com>
Author: Nate W
Date: 2020-11-18 16:38:45 +00:00
Committed by: GitHub
Parent: 8aad9739d8
Commit: cccf5395e8
18 changed files with 35 additions and 1276 deletions


@@ -1,60 +1,5 @@
# bridge plugin
This document has moved to the [containernetworking/cni.dev](https://github.com/containernetworking/cni.dev) repo.
You can find it online here: https://cni.dev/plugins/main/bridge/

## Overview
With the bridge plugin, all containers (on the same host) are plugged into a bridge (virtual switch) that resides in the host network namespace.
The containers receive one end of a veth pair, with the other end connected to the bridge.
An IP address is assigned only to one end of the veth pair -- the one residing in the container.
The bridge itself can also be assigned an IP address, turning it into a gateway for the containers.
Alternatively, the bridge can function purely in L2 mode and would need to be bridged to the host network interface (if anything other than container-to-container communication on the same host is desired).

The network configuration specifies the name of the bridge to be used.
If the bridge is missing, the plugin will create one on first use and, if gateway mode is used, assign it an IP address that was returned by the IPAM plugin via the gateway field.
## Example configuration
```json
{
    "cniVersion": "0.3.1",
    "name": "mynet",
    "type": "bridge",
    "bridge": "mynet0",
    "isDefaultGateway": true,
    "forceAddress": false,
    "ipMasq": true,
    "hairpinMode": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.10.0.0/16"
    }
}
```
## Example L2-only configuration
```json
{
    "cniVersion": "0.3.1",
    "name": "mynet",
    "type": "bridge",
    "bridge": "mynet0",
    "ipam": {}
}
```
## Network configuration reference
* `name` (string, required): the name of the network.
* `type` (string, required): "bridge".
* `bridge` (string, optional): name of the bridge to use/create. Defaults to "cni0".
* `isGateway` (boolean, optional): assign an IP address to the bridge. Defaults to false.
* `isDefaultGateway` (boolean, optional): Sets isGateway to true and makes the assigned IP the default route. Defaults to false.
* `forceAddress` (boolean, optional): Indicates if a new IP address should be set if the previous value has been changed. Defaults to false.
* `ipMasq` (boolean, optional): set up IP Masquerade on the host for traffic originating from this network and destined outside of it. Defaults to false.
* `mtu` (integer, optional): explicitly set MTU to the specified value. Defaults to the value chosen by the kernel.
* `hairpinMode` (boolean, optional): set hairpin mode for interfaces on the bridge. Defaults to false.
* `ipam` (dictionary, required): IPAM configuration to be used for this network. For an L2-only network, create an empty dictionary.
* `promiscMode` (boolean, optional): set promiscuous mode on the bridge. Defaults to false.
* `vlan` (int, optional): assign VLAN tag. Defaults to none.
*Note:* The VLAN parameter configures the VLAN tag on the host end of the veth and also enables the vlan_filtering feature on the bridge interface.
*Note:* To configure the uplink for an L2 network, you need to allow the VLAN on the uplink interface by using the following command: `bridge vlan add vid VLAN_ID dev DEV`.
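As an illustrative sketch only (not part of the original README), a VLAN-tagged bridge configuration could combine the `vlan` field with the options above; the VLAN ID 100, the network name, and the subnet are placeholder values:

```json
{
    "cniVersion": "0.3.1",
    "name": "mynet-vlan",
    "type": "bridge",
    "bridge": "mynet0",
    "isGateway": true,
    "vlan": 100,
    "ipam": {
        "type": "host-local",
        "subnet": "10.20.0.0/16"
    }
}
```

Per the note above, the uplink interface would still need `bridge vlan add vid 100 dev DEV` for tagged traffic to leave the host.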


@@ -1,66 +1,5 @@
# host-device
This document has moved to the [containernetworking/cni.dev](https://github.com/containernetworking/cni.dev) repo.
You can find it online here: https://cni.dev/plugins/main/host-device/

Move an already-existing device into a container.
## Overview
This simple plugin will move the requested device from the host's network namespace
to the container's. IPAM configuration can be used for this plugin.
## Network configuration reference
The device can be specified with any one of four properties:
* `device`: The device name, e.g. `eth0`, `can0`
* `hwaddr`: A MAC address
* `kernelpath`: The kernel device kobj, e.g. `/sys/devices/pci0000:00/0000:00:1f.6`
* `pciBusID`: The PCI address of a network device, e.g. `0000:00:1f.6`
For this plugin, `CNI_IFNAME` will be ignored. Upon DEL, the device will be moved back.
The plugin also supports the following [capability argument](https://github.com/containernetworking/cni/blob/master/CONVENTIONS.md):
* `deviceID`: The PCI address of the network device, e.g. `0000:00:1f.6`
## Example configuration
A sample configuration with the `device` property looks like:
```json
{
    "cniVersion": "0.3.1",
    "type": "host-device",
    "device": "enp0s1"
}
```
A sample configuration with the `pciBusID` property looks like:
```json
{
    "cniVersion": "0.3.1",
    "type": "host-device",
    "pciBusID": "0000:3d:00.1"
}
```
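For illustration only (not taken from the original README), selecting the device by `hwaddr` follows the same pattern; the MAC address below is a placeholder:

```json
{
    "cniVersion": "0.3.1",
    "type": "host-device",
    "hwaddr": "00:11:22:33:44:55"
}
```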
A sample configuration utilizing the `deviceID` runtime configuration looks like:
1. From the operator's perspective:
```json
{
    "cniVersion": "0.3.1",
    "type": "host-device",
    "capabilities": {
        "deviceID": true
    }
}
```
2. From the plugin's perspective:
```json
{
    "cniVersion": "0.3.1",
    "type": "host-device",
    "runtimeConfig": {
        "deviceID": "0000:3d:00.1"
    }
}
```


@@ -1,45 +1,5 @@
# ipvlan plugin
This document has moved to the [containernetworking/cni.dev](https://github.com/containernetworking/cni.dev) repo.
You can find it online here: https://cni.dev/plugins/main/ipvlan/

## Overview
ipvlan is a new [addition](https://lwn.net/Articles/620087/) to the Linux kernel.
Like its cousin macvlan, it virtualizes the host interface.
However, unlike macvlan, which generates a new MAC address for each interface, ipvlan devices all share the same MAC.
The kernel driver inspects the IP address of each packet when deciding which virtual interface should process it.
Because all ipvlan interfaces share the MAC address with the host interface, DHCP can only be used in conjunction with ClientID (currently not supported by the DHCP plugin).
## Example configuration
```json
{
    "name": "mynet",
    "type": "ipvlan",
    "master": "eth0",
    "ipam": {
        "type": "host-local",
        "subnet": "10.1.2.0/24"
    }
}
```
## Network configuration reference
* `name` (string, required): the name of the network.
* `type` (string, required): "ipvlan".
* `master` (string, optional): name of the host interface to enslave. Defaults to the default route interface.
* `mode` (string, optional): one of "l2", "l3", "l3s". Defaults to "l2".
* `mtu` (integer, optional): explicitly set MTU to the specified value. Defaults to the value chosen by the kernel.
* `ipam` (dictionary, required unless chained): IPAM configuration to be used for this network.
## Notes
* `ipvlan` does not allow virtual interfaces to communicate with the master interface.
Therefore the container will not be able to reach the host via the `ipvlan` interface.
Be sure to also have the container join a network that provides connectivity to the host (e.g. `ptp`).
* A single master interface cannot be enslaved by both `macvlan` and `ipvlan`.
* For IP allocation schemes that cannot be interface agnostic, the ipvlan plugin
can be chained with an earlier plugin that handles this logic. If `master` is
omitted, then the previous Result must contain a single interface name for the
ipvlan plugin to enslave. If `ipam` is omitted, then the previous Result is used
to configure the ipvlan interface (see the sketch after this list).
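The sketch below illustrates the chained case described above; it is not part of the original README. `my-device-plugin` is a placeholder for whichever earlier plugin selects the interface and allocates addresses, and ipvlan omits both `master` and `ipam` so that it consumes the previous Result:

```json
{
    "cniVersion": "0.3.1",
    "name": "mynet",
    "plugins": [
        {
            "type": "my-device-plugin"
        },
        {
            "type": "ipvlan",
            "mode": "l2"
        }
    ]
}
```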


@@ -1,34 +1,5 @@
# macvlan plugin
This document has moved to the [containernetworking/cni.dev](https://github.com/containernetworking/cni.dev) repo.
You can find it online here: https://cni.dev/plugins/main/macvlan/

## Overview
[macvlan](http://backreference.org/2014/03/20/some-notes-on-macvlanmacvtap/) functions like a switch that is already connected to the host interface.
A host interface gets "enslaved" with the virtual interfaces sharing the physical device but having distinct MAC addresses.
Since each macvlan interface has its own MAC address, it is easy to use with existing DHCP servers already present on the network.
## Example configuration
```json
{
    "name": "mynet",
    "type": "macvlan",
    "master": "eth0",
    "ipam": {
        "type": "dhcp"
    }
}
```
## Network configuration reference
* `name` (string, required): the name of the network
* `type` (string, required): "macvlan"
* `master` (string, optional): name of the host interface to enslave. Defaults to the default route interface.
* `mode` (string, optional): one of "bridge", "private", "vepa", "passthru". Defaults to "bridge".
* `mtu` (integer, optional): explicitly set MTU to the specified value. Defaults to the value chosen by the kernel. The value must be in the range \[0, master's MTU\].
* `ipam` (dictionary, required): IPAM configuration to be used for this network. To create the interface only, without an IP address, use an empty dictionary.
## Notes
* If you are testing on a laptop, please remember that most wireless cards do not support being enslaved by macvlan.
* A single master interface cannot be enslaved by both `macvlan` and `ipvlan`.
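As a further sketch (not part of the original README), the same network could set `mode` explicitly and use static `host-local` IPAM instead of DHCP; the subnet is a placeholder:

```json
{
    "name": "mynet",
    "type": "macvlan",
    "master": "eth0",
    "mode": "bridge",
    "ipam": {
        "type": "host-local",
        "subnet": "10.1.3.0/24"
    }
}
```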


@@ -1,32 +1,5 @@
# ptp plugin
This document has moved to the [containernetworking/cni.dev](https://github.com/containernetworking/cni.dev) repo.
You can find it online here: https://cni.dev/plugins/main/ptp/

## Overview
The ptp plugin creates a point-to-point link between a container and the host by using a veth device.
One end of the veth pair is placed inside a container and the other end resides on the host.
The host-local IPAM plugin can be used to allocate an IP address to the container.
The traffic of the container interface will be routed through the interface of the host.
## Example network configuration
```json
{
    "name": "mynet",
    "type": "ptp",
    "ipam": {
        "type": "host-local",
        "subnet": "10.1.1.0/24"
    },
    "dns": {
        "nameservers": [ "10.1.1.1", "8.8.8.8" ]
    }
}
```
## Network configuration reference
* `name` (string, required): the name of the network
* `type` (string, required): "ptp"
* `ipMasq` (boolean, optional): set up IP Masquerade on the host for traffic originating from an IP of this network and destined outside of this network. Defaults to false (see the example sketched below).
* `mtu` (integer, optional): explicitly set MTU to the specified value. Defaults to the value chosen by the kernel.
* `ipam` (dictionary, required): IPAM configuration to be used for this network.
* `dns` (dictionary, optional): DNS information to return as described in the [Result](https://github.com/containernetworking/cni/blob/master/SPEC.md#result).
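The example sketched below is an illustration added here, not part of the original README: it enables `ipMasq` so containers can reach destinations outside their subnet, reusing the placeholder subnet from the example above:

```json
{
    "name": "mynet",
    "type": "ptp",
    "ipMasq": true,
    "ipam": {
        "type": "host-local",
        "subnet": "10.1.1.0/24"
    }
}
```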


@@ -1,59 +1,5 @@
# win-bridge plugin
This document has moved to the [containernetworking/cni.dev](https://github.com/containernetworking/cni.dev) repo.
You can find it online here: https://cni.dev/plugins/main/win-bridge/

## Overview
With the win-bridge plugin, all containers (on the same host) are plugged into an L2Bridge network that has one endpoint in the host namespace.
## Example configuration
```json
{
    "name": "mynet",
    "type": "win-bridge",
    "ipMasqNetwork": "10.244.0.0/16",
    "ipam": {
        "type": "host-local",
        "subnet": "10.10.0.0/16"
    },
    "policies": [
        {
            "name": "EndpointPolicy",
            "value": {
                "Type": "ROUTE",
                "DestinationPrefix": "10.137.198.27/32",
                "NeedEncap": true
            }
        }
    ],
    "HcnPolicyArgs": [
        {
            "Type": "SDNRoute",
            "Settings": {
                "DestinationPrefix": "11.0.0.0/8",
                "NeedEncap": true
            }
        }
    ],
    "loopbackDSR": true,
    "capabilities": {
        "dns": true,
        "portMappings": true
    }
}
```
## Network configuration reference
* `ApiVersion` (integer, optional): ApiVersion to use; defaults to hns. If set to 2, the plugin will try to use hcn APIs (see the example sketched below).
* `name` (string, required): the name of the network.
* `type` (string, required): "win-bridge".
* `ipMasqNetwork` (string, optional): setup NAT if not empty.
* `dns` (dictionary, optional): dns config to be used.
* `Nameservers` (list, optional): list of strings to be used for dns nameservers.
* `Search` (list, optional): list of strings to be used for dns search.
* `ipam` (dictionary, optional): IPAM configuration to be used for this network.
* `Policies` (list, optional): List of hns policies to be used (only used when ApiVersion is < 2).
* `HcnPolicyArgs` (list, optional): List of hcn policies to be used (only used when ApiVersion is 2).
* `loopbackDSR` (bool, optional): If true, will add a policy to allow the interface to support loopback direct server return.
* `capabilities` (dictionary, optional): Runtime capabilities to enable.
* `dns` (boolean, optional): If true, will take the dns config supplied by the runtime and override other settings.
* `portMappings` (boolean, optional): If true, will handle HostPort<>ContainerPort mapping using NAT HNS policies.
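As the example sketched below shows (an illustration, not part of the original README), opting into the hcn APIs only requires setting `ApiVersion`; the subnet is a placeholder:

```json
{
    "name": "mynet",
    "type": "win-bridge",
    "ApiVersion": 2,
    "ipam": {
        "type": "host-local",
        "subnet": "10.10.0.0/16"
    }
}
```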


@@ -1,38 +1,5 @@
# win-overlay plugin
This document has moved to the [containernetworking/cni.dev](https://github.com/containernetworking/cni.dev) repo.
You can find it online here: https://cni.dev/plugins/main/win-overlay/

## Overview
With the win-overlay plugin, all containers (on the same host) are plugged into an Overlay network based on VXLAN encapsulation.
## Example configuration
```json
{
    "name": "mynet",
    "type": "win-overlay",
    "ipMasq": true,
    "endpointMacPrefix": "0E-2A",
    "ipam": {
        "type": "host-local",
        "subnet": "10.10.0.0/16"
    },
    "loopbackDSR": true,
    "capabilities": {
        "dns": true
    }
}
```
## Network configuration reference
* `name` (string, required): the name of the network.
* `type` (string, required): "win-overlay".
* `ipMasq` (bool, optional): the inverse of `$FLANNEL_IPMASQ`; sets up NAT for the hnsNetwork subnet.
* `dns` (dictionary, optional): dns config to be used.
* `Nameservers` (list, optional): list of strings to be used for dns nameservers.
* `Search` (list, optional): list of strings to be used for dns search.
* `endpointMacPrefix` (string, optional): set to the MAC prefix configured for Flannel.
* `Policies` (list, optional): List of hns policies to be used.
* `ipam` (dictionary, required): IPAM configuration to be used for this network.
* `loopbackDSR` (bool, optional): If true, will add a policy to allow the interface to support loopback direct server return.
* `capabilities` (dictionary, optional): runtime capabilities to be parsed and injected by the runtime.
* `dns` (boolean, optional): If true, will take the dns config supplied by the runtime and override other settings.