Tom Denham 7b381a1af2 SPEC: introduce "args" field and new error code
Based on previous discussions in the CNI maintainers' calls, the spec is
unclear on 1) when CNI_ARGS should be used and 2) the fact that dynamic
config can be passed in through the network JSON.

This PR makes it clear that per-container config can be passed in
through the network JSON, adding a top-level `args` field into
which orchestrators can add additional metadata without worrying that
plugins might reject the additional data. It also allows plugins to
reject unknown fields passed in at the top level.

Using JSON is preferable to CNI_ARGS since it allows namespaced and
structured data. CNI_ARGS is a flat list of KV pairs with reserved
characters and no escaping rules defined.
CNI_ARGS may still be used by orchestrators that want the simplicity of
passing the network config JSON as specified by the user, unchanged,
through to the CNI plugin. But for any kind of structured data, it is
recommended that the `args` field in the JSON be used instead.
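
For illustration, a network config carrying per-container data in `args`
might look like the following; the `labels` key and its contents are only
an example of the kind of namespaced, structured data an orchestrator
could pass through:

{
	"name": "mynet",
	"type": "bridge",
	"args": {
		"labels": [
			{ "key": "app", "value": "myapp" }
		]
	},
	"ipam": {
		"type": "host-local",
		"subnet": "10.22.0.0/16"
	}
}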
2016-06-14 15:02:51 -07:00

CNI - the Container Network Interface

What is CNI?

The CNI (Container Network Interface) project consists of a specification and libraries for writing plugins to configure network interfaces in Linux containers, along with a number of supported plugins. CNI concerns itself only with network connectivity of containers and removing allocated resources when the container is deleted. Because of this focus CNI has a wide range of support and the specification is simple to implement.

As well as the specification, this repository contains the Go source code of a library for integrating CNI into applications, an example command-line tool, a template for making new plugins, and the supported plugins.
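
Below is a minimal sketch of using the libcni library from Go to add a container to a network. It assumes a config file and plugin directory like the ones used later in this README; the exact function signatures may differ between releases, and the container ID and netns path are placeholders.

package main

import (
	"fmt"
	"log"

	"github.com/containernetworking/cni/libcni"
)

func main() {
	// Load a network configuration from disk (example path).
	netconf, err := libcni.ConfFromFile("/etc/cni/net.d/10-mynet.conf")
	if err != nil {
		log.Fatal(err)
	}

	// Tell the library where plugin executables live, analogous to CNI_PATH.
	cninet := &libcni.CNIConfig{Path: []string{"/opt/cni/bin"}}

	// Runtime details for the container being attached (placeholder values).
	rt := &libcni.RuntimeConf{
		ContainerID: "example-container",
		NetNS:       "/var/run/netns/example",
		IfName:      "eth0",
	}

	// ADD the container to the network and print what the plugin returned.
	result, err := cninet.AddNetwork(netconf, rt)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(result)
}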

The template code makes it straightforward to create a CNI plugin for an existing container networking project. CNI also makes a good framework for creating a new container networking project from scratch.

Why develop CNI?

Application containers on Linux are a rapidly evolving area, and within this area networking is not well addressed as it is highly environment-specific. We believe that many container runtimes and orchestrators will seek to solve the same problem of making the network layer pluggable.

To avoid duplication, we think it is prudent to define a common interface between the network plugins and container execution: hence we put forward this specification, along with libraries for Go and a set of plugins.

Who is using CNI?

Contributing to CNI

We welcome contributions, including bug reports, and code and documentation improvements. If you intend to contribute to code or documentation, please read CONTRIBUTING.md. Also see the contact section in this README.

How do I use CNI?

Requirements

CNI requires Go 1.5+ to build.

Go 1.5 users will need to set GO15VENDOREXPERIMENT=1 to get vendored dependencies. This flag is set by default in Go 1.6.
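
For example, on Go 1.5 the included build script can be run with the flag set:

$ GO15VENDOREXPERIMENT=1 ./build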

Included Plugins

This repository includes a number of common plugins in the plugins/ directory. Please see the Documentation/ directory for documentation about particular plugins.

Running the plugins

The scripts/ directory contains two scripts, priv-net-run.sh and docker-run.sh, that can be used to exercise the plugins.

Note: priv-net-run.sh depends on jq.

Start out by creating a netconf file to describe a network:

$ mkdir -p /etc/cni/net.d
$ cat >/etc/cni/net.d/10-mynet.conf <<EOF
{
	"name": "mynet",
	"type": "bridge",
	"bridge": "cni0",
	"isGateway": true,
	"ipMasq": true,
	"ipam": {
		"type": "host-local",
		"subnet": "10.22.0.0/16",
		"routes": [
			{ "dst": "0.0.0.0/0" }
		]
	}
}
EOF
$ cat >/etc/cni/net.d/99-loopback.conf <<EOF
{
	"type": "loopback"
}
EOF

The directory /etc/cni/net.d is the default location in which the scripts will look for net configurations.

Next, build the plugins:

$ ./build

Finally, execute a command (ifconfig in this example) in a private network namespace that has joined the mynet network:

$ CNI_PATH=`pwd`/bin
$ cd scripts
$ sudo CNI_PATH=$CNI_PATH ./priv-net-run.sh ifconfig
eth0      Link encap:Ethernet  HWaddr f2:c2:6f:54:b8:2b  
          inet addr:10.22.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::f0c2:6fff:fe54:b82b/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:1 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:1 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:90 (90.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

The environment variable CNI_PATH tells the scripts and library where to look for plugin executables.

Running a Docker container with network namespace set up by CNI plugins

Use the instructions in the previous section to define a netconf and build the plugins. Next, the docker-run.sh script wraps docker run to execute the plugins prior to entering the container:

$ CNI_PATH=`pwd`/bin
$ cd scripts
$ sudo CNI_PATH=$CNI_PATH ./docker-run.sh --rm busybox:latest ifconfig
eth0      Link encap:Ethernet  HWaddr fa:60:70:aa:07:d1  
          inet addr:10.22.0.2  Bcast:0.0.0.0  Mask:255.255.0.0
          inet6 addr: fe80::f860:70ff:feaa:7d1/64 Scope:Link
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:1 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:1 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:90 (90.0 B)  TX bytes:0 (0.0 B)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

What might CNI do in the future?

CNI currently covers a wide range of needs for network configuration due to its simple model and API. However, in the future CNI might want to branch out into other directions:

  • Dynamic updates to existing network configuration
  • Dynamic policies for network bandwidth and firewall rules

If these topics are of interest, please contact the team via the mailing list or IRC and find some like-minded people in the community to put a proposal together.

Contact

For any questions about CNI, please reach out on the mailing list:
