This will get you routable containers with IPs on your existing subnets, advertising themselves to Consul. They will also be scalable and placed across a cluster of Swarm hosts. It’s assumed that you are already running Consul; if not, there are a ton of tutorials out there. It’s also assumed you know how to install Docker and upgrade Linux kernels.
Bonus: We add an autoscaling API called Orbiter.
So you have an existing environment. You use Consul for service discovery. Life is good. Containers are now a thing and you want to work them in without having to worry about overlay networking or reverse proxies. You also don’t want to add extra latency (which naysayers could use as fuel to kill your hopes and dreams). Lastly, you don’t have a lot of time to invest in a complex orchestration tool such as Kubernetes. With Macvlan support added in Docker 17.06, we can rock ‘n roll and get shit done.
I’d consider this a bridge-the-gap solution that may not be long-lived. You may end up with k8s in the end, using overlay networking and enjoying the modularity it brings. This solution brings the least risk, the easiest implementation, and the highest performance when you need it. You need to prove that the most basic Docker experience won’t degrade the performance of your apps. If you were to jump straight to overlays, reverse proxies, etc., you’d be taking on a lot of extra baggage without having even traveled anywhere.
Lastly, Kelsey Hightower thinks (or thought) running without all the fluff was cool, last year.
ipvlan and pure L3 network automation will unwind the mess that is container networking. No more bridges. No more NAT. No more overlays.
— Kelsey Hightower (@kelseyhightower) April 4, 2016
You may already have a bunch of VLANs and subnets, and perhaps you want containers to ride on those existing VLANs and subnets. Macvlan and Ipvlan let you slap a MAC and an IP (Macvlan) or just an IP (Ipvlan) on a container. You also don’t want to introduce any more complexity or overhead into packets getting to your containers (such as overlay networking and reverse proxies).
In my case, I’m using Consul. It’s ubiquitous. Sure, Kubernetes and Swarm both provide internal service discovery mechanisms, but what about integrating with your existing apps? What if you have an app you want to eventually containerize, but for now it runs on bare metal or EC2, advertising to Consul? How does everything “outside” talk to services that only exist “inside”?
Swarm can be set up in minutes. You mark a node (preferably more than one, to maintain quorum) as a manager and it hands you a join command with a token that you can paste onto future worker nodes. Kubernetes, being more modular and feature-packed, suffers in that its installation is a major pain. It also comes bundled with a bunch of features that are completely unnecessary for a minimalist orchestration layer.
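To give a feel for how quick that bootstrap is, here’s a minimal sketch. The manager address (172.80.0.10) and the token are placeholders; `docker swarm init` prints the real join command for you.

```shell
# On the node you want as a manager (172.80.0.10 is a hypothetical address):
manager1# docker swarm init --advertise-addr 172.80.0.10
# init prints a join command with a token, something like:
#   docker swarm join --token SWMTKN-1-xxxx 172.80.0.10:2377
# Paste that command on each future worker:
worker1# docker swarm join --token SWMTKN-1-xxxx 172.80.0.10:2377
# Optionally promote more nodes to manager so you keep quorum:
manager1# docker node promote worker1
```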
If you just want to ride on a single IP’d interface on your host, that’s cool too.
[root@xxx network-scripts]# ip a|grep bond0.40
8: bond0.40@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP qlen 1000
ip link set dev bond0.40 promisc on
echo "PROMISC=yes" >> /etc/sysconfig/network-scripts/ifcfg-bond0.40
#ip link add macnet252 link bond0.252 type macvlan mode bridge
#ifconfig macnet252 up
#ip route add 10.90.255.0/24 dev macnet252
My network is 172.80.0.0/16. I’m going to give each host a /24. I have preexisting hosts on part of the first /24 of that network, so I’m going to start at 172.80.1.0 and move on. I don’t need to burn a network or broadcast address per range, because the ranges fall inside the larger /16 supernet. The network name (in this case vlan40_net) is arbitrary.
manager1# docker network create --config-only --subnet 172.80.0.0/16 --gateway 172.80.0.1 -o parent=bond0.40 --ip-range 172.80.1.0/24 vlan40_net
worker1# docker network create --config-only --subnet 172.80.0.0/16 --gateway 172.80.0.1 -o parent=bond0.40 --ip-range 172.80.2.0/24 vlan40_net
worker2# docker network create --config-only --subnet 172.80.0.0/16 --gateway 172.80.0.1 -o parent=bond0.40 --ip-range 172.80.3.0/24 vlan40_net
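If you have more than a handful of nodes, you can script that pattern instead of typing it out. A minimal sketch, assuming the convention that host number N gets 172.80.N.0/24 (`gen_net_cmd` is a hypothetical helper, not part of Docker):

```shell
#!/bin/sh
# Hypothetical helper: print the config-only network command for host index N.
# Assumes the convention above: host N owns 172.80.N.0/24 within 172.80.0.0/16.
gen_net_cmd() {
    idx=$1
    echo "docker network create --config-only --subnet 172.80.0.0/16 --gateway 172.80.0.1 -o parent=bond0.40 --ip-range 172.80.${idx}.0/24 vlan40_net"
}

# Example: the command worker1 (host index 2) should run.
gen_net_cmd 2
```

Pipe the output to `sh` on each host (or push it out with your config management tool of choice) once you’re happy with it.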
Now I’m going to create the swarm-enabled network, on the manager. This network references the config-only per-node networks we just created. The network name (swarm-vlan40_net) is arbitrary.
manager1# docker network create -d macvlan --scope swarm --config-from vlan40_net swarm-vlan40_net
manager1# docker network ls|grep vlan40
0c1e0ab98806 vlan40_net null local
znxv8ab5t3n1 swarm-vlan40_net macvlan swarm
Use gliderlabs/registrator:master, not gliderlabs/registrator:latest, as master contains a PR that was merged after 4/16 (the last official release). Make sure you EXPOSE the proper port in your Dockerfile so Registrator knows what to advertise. You’d normally run Registrator with --network="host", but that doesn’t work with Macvlan/Swarm. Let me know if you are able to get it to work. It also needs to be run with exec so it gets the honor of running as PID 1 and can receive SIGTERMs when its time has come. This allows the agent to leave the cluster gracefully. See below.
manager1# docker service create --network swarm-vlan40_net --name portainer portainer/portainer
manager1# docker service ls
ID            NAME       MODE        REPLICAS  IMAGE
nkbu2j5suypr  portainer  replicated  1/1       portainer/portainer:latest
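For Registrator itself, a global-mode service puts one agent on every node. This is a sketch under assumptions: the Consul address (consul.service.consul:8500) is a placeholder for your environment, and the only flags shown are the standard Registrator docker-socket mount.

```shell
# Run one Registrator per node (global mode), bind-mounting the Docker socket
# where Registrator expects it (/tmp/docker.sock).
manager1# docker service create --name registrator --mode global \
    --mount type=bind,source=/var/run/docker.sock,target=/tmp/docker.sock \
    gliderlabs/registrator:master consul://consul.service.consul:8500

# Confirm a service actually registered, via Consul's HTTP catalog API...
manager1# curl -s http://localhost:8500/v1/catalog/service/portainer
# ...or via Consul DNS (the agent's DNS interface listens on 8600 by default):
manager1# dig @127.0.0.1 -p 8600 portainer.service.consul +short
```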
And to see what IPs are used by your containers on a specific host:
manager1# docker network inspect swarm-vlan40_net
[
    {
        "Name": "swarm-vlan40_net",
        "Id": "znxv8ab5t3n1vdb86jtlie823",
        "Created": "2017-08-11T07:50:12.488791524-04:00",
        "Scope": "swarm",
        "Driver": "macvlan",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.80.0.0/16",
                    "IPRange": "172.80.1.0/24",
                    "Gateway": "172.80.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": "vlan40_net"
        },
        "ConfigOnly": false,
        "Containers": {
            "6a74fe709c30e517ebb2931651d6356bb4fddac48f636263182058036ce73d75": {
                "Name": "portainer.1.a4dysjkpppuyyqi18war2b4u6",
                "EndpointID": "777b2d15e175e70aaf5a9325fa0b4faa96347e4ec635b2edff204d5d4233c506",
                "MacAddress": "02:42:0a:5a:23:40",
                "IPv4Address": "172.80.1.0/24",
                "IPv6Address": ""
            }
        },
        "Options": {
            "parent": "bond0.40"
        },
        "Labels": {}
    }
]
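If you only want the name-to-IP mapping out of that blob, jq (assumed installed) can trim it down:

```shell
# Print "container-name IPv4" pairs from the inspect output on this host.
docker network inspect swarm-vlan40_net \
    | jq -r '.[0].Containers | to_entries[] | "\(.value.Name) \(.value.IPv4Address)"'
```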
If you dig a UI, I recommend Portainer: https://github.com/portainer/portainer
Another killer way to visualize your cluster is with Visualizer, naturally: https://hub.docker.com/r/dockersamples/visualizer/tags/
Orbiter: https://github.com/gianarb/orbiter
Scaling based on an API call. Trigger a webhook from your favorite monitoring software…
This lets you make a call to the HTTP API with an “up” or “down” and it will automatically scale the app. You can also adjust how many containers are added or removed per call. It really works!
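A sketch of what that call can look like, assuming Orbiter is listening on port 8000 and you defined an autoscaler named "autoswarm" covering the portainer service. The port, names, and exact URL path are all assumptions here; check the Orbiter README for the version you deploy.

```shell
# Scale up: ask Orbiter to add instances of the service (hypothetical path/names).
curl -X POST http://localhost:8000/v1/orbiter/handle/autoswarm/portainer/up
# Scale back down:
curl -X POST http://localhost:8000/v1/orbiter/handle/autoswarm/portainer/down
```

Point your monitoring system’s webhook at the appropriate URL and you have closed-loop autoscaling without a heavyweight orchestrator.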
I’m always happy to help if anyone needs a hand. I can be found on dockercommunity.slack.com @killcity.