Orchestrating CoreOS with OpenStack Heat

Having finally spent a bit of time with OpenStack’s Heat, I’ve started to see what I can do to automate infrastructure deployments and services by using it in conjunction with CoreOS. This post builds on Scott Lowe’s introduction to CoreOS and Heat and does a few of the things he suggests, such as creating a dedicated network and deploying an arbitrary number of instances. It’s just enough to stand up a cluster on which you can then define some services and roll out your application stack in order to start testing.

The Heat template itself looks like this:

heat_template_version: 2014-10-16

description: Deploy a CoreOS cluster

parameters:
  count:
    description: Number of CoreOS machines to deploy
    type: number
    default: 3
    constraints:
      - range:
          min: 1
          max: 10
        description: Must be between 1 and 10 servers.
  key_name:
    type: string
    description: Name of key-pair to be used for compute instance
  flavor:
    type: string
    default: dc1.1x1.20
    constraints:
      - allowed_values:
          - dc1.1x1.20
          - dc1.1x2.20
          - dc1.2x2.40
          - dc1.2x4.40
          - dc1.2x8.40
          - dc1.4x16
          - dc1.8x16
        description: |
          Must be a valid DataCentred Compute Cloud flavour
  image_id:
    type: string
    label: CoreOS image ID
    default: 9d8d2945-e699-469a-ac24-c63885f621ca
  public_net_id:
    type: string
    label: Public network ID
    description: ID of the public network to use
  discovery_url:
    type: string
    label: Cluster discovery URL such as one generated from https://discovery.etcd.io/new
  name:
    type: string
    description: Name of each CoreOS machine booted
    default: CoreOS-stable

resources:
  security_group:
    type: OS::Neutron::SecurityGroup
    properties:
      description: Security Group
      name: core-security-group
      rules:
        - remote_ip_prefix: 0.0.0.0/0
          protocol: tcp
          port_range_min: 0
          port_range_max: 65535
        - remote_ip_prefix: 0.0.0.0/0
          protocol: udp
          port_range_min: 0
          port_range_max: 65535
        - remote_ip_prefix: 0.0.0.0/0
          protocol: icmp

  private_net:
    type: OS::Neutron::Net
    properties:
      admin_state_up: true
      name: core-net

  private_subnet:
    type: OS::Neutron::Subnet
    properties:
      name: core-subnet
      cidr: 192.168.10.0/24
      gateway_ip: 192.168.10.1
      allocation_pools:
        - start: 192.168.10.20
          end: 192.168.10.99
      dns_nameservers: [8.8.8.8, 8.8.4.4]
      enable_dhcp: true
      network_id: { get_resource: private_net }

  router:
    type: OS::Neutron::Router
    properties:
      name: core-router
      admin_state_up: true

  router_gw:
    type: OS::Neutron::RouterGateway
    properties:
      network_id: { get_param: public_net_id }
      router_id: { get_resource: router }

  router_interface:
    type: OS::Neutron::RouterInterface
    properties:
      router_id: { get_resource: router }
      subnet_id: { get_resource: private_subnet }

  machines:
    type: "OS::Heat::ResourceGroup"
    depends_on: private_net
    properties:
      count: { get_param: count }
      resource_def:
        type: OS::Nova::Server
        properties:
          key_name: { get_param: key_name }
          image: { get_param: image_id }
          networks:
            - network: { get_resource: private_net }
          flavor: { get_param: flavor }
          name:
            str_replace:
              template: $name-$index
              params:
                $name: { get_param: name }
                $index: "%index%"
          user_data_format: RAW
          user_data:
            str_replace:
              template: |
                #cloud-config
                coreos:
                  etcd:
                    discovery: $discovery_url$
                    # multi-region and multi-cloud deployments need to use $public_ipv4
                    addr: $private_ipv4:4001
                    peer-addr: $private_ipv4:7001
                  units:
                    - name: etcd.service
                      command: start
                    - name: fleet.service
                      command: start
              params:
                $discovery_url$: { get_param: discovery_url }

outputs:
  key_pair:
    description: SSH key-pair for this cluster
    value: { get_param: key_name }

Most of it is pretty standard; the bits that I think are worth pointing out are:

  • The flavor parameter, where I list a few allowed values. These are specific to DataCentred’s OpenStack installation and will need changing if you’re deploying this elsewhere;
  • The default image_id, which is also specific to DataCentred but can be overridden either in the template or as a parameter when you create the stack;
  • The machines resource, which defines an OS::Heat::ResourceGroup with a count passed in as a parameter;
  • The name property of each OS::Nova::Server, which saves confusion by suffixing the (parameterised) name with the resource group’s %index% value;
  • The user_data section, which does just about enough to start etcd and fleet and gets our instances talking to one another.
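
Before creating anything, it’s worth checking that the template actually parses; the heat CLI can do a dry-run validation for you. Assuming you’ve saved the template above as coreos-heat.yaml (the same filename used in the launch command below), something like this should echo the parsed description and parameters back at you:

$ heat template-validate -f coreos-heat.yaml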

To launch this stack using the heat CLI, run the following command:

$ heat stack-create -f coreos-heat.yaml \
    -P key_name=deadline \
    -P count=3 \
    -P public_net_id=6751cb30-0aef-4d7e-94c3-ee2a09e705eb \
    -P discovery_url=$(curl -sw "\n" 'https://discovery.etcd.io/new?size=3') \
    -P name=webserver \
    coreos

In that example, webserver is the prefix that each instance’s name will use, and the last argument, coreos, is the name of the stack itself. And yes, passing in count=3 is a bit redundant as it’s the default in the template, but for illustration’s sake I think it helps here ;) The discovery_url is passed in as a parameter; in my lab I’ve been using the etcd project’s hosted discovery service, but you’re free to run your own instead of course.
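
While Heat is doing its thing, the usual commands will show you how the stack is getting on. Assuming the stack name coreos from the example above, something along these lines works:

$ heat stack-list                # overall status of your stacks
$ heat event-list coreos         # resource events as they happen
$ heat resource-list coreos      # state of each resource in the stack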

Kick the stack-create command off, give it a few minutes, and eventually you should have a successfully deployed stack. Log in to one of the instances and you’ll be able to verify the state of the cluster:

nick@deadline:~> ssh core@185.43.218.192
Last login: Sat Apr 18 18:06:22 2015 from 86.143.53.8
CoreOS stable (607.0.0)
core@webserver-0 ~ $ fleetctl list-machines
MACHINE		IP		METADATA
1fc4fbb3...	192.168.10.24	-
59c16e8a...	192.168.10.23	-
882768f6...	192.168.10.22	-

At this point you can define some units and launch containers in your cluster via fleet; the CoreOS project’s website has you covered with a good introduction to get you started.
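
As a rough sketch of what that looks like in practice (the unit name and busybox container here are just made-up examples, not anything from the stack above), you could write a trivial unit on one of the cluster members and hand it to fleet:

$ cat > hello.service <<'EOF'
[Unit]
Description=Hello World
After=docker.service
Requires=docker.service

[Service]
# Run a throwaway container that just prints to the journal
ExecStart=/usr/bin/docker run --rm --name hello busybox /bin/sh -c "while true; do echo Hello; sleep 5; done"
ExecStop=/usr/bin/docker stop hello
EOF
$ fleetctl submit hello.service
$ fleetctl start hello.service
$ fleetctl list-units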