Configure openATTIC on SUSE Linux with DeepSea and Salt.
Consolidated and modified guide by [125] based on:
- http://docs.openattic.org/en/latest/install_guides/quick_start_guide.html
- http://docs.openattic.org/en/latest/install_guides/post_installation.html#post-installation-configuration
- http://docs.openattic.org/en/latest/prerequisites.html#base-operating-system-installation
Installation on openSUSE
openATTIC is available for installation on openSUSE Leap 42.3 from the openSUSE Build Service.
The software is delivered in the form of RPM packages via a dedicated zypper repository named filesystems:openATTIC:3.x.
Post-installation Operating System Configuration
After performing the base installation of your Linux distribution of choice, apply the following configuration changes:
- The system must be connected to a network and should be able to establish outgoing Internet connections, so additional software and regular OS updates can be installed.
- Make sure the output of hostname --fqdn is something that makes sense, e.g. srvopenattic01.yourdomain.com instead of localhost.localdomain. If this doesn’t fit, edit /etc/hostname and /etc/hosts to contain the correct names.
- Install and configure an NTP daemon on every host, so the clocks on all these nodes are in sync.
- HTTP access might be blocked by the default firewall configuration. Make sure to update the configuration in order to enable HTTP access to the openATTIC API/Web UI.
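For example (a sketch assuming the default SuSEfirewall2 on openSUSE Leap 42.3; adapt accordingly if your system runs firewalld instead), you can open TCP port 80 for the openATTIC Web UI by setting FW_SERVICES_EXT_TCP="80" in /etc/sysconfig/SuSEfirewall2 and then restarting the firewall:
# systemctl restart SuSEfirewall2.service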
Zypper Repository Configuration
From the command line, you can run the following commands to enable the openATTIC package repositories.
For openSUSE Leap 42.3 run the following as root:
# zypper addrepo http://download.opensuse.org/repositories/filesystems:/ceph:/luminous/openSUSE_Leap_42.3/filesystems:ceph:luminous.repo
# zypper addrepo http://download.opensuse.org/repositories/systemsmanagement:saltstack:products/openSUSE_Leap_42.3/systemsmanagement:saltstack:products.repo
# zypper addrepo http://download.opensuse.org/repositories/filesystems:openATTIC:3.x/openSUSE_Leap_42.3/filesystems:openATTIC:3.x.repo
# zypper refresh
Package Installation
To install the openATTIC base packages on SUSE Linux, run the following command:
# zypper install openattic
Set up a Ceph cluster with DeepSea
How to set up a Ceph cluster with DeepSea is well described in the upstream README and the DeepSea wiki.
In this quick walkthrough we’ll highlight the most important parts of the installation.
DeepSea uses Salt to deploy, set up and manage the cluster. Therefore we have to define one of our nodes as the “master” (management) node.
Note
DeepSea currently only supports Salt 2016.11.04, while openSUSE Leap ships with a newer version (2017.7.2) by default. We therefore need to add a dedicated package repository that provides the older version and make sure that the package management system does not update it to a newer version by accident.
- Log into the “master” node and run the following commands to install the pinned Salt version and DeepSea, and to start the salt-master service:
# zypper install salt-2016.11.04
# zypper install deepsea
# systemctl enable salt-master.service
# systemctl start salt-master.service
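The Note above asks that the package manager not upgrade Salt past 2016.11.04; the minion step below achieves this with a zypper package lock. If you want the same protection on the master (an extra step not spelled out in the original instructions), the identical lock applies:
# zypper al 'salt*'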
- Next, install and configure the salt-minion service on all your nodes (including the “master” node) with the following commands:
# zypper addrepo http://download.opensuse.org/repositories/systemsmanagement:saltstack:products/openSUSE_Leap_42.3/systemsmanagement:saltstack:products.repo
# zypper refresh
# zypper install salt-minion-2016.11.04
# zypper al 'salt*'
Configure all minions to connect to the master. If your Salt master is not reachable by the host name “salt”, edit the file /etc/salt/minion or create a new file /etc/salt/minion.d/master.conf with the following content:
master: host_name_of_salt_master
After you’ve changed the Salt minion configuration as mentioned above, start the Salt service on all Salt minions:
# systemctl enable salt-minion.service
# systemctl start salt-minion.service
- Connect to your “master” node again. Check that the file /srv/pillar/ceph/master_minion.sls on the Salt master points to your Salt master, then enable and start the Salt minion service on the master node:
# systemctl enable salt-minion.service
# systemctl start salt-minion.service
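For reference, master_minion.sls only needs to name the master’s minion ID; a minimal sketch, reusing the example host name from above:
master_minion: srvopenattic01.yourdomain.com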
Now accept all Salt keys on the Salt master:
# salt-key --accept-all
Verify that the keys have been accepted:
# salt-key --list accepted
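With the keys accepted, a plain Salt round trip (standard Salt functionality, not DeepSea-specific) confirms that the master can reach every minion:
# salt '*' test.ping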
In order to avoid conflicts with other minions managed by the Salt master, DeepSea needs to know which Salt minions should be considered part of the Ceph cluster to be deployed.
This can be configured in the file /srv/pillar/ceph/deepsea_minions.sls by defining a naming pattern. By default, DeepSea targets all minions that have a grain deepsea applied to them.
This can be accomplished by running the following Salt command on the master, targeting all minions that should be part of your Ceph cluster:
# salt -L <list of minions> grains.append deepsea default
Alternatively, you can change deepsea_minions in deepsea_minions.sls to any valid Salt target definition. See man deepsea-minions for details.
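To illustrate, a deepsea_minions.sls that keeps the grain-based default would contain (the exact syntax shipped with your DeepSea version may differ):
deepsea_minions: 'G@deepsea:*'
while a catch-all targeting every minion on this master would be:
deepsea_minions: '*'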
- We can now start the Ceph cluster deployment from the “master” node:
Stage 0 – During this stage all required updates are applied and your systems may be rebooted:
# deepsea stage run ceph.stage.0
Stage 1 – The discovery stage collects all nodes and their hardware configuration in your cluster:
# deepsea stage run ceph.stage.1
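Once stage 1 has finished, you can sanity-check that discovery produced hardware proposals by listing the proposals directory (an informal check, not a required step):
# ls /srv/pillar/ceph/proposals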
Now you have to create a policy.cfg within /srv/pillar/ceph/proposals. This file describes the layout of your cluster and how it should be deployed. You can find some examples upstream as well as in the documentation included in the deepsea RPM package at /usr/share/doc/packages/deepsea/examples. For this deployment we’ve chosen the role-based policy; a sketch follows below. Please change this file according to your environment. See man 5 policy.cfg for details.
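The following role-based policy.cfg is only a sketch loosely modeled on the examples under /usr/share/doc/packages/deepsea/examples: the host name globs (salt*, mon*, data*) are assumptions, and the profile lines depend on what stage 1 discovered on your hardware.
cluster-ceph/cluster/*.sls
role-master/cluster/salt*.sls
role-admin/cluster/salt*.sls
role-mon/cluster/mon*.sls
role-mgr/cluster/mon*.sls
role-openattic/cluster/salt*.sls
profile-default/cluster/data*.sls
profile-default/stack/default/ceph/minions/data*.yml
config/stack/default/global.yml
config/stack/default/ceph/cluster.yml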
Stage 2 – The configuration stage parses the policy.cfg file and merges the included files into their final form:
# deepsea stage run ceph.stage.2
Stage 3 – The actual deployment is performed:
# deepsea stage run ceph.stage.3
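After stage 3 a basic Ceph cluster is up; checking its health with the standard status command is a reasonable smoke test before deploying services:
# ceph -s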
Stage 4 – This stage will deploy all of the defined services within the policy.cfg:
# deepsea stage run ceph.stage.4
Congratulations, you’re done! You can now reach the openATTIC Web-UI at “http://<your-master-node>.<yourdomain>”.