# Configuration of an XCache standalone server in direct mode
This document explains the configuration of an XCache server in direct mode.
## In case of a VM (CC-IN2P3 specific)
- Ask for a CNAME (alias) to be added for the machine (here xcache-escape.in2p3.fr)
## Installation and basic config
First we need to install and configure the core components of XRootD and XCache.
- Installation: `yum install epel-release && yum install xrootd-server`
- Configure the cache directory
```
mkdir /mnt/xcache
chown -R xrootd:xrootd /mnt/xcache
```
- CC-IN2P3 VM Specific
```
sudo mount /dev/vdb /mnt/xcache/
```
- Create a file `/etc/xrootd/xrootd-xcache.cfg` with the following content:
```
all.export /
ofs.osslib libXrdPss.so
pss.origin <origin server>
pss.cachelib libXrdFileCache.so
oss.localroot /mnt/xcache
```
- Explanation:
- `all.export`([doc](https://xrootd.slac.stanford.edu/doc/dev48/xrd_config.htm#_Toc496911329)): specifies a valid path prefix for file requests. For example, use `all.export /escape` if you only want to accept requests for files and directories starting with `/escape`.
- `ofs.osslib`([doc](https://xrootd.slac.stanford.edu/doc/dev49/ofs_config.htm#_Toc522916534)): specifies the location of the storage system interface layer. In this case it is the pss component, which brings proxy capabilities to XRootD.
- `pss.origin`([doc](https://xrootd.slac.stanford.edu/doc/dev410/pss_config.htm#_Toc8936779)): defines the origin server of the data.
- `pss.cachelib`([doc](https://xrootd.slac.stanford.edu/doc/dev410/pss_config.htm#_Toc8936781)): loads the library for the file caching proxy.
- `oss.localroot`([doc](https://xrootd.slac.stanford.edu/doc/dev49/ofs_config.htm#_Toc522916545)): defines the local path to the cache.
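Once the service is running (see the next section), you can do a first sanity check by pulling a file through the cache with the standard XRootD client (`xrdcp`, from the xrootd-client package); the host and file path below are placeholders for illustration:
```
# the first request fills the cache from the origin, later requests are served from /mnt/xcache
xrdcp root://xcache-escape.in2p3.fr//escape/some/file.root /tmp/file.root
```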
### Service config
- Create a systemd unit file `/usr/lib/systemd/system/xrootd@xcache.service` to run XCache as a service.
```
[Unit]
Description=XRootD xrootd daemon instance %I
Documentation=man:xrootd(8)
Documentation=http://xrootd.org/docs.html
Requires=network-online.target
After=network-online.target
[Service]
ExecStart=/usr/bin/xrootd -d -l /var/log/xrootd/xrootd.log -c /etc/xrootd/xrootd-%i.cfg -k fifo -s /var/run/xrootd/xrootd-%i.pid -n %i
User=xrootd
Group=xrootd
Type=simple
Restart=on-abort
RestartSec=0
KillMode=control-group
LimitNOFILE=65536
WorkingDirectory=/var/spool/xrootd
[Install]
RequiredBy=multi-user.target
```
- enable the service (start at boot time): `systemctl enable xrootd@xcache.service`
- start the service: `systemctl start xrootd@xcache.service`
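- Check that the instance came up correctly by inspecting the unit and the log file (with `-n %i`, xrootd places the log under an instance subdirectory):
```
systemctl status xrootd@xcache.service
tail -f /var/log/xrootd/xcache/xrootd.log
```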
Now you have a basic XCache, but there are multiple issues with it:
- XCache might not be able to connect to the origin server, because it needs to authenticate.
- There is no authN/Z for clients connecting to the cache.
- HTTP is not enabled for clients.
## Basic http config
To enable HTTP, you just need to add the following directive to the config file:
```
xrd.protocol http libXrdHttp.so
```
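With HTTP enabled this way, the data also becomes reachable with plain HTTP tools on the regular xrootd port (1094 by default); the file path below is a placeholder:
```
curl http://xcache-escape.in2p3.fr:1094//escape/some/file.root -o /tmp/file.root
```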
## AuthN/Z
There is a quick summary of how authentication works on a grid/data lake in [this document](Authentication-summary.md).
### Management of grid certificates
- Installation of the grid CA certificates and of the program that refreshes the CRLs
```
wget http://repository.egi.eu/sw/production/cas/1/current/repo-files/EGI-trustanchors.repo -O /etc/yum.repos.d/EGI-trustanchors.repo
yum install ca-policy-lcg fetch-crl
```
- No need to put fetch-crl in a cron job, as this is automatically done at installation (`/etc/cron.d/fetch-crl`)
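- If you want the CRLs to be present immediately instead of waiting for the first cron run, trigger a refresh by hand:
```
fetch-crl
```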
### Installation of vo files
```
mkdir /etc/vomses/
wget https://indigo-iam.github.io/escape-docs/voms-config/voms-escape.cloud.cnaf.infn.it.vomses -O /etc/vomses/voms-escape.cloud.cnaf.infn.it.vomses
mkdir -p /etc/grid-security/vomsdir/escape/
wget https://indigo-iam.github.io/escape-docs/voms-config/voms-escape.cloud.cnaf.infn.it.lsc -O /etc/grid-security/vomsdir/escape/voms-escape.cloud.cnaf.infn.it.lsc
```
### Request a certificate for the server
<!--- Should be for the server hostname, not an alias-->
- [Documentation (France Grid Specific)](https://services.renater.fr/ssi/grid-fr/vos_certificats/demande/certificat_service)
- [Script](https://github.com/GRIF-IRFU/grid-fr-scripting)(not used yet)
- Place the certificate in the right place and make it owned by the xrootd user
```
# make sure the target directory exists
mkdir -p /etc/grid-security/xrd
mv server.key /etc/grid-security/xrd/xrdkey.pem
mv server.pem /etc/grid-security/xrd/xrdcert.pem
chown xrootd:xrootd /etc/grid-security/xrd/*
chmod 640 /etc/grid-security/xrd/xrdcert.pem
chmod 400 /etc/grid-security/xrd/xrdkey.pem
```
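- A quick way to verify that the installed certificate and key belong together (for an RSA key; the two digests must be identical):
```
openssl x509 -noout -modulus -in /etc/grid-security/xrd/xrdcert.pem | openssl md5
openssl rsa -noout -modulus -in /etc/grid-security/xrd/xrdkey.pem | openssl md5
```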
### Via certificates
#### Client authentication to XCache
- For the xroot protocol:
You have to do this part even if you only want the HTTP protocol, as authentication is protocol dependent and HTTP depends on xroot. A client-side test covering both protocols is shown after the HTTP part below.
- Installation of the XRootD VO info extraction plugin
```
yum install xrootd-devel voms-devel
git clone https://github.com/gganis/vomsxrd.git
cd vomsxrd
mkdir build && cd build
cmake ../ -DCMAKE_INSTALL_PREFIX=/usr
make install
```
- Enable it by adding the following content to the `/etc/xrootd/xrootd-xcache.cfg` file:
```
xrootd.seclib libXrdSec.so
sec.protocol gsi -cert:/etc/grid-security/xrd/xrdcert.pem \
-key:/etc/grid-security/xrd/xrdkey.pem \
-gridmap:/dev/null \
-vomsfun:/usr/lib64/libXrdSecgsiVOMS-4.so \
-vomsfunparms:dbg
sec.protbind * gsi
```
- Explanation:
- `xrootd.seclib`([doc](https://xrootd.slac.stanford.edu/doc/dev48/xrd_config.htm#_Toc496911330)): enables strong authentication by loading the security library.
- `sec.protocol`([doc](https://xrootd.slac.stanford.edu/doc/dev49/sec_config.htm#_Toc517294098)): defines the characteristics of an authentication protocol, here gsi.
- `sec.protbind`([doc](https://xrootd.slac.stanford.edu/doc/dev49/sec_config.htm#_Toc517294096)): binds a set of authentication protocols to one or more hosts.
- For the HTTP protocol
- Installation of the HTTP VO info extraction plugin
```
sudo yum install xrdhttpvoms
```
- Add these lines to the config file:
```
http.cadir /etc/grid-security/certificates
http.cert /etc/grid-security/xrd/xrdcert.pem
http.key /etc/grid-security/xrd/xrdkey.pem
http.secxtractor libXrdHttpVOMS.so
```
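- With both protocols configured, you can test client authentication from a machine holding a user certificate registered in the VO; host, port and file path below are assumptions for illustration:
```
# xroot protocol: create a VOMS proxy, then read a file through the cache
voms-proxy-init --voms escape
xrdcp root://xcache-escape.in2p3.fr//escape/some/file.root /tmp/file.root
# http protocol: the same test with curl and the user certificate
curl --cert ~/.globus/usercert.pem --key ~/.globus/userkey.pem \
  --capath /etc/grid-security/certificates \
  https://xcache-escape.in2p3.fr:1094//escape/some/file.root -o /tmp/file.root
```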
#### Client authorization to XCache
- Add this to the config file:
```
ofs.authorize 1
acc.authdb /opt/xrd/etc/Authfile
```
- Explanation:
- `ofs.authorize`([doc](https://xrootd.slac.stanford.edu/doc/dev49/ofs_config.htm#_Toc522916523)): enables the authorization component, acc.
- `acc.authdb`([doc](https://xrootd.slac.stanford.edu/doc/dev49/sec_config.htm#_Toc517294126)): specifies the location of the authorization database.
- The documentation to create the authdb file is [here](https://xrootd.slac.stanford.edu/doc/dev49/sec_config.htm#_Toc517294132)
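- As an illustration only (the linked documentation is authoritative), a minimal authdb file that gives every authenticated identity read-only access to the whole namespace would be:
```
u * / rl
```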
#### XCache authentication to the origin server
In XRootD 4, proxy delegation doesn't work in XCache. You have to register the server's certificate with the VO, with the rights needed by your users to access the data on the origin server, and then set up a cron job to obtain the proxy certificate.
- [ESCAPE IAM](https://iam-escape.cloud.cnaf.infn.it)
- `yum install voms-clients-java`. Only version 3 can be used with the IAM server.
- Add a cron job for the xrootd user: `1 11 * * * voms-proxy-init --voms escape`
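- By default voms-proxy-init writes the proxy to `/tmp/x509up_u<uid>`; you can verify it from a shell running as the xrootd user:
```
voms-proxy-info --all
```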
### Via tokens
Tokens only work with HTTP.
- Install the scitokens xrootd library
```
yum install https://repo.opensciencegrid.org/osg/3.5/osg-3.5-el7-release-latest.rpm
yum install xrootd-scitokens
```
- Create a file `/etc/xrootd/scitokens.cfg` with the following content to configure it.
```
[Issuer ESCAPE]
issuer = https://iam-escape.cloud.cnaf.infn.it/
base_path = /
map_subject = False
default_user = xrootd
```
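- Load the library from the XCache configuration by adding a line like the following to `/etc/xrootd/xrootd-xcache.cfg` (a sketch based on the xrootd-scitokens defaults; check the exact library path installed by the RPM):
```
ofs.authlib libXrdAccSciTokens.so config=/etc/xrootd/scitokens.cfg
```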
- Install the XRootD client library that allows XCache to communicate with the origin server over HTTP.
```
yum install xrdcl-http
```
- Create a file `/etc/xrootd/client.plugins.d/xrdcl-http-plugin.conf` to configure it
```
url = https://*
lib = /usr/lib64/libXrdClHttp-4.so
enable = true
```
- The issue with xrdcl-http is that it uses the CA certificates in `/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem` to do certificate signature checking. To resolve that problem, execute the following commands:
```
cp /etc/grid-security/certificates/*.pem /etc/pki/ca-trust/source/anchors/
update-ca-trust extract
```
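Once the plugin is configured and the CA bundle rebuilt, you can check that the client side resolves HTTPS URLs by copying a file directly from the origin (the URL below is a placeholder):
```
xrdcp -f https://<origin server>//escape/some/file.root /dev/null
```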
## Machine setup
- Add the storage volumes to the server
```
openstack server add volume xcache-server xcache-server-storage-1
openstack server add volume xcache-server xcache-server-storage-2
```
### From xcache-server
- Add the ephemeral OpenStack volume to the rootvg volume group; it stores the Docker image, the XCache localroot (links to the real files) and the metadata (cinfo files)
```
lvm pvcreate /dev/vdb
lvm vgextend rootvg /dev/vdb
```
### For CC-IN2P3
- Ask to add a CNAME (alias) to the machine
### From ccdevli
- Puppetize the VM (https://doc-wiki.cc.in2p3.fr/intranet:systeme:puppet:puppetize) to get Kerberos access
```
remctl ccosmonitor osadmin puppetize <vm_id> xcache default
```
### From a machine with ansible
- Get the git repository containing the ansible role
```
git clone https://gitlab.in2p3.fr/CC-Escape/xcache-config.git
```
- Create an inventory file listing the machine(s) you want to install, categorized by groups between brackets
```
[prod]
xcache.in2p3.fr
```
- Create a file called `production.yaml` which we will use to launch the installation.
```
---
- name: install and launch an XCache instance
  hosts: prod
  roles:
    - xcache
```
- The `hosts` variable should contain the group of machines from the inventory file that you want to install
- Create a `host_var/<machine_name>.yaml` file which will contain some parameters needed for the installation of the machine.
The file has to start with `---`
- In that file, to configure the disks that will store the data, you need to set the `storage_devices` variable with at least one device:
```
storage_devices:
- vdc
- vdd
```
- Certificate
Needed only if you want authentication/authorisation on your instance
- Ask for a grid certificate.
- For France: https://services.renater.fr/ssi/grid-fr/vos_certificats/demande/certificat_service
- Once you have it, convert it to p12 format
```
openssl pkcs12 -export -in certificate.crt -inkey privatekey.key -out certificate.p12
```
- Register it to the VO
- For Escape: [ESCAPE IAM Server](https://iam-escape.cloud.cnaf.infn.it)
- Use [ansible-vault](https://docs.ansible.com/ansible/latest/user_guide/vault.html) to store the certificate and key
- Create a password file outside of the git repository
- Encrypt the certificate and the private key with the ansible-vault command line:
```
cat cert.pem | ansible-vault encrypt_string --vault-password-file vault_password --stdin-name 'certificate'
cat cert.key | ansible-vault encrypt_string --vault-password-file vault_password --stdin-name 'private_key'
```
- Put the output of these two commands in the `host_var/<machine_name>.yaml` file corresponding to the machine you want to install
`<machine_name>` should be the same as in the inventory
The file should look like:
```
certificate: !vault |
    $ANSIBLE_VAULT;1.1;AES256
    30663666663633323966383565343534623931303732363738616333653161336464656436623532...
private_key: !vault |
    $ANSIBLE_VAULT;1.1;AES256
    38353534633032366365393061653533303534323630616532636339616334663562663863653964...
```
- In case of a testing instance, you can set `test_machine` to true in the `host_var/<machine_name>.yaml` file. Its only effect is to launch a web server that you can contact via curl to completely erase the cache:
```
curl -X DELETE http://<machine_name>:80/flush
```
- To launch the full installation of the machine:
```
ansible-playbook -K --vault-password-file ../../vault_password -i inventory site.yaml
```
- `-K` makes ansible prompt for the sudo password
- `--vault-password-file ../../vault_password` is used to decrypt the certificate and private key and install them on the machine