Commit 63a1feb5 authored by MUSSET Paul's avatar MUSSET Paul

Merge branch 'doc' into 'dev'


See merge request !23
parents 979e40ca 6de782c8
# Monitoring
- XRootD4:
  - [Python scripts]( which are [not supported]( by the XRootD devs
- XRootD5
  - [General xroot documentation](
  - Xcache specific (on version 5 only):
    - [Summary monitoring](
      - Should come in rc2, which should arrive [soon]( according to Michal
      - [Configuration xcache side](
      - [Listener](
      - [Fields explanation](
    - [Detailed monitoring]( (not sure if implemented yet; at least it seems to lack some doc)
      - [Configuration xcache side](
# Configuration of an xcache standalone server in direct mode
This document explains the configuration of an XCache server in direct mode.
## In case of a VM (cc-in2p3 specific)
- Ask to add a CNAME to the machine (here
## Installation and basic config
First we need to install and configure the core components of xroot and xcache.
- Installation: `yum install epel-release && yum install xrootd-server`
- Configure the cache directory:
```
mkdir /mnt/xcache
chown -R xrootd:xrootd /mnt/xcache
```
- CC-IN2P3 VM specific:
```
sudo mount /dev/vdb /mnt/xcache/
```
- Create a file `/etc/xrootd/xrootd-xcache.cfg` with the following content:
```
all.export /
pss.origin <origin server>
oss.localroot /mnt/xcache
```
- Explanation:
  - `all.export` ([doc]( specifies a valid path prefix for file requests; for example, export `/escape` if you only want to accept requests for files and directories starting with `/escape`.
  - `ofs.osslib` ([doc]( specifies the location of the storage system interface layer; in this case the pss component, which brings the proxy capabilities to xroot.
  - `pss.origin` ([doc]( defines the origin server of the data.
  - `pss.cachelib` ([doc]( loads the library for the file caching proxy.
  - `oss.localroot` ([doc]( defines the local path to the cache.
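Putting the directives above together gives a minimal sketch of `/etc/xrootd/xrootd-xcache.cfg`; the library paths and the origin host are assumptions that depend on your XRootD version and site:

```
# minimal XCache config sketch -- paths and hosts are placeholders
all.export /
# load the proxy storage interface and the caching library (names assumed)
pss.cachelib /usr/lib64/
# origin server the cache fetches misses from (placeholder host)
pss.origin origin.example.org:1094
# where cached data is written
oss.localroot /mnt/xcache
```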
### Service config
- Create a systemd unit file `/usr/lib/systemd/system/xrootd@xcache.service` to run XCache as a service, for example:
```
[Unit]
Description=XRootD xrootd daemon instance %I

[Service]
ExecStart=/usr/bin/xrootd -d -l /var/log/xrootd/xrootd.log -c /etc/xrootd/xrootd-%i.cfg -k fifo -s /var/run/xrootd/ -n %i

[Install]
WantedBy=multi-user.target
```
- Enable the service (start at boot time): `systemctl enable xrootd@xcache.service`
- Start the service: `systemctl start xrootd@xcache.service`
Now you have a basic XCache, but there are multiple issues with it:
- XCache might not be able to connect to the origin server because it needs to authenticate.
- There is no authN/Z for clients connecting to the cache.
- HTTP is not enabled for clients.
## Basic http config
To enable HTTP, you just need to add the following directive to the config file:
```
xrd.protocol http
```
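Depending on the XRootD build, the directive may also need an explicit port and the path of the HTTP protocol library; a hedged example (port and library path are assumptions to adapt to your installation):

```
xrd.protocol http:1094 /usr/lib64/
```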
## AuthN/Z
There is a quick summary of how authentication works on a grid/data lake in [this document](
### Management of grid certificates
- Installation of the grid CA certificates and the program that refreshes CRLs:
```
wget -O /etc/yum.repos.d/EGI-trustanchors.repo
yum install ca-policy-lcg fetch-crl
```
- No need to put fetch-crl in a cron job, as it is automatically done at installation (`/etc/cron.d/fetch-crl`).
### Installation of VO files
```
mkdir /etc/vomses/
wget -O /etc/vomses/
mkdir -p /etc/grid-security/vomsdir/escape/
wget -O /etc/grid-security/vomsdir/escape/
```
### Request a certificate for the server
<!--- Should be for the server hostname, not an alias-->
- [Documentation (France Grid Specific)](
- [Script]( used yet)
- Place the certificate in the right place and make it owned by the xrootd user:
```
mv server.key /etc/grid-security/xrd/xrdkey.pem
mv server.pem /etc/grid-security/xrd/xrdcert.pem
chown xrootd:xrootd /etc/grid-security/xrd/*
chmod 640 /etc/grid-security/xrd/xrdcert.pem
chmod 400 /etc/grid-security/xrd/xrdkey.pem
```
### Via certificates
#### Client authentication to XCache
- For the xroot protocol:
  You have to do this part even if you only want the HTTP protocol, as authentication is protocol-dependent and HTTP depends on xroot.
  - Installation of the XRootD VO info extraction plugin:
```
yum install xrootd-devel voms-devel
git clone
cd vomsxrd
mkdir build && cd build
cmake ..   # configure step assumed: the build/ directory pattern implies a CMake project
make install
```
- Enable it by adding the following content to the `/etc/xrootd/xrootd-xcache.cfg` file:
```
sec.protocol gsi -cert:/etc/grid-security/xrd/xrdcert.pem \
             -key:/etc/grid-security/xrd/xrdkey.pem \
             -gridmap:/dev/null \
             -vomsfun:/usr/lib64/
sec.protbind * gsi
```
- Explanation:
  - `xrootd.seclib` ([doc]( enables strong authentication by loading the library.
  - `sec.protocol` ([doc]( defines the characteristics of an authentication protocol, here gsi.
  - `sec.protbind` ([doc]( binds a set of authentication protocols to one or more hosts.
- For the http protocol
  - Installation of the HTTP VO info extraction plugin:
```
sudo yum install xrdhttpvoms
```
  - Add these lines to the config file:
```
http.cadir /etc/grid-security/certificates
http.cert /etc/grid-security/xrd/xrdcert.pem
http.key /etc/grid-security/xrd/xrdkey.pem
```
#### Client authorization to XCache
- Add this to the config file:
```
ofs.authorize 1
acc.authdb /opt/xrd/etc/Authfile
```
- Explanation:
  - `ofs.authorize` ([doc]( enables the authorization component, acc.
  - `acc.authdb` ([doc]( specifies the location of the authorization database.
- The documentation to create the authdb file is [here](
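As an illustration, a minimal Authfile might look like the following sketch; the VOMS group name and the granted privileges are assumptions for an ESCAPE-style setup, not a prescription:

```
# members of the escape VO group get read (r) and lookup (l) access everywhere
g /escape / rl
```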
#### XCache authentication to the origin server
In XRootD4, proxy delegation doesn't work in XCache. You have to register the server's certificate to the VO with the rights your user needs to access the data on the origin server, and then set up a cron job to obtain the proxy certificate.
- `yum install voms-clients-java`. Only version 3 can be used with the IAM server.
- Add a cron job for user xrootd: `1 11 * * * voms-proxy-init --voms escape`
### Via tokens
Tokens only work with HTTP.
- Install the scitokens xrootd library:
```
yum install
yum install xrootd-scitokens
```
- Create a file `/etc/xrootd/scitokens.cfg` with the following content to configure it:
```
[Issuer ESCAPE]
issuer =
base_path = /
map_subject = False
default_user = xrootd
```
- Install the XRootD client library to allow XCache to communicate with the origin server over HTTP:
```
yum install xrdcl-http
```
- Create a file `/etc/xrootd/client.plugins.d/xrdcl-http-plugin.conf` to configure it:
```
url = https://*
lib = /usr/lib64/
enable = true
```
- The issue with xrdcl-http is that it uses the CA certificates in `/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem` to check certificate signatures. To resolve that problem, you have to execute the following commands:
```
cp /etc/grid-security/certificates/*.pem /etc/pki/ca-trust/source/anchors/
update-ca-trust extract
```
# XCache
XCache is the file caching component of [XRootD]( XRootD (also called xroot) is a software framework and protocol used to store and move data, for example on the Worldwide LHC Computing Grid.
The goal of this document is to help understand and configure an XCache instance.
- [Introduction](
- [Configuration](
- [Installation](
- [Monitoring](
```
openstack server add volume xcache-server xcache-server-storage-1
openstack server add volume xcache-server xcache-server-storage-2
```
### For CC-IN2P3
- Ask to add a CNAME (alias) to the machine
### From xcache-server
- Add the ephemeral OpenStack volume to the rootvg volume group to store the Docker image, the XCache localroot (links to the real files) and the metadata (cinfo files):
```
lvm pvcreate /dev/vdb
lvm vgextend rootvg /dev/vdb
```
### From ccdevli
- Puppetize the machine ( to get Kerberos access:
```
remctl ccosmonitor osadmin puppetize <vm_id> xcache default
```
### From a machine with ansible
- Get the git repository containing the ansible role:
```
git clone
```
## For all kinds of machines
- Create the xcache user and group:
```
groupadd --gid 9999 xrootd
useradd --uid 9998 --gid xrootd xrootd
```
- Create partitions (LVs) and directories for Docker:
```
mkdir /var/lib/docker/
lvm lvcreate --size 20G --name docker rootvg
mkfs.ext4 /dev/mapper/rootvg-docker
cat >> /etc/fstab << EOF
/dev/mapper/rootvg-docker /var/lib/docker ext4 defaults 1 2
EOF
```
- Create directories for the XCache namespace (ns), metadata (meta) and storage, and make the xrootd user own them:
```
mkdir /mnt/xcache-ns/ /mnt/xcache-meta/ /mnt/xcache-storage
chown -R xrootd:xrootd /mnt/xcache-*
```
- Create an inventory file listing the machine(s) you want to install, categorized by group between brackets
- Create a file called `production.yaml` which we will use to launch the installation.
  - The `hosts` variable should contain the group of machines you want to install from the inventory file:
```
- name: install and launch an XCache instance
  hosts: prod
  roles:       # key assumed; the role list below is from the source
    - xcache
```
- Create a `host_var/<machine_name>.yaml` file which will contain some parameters needed for the installation of the machine. The file has to start with `---`.
  - In that file, to configure the disks that will store the data, set the `storage_devices` variable with at least one device:
```
---
storage_devices:
  - vdc
  - vdd
```
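For illustration, an inventory file defining the `prod` group used above might look like this (the hostname is a placeholder):

```
[prod]
xcache01.example.org
```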
- Mount the corresponding disks and volumes
  - Namespace:
```
lvm lvcreate --size 20G --name xcache-ns rootvg
mkdir /mnt/xcache-ns/
mkfs.ext4 /dev/mapper/rootvg-xcache--ns
cat >> /etc/fstab << EOF
/dev/mapper/rootvg-xcache--ns /mnt/xcache-ns ext4 defaults 1 2
EOF
```
  - Metadata:
```
lvm lvcreate --size 20G --name xcache-meta rootvg
mkdir /mnt/xcache-meta/
mkfs.ext4 /dev/mapper/rootvg-xcache--meta
cat >> /etc/fstab << EOF
/dev/mapper/rootvg-xcache--meta /mnt/xcache-meta ext4 defaults 1 2
EOF
```
  - XCache data disks:
```
mkdir /mnt/xcache-storage/data1
mkfs.xfs /dev/vdc
cat >> /etc/fstab << EOF
/dev/vdc /mnt/xcache-storage/data1 xfs nosuid,nodev 1 2
EOF
mount -a
chown xrootd:xrootd /mnt/xcache-storage/data1
mkdir /mnt/xcache-storage/data2
mkfs.xfs /dev/vdd
cat >> /etc/fstab << EOF
/dev/vdd /mnt/xcache-storage/data2 xfs nosuid,nodev 1 2
EOF
mount -a
chown xrootd:xrootd /mnt/xcache-storage/data2
```
- Certificate
  Needed only if you need authentication/authorisation on your instance.
  - Ask for a grid certificate.
    - For France:
  - Once you have it, transform it to a p12:
```
openssl pkcs12 -export -in certificate.crt -inkey privatekey.key -out certificate.p12
```
  - Register it to the VO
    - For ESCAPE: [ESCAPE IAM Server](
  - Put the certificate and private key as `cert.pem` and `key.pem` files in `/root` on the machine
  - Change their owner:
```
chown xrootd:xrootd /root/*.pem
```
  - And their permissions:
```
chmod 600 /root/cert.pem
chmod 400 /root/key.pem
```
<!---Add it to hiera data ( in hieradata/usages/xcache/server/<server_fqdn>.yaml-->
  - Use [ansible-vault]( to store the data
    - Create a password file outside of the git repository
    - Encrypt the certificate and the private key with the ansible-vault command line:
```
cat cert.pem | ansible-vault encrypt_string --vault-password-file vault_password --stdin-name 'certificate'
cat key.pem | ansible-vault encrypt_string --vault-password-file vault_password --stdin-name 'private_key'
```
    - Put the output of these two commands in the `host_var/<machine_name>` file corresponding to the machine you want to install (`<machine_name>` should be the same as in the inventory). The file should look like:
```
certificate: !vault |
private_key: !vault |
```
<!---### All that is done through Puppet now-->
- Install Docker and docker-compose:
```
yum-config-manager --add-repo
yum install --nogpg -y docker-ce docker-ce-cli
systemctl start docker
systemctl enable docker
curl -L "$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
```
- Get the git repository containing the configuration:
```
git clone
```
- Copy the service file, enable it and start it:
```
cp xcache-config/containers/setup/certificate/xcache.service /etc/systemd/system
systemctl enable xcache
systemctl start xcache
```
- In case of a testing instance, you can set `test_machine` to true in the `host_var/<machine_name>.yaml` file. The only effect is to launch a webserver that you can contact via curl to completely erase the cache:
```
curl -X DELETE http://<machine_name>:80/flush
```
- To launch the full installation of the machine:
```
ansible-playbook -K --vault-password-file ../../vault_password -i inventory prod.yaml
```
  - `-K` is to type the sudo password
  - `--vault-password-file ../../vault_password` is used to decrypt the certificate and private key and install them on the machine
# Monitoring
:construction: Work in progress: I am implementing this right now :construction:
- Integrated to XRootD
  - [General xroot documentation](
  - Xcache specific (on version 5 only):
    - [Summary monitoring](
      - XCache configuration: [](
      - Listener: [mpxstats](
      - [Fields explanation](
    - [Detailed monitoring](
      - XCache configuration
        - Base monitoring: [xrootd.monitor](
        - XCache specific monitoring: [xrootd.mongstream](
      - XCache stream specificity
        - [Packet Header](
        - [Monitor Map Messages](
- Probes to check different states of the server and notify the admins in case of necessity
  - Protocols
    - xroot
```
xrdfs root://myXcache query config version
```
    - http
      Not sure how to make it work without either getting an error code or having to launch a containerised xrootd server serving as an origin to get a file.
  - Disks
    - Check the filling rates of the data and metadata disks
  - Machine load
  - Memory
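The disk-filling probe above could be sketched as a small shell function; `check_usage` is a hypothetical helper name, and the mount point and threshold are site choices:

```shell
# check_usage MOUNTPOINT THRESHOLD: print ALERT or OK depending on fill level
check_usage() {
  # extract the use percentage for the filesystem holding $1 (GNU df)
  usage=$(df --output=pcent "$1" | tail -n 1 | tr -dc '0-9')
  if [ "$usage" -ge "$2" ]; then
    echo "ALERT: $1 is ${usage}% full"
  else
    echo "OK: $1 is ${usage}% full"
  fi
}

# in a real probe you would point this at the cache disks, e.g.:
# check_usage /mnt/xcache-storage/data1 90
check_usage / 90
```

A monitoring system would run this from cron or a check scheduler and alert on any line starting with `ALERT`.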
- `setup`: docker-compose files and xrootd configuration to make xcache work
- `base`: basic config of a xcache standalone server
- `certificate`: basic config to launch a xcache server with certificate authN/Z[^ct]
- `token`: basic config to launch a xcache server with token authN/Z[^ct]. For now disabled, as it is too complicated to maintain the dependency.
[^ct]: `certificate` and `token` config will soon be merged into one file