PhoenixFileBinaryAnalyzer
Code

https://gitlab.in2p3.fr/CTA-LAPP/PHOENIX_LIBS/PhoenixFileBinaryAnalyzer

Documentation

https://cta-lapp.pages.in2p3.fr//PHOENIX_LIBS/PhoenixFileBinaryAnalyzer/

Requirements

  • C++ compiler (tested with g++ 5 through 10 and clang 9 and 10)
  • CMake ≥ 3
  • make
  • HDF5 C++ library

Installation for Users

	$ git clone https://gitlab.in2p3.fr/CTA-LAPP/PHOENIX_LIBS/PhoenixFileBinaryAnalyzer.git
	$ cd PhoenixFileBinaryAnalyzer
	$ ./install.sh

PhoenixFileBinaryAnalyzer is then installed under $HOME/usr.

If you prefer a custom install path, you can do:

	$ git clone https://gitlab.in2p3.fr/CTA-LAPP/PHOENIX_LIBS/PhoenixFileBinaryAnalyzer.git
	$ cd PhoenixFileBinaryAnalyzer
	$ ./install.sh /your/install/path

If you prefer a custom install path with a custom build configuration, you can do:

	$ git clone https://gitlab.in2p3.fr/CTA-LAPP/PHOENIX_LIBS/PhoenixFileBinaryAnalyzer.git
	$ cd PhoenixFileBinaryAnalyzer
	$ mkdir -p build
	$ cd build
	$ cmake .. -DCMAKE_INSTALL_PREFIX=/your/install/path
	$ make -j `nproc`
	$ make install -j `nproc`

`nproc` prints the number of cores of the machine. If you want to build on a single core, just type:

	$ make
	$ make install

Update PhoenixFileBinaryAnalyzer

If you want to update the software:

	$ git clone https://gitlab.in2p3.fr/CTA-LAPP/PHOENIX_LIBS/PhoenixFileBinaryAnalyzer.git
	$ cd PhoenixFileBinaryAnalyzer
	$ ./update.sh

If you want to update the software with a custom install path:

	$ git clone https://gitlab.in2p3.fr/CTA-LAPP/PHOENIX_LIBS/PhoenixFileBinaryAnalyzer.git
	$ cd PhoenixFileBinaryAnalyzer
	$ ./update.sh /your/install/path

Basic use

phoenix_binary_analyzer -i rawMessage.h5 -f fullEvent.h5

Where:

  • rawMessage.h5 contains the raw binary data, in which the offsets will be determined
  • fullEvent.h5 contains the values (as HDF5 DataSet columns) to be searched for in the rawMessage.h5 file

Each of the two files must contain exactly one DataSet at its root (the name does not matter):

  • the DataSet of the raw file has a single column (again, the name does not matter) containing all the raw data to analyse
  • the DataSet of the full file has one column per attribute whose offset you want to find

The two DataSets must have the same number of rows. Having several rows improves the discrimination between the attributes in the columns.

Do not use too many rows: files of at least 50 MB can still be analysed in a few seconds, depending on how many conflicts and uncertainties the program has to resolve.
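The discrimination idea can be sketched in a few lines: for each attribute, keep only the byte offsets at which the attribute's binary encoding matches in every row. This is only an illustration of the principle, not the actual implementation; the struct format and sample data are made up:

```python
import struct

def candidate_offsets(raw_rows, values, fmt="<I"):
    """Byte offsets at which every row's raw blob contains the
    corresponding value encoded with struct format `fmt`."""
    size = struct.calcsize(fmt)
    last = min(len(r) for r in raw_rows) - size
    return [off for off in range(last + 1)
            if all(raw[off:off + size] == struct.pack(fmt, v)
                   for raw, v in zip(raw_rows, values))]

# Two synthetic rows with a 32-bit attribute hidden at offset 3:
rows = [b"\x00\xff\xaa" + struct.pack("<I", 41) + b"\x01",
        b"\x10\x20\x30" + struct.pack("<I", 42) + b"\x02"]
print(candidate_offsets(rows, [41, 42]))  # -> [3]
```

With a single row, several offsets could match by chance; every additional row removes accidental candidates, which is why both files must have the same number of rows.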

A typical output is:

Attribute 'configurationId' offset =	49
Attribute 'eventId' offset =	58
Attribute 'pedId' offset =	Not enough data to conclude (potential conflict with other searched Attributes) => 4278 possibilities
Attribute 'tabPixelStatus' offset =	296908
Attribute 'telEventId' offset =	67
Attribute 'triggerTimeQns' offset =	Not enough data to conclude (potential conflict with other searched Attributes) => 4301 possibilities
Attribute 'triggerTimeS' offset =	Not enough data to conclude (potential conflict with other searched Attributes) => 4301 possibilities
Attribute 'triggerType' offset =	Not enough data to conclude (potential conflict with other searched Attributes) => 4301 possibilities
Attribute 'waveform' offset =	100

The offset is given in bytes (when it has been found), and an explanation is given for columns whose offset cannot be determined from the given data because several possibilities remain.
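If the offsets are consumed by another script, the report above is easy to parse. A minimal sketch, assuming the output format shown in the example (None marks the inconclusive attributes):

```python
def parse_offsets(report):
    """Map attribute name -> offset in bytes, or None when the
    analyzer could not conclude (format as in the example above)."""
    offsets = {}
    for line in report.splitlines():
        if line.startswith("Attribute '"):
            name = line.split("'")[1]
            value = line.split("=", 1)[1].strip()
            offsets[name] = int(value) if value.isdigit() else None
    return offsets

report = ("Attribute 'eventId' offset =\t58\n"
          "Attribute 'pedId' offset =\tNot enough data to conclude "
          "(potential conflict with other searched Attributes) "
          "=> 4278 possibilities")
print(parse_offsets(report))  # -> {'eventId': 58, 'pedId': None}
```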