Commit 374f3175 authored by Maude Le Jeune

starting sphinx doc

parent 2c9e893e
Browsing Pipes
~~~~~~~~~~~~~~
The pipelet webserver and ACL
*****************************
The pipelet webserver allows the browsing of multiple pipelines.
Each pipeline has to be registered using ::

    pipeweb track <shortname> sqlfile
To remove a pipeline from the tracked list, use ::

    pipeweb untrack <shortname>
As pipeline browsing implies parsing the disk, some basic security
has to be set up as well. All users have to be registered with a specific
access level (1 for read-only access, and 2 for write access). ::

    pipeutils -a <username> -l 2 sqlfile
To remove a user from the user list ::

    pipeutils -d <username> sqlfile
Start the web server using ::

    pipeweb start
The web application will then be available at http://localhost:8080
To stop the web server ::

    pipeweb stop

The web application
+++++++++++++++++++
In order to ease the comparison of different processing runs, the web
interface displays various views of the pipeline data:
The index page
++++++++++++++
The index page displays a tree view of all pipeline instances. Each
segment may be expanded or collapsed via the +/- buttons.
The parameters used in each segment are summarized and displayed along
with the date of execution and the number of related tasks, ordered by
status.
A check-box allows operations to be performed on multiple segments:

- deletion: to clean unwanted data
- tag: to tag remarkable data
The filter panel allows the displayed segment instances to be filtered
with respect to two criteria:

- tag
- date of execution
The code page
+++++++++++++
Each segment name is a link to its code page. From this page the user
can view the code of all Python scripts which have been applied to the data.
The tree view is reduced to the current segment and its related
parents.
The root path corresponding to the data storage is also displayed.
The product page
++++++++++++++++
The number of related tasks, ordered by status, is a link to the product
pages, where the data can be displayed directly (for images or text
files) or downloaded.
From this page it is also possible to delete a specific product and
its dependencies.
The log page
++++++++++++
The log page can be accessed via the log button of the filter panel.
Logs are ordered by date.
# -*- coding: utf-8 -*-
#
# Pipelet documentation build configuration file, created by
# sphinx-quickstart on Mon Sep 23 11:54:20 2013.
#
# This file is execfile()d with the current directory set to its containing dir.
#
# Note that not all possible configuration values are present in this
# autogenerated file.
#
# All configuration values have a default; values that are commented out
# serve to show the default.
import sys, os
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
# -- General configuration -----------------------------------------------------
# If your documentation needs a minimal Sphinx version, state it here.
#needs_sphinx = '1.0'
# Add any Sphinx extension module names here, as strings. They can be extensions
# coming with Sphinx (named 'sphinx.ext.*') or your custom ones.
extensions = ['sphinx.ext.autodoc', 'sphinx.ext.doctest', 'sphinx.ext.todo',
              'sphinx.ext.coverage', 'sphinx.ext.pngmath', 'sphinx.ext.viewcode']
# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']
# The suffix of source filenames.
source_suffix = '.rst'
# The encoding of source files.
#source_encoding = 'utf-8-sig'
# The master toctree document.
master_doc = 'index'
# General information about the project.
project = u'Pipelet'
copyright = u'2013, M. Betoule, M. Le Jeune'
# The version info for the project you're documenting, acts as replacement for
# |version| and |release|, also used in various other places throughout the
# built documents.
#
# The short X.Y version.
version = '1.1'
# The full version, including alpha/beta/rc tags.
release = '1.1'
# The language for content autogenerated by Sphinx. Refer to documentation
# for a list of supported languages.
#language = None
# There are two options for replacing |today|: either, you set today to some
# non-false value, then it is used:
#today = ''
# Else, today_fmt is used as the format for a strftime call.
#today_fmt = '%B %d, %Y'
# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
exclude_patterns = ['_build']
# The reST default role (used for this markup: `text`) to use for all documents.
#default_role = None
# If true, '()' will be appended to :func: etc. cross-reference text.
#add_function_parentheses = True
# If true, the current module name will be prepended to all description
# unit titles (such as .. function::).
#add_module_names = True
# If true, sectionauthor and moduleauthor directives will be shown in the
# output. They are ignored by default.
#show_authors = False
# The name of the Pygments (syntax highlighting) style to use.
pygments_style = 'sphinx'
# A list of ignored prefixes for module index sorting.
#modindex_common_prefix = []
# -- Options for HTML output ---------------------------------------------------
# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
html_theme = 'default'
# Theme options are theme-specific and customize the look and feel of a theme
# further. For a list of options available for each theme, see the
# documentation.
#html_theme_options = {}
# Add any paths that contain custom themes here, relative to this directory.
#html_theme_path = []
# The name for this set of Sphinx documents. If None, it defaults to
# "<project> v<release> documentation".
#html_title = None
# A shorter title for the navigation bar. Default is the same as html_title.
#html_short_title = None
# The name of an image file (relative to this directory) to place at the top
# of the sidebar.
#html_logo = None
# The name of an image file (within the static path) to use as favicon of the
# docs. This file should be a Windows icon file (.ico) being 16x16 or 32x32
# pixels large.
#html_favicon = None
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
html_static_path = ['_static']
# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
# using the given strftime format.
#html_last_updated_fmt = '%b %d, %Y'
# If true, SmartyPants will be used to convert quotes and dashes to
# typographically correct entities.
#html_use_smartypants = True
# Custom sidebar templates, maps document names to template names.
#html_sidebars = {}
# Additional templates that should be rendered to pages, maps page names to
# template names.
#html_additional_pages = {}
# If false, no module index is generated.
#html_domain_indices = True
# If false, no index is generated.
#html_use_index = True
# If true, the index is split into individual pages for each letter.
#html_split_index = False
# If true, links to the reST sources are added to the pages.
#html_show_sourcelink = True
# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
#html_show_sphinx = True
# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
#html_show_copyright = True
# If true, an OpenSearch description file will be output, and all pages will
# contain a <link> tag referring to it. The value of this option must be the
# base URL from which the finished HTML is served.
#html_use_opensearch = ''
# This is the file name suffix for HTML files (e.g. ".xhtml").
#html_file_suffix = None
# Output file base name for HTML help builder.
htmlhelp_basename = 'Pipeletdoc'
# -- Options for LaTeX output --------------------------------------------------
latex_elements = {
# The paper size ('letterpaper' or 'a4paper').
#'papersize': 'letterpaper',
# The font size ('10pt', '11pt' or '12pt').
#'pointsize': '10pt',
# Additional stuff for the LaTeX preamble.
#'preamble': '',
}
# Grouping the document tree into LaTeX files. List of tuples
# (source start file, target name, title, author, documentclass [howto/manual]).
latex_documents = [
  ('index', 'Pipelet.tex', u'Pipelet Documentation',
   u'M. Betoule, M. Le Jeune', 'manual'),
]
# The name of an image file (relative to this directory) to place at the top of
# the title page.
#latex_logo = None
# For "manual" documents, if this is true, then toplevel headings are parts,
# not chapters.
#latex_use_parts = False
# If true, show page references after internal links.
#latex_show_pagerefs = False
# If true, show URL addresses after external links.
#latex_show_urls = False
# Documents to append as an appendix to all manuals.
#latex_appendices = []
# If false, no module index is generated.
#latex_domain_indices = True
# -- Options for manual page output --------------------------------------------
# One entry per manual page. List of tuples
# (source start file, name, description, authors, manual section).
man_pages = [
    ('index', 'pipelet', u'Pipelet Documentation',
     [u'M. Betoule, M. Le Jeune'], 1)
]
# If true, show URL addresses after external links.
#man_show_urls = False
# -- Options for Texinfo output ------------------------------------------------
# Grouping the document tree into Texinfo files. List of tuples
# (source start file, target name, title, author,
# dir menu entry, description, category)
texinfo_documents = [
  ('index', 'Pipelet', u'Pipelet Documentation',
   u'M. Betoule, M. Le Jeune', 'Pipelet', 'One line description of project.',
   'Miscellaneous'),
]
# Documents to append as an appendix to all manuals.
#texinfo_appendices = []
# If false, no module index is generated.
#texinfo_domain_indices = True
# How to display URL addresses: 'footnote', 'no', or 'inline'.
#texinfo_show_urls = 'footnote'
.. _example:

The example pipelines
~~~~~~~~~~~~~~~~~~~~~
fft
***
Highlights
++++++++++
This example illustrates a very simple image processing use case.
The problem is the following: one wants to apply a Gaussian
filter in the Fourier domain to several 2D images.
The pipe scheme is ::

    pipedot = """
    mkgauss->convol;
    fftimg->convol;
    """
where segment ``mkgauss`` computes the Gaussian filter, ``fftimg`` computes the
Fourier transforms of the input images, and ``convol`` performs the
filtering in the Fourier domain and the inverse transform of the filtered
images. ::

    import os.path as op           # standard library alias used below
    from pipelet import pipeline   # assumed import path for the Pipeline class

    P = pipeline.Pipeline(pipedot, code_dir=op.abspath('./'), prefix=op.abspath('./'))
    P.to_dot('pipeline.dot')

The pipe scheme is output as a .dot file, which can be converted to an
image file with the command line ::

    dot -Tpng -o pipeline.png pipeline.dot
To apply this filter to several images (in our case 4 input images),
the pipe data parallelism is used.
From the main script, a 4-element list is pushed to the ``fftimg``
segment. ::

    P.push(fftimg=[1,2,3,4])
At execution, 4 instances of the ``fftimg`` segment will be
created, and each of them outputs one element of this list ::

    img = get_input()  # (fftimg.py - line 15)
    set_output(img)    # (fftimg.py - line 38)
On the other side, a single instance of the ``mkgauss`` segment will be
executed, as there is only one filter to apply.
The last segment, ``convol``, which depends on the two others, will be
executed with a number of instances given by the Cartesian product of
its inputs (4x1, i.e. 4 instances).
The instance identifier, which is set by the ``fftimg`` output, can be
retrieved with the following instruction ::

    img = get_input('fftimg')  # (convol.py - line 12)
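For illustration, the body of the ``convol`` segment might then look like
the following sketch. Only the ``get_input``/``set_output`` calls are taken
from the snippets above; the use of ``numpy`` and the exact filtering line
are assumptions about how the data are represented ::

    import numpy as np  # assumption: images are handled as numpy arrays

    gauss = get_input('mkgauss')  # the Gaussian filter computed by mkgauss
    img = get_input('fftimg')     # one Fourier-transformed input image
    # multiply in the Fourier domain, then inverse transform
    filtered = np.fft.ifft2(img * gauss).real
    set_output(filtered)
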
Running the pipe
++++++++++++++++
Follow the same procedure as for the first example pipeline to run
this pipe and browse the result.
cmb
***
Running the pipe
++++++++++++++++
This CMB pipeline depends on two external Python modules:

+ healpy: http://code.google.com/p/healpy/
+ spherelib: http://gitorious.org/spherelib
Problematic
+++++++++++
This example illustrates a very simple CMB data processing use case.
The problem is the following: one wants to characterize the
inverse noise weighting spectral estimator (as applied to the WMAP 1yr
data). A first demo pipeline is built to check that the algorithm
has been implemented correctly. Then, Monte Carlo simulations are used
to compute error bar estimates.
A design pipeline
+++++++++++++++++
The design pipe scheme is ::

    pipe_dot = """
    cmb->clcmb->clplot;
    noise->clcmb;
    noise->clnoise->clplot;
    """
where:

+ ``cmb``: generates a CMB map from the LCDM power spectrum.
+ ``noise``: computes the mode coupling matrix from the input hit-count map.
+ ``clnoise``: computes the empirical noise power spectrum from a noise
  realization.
+ ``clcmb``: generates two noise realizations, adds them to the CMB map, and
  computes a first cross-spectrum estimator. The weighting mask and mode
  coupling matrix are then applied to get the inverse noise weighting
  estimator.
+ ``clplot``: makes a plot comparing the pure cross-spectrum and inverse
  noise weighting estimators.
As the first two orphan segments depend on a single shared parameter,
the map resolution ``nside``, this argument is pushed from the
main script.
Another input argument of the ``cmb`` segment is its simulation identifier,
which will be used later for the Monte Carlo. In order to push two
inputs to a single segment instance, we use the Python tuple data type. ::

    P.push(cmb=[(nside, 1)])
    P.push(noise=[nside])
From within the segments, those inputs are retrieved with ::

    (nside, sim_id) = get_input()  ## (cmb.py line 14)
    nside = seg_input()            ## (noise.py line 15)
The last segment produces a plot in which we compare:

+ the input LCDM power spectrum,
+ the binned cross spectrum of the noisy CMB maps,
+ the binned cross spectrum to which the hit-count weighting and mode
  coupling matrix have been applied,
+ the noise power spectrum computed by the ``clnoise`` segment.

In this plot we check that both estimators are correct, and that the
noise level is the expected one.
From the design pipeline to Monte Carlo
+++++++++++++++++++++++++++++++++++++++
As a second step, Monte Carlo simulations are used to compute error
bars.
The ``clnoise`` segment is no longer needed, so that the new pipe scheme
reads ::

    pipe_dot = """
    cmb->clcmb->clplot;
    noise->clcmb;
    """
We now use the native data parallelization scheme of the pipe to build
many instances of the ``cmb`` and ``clcmb`` segments. ::

    cmbin = []
    for sim_id in [1,2,3,4,5,6]:
        cmbin.append((nside, sim_id))
    P.push(cmb=cmbin)
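Equivalently, the same list can be built with a list comprehension; this is
just a cosmetic variant of the loop above, not taken from the original
script ::

    P.push(cmb=[(nside, sim_id) for sim_id in [1, 2, 3, 4, 5, 6]])
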
.. Pipelet documentation master file, created by
   sphinx-quickstart on Mon Sep 23 11:54:20 2013.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Welcome to Pipelet's documentation!
===================================
Pipelet is a free framework allowing for the creation, execution and
browsing of scientific data processing pipelines. It provides:
+ easy chaining of interdependent elementary tasks,
+ web access to data products,
+ branch handling,
+ automated distribution of computational tasks.
Both engine and web interface are written in Python. As Pipelet is all
about chaining processing written in Python or using Python as a glue
language, prior knowledge of this language is required.
.. automodule:: pipelet
   :members:
   :undoc-members:
   :private-members:
   :special-members:

Table of contents:
.. toctree::
   :maxdepth: 1

   Tutorial <tutorial>
   Advanced usage <userguide>
   Contributing to pipelet <extra>

Indices and tables
==================
* :ref:`genindex`
* :ref:`modindex`
* :ref:`search`
Introduction
~~~~~~~~~~~~
Why use pipelines
*****************
The pipeline mechanism allows a sequence of processing steps to be
applied to some data, in such a way that the input of each process is
the output of the previous one. Making these different processing
steps visible, in the right order, is essential in data analysis to keep
track of what you did, and to make sure that the whole processing remains
consistent.
How it works
************
Pipelet is based on the possibility of saving to disk every intermediate
input or output of a pipeline, which is usually not a strong
constraint but offers a lot of benefits. It means that you can stop
the processing whenever you want, and start it again without
recomputing the whole thing: you just take the last products you have
on disk, and continue the processing where it stopped. This logic is
interesting when the computation cost is higher than the cost of the disk
space required by intermediate products.
In addition, the Pipelet engine has been designed to
process *data* *sets*. It takes advantage of the parallelization
opportunity that comes with data sharing the same structure (data
arrays) to dispatch the computational tasks on parallel architectures.
The data dependency scheme is also used to save CPU time, and makes it
possible to handle the processing of very big data sets.
The Pipelet functionalities
***************************
Pipelet is a free framework which helps you:
+ to write and manipulate pipelines with any dependency scheme,
+ to keep track of what processing has been applied to your data and perform comparisons,
+ to carry pipelines source code from development to production and adapt to different hardware and software architectures.
What's new in v1.1
******************
+ Speed improvement during execution and navigation, to handle pipelines of 100 thousand tasks.
+ Task repository versioning, to manage the ``group_by`` directive which uses different parent task lists.
+ New ``glob_seg`` type utility to search data files from the parent task only, plus improvement of the I/O and parameter utilities. See the :ref:`The segment environment<my-reference-label>` section.
+ Improvement of external dependency management: the ``depend`` directive induces a copy of the external dependencies, and the version numbers (together with the RCS revision, if it exists) of the imported modules are output.
+ Pickle file rendering available from the web interface.
.. _run:
Running Pipes
~~~~~~~~~~~~~
The sample main file
********************
A sample main file is made available when creating a new Pipelet
framework. It is copied from the reference file ::

    pipelet/pipelet/static/main.py
This script illustrates various ways of running pipes. It describes
the different parameters, and also how to write a
main Python script that can be used as any binary from the command line
(including option parsing).
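As an illustration, here is a minimal sketch of such a main script. The
``Pipeline``, ``push`` and ``launch_process`` calls are taken from snippets
shown elsewhere in this documentation; the import paths and the meaning of
the second ``launch_process`` argument are assumptions and may need to be
adapted ::

    import os.path as op
    import logging

    # assumed import paths
    from pipelet import pipeline
    from pipelet.launchers import launch_process

    pipedot = """
    mkgauss->convol;
    fftimg->convol;
    """

    P = pipeline.Pipeline(pipedot, code_dir=op.abspath('./'),
                          prefix=op.abspath('./'))
    P.push(fftimg=[1, 2, 3, 4])

    # N=4 is assumed to be the number of worker processes;
    # DEBUG messages are printed on the standard output
    launch_process(P, 4, log_level=logging.DEBUG)
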
Common options
**************
Some options are common to all running modes.
log level
+++++++++
The logging system is handled by the Python ``logging`` facility module.
This module defines the following log levels:
+ ``DEBUG``
+ ``INFO``
+ ``WARNING``
+ ``ERROR``
+ ``CRITICAL``
All logging messages are saved in the different Pipelet log files,
available from the web interface (rotating file logging). It is also
possible to print those messages on the standard output (stream
logging), by setting the desired log level in the launcher options.
For example ::

    import logging
    launch_process(P, N, log_level=logging.DEBUG)
If set to 0, stream logging will be disabled.
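For instance, to run without any message printed on the standard output
(``P`` and ``N`` being the pipeline and the number of workers, as above) ::

    launch_process(P, N, log_level=0)
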
matplotlib
++++++++++
.. note:: The matplotlib documentation says: "Many users report initial problems trying to use matplotlib in web application servers, because by default matplotlib ships configured to work with a graphical user interface which may require an X11 connection. Since many barebones application servers do not have X11 enabled, you may get errors if you don’t configure matplotlib for use in these environments. Most importantly, you need to decide what kinds of images you want to generate (PNG, PDF, SVG) and configure the appropriate default backend. For 99% of users, this will be the Agg backend, which uses the C++ antigrain rendering engine to make nice PNGs. The Agg backend is also configured to recognize requests to generate other output formats (PDF, PS, EPS, SVG)."
The easiest way to configure matplotlib to use Agg is to call ::

    matplotlib.use('Agg')
The ``matplotlib`` and ``matplotlib_interactive`` options turn the
matplotlib backend to Agg in order to allow execution in a
non-interactive environment. The two options affect independently the
non-interactive execution mode and the interactive mode.
Those two options are set to ``True`` by default in the sample main
script. Setting them to ``False`` deactivates this behavior.
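For example, a non-interactive run keeping the Agg backend enabled might
look like the following; that ``matplotlib`` is accepted as a keyword
option of the launcher is an assumption based on the sample main script,
which should be checked for the exact spelling ::

    launch_process(P, N, matplotlib=True)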