Tutorial
--------

Introduction
~~~~~~~~~~~~

Why use pipelines
*******************

The pipeline mechanism allows a sequence of processing steps to be
applied to some data, in such a way that the input of each step is the
output of the previous one. Making these different processing steps
visible, in the right order, is essential in data analysis to keep
track of what you did, and to make sure that the whole processing
remains consistent.

How it works
************

Pipelet is built on the ability to save to disk every intermediate
input or output of a pipeline. This is usually not a strong constraint,
and it offers many benefits: you can stop the processing whenever you
want and restart it later without recomputing the whole thing, simply
taking the last products available on disk and continuing the
processing where it stopped. This approach pays off whenever the
computation cost exceeds the cost of the disk space required by the
intermediate products.

In addition, the Pipelet engine has been designed to process *data
sets*. It takes advantage of the parallelisation opportunities offered
by data sharing the same structure (data arrays) to dispatch the
computational tasks on parallel architectures. The data dependency
scheme is also used to save CPU time, which makes it possible to handle
the processing of very large data sets.


The Pipelet functionalities
***************************

Pipelet is a free framework which helps you:

 + write and manipulate pipelines with any dependency scheme,
 + keep track of what processing has been applied to your data and perform comparisons,
 + carry pipeline source code from development to production and adapt it to different hardware and software architectures.


What's new in v1.1
******************


 + Speed improvements during execution and navigation, to handle pipelines of a hundred thousand tasks.
 + Task repository versioning, to manage the ``group_by`` directive when it uses different parent task lists.
 + New ``glob_seg`` type utility to search data files from the parent tasks only, and improved I/O and parameter utilities. See the :ref:`The segment environment<my-reference-label>` section.
 + Improved management of external dependencies: the ``depend`` directive induces a copy of the external dependencies, and the version number (together with the RCS revision, if it exists) of the imported modules is output.
 + Pickle file rendering available from the web interface.

Getting started
~~~~~~~~~~~~~~~

Pipelet installation 
********************

Dependencies 
++++++++++++

 + Running the *pipelet* engine requires *Python* >= 2.6.

 + The web interface of *pipelet* requires the installation of the *cherrypy3* Python module (on Debian: ``aptitude install python-cherrypy3``).

 + Although the default Python installation provides the *sqlite3* module, you may not be able to use it. In that case, you can manually install the *pysqlite2* module.


You may find it useful to install some generic scientific tools that nicely interact with *pipelet*:

 + *numpy*
 + *matplotlib*
 + *latex* 


Getting Pipelet
+++++++++++++++

.. note:: The first version of the software is currently in the process of stabilisation. The Pipelet engine has now reached the desired level of sophistication. On the other hand, the user interface has been developed in a minimalist way. It includes the main functionalities, but with a design that could and will be made more user friendly.


Getting the latest pipelet version
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
.. code:: bash

	  git clone https://gitlab.in2p3.fr/pipelet/pipelet.git

Installing Pipelet
++++++++++++++++++

.. code:: bash

	  sudo python setup.py install

Running a simple test pipeline
******************************

1. Run the test pipeline::

     cd test/first_test

     python main.py

2. Add this pipeline to the web interface::

     pipeweb track test ./.sqlstatus

3. Set up an account in the access control list and launch the web server::

     pipeutils -a username -l 2 .sqlstatus

     pipeweb start

4. You should be able to browse the results on the web page
   http://localhost:8080

Getting a new pipe framework
****************************

To get a new pipeline framework, with example main and segment scripts::

   pipeutils -c pipename

This command creates a directory named ``pipename`` which contains:

 + a main script (named *main.py*) providing functionalities to execute your pipeline in various modes (debug, parallel, batch mode, ...)

 + an example segment script (``default.py``) which illustrates the pipelet utilities with comments.

The next section describes these two files in more detail.

..  _write:

Writing Pipes
~~~~~~~~~~~~~

Pipeline architecture
*********************

The definition of a data processing pipeline consists of:

 + a succession of python scripts, called segments, each coding one step of the actual processing.

 + a main script that defines the dependency scheme between segments, and launches the processing.

The dependencies between segments must form a directed acyclic
graph. This graph is described by a char string using a subset of the
graphviz dot language (http://www.graphviz.org). For example the string::

  """
  a -> b -> d;
  c -> d;
  c -> e;
  """

defines a pipeline with 5 segments ``{"a", "b", "c", "d", "e"}``. The
relation ``a->b`` ensures that the processing of segment ``a`` will be
done before the processing of its 'child' segment ``b``. Also, the
output of ``a`` will be fed as input to ``b``. In the given example, the
node ``d`` has two parents, ``b`` and ``c``. Both will be executed before
``d``. As there is no relation between ``b`` and ``c``, which of the two
will be executed first is not defined.

When executing the segment ``seg``, the engine looks for a python script
named ``seg.py``. If it is not found, it looks iteratively for script
files named ``se.py`` and ``s.py``. This way, different segments of the
pipeline can share the same code, if they are given names with a common
root (this mechanism is useful to write generic segments, and is
completed by the hooking system, described in the advanced usage
section). The code is then executed in a specific namespace (see the
:ref:`The segment environment<my-reference-label>` section below).

The Pipeline object
*******************

Practically, the creation of a Pipeline object requires 3 arguments::

  from pipelet.pipeline import Pipeline
  P = Pipeline(pipedot, code_dir="./", prefix="./")


+ ``pipedot`` is the string description of the pipeline
+ ``code_dir`` is the path where the segment scripts can be found
+ ``prefix``  is the path to the data repository (see below :ref:`Hierarchical data storage<hier-sec>`)

It is possible to output the graphviz representation of the pipeline
(requires the installation of graphviz). First, save the graph string
into a ``.dot`` file with the pipelet function::

  P.to_dot('pipeline.dot')

Then, convert it to an image file with the dot command::

  dot -Tpng -o pipeline.png pipeline.dot

Dependencies between segments and data parallelism
**************************************************

The modification of the code of one segment will trigger its
recalculation and the recalculation of all the segments which
depend on it.

The output of a segment is a list of python objects. If a segment has
no particular output, this list can be empty and does not need to be
specified. Elements of the list are allowed to be any kind of
pickleable python object. However, a good practice is to fill the list
with the minimal set of characteristics relevant to describe the
output of the segment, and to defer the storage of the data to
appropriate structures and file formats. For example, a segment which
performs computations on large images could in principle pass the
results of its computations to the following segment using the output
list. It is better practice to store the resulting image in a
dedicated file and to pass in the list only the information allowing
an unambiguous identification of this file (like its name or part of
it).
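
For instance, a segment processing a large image could follow this
pattern (a minimal sketch relying on the segment utilities described in
:ref:`The segment environment<my-reference-label>`; ``compute_image``
stands for some hypothetical heavy computation)::

  import numpy as np

  img = compute_image()           # hypothetical heavy computation
  fn = get_data_fn('image.npy')   # file in this segment's data directory
  np.save(fn, img)                # store the heavy data on disk
  set_output(['image.npy'])       # pass only a light identifier downstream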

The input of a child segment is taken from a set built from the output
lists of its parents. The content of the input set is actually tunable
using the multiplex directive (see below). However, the simplest and
default behavior of the engine is to form the Cartesian product of
the output lists of its parents.

To illustrate this behavior, let us consider the following pipeline,
built from three segments::

  """
  knights -> melt;
  quality -> melt;
  """

and assume that the respective output lists of segments ``knights``
and ``quality`` are::

  ["Lancelot", "Galahad"]

and::

  ['the Brave', 'the Pure']

The Cartesian product of these two lists is::


 [('Lancelot','the Brave'), ('Lancelot','the Pure'), ('Galahad','the Brave'), ('Galahad','the Pure')]

Four instances of segment ``melt`` will thus be run, each one receiving
as input one of the four 2-tuples.
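
This default multiplexing is simply the Cartesian product as computed
by the standard ``itertools`` module::

  from itertools import product

  knights = ["Lancelot", "Galahad"]
  quality = ['the Brave', 'the Pure']
  print list(product(knights, quality))
  # [('Lancelot', 'the Brave'), ('Lancelot', 'the Pure'),
  #  ('Galahad', 'the Brave'), ('Galahad', 'the Pure')]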

At the end of the execution of all the instances of a segment, their
output lists are concatenated. If the action of segment ``melt`` is to
concatenate the two strings it receives, separated by a space, the
final output list of segment ``melt`` will be::

  [('Lancelot the Brave'), ('Lancelot the Pure'), ('Galahad the Brave'), ('Galahad the Pure')].

This default behavior can be altered by specifying a ``#multiplex``
directive in the commentary of the segment code. See section :ref:`Multiplex directive<multiplex-section>` for more details.

As the segment execution order is not uniquely determined by the pipe
scheme (several paths may exist), it is not possible to retrieve an
ordered input tuple. To overcome this issue, segment inputs are
dictionaries, with keywords matching the parent segment names. In the
above example, one can read the ``melt`` inputs using::

  k = seg_input["knights"]
  q = seg_input["quality"]

One can also use dedicated segment routines::

  k = get_input("knights")
  q = get_input("quality")

See section :ref:`The segment environment<my-reference-label>` for more details.

Orphan segments
***************

By default, orphan segments (segments without parents) have no input
argument (an empty list), and are therefore executed once without
input. It is however possible to feed input to an orphan segment by
pushing a list into the output set of a hypothetical ('phantom')
parent. If ``P`` is an instance of the Pipeline object, this is done by::

  P.push (segname=[1,2,3])

From the segment environment, inputs can be retrieved with the
dedicated routine::

  id = get_input()

In this scheme, it is important to uniquely identify the child tasks
of the orphan segment by setting a dedicated output::

  seg_output = id

See section :ref:`The segment environment<my-reference-label>` for more details.

.. _hier-sec:

Hierarchical data storage
*************************

The framework provides versioning of your data and easy access through
the web interface. It also keeps track of the code, the execution
logs, and various meta data of the processing. Of course, you remain
free to bypass the hierarchical storage and store your actual data
elsewhere, but you will lose the benefit of the automated versioning,
which proves to be quite convenient.

The storage is organized as follows:

- all pipeline instances are stored below a root which corresponds to
  the prefix parameter of the Pipeline object.
      ``/prefix/``
- all segment meta data are stored below a root whose name corresponds
  to a unique hash computed from the segment code and its dependencies.
      ``/prefix/segname_YFLJ65/``
- the segment meta data are:
  - a copy of the segment python script
  - a copy of all segment hook scripts
  - a parameter file (.args) which contains the segment parameter values
  - a meta data file (.meta) which contains some extra meta data
- all data and meta data of a segment instance are stored in a specific
  subdirectory whose name corresponds to a string representation of its
  input, prefixed by its identifier number
  	``/prefix/segname_YFLJ65/data/1_a/``
- if there is a single segment instance, the data are stored in
       ``/prefix/segname_YFLJ65/data/``
- if a segment has at least one parent, its root will be located below
  one of its parents' roots:
       ``/prefix/segname_YFLJ65/segname2_PLMBH9/``
- etc.
  
While the hierarchical storage makes it easy to store and handle many
different data versions, it can make manual navigation in the data
less convenient. This is where the web interface comes in (among its
other advantages, like remote access to the data, tagging...); see the
Browsing Pipes section below.

.. _my-reference-label:

The segment environment
***********************

The segment code is executed in a specific environment that provides:

1. Access to the segment input and output

   - ``get_input(seg)``: return the input coming from segment ``seg``. If
     no segment is specified, take the first one. This utility replaces
     the ``seg_input`` variable, whose type could vary as described
     below.

   - ``seg_input``: this variable is a python dictionary containing the
     input of the segment, with as many keywords as there are parent
     segments. In the case of an orphan segment, the keyword used is
     suffixed by the word ``phantom``. One exception comes from the use
     of the ``group_by`` directive, which alters the origin of the
     inputs; in this case, ``seg_input`` contains the resulting class
     elements.

   - ``set_output(o)``: set the segment output as a list. If ``o`` is not
     a list, set a list of one element ``[o]``.

   - ``seg_output``: this variable has to be a list.

2. Functionalities to use the automated hierarchical data storage system

   - ``get_data_fn(basename, seg)``: complete the filename with the path
     to the working directory of the segment (default is the current
     segment).
   - ``glob_parent(regexp, segs)``: return the list of filenames matching
     the pattern in the data directories of the direct parent tasks. It
     is possible to restrict the search to a specific segment list
     ``segs``.
   - ``glob_seg(seg, regexp)``: return the list of filenames matching the
     pattern in the data directory of parent segment ``seg`` (all task
     directories are searched, independently of whether the file comes
     from a task related to the current task). Its usage should be
     limited, as it:

     - potentially breaks the dependency scheme;
     - may hurt performance, as all task directories of segment ``seg``
       will be searched.

   - ``get_tmp_fn()``: return a temporary filename.

3. Functionalities to use the automated parameter handling

   - ``save_param(lst)``: the listed parameters will be saved in a
     dedicated file.
   - ``expose(lst)``: the listed parameters will be exposed in the web
     interface.
   - ``load_param(seg, globals(), lst)``: retrieve parameters from the
     meta data.

4. Various convenient functionalities

   - ``save_products(filename, globals(), lst_par)``: use pickle to save
     part of the current namespace.
   - ``load_products(filename, globals(), lst_par)``: update the
     namespace by unpickling the requested objects from the file.
   - ``logged_subprocess(lst_args)``: execute a subprocess and log its
     output in ``processname.log`` and ``processname.err``.
   - ``logger``: a standard ``logging.Logger`` object that can be used to
     log the processing.

5. Hooking support

   Pipelet enables you to write reusable generic segments by providing
   a hooking system via the hook function.
   ``hook(hookname, globals())``: execute the python script
   ``segname_hookname.py`` and update the namespace.
   See the :ref:`Hooking system<hook-sec>` section for more details.

See the API documentation :py:mod:`environment <environment>` for more details on these functionalities.
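
As an illustration, here is a minimal sketch of a segment script
combining several of these utilities (the parent segment name
``parent`` and the parameter ``cutoff`` are purely illustrative)::

  x = get_input('parent')            # input from the parent segment
  cutoff = 3.0                       # a parameter of this segment
  save_param(['cutoff'])             # save it to the .args file
  expose(['cutoff'])                 # expose it in the web interface

  files = glob_parent('*.txt')       # data files of the direct parent tasks
  fn = get_data_fn('filtered.txt')   # output file in this segment's directory
  logged_subprocess(['sort', files[0], '-o', fn])  # run and log a subprocess

  logger.info('produced %s' % fn)
  set_output(['filtered.txt'])       # pass a light identifier downstream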


Writing a first pipeline
************************

We are now in a position to write a complete simple pipeline. Let us
consider the knights example and write the beginning of the main file
``main.py`` describing the pipeline::


  from pipelet.pipeline import Pipeline

  pipedot = """
  knights -> melt;
  quality -> melt;
  """

  P = Pipeline(pipedot, code_dir='./', prefix='./')


Now, we create the 3 segment files ``knights.py``, ``quality.py`` and
``melt.py``. The only action we expect from segment ``knights`` is to
provide a list of knights. Its code is very simple::

  set_output(["Lancelot", "Galahad")

The same goes for segment ``quality``::

  set_output(['the Brave', 'the Pure'])


As explained, the segment ``melt`` will be executed four times. We
expect it to concatenate its inputs and write the result into a file,
so the code is::

  knight, quality = get_input('knights'), get_input('quality')
  f = open(get_data_fn('result.txt'), 'w')
  f.write(knight + ' ' + quality+'\n')
  f.close()  


We need to complete the main file so that it takes care of the
execution (see :ref:`Running pipes<run>` for more explanations)::

  from pipelet.pipeline import Pipeline
  from pipelet.launchers import launch_interactive

  pipedot = """
  knights -> melt;
  quality -> melt;
  """

  P = Pipeline(pipedot, code_dir='./', prefix='./')
  w, t = launch_interactive(P)
  w.run()


The execution of the main file will run this simple example in the
'interactive' mode provided for debugging purposes. You may add a
knight to the list to see only the required part recomputed. More
complete examples are described in the :ref:`Example pipes<example>`
section. The two remaining sections of the tutorial explain how to use
the execution modes that enable the exploitation of data parallelism
(in this case, running the four independent instances of the ``melt``
segment in parallel), and how to provide web access to the results.


..  _run:

Running Pipes
~~~~~~~~~~~~~   

The sample main file
********************

A sample main file is made available when creating a new Pipelet
framework. It is copied from the reference file::

  pipelet/pipelet/static/main.py

This script illustrates various ways of running pipes. It describes
the different parameters, and also how to write a main python script
that can be used as any binary from the command line (including option
parsing).
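
A heavily simplified sketch of such a script could look as follows (the
reference ``main.py`` is more complete; option handling is reduced here
to its simplest expression)::

  import sys
  from pipelet.pipeline import Pipeline
  from pipelet.launchers import launch_interactive, launch_process

  pipedot = """
  knights -> melt;
  quality -> melt;
  """
  P = Pipeline(pipedot, code_dir='./', prefix='./')

  if '-d' in sys.argv:   # interactive (debugging) mode
      w, t = launch_interactive(P)
      w.run()
  else:                  # process mode with 4 workers
      launch_process(P, 4)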

Common options
**************

Some options are common to all running modes.

log level
+++++++++

The logging system is handled by the python logging facility module.
This module defines the following log levels:

+ ``DEBUG``
+ ``INFO``
+ ``WARNING``
+ ``ERROR``
+ ``CRITICAL``

All logging messages are saved in the different Pipelet log files,
available from the web interface (rotating file logging). It is also
possible to print those messages on the standard output (stream
logging), by setting the desired log level in the launcher options.
For example::

  import logging
  launch_process(P, N,log_level=logging.DEBUG)

If set to 0, stream logging will be disabled.

matplotlib
++++++++++



.. note:: The matplotlib documentation says: "Many users report initial problems trying to use matplotlib in web application servers, because by default matplotlib ships configured to work with a graphical user interface which may require an X11 connection. Since many barebones application servers do not have X11 enabled, you may get errors if you don’t configure matplotlib for use in these environments. Most importantly, you need to decide what kinds of images you want to generate (PNG, PDF, SVG) and configure the appropriate default backend. For 99% of users, this will be the Agg backend, which uses the C++ antigrain rendering engine to make nice PNGs. The Agg backend is also configured to recognize requests to generate other output formats (PDF, PS, EPS, SVG). "

The easiest way to configure matplotlib to use Agg is to call::

  matplotlib.use('Agg')

The ``matplotlib`` and ``matplotlib_interactive`` options switch the
matplotlib backend to Agg in order to allow execution in
non-interactive environments. The two options independently affect the
non-interactive and the interactive execution modes.

These two options are set to ``True`` by default in the sample main
script. Setting them to ``False`` deactivates this behavior for
pipelines that make no use of matplotlib (and prevents an exception
from being raised if matplotlib is not even available).
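
For example, assuming the launcher forwards these options as keyword
arguments (an assumption made for illustration; check the reference
``main.py`` for the actual plumbing), one could write::

  from pipelet.launchers import launch_process

  # keyword names assumed to mirror the option names above
  launch_process(P, 4, matplotlib=False, matplotlib_interactive=False)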

The interactive mode
********************

This mode has been designed to ease debugging. If ``P`` is an instance of
the pipeline object, the syntax reads ::

  from pipelet.launchers import launch_interactive
  w, t = launch_interactive(P)
  w.run()

In this mode, each task is computed sequentially.
Do not hesitate to invoke the Python debugger from IPython: ``%pdb``

To use the interactive mode, run::

  main.py -d

The process mode
****************

In this mode, one can run simultaneous tasks (if the pipe scheme
allows it). The number of subprocesses is set by the ``N`` parameter ::

  from pipelet.launchers import launch_process
  launch_process(P, N)

To use the process mode, run::

  main.py

or::

  main.py -p 4

The batch mode
**************

In this mode, one can submit batch jobs to execute the tasks.
The number of jobs is set by the ``N`` parameter ::

  import os
  from pipelet.launchers import launch_pbs
  launch_pbs(P, N, address=(os.environ['HOST'], 50000))

It is possible to specify some job submission options like:

+ job name
+ job header: this string is prepended to the PBS job scripts. You may
  want to add some environment specific paths. Log and error files are
  automatically handled by the pipelet engine, and made available from
  the web interface.
+ cpu time: the syntax is "hh:mm:ss"

The ``server`` option can be disabled to add some workers to an
existing scheduler.
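
Here is a hedged sketch of a batch launch with such options (the
keyword names ``job_name``, ``job_header``, ``cpu_time`` and ``server``
are assumptions based on the list above; check the launchers API for
the exact signature)::

  import os
  from pipelet.launchers import launch_pbs

  launch_pbs(P, 4, address=(os.environ['HOST'], 50000),
             job_name='melt',
             job_header='#PBS -l nodes=1',
             cpu_time='02:00:00',
             server=False)  # add workers to an existing scheduler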

To use the batch mode, run::

  main.py -b

to start the server, and::

  main.py -a 4

to add 4 workers. 


Browsing Pipes
~~~~~~~~~~~~~~

The pipelet webserver and ACL
*****************************

The pipelet webserver allows the browsing of multiple pipelines.
Each pipeline has to be registered using ::

  pipeweb track <shortname> sqlfile

To remove a pipeline from the tracked list, use ::

  pipeweb untrack <shortname>

As pipeline browsing implies disk parsing, some basic security has to
be set up as well. All users have to be registered with a specific
access level (1 for read-only access, and 2 for write access). ::

  pipeutils -a <username> -l 2 sqlfile

To remove a user from the user list::

  pipeutils -d <username> sqlfile

Start the web server using ::

  pipeweb start

The web application will then be available at http://localhost:8080

To stop the web server ::

  pipeweb stop

The web application
+++++++++++++++++++

In order to ease the comparison of different processing runs, the web
interface displays various views of the pipeline data:

The index page 
++++++++++++++

The index page displays a tree view of all pipeline instances. Each
segment may be expanded or collapsed via the +/- buttons.

The parameters used in each segment are summarized and displayed along
with the date of execution and the number of related tasks, ordered by
status.

A check-box allows operations to be performed on multiple segments:

  - deletion: to clean unwanted data
  - tag: to tag remarkable data

The filter panel allows displaying the segment instances with respect
to two criteria:

  - tag
  - date of execution

The code page
+++++++++++++

Each segment name is a link to its code page. From this page the user
can view the code of all python scripts that have been applied to the
data.

The tree view is reduced to the current segment and its related
parents.

The root path corresponding to the data storage is also displayed.


The product page
++++++++++++++++

The number of related tasks, ordered by status, is a link to the
product pages, where the data can be directly displayed (if they are
images or text files) or downloaded.
From this page, it is also possible to delete a specific product and
its dependencies.


The log page
++++++++++++

The log page can be accessed via the log button of the filter panel.
Logs are ordered by date.

..  _example:

The example pipelines
~~~~~~~~~~~~~~~~~~~~~

fft
***

Highlights
++++++++++

This example illustrates a very simple image processing use case. The
problem is the following: one wants to apply a Gaussian filter in the
Fourier domain to several 2D images.

The pipe scheme is::

  pipedot = """
  mkgauss->convol;
  fftimg->convol;
  """

where segment ``mkgauss`` computes the Gaussian filter, ``fftimg``
computes the Fourier transforms of the input images, and ``convol``
performs the filtering in the Fourier domain and the inverse transform
of the filtered images. ::

  P = pipeline.Pipeline(pipedot, code_dir=op.abspath('./'), prefix=op.abspath('./'))
  P.to_dot('pipeline.dot')

The pipe scheme is output as a .dot file, that can be converted to an
image file with the command line::

  dot -Tpng -o pipeline.png pipeline.dot

To apply this filter to several images (in our case 4 input images),
the pipe data parallelism is used. 
From the main script, a 4-element list is pushed to the ``fftimg``
segment. ::

  P.push(fftimg=[1,2,3,4]) 

At execution, 4 instances of the ``fftimg`` segment will be
created, and each of them outputs one element of this list ::

  img = get_input() #(fftimg.py - line 15)
  set_output (img)  #(fftimg.py - line 38)

On the other side, a single instance of the ``mkgauss`` segment will be
executed, as there is only one filter to apply.

The last segment, ``convol``, which depends on the two others, will be
executed with a number of instances given by the Cartesian product of
its inputs (4x1, i.e. 4 instances).

The instance identifier, which is set by the ``fftimg`` output, can be
retrieved with the following instruction: ::

  img = get_input('fftimg')   #(convol.py - line 12)


Running the pipe
++++++++++++++++

Follow the same procedure as for the first example pipeline to run
this pipe and browse the results.


cmb
***

Running the pipe
++++++++++++++++

This CMB pipeline depends on two external python modules: 

 + healpy   :  http://code.google.com/p/healpy/
 + spherelib:  http://gitorious.org/spherelib


Problem
+++++++++++

This example illustrates a very simple CMB data processing use case.

The problem is the following: one wants to characterize the inverse
noise weighting spectral estimator (as applied to the WMAP 1yr data).
A first demo pipeline is built to check that the algorithm has been
correctly implemented. Then, Monte Carlo simulations are used to
compute error bar estimates.

A design pipeline
+++++++++++++++++

The design pipe scheme is: ::

  pipe_dot = """ 
  cmb->clcmb->clplot;
  noise->clcmb;
  noise->clnoise->clplot;
  """

where: 

+ ``cmb``: generate a CMB map from the LCDM power spectrum.
+ ``noise``: compute the mode coupling matrix from the input hit-count
  map.
+ ``clnoise``: compute the empirical noise power spectrum from a noise
  realization.
+ ``clcmb``: generate two noise realizations and add them to the CMB
  map to compute a first cross spectrum estimator. Then the weighting
  mask and mode coupling matrix are applied to get the inverse noise
  weighting estimator.
+ ``clplot``: make a plot to compare the pure cross spectrum and
  inverse noise weighting estimators.

As the first two orphan segments depend on a single shared parameter,
the map resolution ``nside``, this argument is pushed from the main
script.

Another input argument of the ``cmb`` segment is its simulation
identifier, which will be used for the later Monte Carlo runs. In
order to push two inputs to a single segment instance, we use the
python tuple data type::

  P.push(cmb=[(nside, 1)])
  P.push(noise=[nside])

From the segments, these inputs are retrieved with::

  (nside, sim_id) = get_input() ##(cmb.py line 14)
  nside = get_input()           ##(noise.py line 15)

The last segment produces a plot in which we compare:

 + the input LCDM power spectrum,
 + the binned cross spectrum of the noisy CMB maps,
 + the binned cross spectrum to which the hit-count weighting and mode
   coupling matrix have been applied,
 + the noise power spectrum computed by the ``clnoise`` segment.

In this plot we check that both estimators are correct, and that the
noise level is the expected one.

From the design pipeline to Monte Carlo
+++++++++++++++++++++++++++++++++++++++

As a second step, Monte Carlo simulations are used to compute error
bars. 

The ``clnoise`` segment is no longer needed, so the new pipe scheme
reads: ::

  pipe_dot = """ 
  cmb->clcmb->clplot;
  noise->clcmb;
  """

We now use the native data parallelization scheme of the pipe to build
many instances of the ``cmb`` and ``clcmb`` segments. ::

  cmbin = []
  for sim_id in [1,2,3,4,5,6]:
     cmbin.append((nside, sim_id))
  P.push(cmb=cmbin)