Pipelet is a free framework allowing for the creation, execution and
browsing of scientific data processing pipelines. It provides:

+ easy chaining of interdependent elementary tasks,
+ web access to data products,
+ branch handling,
+ automated distribution of computational tasks.

Both engine and web interface are written in Python.

* Tutorial
** Introduction
*** Why use pipelines

The pipeline mechanism allows you to apply a sequence of processing
steps to your data, in such a way that the input of each process is the
output of the previous one. Making these different processing steps
explicit, in the right order, is essential in data analysis: it keeps
track of what you did and ensures that the whole processing remains
consistent.

*** How it works

Pipelet is built on the ability to save to disk every intermediate
input or output of your pipeline. This is usually not a strong
constraint and offers a lot of benefits: you can stop the processing
whenever you want and resume it later without recomputing everything,
simply taking the last products available on disk and continuing the
processing where it stopped. This approach pays off whenever the
computation cost is higher than the cost of the disk space required by
the intermediate products.

*** The Pipelet functionalities

Pipelet is a free framework which helps you:
+ to write and manipulate pipelines with any dependency scheme,
+ to dispatch the computational tasks on parallel architectures,
+ to keep track of what processing has been applied to your data and perform comparisons.

** Getting started
*** Pipelet installation 
**** Dependencies 

+ Running the Pipelet engine requires Python >= 2.6.

+ The web interface of Pipelet requires the cherrypy3 Python module
  (on Debian: aptitude install python-cherrypy3).

You may find it useful to install some generic scientific tools that interact nicely with Pipelet:
+ numpy
+ matplotlib
+ latex 

**** Getting Pipelet

There is no published stable release of Pipelet right now.

git clone git://gitorious.org/pipelet/pipelet.git

**** Installing Pipelet

sudo python setup.py install

*** Running a simple test pipeline

1. Run the test pipeline

cd test
python main.py

2. Add this pipeline to the web interface

pipeweb track test ./.sqlstatus

3. Set the access control and launch the web server

pipeutils -a username -l 2 .sqlstatus
pipeweb start

4. You should be able to browse the result on the web page
   http://localhost:8080

*** Getting a new pipe framework

To get a new pipeline framework, with example main and segment scripts:

pipeutils -c pipename

This command creates a directory named pipename which contains:
+ a main script (main.py) providing functionalities to execute
  your pipeline in various modes (debug, parallel, batch mode, ...)
+ an example segment script (seg_default_code.py) which illustrates
  the pipelet utilities with comments.

The next section describes those two files in more detail.

** Writing Pipes
*** Pipeline architecture

The definition of a data processing pipeline consists of:
+ a succession of python scripts, called segments, coding each step
  of the actual processing.
+ a main script that defines the dependency scheme between segments,
  and launches the processing.

The dependencies between segments must form a directed acyclic
graph. This graph is described by a character string using a subset of the
graphviz dot language (http://www.graphviz.org). For example, the string:

"""
a -> b -> d;
c -> d;
c -> e;
"""

defines a pipeline with 5 segments {"a", "b", "c", "d", "e"}. The
relation "a->b" ensures that the processing of the segment "a" will be
done before the processing of its child segment "b". The output of "a"
will also be fed as input to "b". In the given example, the node
"d" has two parents "b" and "c". Both will be executed before "d". As
there is no relation between "b" and "c", which of the two will be
executed first is not defined.

When executing the segment "seg", the engine looks for a python script
named seg.py. If not found, it looks iteratively for script files
named "se.py" and "s.py", stripping the last character at each
attempt. This way, different segments of the pipeline can share the
same code, if they are given names with a common root: for instance,
two segments named "clplot1" and "clplot2" can both be served by a
single script "clplot.py". This mechanism is useful to write generic
segments and is completed by the hooking system, described in the
advanced usage section. The code is then executed in a specific
namespace (see below The segment environment).

*** The Pipeline object

Practically, the creation of a Pipeline object needs 3 arguments:

P = Pipeline(pipedot, code_dir=, prefix=)

- pipedot is the string description of the pipeline
- code_dir is the path where the segment scripts can be found
- prefix is the path to the data repository (see below Hierarchical data storage)
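
For instance, a minimal construction could look like the following
sketch (the directory names ./segments and ./products are purely
illustrative; the call mirrors the fft example described below):

import os.path as op
from pipelet import pipeline

pipedot = """
a -> b -> d;
c -> d;
c -> e;
"""

P = pipeline.Pipeline(pipedot, code_dir=op.abspath('./segments'),
                      prefix=op.abspath('./products'))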

It is possible to output the graphviz representation of the pipeline. 
First, save the graph string into a .dot file with the pipelet function: 

P.to_dot('pipeline.dot')

Then, convert it to an image file with the dot command: 

dot -Tpng -o pipeline.png pipeline.dot

*** Dependencies between segments

The modification of the code of one segment will trigger its
recalculation and the recalculation of all the segments which
depend on it.

The output of a segment is a list of python objects. If a segment has
no particular output, this list can be empty and does not need to be
specified. Elements of the list are allowed to be any kind of
pickleable python objects. However, a good practice is to fill the
list with the minimal set of characteristics relevant to describe the
output of the segment, and to defer the storage of the data to
appropriate structures and file formats. For example, a segment which
performs computations on large images could in principle pass the
resulting images to the following segment using the output list. It
is better practice to store each resulting image in a dedicated file
and to pass in the list only the information allowing an unambiguous
identification of this file (like its name or part of it) to the
following segments.
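
A sketch of this pattern could read as follows (process_image and
write_image are hypothetical helpers; get_data_fn is part of the
segment environment described below):

# hypothetical sketch of a segment handling a large image
img_id = seg_input.values()[0]                # identifier pushed by the parent segment
filtered = process_image(img_id)              # read and process the image (hypothetical helper)
fn = get_data_fn('filtered_%s.fits' % img_id)
write_image(fn, filtered)                     # store the heavy product on disk (hypothetical helper)
seg_output = [img_id]                         # pass only the lightweight identifier downstream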

The input of a child segment is taken from a set built from the output
lists of its parents. The content of the input set is actually tunable
using the multiplex directive (see below). However, the simplest and
default behaviour of the engine is to form the Cartesian product of
the output lists of its parents.

To illustrate this behaviour let us consider the following pipeline,
built from three segments:

 knights -> melt;
 quality -> melt;

and assume that the respective output lists of segments knights and
quality are:

 ["Lancelot", "Galahad"]
and:
 ['the Brave', 'the Pure']

The Cartesian product of the previous sets is:
 [('Lancelot', 'the Brave'), ('Lancelot', 'the Pure'), ('Galahad', 'the Brave'), ('Galahad', 'the Pure')]

Four instances of segment "melt" will thus be run, each one receiving
as input one of the four 2-tuples.

At the end of the execution of all the instances of a segment, their
output lists are concatenated. If the action of segment "melt" is to
concatenate the two strings it receives, separated by a space, the
final output set of segment "melt" will be:

 ['Lancelot the Brave', 'Lancelot the Pure', 'Galahad the Brave', 'Galahad the Pure'].
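
A hypothetical melt.py implementing this behaviour could be as simple
as the following sketch (assuming the input dictionary is keyed by
parent segment name, as in the convol example below):

# melt.py -- hypothetical sketch
knight  = seg_input['knights']
quality = seg_input['quality']
seg_output = [knight + ' ' + quality]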


TODO: describe the input data type: dictionary, ...?

*** Multiplex directive

This default behaviour can be altered by specifying a #multiplex
directive in the commentary of the segment code. If several multiplex
directives are present in the segment code, the last one is retained.

- #multiplex : activate the default behaviour.

- #multiplex cross_prod group by 0 : the input set contains one tuple of all the outputs.

- #multiplex cross_prod group by ... : compute the cross product and group the tasks
  that are identical. To make use of group, elements of the output set
  have to be hashable.
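
The directive is written as an ordinary comment inside the segment
script. A hypothetical sketch (the segment name gather and the exact
content of seg_input under grouping are illustrative only):

# gather.py -- hypothetical segment using a multiplex directive
#multiplex cross_prod group by 0
# all parent outputs are delivered to this single task instance
logger.info("gathered input: %s" % str(seg_input))
seg_output = []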

*** Orphan segments

By default, orphan segments (segments without parents) have no input
argument (an empty list), and are therefore executed once.
It is however possible to push an input list to an orphan segment.
If P is an instance of the Pipeline object:

P.push (segname=seg_input)
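
For instance, the fft example pipeline described below pushes a list
of four image identifiers to its orphan segment fftimg:

P.push(fftimg=[1, 2, 3, 4])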


*** Depend directive

As explained in the introduction section, Pipelet offers the
possibility to save CPU time by saving intermediate products on disk.
We call intermediate products the input/output data files of the
different segments.

Each segment repository is identified by a unique key which depends
on:
- the segment processing code and parameters (segment and hook
  scripts)
- the input data (identified from the key of the parent segments)

Every change made on a segment (new parameter or new parent) will then
give a different key, and tell the pipelet engine to compute a new
segment instance.

It is possible to add some external dependencies to the key
computation using the depend directive: 

#depend file1 file2
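
For instance, to make the key sensitive to an external script shared by
several pipelines (the path below is purely illustrative):

#depend /home/user/shared_tools/fitting.py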

At the very beginning of the pipeline execution, all dependencies will
be stored, to prevent any change (code editing) between the key
computation and the actual processing.

Note that this mechanism works only for segment and hook
scripts. External dependencies are also read at the beginning of the
pipeline execution, but are only used for the key computation.

*** Hierarchical data storage

This system provides versioning of your data and easy access through
the web interface. It is also used to keep track of the code, of the
execution logs, and of various meta-data of the processing. Of course,
you remain able to bypass the hierarchical storage and store your
actual data elsewhere, but you will lose the benefit of automated
versioning, which proves to be quite convenient.

The storage is organized as follows:

- all pipeline instances are stored below a root which corresponds to
  the prefix parameter of the Pipeline object. 
      /prefix/
- all segment meta data are stored below a root whose name corresponds
  to a unique match of the segment code.
      /prefix/segname_YFLJ65/
- Segment's meta data are:
  - a copy of the segment python script
  - a copy of all segment hook scripts
  - a parameter file (.args) which contains the segment parameter values
  - a meta data file (.meta) which contains some extra meta data
- all segment instance data and meta data are stored in a specific subdirectory
  whose name corresponds to a string representation of its input
  	/prefix/segname_YFLJ65/data/1/
- if there is a single segment instance, then data are stored in
       /prefix/segname_YFLJ65/data/
- If a segment has at least one parent, its root will be located below
  one of its parents':
       /prefix/segname_YFLJ65/segname2_PLMBH9/
- etc...
  



*** The segment environment

The segment code is executed in a specific environment that provides:

1. access to the segment input and output
   - seg_input: this variable is a dictionary containing the input of the segment
   - seg_output: this variable has to be a list.

2. Functionalities to use the automated hierarchical data storage system.
   - get_data_fn(basename): complete the filename with the path to the working directory.
   - glob_seg(seg, regexp): return the list of filenames matching regexp from segment seg
   - get_tmp_fn(): return a temporary filename.

3. Functionalities to use the automated parameter handling
   - lst_par: list of parameter names of the segment
   - lst_tag: list of parameter names which will be made visible from the web interface
   - load_param(seg, globals(), lst_par): retrieve the parameter values of segment seg and update the namespace.

4. Various convenient functionalities
   - save_products(filename, lst_par='*'): use pickle to save a
     part of a given namespace.
   - load_products(filename, lst_par): update the namespace by
     unpickling the requested objects from the file.
   - logged_subprocess(lst_args): execute a subprocess and log its
     output in processname.log and processname.err.
   - logger is a standard logging.Logger object that can be used to
     log the processing.

5. Hooking support
   Pipelet enables you to write reusable generic
   segments by providing a hooking system via the hook function.
   hook(hookname, globals()): execute the Python script 'segname_hookname.py' and update the namespace.
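
To make this concrete, here is a hypothetical sketch of a segment
script using several of these facilities (the segment name smooth, the
parameter sigma and the file names are illustrative only):

# smooth.py -- hypothetical sketch of a segment script
import numpy as np

sigma = 3.0                        # a segment parameter
lst_par = ['sigma']                # handled by the automated parameter system
lst_tag = ['sigma']                # and made visible from the web interface

img_id = seg_input.values()[0]     # input pushed by the parent segment
logger.info("smoothing %s with sigma=%s" % (img_id, sigma))

data = np.random.randn(128, 128)   # stands for the real input data
smoothed = data                    # stands for the actual smoothing step

fn = get_data_fn('smoothed_%s.npy' % img_id)
np.save(fn, smoothed)              # heavy product goes to the hierarchical storage

seg_output = [img_id]              # only a lightweight identifier is passed downstream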


*** The example pipelines
**** fft

***** Highlights

This example illustrates a very simple image processing use case.
The problem is the following: one wants to apply a Gaussian
filter in the Fourier domain to several 2D images.

The pipe scheme is: 

pipedot = """
mkgauss->convol;
fftimg->convol;
"""

where segment 'mkgauss' computes the Gaussian filter, 'fftimg' computes the
Fourier transforms of the input images, and 'convol' performs the
filtering in the Fourier domain and the inverse transform of the filtered
images.

P = pipeline.Pipeline(pipedot, code_dir=op.abspath('./'), prefix=op.abspath('./'))
P.to_dot('pipeline.dot')

The pipe scheme is output as a .dot file, which can be converted to an
image file with the command line:

dot -Tpng -o pipeline.png pipeline.dot

To apply this filter to several images (in our case 4 input images),
the pipe's data parallelism is used.
From the main script, a 4-element list is pushed to the 'fftimg'
segment. 

P.push(fftimg=[1,2,3,4]) 

At execution, 4 instances of the 'fftimg' segment will be
created, and each of them outputs one element of this list:

img = seg_input.values()[0] #(fftimg.py - line 16)
seg_output = [img]          #(fftimg.py - line 41)

On the other hand, a single instance of the 'mkgauss' segment will be
executed, as there is only one filter to apply.

The last segment 'convol', which depends on the two others, will be
executed with a number of instances that is the Cartesian product of
its 4+1 inputs (i.e. 4 instances).

The instance identifier, which is set by the 'fftimg' output, can be
retrieved with the following instruction:

img = seg_input['fftimg']   #(convol.py - line 12)

***** Running the pipe

Follow the same procedure as for the first example pipeline to run
this pipe and browse the results.





 





**** cmb
***** Problem

This example illustrates a very simple CMB data processing use.  

The problem is the following: one wants to characterize the
inverse noise weighting spectral estimator (as applied to the WMAP 1yr
data). A first demo pipeline is built to check that the algorithm
has been correctly implemented. Then, Monte Carlo simulations are used
to compute error bar estimates.

***** A design pipeline

The design pipe scheme is: 

pipe_dot = """ 
cmb->clcmb->clplot;
noise->clcmb;
noise->clnoise->clplot;
"""

where:
+ cmb: generate a CMB map from the LCDM power spectrum.
+ noise: compute the mode coupling matrix from the input hitcount map.
+ clnoise: compute the empirical noise power spectrum from a noise
  realization.
+ clcmb: generate two noise realizations, add them to the CMB map, and compute a
  first cross spectrum estimator. Then the weighting mask and mode
  coupling matrix are applied to get the inverse noise weighting
  estimator.
+ clplot: make a plot to compare the pure cross spectrum vs the inverse noise
  weighting estimators.

As the first two orphan segments depend on a single shared parameter,
the map resolution nside, this argument is pushed from the
main script.

Another input argument of the cmb segment is its simulation identifier,
which will be used later for the Monte Carlo runs. In order to push two
inputs to a single segment instance, we use the python tuple data type.

P.push(cmb=[(nside, 1)])
P.push(noise=[nside])

From the segments, those inputs are retrieved with:

nside  = seg_input.values()[0][0] ##(cmb.py line 13)
sim_id = seg_input.values()[0][1] ##(cmb.py line 14)

nside  = seg_input.values()[0]  ##(noise.py line 16)


The last segment produces a plot in which we compare:
+ the input LCDM power spectrum
+ the binned cross spectrum of the noisy CMB maps
+ the binned cross spectrum to which the hitcount weights and
  mode coupling matrix have been applied
+ the noise power spectrum computed by the clnoise segment.

In this plot we check that both estimators are correct, and that the
noise level is the expected one.


***** From the design pipeline to Monte Carlo

As a second step, Monte Carlo simulations are used to compute error
bars. 

The clnoise segment is no longer needed, so the new pipe scheme
reads:

pipe_dot = """ 
cmb->clcmb->clplot;
noise->clcmb;
"""

We now use the native data parallelization scheme of the pipe to build
many instances of the cmb and clcmb segments. 

cmbin = []
for sim_id in [1,2,3,4,5,6]:
    cmbin.append((nside, sim_id))
P.push(cmb=cmbin)

***** Running the pipe

This CMB pipeline depends on two external python modules: 
+ healpy   :  http://code.google.com/p/healpy/
+ spherelib:  http://gitorious.org/spherelib





** Running Pipes

*** The sample main file

A sample main file is made available when creating a new pipelet
framework. It is copied from the reference file: 

pipelet/pipelet/static/main.py

This script illustrates the various ways of running pipes. It describes
the different parameters, and also how to write a
main python script that can be used as any binary from the command line
(including option parsing).


*** Common options
Some options are common to all running modes.
**** log level

The logging system is handled by the python logging facility module.
This module defines the following log levels:
+ DEBUG
+ INFO
+ WARNING
+ ERROR
+ CRITICAL

All logging messages are saved in the different pipelet log files,
available from the web interface (rotating file logging). It is also
possible to print those messages on the standard output (stream
logging), by setting the desired log level in the launcher options.
For example:

import logging
launch_process(P, N, log_level=logging.DEBUG)

If set to 0, stream logging will be disabled.


**** matplotlib

The matplotlib documentation says:

"Many users report initial problems trying to use matplotlib in web
application servers, because by default matplotlib ships configured to
work with a graphical user interface which may require an X11
connection. Since many barebones application servers do not have X11
enabled, you may get errors if you don’t configure matplotlib for use
in these environments. Most importantly, you need to decide what kinds
of images you want to generate (PNG, PDF, SVG) and configure the
appropriate default backend. For 99% of users, this will be the Agg
backend, which uses the C++ antigrain rendering engine to make nice
PNGs. The Agg backend is also configured to recognize requests to
generate other output formats (PDF, PS, EPS, SVG). The easiest way to
configure matplotlib to use Agg is to call:

matplotlib.use('Agg')
"

The matplotlib and matplotlib_interactive options turn the matplotlib
backend to Agg in order to allow execution in a non-interactive
environment.

Those two options are set to True by default in the sample main
script.
TODO : explain why. 







*** The interactive mode
This mode has been designed to ease debugging. If P is an instance of
the pipeline object, the syntax reads:

from pipelet.launchers import launch_interactive
w, t = launch_interactive(P)
w.run()

In this mode, each task will be computed sequentially.
Do not hesitate to invoke the Python debugger from IPython: %pdb



*** The process mode
In this mode, one can run simultaneous tasks (if the pipe scheme
allows it).
The number of subprocesses is set by the N parameter:

from pipelet.launchers import launch_process
launch_process(P, N)

*** The batch mode
In this mode, one can submit batch jobs to execute the tasks.
The number of jobs is set by the N parameter:

from pipelet.launchers import launch_pbs
launch_pbs(P, N , address=(os.environ['HOST'],50000))

It is possible to specify some job submission options like:
+ job name
+ job header: this string is prepended to the PBS job scripts. You may
  want to add some environment-specific paths. Log and error files are
  automatically handled by the pipelet engine, and made available from
  the web interface.
+ cpu time: syntax is "hh:mm:ss"

The 'server' option can be disabled to add some workers to an existing
scheduler.
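
A hypothetical call combining these options could look as follows; the
keyword names (job_name, job_header, cpu_time, server) are given for
illustration only and should be checked against the launchers module of
your Pipelet version:

import os
from pipelet.launchers import launch_pbs
launch_pbs(P, N, address=(os.environ['HOST'], 50000),
           job_name='mypipe',
           job_header='#PBS -l nodes=1\nexport PATH=/opt/local/bin:$PATH',
           cpu_time='02:00:00',
           server=False)   # only add workers to an already running scheduler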



** Browsing Pipes
*** The pipelet webserver and ACL

The pipelet webserver allows the browsing of multiple pipelines.
Each pipeline has to be registered using:

pipeweb track <shortname> sqlfile

To remove a pipeline from the tracked list, use : 

pipeweb untrack <shortname>

As pipeline browsing implies disk parsing, some basic security
has to be set up as well. All users have to be registered with a specific
access level (1 for read-only access, 2 for write access).

pipeutils -a <username> -l 2 sqlfile

To remove a user from the user list: 

pipeutils -d <username> sqlfile

Start the web server using : 

pipeweb start

Then the web application will be available on the web page http://localhost:8080

To stop the web server : 

pipeweb stop

*** The web application

In order to ease the comparison of different processing runs, the web
interface displays various views of the pipeline data:

**** The index page 

The index page displays a tree view of all pipeline instances. Each
segment may be expanded or collapsed via the +/- buttons.

The parameters used in each segment are summarized and displayed with
the date of execution and the number of related tasks ordered by
status.

A checkbox allows operations to be performed on multiple segments:
  - deletion: to clean unwanted data
  - tag: to tag remarkable data

The filter panel allows the segment instances to be displayed according
to two criteria:
  - tag
  - date of execution
**** The code page

Each segment name is a link to its code page. From this page the user
can view the code of all python scripts which have been applied to the data.

The tree view is reduced to the current segment and its related
parents.

The root path corresponding to the data storage is also displayed.


**** The product page

The number of related tasks, ordered by status, is a link to the product
pages, where the data can be directly displayed (for images or text
files) or downloaded.
From this page it is also possible to delete a specific product and
its dependencies.


**** The log page

The log page can be accessed via the log button of the filter panel.
Logs are ordered by date.




* Advanced usage
** Database reconstruction

In case of an unfortunate loss of the pipeline sql database, it is
possible to reconstruct it from the disk:

import pipelet
pipelet.utils.rebuild_db_from_disk (prefix, sqlfile)

All information will be retrieved, but with new identifiers.

** The hooking system

As described in the 'segment environment' section, pipelet supports
a hooking system which allows the use of generic processing code and
code sectioning.

Consider a set of instructions that has to be systematically
applied at the end of a segment (post processing): one can put those
instructions in a separate script file named, for example,
'segname_postproc.py' and call the hook function:

hook('postproc', globals()) 
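
A hypothetical 'segname_postproc.py' could, for instance, archive part
of the segment namespace (a sketch only; the hook script is executed in
the segment environment described above):

# segname_postproc.py -- hypothetical post-processing hook
logger.info("post-processing %d output products" % len(seg_output))
save_products(get_data_fn('postproc_state.pkl'), lst_par='*')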

A specific dictionary can be passed to the hook script to avoid
confusion.

The hook scripts are included in the hash key computation (see the
depend directive section).




** Writing custom environments

The pipelet software provides a set of default utilities available
from the segment environment. It is possible to extend this default
environment or even re-write a completely new one.

*** Extending the default environment

The different environment utilities are actually methods of the class
Environment. It is possible to add new functionalities by using the
python inheritance mechanism:

File : myenvironment.py

   from pipelet.environment import *

   class MyEnvironment(Environment):
       def my_function(self):
           """ My function does nothing. """
           return


The pipelet engine objects (segments, tasks, pipeline) are available
from the worker attribute self._worker. See the section "The pipelet
actors" for more details about the pipelet machinery.


*** Writing a new environment

In order to start with a completely new environment, extend the base
environment: 

File : myenvironment.py

   from pipelet.environment import *

   class MyEnvironment(EnvironmentBase):
       def my_get_data_fn(self, x):
           """ New name for get_data_fn. """
           return self._get_data_fn(x)

       def _close(self, glo):
           """ Post processing code. """
           return glo['seg_output']

From the base environment, the basic functionalities for getting file
names and executing hook scripts are still available through:
- self._get_data_fn
- self._hook

The segment input argument is also stored in self._seg_input.
The segment output has to be returned by the _close(self, glo)
method.

The pipelet engine objects (segments, tasks, pipeline) are available
from the worker attribute self._worker. See the section "The pipelet
actors" for more details about the pipelet machinery.


*** Loading another environment

To load another environment, set the pipeline environment attribute
accordingly:

Pipeline(pipedot, code_dir=, prefix=, env=MyEnvironment)

** Writing custom main files
** Launching pipeweb behind apache

Pipeweb uses the cherrypy web framework server and can be run behind an
apache webserver, which brings essentially two advantages:
- https support.
- faster static file serving.


* The pipelet actors
  
This section documents the code for developers.

** The Repository object
** The Pipeline object
** The Task object
** The Scheduler object
** The Tracker object
** The Worker object
** The Environment object