Commit a4ba521a authored by Maude Le Jeune's avatar Maude Le Jeune

rebuild db ok

parent 92de9c89
@@ -400,10 +400,99 @@ this pipe and browse the result.
**** cmb
***** Problematic
This example illustrates a very simple CMB data processing use case.
The problem is the following: one wants to characterize the inverse
noise weighting spectral estimator (as applied to the WMAP 1yr data).
A first demo pipeline is built to check that the algorithm has been
correctly implemented. Then, Monte Carlo simulations are used to
compute error bar estimates.
***** A design pipeline
The design pipe scheme is:
pipe_dot = """
+ cmb: generate a CMB map from the LCDM power spectrum.
+ noise: compute the mode coupling matrix from the input hitcount map.
+ clnoise: compute the empirical noise power spectrum from a noise
  realization.
+ clcmb: generate two noise realizations and add them to the CMB map to
  compute a first cross spectrum estimator. Then the weighting mask and
  mode coupling matrix are applied to get the inverse noise weighting
  estimator.
+ clplot: make a plot to compare the pure cross spectrum and inverse
  noise weighting estimators.
As the first two orphan segments depend on a single shared parameter,
the map resolution nside, this argument is pushed from the
main script.
Another input argument of the cmb segment is its simulation identifier,
which will be used later for the Monte Carlo runs. In order to push two
inputs to a single segment instance, we use the Python tuple data type.
P.push(cmb=[(nside, 1)])
From the cmb segment, those inputs are retrieved with:
nside = seg_input.values()[0][0] ## (line 13)
sim_id = seg_input.values()[0][1] ## (line 14)
while the noise segment reads its single nside input with:
nside = seg_input.values()[0] ## (line 16)
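As a sketch of this retrieval, assuming seg_input behaves like an ordinary dict keyed by parent segment name (the key and the values 512 and 1 here are illustrative, not taken from a real run):

```python
# Hypothetical stand-in for the seg_input dict a segment receives.
seg_input = {"cmb": (512, 1)}

# Unpack the (nside, sim_id) tuple pushed to the cmb segment.
# list() around values() keeps this runnable under Python 3 as well.
nside, sim_id = list(seg_input.values())[0]
print(nside, sim_id)  # 512 1
```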
The last segment produces a plot in which we compare:
+ the input LCDM power spectrum
+ the binned cross spectrum of the noisy CMB maps
+ the binned cross spectrum to which the hitcount weight and mode
  coupling matrix have been applied
+ the noise power spectrum computed by the clnoise segment.
In this plot we check that both estimators are correct, and that the
noise level is the expected one.
***** From the design pipeline to Monte Carlo
As a second step, Monte Carlo simulations are used to compute error
bars. The clnoise segment is no longer needed, so the new pipe scheme
reads:
pipe_dot = """
We now use the native data parallelization scheme of the pipe to build
many instances of the cmb and clcmb segments.
cmbin = []
for sim_id in [1,2,3,4,5,6]:
cmbin.append((nside, sim_id))
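The resulting list of tuples is then handed to the pipe with the same push syntax shown earlier (P.push(cmb=cmbin)), which spawns one cmb instance per tuple. A minimal sketch of building that input list, with an illustrative nside:

```python
nside = 512  # illustrative resolution

# One (nside, sim_id) tuple per Monte Carlo realization.
cmbin = []
for sim_id in [1, 2, 3, 4, 5, 6]:
    cmbin.append((nside, sim_id))

# P.push(cmb=cmbin)  # framework call: one cmb task per tuple
print(len(cmbin))  # 6
```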
***** Highlights
***** Running the pipe
This CMB pipeline depends on two external Python modules:
+ healpy:
+ spherelib:
** Running Pipes
*** The sample main file
@@ -173,12 +173,19 @@ class Scheduler():
parents: list of string, parent segment names.
parents = self.pipe.get_parents(seg, nophantom=True)
fn = self.pipe.get_meta_file(seg)
lst_dir = []
for e in parents:
fn = self.pipe.get_meta_file(seg)
with closing(file(fn, 'r')) as f:
d = pickle.load(f)
d = {}
d['parents'] = lst_dir
with closing(file(fn, 'w')) as f:
- r = pickle.dump(dict({'parents':lst_dir}),f)
+ r = pickle.dump(d,f)
def push_next_seg(self, seg):
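Under this change, the meta file is read back and updated rather than overwritten wholesale, so keys other than 'parents' survive the rewrite. A minimal sketch of that read-modify-write cycle, using Python 3 open() in place of the Python 2 file() from the source; the file name, keys, and values are made up:

```python
import os
import pickle
import tempfile
from contextlib import closing

# Hypothetical meta file; the framework's get_meta_file() would supply fn.
fn = os.path.join(tempfile.mkdtemp(), "cmb.meta")

# Seed the file with an existing key, as a previous run might have done.
with closing(open(fn, "wb")) as f:
    pickle.dump({"param": "nside"}, f)

# Read-modify-write, mirroring the patched Scheduler: existing keys survive.
with closing(open(fn, "rb")) as f:
    d = pickle.load(f)
d["parents"] = ["/data/pipe/noise_92de9c89"]
with closing(open(fn, "wb")) as f:
    pickle.dump(d, f)

with closing(open(fn, "rb")) as f:
    meta = pickle.load(f)
print(sorted(meta))  # ['param', 'parents']
```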
@@ -377,7 +377,7 @@ def parse_disk (pipedir):
lstpipe = []
found = False
- dircontent = glob.glob(path.join(pipedir,"seg_*"))
+ dircontent = glob.glob(path.join(pipedir,"*_*"))
## visit each segment's repository
for c in dircontent:
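The widened glob pattern picks up segment directories that no longer carry the seg_ prefix, while still skipping names without an underscore (such as a log directory). A sketch of the matching behavior using fnmatch on hypothetical directory names:

```python
import fnmatch

# Hypothetical contents of a pipe directory after the naming change.
names = ["cmb_a4ba521a", "noise_92de9c89", "log"]

# "*_*" matches anything containing an underscore, just as the old
# "seg_*" matched the prefixed layout.
segs = [n for n in names if fnmatch.fnmatch(n, "*_*")]
print(segs)  # ['cmb_a4ba521a', 'noise_92de9c89']
```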
@@ -424,7 +424,8 @@ def rebuild_db_from_disk(pipedir, sqlfile=None):
for curr_dir in lst_dir:
curr_dir = path.abspath(curr_dir)
R = LocalRepository(curr_dir)
- s = curr_dir.split("_")[-2] ## seg name
+ s = curr_dir.split("_")[-2].split("/")[-1]
+ #s = curr_dir.split("_")[-2] ## seg name
print "Creating segment %s instance (%s)."%(s, curr_dir)
## read curr_dir/ to get docstring
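The extra split("/")[-1] matters because, without the seg_ prefix, splitting on "_" alone leaves the leading path attached to the segment name. A sketch with a hypothetical directory (the path and hash are made up, and this still assumes the pipe directory itself contains no underscores):

```python
# Hypothetical segment directory: <pipedir>/<segname>_<hash>
curr_dir = "/data/pipe/cmb_a4ba521a"

# Old extraction: the leading path is still attached.
old = curr_dir.split("_")[-2]                # '/data/pipe/cmb'

# Fixed extraction: also strip the path component.
s = curr_dir.split("_")[-2].split("/")[-1]   # 'cmb'
print(old, s)
```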
@@ -433,11 +434,11 @@ def rebuild_db_from_disk(pipedir, sqlfile=None):
except Exception:
docline = ""
## read curr_dir/seg_s.meta to get parents, param, and tag
- fn = path.join(curr_dir, "seg_"+s+".meta") ##
+ fn = path.join(curr_dir, s+".meta") ##
with closing(file(fn, 'r')) as f:
meta = pickle.load(f)
seg_depend_cache[curr_dir] = meta['parents']
param = meta['param']
param = meta['param']
if meta.has_key('tag'):
tag = meta['tag']
@@ -477,7 +478,7 @@ def rebuild_db_from_disk(pipedir, sqlfile=None):
for t in lst_task:
## read task propertie from meta file
- fn = glob.glob(path.join(t, "seg_*.meta"))
+ fn = glob.glob(path.join(t, "*.meta"))
if fn:
with closing (file(fn[0],'r')) as f:
meta = pickle.load(f)
@@ -280,6 +280,11 @@ class Web:
str_lst_tag += "- "+t+"\\n"
## find log dir
log_dir = l[0][1].split("_")[0]
+ seg_1 = log_dir.split("/")[-1]
+ log_dir = log_dir[0:len(log_dir)-len(seg_1)]+ "log"
## Buttons
html += '<tr></tr>'
html += '</table>'
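The log-directory computation above strips the segment name from the end of the truncated path and appends "log", replacing the old split("seg")[0] trick that relied on the seg_ prefix. A sketch with a hypothetical segment path:

```python
# Hypothetical stored segment path: <pipedir>/<segname>_<hash>
seg_path = "/data/pipe/cmb_a4ba521a"

log_dir = seg_path.split("_")[0]        # '/data/pipe/cmb'
seg_1 = log_dir.split("/")[-1]          # 'cmb'
log_dir = log_dir[0:len(log_dir) - len(seg_1)] + "log"
print(log_dir)  # /data/pipe/log
```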
@@ -289,7 +294,7 @@ class Web:
html += '<a class="icon clear" href="javascript:uncheck(\'checkbox\');"><small>Clear</small></a>'
html += '<a class="icon tag" href="javascript:edit_tag(\'%s\');"><small>Tag</small></a>'%str_lst_tag
html += '<a class="icon delete" href="javascript:del_seg();"><small>Delete</small></a>'
- html += '<a class="icon log" href="log?logdir=%s"><small>Browse log</small></a>'%(l[0][1].split("seg")[0]+"log")
+ html += '<a class="icon log" href="log?logdir=%s"><small>Browse log</small></a>'%log_dir
html +='</p></fieldset>'
@@ -32,7 +32,7 @@ import os.path as path
nside = 512
- sim_ids = [1,2,3,4,5,6]
+ sim_ids = [8]
pipe_dot = """