# LISA Instrument issues

https://gitlab.in2p3.fr/lisa-simulation/instrument/-/issues

## Issue #113: Missing TPS deviations from TPS in the TCB timer deviations

https://gitlab.in2p3.fr/lisa-simulation/instrument/-/issues/113 (Jean-Baptiste Bayle, j2b.bayle@gmail.com, 2022-12-09)

The TCB timer deviations represent the deviations of the onboard clock time with respect to the TCB. They should contain the deviations of the clock with respect to the spacecraft proper time (clock offset, noise, and drifts), in addition to the deviation of the TPS with respect to the TCB (relativistic effect).
The TCB timer deviations were initially implemented in https://gitlab.in2p3.fr/lisa-simulation/instrument/-/merge_requests/81 as
```python
self.tcb_timer_deviations = \
    self.clock_offsets + \
    self.clock_freqoffsets * self.telemetry_t + \
    self.clock_freqlindrifts * self.telemetry_t**2 / 2 + \
    self.clock_freqquaddrifts * self.telemetry_t**3 / 3 + \
    self.tps_proper_time_deviations + \
    self.tcb_sync_noises
```
and missed the clock noise (which must be integrated to obtain a time). This was fixed in https://gitlab.in2p3.fr/lisa-simulation/instrument/-/merge_requests/108. Issues with the sampling of the TCB timer deviations were addressed in https://gitlab.in2p3.fr/lisa-simulation/instrument/-/merge_requests/110, but it seems this resulted in the `tps_proper_time_deviations` no longer being included (they are still read from the orbit files).
This is a bug that we need to fix.
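For reference, a sketch of the intended composition, as a hedged reconstruction following the snippet above, not the actual implementation; the `dt` argument and the cumulative-sum integration of the clock noise are our assumptions:

```python
import numpy as np

def tcb_timer_deviations(clock_offsets, clock_freqoffsets, clock_freqlindrifts,
                         clock_freqquaddrifts, clock_noise_fluctuations,
                         tps_proper_time_deviations, tcb_sync_noises,
                         telemetry_t, dt):
    """Sketch of the complete TCB timer deviations for one spacecraft.

    Inputs are scalars or arrays sampled on `telemetry_t`; the clock noise
    (a fractional frequency deviation) is integrated to get a time.
    """
    # Deterministic clock model, integrated from frequency to time
    clock_model = (clock_offsets
                   + clock_freqoffsets * telemetry_t
                   + clock_freqlindrifts * telemetry_t**2 / 2
                   + clock_freqquaddrifts * telemetry_t**3 / 3)
    # Integrated clock noise (missed in !81, fixed in !108)
    integrated_clock_noise = np.cumsum(clock_noise_fluctuations) * dt
    # TPS vs. TCB relativistic term (the term dropped after !110, i.e. this bug)
    return (clock_model + integrated_clock_noise
            + tps_proper_time_deviations + tcb_sync_noises)
```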
Also, we might reconsider the naming of these quantities (TCB timer deviations, TPS proper time deviations, etc.), as they are hard to remember and neither very consistent nor explicit.
Milestone: v1.2

## Issue #107: Parallelize computations applied to `ForEachObject`

https://gitlab.in2p3.fr/lisa-simulation/instrument/-/issues/107 (Jean-Baptiste Bayle, j2b.bayle@gmail.com, 2022-10-14)

Functions applied to `ForEachMOSA` and `ForEachSC` objects are applied _independently_ to each of the values (often Numpy arrays). In particular, we generate noise time series, filter, resample, etc. This makes it an [embarrassingly parallel](https://en.wikipedia.org/wiki/Embarrassingly_parallel) problem that can be sped up using parallelization. An easy solution is to use a thread pool from the `multiprocessing` or `concurrent.futures` standard packages.
We should investigate and implement such a solution, and measure the gain in runtime.
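A minimal sketch with `concurrent.futures`; the `transform_each` helper and the MOSA indices are illustrative, not the actual `ForEachObject` API:

```python
import concurrent.futures

import numpy as np

MOSAS = ["12", "23", "31", "13", "32", "21"]  # illustrative MOSA indices

def transform_each(values, func, max_workers=6):
    """Apply `func` independently to each per-MOSA array, in parallel.

    `values` maps a MOSA index to a Numpy array; since heavy Numpy/Scipy
    operations release the GIL, a thread pool is often sufficient.
    """
    with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as pool:
        results = pool.map(func, values.values())
        return dict(zip(values.keys(), results))

# Example: a filter-like operation applied to each MOSA independently
data = {mosa: np.random.default_rng(int(mosa)).normal(size=1000) for mosa in MOSAS}
smoothed = transform_each(data, lambda x: np.convolve(x, np.ones(8) / 8, mode="same"))
```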
Milestone: v1.2

## Issue #106: Investigate possibility of time chunking

https://gitlab.in2p3.fr/lisa-simulation/instrument/-/issues/106 (Jean-Baptiste Bayle, j2b.bayle@gmail.com, 2022-10-14)

LISA Instrument is currently not suited for long simulations (more than a few months) because of memory pressure. However, we need to run year-long simulations, and therefore need to investigate solutions. One of them is time chunking combined with memory optimizations (releasing memory once it is no longer needed by downstream processes); Dask is a well-known framework that can help here. A nice side effect is (hopefully) a speed-up thanks to parallel execution (Dask is basically a task scheduler), as well as compatibility with most computing clusters.
We should investigate the use of Dask, i.e.:
* Investigate potential showstoppers
* Evaluate what needs to be changed and developed specifically for Dask
* Have a demonstrator that it can work (not to waste resources)
* Make sure that we have the resources (maybe see if we can have students on this), divide up the work and do it
* Test the result, and evaluate the performance
Using Dask probably means that we have to move away from the `ForEachObject` API and use plain Dask arrays (with Numpy arrays under the hood, but the high-level interface is the same). This has been discussed at length (and investigated) in #34 and !46; we should reuse various elements of what was discussed and implemented there.
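To illustrate the chunking idea without pulling in Dask, here is a minimal generator-based sketch (the function names are ours, not part of LISA Instrument): only one chunk is resident in memory at a time, and reductions are computed in a streaming fashion.

```python
import numpy as np

def chunked_noise(total_size, chunk_size, seed=0):
    """Yield a long white-noise time series chunk by chunk.

    Only one chunk lives in memory at a time; with Dask, the same idea is
    expressed through `dask.array` chunks and a lazily evaluated task graph.
    """
    rng = np.random.default_rng(seed)
    for start in range(0, total_size, chunk_size):
        yield rng.normal(size=min(chunk_size, total_size - start))

def chunked_rms(chunks):
    """Streaming reduction: RMS over all chunks without concatenating them."""
    total, count = 0.0, 0
    for chunk in chunks:
        total += np.sum(chunk**2)
        count += chunk.size
    return np.sqrt(total / count)

rms = chunked_rms(chunked_noise(total_size=1_000_000, chunk_size=100_000))
```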
@mastaa @ohartwig
## Issue #104: Add a report job in CI

https://gitlab.in2p3.fr/lisa-simulation/instrument/-/issues/104 (Jean-Baptiste Bayle, j2b.bayle@gmail.com, 2022-09-01)

Similarly to what we have in LISANode, it might be useful to add a "report" CI job that runs various simulations with individual noises enabled and produces plots as a diagnostic tool.

Milestone: v1.2

## Issue #95: Add unprimed clocks to hexagon and resample measurements

https://gitlab.in2p3.fr/lisa-simulation/instrument/-/issues/95 (Jean-Baptiste Bayle, j2b.bayle@gmail.com, 2022-07-04)

After #94, all signals and measurements are sampled according to the lab timeframe. We should:
* introduce 3 clocks (just the clock fluctuations for now, i.e., pink noise?)
* simulate sidebands
* simulate 3 sideband-to-sideband beatnotes
* resample each beatnote according to one clock
* make sure that the 3-signal combination is no longer vanishing
And in post-processing, make sure that applying the algorithm from the paper yields a zero combination.
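As a toy illustration of the 3-signal combination, here is a hedged sketch: the real hexagon model involves sidebands, pilot tones, and resampling, none of which is modeled here, and the additive per-clock noise is only a crude stand-in for clock-induced errors.

```python
import numpy as np

rng = np.random.default_rng(42)
size = 10_000

# Toy hexagon: three lasers beat pairwise, so the cyclic sum of the three
# beatnotes vanishes identically.
laser = {i: rng.normal(size=size) for i in (1, 2, 3)}
beatnote = {
    (1, 2): laser[1] - laser[2],
    (2, 3): laser[2] - laser[3],
    (3, 1): laser[3] - laser[1],
}
three_signal = beatnote[(1, 2)] + beatnote[(2, 3)] + beatnote[(3, 1)]

# Once each beatnote is affected by its own (noisy) clock, the combination
# picks up the clock noise and no longer vanishes.
clock = {i: 1e-2 * rng.normal(size=size) for i in (1, 2, 3)}
measured = {pair: b + clock[pair[0]] for pair, b in beatnote.items()}
three_signal_noisy = sum(measured.values())
```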
At this stage, we only have 3 clocks ("primed" clocks and electrical signals are not simulated; we assume that they vanish).
Assignee: Kohei Yamamoto

## Issue #91: Simulate initial beatnote phases

https://gitlab.in2p3.fr/lisa-simulation/instrument/-/issues/91 (Jan Niklas Reinhardt, 2022-06-16)

Motivation: we might be able to deduce information about the pseudo-ranges from the ISI sideband beatnotes (and maybe also from the ISI carrier beatnotes) if we have them given in phase. Currently, the beatnotes are simulated in frequency, so we need their initial phases to integrate them.
Procedure:
- simulate an initial phase for each laser
- simulate an initial phase for each clock (pilot tone) (confirm with @ohartwig if our understanding of the initial pilot tone phases is correct)
- go through the LISA simulation model equation by equation and write down how those initial phases propagate to the different optical benches (for which noise sources do we have to include initial phases along the way?)
- combine those propagated initial phases to form the initial phases of the beatnotes
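As a toy illustration of why the initial phases matter when integrating frequency beatnotes to phase (all names and values below are illustrative, not part of the simulation model):

```python
import numpy as np

def beatnote_phase(freq, phase0, fs):
    """Integrate a beatnote frequency series to phase, given an initial phase.

    Without `phase0`, the integration constant is lost, which is exactly why
    the initial phases must be simulated and propagated.
    """
    return phase0 + np.cumsum(freq) / fs

fs = 4.0                 # Hz, illustrative sampling rate
freq = np.full(8, 2.0)   # constant 2 Hz beatnote frequency (toy units)
phi = beatnote_phase(freq, phase0=0.25, fs=fs)
```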
## Issue #79: Handle decomposing into offsets and fluctuations more consistently

https://gitlab.in2p3.fr/lisa-simulation/instrument/-/issues/79 (Martin Staab, 2022-03-22)

At the moment, we split up the calculation/propagation of some quantities (e.g., laser beams or clock noise) into an offset and a fluctuation part. Their important properties are:
* offsets: slow variations (out-of-band), usually large in value
* fluctuations: in-band signal, usually small, which enables Taylor expansion/simplification of some terms
We should probably think about doing this more consistently throughout the simulator. Examples are noise processes whose spectral shapes diverge as f -> 0 Hz: for long simulations, those noise time series can blow up in numerical value. The disadvantages are that we lose numerical precision in-band, and that some of the assumptions might no longer hold (which is probably even more critical).
A solution could be to treat (all, or at least more) quantities in a decomposed form. To achieve that, we could also include the out-of-band part of any noise (not only deterministic drifts) in the offset part. This would also mean that any computation performed on the total variable needs to implement a side-by-side processing of offsets and fluctuations.
As an example, I experimented with integration (cumulative sum), since this operation is numerically unstable and causes large drifts when applied to white noise. The operation reads:
```math
\begin{aligned}
y[n] &= y[n-1] + \Delta t x[n] \\
&= y^o[n-1] + y^\epsilon[n-1] + \Delta t x^o[n] + \Delta t x^\epsilon[n] \\
&= y^o[n-1] + (1 - \hat\omega_\mathrm{sat} + \hat\omega_\mathrm{sat})y^\epsilon[n-1] + \Delta t x^o[n] + \Delta t x^\epsilon[n] \\
y^o[n] &= y^o[n-1] + \hat\omega_\mathrm{sat}y^\epsilon[n-1] + \Delta t x^o[n] \\
y^\epsilon[n] &= (1 - \hat\omega_\mathrm{sat})y^\epsilon[n-1] + \Delta t x^\epsilon[n]
\end{aligned}
```
In the last step, I wrote down how to generate the integrated output $`y[n]`$ from the input $`x[n]`$ in decomposed form. Here, $`y^\epsilon[n]`$ is a well-behaved time series that does not diverge for long times.
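The recursion above can be transcribed directly; this is a sketch where `omega_sat` plays the role of $`\hat\omega_\mathrm{sat}`$, and the key property is that the two parts always recombine to the plain cumulative sum:

```python
import numpy as np

def decomposed_cumsum(x_offset, x_fluct, dt, omega_sat):
    """Side-by-side integration of offsets and fluctuations.

    Implements the recursion above: at every step, a fraction `omega_sat`
    of the accumulated fluctuation is moved to the offset part, so the
    fluctuation part stays bounded while the sum of the two parts equals
    the plain cumulative sum exactly.
    """
    y_o = np.zeros_like(x_offset)
    y_e = np.zeros_like(x_fluct)
    yo_prev, ye_prev = 0.0, 0.0
    for n in range(len(x_offset)):
        y_o[n] = yo_prev + omega_sat * ye_prev + dt * x_offset[n]
        y_e[n] = (1 - omega_sat) * ye_prev + dt * x_fluct[n]
        yo_prev, ye_prev = y_o[n], y_e[n]
    return y_o, y_e

rng = np.random.default_rng(0)
x_o, x_e = rng.normal(size=1000), rng.normal(size=1000)
dt = 0.25
y_o, y_e = decomposed_cumsum(x_o, x_e, dt, omega_sat=1e-3)
```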
![image](/uploads/9c207c321b3290e5d4c11dc7eae6748f/image.png)
You can see that the cutoff (saturation at around 0.1 mHz) is reproduced well in the two components: the offset part accounts for all frequencies below $`f_\mathrm{sat}`$, and the fluctuation part for all those above. There is some mixing, though, and we need to investigate whether it is of any importance.
## Issue #75: Implement TMI TTL

https://gitlab.in2p3.fr/lisa-simulation/instrument/-/issues/75 (Jean-Baptiste Bayle, j2b.bayle@gmail.com, 2022-08-24)

We need to add TTL couplings for the TMI:
* Agree on the model
* Add (potential) missing jitter time series on MOSAs
* Add missing TTL coupling coefficients
* Compute TMI TTLs
* Add them to the right places
To verify the implementation, let's try to subtract them from the data, assuming perfect knowledge of the TTL coefficients.
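The verification step can be sketched as follows; the jitter names, coupling model, and coefficient values are purely illustrative, not the model to be agreed on:

```python
import numpy as np

rng = np.random.default_rng(7)
size = 5000

# Hypothetical per-MOSA angular jitters and linear TTL coupling coefficients
jitter_eta = rng.normal(size=size)
jitter_phi = rng.normal(size=size)
c_eta, c_phi = 2.3e-3, -1.1e-3  # illustrative values

signal = rng.normal(size=size)  # stand-in for the rest of the TMI signal
ttl = c_eta * jitter_eta + c_phi * jitter_phi
measurement = signal + ttl

# With perfect knowledge of the coefficients, the subtraction is exact
residual = measurement - (c_eta * jitter_eta + c_phi * jitter_phi)
```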
@swetashah @guwann @jslutsky
Milestone: v1.2

## Issue #70: Add Sphinx documentation and tutorials

https://gitlab.in2p3.fr/lisa-simulation/instrument/-/issues/70 (Jean-Baptiste Bayle, j2b.bayle@gmail.com, 2022-08-24)

The LISANode Simulation Model document is now starting to diverge from the LISA Instrument model, and does not contain any technical reference, tutorial, or Quickstart section. Documentation has become an urgent need if the tool is to be used by a larger audience.
Instead, we should have automatically generated (and versioned) documentation using Sphinx.
It should contain:
* Getting started section
  * A Quickstart page, covering installation and typical usage (a simple simulation) with simple parametrization (sampling and size, orbits, glitches, GWs). A note on laser beam modeling (offsets and fluctuations, carrier and sideband). A note on the measurements one gets (quantities and units), and how to write the results to a measurement file. How to simply plot the beatnotes.
  * The underlying model, taking mostly what's in the LISANode Simulation Model document (conventions, time frames, laser beam model, propagation, interferometers, etc.). Maybe split these sections into various pages, and move them to the reference.
  * A list of available noises and their default parametrizations.
* Reference section
  * Structure of the measurement file.
  * Reference for `Instrument`.
  * DSP: mainly just the API reference for `timeshift()`.
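A minimal `conf.py` sketch for such a setup follows; the extension choices and metadata are suggestions on our part, not decisions made in this issue:

```python
# docs/conf.py -- minimal sketch of a Sphinx configuration
# (hypothetical metadata; extension choices are suggestions, not decisions)
project = "LISA Instrument"

extensions = [
    "sphinx.ext.autodoc",   # pull the API reference from docstrings
    "sphinx.ext.napoleon",  # support Google/Numpy docstring styles
    "sphinx.ext.viewcode",  # link reference pages to the source code
    "sphinx.ext.mathjax",   # render the model equations
]

# Include all documented and undocumented members in the reference
autodoc_default_options = {"members": True, "undoc-members": True}
```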
Milestone: v1.2

## Issue #68: More `tcb_timer_deviations` required to get information about the clock drifts

https://gitlab.in2p3.fr/lisa-simulation/instrument/-/issues/68 (Jan Niklas Reinhardt, 2022-06-10)

The `tcb_timer_deviations` of the clock of the communicating spacecraft are used in the Kalman filter to synchronize the clocks with the TCB.
But we also need information about the linear and quadratic drifts of that particular clock (USO frequency offset and USO linear frequency drift). In reality, this information is available because after a few weeks we can simply fit the timer deviation measurements (they come in once per telemetry contact, roughly once per day). For the simulation, however, this means that we would have to simulate weeks of phasemeter data (and later process it) only to obtain those few timer deviation measurements, which is unfeasible.
Instead, I would suggest decoupling the simulation of the `tcb_timer_deviations` from the simulation of the phasemeter data, so that we can simulate, say, 20 days of `tcb_timer_deviations` before the simulation of the phasemeter data starts.
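A quick sketch of why a short decoupled pre-simulation suffices; the clock-model values are illustrative, and we simply fit a quadratic to daily timer deviations to recover the drift parameters:

```python
import numpy as np

# Hypothetical decoupled pre-simulation: 20 days of once-per-day timer deviations
t_day = 86400.0  # s
days = np.arange(20.0)
t = days * t_day

# Ground-truth clock model (illustrative values)
offset, freq_offset, freq_drift = 1e-3, 1e-8, 1e-15
deviations = offset + freq_offset * t + freq_drift * t**2 / 2

# Fitting a quadratic (in units of days, for better conditioning) recovers
# the USO frequency offset and linear frequency drift
quad, lin, const = np.polyfit(days, deviations, deg=2)
freq_offset_est = lin / t_day
freq_drift_est = 2 * quad / t_day**2
```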
## Issue #47: Use properties to transform `ForEachObject` inputs

https://gitlab.in2p3.fr/lisa-simulation/instrument/-/issues/47 (Jean-Baptiste Bayle, j2b.bayle@gmail.com, 2022-08-24)

We currently convert parameters given to the `Instrument` init method (which can be strings for default or pre-defined values, scalar values shared between all MOSAs or spacecraft, a dictionary, or a `ForEachObject` instance) to a `ForEachObject` instance.
This breaks when the user modifies the `Instrument` instance after initialization.
Two solutions:
* Use properties to make sure that we correctly set those attributes at all times (performing verifications and conversions, maybe even reading and interpolating files).
  * Verifications, conversions, and other pre-processing steps are performed multiple times if one changes the parameters multiple times before running the simulation.
  * On the other hand, parameters are valid at all times (we check them when they are set), so users don't have to run a simulation to see error messages.
  * We can prevent users from mutating parameters once the simulation has been run (and prevent potential mismatches between the simulation results and the parameters in the `Instrument` instance).
* Delay those steps until simulation time, and only keep the raw parametrization until `simulate()` is called.
  * Verifications, conversions, and other pre-processing steps are performed only once, before the simulation.
  * Initialization and parametrization are computationally inexpensive; all the heavy lifting is performed in `simulate()`.
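A minimal sketch of the first option (property-based validation); the class, attribute names, and validation rules are illustrative, not the actual `Instrument` API:

```python
class Instrument:
    """Sketch of the property-based approach (names are illustrative)."""

    MOSAS = ("12", "23", "31", "13", "32", "21")

    def __init__(self, laser_asd=1e-13):
        self.laser_asd = laser_asd  # goes through the property setter

    @property
    def laser_asd(self):
        return self._laser_asd

    @laser_asd.setter
    def laser_asd(self, value):
        # Conversion: accept a scalar or a per-MOSA dictionary...
        if not isinstance(value, dict):
            value = {mosa: value for mosa in self.MOSAS}
        # ...and validation happens immediately, not at simulate() time
        if set(value) != set(self.MOSAS) or any(v < 0 for v in value.values()):
            raise ValueError("laser_asd must be a non-negative scalar or per-MOSA dict")
        self._laser_asd = value

instrument = Instrument()
instrument.laser_asd = 2e-13  # mutating after init stays consistent
```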
Milestone: v1.2

## Issue #33: Change the noise parametrization

https://gitlab.in2p3.fr/lisa-simulation/instrument/-/issues/33 (Jean-Baptiste Bayle, j2b.bayle@gmail.com, 2022-08-29)

We currently parametrize instrumental noises with one argument in the `Instrument` init method for:
* the ASD (amplitude)
* optionally, one or more knee frequencies.
Each of these values is converted to a `ForEachMOSA` or `ForEachSC` object, so that the user can provide a single value shared by all objects, a dictionary of different values, or a function of the object (MOSA or SC) index.
This is easy to use, but does not allow
* to change the noise spectral shape,
* to provide custom noise generation methods,
* to provide already-generated noise time series (containing correlation between various objects or noises).
We propose to change this noise parametrization as follows. Each noise will be parametrized by exactly one argument in the `Instrument` init method. The value can be:
* an array or a `ForEachObject` instance of arrays for already-generated noise time series,
* a function or a `ForEachObject` instance of functions, taking `fs` and `size` as arguments, if LISA Instrument should generate the noise time series at runtime. The defaults will be the methods defined in `lisainstrument.noises`, but users can plug in their own noise generation methods (or tweak the levels and knee frequencies of the methods provided with the package).
This also avoids defining the default noise parameters (ASDs and knee frequencies) both in `Instrument` and in the noise generation methods of `lisainstrument.noises`, as well as the use of `None` for the default set of noise levels.
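A minimal sketch of the proposed one-argument interface; the helper names are ours, and the real implementation would also handle `ForEachObject` instances:

```python
import numpy as np

def resolve_noise(parameter, fs, size):
    """Resolve a single noise parameter into a time series.

    The parameter may be an already-generated array, or a callable taking
    `fs` and `size` (the proposed interface; per-MOSA `ForEachObject`
    handling is omitted for brevity).
    """
    if callable(parameter):
        return np.asarray(parameter(fs=fs, size=size))
    return np.asarray(parameter)

# A custom generation method with a tweaked level, as a user could provide
def my_laser_noise(fs, size, asd=1e-13):
    return asd * np.sqrt(fs / 2) * np.random.default_rng(1).normal(size=size)

precomputed = resolve_noise(np.zeros(100), fs=4.0, size=100)  # array input
generated = resolve_noise(my_laser_noise, fs=4.0, size=100)   # callable input
```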
Milestone: v1.2

## Issue #8: Handle instrument configuration in another object

https://gitlab.in2p3.fr/lisa-simulation/instrument/-/issues/8 (Jean-Baptiste Bayle, j2b.bayle@gmail.com, 2022-08-24)

The configuration of the instrument and the noises should be handled by another object.

Milestone: v1.2