extract: Data extraction and interpolation routine.
Code author: Peter Kraus
The function dgpost.utils.extract.extract() processes the specification below
in order to extract the required data from the supplied datagram.
- pydantic model dgbowl_schemas.dgpost.recipe_1_1.extract.Extract
JSON schema:
{
  "title": "Extract",
  "type": "object",
  "properties": {
    "into": {"title": "Into", "type": "string"},
    "from": {"title": "From", "type": "string"},
    "at": {"$ref": "#/definitions/At"},
    "constants": {"title": "Constants", "type": "array", "items": {"$ref": "#/definitions/Constant"}},
    "columns": {"title": "Columns", "type": "array", "items": {"$ref": "#/definitions/Column"}}
  },
  "required": ["into"],
  "additionalProperties": false,
  "definitions": {
    "At": {
      "title": "At",
      "type": "object",
      "properties": {
        "steps": {"title": "Steps", "type": "array", "items": {"type": "string"}},
        "indices": {"title": "Indices", "type": "array", "items": {"type": "integer"}},
        "timestamps": {"title": "Timestamps", "type": "array", "items": {"type": "number"}}
      },
      "additionalProperties": false
    },
    "Constant": {
      "title": "Constant",
      "type": "object",
      "properties": {
        "value": {"title": "Value"},
        "as": {"title": "As", "type": "string"},
        "units": {"title": "Units", "type": "string"}
      },
      "required": ["as"],
      "additionalProperties": false
    },
    "Column": {
      "title": "Column",
      "type": "object",
      "properties": {
        "key": {"title": "Key", "type": "string"},
        "as": {"title": "As", "type": "string"}
      },
      "required": ["key", "as"],
      "additionalProperties": false
    }
  }
}
- field into: str [Required]
  Validated by check_one_input
- field from_: Optional[str] = None (alias 'from')
  Validated by check_one_input
- field at: Optional[dgbowl_schemas.dgpost.recipe_1_1.extract.At] = None
  Validated by check_one_input
- field constants: Optional[Sequence[dgbowl_schemas.dgpost.recipe_1_1.extract.Constant]] = None
  Validated by check_one_input
- field columns: Optional[Sequence[dgbowl_schemas.dgpost.recipe_1_1.extract.Column]] = None
  Validated by check_one_input
- validator check_one_input » all fields
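For illustration, a recipe section can be validated against this model directly. The following is a minimal sketch, assuming a pydantic v1-style constructor in which the alias 'from' is accepted in place of the attribute from_:

from dgbowl_schemas.dgpost.recipe_1_1.extract import Extract

# Construct the model from a parsed recipe section; the alias means the
# YAML key "from" populates the attribute "from_".
ex = Extract(**{
    "into": "df",
    "from": "norm",
    "at": {"steps": ["a"]},
    "columns": [{"key": "raw->T_f", "as": "rawT"}],
})
assert ex.into == "df"
assert ex.from_ == "norm"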
Note
The keys from and into are not processed by extract(); they should be
used by its caller to supply the requested datagram and to assign the
returned pd.DataFrame into the correct variable. A sketch of this follows below.
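A minimal sketch of the caller's bookkeeping described in this note; the datagrams and tables dicts, and passing spec as a plain dict, are assumptions made for illustration:

from dgpost.utils.extract import extract

datagrams = {}  # populated by a prior load step: name -> loaded datagram
tables = {}     # name -> pd.DataFrame

spec = {"into": "df", "from": "norm",
        "at": {"steps": ["a"]},
        "columns": [{"key": "raw->T_f", "as": "rawT"}]}

obj = datagrams[spec.pop("from")]    # resolve "from" to a loaded datagram
target = spec.pop("into")            # note which variable gets the result
tables[target] = extract(obj, spec)  # extract() never sees those two keys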
Handling of sparse data depends on the extraction format specified:
- for direct extraction, if the value is not present at any of the timesteps specified in at, a NaN is added instead
- for interpolation, if a value is missing at any of the timesteps specified in at or in the pd.DataFrame index, that timestep is masked and the interpolation is performed from the neighbouring points

Interpolation of uc.ufloat values is performed separately for the nominal
and error components, as sketched below.
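The separate treatment of the two components can be pictured with a short sketch; this illustrates the idea using numpy and the uncertainties package, and is not dgpost's actual routine:

import numpy as np
from uncertainties import unumpy

# Known points: timestamps and values carrying uncertainties.
t = np.array([0.0, 10.0, 20.0])
y = unumpy.uarray([1.0, 2.0, 4.0], [0.1, 0.2, 0.4])

# Interpolate nominal values and standard deviations separately,
# then recombine into an array of ufloats.
t_new = np.array([5.0, 15.0])
nom = np.interp(t_new, t, unumpy.nominal_values(y))
err = np.interp(t_new, t, unumpy.std_devs(y))
y_new = unumpy.uarray(nom, err)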
Units are added into the attrs dictionary of the pd.DataFrame on a
per-column basis.
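The unit of a column can then be looked up from attrs; the exact layout shown here (a {column: unit} mapping under a "units" key) is an assumption for illustration:

import pandas as pd

# Sketch of per-column unit metadata stored on the DataFrame.
df = pd.DataFrame({"rawT": [298.1, 300.4, 301.2]})
df.attrs["units"] = {"rawT": "K"}  # assumed layout
print(df.attrs["units"]["rawT"])   # -> K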
Data from multiple datagrams can be combined into one pd.DataFrame
using a YAML such as the following example:

load:
  - as: norm
    path: normalized.dg.json
  - as: sparse
    path: sparse.dg.json
extract:
  - into: df
    from: norm
    at:
      steps: ["a"]
    columns:
      - key: raw->T_f
        as: rawT
  - into: df
    from: sparse
    at:
      steps: ["b1", "b2", "b3"]
    columns:
      - key: derived->xout->*
        as: xout
In this example, the pd.DataFrame is created with an index corresponding to
the timestamps of steps: ["a"] of the datagram. The values specified using
columns in the first section are entered directly, after the column names
are renamed. The data pulled out of the datagram in the second section
using the prescription in at are interpolated onto the index of the
existing pd.DataFrame.
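The same two-section recipe can be sketched as direct calls to extract(); passing the first table's index to the second call, and merging the results with combine_first, are assumptions about how a caller might wire this together:

import json
from dgpost.utils.extract import extract

with open("normalized.dg.json") as inf:
    norm = json.load(inf)
with open("sparse.dg.json") as inf:
    sparse = json.load(inf)

# First section: the index is built from the timesteps of steps ["a"].
df = extract(norm, {"at": {"steps": ["a"]},
                    "columns": [{"key": "raw->T_f", "as": "rawT"}]})

# Second section: data from steps b1-b3 is interpolated onto df's index.
other = extract(sparse, {"at": {"steps": ["b1", "b2", "b3"]},
                         "columns": [{"key": "derived->xout->*", "as": "xout"}]},
                index=df.index)
df = df.combine_first(other)  # merge the two tables on the shared index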
- dgpost.utils.extract.extract(obj, spec, index=None)
  Return type: pd.DataFrame