datazen.parsing
/home/vkottler/src/vkottler/workspace/datazen/datazen/parsing.py

datazen - APIs for loading raw data from files.

 
Modules
jinja2
logging
time

 
Functions
dedup_dict_lists(data: Dict[Any, Any]) -> Dict[Any, Any]
Find list elements in a dictionary and remove duplicate entries; the
original lists are mutated in place.
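The documented de-duplication behavior can be sketched as follows. This is an illustrative re-implementation of the contract described above, not the datazen source:

```python
from typing import Any, Dict


def dedup_dict_lists(data: Dict[Any, Any]) -> Dict[Any, Any]:
    """
    Sketch: walk a dictionary, de-duplicate any list values in place
    (preserving first-seen order) and recurse into nested dictionaries.
    """
    for value in data.values():
        if isinstance(value, list):
            seen = []
            for item in value:
                if item not in seen:
                    seen.append(item)
            value[:] = seen  # slice-assign so the original list mutates
        elif isinstance(value, dict):
            dedup_dict_lists(value)
    return data


result = dedup_dict_lists({"a": [1, 1, 2], "b": {"c": ["x", "x"]}})
# result == {"a": [1, 2], "b": {"c": ["x"]}}
```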
load(path: Union[pathlib.Path, str, NoneType], variables: Dict[str, Any], dict_to_update: Dict[str, Any], expect_overwrite: bool = False, is_template: bool = True, logger: logging.Logger = <Logger datazen.parsing (WARNING)>, **kwargs) -> vcorelib.io.types.LoadResult
Load raw file data and meld it into an existing dictionary. If the file
is a template, render it first using the provided variables.
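The "meld" step can be illustrated with a minimal recursive merge. The parameter names mirror the signature above, but the logic is a sketch of the general technique, not the library's implementation:

```python
from typing import Any, Dict


def meld(
    dict_to_update: Dict[str, Any],
    new_data: Dict[str, Any],
    expect_overwrite: bool = False,
) -> None:
    """
    Sketch: combine nested dictionaries key by key; replacing an existing
    non-dict value is only allowed when overwrites are expected.
    """
    for key, value in new_data.items():
        if (
            key in dict_to_update
            and isinstance(dict_to_update[key], dict)
            and isinstance(value, dict)
        ):
            meld(dict_to_update[key], value, expect_overwrite)
        else:
            assert key not in dict_to_update or expect_overwrite, key
            dict_to_update[key] = value


config: Dict[str, Any] = {"app": {"name": "demo"}}
meld(config, {"app": {"port": 8080}})
# config == {"app": {"name": "demo", "port": 8080}}
```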
set_file_hash(hashes: Dict[str, Any], path: Union[pathlib.Path, str, NoneType], set_new: bool = True) -> bool
Check a file's hash against a dictionary of known hashes and update the
entry on a miss.
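A hash-cache check of this shape can be sketched with the standard library. The return convention (True on a cache hit) and the use of MD5 are assumptions for illustration, not details taken from the library:

```python
import hashlib
import os
import tempfile
from pathlib import Path
from typing import Any, Dict


def set_file_hash(
    hashes: Dict[str, Any], path: str, set_new: bool = True
) -> bool:
    """
    Sketch: hash the file's contents, compare against the stored entry,
    and record the new hash on a miss when set_new is True.
    """
    digest = hashlib.md5(Path(path).read_bytes()).hexdigest()
    matched = hashes.get(path) == digest
    if not matched and set_new:
        hashes[path] = digest
    return matched


# Demo against a throwaway file: first call misses, second call hits.
with tempfile.NamedTemporaryFile(delete=False) as handle:
    handle.write(b"sample data")
cache: Dict[str, Any] = {}
miss = set_file_hash(cache, handle.name)  # False: miss, hash recorded
hit = set_file_hash(cache, handle.name)   # True: hash now matches
os.unlink(handle.name)
```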
template_preprocessor_factory(variables: Dict[str, Any], is_template: bool, stack: contextlib.ExitStack) -> Callable[[Union[TextIO, _io.StringIO]], Union[TextIO, _io.StringIO]]
Create a stream-processing function for data decoding.
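The factory pattern above (variables and a template flag captured in a closure, cleanup deferred to an `ExitStack`) can be sketched as follows. `string.Template` stands in for the Jinja2 rendering the real module performs, and the body is an assumption about shape, not the library source:

```python
import contextlib
from io import StringIO
from string import Template
from typing import Any, Callable, Dict, TextIO, Union

DataStream = Union[TextIO, StringIO]
StreamProcessor = Callable[[DataStream], DataStream]


def template_preprocessor_factory(
    variables: Dict[str, Any],
    is_template: bool,
    stack: contextlib.ExitStack,
) -> StreamProcessor:
    """
    Sketch: return a function that, when templating is enabled, renders a
    stream with the given variables and registers the resulting stream
    for cleanup on the provided ExitStack.
    """

    def processor(stream: DataStream) -> DataStream:
        if not is_template:
            return stream
        rendered = Template(stream.read()).safe_substitute(variables)
        return stack.enter_context(StringIO(rendered))

    return processor


with contextlib.ExitStack() as stack:
    process = template_preprocessor_factory({"name": "demo"}, True, stack)
    out = process(StringIO("app: $name")).read()
# out == "app: demo"
```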

 
Data
ARBITER = <vcorelib.io.arbiter.DataArbiter object>
DataStream = typing.Union[typing.TextIO, _io.StringIO]
GenericDict = typing.Dict[typing.Any, typing.Any]
GenericStrDict = typing.Dict[str, typing.Any]
LOG = <Logger datazen.parsing (WARNING)>
Pathlike = typing.Union[pathlib.Path, str, NoneType]
StreamProcessor = typing.Callable[[typing.Union[typing.TextIO, _io...gIO]], typing.Union[typing.TextIO, _io.StringIO]]