actio_python_utils.utils.flatten_cfg

actio_python_utils.utils.flatten_cfg(key, d={'data': {'directory': 'data'}, 'db': {'cursor_factory': <class 'actio_python_utils.database_functions.LoggingCursor'>, 'service': 'bioinfo_data_psql'}, 'loading': {'escape': '\\'"\\'', 'quote': '\\'"\\'', 'sanitize': False}, 'logging': {'format': '%(asctime)s - %(name)s - %(levelname)s - %(message)s', 'level': 'INFO', 'loggers_to_ignore': ['parso.cache', 'parso.python.diff'], 'names': {'db': 'sql_debug'}}, 'output': 'output', 'spark': {'codecs': {'glow': 'io.projectglow.sql.util.BGZFCodec'}, 'cores': '*', 'jdbc': '/usr/share/java/postgresql-42.6.0.jar', 'memory': '1g', 'packages': {'excel': 'com.crealytics:spark-excel_2.12:3.3.1_0.18.7', 'glow': 'io.projectglow:glow-spark3_2.12:1.2.1', 'xml': 'com.databricks:spark-xml_2.12:0.15.0'}}}, sep='.')[source]

Flatten the nested value stored at key in a dict, in place

Parameters:
  • key (Hashable) – The key corresponding to the value to flatten

  • d (MutableMapping[Hashable, dict | list], default: the package configuration dict shown in the signature above) – The dict to use

  • sep (str, default: '.') – The separator used to join key with its nested keys

Raises:

TypeError – If d[key] is neither a dict nor a list

Return type:

None
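The behavior described above can be sketched as follows. Note this is an illustrative re-implementation written from the documented contract (in-place flattening, sep-joined keys, TypeError on non-dict/non-list values), not the library's actual code, which may differ in details such as recursion depth:

```python
from collections.abc import MutableMapping

def flatten_cfg(key, d, sep="."):
    """Illustrative sketch: flatten d[key] into sep-joined top-level keys."""
    value = d.pop(key)
    if isinstance(value, MutableMapping):
        items = value.items()
    elif isinstance(value, list):
        # list entries are keyed by their index
        items = enumerate(value)
    else:
        raise TypeError(f"d[{key!r}] must be a dict or list")
    for subkey, subvalue in items:
        new_key = f"{key}{sep}{subkey}"
        d[new_key] = subvalue
        if isinstance(subvalue, (MutableMapping, list)):
            # recurse so deeply nested values are flattened as well
            flatten_cfg(new_key, d, sep)

cfg = {"db": {"service": "x", "opts": {"timeout": 5}}, "output": "out"}
flatten_cfg("db", cfg)
# cfg now holds "db.service" and "db.opts.timeout"; "db" itself is removed
```

As in the documented signature, the function mutates d and returns None rather than building a new mapping.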