- use relative imports
- use tf.keras everywhere (and ensure v2)
- `weights_ensembling`:
* use `Patches` and `PatchEncoder` from .models
* drop TF1 stuff
  * make function / CLI more flexible (expect a list of
    checkpoint dirs instead of a single top-level directory)
- train for `classification`: delegate to `weights_ensembling.run_ensembling`
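A minimal sketch of the weight averaging such an ensembling run might perform (the `run_ensembling` internals are not shown here; the helper name and the plain-average strategy are assumptions):

```python
import numpy as np

def average_weights(weight_sets):
    """Average corresponding weight arrays from several checkpoints.

    `weight_sets` is a list (one entry per checkpoint dir) of lists of
    numpy arrays, as returned by `model.get_weights()` in tf.keras.
    The averaged list can be applied via `model.set_weights()`.
    """
    if not weight_sets:
        raise ValueError("need at least one checkpoint")
    n = len(weight_sets)
    # zip pairs up the i-th weight array of every checkpoint
    return [sum(ws) / n for ws in zip(*weight_sets)]
```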
major conflicts resolved manually:
- branches for non-`light` segmentation already removed in main
- Keras/TF setup and no TF1 sessions, esp. in new ModelZoo
- changes to binarizer and its CLI (`mode`, `overwrite`, `run_single()`)
- writer: `build...` w/ kwargs instead of positional
- training for segmentation/binarization/enhancement tasks:
* drop unused `generate_data_from_folder()`
  * simplify `preprocess_imgs()`: turn `preprocess_img()`, `get_patches()`
    and `get_patches_num_scale_new()` into generators, writing
    result files only in the caller (top-level loop) instead of passing
    output directories and a file counter
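The generator split could look roughly like this (a sketch; the real `get_patches()` takes more parameters, e.g. scaling, than shown, and the caller writes actual files instead of a callback):

```python
import numpy as np

def get_patches(img, patch_h, patch_w):
    """Yield fixed-size patches of `img` one at a time (generator),
    instead of writing them out with an output directory and counter."""
    h, w = img.shape[:2]
    for y in range(0, h - patch_h + 1, patch_h):
        for x in range(0, w - patch_w + 1, patch_w):
            yield img[y:y + patch_h, x:x + patch_w]

def preprocess_imgs(imgs, patch_h, patch_w, write_result):
    """Top-level loop: only the caller decides how to persist results."""
    for i, img in enumerate(imgs):
        for j, patch in enumerate(get_patches(img, patch_h, patch_w)):
            write_result(i, j, patch)
```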
- training for new OCR task:
  * `train`: move keys into additional `config_params` where they belong
    (resp. conditioned on existing keys), w/ better documentation
  * `train`: add new keys as kwargs to `run()` to make them usable
* `utils`: instead of custom data loader `data_gen_ocr()`, re-use
existing `preprocess_imgs()` (for cfg capture and top-level loop),
but extended w/ new kwargs and calling new `preprocess_img_ocr()`;
the latter as single-image generator (also much simplified)
* `train`: use tf.data loader pipeline from that generator w/ standard
mechanisms for batching, shuffling, prefetching etc.
* `utils` and `train`: instead of `vectorize_label`, use `Dataset.padded_batch`
* add TensorBoard callback and re-use our checkpoint callback
* also use standard Keras top-level loop for training
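`Dataset.padded_batch` removes the need for a hand-rolled `vectorize_label`: in the tf.data pipeline this is the standard `tf.data.Dataset.from_generator(...).shuffle(...).padded_batch(...).prefetch(...)` chain. A numpy emulation of what the padding step does (illustration only, not the actual tf.data code):

```python
import numpy as np

def padded_batch(label_seqs, pad_value=0):
    """Emulate tf.data's Dataset.padded_batch for 1-D label sequences:
    pad every sequence in the batch to the length of the longest one."""
    max_len = max(len(s) for s in label_seqs)
    return np.stack([
        np.pad(np.asarray(s), (0, max_len - len(s)),
               constant_values=pad_value)
        for s in label_seqs
    ])
```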
still problematic (substantially unresolved):
- `Patches` now only w/ fixed implicit size
(ignoring training config params)
- `PatchEncoder` now only w/ fixed implicit num patches and projection dim
(ignoring training config params)
in `return_boxes_of_images_by_order_of_reading_new`,
when the next multicol separator ends in the same column,
do not recurse into the subspan if the next one starts earlier
(but first continue with the top span to the right)
- unify `generate_data_from_folder_training` w/ `..._evaluation`
- instead of recreating the array after every batch, just zero it out
- cast image results to uint8 instead of uint16
- cast categorical results to float instead of int
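The three points above as a sketch (shapes are placeholders; the buffers are allocated once with the final dtypes and zeroed in place):

```python
import numpy as np

# pre-allocate once with the final dtypes ...
batch_imgs = np.zeros((8, 224, 224, 3), dtype=np.uint8)   # images: uint8, not uint16
batch_lbls = np.zeros((8, 10), dtype=np.float32)          # categorical: float, not int

def reset_batch():
    """Zero the buffers in place instead of recreating them each batch."""
    batch_imgs[:] = 0
    batch_lbls[:] = 0
```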
- make more config_params keys dependent on each other
- re-order accordingly
- in main, initialise them (as kwargs), so sacred actually
  allows overriding them via a named config file
- `index_start`: re-introduce cfg key, pass to Keras `Model.fit`
as `initial_epoch`
- make config keys `index_start` and `dir_of_start_model` dependent
on `continue_training`
- improve description
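A sketch of how the key dependencies might be resolved (the exact semantics of `index_start` and `dir_of_start_model` are assumptions); the resulting `index_start` would then be passed to Keras as `Model.fit(..., initial_epoch=...)`:

```python
def resolve_config(config_params):
    """Make `index_start` / `dir_of_start_model` dependent on
    `continue_training` (a sketch of the intended key dependencies)."""
    cfg = dict(config_params)
    if not cfg.get("continue_training", False):
        # without continuation these keys are meaningless
        cfg["index_start"] = 0
        cfg.pop("dir_of_start_model", None)
    return cfg
```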
- `utils.provide_patches`: split up loop into
* `utils.preprocess_img` (single img function)
* `utils.preprocess_imgs` (top-level loop)
- capture exceptions for all cases (not just some)
at top level and with informative logging
- avoid repeating / delegating config keys in several
  places: pass them only as kwargs to `preprocess_img()`
- read files into memory only once, then re-use
- improve readability (avoiding long lines, repeated code)
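The split plus top-level exception capture could look like this (a sketch; `preprocess_img` here stands for the single-image generator, `write_result` for whatever the caller does with each result):

```python
import logging

log = logging.getLogger(__name__)

def preprocess_imgs(paths, preprocess_img, write_result):
    """Top-level loop: delegate to the single-image function and
    capture exceptions for every case with informative logging,
    so one bad file does not abort the whole run."""
    for path in paths:
        try:
            for result in preprocess_img(path):
                write_result(path, result)
        except Exception:
            log.exception("preprocessing failed for %s", path)
```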
when parsing `PrintSpace` or `Border` from PAGE-XML,
- use `lxml` XPath instead of nested loops
- convert points to polygons directly
(instead of painting on canvas and retrieving contours)
- pass result bbox in slice notation
(instead of xywh)
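The direct points-to-polygon conversion and slice-notation bbox might look like this (a sketch; the lxml XPath lookup that yields the `points` attribute is omitted, and the helper names are assumptions):

```python
import numpy as np

def points_to_polygon(points_str):
    """Convert a PAGE-XML `points` attribute ("x1,y1 x2,y2 ...")
    directly to an (N, 2) polygon array, instead of painting on a
    canvas and retrieving contours."""
    return np.array([tuple(map(int, p.split(",")))
                     for p in points_str.split()])

def polygon_to_slice(poly):
    """Return the bounding box in numpy slice notation (rows, cols),
    instead of xywh."""
    xmin, ymin = poly.min(axis=0)
    xmax, ymax = poly.max(axis=0)
    return np.s_[ymin:ymax + 1, xmin:xmax + 1]
```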
when matching files in `dir_images` by XML path name stem,
* use `dict` instead of `list` to assign reliably
* filter out `.xml` files (so input directories can be mixed)
* show informative warnings for files which cannot be matched
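The stem matching could be sketched as follows (function name assumed):

```python
import logging
from pathlib import Path

log = logging.getLogger(__name__)

def match_images(dir_images, xml_paths):
    """Match XML files to images by path-name stem using a dict
    (reliable assignment), skipping `.xml` files in the image directory
    (so input directories can be mixed) and warning about anything
    that cannot be matched."""
    images = {p.stem: p
              for p in Path(dir_images).iterdir()
              if p.suffix.lower() != ".xml"}
    matched = {}
    for xml_path in xml_paths:
        stem = Path(xml_path).stem
        if stem in images:
            matched[xml_path] = images[stem]
        else:
            log.warning("no image found for %s", xml_path)
    return matched
```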
- do not restrict TF version, but depend on tf-keras and
set `TF_USE_LEGACY_KERAS=1` to avoid Keras 3 behaviour
- relax Numpy version requirement up to v2
- relax Torch version requirement
- drop TF1 session management code
- drop TF1 config in favour of TF2 config code for memory growth
- training.*: also simplify and limit line length
- training.train: always train with TensorBoard callback
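The TF2 setup replacing the TF1 session/ConfigProto code, together with the legacy-Keras switch, roughly (a sketch; this is startup/config code, not part of any training loop):

```python
import os
# must be set before importing tensorflow, so tf.keras resolves to
# tf-keras (Keras 2) instead of Keras 3
os.environ["TF_USE_LEGACY_KERAS"] = "1"

import tensorflow as tf

# TF2 replacement for the old TF1 ConfigProto memory-growth setting
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)
```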