- make more config_params keys dependent on each other
- re-order accordingly
- in main, initialise them (as kwargs), so Sacred actually
allows overriding them via a named config file
- `index_start`: re-introduce cfg key, pass to Keras `Model.fit`
as `initial_epoch`
- make config keys `index_start` and `dir_of_start_model` dependent
on `continue_training`
- improve description
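The dependency between `continue_training`, `index_start` and `dir_of_start_model` could be resolved along these lines — a minimal sketch where only the key names come from the changelog; the function name and the exact defaulting/validation rules are assumptions. The resolved `index_start` is then what gets forwarded to Keras as `Model.fit(..., initial_epoch=index_start)`.

```python
# Hypothetical resolver for dependent config keys; only the key names
# (`continue_training`, `index_start`, `dir_of_start_model`) are taken
# from the changelog, the logic itself is an illustrative assumption.
def resolve_config(continue_training=False, index_start=None,
                   dir_of_start_model=None):
    """Fill in config keys that depend on `continue_training`."""
    if continue_training:
        if index_start is None:
            raise ValueError("continue_training requires index_start")
        if dir_of_start_model is None:
            raise ValueError("continue_training requires dir_of_start_model")
    else:
        # fresh training: start at epoch 0, no start model to load
        index_start = 0
        dir_of_start_model = None
    return {"index_start": index_start,
            "dir_of_start_model": dir_of_start_model}
```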
- `utils.provide_patches`: split up loop into
* `utils.preprocess_img` (single img function)
* `utils.preprocess_imgs` (top-level loop)
- capture exceptions for all cases (not just some)
at top level and with informative logging
- avoid repeating / delegating config keys in several
places: only as kwargs to `preprocess_img()`
- read files into memory only once, then re-use
- improve readability (avoiding long lines, repeated code)
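The loop split with top-level exception capture could look roughly like this — a sketch in which the two function names come from the changelog, while the bodies and the dict-of-images calling convention are assumptions:

```python
import logging

log = logging.getLogger(__name__)

def preprocess_img(img, **kwargs):
    # single-image function (placeholder body; the real one extracts
    # patches and receives all config keys only here, as kwargs)
    if img is None:
        raise ValueError("empty image")
    return img

def preprocess_imgs(imgs, **kwargs):
    """Top-level loop: capture exceptions for every image, with
    informative logging, instead of aborting the whole run."""
    results = []
    for name, img in imgs.items():
        try:
            results.append(preprocess_img(img, **kwargs))
        except Exception as err:
            log.error("failed to preprocess %s: %s", name, err)
    return results
```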
when parsing `PrintSpace` or `Border` from PAGE-XML:
- use `lxml` XPath instead of nested loops
- convert points to polygons directly
(instead of painting on canvas and retrieving contours)
- pass result bbox in slice notation
(instead of xywh)
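Parsing the points directly and returning the bbox in slice notation might look like this. For a self-contained sketch this uses the stdlib's `ElementTree` (the actual change uses lxml XPath); the 2019 PAGE namespace URI is the standard one, but the element layout and function names here are assumptions:

```python
import xml.etree.ElementTree as ET

NS = {"page": "http://schema.primaresearch.org/PAGE/gts/pagecontent/2019-07-15"}

def parse_polygon(points_attr):
    """Convert a PAGE `points` attribute ("x1,y1 x2,y2 ...") to a
    list of (x, y) tuples -- no canvas painting, no contour retrieval."""
    return [tuple(int(v) for v in p.split(",")) for p in points_attr.split()]

def border_bbox_slice(xml_text):
    """Return the Border bbox as (y-slice, x-slice) instead of xywh."""
    root = ET.fromstring(xml_text)
    coords = root.find(".//page:Border/page:Coords", NS)
    pts = parse_polygon(coords.get("points"))
    xs = [x for x, _ in pts]
    ys = [y for _, y in pts]
    return slice(min(ys), max(ys)), slice(min(xs), max(xs))
```

A bbox in slice notation can be applied to an image array directly, e.g. `img[ys, xs]`.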
when matching files in `dir_images` by XML path name stem:
* use `dict` instead of `list` to assign reliably
* filter out `.xml` files (so input directories can be mixed)
* show informative warnings for files which cannot be matched
- do not restrict TF version, but depend on tf-keras and
set `TF_USE_LEGACY_KERAS=1` to avoid Keras 3 behaviour
- relax Numpy version requirement up to v2
- relax Torch version requirement
- drop TF1 session management code
- drop TF1 config in favour of TF2 config code for memory growth
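In code, the env-var and memory-growth changes amount to something like the following. `TF_USE_LEGACY_KERAS=1` must be set before the first `import tensorflow`; the `try/except` is only there to keep this sketch importable without TensorFlow installed:

```python
import os

# With tf-keras installed, this keeps Keras 2 behaviour on TF >= 2.16
# (must happen before TensorFlow is imported anywhere).
os.environ["TF_USE_LEGACY_KERAS"] = "1"

try:
    import tensorflow as tf
    # TF2-style memory growth, replacing TF1 ConfigProto/session code
    for gpu in tf.config.list_physical_devices("GPU"):
        tf.config.experimental.set_memory_growth(gpu, True)
except ImportError:
    pass  # TensorFlow not available in this sketch environment
```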
- training.*: also simplify and limit line length
- training.train: always train with TensorBoard callback
after selecting the optimum angle on the original
search range, narrow the search down to its vicinity
with half the range (adding computational cost,
but gaining precision)
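The two-stage angle search can be sketched like this; the range bounds and sample count are placeholders, and `score_fn` stands in for whatever deskew score the code actually optimises:

```python
import numpy as np

def two_stage_angle_search(score_fn, lo=-12.0, hi=12.0, num=25):
    """Coarse search over [lo, hi], then a refinement pass around the
    coarse optimum whose total width is half the original range
    (more score evaluations, but higher angular precision)."""
    coarse = np.linspace(lo, hi, num)
    best = coarse[np.argmax([score_fn(a) for a in coarse])]
    # half the original range centred on `best`, i.e. +/- (hi-lo)/4
    radius = (hi - lo) / 4
    fine = np.linspace(best - radius, best + radius, num)
    return fine[np.argmax([score_fn(a) for a in fine])]
```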
when passing the text region mask, do not condition erosion
on having more than 2 columns, but apply it iff `not erosion_hurts`
(consistent with `find_num_col`'s expectations, and making
it as easy to find the column gaps on 1- and 2-column pages
as on multi-column pages)
- `find_number_of_columns_in_document`: retain vertical separators
and pass to `find_num_col` for each vertical split
- `return_boxes_of_images_by_order_of_reading_new`: reconstruct
the vertical separators from the segmentation mask and the separator
bboxes; pass them on to `find_num_col` everywhere
- `return_boxes_of_images_by_order_of_reading_new`: no need to
try-catch `find_num_col` anymore
- `return_boxes_of_images_by_order_of_reading_new`: when a vertical
split has too few columns,
* do not raise but lower the threshold `multiplier` responsible for
allowing gaps as column boundaries
* do not pass the `num_col_classifier` (i.e. expected number of
resulting columns) of the entire page to the iterative
`find_num_col` for each existing column, but only the portion
of that span
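The retry-instead-of-raise behaviour could be sketched as a loop that relaxes the threshold; here `find_cols` is a stand-in closure around `find_num_col`, and the starting value, step, and floor are invented for illustration:

```python
def find_columns_relaxed(find_cols, expected, multiplier=3.8,
                         step=0.5, floor=1.0):
    """When a vertical split yields too few columns, do not raise:
    retry with a lower `multiplier` (the threshold deciding whether a
    gap counts as a column boundary), down to `floor`."""
    cols = find_cols(multiplier)
    while len(cols) < expected and multiplier - step >= floor:
        multiplier -= step
        cols = find_cols(multiplier)
    return cols
```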
when searching for gaps between text regions, consider the vertical
separator mask (if given): add the vertical sum of vertical separators
to the peak scores (making column detection more robust if the page is
still slightly skewed or partially obscured by multi-column regions,
but foreground separators are present)
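The score boost from the separator mask can be sketched with plain projection profiles (function and argument names are hypothetical; the idea — whitespace score plus vertical sum of the vsep mask — is from the changelog):

```python
import numpy as np

def gap_scores_with_separators(region_mask, vsep_mask=None):
    """Per-x column-gap score: whitespace in the text-region mask,
    boosted by the vertical sum of vertical separators (if given)."""
    # high where a column of pixels contains little text-region fg
    scores = region_mask.shape[0] - region_mask.sum(axis=0).astype(float)
    if vsep_mask is not None:
        # foreground separators mark column boundaries even when the
        # whitespace gap is skewed away or partially obscured
        scores += vsep_mask.sum(axis=0)
    return scores
```

Peak picking on `scores` then proceeds as before, just on the boosted profile.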
- when analysing regions spanning across columns,
disregard tiny regions (smaller than half the median size)
- if a region spans across columns only by a tiny fraction,
and is therefore not good enough to count as a multi-col
separator, then it should also not be good enough to make
a multi-col box
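The size filter is a one-liner; a sketch (name and area-based criterion assumed, the half-median threshold is from the changelog):

```python
import numpy as np

def drop_tiny_spanning_regions(areas):
    """When analysing regions spanning across columns, disregard
    regions smaller than half the median region area."""
    areas = np.asarray(areas)
    return areas[areas >= 0.5 * np.median(areas)]
```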
- avoid unnecessary `fillPoly` (we already have the mask)
- do not merge hseps if vseps interfere
- remove old criterion (based on total length of hseps)
- create new criterion (no x overlap and x close to each other)
- rename identifiers:
* `sum_dis` → `sum_xspan`
* `diff_max_min_uniques` → `tot_xspan`
* np.std / np.mean → `dev_xspan`
- remove rule cutting around the center of crossing seps
(which is unnecessary and creates small isolated seps
at the center, unrelated to the actual crossing points)
- create rule cutting hseps by vseps _prior_ to merging
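Cutting hseps by vseps before merging can be sketched on interval representations — `(y, x0, x1)` tuples for hseps and x coordinates for vseps are assumptions chosen for brevity, not the code's actual data layout:

```python
def cut_hseps_by_vseps(hseps, vsep_xs):
    """Cut horizontal separators at vertical-separator x positions
    *prior* to merging, so a merge can never bridge across a vsep.
    hseps: iterable of (y, x0, x1); vsep_xs: x coordinates of vseps."""
    pieces = []
    for y, x0, x1 in hseps:
        cuts = sorted(x for x in vsep_xs if x0 < x < x1)
        # split [x0, x1] at every interior crossing point
        for lo, hi in zip([x0] + cuts, cuts + [x1]):
            if hi > lo:
                pieces.append((y, lo, hi))
    return pieces
```

Because the pieces never cross a vsep, the subsequent merge step (no x overlap, x close to each other) only ever joins fragments on the same side of each vertical separator.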