Eynollah

Document Layout Analysis, Binarization and OCR with Deep Learning and Heuristics

Features

  • Support for 10 distinct segmentation classes
  • Support for various image optimization operations:
    • cropping (border detection), binarization, deskewing, dewarping, scaling, enhancing, resizing
  • Textline segmentation to bounding boxes or polygons (contours), including curved lines and vertical text
  • Text recognition (OCR) using either CNN-RNN or Transformer models
  • Detection of reading order (left-to-right or right-to-left) using either heuristics or trainable models
  • Output in PAGE-XML
  • OCR-D interface

⚠️ Development is focused on achieving the best quality of results for a wide variety of historical documents and therefore processing can be very slow. We aim to improve this, but contributions are welcome.

Installation

Python 3.8-3.11 with TensorFlow <2.13 on Linux is currently supported.

For (limited) GPU support the CUDA toolkit needs to be installed. A known working config is CUDA 11 with cuDNN 8.6.
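
If you prefer not to install the CUDA toolkit system-wide, one possible way to provide cuDNN 8.6 for CUDA 11 inside a virtual environment is via NVIDIA's pip wheels, following the TensorFlow installation guide (an assumption, not officially tested here):

# assumption: cuDNN 8.6 wheel for CUDA 11, as in the TensorFlow <2.13 install guide
pip install nvidia-cudnn-cu11==8.6.0.163
CUDNN_PATH=$(dirname $(python -c "import nvidia.cudnn; print(nvidia.cudnn.__file__)"))
export LD_LIBRARY_PATH=$CUDNN_PATH/lib:$LD_LIBRARY_PATH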

You can either install from PyPI

pip install eynollah

or clone the repository, enter it and install (editable) with

git clone git@github.com:qurator-spk/eynollah.git
cd eynollah; pip install -e .

Alternatively, you can run make install or make install-dev for editable installation.

To also install the dependencies for the OCR engines:

pip install "eynollah[OCR]"
# or
make install EXTRAS=OCR

Models

Pretrained models can be downloaded from Zenodo or Hugging Face.

For documentation on models, have a look at models.md. Model cards are also provided for our trained models.

Training

In case you want to train your own model with Eynollah, see the documentation in train.md and use the tools in the train folder.

Usage

Eynollah supports five use cases: layout analysis (segmentation), binarization, image enhancement, text recognition (OCR), and reading order detection.

Layout Analysis

The layout analysis module is responsible for detecting layout elements, identifying text lines, and determining reading order using either heuristic methods or a pretrained reading order detection model.

Reading order detection can be performed either as part of layout analysis based on image input, or (currently under development) based on pre-existing layout analysis results in PAGE-XML format.

The command-line interface for layout analysis can be called like this:

eynollah layout \
  -i <single image file> | -di <directory containing image files> \
  -o <output directory> \
  -m <directory containing model files> \
     [OPTIONS]

The following options can be used to further configure the processing:

option description
-fl full layout analysis including all steps and segmentation classes
-light lighter and faster but simpler method for main region detection and deskewing
-tll apply lighter textline detection (must be used together with -light)
-tab apply table detection
-ae apply enhancement (the resulting image is saved to the output directory)
-as apply scaling
-cl apply contour detection for curved text lines instead of bounding boxes
-ib apply binarization (the resulting image is saved to the output directory)
-ep enable plotting (MUST always be used with -sl, -sd, -sa, -si or -ae)
-eoi extract only images to output directory (other processing will not be done)
-ho ignore headers for reading order detection
-si <directory> save image regions detected to this directory
-sd <directory> save deskewed image to this directory
-sl <directory> save layout prediction as plot to this directory
-sp <directory> save cropped page image to this directory
-sa <directory> save all (plot, enhanced/binary image, layout) to this directory

If no further option is set, the tool performs layout detection of main regions (background, text, images, separators and marginals). The best output quality is achieved when RGB images are used as input rather than greyscale or binarized images.
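
For example, a full layout analysis over a directory of images using the lighter models and light textline detection could look like this (directory names are placeholders):

eynollah layout -di ./images -o ./output -m ./models -fl -light -tll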

Binarization

The binarization module performs document image binarization using pretrained pixelwise segmentation models.

The command-line interface for binarization can be called like this:

eynollah binarization \
  -i <single image file> | -di <directory containing image files> \
  -o <output directory> \
  -m <directory containing model files>
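
For example, to binarize a single image (file and directory names are placeholders):

eynollah binarization -i page.tif -o ./output -m ./models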

OCR

The OCR module performs text recognition using either a CNN-RNN model or a Transformer model.

The command-line interface for OCR can be called like this:

eynollah ocr \
  -i <single image file> | -di <directory containing image files> \
  -dx <directory of xmls> \
  -o <output directory> \
  -m <directory containing model files> | --model_name <path to specific model>
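
For example, to recognize text for a directory of images with corresponding PAGE-XML layout files (directory names are placeholders):

eynollah ocr -di ./images -dx ./layout_xml -o ./output -m ./models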

Machine-based-reading-order

The machine-based reading-order module employs a pretrained model to identify the reading order from layouts represented in PAGE-XML files.

The command-line interface for machine-based reading order can be called like this:

eynollah machine-based-reading-order \
  -i <single image file> | -di <directory containing image files> \
  -xml <xml file name> | -dx <directory containing xml files> \
  -m <path to directory containing model files> \
  -o <output directory> 
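
For example, for a single image and its layout file (file and directory names are placeholders):

eynollah machine-based-reading-order -i page.tif -xml page.xml -m ./models -o ./output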

Use as OCR-D processor

Eynollah ships with a CLI interface to be used as an OCR-D processor, formally described in ocrd-tool.json.

In this case, the source image file group with (preferably) RGB images should be used as input like this:

ocrd-eynollah-segment -I OCR-D-IMG -O OCR-D-SEG -P models eynollah_layout_v0_5_0

If the input file group is PAGE-XML (from a previous OCR-D workflow step), Eynollah behaves as follows:

  • existing regions are kept and ignored (i.e. in effect they might overlap segments from Eynollah results)

  • existing annotations (and respective AlternativeImages) are partially ignored:

    • previous page frame detection (cropped images)
    • previous derotation (deskewed images)
    • previous thresholding (binarized images)
  • if the page-level image nevertheless deviates from the original (@imageFilename), e.g. because some other preprocessing step such as denoising was in effect, then the output PAGE-XML will be based on that derived image as the new top-level (@imageFilename)

    ocrd-eynollah-segment -I OCR-D-XYZ -O OCR-D-SEG -P models eynollah_layout_v0_5_0
    

In general, it makes more sense to add other workflow steps after Eynollah.
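
As a sketch, assuming the usual OCR-D conventions for chaining processors with ocrd process (task strings drop the ocrd- prefix) and a separately installed OCR processor such as ocrd-tesserocr-recognize (not part of Eynollah), such a workflow could look like:

ocrd process \
  "eynollah-segment -I OCR-D-IMG -O OCR-D-SEG -P models eynollah_layout_v0_5_0" \
  "tesserocr-recognize -I OCR-D-SEG -O OCR-D-OCR"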

There is also an OCR-D processor for binarization:

ocrd-sbb-binarize -I OCR-D-IMG -O OCR-D-BIN -P models default-2021-03-09

Additional documentation

Additional documentation is available in the docs directory.

How to cite

@inproceedings{hip23rezanezhad,
  title     = {Document Layout Analysis with Deep Learning and Heuristics},
  author    = {Rezanezhad, Vahid and Baierer, Konstantin and Gerber, Mike and Labusch, Kai and Neudecker, Clemens},
  booktitle = {Proceedings of the 7th International Workshop on Historical Document Imaging and Processing {HIP} 2023,
               San José, CA, USA, August 25-26, 2023},
  publisher = {Association for Computing Machinery},
  address   = {New York, NY, USA},
  year      = {2023},
  pages     = {73--78},
  url       = {https://doi.org/10.1145/3604951.3605513}
}