Eynollah

Document Layout Analysis, Binarization and OCR with Deep Learning and Heuristics


Features

  • Document layout analysis using pixelwise segmentation models, with support for 10 distinct segmentation classes
  • Textline segmentation to bounding boxes or polygons (contours) including for curved lines and vertical text
  • Document image binarization with pixelwise segmentation or hybrid CNN-Transformer models
  • Text recognition (OCR) with CNN-RNN or TrOCR models
  • Detection of reading order (left-to-right or right-to-left) using heuristics or trainable models
  • Output in PAGE-XML
  • OCR-D interface

⚠️ Development is focused on achieving the best quality of results for a wide variety of historical documents using a combination of multiple deep learning models and heuristics; therefore processing can be slow.

Installation

Python 3.8-3.11 with TensorFlow <2.13 on Linux is currently supported. For (limited) GPU support, the CUDA toolkit needs to be installed; a known working configuration is CUDA 11.8 with cuDNN 8.6.

You can either install from PyPI

pip install eynollah

or clone the repository, enter it and install (editable) with

git clone git@github.com:qurator-spk/eynollah.git
cd eynollah; pip install -e .

Alternatively, you can run make install or make install-dev for editable installation.

To also install the dependencies for the OCR engines:

pip install "eynollah[OCR]"
# or
make install EXTRAS=OCR
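These can be combined; for example, an editable install that also pulls in the OCR extras, using standard pip extras syntax with the extra name shown above:

pip install -e ".[OCR]"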

Docker

Use

docker pull ghcr.io/qurator-spk/eynollah:latest

When using Eynollah with Docker, see docker.md.
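As a rough sketch, the container can be run with the current directory mounted for data exchange; this assumes the image provides the eynollah CLI on its path and that the models are mounted from the host (see docker.md for the authoritative instructions):

docker run --rm -v $PWD:/data ghcr.io/qurator-spk/eynollah:latest \
  eynollah layout -i /data/page.png -o /data/output -m /data/models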

Models

Pretrained models can be downloaded from Zenodo or Hugging Face.

For model documentation and model cards, see models.md.
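If you have cloned the repository, the bundled Makefile also provides a download target that skips models which are already present:

make models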

Training

To train your own model with Eynollah, see train.md and use the tools in the train folder.

Usage

Eynollah supports five use cases:

  1. layout analysis (segmentation),
  2. binarization,
  3. image enhancement,
  4. text recognition (OCR), and
  5. reading order detection.

Layout Analysis

The layout analysis module is responsible for detecting layout elements, identifying text lines, and determining reading order using heuristic methods or a pretrained model.

The command-line interface for layout analysis can be called like this:

eynollah layout \
  -i <single image file> | -di <directory containing image files> \
  -o <output directory> \
  -m <directory containing model files> \
     [OPTIONS]

The following options can be used to further configure the processing:

option description
-fl full layout analysis including all steps and segmentation classes (recommended)
-light lighter and faster but simpler method for main region detection and deskewing (recommended)
-tll light textline detection; should be used together with -light (recommended)
-tab apply table detection
-ae apply enhancement (the resulting image is saved to the output directory)
-as apply scaling
-cl apply contour detection for curved text lines instead of bounding boxes
-ib apply binarization (the resulting image is saved to the output directory)
-ep enable plotting (MUST always be used with -sl, -sd, -sa, -si or -ae)
-eoi extract only images to output directory (other processing will not be done)
-ho ignore headers for reading order detection
-si <directory> save image regions detected to this directory
-sd <directory> save deskewed image to this directory
-sl <directory> save layout prediction as plot to this directory
-sp <directory> save cropped page image to this directory
-sa <directory> save all (plot, enhanced/binary image, layout) to this directory
-thart threshold for the artificial class in textline detection (default: 0.1)
-tharl threshold for the artificial class in layout detection (default: 0.1)
-ocr perform OCR
-tr apply Transformer OCR (the default is a CNN-RNN model)
-bs_ocr OCR inference batch size (default: 2 for TrOCR, 8 for CNN-RNN models)
-ncu upper limit of columns in document image
-ncl lower limit of columns in document image
-slro skip layout detection and reading order
-romb apply machine-based reading order detection
-ipe ignore page extraction

If no further option is set, the tool performs layout detection of main regions (background, text, images, separators, and marginals). The best output quality is achieved when RGB images are provided as input rather than greyscale or binarized images.
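For example, a typical invocation on a directory of images using the recommended options (directory names below are placeholders; the model directory must contain the downloaded models):

eynollah layout \
  -di images/ \
  -o output/ \
  -m models_eynollah/ \
  -fl -light -tll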

Additional documentation can be found in usage.md.

Binarization

The binarization module performs document image binarization using pretrained pixelwise segmentation models.

The command-line interface for binarization can be called like this:

eynollah binarization \
  -i <single image file> | -di <directory containing image files> \
  -o <output directory> \
  -m <directory containing model files> 
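For example, a hypothetical run over a directory of images (paths are placeholders):

eynollah binarization \
  -di images/ \
  -o bin_output/ \
  -m models_eynollah/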

Image Enhancement

TODO

OCR

The OCR module performs text recognition using either a CNN-RNN model or a Transformer model.

The command-line interface for OCR can be called like this:

eynollah ocr \
  -i <single image file> | -di <directory containing image files> \
  -dx <directory of xmls> \
  -o <output directory> \
  -m <directory containing model files> | --model_name <path to specific model>

The following options can be used to further configure the OCR processing:

option description
-dib directory of binarized images (file type must be '.png'); prediction is done with both RGB and binarized images
-doit directory for output images rendered with the predicted text
--model_name path to a specific model file to use for OCR
-trocr apply Transformer OCR (otherwise the CNN-RNN model is used)
-etit export textline images and their text from the XML into the output directory (OCR training data)
-nmtc do not mask cropped textline images with the textline contour
-bs OCR inference batch size (default: 2 for TrOCR, 8 for CNN-RNN models)
-ds_pref add an abbreviation of the dataset name to generated training data
-min_conf minimum OCR confidence value; textlines with confidence below this value are ignored
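For example, a hypothetical invocation for pages whose layout has already been analyzed, where the XML directory contains the PAGE-XML output of the layout step (paths are placeholders):

eynollah ocr \
  -di images/ \
  -dx layout_output/ \
  -o ocr_output/ \
  -m models_eynollah/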

Reading Order Detection

Reading order detection can be performed either as part of layout analysis (based on image input) or, currently under development, based on pre-existing layout analysis data in PAGE-XML format as input.

The reading order detection module employs a pretrained model to identify the reading order from layouts represented in PAGE-XML files.

The command-line interface for machine-based reading order can be called like this:

eynollah machine-based-reading-order \
  -i <single image file> | -di <directory containing image files> \
  -xml <xml file name> | -dx <directory containing xml files> \
  -m <path to directory containing model files> \
  -o <output directory> 

Use as OCR-D processor

See ocrd.md.
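As a minimal sketch, assuming the processor name ocrd-eynollah-segment from this repository's ocrd-tool.json and the standard OCR-D processor CLI (exact parameter names may differ; consult ocrd.md):

ocrd-eynollah-segment -I OCR-D-IMG -O OCR-D-SEG -P models models_eynollah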

How to cite

@inproceedings{hip23rezanezhad,
  title     = {Document Layout Analysis with Deep Learning and Heuristics},
  author    = {Rezanezhad, Vahid and Baierer, Konstantin and Gerber, Mike and Labusch, Kai and Neudecker, Clemens},
  booktitle = {Proceedings of the 7th International Workshop on Historical Document Imaging and Processing {HIP} 2023,
               San José, CA, USA, August 25-26, 2023},
  publisher = {Association for Computing Machinery},
  address   = {New York, NY, USA},
  year      = {2023},
  pages     = {73--78},
  url       = {https://doi.org/10.1145/3604951.3605513}
}