Eynollah

Document Layout Analysis (segmentation) using pre-trained models and heuristics


Features

  • Support for up to 10 segmentation classes
  • Support for various image optimization operations:
    • cropping (border detection), binarization, deskewing, dewarping, scaling, enhancing, resizing
  • Text line segmentation to bounding boxes or polygons (contours) including for curved lines and vertical text
  • Detection of reading order
  • Output in PAGE-XML
  • OCR-D interface

Installation

Python versions 3.7-3.10 with TensorFlow >=2.4 are currently supported.

For (limited) GPU support, the matching CUDA toolkit >=10.1 needs to be installed.

You can either install via

pip install eynollah

or clone the repository, enter it and install (editable) with

git clone git@github.com:qurator-spk/eynollah.git
cd eynollah; pip install -e .

Alternatively, you can run make install, or make install-dev for an editable installation.
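
As a sketch, setting up a dedicated virtual environment first (assuming Python 3.8 is available; any supported version from 3.7 to 3.10 works) could look like this:

python3.8 -m venv eynollah-venv      # create a venv with a supported Python version
source eynollah-venv/bin/activate    # activate it
pip install eynollah                 # install eynollah and its dependencies from PyPI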


Eynollah makes use of up to 9 trained models, each responsible for a different operation such as size detection, column classification, image enhancement, page extraction, main layout detection, full layout detection and textline detection. This does not mean that all 9 models are required for every document; based on the document characteristics and the parameters specified, different scenarios can be applied.

Pre-trained models can be downloaded from qurator-data.de.
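
As a rough sketch, fetching and unpacking the models into a local directory might look like the following (the archive URL and file name are placeholders; check qurator-data.de for the actual ones):

wget <model-archive-url>                  # download the model archive from qurator-data.de
mkdir models
tar xf <model-archive>.tar.gz -C models   # unpack the models into a local directory

The resulting directory is what you later pass to the -m option.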

  • If you set the -ae (allow image enhancement) option, the tool will first check the ppi (pixels per inch) of the image and, when it is less than 300, resize it; only then will image enhancement take place. Image enhancement can also happen without this option, but by enabling it the layout XML data (e.g. coordinates) will be based on the resized and enhanced image instead of the original image (see the example below).
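
For instance, a minimal call with enhancement enabled might look like this (file and directory names are placeholders):

eynollah -i page.tif -o output/ -m models/ -ae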

In case you want to train your own model to use with Eynollah, have a look at sbb_pixelwise_segmentation.

Usage

The command-line interface can be called like this:

eynollah \
  -i <image file> \
  -o <output directory> \
  -m <path to directory containing model files> \
     [OPTIONS]

The following options can be used to further configure the processing:

option description
-fl apply full layout analysis including all steps and segmentation classes
-light apply a lighter and faster but simpler method for main region detection and deskewing
-tab apply table detection
-ae apply enhancement and adapt coordinates (the resulting image is saved to the output directory)
-as apply scaling
-cl apply polygonal contour detection for curved text lines instead of rectangular bounding boxes
-ib apply binarization (the resulting image is saved to the output directory)
-ep enable plotting (MUST always be used with -sl, -sd, -sa, -si or -ae)
-ho ignore headers for reading order detection
-di <directory> process all images in a directory in batch mode
-si <directory> save image regions detected in documents to this directory
-sd <directory> save deskewed image to this directory
-sl <directory> save layout prediction as plot to this directory
-sp <directory> save cropped page image to this directory
-sa <directory> save all (plot, enhanced, binary image and layout prediction) to this directory
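
For example, a full layout analysis with polygonal text lines, table detection and a saved layout plot might be invoked like this (file and directory names are placeholders):

eynollah \
  -i page.tif \
  -o output/ \
  -m models/ \
  -fl -cl -tab \
  -ep -sl plots/

Note that -ep is only valid here because -sl is also given.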

If no option is set, the tool will perform layout detection of main regions (background, text, images, separators and marginals).

The tool produces better output from RGB images as input than from greyscale or binarized images.

Use as OCR-D processor

Eynollah ships with a CLI interface to be used as an OCR-D processor.

In this case, the source image file group with (preferably) RGB images should be used as input like this:

ocrd-eynollah-segment -I OCR-D-IMG -O SEG-LINE -P models

Any image referenced by @imageFilename in the PAGE-XML input is passed on directly to Eynollah, so that e.g. calling

ocrd-eynollah-segment -I OCR-D-IMG-BIN -O SEG-LINE -P models

still uses the original (RGB) image, despite any binarization that may have occurred in previous OCR-D processing steps.
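
As a sketch, a complete call inside an existing OCR-D workspace might look like this (assuming the models parameter expects the path to the directory of pre-trained models; the exact value depends on your setup):

ocrd-eynollah-segment -I OCR-D-IMG -O SEG-LINE -P models /path/to/models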