# Eynollah
> Document Layout Analysis, Binarization and OCR with Deep Learning and Heuristics

[![Python Versions](https://img.shields.io/pypi/pyversions/eynollah.svg)](https://pypi.python.org/pypi/eynollah)
[![PyPI Version](https://img.shields.io/pypi/v/eynollah)](https://pypi.org/project/eynollah/)
[![GH Actions Test](https://github.com/qurator-spk/eynollah/actions/workflows/test-eynollah.yml/badge.svg)](https://github.com/qurator-spk/eynollah/actions/workflows/test-eynollah.yml)
[![GH Actions Deploy](https://github.com/qurator-spk/eynollah/actions/workflows/build-docker.yml/badge.svg)](https://github.com/qurator-spk/eynollah/actions/workflows/build-docker.yml)
[![License: ASL](https://img.shields.io/github/license/qurator-spk/eynollah)](https://opensource.org/license/apache-2-0/)
[![DOI](https://img.shields.io/badge/DOI-10.1145%2F3604951.3605513-red)](https://doi.org/10.1145/3604951.3605513)

![](https://user-images.githubusercontent.com/952378/102350683-8a74db80-3fa5-11eb-8c7e-f743f7d6eae2.jpg)

## Features
* Document layout analysis using pixelwise segmentation models with support for 10 distinct segmentation classes:
  * background, [page border](https://ocr-d.de/en/gt-guidelines/trans/lyRand.html), [text region](https://ocr-d.de/en/gt-guidelines/trans/lytextregion.html#textregionen__textregion_), [text line](https://ocr-d.de/en/gt-guidelines/pagexml/pagecontent_xsd_Complex_Type_pc_TextLineType.html), [header](https://ocr-d.de/en/gt-guidelines/trans/lyUeberschrift.html), [image](https://ocr-d.de/en/gt-guidelines/trans/lyBildbereiche.html), [separator](https://ocr-d.de/en/gt-guidelines/trans/lySeparatoren.html), [marginalia](https://ocr-d.de/en/gt-guidelines/trans/lyMarginalie.html), [initial](https://ocr-d.de/en/gt-guidelines/trans/lyInitiale.html), [table](https://ocr-d.de/en/gt-guidelines/trans/lyTabellen.html)
* Textline segmentation to bounding boxes or polygons (contours), including curved lines and vertical text
* Document image binarization with pixelwise segmentation or hybrid CNN-Transformer models
* Text recognition (OCR) with CNN-RNN or TrOCR models
* Detection of reading order (left-to-right or right-to-left) using heuristics or trainable models
* Output in [PAGE-XML](https://github.com/PRImA-Research-Lab/PAGE-XML)
* [OCR-D](https://github.com/qurator-spk/eynollah#use-as-ocr-d-processor) interface

:warning: Development is focused on achieving the best quality of results for a wide variety of historical documents using a combination of multiple deep learning models and heuristics; therefore processing can be slow.

## Installation
Python `3.8-3.11` with Tensorflow `<2.13` on Linux is currently supported. For (limited) GPU support, the CUDA toolkit needs to be installed; a working configuration is CUDA `11.8` with cuDNN `8.6`.

You can either install from PyPI

```
pip install eynollah
```

or clone the repository, enter it and install (editable) with

```
git clone git@github.com:qurator-spk/eynollah.git
cd eynollah; pip install -e .
```

Alternatively, you can run `make install` or `make install-dev` for editable installation.

To also install the dependencies for the OCR engines:

```
pip install "eynollah[OCR]"
# or
make install EXTRAS=OCR
```

With Docker, use

```
docker pull ghcr.io/qurator-spk/eynollah:latest
```

For additional documentation on using Eynollah with Docker, see [`docker.md`](https://github.com/qurator-spk/eynollah/tree/main/docs/docker.md).
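As a quick sketch of containerized use, the pulled image can run the same CLI with host directories mounted into the container. The mount points, directory names and the explicit `eynollah` command below are assumptions for illustration, not the documented invocation; see [`docker.md`](https://github.com/qurator-spk/eynollah/tree/main/docs/docker.md) for the authoritative instructions.

```sh
# Hypothetical example: mount the current working directory into the
# container and run layout analysis on a folder of images.
# /data/images, /data/output and /data/models are placeholder paths.
docker run --rm -v "$PWD":/data ghcr.io/qurator-spk/eynollah:latest \
  eynollah layout -di /data/images -o /data/output -m /data/models
```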
## Models
Pretrained models can be downloaded from [Zenodo](https://zenodo.org/records/17194824) or [Hugging Face](https://huggingface.co/SBB?search_models=eynollah).

For documentation on the models, have a look at [`models.md`](https://github.com/qurator-spk/eynollah/tree/main/docs/models.md).

## Training
To train your own model with Eynollah, see the documentation in [`train.md`](https://github.com/qurator-spk/eynollah/tree/main/docs/train.md) and use the tools in the [`train`](https://github.com/qurator-spk/eynollah/tree/main/train) folder.

## Usage
Eynollah supports five use cases:
1. [layout analysis (segmentation)](#layout-analysis),
2. [binarization](#binarization),
3. [image enhancement](#image-enhancement),
4. [text recognition (OCR)](#ocr), and
5. [reading order detection](#reading-order-detection).

### Layout Analysis
The layout analysis module is responsible for detecting layout elements, identifying text lines, and determining reading order using either heuristic methods or a [pretrained reading order detection model](https://github.com/qurator-spk/eynollah#machine-based-reading-order).

Reading order detection can be performed either as part of layout analysis based on image input, or, currently under development, based on pre-existing layout analysis results in PAGE-XML format as input.

The command-line interface for layout analysis can be called like this:

```sh
eynollah layout \
  -i <single image file> | -di <directory containing image files> \
  -o <output directory> \
  -m <directory containing model files> \
  [OPTIONS]
```

The following options can be used to further configure the processing:

| option            | description                                                                     |
|-------------------|:--------------------------------------------------------------------------------|
| `-fl`             | full layout analysis including all steps and segmentation classes               |
| `-light`          | lighter and faster but simpler method for main region detection and deskewing   |
| `-tll`            | use the light textline detection (should be passed together with `-light`)      |
| `-tab`            | apply table detection                                                           |
| `-ae`             | apply enhancement (the resulting image is saved to the output directory)        |
| `-as`             | apply scaling                                                                   |
| `-cl`             | apply contour detection for curved text lines instead of bounding boxes         |
| `-ib`             | apply binarization (the resulting image is saved to the output directory)       |
| `-ep`             | enable plotting (MUST always be used with `-sl`, `-sd`, `-sa`, `-si` or `-ae`)  |
| `-eoi`            | extract only images to output directory (other processing will not be done)     |
| `-ho`             | ignore headers for reading order detection                                      |
| `-si <directory>` | save image regions detected to this directory                                   |
| `-sd <directory>` | save deskewed image to this directory                                           |
| `-sl <directory>` | save layout prediction as plot to this directory                                |
| `-sp <directory>` | save cropped page image to this directory                                       |
| `-sa <directory>` | save all (plot, enhanced/binary image, layout) to this directory                |

If no further option is set, the tool performs layout detection of main regions (background, text, images, separators and marginals). The best output quality is achieved when RGB images are used as input rather than greyscale or binarized images.

Additional documentation can be found in [`usage.md`](https://github.com/qurator-spk/eynollah/tree/main/docs/usage.md).

### Binarization
The binarization module performs document image binarization using pretrained pixelwise segmentation models.

The command-line interface for binarization can be called like this:

```sh
eynollah binarization \
  -i <single image file> | -di <directory containing image files> \
  -o <output directory> \
  -m <directory containing model files>
```
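For example, a single page or a whole directory of pages can be binarized in one call; the file and directory names below are placeholders, assuming a binarization model has been downloaded into `models/`:

```sh
# Hypothetical paths for illustration:
# binarize one image ...
eynollah binarization -i page_0001.png -o binarized/ -m models/

# ... or batch-process a directory of images with the same model
eynollah binarization -di pages/ -o binarized/ -m models/
```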
### Image Enhancement
TODO

### OCR
The OCR module performs text recognition using either CNN-RNN or TrOCR models.

The command-line interface for OCR can be called like this:

```sh
eynollah ocr \
  -i <single image file> | -di <directory containing image files> \
  -dx <directory of PAGE-XML files> \
  -o <output directory> \
  -m <directory containing model files> | --model_name <path to specific model>
```

### Reading Order Detection
The machine-based reading-order module employs a pretrained model to identify the reading order from layouts represented in PAGE-XML files.

The command-line interface for machine-based reading order detection can be called like this:

```sh
eynollah machine-based-reading-order \
  -i <single image file> | -di <directory containing image files> \
  -xml <PAGE-XML file> | -dx <directory containing PAGE-XML files> \
  -m <directory containing model files> \
  -o <output directory>
```

#### Use as OCR-D processor
Eynollah ships with a CLI interface to be used as an [OCR-D](https://ocr-d.de) [processor](https://ocr-d.de/en/spec/cli), formally described in [`ocrd-tool.json`](https://github.com/qurator-spk/eynollah/tree/main/src/eynollah/ocrd-tool.json).

Further documentation on using Eynollah with OCR-D can be found in [`ocrd.md`](https://github.com/qurator-spk/eynollah/tree/main/docs/ocrd.md).

## How to cite
```bibtex
@inproceedings{hip23eynollah,
  title     = {Document Layout Analysis with Deep Learning and Heuristics},
  author    = {Rezanezhad, Vahid and Baierer, Konstantin and Gerber, Mike
               and Labusch, Kai and Neudecker, Clemens},
  booktitle = {Proceedings of the 7th International Workshop on Historical
               Document Imaging and Processing {HIP} 2023,
               San José, CA, USA, August 25-26, 2023},
  publisher = {Association for Computing Machinery},
  address   = {New York, NY, USA},
  year      = {2023},
  pages     = {73--78},
  url       = {https://doi.org/10.1145/3604951.3605513}
}
```