# Eynollah
> Document Layout Analysis, Binarization and OCR with Deep Learning and Heuristics

[PyPI version](https://pypi.org/project/eynollah/)
[CI tests](https://github.com/qurator-spk/eynollah/actions/workflows/test-eynollah.yml)
[Docker build](https://github.com/qurator-spk/eynollah/actions/workflows/build-docker.yml)
[Apache-2.0 license](https://opensource.org/license/apache-2-0/)
[DOI](https://doi.org/10.1145/3604951.3605513)

## Features
* Support for 10 distinct segmentation classes:
  * background, [page border](https://ocr-d.de/en/gt-guidelines/trans/lyRand.html), [text region](https://ocr-d.de/en/gt-guidelines/trans/lytextregion.html#textregionen__textregion_), [text line](https://ocr-d.de/en/gt-guidelines/pagexml/pagecontent_xsd_Complex_Type_pc_TextLineType.html), [header](https://ocr-d.de/en/gt-guidelines/trans/lyUeberschrift.html), [image](https://ocr-d.de/en/gt-guidelines/trans/lyBildbereiche.html), [separator](https://ocr-d.de/en/gt-guidelines/trans/lySeparatoren.html), [marginalia](https://ocr-d.de/en/gt-guidelines/trans/lyMarginalie.html), [initial](https://ocr-d.de/en/gt-guidelines/trans/lyInitiale.html), [table](https://ocr-d.de/en/gt-guidelines/trans/lyTabellen.html)
* Support for various image optimization operations:
  * cropping (border detection), binarization, deskewing, dewarping, scaling, enhancing, resizing
* Textline segmentation to bounding boxes or polygons (contours), including support for curved lines and vertical text
* Text recognition (OCR) using either CNN-RNN or Transformer models
* Detection of reading order (left-to-right or right-to-left) using either heuristics or trainable models
* Output in [PAGE-XML](https://github.com/PRImA-Research-Lab/PAGE-XML)
* [OCR-D](https://github.com/qurator-spk/eynollah#use-as-ocr-d-processor) interface

:warning: Development is focused on achieving the best quality of results for a wide variety of historical
documents and therefore processing can be very slow. We aim to improve this, but contributions are welcome.
## Installation
Python `3.8-3.11` with TensorFlow `<2.13` on Linux is currently supported.
For (limited) GPU support, the CUDA toolkit needs to be installed; a known working configuration is CUDA `11` with cuDNN `8.6`.
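If you want to confirm that TensorFlow can actually see the GPU after setting up CUDA and cuDNN, a quick generic check (plain TensorFlow, nothing Eynollah-specific) is:
```sh
# Generic TensorFlow check: prints the GPUs visible to TensorFlow (an empty list means CPU-only).
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
```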
You can either install from PyPI
```
pip install eynollah
```
or clone the repository, enter it and install (editable) with
```
git clone git@github.com:qurator-spk/eynollah.git
cd eynollah; pip install -e .
```
Alternatively, you can run `make install` or, for an editable installation, `make install-dev`.
To also install the dependencies for the OCR engines:
```
pip install "eynollah[OCR]"
# or
make install EXTRAS=OCR
```
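To check that the installation worked, you can print the command-line help, which should list the available subcommands:
```sh
# Lists Eynollah's subcommands and options if the package was installed correctly.
eynollah --help
```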
## Models
Pretrained models can be downloaded from [Zenodo](https://zenodo.org/records/17194824) or [Hugging Face](https://huggingface.co/SBB?search_models=eynollah).
For documentation on the models, see [`models.md`](https://github.com/qurator-spk/eynollah/tree/main/docs/models.md).
Model cards are also provided for our trained models.
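As a rough sketch of fetching the models from the command line (the archive name below is a placeholder; check the [Zenodo record](https://zenodo.org/records/17194824) for the actual file names):
```sh
# NOTE: "models_eynollah.tar.gz" is a placeholder name; check the Zenodo record for the actual asset.
wget https://zenodo.org/records/17194824/files/models_eynollah.tar.gz
tar xf models_eynollah.tar.gz
```
The extracted directory can then be given to Eynollah as the model location when running the tools.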
## Training
In case you want to train your own model with Eynollah, see the
documentation in [`train.md`](https://github.com/qurator-spk/eynollah/tree/main/docs/train.md) and use the
tools in the [`train` folder](https://github.com/qurator-spk/eynollah/tree/main/train).
## Usage
Eynollah supports five use cases: layout analysis (segmentation), binarization,
image enhancement, text recognition (OCR), and reading order detection.
### Layout Analysis
The layout analysis module detects layout elements, identifies text lines, and determines reading
order using either heuristic methods or a [pretrained reading order detection model](https://github.com/qurator-spk/eynollah#machine-based-reading-order).
Reading order detection can be performed either as part of layout analysis on image input or, as a feature currently under
development, on pre-existing layout analysis results provided as PAGE-XML input.
The command-line interface for layout analysis can be called like this:
```sh
eynollah layout \
  -i <single image file> | -di <directory containing image files> \
  -o <output directory>