# Eynollah
> Document Layout Analysis with Deep Learning and Heuristics
[![PyPI Version](https://img.shields.io/pypi/v/eynollah)](https://pypi.org/project/eynollah/)
[![CircleCI Build Status](https://circleci.com/gh/qurator-spk/eynollah.svg?style=shield)](https://circleci.com/gh/qurator-spk/eynollah)
[![GH Actions Test](https://github.com/qurator-spk/eynollah/actions/workflows/test-eynollah.yml/badge.svg)](https://github.com/qurator-spk/eynollah/actions/workflows/test-eynollah.yml)
[![License: ASL](https://img.shields.io/github/license/qurator-spk/eynollah)](https://opensource.org/license/apache-2-0/)
[![DOI](https://img.shields.io/badge/DOI-10.1145%2F3604951.3605513-red)](https://doi.org/10.1145/3604951.3605513)

![](https://user-images.githubusercontent.com/952378/102350683-8a74db80-3fa5-11eb-8c7e-f743f7d6eae2.jpg)
## Features
* Support for various image optimization operations:
  * cropping (border detection), binarization, deskewing, dewarping, scaling, enhancing, resizing
* Text line segmentation to bounding boxes or polygons (contours) including for curved lines and vertical text
* Detection of reading order (left-to-right or right-to-left)
* Output in [PAGE-XML](https://github.com/PRImA-Research-Lab/PAGE-XML)
* [OCR-D](https://github.com/qurator-spk/eynollah#use-as-ocr-d-processor) interface
:warning: Eynollah development is currently focused on achieving high-quality results for a wide variety of historical documents.
Processing can be very slow, and there is a lot of potential to improve performance; we aim to work on this too, but contributions are always welcome.
## Installation
Python `3.8-3.11` with TensorFlow `2.12-2.15` on Linux are currently supported.
For (limited) GPU support, the CUDA toolkit needs to be installed.
You can either install from PyPI

```
pip install eynollah
```
or clone the repository and install it in editable mode with `cd eynollah; pip install -e .`
Alternatively, you can run `make install` or `make install-dev` for editable installation.
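As a sketch of the source route, assuming the repository is cloned from the project URL shown in the badges above and that GNU make is available:

```sh
# assumed clone URL; install-dev is the editable-install target mentioned above
git clone https://github.com/qurator-spk/eynollah.git
cd eynollah
make install-dev   # or: make install
```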
## Models
Pre-trained models can be downloaded from [qurator-data.de](https://qurator-data.de/eynollah/) or [huggingface](https://huggingface.co/SBB).
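A hedged command-line sketch for fetching the models from qurator-data.de; the archive name `models_eynollah.tar.gz` is an assumption, so check the download page for the actual file name:

```sh
# assumed archive name and layout; adjust to what the download page actually lists
wget https://qurator-data.de/eynollah/models_eynollah.tar.gz
tar xf models_eynollah.tar.gz   # unpack, then pass the resulting directory via -m
```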
## Train
🚧 **Work in progress**
In case you want to train your own model, have a look at [`sbb_pixelwise_segmentation`](https://github.com/qurator-spk/sbb_pixelwise_segmentation).
## Usage
The command-line interface can be called like this:
```sh
eynollah \
  -i <single image file> | -di <directory containing image files> \
  -o <output directory> \
  -m <directory containing model files> \
  [OPTIONS]
```
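For example, a batch run over a directory of scans could look like this (all paths are placeholders):

```sh
# placeholder paths; -di processes every image in the given directory,
# -ib additionally saves a binarized copy of each page to the output directory
eynollah -di ./scans -o ./output -m ./models -ib
```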
The following options can be used to further configure the processing:

| option | description |
|----------|:-------------|
| `-ib` | apply binarization (the resulting image is saved to the output directory) |
| `-ep` | enable plotting (MUST always be used with `-sl`, `-sd`, `-sa`, `-si` or `-ae`) |
| `-ho` | ignore headers for reading order detection |
| `-si <directory>` | save image regions detected to this directory |
| `-sd <directory>` | save deskewed image to this directory |
| `-sl <directory>` | save layout prediction as plot to this directory |

If no option is set, the tool will perform layout detection of main regions (background, text, images, separators) and marginals.
The tool produces better quality output when RGB images are used as input rather than greyscale or binarized images.
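As a further illustration (again with placeholder paths), the plotting switch `-ep` is combined below with two of the save options it requires:

```sh
# -ep only works together with at least one of -sl/-sd/-sa/-si/-ae
eynollah -i page.tif -o ./output -m ./models \
  -ep -sl ./output/layout-plots -si ./output/region-images
```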
#### Use as OCR-D processor
🚧 **Work in progress**
Eynollah ships with a CLI interface to be used as an [OCR-D](https://ocr-d.de) processor.
For example, `ocrd-eynollah-segment -I OCR-D-IMG-BIN -O SEG-LINE -P models` uses the original (RGB) image despite any binarization that may have occurred in previous OCR-D processing steps.
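A hedged sketch of calling the processor inside an existing OCR-D workspace; the file group names and the value passed for the `models` parameter are assumptions for illustration:

```sh
# run from within an OCR-D workspace (assumed setup); -I/-O name the input and
# output file groups, -P passes the "models" parameter (here: an assumed path)
ocrd-eynollah-segment -I OCR-D-IMG -O OCR-D-SEG-LINE -P models ./models
```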
#### Additional documentation
Please check the [wiki](https://github.com/qurator-spk/eynollah/wiki).
## How to cite
If you find this tool useful in your work, please consider citing our paper:
```bibtex
@inproceedings{hip23rezanezhad,
  title = {Document Layout Analysis with Deep Learning and Heuristics},
  author = {Rezanezhad, Vahid and Baierer, Konstantin and Gerber, Mike and Labusch, Kai and Neudecker, Clemens},
  booktitle = {Proceedings of the 7th International Workshop on Historical Document Imaging and Processing {HIP} 2023,