
# Training eynollah

This README explains the technical details of how to set up and run training. For detailed information on parameterization, see `docs/train.md`.

## Introduction

This folder contains the source code for training an encoder model for document image segmentation.

## Installation

Clone the repository and install eynollah along with the dependencies necessary for training:

```sh
git clone https://github.com/qurator-spk/eynollah
cd eynollah
pip install '.[training]'
```

## Pretrained encoder

Download our pretrained weights and extract them into a `train/pretrained_model` folder:

```sh
cd train
wget -O pretrained_model.tar.gz "https://zenodo.org/records/17243320/files/pretrained_model_v0_5_1.tar.gz?download=1"
tar xf pretrained_model.tar.gz
```

## Binarization training data

A small sample of training data for a binarization experiment can be found on Zenodo. It contains an `images` and a `labels` folder.
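Once the data is in place, a training run is configured via a JSON file. The fragment below is an illustrative sketch only, not a verbatim copy of `config_params.json`; the directory paths are hypothetical, and the exact set of supported keys is documented in `docs/train.md`:

```json
{
  "task": "binarization",
  "n_classes": 2,
  "n_epochs": 4,
  "input_height": 224,
  "input_width": 448,
  "dir_train": "./sample_binarization/train",
  "dir_eval": "./sample_binarization/eval",
  "dir_output": "./output"
}
```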

## Helpful tools

* A tool to extract 2-D or 3-D RGB label images from PAGE-XML data. In the 2-D case, the output is a single image array in which each class is encoded as a distinct pixel value. In the 3-D case, each class is encoded as an RGB value, and a text file listing the classes is produced alongside the images.
* A tool to convert COCO ground truth or results for a single image to a segmentation map and write it to disk.
* A tool to extract region classes and their colours in mask (`pseg`) images. The colour map can be passed as a free `dict` parameter; the default mimics PageViewer's colouring for quick debugging. The tool also warns when regions overlap.
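To illustrate the difference between the 2-D and 3-D label encodings described above, the following sketch expands a 2-D class-index mask into a 3-D RGB mask using a class-to-colour map. The class names and colours here are hypothetical examples, not eynollah's actual colour scheme:

```python
# Illustrative sketch of the 2-D vs. 3-D label encodings.
# Class names and colours are hypothetical, not eynollah's.

# A 2-D label mask: each pixel holds a class index.
mask_2d = [
    [0, 0, 1],
    [0, 2, 2],
]

# In the 3-D encoding, each class maps to a distinct RGB value;
# a side-car text file would list the classes.
class_colors = {
    0: (0, 0, 0),        # background
    1: (255, 0, 0),      # text region
    2: (0, 0, 255),      # image region
}

def to_rgb(mask, colors):
    """Expand a 2-D class-index mask into a 3-D RGB mask."""
    return [[colors[c] for c in row] for row in mask]

rgb = to_rgb(mask_2d, class_colors)
print(rgb[1][1])  # prints (0, 0, 255), the colour of class 2
```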

## Train using Docker

Build the Docker image:

```sh
cd train
docker build -t model-training .
```

Run the Docker image:

```sh
cd train
docker run --gpus all -v "$PWD":/entry_point_dir model-training
```