mirror of https://github.com/qurator-spk/eynollah.git
combine Docker docs
This commit is contained in:
  parent 496a0e2ca4
  commit 6e3399fe7a

2 changed files with 23 additions and 20 deletions
@@ -1,4 +1,8 @@
-# 1. ocrd resource manager
+## Inference with Docker
+
+docker pull ghcr.io/qurator-spk/eynollah:latest
+
+### 1. ocrd resource manager
 (just once, to get the models and install them into a named volume for later re-use)
 
 vol_models=ocrd-resources:/usr/local/share/ocrd-resources
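The hunk ends before the model download itself. As a minimal sketch of that one-time step, assuming the image bundles OCR-D's resource manager (`ocrd resmgr`) and that the `default` resource name resolves to the current models:

```bash
# One-time download of the models into the named volume.
# Assumptions: the image ships `ocrd resmgr`, and "default" is a valid
# resource name for ocrd-eynollah-segment.
vol_models=ocrd-resources:/usr/local/share/ocrd-resources
docker run --rm -v $vol_models ghcr.io/qurator-spk/eynollah:latest \
    ocrd resmgr download ocrd-eynollah-segment default
```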
@@ -6,15 +10,18 @@
 
 Now, each time you want to use Eynollah, pass the same resources volume again.
 Also, bind-mount some data directory, e.g. current working directory $PWD (/data is default working directory in the container).
 
 Either use standalone CLI (2) or OCR-D CLI (3):
 
-# 2. standalone CLI (follow self-help, cf. readme)
+### 2. standalone CLI
+(follow self-help, cf. readme)
+
 docker run --rm -v $vol_models -v $PWD:/data ocrd/eynollah eynollah binarization --help
 docker run --rm -v $vol_models -v $PWD:/data ocrd/eynollah eynollah layout --help
 docker run --rm -v $vol_models -v $PWD:/data ocrd/eynollah eynollah ocr --help
 
-# 3. OCR-D CLI (follow self-help, cf. readme and https://ocr-d.de/en/spec/cli)
+### 3. OCR-D CLI
+(follow self-help, cf. readme and https://ocr-d.de/en/spec/cli)
 
 docker run --rm -v $vol_models -v $PWD:/data ocrd/eynollah ocrd-eynollah-segment -h
 docker run --rm -v $vol_models -v $PWD:/data ocrd/eynollah ocrd-sbb-binarize -h
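Beyond `--help`, an end-to-end standalone call might look as follows. This is a sketch only: the `-i`/`-o`/`-m` flag names and the model path inside the resources volume are assumptions here, not documented interface, so follow the self-help output:

```bash
# Hypothetical layout run on a single page image in $PWD.
# The -i/-o/-m flags and the model directory are assumptions; consult
# `eynollah layout --help` for the real options.
docker run --rm -v $vol_models -v $PWD:/data ocrd/eynollah \
    eynollah layout \
        -i page.png \
        -o . \
        -m /usr/local/share/ocrd-resources/ocrd-eynollah-segment
```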
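On the OCR-D side, the linked CLI spec defines `-I`/`-O` for input/output file groups and `-P` for parameter overrides, so a run on a workspace bind-mounted at /data could look like this sketch (the file group names are placeholders):

```bash
# Placeholder file group names; -I/-O/-P follow the OCR-D CLI spec.
# "-P models default" assumes the models were installed via the
# resource manager step above.
docker run --rm -v $vol_models -v $PWD:/data ocrd/eynollah \
    ocrd-eynollah-segment -I OCR-D-IMG -O OCR-D-SEG -P models default
```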
@@ -22,3 +29,15 @@ Either use standalone CLI (2) or OCR-D CLI (3):
 Alternatively, just "log in" to the container once and use the commands there:
 
 docker run --rm -v $vol_models -v $PWD:/data -it ocrd/eynollah bash
+
+## Training with Docker
+
+Build the Docker image
+
+cd train
+docker build -t model-training .
+
+Run the Docker image
+
+cd train
+docker run --gpus all -v $PWD:/entry_point_dir model-training
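Because the training container bind-mounts $PWD at /entry_point_dir, inputs are staged in train/ on the host and results land back there. A sketch of that workflow, where the config file name and data layout are assumptions about the training setup rather than documented names:

```bash
# Stage inputs in train/ before the run; the file and directory names
# below are assumptions for illustration only.
cd train
ls    # e.g. config_params.json  training_data/
docker run --gpus all -v $PWD:/entry_point_dir model-training
# outputs appear back in $PWD afterwards, thanks to the bind mount
```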
@@ -41,19 +41,3 @@ each class will be defined with a RGB value and beside images, a text file of cl
 > Convert COCO GT or results for a single image to a segmentation map and write it to disk.
 * [`ocrd-segment-extract-pages`](https://github.com/OCR-D/ocrd_segment/blob/master/ocrd_segment/extract_pages.py)
 > Extract region classes and their colours in mask (pseg) images. Allows the color map as free dict parameter, and comes with a default that mimics PageViewer's coloring for quick debugging; it also warns when regions do overlap.
-
-### Train using Docker
-
-Build the Docker image:
-
-```bash
-cd train
-docker build -t model-training .
-```
-
-Run Docker image
-
-```bash
-cd train
-docker run --gpus all -v $PWD:/entry_point_dir model-training
-```
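The `ocrd-segment-extract-pages` processor referenced in the context lines above follows the same OCR-D CLI conventions; a minimal sketch with placeholder file group names (its `-h` output lists the actual parameters, including the color map mentioned in the description):

```bash
# Placeholder file group names; run with -h for the real parameters,
# e.g. the free-form color map dict described above.
ocrd-segment-extract-pages -I OCR-D-GT-SEG-PAGE -O OCR-D-IMG-PAGES
```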