# ocrd-galley

A Dockerized test environment for OCR-D processors 🚢

Work in progress. Given an OCR-D workspace with document images in the `OCR-D-IMG` file group, the example workflow produces:

* Binarized images
* Line segmentation
* OCR text (using Calamari and Tesseract, both with GT4HistOCR models)
* An OCR text evaluation report (if ground truth is present in `OCR-D-GT-PAGE`)
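Each of these outputs lands in its own OCR-D file group, which corresponds to a subdirectory of the workspace. A minimal sketch of the resulting layout (the workspace here is simulated for illustration; the actual group names depend on the workflow):

```shell
# Simulated workspace: in a real run, the workflow itself creates
# one directory per output file group (names here are examples).
mkdir -p demo-workspace/OCR-D-IMG demo-workspace/OCR-D-OCR-CALAMARI
ls -d demo-workspace/OCR-D-*
```

In a real workspace these file groups are also registered in the METS file, not just present as directories.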

If you're interested in the exact processors, versions, and parameters, take a look at the `my_ocrd_workflow` script and the individual Dockerfiles.

## Goal

Provide a test environment to produce OCR output for historical prints, using OCR-D, especially ocrd_calamari and sbb_textline_detection, including all dependencies in Docker.

## How to use

Due to problems with Travis CI, we currently do not provide pre-built containers.

To build the containers yourself using Docker:

```shell
cd ~/devel/ocrd-galley/
./build
```

You can then install the wrappers into a Python venv:

```shell
cd ~/devel/ocrd-galley/wrapper
pip install .
```
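If you do not have a venv active yet, create and activate one first (the path and name below are only examples):

```shell
# Create a dedicated venv and activate it; a pip install run with the
# venv active installs into this venv instead of the system Python.
python3 -m venv ocrd-galley-venv
. ocrd-galley-venv/bin/activate
```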

You may then use the `my_ocrd_workflow` script to run your self-built containers on an example workspace:

```shell
# Download an example workspace
cd /tmp
wget https://qurator-data.de/examples/actevedef_718448162.first-page.zip
unzip actevedef_718448162.first-page.zip

# Run the workflow on it
cd actevedef_718448162.first-page
~/devel/ocrd-galley/my_ocrd_workflow
```

## Viewing results

You may then examine the results using PRImA's PAGE Viewer:

```shell
java -jar /path/to/JPageViewer.jar \
  --resolve-dir . \
  OCR-D-OCR-CALAMARI/OCR-D-OCR-CALAMARI_00000024.xml
```

If ground truth was available, the example workflow also produces OCR evaluation reports using dinglehopper:

```shell
firefox OCR-D-OCR-CALAMARI-EVAL/OCR-D-OCR-CALAMARI-EVAL_00000024.html
```

## ppn2ocr

The `ppn2ocr` script produces a workspace and METS file with the best images for a given document in the Berlin State Library (SBB)'s digitized collection.

Install it with an up-to-date pip (otherwise installation will fail due to an `opencv-python-headless` build failure):

```shell
pip install -r ~/devel/ocrd-galley/requirements-ppn2ocr.txt
```

The document must be specified by its PPN, for example:

```shell
~/devel/ocrd-galley/ppn2ocr PPN77164308X
cd PPN77164308X
~/devel/ocrd-galley/my_ocrd_workflow -I MAX --skip-validation
```

This produces a workspace directory `PPN77164308X` with the OCR results in it; the results are viewable as explained above.

`ppn2ocr` requires properly set proxy environment variables. At SBB, please read howto/docker-proxy.md and howto/proxy-settings-for-shell+python.md (in qurator's mono-repo).
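In practice this means exporting the usual proxy variables before running the script; the host and port below are placeholders, not SBB's actual settings:

```shell
# Placeholder proxy configuration; substitute your site's real proxy
export http_proxy=http://proxy.example.com:3128
export https_proxy=$http_proxy
export no_proxy=localhost,127.0.0.1
```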

## ocrd-workspace-from-images

The `ocrd-workspace-from-images` script produces an OCR-D workspace (including a METS file) for the given images.

```shell
~/devel/ocrd-galley/ocrd-workspace-from-images 0005.png
cd workspace-xxxxx  # output by the last command
~/devel/ocrd-galley/my_ocrd_workflow
```

This produces a workspace from the files and then runs the OCR workflow on it.