# My OCR-D workflow

WIP. Given an OCR-D workspace with document images in the `OCR-D-IMG` file group, this workflow produces:

- Binarized images
- Line segmentation
- OCR text (using Calamari and Tesseract, both with GT4HistOCR models)
- (Given ground truth in `OCR-D-GT-PAGE`, also an OCR text evaluation report)
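Each of these outputs is stored as a separate file group in the workspace's METS file. As a rough illustration (not part of this repository), the file groups of a workspace can be listed with a few lines of Python; the group names in the example are assumptions based on the conventions mentioned above:

```python
# Sketch: list the file groups declared in an OCR-D workspace's mets.xml.
# Only assumes the standard METS namespace; group names are illustrative.
import xml.etree.ElementTree as ET

METS_NS = "http://www.loc.gov/METS/"

def list_file_groups(mets_path):
    """Return the USE attribute of every mets:fileGrp element, in order."""
    tree = ET.parse(mets_path)
    return [grp.attrib["USE"]
            for grp in tree.iter(f"{{{METS_NS}}}fileGrp")]
```

For a workspace processed by this workflow, you would expect to see groups such as `OCR-D-IMG` and the OCR output groups alongside it.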

If you're interested in the exact processors, versions and parameters, please take a look at the script and possibly the Dockerfile and the requirements.

## Goal

Provide a test environment to produce OCR output for historical prints, using OCR-D, especially ocrd_calamari and sbb_textline_detection, including all dependencies in Docker.

## How to use

The easiest way is to use the pre-built container. To run the container on an example workspace:

```sh
# Download an example workspace
cd /tmp
wget https://qurator-data.de/examples/actevedef_718448162.first-page.zip
unzip actevedef_718448162.first-page.zip

# Run the workflow on it
cd actevedef_718448162.first-page
~/devel/my_ocrd_workflow/run-docker-hub
```

## Build the container yourself

To build the container yourself using Docker:

```sh
cd ~/devel/my_ocrd_workflow
./build
```

You may then use the script `run` to use your self-built container, analogous to the example above.

## Viewing results

You may then examine the results using PRImA's PAGE Viewer:

```sh
java -jar /path/to/JPageViewer.jar \
  --resolve-dir . \
  OCR-D-OCR-CALAMARI/OCR-D-OCR-CALAMARI_00000024.xml
```
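Besides the graphical viewer, the recognized text can also be pulled out of the PAGE XML directly. A minimal sketch, assuming the 2019-07-15 PAGE namespace (older files may use a different one, so adjust the namespace to what your files actually declare):

```python
# Sketch: print the per-line text of a PAGE XML file.
# Assumption: the file uses the 2019-07-15 PAGE namespace.
import xml.etree.ElementTree as ET

PAGE_NS = "http://schema.primaresearch.org/PAGE/gts/pagecontent/2019-07-15"

def page_text_lines(page_xml_path):
    """Return the Unicode text of every TextLine, in document order."""
    tree = ET.parse(page_xml_path)
    lines = []
    for line in tree.iter(f"{{{PAGE_NS}}}TextLine"):
        # Take the line-level TextEquiv/Unicode, if present.
        unicode_el = line.find(f"{{{PAGE_NS}}}TextEquiv/{{{PAGE_NS}}}Unicode")
        if unicode_el is not None and unicode_el.text:
            lines.append(unicode_el.text)
    return lines
```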

The workflow also produces OCR evaluation reports using dinglehopper, if ground truth was available:

```sh
firefox OCR-D-OCR-CALAMARI-EVAL/OCR-D-OCR-CALAMARI-EVAL_00000024.html
```

## ppn2ocr

The `ppn2ocr` script produces a METS file with the best available images for a given document in the digitized collection of the Berlin State Library (SBB). The document must be specified by its PPN, for example:

```sh
pip install -r ~/devel/my_ocrd_workflow/requirements-ppn2ocr.txt
~/devel/my_ocrd_workflow/ppn2ocr PPN77164308X
cd PPN77164308X
~/devel/my_ocrd_workflow/run-docker-hub -I BEST --skip-validation
```

This produces a workspace directory `PPN77164308X` with the OCR results in it; the results are viewable as explained above.
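If you script around `ppn2ocr`, it can help to sanity-check identifiers before invoking it. The format check below is an assumption, not something specified by this repository: a `PPN` prefix followed by digits, optionally ending in `X` (a common check-character convention); the real script may accept other forms.

```python
# Sketch: sanity-check a PPN-style identifier before passing it to ppn2ocr.
# Assumption: "PPN" + digits, optionally ending in "X" (check character).
import re

PPN_RE = re.compile(r"^PPN\d+[0-9X]$")

def looks_like_ppn(s):
    """Heuristic format check only; does not verify the PPN exists."""
    return bool(PPN_RE.match(s))
```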

`ppn2ocr` requires a working Docker setup and properly set environment variables for the proxy configuration. At SBB, this means following howto/docker-proxy.md and howto/proxy-settings-for-shell+python.md (in qurator's mono-repo).