
My OCR-D workflow

[Build Status badge]

WIP. Given an OCR-D workspace with document images in the OCR-D-IMG file group, this workflow produces:

  • Binarized images
  • Line segmentation
  • OCR text (using Calamari and Tesseract, both with GT4HistOCR models)
  • (Given ground truth in OCR-D-GT-PAGE, also an OCR text evaluation report)

If you're interested in the exact processors, versions and parameters, take a look at the my_ocrd_workflow script, and possibly also the Dockerfile and requirements.txt.
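
For example, after a successful run you can list the file groups the workflow has added to a workspace with the OCR-D command-line tool (assuming the ocrd CLI is available; the group names in the comments are illustrative):

# Run inside the workspace directory to list its file groups
ocrd workspace list-group
# Output includes groups such as OCR-D-IMG, OCR-D-GT-PAGE,
# OCR-D-OCR-CALAMARI, OCR-D-OCR-CALAMARI-EVAL, ...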

Goal

Provide a test environment for producing OCR output for historical prints using OCR-D, in particular ocrd_calamari and sbb_textline_detection, with all dependencies packaged in Docker.

How to use

The easiest way to use this workflow is the pre-built container. To run the container on an example workspace:

# Download an example workspace
cd /tmp
wget https://qurator-data.de/examples/actevedef_718448162.first-page.zip
unzip actevedef_718448162.first-page.zip

# Run the workflow on it
cd actevedef_718448162.first-page
~/devel/my_ocrd_workflow/run-docker-hub

Build the container yourself

To build the container yourself using Docker:

cd ~/devel/my_ocrd_workflow
./build
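
Afterwards, the freshly built image should appear in your local Docker image list (see the build script for the exact image tag):

docker image ls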

You may then use the run script to run your self-built container, analogously to the example above.
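
For instance, on the example workspace downloaded earlier:

cd /tmp/actevedef_718448162.first-page
~/devel/my_ocrd_workflow/run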

Viewing results

You may then examine the results using PRImA's PAGE Viewer:

java -jar /path/to/JPageViewer.jar --resolve-dir . OCR-D-OCR-CALAMARI/OCR-D-OCR-CALAMARI_00000024.xml

If ground truth was available, the workflow also produces OCR evaluation reports using dinglehopper:

firefox OCR-D-OCR-CALAMARI-EVAL/OCR-D-OCR-CALAMARI-EVAL_00000024.html

ppn2ocr

The ppn2ocr script produces OCR output for a given document in the State Library Berlin (SBB)'s digitized collection. The document must be specified by its PPN, for example:

./ppn2ocr PPN77164308X

This produces a workspace directory PPN77164308X with the OCR results in it; the results are viewable as explained above.
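
For example, to inspect one of the result pages with the PAGE Viewer (the file group is assumed to match the workflow output above, and <page>.xml is a placeholder; pick an actual PAGE-XML file from the workspace):

cd PPN77164308X
java -jar /path/to/JPageViewer.jar --resolve-dir . OCR-D-OCR-CALAMARI/<page>.xml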

ppn2ocr requires a working Docker setup and properly set up environment variables for the proxy configuration. At SBB, this means:

export HTTP_PROXY=http://http-proxy.sbb.spk-berlin.de:3128/
export HTTPS_PROXY=$HTTP_PROXY; export http_proxy=$HTTP_PROXY; export https_proxy=$HTTP_PROXY
export no_proxy=localhost,digital.staatsbibliothek-berlin.de,content.staatsbibliothek-berlin.de
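
As an optional sanity check that the proxy settings take effect, try a request to an external host (qurator-data.de is used here only as an example; hosts listed in no_proxy bypass the proxy):

curl -I https://qurator-data.de/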