ocrd-galley
A Dockerized test environment for OCR-D processors 🚢
WIP. Given an OCR-D workspace with document images in the OCR-D-IMG file group, the example workflow produces:
- Binarized images
- Line segmentation
- OCR text (using Calamari and Tesseract, both with GT4HistOCR models)
- (Given ground truth in OCR-D-GT-PAGE, also an OCR text evaluation report)
If you're interested in the exact processors, versions and parameters, please take a look at the script and possibly the individual Dockerfiles.
Goal
Provide a test environment for producing OCR output for historical prints using OCR-D, in particular ocrd_calamari and sbb_textline_detection, with all dependencies packaged in Docker.
How to use
ocrd-galley uses Docker to run the OCR-D processors in containers. We provide pre-built container images that are downloaded automatically when you run the provided wrappers for the OCR-D processors.
First, install the wrappers into a Python venv:
cd ~/devel/ocrd-galley/wrapper
pip install .
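If you do not already have a venv, a minimal sketch for creating and activating one before running pip install above (the path ~/venv-ocrd-galley is only an example):
python3 -m venv ~/venv-ocrd-galley   # any path will do
source ~/venv-ocrd-galley/bin/activate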
To download models, you need to use the -a flag of ocrd resmgr:
ocrd resmgr download -a ocrd-calamari-recognize default
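To see which resources are already installed, the OCR-D resource manager also provides a listing subcommand (a general ocrd resmgr feature, not specific to ocrd-galley):
ocrd resmgr list-installed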
You may then use the my_ocrd_workflow script to run the workflow on an example workspace:
# Download an example workspace
cd /tmp
wget https://qurator-data.de/examples/actevedef_718448162.first-page.zip
unzip actevedef_718448162.first-page.zip
# Run the workflow on it
cd actevedef_718448162.first-page
~/devel/ocrd-galley/my_ocrd_workflow
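To get an overview of the file groups the workflow has created, you can use the generic OCR-D workspace CLI from within the workspace directory (a general ocrd command, not part of ocrd-galley):
ocrd workspace list-group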
Viewing results
You may then examine the results using PRImA's PAGE Viewer:
java -jar /path/to/JPageViewer.jar \
--resolve-dir . \
OCR-D-OCR-CALAMARI/OCR-D-OCR-CALAMARI_00000024.xml
The example workflow also produces OCR evaluation reports using dinglehopper, if ground truth was available:
firefox OCR-D-OCR-CALAMARI-EVAL/OCR-D-OCR-CALAMARI-EVAL_00000024.html
ppn2ocr
The ppn2ocr script produces a workspace and METS file with the best images for a given document in the Berlin State Library (SBB)'s digitized collection.
Install it with an up-to-date pip (otherwise the installation will fail due to an opencv-python-headless build failure):
pip install -r ~/devel/ocrd-galley/requirements-ppn2ocr.txt
The document must be specified by its PPN, for example:
~/devel/ocrd-galley/ppn2ocr PPN77164308X
cd PPN77164308X
~/devel/ocrd-galley/my_ocrd_workflow -I MAX --skip-validation
This produces a workspace directory PPN77164308X with the OCR results in it; the results are viewable as explained above.
ppn2ocr requires properly set up environment variables for the proxy configuration. At SBB, please read howto/docker-proxy.md and howto/proxy-settings-for-shell+python.md (in qurator's mono-repo).
ocrd-workspace-from-images
The ocrd-workspace-from-images script produces an OCR-D workspace (incl. METS) for the given images.
~/devel/ocrd-galley/ocrd-workspace-from-images 0005.png
cd workspace-xxxxx # output by the last command
~/devel/ocrd-galley/my_ocrd_workflow
This produces a workspace from the files and then runs the OCR workflow on it.
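If you want to sanity-check the generated workspace before or after running the workflow, the generic OCR-D CLI offers a validation command (again a general ocrd command, not specific to ocrd-galley; run it inside the workspace directory):
ocrd workspace validate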
Build the containers yourself
To build the containers yourself using Docker:
cd ~/devel/ocrd-galley/
./build