extend README

vahidrezanezhad 2025-10-22 18:29:14 +02:00
parent c8455370a9
commit 2fc723d292


@@ -93,9 +93,9 @@ The following options can be used to further configure the processing:
| option | description |
|-------------------|:------------------------------------------------------------------------------- |
| `-fl` | full layout analysis including all steps and segmentation classes (recommended) |
| `-light` | lighter and faster but simpler method for main region detection and deskewing (recommended) |
| `-tll` | apply light textline detection; should be passed together with the light version (recommended) |
| `-tab` | apply table detection |
| `-ae` | apply enhancement (the resulting image is saved to the output directory) |
| `-as` | apply scaling |
@@ -109,6 +109,17 @@ The following options can be used to further configure the processing:
| `-sl <directory>` | save layout prediction as plot to this directory |
| `-sp <directory>` | save cropped page image to this directory |
| `-sa <directory>` | save all (plot, enhanced/binary image, layout) to this directory |
| `-thart` | threshold of the artificial class for textline detection. The default value is 0.1 |
| `-tharl` | threshold of the artificial class for layout detection. The default value is 0.1 |
| `-ocr` | perform OCR |
| `-tr` | apply transformer OCR. The default model is a CNN-RNN model |
| `-bs_ocr` | OCR inference batch size. The default batch size is 2 for the TrOCR model and 8 for the CNN-RNN model |
| `-ncu` | upper limit of columns in the document image |
| `-ncl` | lower limit of columns in the document image |
| `-slro` | skip layout detection and reading order |
| `-romb` | apply machine-based reading order detection |
| `-ipe` | ignore page extraction |
If no further option is set, the tool performs layout detection of main regions (background, text, images, separators
and marginals).
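
For illustration, a typical invocation combining the recommended options might look like the sketch below. The `eynollah layout` subcommand name and the file/model paths are assumptions here and should be adapted to your setup:

```
# Sketch: full layout analysis with the recommended light options
# (paths are placeholders, adjust to your installation)
eynollah layout \
-i document.png \
-o output/ \
-m models_eynollah/ \
-fl -light -tll
```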
@@ -124,7 +135,7 @@ The command-line interface for binarization can be called like this:
eynollah binarization \
-i <single image file> | -di <directory containing image files> \
-o <output directory> \
-m <directory containing model files>
```
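
As a concrete usage sketch (the file name and model directory below are placeholders, not part of the original README):

```
# Sketch: binarize a single scan
# (paths are placeholders, adjust to your installation)
eynollah binarization \
-i scan_0001.tif \
-o binarized/ \
-m models_eynollah/
```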
### OCR
@@ -138,9 +149,24 @@ eynollah ocr \
-i <single image file> | -di <directory containing image files> \
-dx <directory of xmls> \
-o <output directory> \
-m <directory containing model files> | --model_name <path to specific model>
```
The following options can be used to further configure the OCR processing:
| option | description |
|-------------------|:------------------------------------------------------------------------------- |
| `-dib` | directory of binarized images (file type must be '.png'). Prediction is done with both RGB and binarized images |
| `-doit` | directory for output images rendered with the predicted text |
| `--model_name` | path to a specific model file to use for OCR |
| `-trocr` | apply transformer OCR; otherwise the CNN-RNN model is used |
| `-etit` | export textline images and text in XML to the output directory (OCR training data) |
| `-nmtc` | do not mask cropped textline images with the textline contour |
| `-bs` | OCR inference batch size. The default batch size is 2 for the TrOCR model and 8 for the CNN-RNN model |
| `-ds_pref` | add an abbreviation of the dataset name to the generated training data |
| `-min_conf` | minimum OCR confidence value. Text lines with a confidence lower than this will be ignored |
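
For illustration, a possible invocation of the default CNN-RNN OCR over a directory of images with existing PAGE-XML layouts might look like the sketch below. The directory names and the confidence threshold are placeholder assumptions:

```
# Sketch: CNN-RNN OCR with existing layout XMLs, skipping low-confidence lines
# (paths and the threshold value are placeholders, adjust to your setup)
eynollah ocr \
-di images/ \
-dx page_xml/ \
-o ocr_output/ \
-m models_eynollah/ \
-bs 8 \
-min_conf 0.3
```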
### Machine-based-reading-order
The machine-based reading-order module employs a pretrained model to identify the reading order from layouts represented in PAGE-XML files.