Update README.md (layout detection)

Clemens Neudecker 4 years ago committed by GitHub
parent a53915e002
commit 0466d6914e

@@ -17,8 +17,7 @@ The first three stages are based on [pixelwise segmentation](https://github.com/
For the purpose of text recognition (OCR), and in order to avoid noise being introduced from text outside the printspace, one first needs to detect the border of the printed frame. This is done by a binary pixelwise segmentation model trained on a dataset of 2,000 documents, of which about 1,200 come from the [dhSegment](https://github.com/dhlab-epfl/dhSegment/) project (you can download the dataset from [here](https://github.com/dhlab-epfl/dhSegment/releases/download/v0.2/pages.zip)) and the remainder were annotated at SBB. For border detection, the model needs to be fed with the whole image at once rather than split into patches.
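To illustrate the whole-image strategy, here is a minimal sketch of what such inference could look like, assuming a Keras model with a fixed input size, two output classes and a hypothetical model file name (this is not the tool's actual code):

```python
# Minimal sketch: whole-image inference for border (printspace) detection.
# "border_model.h5" and the class index 1 for printspace are assumptions
# made purely for illustration.
import numpy as np
import cv2
from tensorflow.keras.models import load_model

model = load_model("border_model.h5", compile=False)    # hypothetical model file
_, h, w, _ = model.input_shape                          # fixed model input size

img = cv2.imread("scan.tif")
orig_h, orig_w = img.shape[:2]

# Feed the whole page at once: resize to the model input size instead of patching.
x = cv2.resize(img, (w, h)).astype(np.float32) / 255.0
pred = model.predict(np.expand_dims(x, 0), verbose=0)[0]     # (h, w, 2) class probabilities
printspace = (pred.argmax(axis=-1) == 1).astype(np.uint8)    # assumed printspace class

# Scale the binary printspace mask back to the original page size.
printspace = cv2.resize(printspace, (orig_w, orig_h), interpolation=cv2.INTER_NEAREST)
```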
## Layout detection
As a next step, text regions need to be identified by means of layout detection. Again, a pixelwise segmentation model was trained, this time on 131 labeled images from the SBB digital collections, including some data augmentation. Since the tool targets historical documents, we consider as main region types text regions, separators, images, tables and background - each with their own subclasses, e.g. in the case of text regions, subclasses like header/heading, drop capital, main body text etc. While it would be desirable to detect and classify each of these classes in a granular way, there are also limitations due to the need for a suitably large and balanced training set. Accordingly, the current version of this tool is focussed on the main region types background, text region, image and separator.
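Assuming the layout model is applied patch-wise (unlike the border model), a rough sketch of such an inference loop could look like the following; patch size, class order and normalization are assumptions, not the actual implementation:

```python
# Hypothetical patch-wise layout inference producing a label map with
# 0=background, 1=text region, 2=image, 3=separator (assumed class order).
import numpy as np

def predict_layout(model, page: np.ndarray, patch: int = 448) -> np.ndarray:
    """Return a (H, W) label map for the four main region types."""
    h, w = page.shape[:2]
    labels = np.zeros((h, w), dtype=np.uint8)
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            tile = page[y:y + patch, x:x + patch]
            ph, pw = tile.shape[:2]
            # Pad border tiles up to the fixed model input size.
            padded = np.zeros((patch, patch, 3), dtype=np.float32)
            padded[:ph, :pw] = tile / 255.0
            pred = model.predict(padded[np.newaxis], verbose=0)[0]
            labels[y:y + ph, x:x + pw] = pred.argmax(axis=-1)[:ph, :pw]
    return labels
```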
## Textline detection
The last step is another binary pixelwise segmentation, this time to classify textline pixels in the document. For textline segmentation, ground truth was initially only available for single-column documents, which means that the scale of the documents was almost the same. We tried to compensate for this by feeding the model with the documents at different scales, but even with this augmentation it was not easy to cover the whole spectrum of scales. We therefore applied the trained model, with its parameters tuned for multi-column documents, to detect textlines there, and then used these results as additional ground truth to train a new model, which turned out to be much more robust.
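The scale augmentation mentioned above could look roughly like the following sketch, where each single-column training page and its textline label mask are rescaled to several factors before patches are extracted (the scale factors are assumptions, not the values actually used):

```python
# Hypothetical scale augmentation for textline training data.
import cv2

SCALES = [0.5, 0.75, 1.0, 1.25, 1.5]   # assumed spread of document scales

def multiscale_pairs(image, mask):
    """Yield (image, label mask) pairs rescaled to each factor in SCALES."""
    h, w = image.shape[:2]
    for s in SCALES:
        size = (int(w * s), int(h * s))
        yield (cv2.resize(image, size, interpolation=cv2.INTER_AREA),
               cv2.resize(mask, size, interpolation=cv2.INTER_NEAREST))
```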
