How to Preprocess Images for Text OCR in Python (OCR in Python Tutorials 02.02)
- Published: 6 Apr 2021
- If you enjoy this video, please subscribe.
✅Be my Patron: / wjbmattingly
✅PayPal: www.paypal.com/cgi-bin/webscr...
How to Open an Image with OpenCV: 4:10
01: Invert an Image: 9:47
03: Binarization: 13:33
04: Noise Reduction: 20:40
05: Dilation and Erosion: 28:33
06: Rotation and Deskewing: 35:07
07: Removing Borders: 42:18
08: Missing Borders: 49:09
GitHub Notebook: github.com/wjbmattingly/ocr_p...
If there's a specific video you would like to see or a tutorial series, let me know in the comments and I will try and make it.
If you liked this video, check out www.PythonHumanities.com, where I have Coding Exercises, Lessons, on-site Python shells where you can experiment with code, and a text version of the material discussed here.
You can follow me at:
/ wjb_mattingly
Hey thank you so much! The video is about an hour long, so I’m really happy that your voice is so pleasant and happy to listen to! This video helped me a lot on my project.
Amazing, I am going to recommend to my colleagues. Great work.
I appreciate your step-by-step explanations
What a legend, only one ad in the beginning. You're so damn underrated.
Thank you, I've been looking for a long time
Man, you are an amazing teacher! I love the way you explain things! Thank you, BTW!
Found this actually helpful for people starting from scratch.
This is awesome ! Really helpful for my dissertation research about OCR in the accountability field. Many thanks from Paris !
So happy to hear that! No problem!
Why are we working on Jupyter all of a sudden?
Thank you man for sharing this stuff
you really helped me, thank you
Thank you for the very helpful tutorial. I am wondering if there is any difference between preprocessing a SCENE-TEXT dataset and an OCR (document) dataset? In my view, the black-and-white function will not be needed for scene text, since there are various colors in the image. Thanks again.
I can't believe I found this amazing channel! I've been working on an OCR mobile app to extract fields from scanned invoices, and these videos are exactly what I'm looking for. A million thanks, man!
Awesome! Glad you are finding them useful! I have 10 more videos planned for this series. I am getting over a cold and have not been able to post for a while.
@@python-programming I hope you get well soon, and I'm looking forward to these videos.
same!
did you finish your work ?
@@cuneytsn yes I did
Doing all the steps finally got me to 98% accuracy. Thanks so much!
Awesome! No problem!
How did you calculate accuracy?
just amazing, thank you!
No problem!
Great content and clear explanation
Thanks!
yo bro, really thankya. Big respect
Inverting took me from garbage to great recognition. Thanks!
Awesome! Glad these are useful and working. I was not sure if I explained it well enough.
fire video, thanks bro
Thank you for this awesome tutorial! One question though: can we just manually crop the image instead of removing the borders?
Great videos, thanks for sharing! Would you be able to tackle the problem of removing an image watermark in scanned PDFs, please?
Thank you! This will help me save my work :>
You're welcome!
Thank you so much for this video. This is a great video.
Thanks! No problem! Happy I could help you.
Please upload more videos on this.
Awesome!
good job man!
Thanks!
Hi, what if I have a region of interest of about 150x50 containing a single number, say 1235, that I want to OCR? The number takes up a large portion of the ROI image, about 60% of the height and 50% of the width. The image itself is a frame from a video capture, so it's good quality. What preprocessing do you think should be done? I am using Tesseract for the OCR. Thanks!
Great thanks from Uganda the pearl of Africa
Hi! First, thanks for the tutorial. In my case I had a problem while correcting the rotated image with the function. The link gives you a really cool solution; if anyone has the same problem, just comment out the angle line and put these lines in instead, and it will work perfectly:
middleContour = contours[len(contours) // 2]
angle = cv2.minAreaRect(middleContour)[-1]
As I said, it's in the link given, but this is in case you're in a rush.
Thanks for the channel and the videos for real. I would have liked to have teachers like you while I was studying!!!!
Thanks for your input. I tried every option, but it always comes out rotated between -90º and -30º (approx.). I can't find where the problem is. I'm trying with a picture of my own, like the ones I will process.
Thank you
Sir, is there any way to separate handwritten text and printed text in documents?
For anyone getting an error message when importing pyplot, here are a few solutions:
1. Installation problem ("error in python-slugify setup command: use_2to3"). Solution: update your setuptools in the terminal with "pip install setuptools==58", then try "pip install matplotlib".
2. Error in the import. Solution: change the command to "import matplotlib.pyplot as plt".
Hi! Thanks for the amazing explanation, it has helped me a lot! I am currently working on optimizing the Tesseract engine for my specific needs. However, I am having some problems with rotated documents. For some reason, removing the borders is not working. Since the document is rotated, so are the borders (so I am seeing black triangle-shaped borders). Furthermore, if the document is rotated counter-clockwise just a little bit, it gets corrected by rotating the image 90 degrees counter-clockwise, rather than to the closest 90-degree angle.
Any ideas for the problems I am having?
How did you manage to rotate the image? It rotates the image to 90 degrees every time; I don't know how to fix that!
Thank you very much, Sir. It helped me a lot in my OCR project.
I am so happy to hear that!
Wow good
Hello, can you help me please?
How do you detect inverted text or letters in an image, and then rotate the image back to its normal state? Thanks.
Can you please share the display method you found on Stack Overflow?
from numpy import ones, uint8
kernel = ones((1, 1), uint8)
is a lot faster than importing the entire numpy module
Great and helpful comment!
Thank you for your valuable tutorial, but I have a question: how do I save the extracted text and the corresponding image name in Excel?
Hi, thank you so much for such a detailed explanation and for making a series out of this.
I have one question: can you please tell me, or refer me to a link on, how to get the threshold of an image, so that we can use it as a reference for other images?
Hi! Interesting question. If I understand you right, that is not possible as far as I know. The threshold is a change to the image based on a certain pixel value. Are you instead thinking of trying to get an understanding of the average range of pixels in an image?
@@python-programming Uhh, yes, something like that. Instead of setting the threshold on an image and correcting it via trial and error, I was thinking it might be possible to get the threshold of a perfect image and apply those values to a problematic image. Actually, I was trying your tutorial, and you mentioned that if we use dilation and erosion on a perfectly fine image, it will ruin it, and that's what was happening. So I got to thinking whether there is any way to get the threshold value of an image.
Hello!!
So when I try to convert the image and pass this command:
inverted_image = cv2.bitwise_not(img)
cv2.imwrite('PYTESSERACT/inverted.png', inverted_image)
the output shows me "False". Please tell me how to fix this.
Wow, awesome video, totally subscribing 🙂. I would like a transparency example, as we have invoices scanned from thin paper with duplex pages bleeding through the image. I'm assuming the transparency handling will help?
Thanks! Can you DM me on Twitter with some example images?
@@python-programming hi sorry I don’t do twitter or social media. But simply put sometimes a scanner will produce an image which has some ghosting of the other side coming through
I am working through the video stack and trying to comprehend it all, trying to develop an auto-scan OCR solution of sorts. Really helpful, what you and others have shared.
Great tutorial. How do I deskew an image when the page is rotated upside down? Or at least, how do I get the angle of rotation? If anyone can help, please suggest a method or approach.
How do I extract different kinds of text (handwritten, tabular, free text) from different scanned docs, PDFs, and images using ML OCR and its preprocessing techniques?
Are the functions you built only suitable for this specific document? Or will they be able to handle new input documents?
Hey, any idea why I got "ValueError: not enough values to unpack (expected 3, got 2)" at 15:43? I'm using Python 3.8.2 and OpenCV 4.5.1.
Also, please check patron messages.
Thanks!
Good question. I don't think I explained this in the video. The problem is likely from the display function with the height, width, (sometimes depth). Make sure that im_data.shape[:2] is in your display function. I updated this for GitHub so this error wouldn't pop up, but I don't think I mentioned it in the video. Here is the corrected display function:
def display(im_path):
    dpi = 80
    im_data = plt.imread(im_path)
    height, width = im_data.shape[:2]
    # What size does the figure need to be in inches to fit the image?
    figsize = width / float(dpi), height / float(dpi)
    # Create a figure of the right size with one axes that takes up the full figure
    fig = plt.figure(figsize=figsize)
    ax = fig.add_axes([0, 0, 1, 1])
    # Hide spines, ticks, etc.
    ax.axis('off')
    # Display the image.
    ax.imshow(im_data, cmap='gray')
    plt.show()
@@python-programming Thank you for your response; nonetheless, I had already found and fixed it via SO. The point was that I didn't get the idea.
Here is my current thinking: the code tries to unpack more values from an object than actually exist. So either we have to remove some of the unpacking targets, or make sure the object returns the number of values being requested.
I know this is a bit detailed and off-topic for a video comment section, but a clarifying answer would be greatly appreciated. Learning without gaps is better in the long run, don't you agree?
Precisely the problem! It was a good catch, and I'm glad this comment is here. No worries about it being off topic; I think it is very much on topic.
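The unpacking error discussed above is easy to reproduce: a color image's shape has three values, a grayscale image's only two. A minimal NumPy-only illustration of why slicing with [:2] fixes it:

```python
import numpy as np

color = np.zeros((100, 200, 3), dtype=np.uint8)  # H x W x channels
gray = np.zeros((100, 200), dtype=np.uint8)      # H x W only

# This is the line that raises for grayscale images:
#   height, width, depth = gray.shape   -> ValueError (expected 3, got 2)
# Slicing the shape first works for both cases:
h1, w1 = color.shape[:2]
h2, w2 = gray.shape[:2]
```

Since every preprocessing step after grayscaling produces 2-D images, the `shape[:2]` form is the safe default for a shared display function.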
Good video. Not completely relevant for my case, as I don't know what the images will look like, but still useful!
Thanks!
Thanks a lot!! That's exactly the kind of material I was looking for. I have a question about deskewing, though. Although it straightens the document well, there are cases where it completely flips it 90 degrees (when the original skew was quite small), so I had to limit the angle to 35/-35 and ignore anything outside that range. However, what bothers me more is that the OCR results get worse after applying the deskew. The results before are quite good, but if I apply it, it fails to detect many words that it detects without it, and it even detects another language not present in the document (I use two languages because in some documents they both appear). Again, these two issues don't happen when I don't deskew.
Thanks for the comment! Glad you liked the video. That is odd indeed! I am not sure why that would be the case.
The reason this problem happens is that the rotation function does the same things we already did, such as converting the picture to grayscale, dilation, etc. So if you do it again, the quality changes, not in your favor this time, which makes rotating the picture correctly impossible.
So you simply need to start by rotating the picture, THEN move on to the other steps.
Hello sir, I am working on a project where I need the user to upload a PDF only if its page images contain handwritten text and not computer-typed text. Is there any Python library which can help me with this?
Great video. But I need to point out that `/` is actually a forward slash
I'm unable to access the notebook. Can you please share the notebook?
Ok nice
Please make a video on how to detect tables and how to calculate the OCR accuracy on an image.
That sounds like fun! I will do that.
Please, please make a video on how to detect tables in an image!
What is the kernel in the deskewing function?
I tried the code for rotation and deskewing, but it doesn't change anything. I used a Raspberry Pi 4 to try it. Is it possible that it can't run on a Raspberry Pi? Please reply.
(14:49) Why do we use the grayscale function when we can simply use the built-in cvtColor function?
At 41:15, line 395, I get an error saying: name 'new' is not defined (I'm working with PyCharm).
Can you suggest an algorithm or library to separate text from an image containing text in different languages? I have an image with text in both German and English, and I want to separate the text for summarization. Any help will be appreciated.
OpenCV. I have 5 videos coming out on how to do this starting next week. You will want to use bounding boxes.
Lots of modules are available here; you can use Fitz (PyMuPDF), PyPDF2, pdfplumber, Camelot, or pytesseract.
First, I'd like to thank you for the tutorial. I have a question. Around the 28th minute you talk about the pros and cons of noise removal, and you said that for this particular image you wouldn't use it, for the stated reasons. So what if you have a thousand images you want to process with a maximum rate of success? How would you apply noise removal to the images that actually benefit from it, and skip it for those that don't, where using it would result in worse performance?
Thanks for the comment! I'm glad you liked the video! There are a few solutions here, perhaps a small image classifier to detect those pages that have a lot of noise and flag them for manual adjustments. It likely wouldn't take many examples to train a binary model to do this. Another approach could be with Open-CV but it may take longer for some data to get that up and running effectively. It really comes down to the images you are using, though.
What happened to using PIL to open an image..?
Rescaling code is not filled in the final notebook on GitHub.
Wait, we don't need to invert the image for Tesseract 4.0?
Your display function is not working on a grayscale image because the function needs three values ("width", "height", "depth"), but grayscale gives only 2.
Which IDE did you use in the video?
I am learning tons of stuff from this OCR series! It has helped me massively in developing my own project! But the display (im_path) function you are using to print every image is never working for me; I keep getting the same error ---> "ValueError: not enough values to unpack (expected 3, got 2)". Can you please help me with this?
Not sure if you have solved this or not, but this is how I got around the issue. Please comment if there is a better way. When you convert to grayscale, you lose the depth dimension, so I branch on the number of dimensions inside the function. I hope this helps.
# 3 dimensions for RGB, 2 dimensions for grayscale
import matplotlib as mpl
import matplotlib.pyplot as plt

def display(im_path):
    dpi = mpl.rcParams['figure.dpi']
    im_data = plt.imread(im_path)
    if im_data.ndim == 3:
        height, width, depth = im_data.shape
    else:
        height, width = im_data.shape
    figsize = width / float(dpi), height / float(dpi)
    fig = plt.figure(figsize=figsize)
    fig.add_axes([0, 0, 1, 1]).imshow(im_data, cmap='gray')
    plt.show()
Weird, I agree with you, but if the shape is only 2-dimensional for grayscale, why didn't the error come up in the video?
Did you make a video for rescaling? :)
I do not think I did. I will try and do that.
Which theme are you using for Jupyter? Please share how you set it up so it doesn't affect IntelliSense.
I'm using the standard dark theme in JupyterLab
I get this error when trying to display the gray version of the image at 18:12:
Cell In[4], line 4, in display(img_path)
2 dpi = 80
3 img_data = plt.imread(img_path)
----> 4 height, width, depth = img_data.shape
6 img_size = width / float(dpi), height/ float(dpi)
8 fig = plt.figure(figsize=img_size)
ValueError: not enough values to unpack (expected 3, got 2)
Copy and paste the function, call it "display2" for example, and remove the "depth"; then use display2 when display doesn't work.
which IDE is used in this
Could you please tell me where you write this code? I mean, is it in Python or IDLE?
For this I used Jupyter Lab
Hi Sir could you do a session on Houghline transformation
Thats a great idea!
Bro, did you forget to fill in the image rescaling on GitHub? Anyway, thank you.
Will it work with scanned pdf?
Hi
I just copied the same exact code that you have used for deskewing and also used the image that you used, but it doesn't work correctly, could you guide me through this?
Thank you
Whats the error?
@@python-programming
I had imported the rotated image with borders accidentally.
my bad
thank you for sharing your knowledge. I am learning a ton of new things
No worries at all! Glad you figured it out and I am glad you are getting a lot from this channel!
Amazing video.
I want the deskew code, please.
What is the name of the editor application? Thank you.
Dear Sir, I am your subscriber.
I want to create a tool that finds text errors in an image.
For example:
if I forgot to write CONTACT US, BUY NOW, or CONTACT NUMBER, or made a spelling mistake in my social media post,
the tool finds the error and suggests what is missing or incorrect in the post.
🙏 Please guide me and suggest what course I need to buy or what I need to learn to create this tool.
Thank you!
Border removal does not work with white borders. Anyone has any idea why?
How can I detect checkbox yes and no answers?
I get the following error; can anyone suggest a solution?
error: OpenCV(4.5.4) :-1: error: (-5:Bad argument) in function 'dilate'
> Overload resolution failed:
> - src is not a numpy array, neither a scalar
> - Expected Ptr<cv::UMat> for argument 'src'
I need OCR detection for the Kannada language; how can I do that?
not able to access the notebook!
Broooooo!
How do I retrain OCR? I want to detect live licence plate numbers.
That would be object detection first, then OCR.
@@python-programming I already did the object detection part and trained the license plate detector in YOLOv4. I cropped the license plates from the images and am having a hard time using EasyOCR to get the text from the plates.
@@modeltrainer1246 Excellent. Okay, mind DMing me on Twitter with some sample images?
Hi bro, have you figured it out? How can I retrain my model to apply OCR now that object detection is done?
Can i use it for handwritten text recognition? Please?
HTR is a different problem. Much more custom solutions are needed. Transkribus or OCR4All (open source) are both good options to consider.
@@python-programming Thanks
display('photos/gray.jpg')
ValueError Traceback (most recent call last)
Cell In[83], line 1
----> 1 display1('photos/gray.jpg')
Cell In[65], line 4, in display1(im_path)
2 dpi = 80
3 im_data = plt.imread(im_path)
----> 4 height, width, depth = im_data.shape
6 figsize = width / float(dpi), height / float(dpi)
8 fig = plt.figure(figsize=figsize)
ValueError: not enough values to unpack (expected 3, got 2)
Jupyter lab is saying “No module named matplotlib”
Not working on Windows... I am stuck with this...!
I need to have scanned PDFs converted into text.
Good idea for a video!
Tutorial*
i was wondering if you would be interested in a big project with a lot of clout.....
you didn't say "no homo" man