- Videos: 198
- Views: 169,065
AdiTOSH
India
Joined 18 Sep 2016
Oops, I make technical videos!
Keeping it simple. Just doing it.
Lane Detection to Autonomous Driving | Calculate Offset & Steering Angle with Python & OpenCV
🚗 Transform Lane Detection into Autonomous Driving!
In this tutorial, we build upon our lane detection project and take the next big step towards autonomous driving. Learn how to calculate the lane offset and determine the steering angle to simulate self-driving behavior.
🔍 What You’ll Learn:
Recap of lane detection and extracting lane coordinates
Fitting lane points to calculate curvature
Computing lane offset and its impact
Deriving the steering angle using Python and OpenCV
This video is perfect for anyone interested in autonomous vehicles, computer vision, or Python programming.
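As a rough illustration of the offset-to-steering step described above, here is a minimal NumPy sketch. The function name and the fixed look-ahead distance are illustrative assumptions, not the video's exact code: the idea is simply to steer toward the lane centre some distance down the road.

```python
import numpy as np

def steering_angle_deg(offset_m, lookahead_m=10.0):
    """Toy steering-angle estimate: aim the car at the lane centre a fixed
    look-ahead distance ahead (a pure-pursuit-flavoured simplification)."""
    return float(np.degrees(np.arctan2(offset_m, lookahead_m)))

print(steering_angle_deg(0.0))   # centred in the lane -> steer straight (0.0)
print(steering_angle_deg(0.5))   # 0.5 m off-centre -> a small corrective angle
```

A larger offset or a shorter look-ahead produces a sharper correction, which matches the intuition behind the offset-to-steering mapping.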
📖 Pre-requisite Video:
Watch our previous video on lane detection using sliding windows:
1. ruclips.net/video/ApYo6t...
Views: 40
Videos
CORS in .NET: Secure Your APIs with Step-by-Step Implementation
26 views · 21 days ago
Struggling to manage cross-origin requests in your web applications? In this video, we dive deep into CORS (Cross-Origin Resource Sharing) and demonstrate how to implement and configure it in a .NET Core application. Whether you're building APIs or working with modern front-end frameworks, mastering CORS is essential for seamless integration and security. What You'll Learn ✅ What is CORS and wh...
CSP vs CORS Explained: Web Security Made Simple with Demos in 10 Minutes!
56 views · 28 days ago
Are you confused about the differences between CSP (Content Security Policy) and CORS (Cross-Origin Resource Sharing) ? In this video, we’ll demystify these two web security mechanisms with a step-by-step guide that combines theory, practical demos, and implementation insights-all in just 10 minutes! Here’s what you’ll learn: ✅ The Big Picture : How CSP and CORS work and when each is applied in...
Edge Detection in Digital Image Processing with Python Code & Visual Results Explanation
105 views · a month ago
🚀 Master Edge Detection in Digital Image Processing with Python! 🚀 Welcome to this comprehensive tutorial where we dive deep into edge detection , a fundamental concept in digital image processing. In this video, you'll not only understand the theory behind edge detection but also see it in action with Python code implementations and visual results . 🔍 What You'll Learn: 1. The role of kernels ...
Content Security Policy Explained | Prevent XSS with CSP, Nonce, and Unsafe-Inline Walkthrough
197 views · a month ago
Are you looking to secure your web applications from Cross-Site Scripting (XSS) attacks? In this video, I’ll take you through a step-by-step guide to understanding and implementing Content Security Policy (CSP)-a powerful browser security feature to prevent malicious code injection. What makes this video unique? While most explainers stop at theory, I go further to show you CSP in action with a...
Lane Detection with Sliding Windows | Map Lanes to Original Video Frame | OpenCV Python Tutorial
252 views · a month ago
Welcome to another exciting image processing project! 🎥 In this video, we take lane detection to the next level by detecting lanes in a transformed perspective and mapping them back to the original video frame. This tutorial builds upon the previous video, where we used sliding windows to detect lanes in a bird's-eye view. What You'll Learn: ✅ How to detect lanes using the sliding windows metho...
Perspective Transformation | OpenCV in Python | Image Processing [2024 Enhanced Edition]
213 views · a month ago
Enhanced Audio Edition: Perspective Transformation with OpenCV in Python This updated tutorial on perspective transformation in Python using OpenCV brings you improved audio quality and refined explanations for a clearer, more engaging learning experience. This video covers the essentials of perspective transformation, a foundational concept in digital image processing and computer vision. Vide...
Warp Perspective with OpenCV | Document Scanner | Python Image Processing Tutorial
144 views · 2 months ago
This video focuses on implementing perspective transformation using OpenCV in Python to build a document scanner. Images can undergo two main types of transformations: 1. Geometrical Transformation - Alters pixel positions and shapes. 2. Intensity Transformation - Changes pixel intensity values. Perspective transformation is a type of geometrical transformation that reshapes the pixel coordinat...
Azure ML Essentials for Research Projects | Start Your ML Journey Here!
55 views · 2 months ago
🌟 Welcome to Azure ML Essentials for Research Projects! 🌟 In this video, you’ll get a sneak peek into my comprehensive RUclips series designed to help college students and beginners unlock the power of machine learning with Microsoft Azure ML. Whether you're looking to enhance your research or kickstart a career in data science, this series has everything you need to get started. 🎓 What You'll ...
Automate Your .NET Builds with CI/CD in Azure DevOps | Quick & Easy Guide!
51 views · 2 months ago
In this video, we’ll walk through setting up Continuous Integration (CI) for a .NET project in Azure DevOps, streamlining the build process and automating your workflow. Whether you’re new to CI or just looking to enhance your DevOps skills, this guide will get you up and running quickly. In This Video, You’ll Learn: Setting Up CI in Azure DevOps - Step-by-step instructions for configuring CI i...
Deploy .NET Web App with Docker on Azure in 15 Minutes | Full Guide with DevOps Pipeline
135 views · 2 months ago
Deploy a .NET Web App with Docker on Azure in 15 Minutes! | No Prerequisites, Full Guide with DevOps Pipeline In this step-by-step guide, I’ll walk you through deploying a .NET web application on Azure using Docker and an Azure DevOps pipeline-all in just 15 minutes! This tutorial is designed with beginners in mind, so there are no prerequisites. Whether you’re new to DevOps or experienced but ...
VGG Transfer Learning Tutorial in Azure ML Studio: Custom Image Classification Made Easy
78 views · 2 months ago
In this video, I’ll walk you through retraining the powerful VGG model for custom image classification using Azure ML Studio! We’ll cover everything from a quick intro to transfer learning and the architecture of VGG, to hands-on steps for preparing the dataset, configuring the experiment in Azure ML Studio, and retraining the model with your custom data. Whether you’re new to transfer learning...
Beginner’s Guide to Azure ML: Authoring Notebooks and Python Scripting
125 views · 2 months ago
🌟 Welcome to the Azure Machine Learning Fundamentals Series! 🌟 In this tutorial, we dive into the world of authoring notebooks in Azure ML by creating Python scripts. Whether you’re a beginner or looking to sharpen your skills, this video will guide you through: Setting up your Azure Machine Learning workspace Creating a structured folder for your projects Writing and testing a simple Python sc...
Azure ML Studio vs Google Colab: Which Should You Use for ML?
79 views · 2 months ago
Azure ML Studio vs Google Colab: Which Platform is Best for Machine Learning? In this video, we compare two popular machine learning platforms: Azure ML Studio and Google Colab. If you're wondering which platform is better for training models, you're in the right place! We walk you through the process of training a LeNet-5 model, giving you a hands-on look at how each platform performs. From se...
LeNet-5 CNN Tutorial: Learn, Build & Train Your CNN with Azure ML | Using Notebooks in Azure ML
139 views · 2 months ago
Azure Machine Learning: The Ultimate Beginner's Guide to Pipelines (designer)!
94 views · 2 months ago
Set Up Your Azure Machine Learning Workspace: Complete Beginner's Guide | FREE
199 views · 2 months ago
Integrate Microsoft Account Authentication in .NET Core | Step-by-Step Tutorial
207 views · 3 months ago
Master .NET 8: Build Powerful ASP.NET Core Web Apps with Razor Pages!
241 views · 3 months ago
Selenium vs Playwright: Which One Wins? Pros & Cons in 5 Minutes!
86 views · 4 months ago
Selenium in 15 Minutes: Hands-On Web Automation Tutorial
248 views · 4 months ago
Selenium Basics: Your 5-Minute Introduction to Web Automation | Head First Guide
74 views · 4 months ago
Integrating AI Image Generation Into Web Apps | React (Node.js), OpenAI | Step-by-Step Project guide
217 views · 6 months ago
How to Explore New York in Layover | Cheapest 24 hours in NY | Complete practical travel guide
58 views · 6 months ago
US Hyderabad Consulate - Travel Guide | Visa | Know well before you plan your trip
601 views · 6 months ago
Image Processing - Shared Edge Detection | Fun Project #opencv
291 views · 8 months ago
US Kolkata Consulate - Travel Guide | Visa | Know well before you plan your trip
4.8K views · 8 months ago
How to run C# in visual studio code | Dotnet - Running and Debugging with VS Code
403 views · 11 months ago
Implementing SRCNN in Python using Keras | Image Super Resolution | Tutorial
5K views · a year ago
What is Single Image Super Resolution | Course of Development | Head First | best Explanation
306 views · a year ago
Great video. Can't wait for the next video
Hi @@orientasiprodikelasA, thank you so much! I really appreciate your support and enthusiasm. Glad you enjoyed the video!
@AdiTOSH yeah, I enjoyed your video. Maybe next video for controlling autonomous car using Model Predictive Control?
I’m so glad to hear you enjoyed the video-thank you for your kind words! 😊 Model Predictive Control sounds like a fascinating topic, and I’ll definitely consider it, but no promises just yet. That said, I’m really looking forward to hearing about your progress on the project when you get to it! It’s always exciting to see how these ideas come to life in different applications. Best of luck, and keep me updated!
Nice video. I would like to ask: what algorithm is used in the lane detection? Is it called histogram and sliding window? I rarely hear about that algorithm; is it different from the Hough Transform? Please answer me, thank you.
Hi @@orientasiprodikelasA, thank you for your kind words! 😊 Great question-let me clarify this for you. In this video, the lane detection uses a histogram-based approach and sliding windows to identify lane pixels. Here's how it works: 1️⃣ Histogram: It helps locate the base points of the lanes by analyzing pixel intensity in the lower part of the image. 2️⃣ Sliding Windows: Starting from the base points, sliding windows are used to follow the lane lines vertically and gather lane pixels efficiently. This method is different from the Hough Transform, which detects lines by finding intersections in a parameter space. The sliding window approach is more specific to curved or non-linear lanes, making it better suited for many real-world scenarios like highways. If you'd like, I can make a more detailed comparison of these methods in a future video. Let me know if that would be helpful! 😊
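To make the histogram step described above concrete, here is a minimal NumPy-only sketch on a toy binary bird's-eye image (the array sizes, names, and the toy frame are illustrative, not the project's actual code; the real pipeline runs this on thresholded video frames before the sliding windows track upward from the base points):

```python
import numpy as np

def lane_base_points(binary_warped):
    """Sum the bottom half of a binary bird's-eye image column-wise;
    the histogram peaks on the left and right halves give the lane bases,
    which are the starting points for the sliding windows."""
    h, w = binary_warped.shape
    histogram = binary_warped[h // 2:, :].sum(axis=0)
    midpoint = w // 2
    left_base = int(np.argmax(histogram[:midpoint]))
    right_base = int(np.argmax(histogram[midpoint:])) + midpoint
    return left_base, right_base

# Toy bird's-eye frame: two vertical "lane lines" at columns 4 and 15
frame = np.zeros((8, 20), dtype=np.uint8)
frame[:, 4] = 1
frame[:, 15] = 1
print(lane_base_points(frame))  # (4, 15)
```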
@@AdiTOSH A very good explanation. I want to use this lane detection as a reference path for my control. What meaningful parameters should I take? In Hough Transform, it provides x and y coordinates, but how about this sliding window method? Great video, I’ve subscribed to your channel.
Thank you so much @@orientasiprodikelasA for subscribing! 😊 I'm glad you found the explanation helpful. For using this lane detection as a reference path for control, here are the meaningful parameters you can extract: 1️⃣ Lane Pixel Coordinates (x, y): Just like Hough Transform, you can gather the x and y coordinates of the detected lane pixels from the sliding window method. These can help you define a trajectory or centerline. 2️⃣ Lane Polynomial Coefficients: After detecting the lane pixels, this method fits a polynomial curve (usually a second-order polynomial) to approximate the lane’s shape. The coefficients of this polynomial (a, b, c) are useful for describing the lane curvature and slope. 3️⃣ Lane Offset: Calculate the car's offset from the lane center (typically based on the midpoint between the detected left and right lanes). This is critical for control systems. If you're planning to implement a control algorithm, the polynomial coefficients and offset values are particularly helpful for tasks like path following and steering correction. Let me know if you'd like more guidance, and I'd be happy to assist. Best of luck with your project-it sounds exciting! 😊
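A minimal sketch of points 2 and 3 above, using synthetic straight-lane pixels (the 1280x720 frame size and the 3.7/700 metres-per-pixel scale are common assumptions for highway footage, not necessarily the values used in the video):

```python
import numpy as np

# Synthetic lane pixels: straight lanes at x=200 and x=1000 in a 1280x720 frame
ys = np.arange(720, dtype=float)
left_fit = np.polyfit(ys, np.full_like(ys, 200.0), 2)    # (a, b, c) of x = a*y^2 + b*y + c
right_fit = np.polyfit(ys, np.full_like(ys, 1000.0), 2)

y_eval = 719.0  # bottom row of the frame, closest to the car
left_x = np.polyval(left_fit, y_eval)
right_x = np.polyval(right_fit, y_eval)

# Offset of the camera (frame centre) from the lane centre, converted to metres
lane_centre = (left_x + right_x) / 2.0
frame_centre = 1280 / 2.0
xm_per_pix = 3.7 / 700  # assumed metres-per-pixel scale in x
offset_m = (frame_centre - lane_centre) * xm_per_pix
print(round(offset_m, 3))  # ~0.211 m off-centre for this synthetic case
```

The polynomial coefficients describe curvature, and the offset at the bottom row is what a controller would feed into steering correction.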
@@AdiTOSH yes, I'm interested in the Hough Transform too for detecting lanes.
@@orientasiprodikelasA that's great to hear! 😊 The Hough Transform is indeed a powerful technique for lane detection, especially for straight or slightly curved lanes. I can create a dedicated video explaining how to use the Hough Transform for lane detection, comparing it with the sliding window method, and discussing its pros and cons. Let me know if that would be helpful! Thanks again for your enthusiasm and support-it means a lot! 🚗✨
What do you think about this video? Let me know in the comments below.
What a great video! I've always struggled to properly understand the difference between CSP and CORS, but with this explanation everything is much clearer. I once had a problem in a project because I hadn't configured them properly 😅. Now I'm learning more about this at Código Heroe, and I'm really enjoying it 🙌🏻.
Thank you so much for your kind words! 😊 I'm glad the video helped clarify the difference between CSP and CORS-it’s definitely a tricky topic, and I can relate to running into issues when they’re not configured properly. It's great that you're diving deeper into this topic with Código Heroe! If you have any questions or if there’s another web security concept you'd like me to cover in a video, feel free to let me know. 🙌🏻 I used a translator app to write the following in Spanish. Please forgive any errors! ¡Muchas gracias por tus amables palabras! 😊 Me alegra saber que el video ayudó a aclarar la diferencia entre CSP y CORS. Es un tema complicado, y puedo relacionarme con los problemas que surgen cuando no se configuran correctamente 😅. ¡Qué genial que estés profundizando en este tema con Código Heroe! Si tienes alguna pregunta o hay otro concepto de seguridad web que te gustaría que cubra en un video, no dudes en decírmelo. 🙌🏻 Usé una aplicación de traducción para escribir esto en español. ¡Perdón por cualquier error!
Hello sir, do you have a tutorial on how to deploy an OpenCV backend on Azure or any free website? I have tried to deploy my OpenCV + Flask backend. It's just a route I created that uses OpenCV to count the objects in an image. However, when I deployed that OpenCV backend on Railway, it doesn't work. ChatGPT suggested using Docker so that the dependencies of OpenCV can be installed, but I don't know how to go about it. I guess I just need to understand Docker, maybe? Is Azure free? Is it easy to deploy on Azure?
@@adfinemrising Hi there! Thanks for your question-your OpenCV + Flask project sounds exciting! Deploying it with Docker is a smart idea, as it ensures all your dependencies (like OpenCV) work seamlessly on the server. Let me help you out with this! 1. Docker for OpenCV + Flask: Docker is perfect for packaging your app along with OpenCV dependencies. I recommend starting with a simple Dockerfile to create a lightweight container. In fact, in my video on deploying a .NET web app with Docker on Azure, I show how to set up Docker and use it for deployments. The process will be quite similar for your Flask app: 2. Is Azure Free? Yes! Azure offers a free tier with services like Azure App Service and Azure Container Instances. You can easily deploy your containerized app without incurring costs as long as you stay within the free limits. 3. Is Azure Easy to Use? Definitely! Once you understand Docker (which I can help with), deploying on Azure is straightforward. My video walks through using Docker and Azure together step by step. If you follow a similar approach, you'll have your OpenCV backend running in no time. If you're interested, I can create a tutorial showing: ->How to create a Docker container for your OpenCV + Flask app. ->How to deploy it on Azure using Azure App Service or Azure Container Instances. Let me know if this would help, or feel free to share any specific questions-I’d be happy to assist! 😊
@@AdiTOSH ->How to create a Docker container for your OpenCV + Flask app. ->How to deploy it on Azure using Azure App Service or Azure Container Instances. absolutely!! both would help, thank you very much in advance sir!!
Hi again @@adfinemrising 😊 Thank you for confirming-I'm thrilled that you found my suggestions helpful. I’ll start working on a tutorial covering both topics: 1⃣ How to create a Docker container for your OpenCV + Flask app. 2⃣ How to deploy it on Azure using Azure App Service or Azure Container Instances. In the meantime, if you’d like to get a head start, I recommend checking out my video on deploying a .NET web app with Docker on Azure-it introduces Docker and Azure fundamentals that will also apply to your project: 🔗 ruclips.net/video/akyE98yY-E8/видео.htmlsi=PNuLiHtssmhI6iLJ | Deploy .NET Web App with Docker on Azure in 15 Minutes | Full Guide with DevOps Pipeline Stay tuned, and I’ll notify you once the new tutorial is live. If you have any specific challenges you’re facing right now with Docker or Azure, feel free to share them-I’d be happy to assist further! 😊
@@AdiTOSH i've already subscribed hehe, there not many people who do opencv. so seeing someone doing deployment is a god send! i'll watch this video too so i can familiarize myself with azure and docker. thanks again!!
Hello, can you teach us how to write traffic-sign code?
@@mahdiAsh-k6b Thank you for your comment! If you're asking about detecting and recognizing traffic signs, that's definitely an exciting topic in computer vision. It involves techniques like object detection and image classification using tools like OpenCV or deep learning frameworks. Let me know if you're interested, and I could consider creating a video on how to detect and classify traffic signs. 😊
What do you think about this video? Let me know in the comments below.
Where to keep mobile phone😊
@@ArmanHudda-w9m For the US visa interview in Kolkata, mobile phones and other electronic devices are generally not allowed inside the consulate premises. However, you can keep your mobile phone in nearby locker facilities or safe-deposit services available around the consulate. In my experience, it is best to leave your phone in your hotel room and not bring it at all if you are going solo. If the venue is the new Pataka House, some visitors have shared that there are local shops or kiosks nearby offering locker services for a small fee. Make sure to confirm their reliability before handing over your belongings. I recommend arriving one day in advance to explore these options and ensure a smooth experience the next day. Let me know if you have any more questions! 😊
Can you share your YAML file?
@powerpat1 Sure! Here's the YAML file used in the video: drive.google.com/file/d/1CGdbrX1Py6TKNBs-6xNAEumYKke-W4FU/view?usp=sharing Feel free to check it out and let me know if you have any questions or need clarification on any part of it! 😊
The venue has changed from this place to Pataka House, which is 1.5 km away.
@nandagopalchalasani6964 Thank you so much for the update! It's incredibly helpful for everyone planning their trip. I'll make sure to highlight this in the video description and a pinned comment so viewers are aware of the new venue at Pataka House. If you have any additional tips or details about the new location, feel free to share. It’ll definitely help others! 😊
If you found this video helpful, check out my CSP vs CORS Explained video to further strengthen your web security knowledge! - ruclips.net/video/OOYVPKeBmHo/видео.html
🇧🇷 Will there be any implementation of CNN?
@@laudemirferreira3227 Thank you for your support from Brazil! A CNN-based approach for lane detection is a great idea, and since my channel already covers CNN projects, I'll definitely consider it for a future video. Stay tuned!
What do you think about this video? Let me know in the comments below.
There’s part-2 to this - see the lanes mapped back to the original video frame: ruclips.net/video/QkfVvktGyEs/видео.html! 🚗✨
Traveling to Kolkata for your US visa interview? After your safe trip, come back and share your tips here to help others!
I've created a new video on Perspective Transformation with improved audio quality and clearer explanations. 👉 Watch it here: ruclips.net/video/YGSAhRA1GTw/видео.html Let me know your thoughts on the updated version-I’d love to hear your feedback!
This concept and bird's-eye view are similar, right?
You're absolutely right! Perspective transformation can be used to achieve a bird’s eye view, but it’s also versatile enough for other angles too. 😊
What do you think about this video? Let me know in the comments below.
Can you please tell me why I am getting the error "Error processing file: win_size exceeds image extent. Either ensure that your images are at least 7x7; or pass win_size explicitly in the function call, with an odd value less than or equal to the smaller side of your images" when running the section "Testing Quality difference between source and image (degraded)"?
Hi @@priyampratimsaikia4932, thanks for watching the video! The error you're seeing usually happens when the **window size (`win_size`)** used to calculate image quality metrics (like SSIM) is larger than the dimensions of your input image. This can occur if your degraded or source images are smaller than **7x7 pixels**. To fix this, you can either: 1. **Ensure that your input images are at least 7x7 pixels** before processing. 2. **Specify a smaller `win_size`** in the function call where the error occurs, with an odd value that's smaller than the shortest dimension of your images. For example: # Add the win_size parameter to your SSIM calculation (adjust to your image dimensions) ssim_value = ssim(image1, image2, win_size=3) # 3 or another odd value that works for your image size Also, is it the case that you're using the code as-is from the Google Colab link shared in the description and still getting this error? Let me know if that helps or if you have more questions!
Thanks for helping❤
It's working now
@@priyampratimsaikia4932 Glad I could help! Feel free to reach out if you have more questions. Happy learning!
Thank you so much for making this, Just what I needed.
You're very welcome! I'm so glad the video was helpful for you. Best of luck with your visa application-I hope this makes your travel smoother & easier. Safe travels! Also, I would be grateful and it would be really helpful if you could share the differences from your experience after the travel. Your insights could help others as well!
@AdiTOSH Surely, I'll be more than happy to do that.
Hello sir
I need your help
Hello! How can I assist you? Feel free to share your question or concern here.
Being able to cover deployment end-to-end in 15 minutes is something I’m really excited about, and I hope it makes things easier for anyone diving into Azure and .NET. Thank you for watching-let me know how it helps you!
idiot only music!!!!!
way of talking so lazy !!!!!!!!!!!!!
Thanks for your feedback! This was one of my earlier pieces, and I’ve worked hard to improve since then. I appreciate your understanding.
✅👌
👍👌 Thanks!
Are wallets allowed inside?
@@dilipviswanath9106 Yes, it's allowed.
Can you give me some hotel names?
@@sonaliteachergoa2073 You can choose any nearby hotel with good reviews that fits your budget, but it's best to check recent ratings and prices online. Safe travels!
Can you provide GitHub code
Thank you for your interest in the code! Unfortunately, I can’t share it as it’s a collaborative effort involving a large team. I appreciate your understanding.
@@AdiTOSH So can you at least tell me how to make such a robot, and which learning platform to use?
Thank you @@pushpanjalijha2779 for understanding. Sure, here's a quick guide to get you started: Platform Overview: This project uses ROS (Robot Operating System) and Gazebo for simulation. ROS provides a flexible framework for writing robot software, and Gazebo is a powerful tool integrated with ROS for simulating complex robotic environments. Programming and Libraries: The code is in Python, and for image processing we use OpenCV. If you're new to OpenCV, here's a guide I put together to help you understand the basics: ruclips.net/p/PLCiTDJays9rXh-TycvwVYHLYNg1JZigVS Learning Resources: 1. For ROS: Start with the official ROS documentation, which has extensive tutorials on setting up and working with ROS nodes, topics, and other essential components. - www.ros.org/ 2. For Gazebo: Here's the official Gazebo documentation, which covers everything from installation to advanced simulation features. - gazebosim.org/home
Thanks....It helped me in doing my semester project..
Glad to hear it helped with your semester project! Best wishes for your studies! :)
When I run your video with your code, it runs for a while, then it stops with the error "could not read a frame from the video".
I don't understand why this could be happening. Try removing the entire logic and performing a simple video run. If that also gives the error, there's some issue with the video file.
Awesome. Does HSV work for gray-image videos? Thanks.
Yes it does. But you might not want to do that. For gray images you can threshold on the gray scale itself. *Example code:*

import cv2
import numpy as np
from matplotlib import pyplot as plt

# Read the grayscale image
img = cv2.imread('your_grayscale_image.jpg', cv2.IMREAD_GRAYSCALE)

# Apply different thresholding types
ret, thresh1 = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
ret, thresh2 = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY_INV)
ret, thresh3 = cv2.threshold(img, 127, 255, cv2.THRESH_TRUNC)
ret, thresh4 = cv2.threshold(img, 127, 255, cv2.THRESH_TOZERO)
ret, thresh5 = cv2.threshold(img, 127, 255, cv2.THRESH_TOZERO_INV)

# Display the results
titles = ['Original Image', 'BINARY', 'BINARY_INV', 'TRUNC', 'TOZERO', 'TOZERO_INV']
images = [img, thresh1, thresh2, thresh3, thresh4, thresh5]
for i in range(6):
    plt.subplot(2, 3, i + 1)
    plt.imshow(images[i], 'gray', vmin=0, vmax=255)
    plt.title(titles[i])
    plt.xticks([]), plt.yticks([])
plt.show()
can i get this code?
Sure, the link is there in the video description.
Thanks! Very helpful. Are room keys allowed?
Yes, of course: anything that's not electronic or Bluetooth-enabled.
Good One ! Thanks.
Hi AdiTosh, I think sorting is unnecessary for this problem.
Hi @acrowfliedover, thank you for engaging with my video! I appreciate your thoughtful question. Let's discuss whether sorting is necessary for this problem. While sorting can indeed simplify certain problems, it's essential to consider the specific context. In some cases, we can achieve the solution without sorting. Here's how I think about it:
*Problem Context:* First, let's understand the problem requirements. If the problem involves counting permutations (rather than combinations), sorting might not be crucial. Combinations focus on selecting items without regard to their order, whereas permutations consider order.
*Example:* The video shows the impact of sorting from 4:30 to 6:30.
*Efficiency:* Sorting can be computationally expensive, especially for large datasets. If we can avoid it without compromising correctness, that's a win!
*Trade-offs:* However, there are trade-offs. Sorting might simplify the problem conceptually, making it easier to reason about, and it can lead to more efficient algorithms in some cases.
In summary, I believe sorting truly affects the outcome for this particular problem. If you have further insights or disagree, I'd love to hear them!
Bro, this is my project. Can I directly run the code in Colab, or will any errors come up?
Anyone help me to run the code, bro.
Yeah bro, you should be able to directly run the code, just follow along with the video, it should be fine. You will not get any error.
@@AdiTOSH image is not downloading broo
That's the problem
@@My-bself I see, that should not be happening. It does download for me, can you try this link in that case, drive.google.com/drive/folders/1e5u38BG6lBjUm24uRb4WS3lmms0r6UtM?usp=share_link
Hi... thanks for the video, but can I use it for a set of 200 images or more? And one more thing: when both the original image and the super-resolution image look similar, what is the need for super resolution?
Yes, absolutely. You can use it for a set of 200 or more, given that's your testing set. For training, we need a much larger set, and we are using pre-trained weights to avoid the high GPU consumption that training typically requires.

The SRCNN model is a basic model with just 3 neural-network layers. In the world of image super resolution, there are far deeper networks with a huge number of layers, way more powerful models that produce astonishingly sharp improvements on low-resolution images. This model is usually used to learn the fundamentals and understand how image super resolution works. The intention is not to actually perform super resolution on real images captured by our cameras with huge pixel counts. But for smaller images, on a standard test set, the model performs well even at the visual level.

Even Google phones use image super resolution as post-processing to produce picture quality that would otherwise be impossible without DSLR-style large lenses. But the models they use surpass this basic model, which we are studying as a part of research, by leagues. However, like any technology, the fundamentals stay the same, and without them you cannot comprehend the versions built on top of them. Thus, from a study and research point of view, this is an important model, but it is of little use in real-world super-resolution tasks on camera-captured pictures.

I hope this clears up your concerns, thanks!
@@AdiTOSH ooooho ok , now I got it. thank you.
Hello bro, I need urgent help from you. I created a copy of your code and made some changes so it directly takes a degraded image and works on it, but it keeps crashing or getting errors. We have to show this project on 19 March. Please make the necessary changes in this code. Please help. Here is the link: colab.research.google.com/drive/1-1fbRaFiTOghVN761tm-PLPTUD2TjWES?usp=sharing Or, if you cannot help, can you at least share some other code that works directly on a degraded image, instead of converting an original image to a degraded one?
How do you find those constants??
Hello @DinosaurRex-tw8mn, allow me to help you get clarity on this. We have 8 pairs in total, namely:
(x1,y1), (x2,y2), (x3,y3), (x4,y4) <- initial points
(x1',y1'), (x2',y2'), (x3',y3'), (x4',y4') <- transformed points
The initial points are the coordinates of the trapezoid, which we ourselves have chosen, so they are known values, not variables. They could be anything, for example:
(x1, y1) = (100, 100)
(x2, y2) = (60, 150)
(x3, y3) = (180, 150)
(x4, y4) = (200, 100)
We also know the transformed points, since they are the corners of the screen:
(x1', y1') = (0, 0)
(x2', y2') = (0, 480)
(x3', y3') = (640, 480)
(x4', y4') = (640, 0)
Now, from the bilinear transformation we have the relations:
x1' = c1.x1 + c2.y1 + c3.x1.y1 + c4
x2' = c1.x2 + c2.y2 + c3.x2.y2 + c4
x3' = c1.x3 + c2.y3 + c3.x3.y3 + c4
x4' = c1.x4 + c2.y4 + c3.x4.y4 + c4
Here only c1, c2, c3 and c4 are unknown, as everything else is known. So we have 4 equations and 4 unknowns, and we can solve for the constants c1, c2, c3, c4. (The y' coordinates follow the same form with another set of four constants.) The whole idea is to find the constants from a set of known coordinates, both initial and transformed, and then, once we have the constants, use the same four equations to find the transformed coordinates for any other input coordinates whose transforms we do not know. I hope this helps, thank you for your time, happy learning. :)
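As a quick sketch of the solve step described above (using the example trapezoid and screen corners from this comment, and numpy rather than anything from the video):

```python
import numpy as np

# Example initial (trapezoid) points and their transformed x' values.
src = [(100, 100), (60, 150), (180, 150), (200, 100)]
dst_x = [0, 0, 640, 640]

# Each row encodes one equation:  x' = c1*x + c2*y + c3*x*y + c4
A = np.array([[x, y, x * y, 1.0] for x, y in src])
c = np.linalg.solve(A, np.array(dst_x, dtype=float))

def map_x(x, y):
    # With the constants known, transform any new point's x coordinate.
    return c[0] * x + c[1] * y + c[2] * x * y + c[3]
```

Solving a second 4x4 system against the y' values (0, 480, 480, 0) gives the four constants for the y coordinate in the same way.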
@@AdiTOSH Great Stuff, helped a lot.
The source images link is not working.
Hi @ramakrishnamiryala, the link does not open a webpage; instead it downloads the zip directly through your browser. Can you please check whether your browser is blocking the download, or try a different browser. Thanks.
Firstly, thanks for your video, great work. You already mentioned it at the start of the video, but I still want to ask: are you sure there is no way to use this method to build an autonomous car? Can't we make any arrangement for it? What I'm trying to achieve is getting a rough angle value to send to an Arduino in order to steer my car.
Thank you @omercandemirci8580. Well, the answer is yes, you can get the angle, and I believe it can be used to program an autonomous toy-sized to cycle-sized robotic car, but not an actual one. Here is a resource you can use to get the steering angle from the above code: github.com/georgesung/advanced_lane_detection I hope this helps, thanks for watching, happy learning.
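As a very rough sketch of the kind of angle you could send to an Arduino (my own simplistic heuristic, not the linked repo's method): treat the lateral offset from lane center and a fixed look-ahead distance as a right triangle and take the arctangent.

```python
import math

def steering_angle_deg(offset_m, look_ahead_m=5.0):
    """Heuristic steering angle from lane-center offset.
    offset_m: lateral offset in meters (positive = car is right of center,
    so we steer left, giving a negative angle). look_ahead_m is an assumed
    tuning parameter, not a value from the video."""
    return -math.degrees(math.atan2(offset_m, look_ahead_m))
```

For a real robot you would clamp this to the servo's range and low-pass filter it so the steering does not jitter frame to frame.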
Thanks! Quick question: can I apply perspective transformation to obtain the range or distance of a detection within the selected area?
Welcome @cc-ut7ow! While perspective transformation changes the view, measuring distance is a different problem. Remember that perspective transformation is a non-linear process, and it's essential to understand the context and purpose of your application. If you're interested in measuring distances, consider other techniques like stereo vision or depth estimation.
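To illustrate the stereo-vision route mentioned above: with a calibrated stereo pair, depth follows from pixel disparity as Z = f * B / d. The focal length and baseline below are assumed example values, not calibration data from the video.

```python
def depth_from_disparity(disparity_px, focal_px=700.0, baseline_m=0.12):
    """Depth (meters) from stereo disparity: Z = f * B / d.
    focal_px: focal length in pixels; baseline_m: camera separation in meters.
    Both are hypothetical example values here."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

Note the inverse relationship: far objects produce tiny disparities, which is why depth accuracy degrades with range.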
That is working thx
Welcome :)
Is there any complete code in Google Colab for image super resolution using the FSRCNN, EDSR, or VDSR models? If so, please provide the link.
Hi @Lachu-hi6gg, sure, here is what I am aware of: EDSR, WDSR, and SRGAN: ◦ The GitHub repository github.com/krasserm/super-resolution provides implementations of EDSR, WDSR, and SRGAN for single-image super resolution. ◦ You can load pre-trained weights and apply super resolution to images; check out the code and examples in the repository.
Great video❤ It would be amazing if you made a video on the Wav2Lip architecture: how to customise it and get the best output from the model.
Thank you, I will look into the suggestion.
Thank you for the great video!! I want to make the h5 file myself, so I tried to train the model with the fit function, but it doesn't work. Could you help me?
Most welcome! About creating your own h5 file: I am afraid I could not do it either. I found that Google Colab is not enough to train the SRCNN model, and my computer doesn't have the GPU capacity that training requires either. I am afraid I cannot help you here. Thanks.
Can we do the same using a Raspberry Pi, dump our trained ML model into Raspbian OS, and do the project? Is it possible?
Yes, we can do the same using a Raspberry Pi. But in the approach shared we did not make use of any ML models, so whether we can dump an ML model onto a Raspberry Pi is a different question. Thanks!
@@AdiTOSH ok thank you
@@bharathvarun3407 Most welcome, happy learning!
I think getting the perspective transformation matrix need not be kept in the loop. Once you know the target coordinates, you can easily compute the matrix before the video-reading loop, and include only the warp command inside the while loop. This will make the code more efficient, and you'll be making use of the full potential of the library. Loved the video. Thanks 👍
Agreed. Great observation! Glad you loved it, most welcome.