I have been into YOLO for 1.5 years; it's really interesting to work with such awesome AI advancements.
Thank you, Arohi ma'am, for bringing this video.
You're welcome!
Excellent work
Really Insightful Ma'am
Thank you!
Excellent content
Glad you think so!
Great job🥰
I am working on my graduate project about detecting people in video and classifying them into two categories (kids/adults). I am using a CCTV camera in a public area. Can you give me some advice, please?
Email me at aarohisingla1987@gmail.com
Thank you for such an informative video. I have a question about keeping multiple versions of CUDA. In this video you showed that you have multiple versions of CUDA on your system. Is it possible to install multiple versions of CUDA on Ubuntu? If possible, you may consider making a video on installing and using multiple versions of CUDA on Windows as well as Ubuntu.
Thanking you in anticipation.🙏🙏
Yes, it is possible.
Follow these steps:
1- Install CUDA versions: Download the .run or .deb files for each CUDA version you need from NVIDIA's website. Install each version in its own directory (e.g., /usr/local/cuda-11.2, /usr/local/cuda-11.4, etc.).
2- Set environment variables: Configure your environment to use a specific version by setting the CUDA_HOME, PATH, and LD_LIBRARY_PATH variables to point at that version. You can switch between versions by changing these variables.
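The two steps above can be sketched as a small shell snippet. The version number and install path here are examples; use whatever versions you actually installed:

```shell
# Point the environment at one installed CUDA toolkit (example: 11.2).
# Change CUDA_VERSION to switch to another installed version, e.g. 11.4.
CUDA_VERSION=11.2
export CUDA_HOME=/usr/local/cuda-${CUDA_VERSION}
export PATH=${CUDA_HOME}/bin:${PATH}
export LD_LIBRARY_PATH=${CUDA_HOME}/lib64:${LD_LIBRARY_PATH}
```

You can put these lines in a small script per version and `source` it in the shell where you need that toolkit, so switching versions is just sourcing a different file.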
@@CodeWithAarohi Thank you...🙏🙏. Your videos are very informative.
Thanks for the video, ma'am.
Can you please confirm whether there is any document or paper describing the architecture of YOLOv11, and also the full code/model files for YOLOv11?
Thanks
This is the only information I found about YOLO11: docs.ultralytics.com/models/yolo11/
Hi! Thank you for this video! I am working on my thesis, where I am planning to use YOLO11 to label drivers as drowsy, awake, or distracted, and to detect if they are covering the camera, for safety reasons. Can you help me? I just learned coding 2 weeks ago.
What kind of help? If you have any queries you can ask. I will try to help.
Which YOLO should I use to detect swimming pools?
If you want the newest and possibly best-performing option, go with YOLO11. If you’d rather have a well-tested version with lots of community help, YOLOv8 is a solid choice. Both will work well for detecting swimming pools, but YOLO11 might be a bit better for tougher situations.
Thank you :)
You're welcome!
Copy-move forgery detection in video using machine learning.
Use the CASIA dataset.
Model: YOLO.
Please make a video.
Yes please
I will try!
@@CodeWithAarohi thank u so much 💕
When can you upload this video?
Please do share a video on this topic.
Ma'am, can you make a video on a Conversational Image Recognition Chatbot? Please, it would be helpful.
I will try.
Ma'am, how do I make a 3D model using 2D images? Is there anything available to do this task?
I have never tried it, but there are options available to perform this task.
Madam, please make a face spoof-detection model, and also show how to run it on Android.
Please make videos on generative AI.
Yes, I will continue that soon.
After 11:41 my camera still turns on and I can't turn it off. How should I turn my camera off with code?
import cv2
from ultralytics import YOLO

# Load the YOLO model
model = YOLO("yolo11n.pt")

# Open the video file (for a webcam, use cv2.VideoCapture(0) instead)
video_path = "path/to/your/video/file.mp4"
cap = cv2.VideoCapture(video_path)

# Loop through the video frames
while cap.isOpened():
    # Read a frame from the video
    success, frame = cap.read()
    if success:
        # Run YOLO inference on the frame
        results = model(frame)
        # Visualize the results on the frame
        annotated_frame = results[0].plot()
        # Display the annotated frame
        cv2.imshow("YOLO Inference", annotated_frame)
        # Break the loop if 'q' is pressed
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    else:
        # Break the loop if the end of the video is reached
        break

# Release the capture (this is what turns the camera off) and close the display window
cap.release()
cv2.destroyAllWindows()