- Videos: 27
- Views: 24,944
Coderific
Joined 4 Sep 2021
On this channel, you will learn programming solutions to different problems in Python, MATLAB, and PHP. You will also learn basic concepts related to web development. I want to help research students with small tasks so that their big projects do not stop. :)
Remember that every problem has multiple solutions, and you will find one of those solutions here on this channel.
Evaluating Gemini 1.5: Tackling Complex Time-Series Fall Detection with Long Contexts
This video investigates the use of Gemini 1.5-pro, a state-of-the-art generative AI model, for handling complex fall detection tasks in time-series data. Fall detection is a critical problem in healthcare and safety, involving intricate patterns in accelerometer data. By leveraging the long context handling capabilities of Gemini 1.5, we explore both zero-shot and few-shot prompting to classify falls effectively.
Link to dataset: userweb.cs.txstate.edu/~hn12/data/SmartFallDataSet/
Paper of dataset: pmc.ncbi.nlm.nih.gov/articles/PMC6210545/
Link to Notebook: github.com/sanalmgr/gemini-pro-for-fall-detection/tree/main
#prompt #ai #falldetection #timeseriesanalysis #longcontext #largelanguagemod...
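For readers who want to try this themselves, here is a minimal zero-shot prompting sketch using the google.generativeai Python client. It is not the notebook's exact code: the accelerometer values, column names, and window length are illustrative placeholders, and you need your own API key.

import google.generativeai as genai
import pandas as pd

genai.configure(api_key="YOUR_API_KEY")  # placeholder key
model = genai.GenerativeModel("gemini-1.5-pro")

# Hypothetical accelerometer window: a small slice of x/y/z samples
window = pd.DataFrame({
    "x": [0.02, 0.01, -0.85, -1.90, 0.10],
    "y": [0.98, 0.97, 0.40, -0.30, 0.95],
    "z": [0.05, 0.06, 0.90, 1.70, 0.08],
})

prompt = (
    "You are given a window of 3-axis accelerometer readings sampled over time.\n"
    "Classify the window as 'fall' or 'no fall' and answer with a single word.\n\n"
    f"{window.to_csv(index=False)}"
)

response = model.generate_content(prompt)
print(response.text)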
Views: 46
Videos
Machine Learning vs Deep Learning: Explained with a Simple Linear Regression Example!
74 views • 2 months ago
#MachineLearning #DeepLearning #AI #LinearRegression #DataScience #ArtificialIntelligence #MLvsDL #TechExplained #NeuralNetworks #AIExplained #TechEducation #DataScienceTutorial #MLvsDLExplained #AIModels #TechWithExamples This video explains the key differences between machine learning and deep learning with a clear, real-world example! As an example, we use linear regression to show how these...
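To make the comparison concrete, here is a small sketch (not taken from the video) that fits the same toy line with scikit-learn's LinearRegression and with a one-neuron Keras network; the data is made up for illustration.

import numpy as np
from sklearn.linear_model import LinearRegression
from tensorflow import keras

# Toy data: y = 2x + 1 with a little noise (illustrative only)
rng = np.random.default_rng(0)
X = np.arange(0, 10, 0.5).reshape(-1, 1).astype("float32")
y = 2 * X[:, 0] + 1 + rng.normal(0, 0.1, size=X.shape[0])

# "Machine learning" view: least-squares linear regression
ml_model = LinearRegression().fit(X, y)
print("sklearn:", ml_model.coef_[0], ml_model.intercept_)

# "Deep learning" view: the same line learned by a one-neuron network
dl_model = keras.Sequential([keras.layers.Dense(1, input_shape=(1,))])
dl_model.compile(optimizer=keras.optimizers.Adam(0.1), loss="mse")
dl_model.fit(X, y, epochs=200, verbose=0)
w, b = dl_model.layers[0].get_weights()
print("keras:", w[0][0], b[0])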
Addressing AI Concerns: Separating Facts from Fears
61 views • 1 year ago
AI is a tool created by humans, for humans. It is designed to assist and augment our capabilities, not replace or overpower us. In this video/post, I try to address common fears about AI by posing questions from the perspective of a concerned layman, and then providing answers to reassure that AI is a tool designed to assist humans, with humans always retaining control and oversight. Full Post ...
Visual Quality Assessment of Multimedia Based on Machine Learning
142 views • 3 years ago
#EchoesOfScienceandTechnology #Mexico This video is the recorded session of my talk, in which I presented my work at the 7th International Research Forum, "Echoes of Science & Technology", a remote conference held in Mexico. I am truly grateful to the forum for inviting me and giving me the opportunity to motivate the students. Facebook Page of the event: gestionescolarcet1/
How to Extract Bottleneck Features from Pretrained Networks VGG16, Resnet50 and Xception - Python
3.8K views • 3 years ago
#Bottleneck #NeuralNetwork #FeatureExtraction In this video, you will learn how to extract bottleneck features from the pretrained neural networks VGG16, Resnet50 and Xception in Python. You will also learn how to save bottleneck features into .npz NumPy files and load these files back into the program. In this video, the workspace includes Windows 10 and Anaconda with Spyder as the Python editor. In the v...
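A minimal sketch of the idea, assuming an ImageNet-pretrained VGG16 with its classifier head removed and a placeholder image path ("cat.jpg"); this is not the video's exact code.

import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing import image

# VGG16 without its classifier head acts as a fixed feature extractor
model = VGG16(weights="imagenet", include_top=False, pooling="avg")

def extract_features(img_path):
    # Load one image, resize to VGG16's expected input, and preprocess
    img = image.load_img(img_path, target_size=(224, 224))
    x = image.img_to_array(img)
    x = preprocess_input(np.expand_dims(x, axis=0))
    return model.predict(x)[0]  # bottleneck feature vector of shape (512,)

features = extract_features("cat.jpg")  # placeholder path

# Save and reload the bottleneck features as a .npz file
np.savez("bottleneck_features.npz", features=features)
loaded = np.load("bottleneck_features.npz")["features"]
print(loaded.shape)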
How to Generate 32x32 Patches from A Grayscale Image - Python
2.3K views • 3 years ago
#Patches #FeatureExtraction In this video, you will learn how to generate 32x32 patches from a grayscale image in Python. In this video, the workspace includes Windows 10 and Anaconda with Spyder as the Python editor. Input Image Source: Photo by Tomas Ryant from Pexels - www.pexels.com/photo/close-up-photo-of-cat-2870353/
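A minimal sketch of the patching step with OpenCV and NumPy; the input path "cat.jpg" is a placeholder, and this is not necessarily the exact code from the video.

import cv2
import numpy as np

# Read the image as grayscale ("cat.jpg" is a placeholder)
img = cv2.imread("cat.jpg", cv2.IMREAD_GRAYSCALE)

patch_size = 32
# Crop so both dimensions are multiples of the patch size
h = (img.shape[0] // patch_size) * patch_size
w = (img.shape[1] // patch_size) * patch_size
img = img[:h, :w]

# Collect non-overlapping 32x32 patches
patches = []
for row in range(0, h, patch_size):
    for col in range(0, w, patch_size):
        patches.append(img[row:row + patch_size, col:col + patch_size])

patches = np.array(patches)
print(patches.shape)  # (num_patches, 32, 32)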
Create Augmented Data Using Albumentations Library in Python
1.2K views • 3 years ago
#Albumentations #DataAugmentation #FeatureExtraction In this video, you will learn how to generate augmented data using the Albumentations library in Python. In this video, the workspace includes Windows 10 and Anaconda with Spyder as the Python editor. In the video below, you can learn first how to read MNIST images from the TensorFlow datasets: ruclips.net/video/5d3QaDsx67s/видео.html Albumentatio...
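A minimal sketch of an Albumentations pipeline on a single image; the transforms and the placeholder path "cat.jpg" are illustrative choices, not necessarily the ones used in the video.

import albumentations as A
import cv2

# Placeholder input image
img = cv2.imread("cat.jpg")

# A small example pipeline; pick transforms that suit your data
transform = A.Compose([
    A.HorizontalFlip(p=0.5),
    A.Rotate(limit=30, p=0.7),
    A.RandomBrightnessContrast(p=0.5),
])

# Albumentations works on NumPy arrays and returns a dict
augmented = transform(image=img)["image"]
cv2.imwrite("augmented.jpg", augmented)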
Optical Flow Maps Using OpenCV in Python
844 views • 3 years ago
#OpticalFlow #OpenCV #Python #FeatureExtraction #VideoProcessing In this video, you will learn how to extract optical flow maps from an .mp4 video using the OpenCV library in Python. You will also learn how to read and write video using the skvideo.io library. In this video, the workspace includes Windows 10 and Anaconda with Spyder as the Python editor. In the videos below, you can learn first how to read ...
Curve Fitting Plots in Python
5K views • 3 years ago
#CurveFitting #Scipy #Python #DataAnalysis #DataVisualization In this video, you will learn how to analyse data using curve fitting plots from the SciPy library in Python. In this video, the workspace includes Windows 10 and Anaconda with Spyder as the Python editor. Matplotlib Markers: matplotlib.org/stable/api/markers_api.html matplotlib.pyplot.scatter: matplotlib.org/stable/api/_as_gen/matplotlib.pypl...
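A minimal curve-fitting sketch with scipy.optimize.curve_fit on made-up quadratic data (not the video's exact example).

import numpy as np
import matplotlib.pyplot as plt
from scipy.optimize import curve_fit

# Made-up noisy data following roughly y = 2x^2 + 3
x = np.linspace(0, 5, 30)
y = 2 * x**2 + 3 + np.random.normal(0, 2, size=x.size)

# Model to fit
def quadratic(x, a, b):
    return a * x**2 + b

popt, pcov = curve_fit(quadratic, x, y)

# Plot the raw data and the fitted curve
plt.scatter(x, y, marker="o", label="data")
plt.plot(x, quadratic(x, *popt), "r-", label=f"fit: a={popt[0]:.2f}, b={popt[1]:.2f}")
plt.legend()
plt.show()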
Data Analysis Using Violin Plots and Bar Plots of Seaborn Library - Python
66 views • 3 years ago
#seaborn #ViolinPlot #BoxPlot #Python #DataAnalysis #DataVisualization In this video, you will learn how to analyze data using violin plots and box plots from the Seaborn library in Python. In this video, the workspace includes Windows 10 and Anaconda with Spyder as the Python editor. Seaborn Python: seaborn.pydata.org/
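A minimal sketch using Seaborn's built-in "tips" dataset, an illustrative dataset rather than the one from the video.

import seaborn as sns
import matplotlib.pyplot as plt

# Seaborn ships the "tips" example dataset
tips = sns.load_dataset("tips")

fig, axes = plt.subplots(1, 2, figsize=(10, 4))

# Distribution of the bill per day as a violin plot and a box plot
sns.violinplot(data=tips, x="day", y="total_bill", ax=axes[0])
sns.boxplot(data=tips, x="day", y="total_bill", ax=axes[1])

plt.tight_layout()
plt.show()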
Classification Evaluation Metrics of Sklearn: AUC-ROC, Confusion Matrix and Classification Report
299 views • 3 years ago
#auc #roc #ConfusionMatrix #ClassificationReport #ClassificationEvaluation #Python #DataAnalysis #DataVisualization In this video, you will learn how a trained classification model is evaluated. First, you will see some very basic textual evaluation of the model. Then, you will learn to code and evaluate a classification model using AUC-ROC, the confusion matrix and the classification report of the Sklearn l...
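A minimal sketch of the three metrics on a toy logistic-regression classifier; the data and model are illustrative, not the ones from the video.

from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, confusion_matrix, classification_report

# Toy binary classification problem
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = clf.predict(X_test)
y_prob = clf.predict_proba(X_test)[:, 1]

print("AUC-ROC:", roc_auc_score(y_test, y_prob))
print("Confusion matrix:\n", confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))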
Regression or Classification - An Experiment on MNIST Dataset - Python | Part 01
217 views • 3 years ago
#pythonprogramming #LearnProgramming #deeplearning In this video, I have used the MNIST dataset and written regression and classification models to decide between regression and classification. In regression, the model predicts a scalar quantity for each MNIST input image, while in classification, the model predicts labels. To process the dataset in classification, I have converted the scalar quantity of...
GridSearchCV - Tune Hyperparameters in Classification on MNIST Dataset - Python | Part 02
293 views • 3 years ago
#pythonprogramming #LearnProgramming #DeepLearning In this video, you will learn how we can use the GridSearchCV function to search for the best training parameters based on the best mean scores. Before you watch this video, I recommend watching the video below first: An Experiment on MNIST Dataset To Decide Between Regression and Classification - Python | Part 01 ruclips.net/video/f5cv5dTywCM/видео.html M...
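A minimal GridSearchCV sketch on scikit-learn's small digits dataset with an SVM; the estimator and parameter grid are illustrative assumptions, not necessarily what the video uses.

from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Small digits dataset as a stand-in for MNIST
X, y = load_digits(return_X_y=True)

# Candidate hyperparameters to search over
param_grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.01, 0.001]}

grid = GridSearchCV(SVC(), param_grid, cv=3, scoring="accuracy")
grid.fit(X, y)

print("Best mean score:", grid.best_score_)
print("Best parameters:", grid.best_params_)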
Prepare Features and Save in CSV File - Python | Part 01
985 views • 3 years ago
In this video, you will learn how to prepare features and save them in a CSV file in Python. In this video, the workspace includes Windows 10 and Anaconda with Spyder as the Python editor.
Prepare Features and Save in CSV File - Python | Part 02
806 views • 3 years ago
In this video, you will learn how to extract principal component features of MNIST images and save these features into a CSV file. Before you watch this video, I recommend watching the video below first: 1: Load and Read MNIST Images in Python ruclips.net/video/5d3QaDsx67s/видео.html In this video, the workspace includes Windows 10 and Anaconda with Spyder as the Python editor. MNIST Dataset from Keras: k...
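A minimal sketch of the idea, assuming PCA from scikit-learn and 50 components (an arbitrary choice for illustration); this is not the video's exact code.

import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from tensorflow.keras.datasets import mnist

# Load MNIST and flatten each 28x28 image into a 784-dim vector
(x_train, y_train), _ = mnist.load_data()
x_flat = x_train.reshape(len(x_train), -1).astype("float32") / 255.0

# Keep the first 50 principal components
pca = PCA(n_components=50)
features = pca.fit_transform(x_flat)

# Store the features plus the label in one CSV file
df = pd.DataFrame(features, columns=[f"pc{i}" for i in range(50)])
df["label"] = y_train
df.to_csv("mnist_pca_features.csv", index=False)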
Save a trained model using ModelCheckpoint in Keras - Python
560 views • 3 years ago
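A minimal ModelCheckpoint sketch; the small dense model, the file name "best_model.h5", and the monitored metric are illustrative assumptions rather than the video's exact setup.

from tensorflow import keras
from tensorflow.keras.datasets import mnist

(x_train, y_train), _ = mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

model = keras.Sequential([
    keras.layers.Dense(128, activation="relu", input_shape=(784,)),
    keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])

# Save the best model (by validation accuracy) seen during training
checkpoint = keras.callbacks.ModelCheckpoint(
    "best_model.h5",
    monitor="val_accuracy",
    save_best_only=True,
    verbose=1,
)

model.fit(x_train, y_train, validation_split=0.1, epochs=5, callbacks=[checkpoint])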
Train a Deep Autoencoder Network on MNIST dataset in Keras and Record duration of Training - Python
66 views • 3 years ago
Write a Deep Autoencoder Network in Keras - Python
59 views • 3 years ago
Read MNIST Images Based on Wave and Zigzag Orders in Python
160 views • 3 years ago
Load and Read MNIST Images in Python
5K views • 3 years ago
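A minimal sketch using the Keras-bundled MNIST loader (an assumption; the video may load the data differently).

import matplotlib.pyplot as plt
from tensorflow.keras.datasets import mnist

# Download (on first use) and load the MNIST digits
(x_train, y_train), (x_test, y_test) = mnist.load_data()
print(x_train.shape, y_train.shape)  # (60000, 28, 28) (60000,)

# Show the first training image with its label
plt.imshow(x_train[0], cmap="gray")
plt.title(f"label: {y_train[0]}")
plt.show()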
Access an array in zig-zag order and write the output in .mat file in Python
88 views • 3 years ago
Access an array in wave order and write the output in .mat file in Python
123 views • 3 years ago
How to visualize output filters from a pretrained model VGG16 in Keras
412 views • 3 years ago
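A minimal sketch of visualizing the feature maps produced by one VGG16 convolutional layer; the layer name "block1_conv1" and the placeholder image "cat.jpg" are illustrative choices, not necessarily the video's.

import matplotlib.pyplot as plt
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing import image
from tensorflow.keras.models import Model

base = VGG16(weights="imagenet", include_top=False)

# Model that returns the activations of one convolutional layer
layer_name = "block1_conv1"
feature_model = Model(inputs=base.input, outputs=base.get_layer(layer_name).output)

# "cat.jpg" is a placeholder image path
img = image.load_img("cat.jpg", target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
maps = feature_model.predict(x)[0]  # (224, 224, 64) for block1_conv1

# Plot the first 16 filter outputs
fig, axes = plt.subplots(4, 4, figsize=(8, 8))
for i, ax in enumerate(axes.flat):
    ax.imshow(maps[..., i], cmap="viridis")
    ax.axis("off")
plt.show()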
How to convert input shapes between different layers in Keras - Python
1K views • 3 years ago
How to write a downsampled video in Python - Part03
66 views • 3 years ago
Downsampling - How to select a specific number of frames from a video in Python - Part02
149 views • 3 years ago
How to read .mp4 video and write frame images using skvideo.io library in Python - Part01
528 views • 3 years ago
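A minimal sketch of reading an .mp4 with skvideo.io and writing its frames as images; the paths "input.mp4" and "frames" are placeholders, and this is not necessarily the video's exact code.

import os
import cv2
import skvideo.io

# Placeholder paths
video_path = "input.mp4"
frames_dir = "frames"
os.makedirs(frames_dir, exist_ok=True)

# Read the whole video into an array of shape (num_frames, height, width, 3)
video = skvideo.io.vread(video_path)
print("video shape:", video.shape)

# Write each frame as a PNG; skvideo returns RGB, OpenCV expects BGR
for i, frame in enumerate(video):
    bgr = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)
    cv2.imwrite(os.path.join(frames_dir, f"frame_{i:04d}.png"), bgr)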
While reshaping a model, how do we determine which index holds the originality?
When reshaping a model, the "originality" of an index depends on the type of data and what needs to be preserved. For example, in image data (shape: (height, width, channels)), you might want to keep the spatial relationships between pixels, so you would carefully decide how height, width, and channels are handled. In sequence data (e.g., time-series with shape (time_steps, features)), preserving the time step index ensures the sequence order is maintained. In my code, the Reshape layer converts spatial data into a sequence by collapsing the spatial dimensions (height, width) into -1 while keeping the last dimension (channels). The LSTM layer processes data sequentially, so the index holding the "originality" depends on whether you're working with images, sequences, or other types of data.
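A toy sketch of that Reshape-then-LSTM pattern; the shapes are illustrative, not the exact model from the video.

from tensorflow import keras
from tensorflow.keras import layers

# Illustrative shapes only: a 28x28 single-channel "image" becomes a
# 784-step sequence with 8 features per step before the LSTM.
model = keras.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Conv2D(8, 3, padding="same", activation="relu"),  # (28, 28, 8)
    layers.Reshape((-1, 8)),   # collapse height*width into time steps, keep channels
    layers.LSTM(32),           # processes the 784-step sequence
    layers.Dense(10, activation="softmax"),
])
model.summary()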
😅
Can we do this using PyTorch?
Yes, absolutely.
concise, unlike many other youtube tutorials
Thank you so much. I am glad I could help.
How to use these features for object detection or other processing? Please share the next video link.
Thank you for reaching out. Very soon, I will upload a video regarding your comment.
that was a great presentation
What if I have 2 input variables and 1 output variable? How can I make a prediction of these constants?
Does the following help? (Note: curve_fit passes the independent data as a single argument, so the two inputs are packed into one tuple and unpacked inside the fitting function.)

import numpy as np
from scipy.optimize import curve_fit

# Sample data (replace with your actual data)
x1 = np.array([1, 2, 3, 4, 5])
x2 = np.array([2, 3, 5, 7, 1])
y = np.array([4, 9, 25, 49, 9])

# Define your fitting function (replace with your desired function)
def func(X, a, b, c):
    x1, x2 = X
    return a * x1**2 + b * x2**2 + c

# Perform curve fitting
popt, pcov = curve_fit(func, (x1, x2), y)

# Print the fitted constants
print("Fitted constants:")
print("a =", popt[0])
print("b =", popt[1])
print("c =", popt[2])

# Generate data for prediction (adjust the range as needed)
x1_pred = np.linspace(0, 6, 100)  # 100 points between 0 and 6
x2_pred = np.linspace(0, 2, 100)  # 100 points between 0 and 2

# Predict y values using the fitted constants
y_pred = func((x1_pred, x2_pred), *popt)

# Plot the data and the fitted curve (plotting code not shown here)
thank you so much, the presentation is so helpful and clear, it helps me a lot.
Glad it was helpful!
Hey... This video was very helpful. Can you please make a video on datasets, describing all the methods of using a dataset, such as from an online source, from your storage, and especially the ones in CSV format?
Or at least provide a better source to learn about this.
Great video for making image patches
This code was helpful for me. Thank you, Devi Ji.
Nice Presentation!
Glad you liked it!
Thank you so much. This helped me a lot😊
I'm so glad!
Hi, I have an open question: if I augment my image data with this library, is there a way to also apply the transforms to the labels? For example, if I do a rotation or a flip, I also have to modify the bounding boxes. Can this library do this?
Hi, thank you for reaching out. I suggest reading the documentation of the library to find the answer.
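For what it's worth, Albumentations can transform bounding boxes together with the image when you pass bbox_params; a minimal sketch, assuming Pascal VOC box format and a placeholder image:

import albumentations as A
import cv2

img = cv2.imread("cat.jpg")  # placeholder image
bboxes = [[50, 60, 200, 220]]  # [x_min, y_min, x_max, y_max] in pascal_voc format
labels = ["cat"]

transform = A.Compose(
    [A.HorizontalFlip(p=1.0), A.Rotate(limit=15, p=1.0)],
    bbox_params=A.BboxParams(format="pascal_voc", label_fields=["class_labels"]),
)

out = transform(image=img, bboxes=bboxes, class_labels=labels)
print(out["bboxes"])  # boxes transformed consistently with the image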
Please, ma'am, show how to save each image patch in a folder.
I will make a video on this topic very soon. Stay tuned!
Can you share your notebook?
I need this code. Can you please share it?
Sure. I will launch my GitHub repository very soon.
Thanks ! for the details explanation. Its really useful
Glad it helped!
import os
import cv2
import numpy as np
import skvideo
skvideo.setFFmpegPath('anaconda3/envs/test/bin/')
import skvideo.io

print("hi")

# path to the input video
path2video = (r"C:\Users\alif\Desktop\Optical_Flow\VIDEOS/Tudo_bem_3.mp4")
outputPath = (r"C:\Users\alif\Desktop\Optical_Flow\OptFlowMaps")

# read the video to create the frames array
videogen = skvideo.io.vread(path2video)
print(f'shape of video {path2video} = {videogen.shape}')

# reading the first frame as the first and previous frame
# new image (array) filled with zeros in the same dimensions as the input frame (3 channels)
frame_ind = 0
hsv = np.zeros_like(videogen[frame_ind])

# convert the input frame from RGB to gray
prevF = cv2.cvtColor(videogen[frame_ind], cv2.COLOR_RGB2GRAY)

print("Reading frames...")

# loop over the frames of videogen
for index in range(len(videogen)):
    # print which frame is being processed in this iteration
    print(f"{index}/{len(videogen)}")

    # convert the input frame from RGB to gray
    nextF = cv2.cvtColor(videogen[index], cv2.COLOR_RGB2GRAY)

    # keep the next and previous frames the same size; shape[1] is the width, shape[0] is the height
    dim = (prevF.shape[1], prevF.shape[0])
    nextF = cv2.resize(nextF, dim)

    # compute the optical flow map by the Farneback method; returns a 2D vector field the size of the input frame
    # cv2.calcOpticalFlowFarneback(prev, next, flow, pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags)
    flow = cv2.calcOpticalFlowFarneback(prevF,  # prev
                                        nextF,  # next
                                        None,   # flow
                                        0.5,    # pyr_scale
                                        3,      # levels
                                        15,     # winsize
                                        3,      # iterations
                                        5,      # poly_n
                                        1.2,    # poly_sigma
                                        0)      # flags

    # compute the magnitude and angle of the 2D vectors
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])

    # set the hue channel according to the direction of the optical flow
    hsv[..., 0] = (((ang * 180) / (np.pi)) / 2)

    # set the value channel according to the normalized magnitude of the optical flow
    hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)

    # convert the HSV image to BGR format
    bgr = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

    # the optical flow map is saved with the frame index as its name
    dest = os.path.join(outputPath, str(index) + ".png")
    cv2.imwrite(dest, bgr)

    # the current frame becomes the previous frame and the loop continues with the next frame
    prevF = nextF
Very useful video. If you could also show how to save each image patch in a folder, that would be great.
Excellent video! It helped me a lot :3
Glad it helped!
It helps me a lot in my project
I am glad it helped you. This is what I want, to help research students do small tasks so that their big projects do not stop. :)
How to combine or concatenate these features?
I suggest using these features in the network as a separate layer, and then merging them. I will make a short video on that very soon.
Excellent video, thank you very much; I was able to solve several problems with "RIFE".
I know the pixel width in meters. Now I have a sequence of point-symmetrical images. I want to measure a speed field. #skyimager
Nice work. Would you please share your source code?
Very soon, I will share the code on github.
I need code
Will you be able to upload the code?
I have no plan to upload the code anytime soon, but I am thinking about it.
So, stay tuned. :)
I need code
Thank you for the comment. Every line of code is in the video; you can copy it line by line into your Python editor.
Hi, kindly share your email. I am training a VGG16 model but getting low validation accuracy.
Thank you for reaching out. You can find the email address in the About section.
Great work
Thank you! Cheers!