Can you please provide the link to the dataset?
Can you please share the details about the dataset?
Excellent explanation... thank you so much for such a nice video. Keep it up 👍👏
Bro, please let me know how you preprocessed the data into .h5 format.
Can I use this algorithm for prostate cancer segmentation?
Please go a little slower and explain the code step by step; it will be better for your channel. You know the material, but listeners don't know it so well. Anyway, well done, best of luck!
Please make a video elaborating on the coding.
We won't give it.
How did you preprocess the dataset into patches and save it in h5py format??
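Not necessarily the author's actual preprocessing, but one common way to do it is to cut each 4D volume into fixed-size sub-volumes with NumPy; the patch size and volume shape below are made up for illustration:

```python
import numpy as np

def extract_patches(volume, patch_shape=(160, 160, 16)):
    """Cut a (channels, X, Y, Z) volume into non-overlapping
    (channels, *patch_shape) sub-volumes."""
    c, X, Y, Z = volume.shape
    px, py, pz = patch_shape
    patches = []
    for x in range(0, X - px + 1, px):
        for y in range(0, Y - py + 1, py):
            for z in range(0, Z - pz + 1, pz):
                patches.append(volume[:, x:x + px, y:y + py, z:z + pz])
    return np.stack(patches)

# Toy volume standing in for one case: 4 modalities, 240x240x155 voxels.
vol = np.zeros((4, 240, 240, 155), dtype=np.float32)
patches = extract_patches(vol)
print(patches.shape)  # (9, 4, 160, 160, 16)
```

Each patch array could then be written out with h5py via `h5py.File("patches.h5", "w")` and `create_dataset`, which matches the .h5 files mentioned elsewhere in this thread.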
Sir, kindly share the code and dataset for understanding.
Thank you very much for sharing the video.
awesome explanation of U-net!! very informative video. Thank you very much.
Can the same approach work for disease detection in crops?
Salam, dear Waqas. Can you please share the Brain Tumor Segmentation code? It does not download from GitHub.
github.com/wiqaaas/youtube/blob/master/Deep_Learning_Using_Tensorflow/Image_Segmentation_using_U-Net/Image%20Segmentation%20using%20U-Net%20for%20MRI%20(3D%20Images).ipynb
@@medy2366 The config.json file or the dataset? Any idea?
Can we implement this code in Colab?
Please credit Coursera AI for Medical Diagnosis for the material, thanks.
Yes, without any doubt. In the introduction lecture I credited Coursera for all of my learning material and all of the videos in the playlist. I will add it in the description of each video too.
Hi... I used this code in Google Colab and got a resource-exhausted error. If I change the runtime from GPU to TPU in Colab, that error goes away, but when executing model.fit I get a "Conv3DCustomBackpropFilterOp only supports NHWC" data-format error. How can I solve this? Kindly help me.
Can we implement this code in Colab?
Hi, I am not able to download the code. Can you please share it?
@@abeerobeid354 Yes
@@anonymouss_96 Share your email ID.
@@gloryprecious1133 You didn't share it. Please share.
Also, at 6:02 you were explaining the x, y, z labels. Are labels saved as 3D or 4D arrays?
So X.shape is (4, 160, 160, 16), whereas y.shape is (3, 160, 160, 16). Why is the first dimension 4 in one case and 3 in the other?
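If this is the BraTS-style setup from the Coursera course, the first axis is the channel axis: X stacks the 4 MRI modalities, while y one-hot encodes 3 tumor sub-region classes, with background dropped (it is implied when all three label channels are 0). A toy sketch of that encoding, with made-up class IDs:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy integer label map: 0 = background, 1..3 = tumor sub-regions.
label_map = rng.integers(0, 4, size=(160, 160, 16))

# One-hot encode classes 1..3 only; background voxels become all-zeros.
y = np.stack([(label_map == c).astype(np.float32) for c in (1, 2, 3)])
print(y.shape)  # (3, 160, 160, 16)

# X stacks the 4 modality volumes along the same leading channel axis.
X = np.stack([np.zeros((160, 160, 16), dtype=np.float32) for _ in range(4)])
print(X.shape)  # (4, 160, 160, 16)
```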
Dear Wiqaas,
Greetings!!
Thank you very much for sharing the video. Outstanding work. Could you please let me know how I can perform Frank's sign segmentation and detection?
You talk too fast; we are not robots that have to understand everything. :( Indian videos are all like that. That is the reason why I skip Indian YouTubers.
You're talking too fast; I can't understand anything.
Worst explanation I have seen, bro. No logic; you just copied it from another site, man.
Can you run the model training? I followed along but got an error in Google Colab.
Can you please provide the config.json?
I am not able to proceed because I am stuck there.
Has anyone here applied FastSurfer CNN to an MRI segmentation task in the Google Colab environment? I need some help, please 🙏🙏
Sir, your explanation is awesome. Could you please share the code with me?
Quality, standout work, but you are doing it in a hurry. Please explain slowly.
Nice work. Which dataset are you using?
Quality stuff. Keep it up!
What is the config.json file?
I have a label file (NRRD format) and an original file (NRRD format) of a CT scan for every patient in my dataset. How do I proceed with cancer detection using deep learning? That is, in the train-test split, which file should I assign to X_train and which to Y_train? Can anyone please help?
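The original (image) files become X and the label (mask) files become y; the split is done over patients so each image stays paired with its mask. A sketch with NumPy stand-ins for the NRRD volumes (with the `pynrrd` package you would load each file via `nrrd.read(path)` instead):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for per-patient volumes loaded from .nrrd files.
images = [rng.random((64, 64, 32)) for _ in range(10)]          # original CT scans -> X
labels = [rng.integers(0, 2, (64, 64, 32)) for _ in range(10)]  # label masks -> y

# Shuffle patient indices, then split 80/20 by patient.
idx = rng.permutation(len(images))
train_idx, test_idx = idx[:8], idx[8:]

X_train = np.stack([images[i] for i in train_idx])
y_train = np.stack([labels[i] for i in train_idx])
X_test = np.stack([images[i] for i in test_idx])
y_test = np.stack([labels[i] for i in test_idx])
print(X_train.shape, y_train.shape)  # (8, 64, 64, 32) (8, 64, 64, 32)
```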
Isn't this semantic segmentation? Usually, in detection, we need to draw a bounding box.
Thanks. What is the exact BraTS dataset you are using? I mean, which year of BraTS data? How did you get a 4D array after loading? I have 4 separate nii.gz files for the 4 modalities (flair, t1, t1w, t2) and then prepared 3D arrays manually, leaving out one modality. Or maybe in your BraTS data all MRI images are saved as (x, y, z, sequence)?
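For what it's worth, if each modality comes as its own nii.gz file, the 4D array can be built by loading the four co-registered volumes (with nibabel that would be `nib.load(path).get_fdata()` per file) and stacking them along a new axis. NumPy stand-ins here; the 240x240x155 shape is the usual BraTS volume size:

```python
import numpy as np

# Stand-ins for the four co-registered modality volumes of one patient.
flair = np.zeros((240, 240, 155))
t1    = np.zeros((240, 240, 155))
t1w   = np.zeros((240, 240, 155))
t2    = np.zeros((240, 240, 155))

# Channels-first layout: (4, x, y, z).
X = np.stack([flair, t1, t1w, t2], axis=0)
print(X.shape)  # (4, 240, 240, 155)

# Or channels-last: (x, y, z, sequence), as the comment above guesses.
X_last = np.stack([flair, t1, t1w, t2], axis=-1)
print(X_last.shape)  # (240, 240, 155, 4)
```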
What are you loading in the config.json file?
yes
@@guitarec4105 Found it: it's a dict with the names of the .h5 data. I saved the data and labels into a .h5 file and a dict with the names, and then it ran just fine. Thanks @wiqaaas for this, it was so useful. Please keep doing this.
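Going by this thread, the config.json seems to be just a dict mapping split names to the preprocessed .h5 files. A guess at its shape (the keys and filenames below are made up, not taken from the notebook):

```python
import json

# Hypothetical config: names of the preprocessed .h5 files per split.
config = {
    "train": ["patient_001.h5", "patient_002.h5"],
    "valid": ["patient_003.h5"],
}

# Write it out the way a config.json would be produced...
with open("config.json", "w") as f:
    json.dump(config, f, indent=2)

# ...and read it back the way the training code would.
with open("config.json") as f:
    loaded = json.load(f)
print(loaded["train"])  # ['patient_001.h5', 'patient_002.h5']
```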
@@maximilianomedia Can we get in contact? I want to know more about how you did it.
For sure! Send me an email at maxchavezvargas@gmail.com.
@@maximilianomedia Sir, I contacted you. Please help.
I can't find the code at the given link. Can anyone help me out?
@@imtiazahmadkhan833 github.com/wiqaaas/youtube/blob/master/Deep_Learning_Using_Tensorflow/Image_Segmentation_using_U-Net/Image%20Segmentation%20using%20U-Net%20for%20MRI%20(3D%20Images).ipynb
good
Very nice explanation; the concepts were covered well.
InvalidArgumentError: Conv3DBackpropInputOpV2 only supports NDHWC on the CPU.
[[node gradient_tape/model/conv3d_14/Conv3D/Conv3DBackpropInputV2 (defined at :11) ]] [Op:__inference_train_function_6120]
Can you give me a solution to this problem, please?
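The usual cause of this error is channels-first data on a device that only supports channels-last layout (NDHWC = batch, depth, height, width, channels). One fix, sketched with NumPy, is to transpose each batch before feeding it to model.fit, and to build the model with channels-last data (the Keras default, `data_format='channels_last'`):

```python
import numpy as np

# A channels-first batch: (batch, channels, depth, height, width) = NCDHW.
batch_ncdhw = np.zeros((2, 4, 16, 160, 160), dtype=np.float32)

# Move the channel axis to the end: NCDHW -> NDHWC.
batch_ndhwc = np.transpose(batch_ncdhw, (0, 2, 3, 4, 1))
print(batch_ndhwc.shape)  # (2, 16, 160, 160, 4)
```

The same transpose has to be applied to the labels, and the Conv3D layers must not be configured with `data_format='channels_first'` anywhere.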