This is why I keep up with YouTube. This scale of quality information is what the makers of the internet envisioned. Keep up the good work.
well said!!
Signed my first contract as a Python programmer last Friday. I learnt it all here: TensorFlow, OpenCV, Django, Flask... Thanks, man. I'll surely pay my gratitude forward in the future.
Good job, congrats buddy!
I just wanna say that I started off with your tutorials to get into machine learning, and boy, have I come a long way since. Your tutorials gave me just what I needed to study on my own and learn these things (also, hey, I ditched Keras/TF for PyTorch; it honestly seems a lot more efficient). Thanks, and congratulations on 1 million!
I like how you're putting out really long videos with a huge amount of information to follow, even though the rate at which you're uploading is slow. It suffices. Thank you so much, dude! You've been my inspiration since I started programming!
Man, you find the perfect topics for tutorials!
Hah, it's always just things I am curious about personally, seems to work out :D
Congrats sentdex on 1 mil! At around 7:15 you divided x_train by 255.0 for the second time btw, so the values were between 0 and 0.000015
I eventually figured it out down the line :P
@@sentdex I noticed at the time that you divided it twice, but I'm shocked at how quickly you guessed the issue after getting that much further into the code. Bravo!
I really like that you explain all of your reasoning; for someone who is not that great at understanding things the first time, this is very helpful.
It's actually amazing to see a tutorial maker making mistakes. That way you can learn what errors you might encounter and how to tackle them. Never thought of it that way.
I'd had this video in my watch-later list since you uploaded it - because I didn't know what autoencoders were, and they sounded pretty hard to understand -
but man, are you good at explaining.
I learned Python from your channel before I went to university, and it made everything super easy for me, since I already knew how to program - thanks to you.
My jaw dropped when you added noise to the image. This is amazing.
Thanks for this video. I would love to see more of these where you explain concepts like autoencoders and transformers visually. Really helpful.
Well done!
For the visualization, you need to use the squeeze() function in Colab:

import tensorflow as tf
import matplotlib.pyplot as plt

# predict works on batches, even for a single image, so we pass a batch of one
# and grab the 0th result
ae_out = autoencoder.predict(X_test[0].reshape(-1, 28, 28, 1))
img = ae_out[0]  # shape (28, 28, 1)
plt.imshow(tf.squeeze(img), cmap="gray")  # squeeze drops the channel dimension
Another awesome aspect of autoencoders: if you take the decoder part and give it a vector whose size matches its input, you now have a generative model; this idea is the basis of variational autoencoders, which you can use to generate images just like GANs.
How do I take out the decoder and use it?
14:28 Correction: "to map input to output"
7:22 it's because x_train is a uint8 numpy array; /= would have to write the float results back into the integer array in place, which numpy refuses (it works fine with regular Python numbers, btw).
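You can see this for yourself with a tiny standalone numpy sketch:

import numpy as np

x = np.arange(5, dtype=np.uint8)  # mnist pixels load as uint8
x = x / 255.0                     # fine: creates a new float array
# x /= 255.0                      # TypeError: float results can't be written back
#                                 # into the uint8 array in place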
Amazing tutorial! Thank you very much for providing such inspiring, high-quality information. Indeed, your machine learning and Python tutorial series helped me a lot when I changed my career towards data science.
I am wondering if you are planning to do a new video about variational autoencoders. That would be awesome to see!
One Million - congratulations! You should get yourself a fancy mug.
Congratulations on 1 million subscribers!
In order to avoid the black dots (missing pixels), just remove ReLU from the last layer of the decoder. It worked for me.
You’re a legend Sentdex! 🙌
Awesome content, thank you for sharing it your way 😃
You are the man! Always have been, always will be
That's very cool indeed. Would love to see you using it on something harder than mnist next.
Crazy magic, Sentdex is a magician!
I'd be so excited if you were to start a series on chess engines. It would be fun to see how you would go about this, even if the engine wouldn't be that strong!
That division by 255 was a tricky guess. I could not understand what's wrong with your code.
However, losses in the ballpark of 1e-7 were a good hint. Thanks for that error. It's always nice to learn from others' errors. Especially when you can watch them at 2x speed.
Thanks for your work.
At 20:43, I'm getting an error that says
"Error when checking input: expected img to have 4 dimensions, but got an array with shape (60000, 28, 28)"
Any suggestions?
I think you have to reshape the image using numpy with the fourth dimension as 1, so the new shape will be (60000, 28, 28, 1). I am no master, but I hope this helps.
@@codingmadeeasy3126 YES! Before fitting the data to the model, I used:
import numpy as np

print(x_train.shape)  # (60000, 28, 28)
x_train = np.asarray(x_train).reshape(60000, 28, 28, 1)
print(x_train.shape)  # (60000, 28, 28, 1)
...and it worked! A bit of a headache to figure out, but I did it! Thanks
@@dt28469 Glad I could help
Hi sentdex,
Your videos are excellent. Appreciate your hard work and the time you take to prepare such wonderful content.
Haven't caught up with your videos lately (dang it, YT), but holy shit, didn't see you hit 1M subs. CONGRATULATIONS!
Thanks :D
Thanks for the high quality content!!!
Congrats on a million subs!!!!!!!!!!!!!!!!!!!!!!!!!!
Really, you are making everything very easy. Thank you very much.
And may I add that I too am trying to kick my fancy cup addiction; good to see you are keeping on the straight and narrow ;-)
@@s4br3 it's a time traveller, one of sentdex's friends
@@s4br3 channel members get videos early, can recommend a membership.
This is gold. Nice video man!
Can someone explain what he means by flattening it beforehand, at 13:50? I didn't get it. Like, what does he mean by doing it before feeding it into the neural network? I thought all the steps were part of the network.
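If it helps, the two options look like this (a minimal sketch; you either flatten the array with numpy before it ever reaches the model, or let a Flatten layer do it as the first step inside the network):

import numpy as np
from tensorflow import keras

img = np.random.rand(28, 28)

# Option 1: flatten beforehand and feed a 784-vector to the network
flat = img.reshape(784)

# Option 2: keep the (28, 28) input and make flattening part of the model
inp = keras.Input(shape=(28, 28))
x = keras.layers.Flatten()(inp)  # same result, but it happens inside the network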
I love your videos,
they are informative and motivational.
This was sooo fun thanks!
Is there any reason why you use ReLU over sigmoid as the output activation function of the decoder? Because you want values between 0.0 and 1.0 as the output. Or maybe use a capped ReLU so the model can easily learn complete black and white.
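For anyone who wants to try it, swapping the output activation is a one-line change (a sketch only; the bottleneck size here is a placeholder, not the exact architecture from the video):

from tensorflow import keras

inp = keras.Input(shape=(64,))  # placeholder bottleneck size
# sigmoid bounds the outputs to (0, 1), matching the scaled pixel values
out = keras.layers.Dense(784, activation="sigmoid")(inp)
decoder_head = keras.Model(inp, out)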
I wonder whether one can construct an artificial visual center using an autoencoder.
Then feed the decoder electronic noise with a component of quantum event randomness and watch it create visuals, with some sort of meaning, related to the thoughts and current life situation of the spectators of the created images.
In a statistically significant way?
27:11 Technically it was done twice on x_train but once on x_test
What would be really cool is to make an autoencoder that encodes to a string of words that can be remembered, and then decodes back into an image.
Given an average active vocabulary of 20,000 words, each word carries about log2(20,000) ≈ 14.3 bits, so 5 words would encode roughly 71 bits - a little over nine 8-bit (0-255) values. Those ~9 bytes could be autoencoded from/to a photo.
If someone is curious how to run the decoder only on a randomly generated seed, here is the code to add after autoencoder.summary() (this assumes the decoder is the last two layers of the model):

from tensorflow import keras

encoded_input = keras.Input(shape=(9,))  # note the comma: shape must be a tuple
deco = autoencoder.layers[-2](encoded_input)
deco = autoencoder.layers[-1](deco)
decoder = keras.Model(encoded_input, deco)
after fitting, try running the decoder only like this:

import numpy as np
import matplotlib.pyplot as plt

img_seed = np.random.rand(9)
new_img = decoder.predict(img_seed.reshape(-1, 9))
plt.imshow(new_img.reshape(28, 28), cmap="gray")
27:00 You can avoid this kind of error in a notebook by renaming instead of reassigning in place, e.g. x_train_scaled = x_train / 255.0
It seems to me that your autoencoder is a generative model: it took a number-2 example, was able to recognize it, and then produced a similar but slightly different example. The numbers you used are from the test data. Thanks for the demo.
If you want to use the autoencoder as a generative model, you can change it to a variational autoencoder, and then it will properly be generative
(might be a dumb comment, I haven't watched the vid)
a video about variational autoencoders would be nice!
Awesome video! Could you please upload a video on the transformer that you were talking about? Thank you very much.
What would happen if you gave it an image that it was not trained to encode/decode? Not just added noise, but a completely different image, like a smiley face? Would the output resemble a digit? Would it resemble a smiley? Would it be random?
Try it and let us know :D
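For anyone who wants to actually try it, something like this would do (a sketch; it assumes the trained autoencoder from the video, and the "smiley" is just a crude hand-drawn ring with two eyes):

import numpy as np
import matplotlib.pyplot as plt

yy, xx = np.mgrid[0:28, 0:28]
ring = (np.abs(np.hypot(xx - 14, yy - 14) - 10) < 1.5).astype(np.float32)
ring[9, 10] = ring[9, 17] = 1.0  # eyes
out = autoencoder.predict(ring.reshape(-1, 28, 28, 1))[0]
plt.imshow(out.reshape(28, 28), cmap="gray")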
wow, this is super cool!
Interesting tutorial; I love to learn by doing. Thanks a lot, Justin Gaethje 😬
Always fascinating 😁😁
This is the person I always wait to see.
Thank you so much! I was ready to abandon MIT 6S191 because of the autoencoder lesson, but you made it clear and (almost) simple. Now we humans go from the 784-feature images to... 1 (a digit). Can a NN discover the concept of a digit by itself? Maybe not [0..9], but an array of numbers? Thanks for your great book and your videos.
Finally that hole is filled with this one; now we want an All About Autoencoders playlist explaining sparse autoencoders, contractive autoencoders and many more...
Lol, just kidding.
Great tutorial, and 49 mins... what a short tutorial ..
1 mil . 🎉🎉🔥
It's kinda cool, but we could actually compress the data into a single value, since it has 255 possible values and there are only 10 classes, though it would need a slightly bigger network. Also love your videos.
Edit: then you could probably explore the latent space, like manually changing the single encoded value to see the network generate numbers on its own (see the sketch below).
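Sweeping such a one-value latent space could look like this (a sketch; it assumes a decoder model with a 1-dimensional input, built the same way as the decoder-only snippets above):

import numpy as np
import matplotlib.pyplot as plt

fig, axes = plt.subplots(1, 10, figsize=(20, 2))
for ax, z in zip(axes, np.linspace(0.0, 1.0, 10)):
    img = decoder.predict(np.array([[z]]))  # decode a single latent value
    ax.imshow(img.reshape(28, 28), cmap="gray")
    ax.axis("off")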
If you have your own dataset, how do you load it? You cannot use the load_data() command from MNIST here to load an Excel or CSV file.
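One way, sketched under the assumption of a hypothetical digits.csv where each row is a label followed by 784 pixel values (the common CSV export of MNIST):

import numpy as np

data = np.loadtxt("digits.csv", delimiter=",", skiprows=1)  # hypothetical file
y = data[:, 0].astype(int)                      # first column: labels
x = data[:, 1:].reshape(-1, 28, 28, 1) / 255.0  # remaining columns: pixels, scaled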
Question: x_train[0].shape = (28, 28), so why would your input layer shape be (28, 28, 1)? This is throwing me an error.
Hey Sentdex, I was just wondering: when I print out ae_out using plt.imshow, I get an error "Invalid shape (28, 28, 1) for image data". Have you got any idea why this would be?
Same issue, any solutions?
try replacing the line
plt.imshow(ae_out, cmap="gray")
with
plt.imshow(ae_out.reshape(28, 28), cmap="gray")
Thank you @sentdex
This was great! and not long at all.
Please make more machine learning tutorials with Keras ~~ Thanks
Overnight thoughts: it struck me that what might be happening in this case might not be as complex as I originally thought.
Thinking bits now: how much data is there in the mnist images? 784 * 8 = 6272 bits,
and they were squashed to 9 * 64 = 576 bits (nine 64-bit floats). Still impressive.
But a simple binary slice would have reduced the image data to 784 bits, and one could argue (as Harrison did) that the mnist image size is overly generous; only a small reduction to 23*23 would drop the bit count below the 576 of the output.
So, are we seeing a simple rearrangement of the input bits (after trimming & slicing) into something that looks like 9 floating-point numbers?
It's possible that at the super-compressed levels, yes, some of that is at play, but autoencoders can compress down far more unique imagery/data than this too.
Certainly something worth peeking into, though!
What does it mean when someone writes, for example, Flatten(something)(encoder_input)? What is the purpose of the right parenthesis part?
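That's the Keras functional API: the first pair of parentheses constructs a layer object, and the second applies it to a tensor. A minimal sketch:

from tensorflow import keras

inp = keras.Input(shape=(28, 28))
flatten_layer = keras.layers.Flatten()  # first parentheses: build the layer
flat = flatten_layer(inp)               # second parentheses: call it on a tensor
# keras.layers.Flatten()(inp) does both steps in one expression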
What is the bit depth of the 9 intermediate values?
Is it possible to overfit to 100 percent accuracy? If we achieved that, we could use it for video compression.
I'm kinda lost: how do I separate the encoder and decoder? After training the autoencoder, the encoder and decoder don't seem to carry those weights and aren't working well. Can anyone tell me the correct way to do this?
Thank you so much.
Can you make a tutorial on anomaly detection in images using autoencoders?
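The usual starting point is reconstruction error (a sketch; it assumes the trained autoencoder and the scaled x_test from the video, and the threshold is something you would tune on normal data):

import numpy as np

recon = autoencoder.predict(x_test.reshape(-1, 28, 28, 1))
errors = np.mean((recon - x_test.reshape(-1, 28, 28, 1)) ** 2, axis=(1, 2, 3))
threshold = np.percentile(errors, 99)  # e.g. flag the worst 1% as anomalies
anomalies = errors > threshold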
I still have that problem you had at around 27 min; after running the cell plt.imshow(ae_out, cmap="gray") I get:
TypeError: Invalid shape (1, 28, 28, 1) for image data
If you haven't solved it yet, here's a hint: pay attention to the *[0]* part in the line where ae_out gets initialized.
@@DimaZheludko could not solved yet , also followed the text tutorial and watch the video several times :/
@@username42 Show me your line where you evaluate ae_out
ae_out = .... ?
@@DimaZheludko the same as in the video: ae_out = autoencoder.predict([x_test[1].reshape(-1, 28, 28, 1)])[0]
@@username42 That's strange. I'd guess you either don't have that [0] at the end of the line, or your error message is a bit different. Anyway, try replacing the line
*plt.imshow(ae_out, cmap="gray")*
with
plt.imshow(ae_out.reshape(28, 28), cmap="gray")
That should work.
Is it possible to use a raw dataset as input in this example??
Thanks for the tutorial. I have a question for you: do you think PCA is better than autoencoders for feature selection in classification problems? And how do autoencoders differ from PCA?
Had the same exact question.
@@Nughug2 yep, but he doesn't answer though :/
had the same exact question
@@M.zatary did u find the answer?
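In the meantime, PCA is easy to try side by side (a sketch using scikit-learn; an autoencoder with linear activations and MSE loss learns essentially the same subspace as PCA, while nonlinear activations let it capture more):

import numpy as np
from sklearn.decomposition import PCA

flat = x_train.reshape(-1, 784)  # assumes the scaled mnist data from the video
pca = PCA(n_components=9)        # same bottleneck size as the video
codes = pca.fit_transform(flat)  # 784 -> 9 features
recon = pca.inverse_transform(codes).reshape(-1, 28, 28)  # back to images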
Thank you so much for your great efforts.
Sir, I need to adapt this AE to my own dataset (IR images), but I'm running into dimensionality problems!
Try to work on patches of the whole image. This works well and even artificially enlarges your dataset.
@@MaB1235813 I didn't understand you sir !
Can you attach the code for a CNN-based autoencoder too? I am stuck on an error which I am unable to identify.
It is where you take a bottle of water and your ticket for the trip. Let's go
you just know things are going bad when he doesn't sip from a fire-breathing-dragon-shaped mug or something...
I am wondering, in convolutional autoencoders shouldn't you use Conv2DTranspose layers instead of Conv2D? Maybe that is the reason for the error?
Why is reshape(-1, 28, 28, 1) being used for encoder predict? Can someone explain, please?
Well, I guess you know why (28, 28, 1), right? But the question is what that -1 was for.
Basically, -1 means that numpy has to figure out that number by itself. That's not hard to do, since the total number of elements has to stay the same.
Why would you need such notation? To be able to pass more than one digit to the encoder. Say you want to pass 5 digits to the encoder: instead of x_test[0] you'd type x_test[:5].
But now you need to reshape it not to an array of size (1, 28, 28, 1), but to an array of size (5, 28, 28, 1). So, instead of changing the reshape size every time, you just type -1 and numpy will figure that number out for you.
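A quick standalone numpy demonstration of that inference:

import numpy as np

batch = np.random.rand(5, 28, 28)
print(batch.reshape(-1, 28, 28, 1).shape)     # (5, 28, 28, 1): numpy inferred the 5
print(batch[0].reshape(-1, 28, 28, 1).shape)  # (1, 28, 28, 1): a batch of one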
I know what -1 does; I'm not able to grasp why 4 dimensions are being used. Isn't it a 28x28 pixel image, so it's (28, 28, 1)? Why is the extra first number used?
@@EshwarNorthEast Ah, that's easy.
All neural networks are trained in batches. Simply put, there's no use feeding a network one image at a time. It is much better to give it a bunch of images (or whatever data it receives) and process them all at once.
In the case of training, the network's weights are updated after every batch, not after every image.
In the case of prediction (i.e. using the trained network for its purpose), it also takes in and spits out data in batches. Not that you couldn't design a network that works with single images at a time, but nowadays all the APIs are designed so that you put in and get out only batches.
That's why you always get a batch, even if it consists of a single image. Hence, you need to extract that image to use it. Also, if you need to put a single image into predict, you still need to reshape it into a batch consisting of a single image.
@@DimaZheludko thanks a lot! I'm a noob at AI, so it didn't make sense before. Now I get it. You are a god!
Really great video! Thanks a lot!! Did you upload the cats vs. dogs code as well? I would be very interested to have a look! (=
No matter what I do, I can't get the 'ae_out' cell to work. I've even copied and pasted directly from your site - still no luck! It says "TypeError: Invalid shape (28, 28, 1) for image data"
iirc encoders require it to be 1-D, maybe try flatten()? (may be a dumb comment, haven't watched the vid and did encoders a while ago :P)
Since Quantopian was bought by Robinhood and Zipline is not maintained any more, will you do any new follow-up for Python for finance? Also, if you do, don't make it only specific to the USA; I'm from India, so I don't have access to a lot of US-specific tools. Stay awesome ♥️
Not sure, nothing planned atm, but possibly again one day
"Cannot think of a way to shrink ten digits into 9 values"
Me: counts in binary with 4 values
Me: counts in decimal with 1 value
*The gods have spoken.*
If we reduced the mnist images (say, to 14x14), then trained a second encoder to map those images to your bottleneck values, could we then attach that to your decoder to upscale the images back to the original 28x28?
Sir, after the autoencoder, how can I use a genetic algorithm technique for selecting the best feature subset?
Which notebook environment is this? (looks much cleaner and more convenient than jupyter)
Man, I hope I can be like you :D
Great video! Could you share the code of the convolutional network as well?
Any chance you can make a video on installing Tensorflow 2.4 and Keras along with some method to identify if your GPU is being utilized and, if it is underutilized, how to optimize this. I have seen you use Tensorflow in some of your videos, but it is so confusing for a person that is getting into this, given I am using an RTX 3090.
A video per chapter of the book nnfs would be great....
Some of the chapters would be waaaaay too long for a single video. Plan to do more nnfs videos, but busy with other things atm
lol.... man, I loved this!
Please have this tutorial series lead into auto-encoding pictures! Then modifying the picture's 'compressed' space manually to produce different outputs, or tackling a larger dataset with Conv2Ds, e.g. cifar10.
Can you make those things without Keras and TensorFlow? Because my PC doesn't support TensorFlow at all :(
It should support TF, just maybe not GPU. This tutorial could be done off GPU. Most of the content I do really just requires high end GPUs where TF would work though.
Recently I've been working with low-end PCs: 6 GB of DDR2, a Pentium dual core, and no GPU :)
@@alvinsetyapranata3928 Try using google colab
Quick question: is it possible to use subprocess to call another Python script but execute it with a different Python version?
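If you mean the subprocess module: yes, you just point it at the other interpreter (a sketch; the interpreter path and script name are hypothetical):

import subprocess

# run other_script.py under a specific interpreter, e.g. a Python 3.8 binary
result = subprocess.run(
    ["python3.8", "other_script.py"],  # hypothetical interpreter and script
    capture_output=True, text=True,
)
print(result.stdout)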
Sorry to interrupt: is this Jupyter notebooks with dark mode on? (Can't tell, I'm watching from my phone.)
What is the software that you're using to run this autoencoder?
How do I use the decoder only? Let's say I plug the encoder output into it and expect it to give decoded results. If someone can help me out, it'd be great.
from tensorflow import keras

# This is our encoded input (60-dimensional here; use your own bottleneck size)
encoded_input = keras.Input(shape=(60,))
# Retrieve the last layer of the autoencoder model
# (this assumes a single-layer decoder; chain layers[-2], layers[-1], ... if not)
decoder_layer = autoencoder.layers[-1]
# Create the decoder model
decoder = keras.Model(encoded_input, decoder_layer(encoded_input))
I would like to see data moving from the encoder output to the final autoencoder output (image); I didn't see that in the video. You're just passing the SAME image from the input layer all the way to the output layer (which I know is compressing data in the middle, because you're using fewer neurons). It would have been beneficial if you had stored the encoder output on the file system, then pulled it out and passed it to the decoder to get the ORIGINAL image back.
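That round trip is easy to add (a sketch; it assumes the separate encoder and decoder models built in the comments above):

import numpy as np
import matplotlib.pyplot as plt

encoded = encoder.predict(x_test[0].reshape(-1, 28, 28, 1))
np.save("encoded.npy", encoded)                     # store only the compressed vector
restored = decoder.predict(np.load("encoded.npy"))  # rebuild the image from disk
plt.imshow(restored[0].reshape(28, 28), cmap="gray")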
Hello, where can I find the code for an RGB image autoencoder?
Guys, what IDE is this? Cuz I'm working with PyCharm and I can't do anything.
Hi sent. I get an error when I run
plt.imshow(ae_out, cmap='gray'). The error is: Invalid shape (28, 28, 1) for image data. How do I solve this?
just reshape it: plt.imshow(ae_out.reshape(28, 28), cmap='gray')