Autoencoders in Python with Tensorflow/Keras

  • Published: 3 Feb 2025

Comments • 195

  • @yatharthshah
    @yatharthshah 3 years ago +99

    This is why I keep up with YouTube. This scale of quality information is what the makers of the internet envisioned. Keep up the good work.

  • @francistembo650
    @francistembo650 3 years ago +59

    Signed my first contract as a python programmer last Friday. I learnt it all here, Tensorflow, opencv, django, flask....Thanks man. I'll surely pay my gratitude in the future.

  • @EastSideGameGuy
    @EastSideGameGuy 3 years ago +19

    I just wanna say that I started off with your tutorials to get into Machine Learning, and boy have I come a long way from when I started. Your tutorials gave me just what I needed to study on my own and learn these things (also hey, I ditched keras/tf for pytorch, seems a lot more efficient honestly), but thanks, and congratulations on 1 million

  • @hemanthkotagiri8865
    @hemanthkotagiri8865 3 years ago +3

    I like how you're putting out really long videos with a huge amount of information to follow, even though the rate at which you're uploading is slow. It suffices. Thank you so much, dude! You've been my inspiration since I started programming!

  • @python1108
    @python1108 3 years ago +21

    Man, you find the perfect topics for tutorials!

    • @sentdex
      @sentdex  3 years ago +9

      Hah, it's always just things I am curious about personally, seems to work out :D

  • @ElksuGuitar
    @ElksuGuitar 3 years ago +14

    Congrats sentdex for 1 mil! At around 7:15 you divided x_train by 255.0 for the second time btw, so the values were between 0 and 0.000015

    • @sentdex
      @sentdex  3 years ago +6

      I eventually figured it out down the line :P

    • @FluxFloyd
      @FluxFloyd 3 years ago +4

      @@sentdex I noticed at the time that you divided it twice, but I am shocked how quickly you guessed the issue after going that much further in the code. Bravo!
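
    For anyone following along, a minimal sketch of the normalization step being discussed, assuming the standard tf.keras MNIST loader; the dtype check is just a guard against accidentally scaling twice:

    from tensorflow import keras
    import numpy as np

    (x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data()

    # Scale the uint8 pixel values (0-255) into [0, 1] exactly once.
    if x_train.dtype == np.uint8:
        x_train = x_train / 255.0
        x_test = x_test / 255.0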

  • @hola-kx1gn
    @hola-kx1gn 3 years ago +3

    I really like that you explain all your reasoning; for someone who is not that great at understanding things the first time, this is very helpful.

  • @killer-whale864
    @killer-whale864 2 years ago

    It's actually amazing to see a tutorial maker making mistakes. That way you can learn what errors you might encounter and how to tackle them. Never thought of it that way.

  • @Fruchtkotzekiddy
    @Fruchtkotzekiddy 3 years ago

    I had this video in my watch-later list since you uploaded it - because I didn't know what autoencoders were and they sounded pretty hard to understand,
    but man are you good at explaining.
    I learned Python from your channel before I went to university - and it made everything super easy for me - since I already knew how to program - thanks to you

  • @kevinshen3221
    @kevinshen3221 2 years ago

    Jaw dropped when you added noise to the image. This is amazing

  • @CaptJeanPicard
    @CaptJeanPicard 3 years ago

    Thanks for this Video. I would love to see more of these where you explain concepts like autoencoders and transformers visually. Really helpful.

  • @iantimmons3342
    @iantimmons3342 2 years ago

    Well done!
    For the visualization, you need to use the squeeze() function in Colab
    ae_out = autoencoder.predict([ X_test[0].reshape(-1, 28, 28, 1) ])
    img = ae_out[0] # predict is done on a vector, and returns a vector, even if its just 1 element, so we still need to grab the 0th
    plt.imshow(tf.squeeze(ae_out[0]), cmap="gray")

  • @mohammednasir7993
    @mohammednasir7993 3 years ago +1

    Another awesome aspect of autoencoders: if you take the decoder part and give it a vector whose size matches its input, you have a generative model. Add a regularized latent space and you get a variational autoencoder, which you can use to generate images much like GANs.
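
    For the curious, a minimal sketch of the extra piece a variational autoencoder adds on top of a plain bottleneck - this is not from the video, and the layer sizes are just illustrative:

    import tensorflow as tf
    from tensorflow import keras

    class Sampling(keras.layers.Layer):
        # Reparameterization trick: z = mean + sigma * epsilon, with epsilon ~ N(0, 1).
        def call(self, inputs):
            z_mean, z_log_var = inputs
            eps = tf.random.normal(shape=tf.shape(z_mean))
            return z_mean + tf.exp(0.5 * z_log_var) * eps

    inp = keras.Input(shape=(28, 28, 1))
    h = keras.layers.Dense(64, activation="relu")(keras.layers.Flatten()(inp))
    z_mean = keras.layers.Dense(9)(h)       # 9 matches the video's bottleneck size
    z_log_var = keras.layers.Dense(9)(h)
    z = Sampling()([z_mean, z_log_var])     # feed z into the decoder; training also adds a KL loss term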

  • @Techbin-yn6wr
    @Techbin-yn6wr 1 year ago +1

    14:28 Correction: "to map input to output"

  • @hackercop
    @hackercop 3 years ago +1

    7:22 it's because x_train is a uint8 NumPy array, so the in-place /= can't cast the float result back into the array (it works fine with regular Python numbers, and with float arrays).
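
    A quick illustration of that point - MNIST loads as uint8, and NumPy refuses to cast the float result of an in-place division back into the integer array:

    import numpy as np

    x = np.array([0, 128, 255], dtype=np.uint8)
    # x /= 255.0   # raises a casting error: the float64 result can't be stored back into uint8
    x = x / 255.0  # out-of-place division returns a new float64 array instead
    print(x.dtype, x.max())  # float64 1.0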

  • @MaxJr82
    @MaxJr82 3 years ago +2

    Amazing tutorial! Thank you very much for providing such inspiring and high-quality information. Indeed, your machine learning and Python tutorial series helped me a lot when I changed my career towards data science.
    I am wondering if you are planning to do a new video about variational autoencoders. That would be awesome to see!

  • @adempc
    @adempc 3 years ago

    One Million - congratulations! You should get yourself a fancy mug.

  • @Alaska-mk4ok
    @Alaska-mk4ok 3 years ago

    Congratulations on 1 million subscribers!

  • @samimuzzamil7267
    @samimuzzamil7267 2 years ago

    In order to avoid the black dots (missing pixels), just remove relu from the last layer of the decoder. It worked for me.
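
    In code, that suggestion amounts to changing the activation on the decoder's final Dense layer. A sketch, assuming a dense autoencoder roughly like the one in the video (activation=None is the commenter's "remove relu"; "sigmoid" is another common choice since the targets live in [0, 1]):

    from tensorflow import keras

    inp = keras.Input(shape=(28, 28, 1))
    h = keras.layers.Dense(64, activation="relu")(keras.layers.Flatten()(inp))
    bottleneck = keras.layers.Dense(9, activation="relu")(h)
    # Final layer without relu, so negative pre-activations aren't clamped to pure black.
    out = keras.layers.Dense(28 * 28, activation=None)(bottleneck)
    out = keras.layers.Reshape((28, 28, 1))(out)
    autoencoder = keras.Model(inp, out)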

  • @ahkb
    @ahkb 3 years ago +3

    You’re a legend Sentdex! 🙌

  • @GeorgeZoto
    @GeorgeZoto 2 years ago

    Awesome content, thank you for sharing it your way 😃

  • @pearcejarrett458
    @pearcejarrett458 3 years ago +1

    You are the man! Always have been, always will be

  • @pafnutiytheartist
    @pafnutiytheartist 3 years ago +1

    That's very cool indeed. Would love to see you using it on something harder than mnist next.

  • @julianray6802
    @julianray6802 3 years ago

    Crazy magic, Sentdex is a magician!

  • @fiNitEarth
    @fiNitEarth 3 years ago +6

    I'd be so excited if you were to start a series on chess engines. It would be fun to see how you would go about this, even if the engine wouldn't be that strong!

  • @DimaZheludko
    @DimaZheludko 3 years ago

    That division by 255 was a tricky guess. I could not understand what's wrong with your code.
    However, losses in the ballpark of e-7 were a good hint. Thanks for that error. It's always nice to learn from others' errors. Especially when you can watch them at 2x speed.
    Thanks for your work.

  • @dt28469
    @dt28469 3 years ago +1

    At 20:43, I'm getting an error that says
    "Error when checking input: expected img to have 4 dimensions, but got an array with shape (60000, 28, 28)"
    Any suggestions?

    • @codingmadeeasy3126
      @codingmadeeasy3126 3 years ago +2

      I think you have to reshape the array using numpy with the fourth dimension as 1, so the new shape will be (60000, 28, 28, 1). I am no master but I hope this helps

    • @dt28469
      @dt28469 3 years ago +2

      @@codingmadeeasy3126 YES! Before fitting the data to the model, I used:
      import numpy as np
      np_array=np.asarray(x_train)
      print(np_array.shape)
      x_train = np_array.reshape(60000, 28, 28, 1)
      print(x_train.shape)
      ...and it worked! A bit of a headache to figure out, but I did it! Thanks

    • @codingmadeeasy3126
      @codingmadeeasy3126 3 years ago +1

      @@dt28469 Glad I could help
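
    An equivalent shortcut, for reference: let NumPy infer the batch dimension and just append the channel axis (this assumes the usual (60000, 28, 28) MNIST arrays):

    from tensorflow import keras
    import numpy as np

    (x_train, _), (x_test, _) = keras.datasets.mnist.load_data()
    x_train = x_train.reshape(-1, 28, 28, 1)   # (60000, 28, 28) -> (60000, 28, 28, 1)
    x_test = x_test.reshape(-1, 28, 28, 1)     # np.expand_dims(x_test, axis=-1) does the same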

  • @ArunKumar-fv6uw
    @ArunKumar-fv6uw 3 years ago

    Hi sentdex,
    Your videos are excellent. Appreciate your hard work and the time to prepare such wonderful content.

  • @tushardharia98
    @tushardharia98 3 years ago

    haven't caught up with your videos lately dang it YT, but holy shit didn't see you hit 1M subs CONGRATULATIONS

  • @Zoronoa01
    @Zoronoa01 3 years ago

    Thanks for the high quality content!!!

  • @aaditmaheshwari4080
    @aaditmaheshwari4080 3 years ago

    Congrats on a million subs!!!!!!!!!!!!!!!!!!!!!!!!!!

  • @farheenakhatib7540
    @farheenakhatib7540 3 years ago

    Really, U r making everything very easy. Thank u very much.

  • @fuba44
    @fuba44 3 years ago +3

    And may i add that I too am trying to kick my fancy cup addiction, good to see you are keeping on the straight and narrow ;-)

    • @moazalhomsi
      @moazalhomsi 3 years ago

      @@s4br3 it's a time traveller, one of sentdex's friends

    • @fuba44
      @fuba44 3 years ago

      @@s4br3 channel members get videos early, can recommend a membership.

  • @jasonclement6305
    @jasonclement6305 3 years ago

    This is gold. Nice video man!

  • @2596mr
    @2596mr 1 year ago

    Can someone explain what he means by flattening it beforehand? At 13:50 I didn't get it... Like, what does he mean by doing that before feeding it into the neural network? I thought all the steps are part of the network.

  • @aminghafoori6496
    @aminghafoori6496 3 years ago

    i love your videos
    they are informative and motivational

  • @hackercop
    @hackercop 3 years ago

    This was sooo fun thanks!

  • @StadtAffeLP
    @StadtAffeLP 3 years ago +2

    Is there any reason why you use ReLU over sigmoid as the output activation function of the decoder? Cause you want values between 0.0 and 1.0 as the output. Or maybe use a capped ReLU so the model can easily learn complete black and white.

  • @jensklausen2449
    @jensklausen2449 2 years ago

    I wonder whether one can construct an artificial visual center using an autoencoder.
    Then feed the decoder electronic noise with a component of quantum event randomness and watch it create visuals, with some sort of meaning, related to the thoughts and current life situation of the spectators of the created images.
    In a statistically significant way?

  • @DarkRedman31
    @DarkRedman31 3 years ago

    27:11 Technically it was done twice on x_train but once on x_test

  • @dfgaJK
    @dfgaJK 3 years ago +1

    What would be really cool is to make an autoencoder that encodes to a string of words that can be remembered, and then decoded back into an image.

    • @dfgaJK
      @dfgaJK 3 years ago +1

      Given an average active vocabulary of 20,000 words, 5 words would result in 100,000 combinations, which equates to a little over 65536 (2^16). That would allow memorising 256 8-bit (255) values. These 256 values could be autoencoded from/to a photo.

  • @YurickMegaloss
    @YurickMegaloss 3 years ago

    If someone is curious how to run the decoder only on a randomly generated seed, here is the code to add after autoencoder.summary():
    encoded_input = keras.Input(shape=(9,))
    deco = autoencoder.layers[-2](encoded_input)
    deco = autoencoder.layers[-1](deco)
    decoder = keras.Model(encoded_input, deco)

    • @YurickMegaloss
      @YurickMegaloss 3 years ago

      after fitting try running decoder-only like this :
      img_seed=np.random.rand(9)
      new_img = decoder.predict([img_seed.reshape(-1,9)])
      plt.imshow(new_img.reshape(28,28), cmap="gray")

  • @alexzan1858
    @alexzan1858 3 years ago

    27:00 you can avoid this kind of error in a notebook by doing a safer rename, e.g. x_train_rgb = x_train / 255.0

  • @tomoki-v6o
    @tomoki-v6o 3 years ago +1

    It seems to me that your autoencoder is a generative model: it took a number-2 example, was able to recognize it, and then produced a similar but slightly different example; the numbers you used are from the test data. Thanks for the demo

    • @jeff._.6262
      @jeff._.6262 3 years ago +1

      If you want to use the autoencoder as a generative model you can change it to a variational autoencoder and then it will properly be generative
      (might be a dumb comment, I haven't watched the vid)

  • @patrickb.5761
    @patrickb.5761 3 years ago

    a video about variational autoencoders would be nice!

  • @bhavinmoriya9216
    @bhavinmoriya9216 3 years ago

    Awesome video! Could you please upload some video on transformer that you were talking about? Thank you very much.

  • @sirdumb1592
    @sirdumb1592 3 years ago +1

    What would happen if you gave it an image that it was not trained to encode/decode? Not just added noise but completely different image, like a smiley face? Would the output resemble a digit? would it resemble a smiley? would it be random?

    • @sentdex
      @sentdex  3 years ago

      Try it and let us know :D

  • @nawapansuntorachai6297
    @nawapansuntorachai6297 3 years ago

    wow, this is super cool!

  • @nanodynamics5203
    @nanodynamics5203 2 months ago

    Interesting tutorial, which I love to learn by doing; thanks a lot Justin Gaethje 😬

  • @chiragmalaviya2430
    @chiragmalaviya2430 3 years ago

    Always fascinating 😁😁

  • @youtubevanced5019
    @youtubevanced5019 3 years ago

    For this person I always wait to see

  • @Kaltotun
    @Kaltotun 3 years ago

    Thank you so much! I was ready to abandon MIT 6S191 because of the autoencoder lesson, but you made it clear and (almost) simple. Now we humans go from the 784-feature images to... 1 (a digit). Can a NN discover the concept of a digit by itself? Maybe not [0..9] but an array of numbers? Thanks for your great book and your videos

  • @Hotheaddragon
    @Hotheaddragon 3 years ago

    Finally that one hole is filled with this one; now we want an All About Autoencoders playlist explaining Sparse Autoencoders, Contractive Autoencoders and many more...
    Lol just kidding
    Great tutorial, and 49 mins... what a short tutorial...

  • @jugalnaik
    @jugalnaik 3 years ago

    1 mil . 🎉🎉🔥

  • @manthanpatil6410
    @manthanpatil6410 3 years ago

    It's kinda cool, but we could actually compress the data into a single value, since it has 255 possible values and there are only 10 classes, though it would need a slightly bigger network. Also, love your videos.
    Edit: probably then you can explore the latent space, like manually changing the single encoded value to see the network generate numbers on its own.

  • @tonycardinal413
    @tonycardinal413 6 months ago

    If you have your own dataset, how do you load it? You can't use the MNIST load_data() command here to load an Excel or CSV file

  • @moodiali7324
    @moodiali7324 3 years ago +1

    Question: x_train[0].shape = (28,28), so why would your input layer shape be (28,28,1)? This is throwing me an error

  • @TheRhydian98
    @TheRhydian98 3 years ago +1

    Hey Sentdex, was just wondering, when you print out ae_out using plt.imshow, I get an error "invalid shape (28,28,1) for image data". Have you got any idea why this would be?

    • @yousufshaikh8529
      @yousufshaikh8529 2 years ago

      Same issue any solutions?

    • @gokcemanap8756
      @gokcemanap8756 2 years ago

      try replacing the line
      plt.imshow(ae_out, cmap="gray")
      with
      plt.imshow(ae_out.reshape(28, 28), cmap="gray")

  • @pw7225
    @pw7225 3 years ago

    Thank you @sentdex

  • @fuba44
    @fuba44 3 years ago

    This was great! and not long at all.

  • @KevinskyChen
    @KevinskyChen 3 years ago

    Please make more machine learning tutorials with Keras ~~ Thanks

  • @wktodd
    @wktodd 3 years ago

    Overnight thoughts: it struck me that what might be happening in this case might not be as complex as I originally thought.
    Thinking in bits now, how much data is there in the mnist images? 784 * 8 = 6272 bits,
    and they were squashed to 64 * 9 = 576. Still impressive.
    But a simple binary slice would have reduced the image data to 784 bits, and one could argue (as Harrison did) that the mnist image size is overly generous and only a small reduction to 23x23 would drop the bit count to below the 576 of the output.
    So, are we seeing a simple rearrangement of the input bits (after trimming & slicing) to something that looks like 9 floating point numbers?

    • @sentdex
      @sentdex  3 years ago

      It's possible that at the super compressed levels, yes, some of that is at play, but autoencoders can compress down far more unique imagery/data than this too.
      Certainly something worth peeking into though!
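
    A quick back-of-the-envelope check of those figures (the 9-value bottleneck is from the video; the 64-bits-per-value assumption is the commenter's - Keras actually computes in float32 by default):

    input_bits = 28 * 28 * 8   # 784 uint8 pixels  -> 6272 bits
    latent_f64 = 9 * 64        # 9 float64 values  ->  576 bits
    latent_f32 = 9 * 32        # 9 float32 values  ->  288 bits
    print(input_bits, latent_f64, latent_f32)   # 6272 576 288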

  • @aquienleimporta9096
    @aquienleimporta9096 3 years ago

    What does it mean when someone writes, for example, Flatten(something)(encoder_input)? What is the purpose of the right parenthesis part?

  • @dfgaJK
    @dfgaJK 3 years ago +1

    What is the bit depth of the 9 intermediate values?

  • @ayushnayak6138
    @ayushnayak6138 2 years ago

    Is it possible to overfit to 100 percent accuracy? If we achieve that, we can use it for video compression.

  • @sripranav
    @sripranav 3 years ago

    I'm kinda lost: how do I separate the encoder and decoder? After training the autoencoder, the encoder and decoder don't seem to carry those weights and aren't working well. Can anyone tell me the correct way to do this? I don't know

  • @woojay
    @woojay 7 months ago

    Thank you so much.

  • @NomanMustafascientist
    @NomanMustafascientist 3 years ago

    Can you make a tutorial on anomaly detection in images using an autoencoder?

  • @username42
    @username42 3 years ago +1

    i still have that problem you had at around 27min, after running the cell plt.imshow(ae_out, cmap="gray")
    TypeError: Invalid shape (1, 28, 28, 1) for image data

    • @DimaZheludko
      @DimaZheludko 3 years ago +1

      If you haven't solved yet, here's a hint. Pay attention to *[0]* part in the line where ae_out gets initialized.

    • @username42
      @username42 3 years ago

      @@DimaZheludko couldn't solve it yet, also followed the text tutorial and watched the video several times :/

    • @DimaZheludko
      @DimaZheludko 3 years ago +1

      @@username42 Show me your line where you evaluate ae_out
      ae_out = .... ?

    • @username42
      @username42 3 years ago

      @@DimaZheludko as same as in the video ae_out = autoencoder.predict([x_test[1].reshape(-1, 28, 28, 1)])[0]

    • @DimaZheludko
      @DimaZheludko 3 years ago +1

      @@username42 That's strange. I'd guess you either don't have that [0] in the end of a line, or your error message is a bit different. Anyway, try replacing the line
      *plt.imshow(ae_out, cmap="gray")*
      with
      plt.imshow(ae_out.reshape(28, 28), cmap="gray")
      That should work.

  • @alabaccar2179
    @alabaccar2179 2 years ago

    Is it possible to use a raw dataset as input in this example??

  • @username42
    @username42 3 years ago +1

    Thanks for the tutorial, I have a question for you: do you think PCA is better than autoencoders for feature selection in classification problems? Or how does it differ from PCA?

    • @Nughug2
      @Nughug2 3 years ago +1

      Had same exact question.

    • @username42
      @username42 3 years ago +1

      @@Nughug2 yep but he does not answer tho :/

    • @M.zatary
      @M.zatary 2 years ago +1

      had the same exact question

    • @username42
      @username42 2 years ago

      @@M.zatary did u find the answer?

  • @belhafsiabdeldjalil5739
    @belhafsiabdeldjalil5739 3 years ago

    Thank you so much for your great efforts.
    Sir, I need to adapt this AE to my own dataset (IR images) but I'm running into dimensionality problems!

    • @MaB1235813
      @MaB1235813 3 years ago +1

      Try to work on patches of the whole image. This works well and even artificially enlarges your dataset.

    • @belhafsiabdeldjalil5739
      @belhafsiabdeldjalil5739 3 years ago

      @@MaB1235813 I didn't understand you sir !

  • @aryanirvaan
    @aryanirvaan 2 years ago

    Can you attach the code for CNN based Autoencoder too? I am stuck on an error which I am unable to identify.

  • @acopernic
    @acopernic 3 years ago

    It is where you take a bottle of water and your ticket for the trip. Let's go

  • @FMH201
    @FMH201 3 years ago

    you just know things are going bad when he doesn't sip from a dragon spitting fire shaped mug or something ...

  • @grzegorzkozinski2308
    @grzegorzkozinski2308 3 years ago

    I am wondering, in convolutional autoencoders shouldn't you use Conv2DTranspose layers instead of Conv2D? Maybe that is the reason for the error?

  • @EshwarNorthEast
    @EshwarNorthEast 3 years ago

    Why is reshape -1,28,28,1 being used for encoder predict, can someone explain please?

    • @DimaZheludko
      @DimaZheludko 3 years ago

      Well, I guess you know why (28, 28, 1), right? But the question is what that -1 was for.
      Basically, -1 means that numpy has to figure out that number by itself. That's not hard to do since the total number of elements has to stay the same.
      Why would you need such notation? To be able to pass more than one digit to the encoder. Say you want to pass 5 digits to the encoder. Instead of x_test[0] you'd type x_test[:5].
      But now you need to reshape it not to an array of size (1, 28, 28, 1), but to an array of (5, 28, 28, 1). So, instead of changing the reshape size every time, you just type -1 and numpy will figure that out for you.

    • @EshwarNorthEast
      @EshwarNorthEast 3 years ago

      I know what -1 does; I'm not able to grasp why 4 dimensions are being used. Isn't it a 28x28 pixel image, so it's (28,28,1)... why is the extra first number used?

    • @DimaZheludko
      @DimaZheludko 3 years ago +1

      @@EshwarNorthEast Ah, that's easy.
      Neural networks are trained in batches. Simply put, there's no point feeding a network one image at a time. It is much better to give it a bunch of images (or whatever data it receives) and process them all at once.
      In the case of training, the network's weights are updated after every batch, not after every image.
      In the case of prediction (i.e. using the trained network for its purpose), it also takes and spits out data in batches. Not that you cannot design a network that works with single images at a time, but nowadays all the APIs are designed so that you put in and get back only batches.
      That's why you always get a batch, even if it consists of a single image. Hence, you need to extract that image to use it. Also, if you need to put a single image into predict, you still need to reshape it into a batch consisting of a single image.

    • @EshwarNorthEast
      @EshwarNorthEast 3 years ago +1

      @@DimaZheludko thanks a lot! I'm a noob at AI, so it didn't make sense before. Now I get it. You are a god!
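
    To make the batch idea concrete, a tiny runnable check of what that reshape actually produces:

    from tensorflow import keras

    (_, _), (x_test, _) = keras.datasets.mnist.load_data()

    single = x_test[0].reshape(-1, 28, 28, 1)   # -1 is inferred as 1: a batch of one image
    batch = x_test[:5].reshape(-1, 28, 28, 1)   # same code, -1 is inferred as 5
    print(single.shape, batch.shape)            # (1, 28, 28, 1) (5, 28, 28, 1)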

  • @lunabaudelaire6198
    @lunabaudelaire6198 3 years ago

    Very great video! Thanks a lot!! Did you upload the cats vs. dogs code as well? I would be very interested to have a look! (=

  • @theweekdaysofficial
    @theweekdaysofficial 3 years ago +1

    No matter what I do, I can't get the 'ae_out' cell to work. I've even copied and pasted directly from your site - still no luck! It says "TypeError: Invalid shape (28, 28, 1) for image data"

    • @SiddharthMishraLastOne
      @SiddharthMishraLastOne 3 years ago

      iirc encoders require it to be 1-d, maybe try flatten()? (may be a dumb comment havent watched the vid and did encoders a while ago :P)

  • @MadlipzMarathi
    @MadlipzMarathi 3 years ago

    Since Quantopian was bought by Robinhood and Zipline is not maintained any more, will you do any new follow-up for Python for Finance? Also, if you do, please don't do anything only specific to the USA - I'm from India, so I don't have access to a lot of US-specific tools. Stay awesome ♥️

    • @sentdex
      @sentdex  3 years ago +1

      Not sure, nothing planned atm, but possibly again one day

  • @VonUndZuCaesar
    @VonUndZuCaesar 3 years ago +4

    "Cannot think of a way to shrink ten digits into 9 values"
    Me: counts in binary with 4 values

    • @israelRaizer
      @israelRaizer 3 years ago +1

      Me: counts in decimal with 1 value

  • @neillunavat
    @neillunavat 3 years ago

    *The gods have spoken.*

  • @jcorpac
    @jcorpac 3 years ago

    If we reduced the mnist images (say to 14x14), then trained a second encoder to match those images to your bottleneck values, could we then attach that to your decoder to upscale the images back to the original 28x28?

  • @bilalahmed9705
    @bilalahmed9705 3 years ago

    Sir, after the autoencoder, how can I use a genetic algorithm technique for selecting the best feature subset?

  • @YogyaModi
    @YogyaModi 3 years ago

    Which notebook environment is this? (looks much cleaner and more convenient than jupyter)

  • @alvinsetyapranata3928
    @alvinsetyapranata3928 3 years ago +1

    Man, hope i can be like you :D

  • @auraSinhue
    @auraSinhue 3 years ago

    Great video! Could you share the code of the convolutional network as well?

  • @syedalamdar100
    @syedalamdar100 3 years ago

    Any chance you can make a video on installing Tensorflow 2.4 and Keras along with some method to identify if your GPU is being utilized and, if it is underutilized, how to optimize this. I have seen you use Tensorflow in some of your videos, but it is so confusing for a person that is getting into this, given I am using an RTX 3090.

  • @rpraver1
    @rpraver1 3 years ago

    A video per chapter of the book nnfs would be great....

    • @sentdex
      @sentdex  3 years ago

      Some of the chapters would be waaaaay too long for a single video. Plan to do more nnfs videos, but busy with other things atm

  • @kdocki
    @kdocki 3 years ago

    lol.... man, I loved this!

  • @aloufin
    @aloufin 3 years ago

    Please have this tutorial series lead into auto-encoding pictures! Then modifying the pictures' 'compressed' space manually to produce different outputs, or tackling larger datasets w/ Conv2Ds, e.g. cifar10

  • @alvinsetyapranata3928
    @alvinsetyapranata3928 3 years ago +1

    Can you make those things without keras and tensorflow? Because my PC doesn't support tensorflow at all :(

    • @sentdex
      @sentdex  3 years ago +1

      It should support TF, just maybe not GPU. This tutorial could be done off GPU. Most of the content I do really just requires high end GPUs where TF would work though.

    • @alvinsetyapranata3928
      @alvinsetyapranata3928 3 years ago

      Recently I'm working with a low-end PC with 6 GB of DDR2, a Pentium dual core, and no GPU :)

    • @lakizmaj5679
      @lakizmaj5679 3 years ago +1

      @@alvinsetyapranata3928 Try using google colab

  • @Chretze
    @Chretze 3 years ago

    Quick question: Is it possible to use subprocess to call another python script but execute it with a different python version?

  • @iskrabesamrtna
    @iskrabesamrtna 3 years ago

    Sorry to interrupt, is this Jupyter notebook just with dark mode on? (Can't tell, I'm watching from my phone)

  • @muhdfarkhan5511
    @muhdfarkhan5511 3 years ago

    What is the software that you're using to run this autoencoder?

  • @raghulkannan6881
    @raghulkannan6881 3 years ago

    How do I use the decoder only? Let's say I plug the encoder output into it and expect it to give decoded results. If someone can help me out it'd be great.

    • @raghulkannan6881
      @raghulkannan6881 3 years ago

      # This is our encoded (60-dimensional) input
      encoded_input = keras.Input(shape=(60,))
      # Retrieve the last layer of the autoencoder model
      decoder_layer = autoencoder.layers[-1]
      # Create the decoder model
      decoder = keras.Model(encoded_input, decoder_layer(encoded_input))
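
    Note that this grabs only the very last layer, so it reconstructs the decoder correctly only when the decoder is a single layer. If several layers follow the bottleneck, you can chain them (reusing encoded_input and autoencoder from the snippet above; the slice below is a hypothetical example and must match your architecture):

    # Chain every layer that comes after the bottleneck onto the new input.
    deco = encoded_input
    for layer in autoencoder.layers[-2:]:   # e.g. Dense(784) then Reshape, as in the earlier comment
        deco = layer(deco)
    decoder = keras.Model(encoded_input, deco)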

  • @moodiali7324
    @moodiali7324 3 years ago

    I would like to see data moving from the encoder output to the final autoencoder output (image); that I didn't see in the video... You're just passing the SAME image from the input layer all the way to the output layer (which I know is compressing the data in the middle because you're using fewer neurons). It would have been beneficial if you had stored the encoder output on the file system, then pulled it out and passed it to the neural network to get the ORIGINAL image back.

  • @martynasvenckus423
    @martynasvenckus423 3 years ago

    Hello, where can I find the code for RGB image autoencoder?

  • @mohamadavatefi7603
    @mohamadavatefi7603 2 years ago

    Guys, what IDE is this? Cuz I'm working with PyCharm and I can't do anything

  • @joanitoagililopo8868
    @joanitoagililopo8868 3 years ago

    Hi sent. I get an error when I call
    plt.imshow(ae_out, cmap='gray'). The error was invalid shape (28,28,1) for image data. How do I solve this?

    • @YurickMegaloss
      @YurickMegaloss 3 years ago

      just reshape it - plt.imshow(ae_out.reshape(28,28), cmap='gray')